Dataset schema:

  column       type      details
  id           string    length 11
  channel      string    2 distinct values
  channel_id   string    2 distinct values
  title        string    length 12-100
  categories   sequence
  tags         sequence
  description  string    length 66-5k
  text         string    length 577-90.4k
  segments     list
id: TFiZYA_JfJs
channel: Yannic Kilcher
channel_id: UCZHmQk67mSJgfCCTn7xBfew
title: Population-Based Search and Open-Ended Algorithms
categories: [ "Science & Technology" ]
tags: [ "machine learning", "ai", "artificial intelligence", "open ended learning", "quality diversity", "conference", "icml", "icml2019", "tutorial", "population-based search", "goal switching", "serendipidy", "evolution" ]
description:
Comments on the ICML2019 tutorial on population-based search and open-ended learning.
Talk: https://www.facebook.com/icml.imls/videos/481758745967365/
Slides: http://www.cs.uwyo.edu/~jeffclune/share/2019_06_10_ICML_Tutorial.pdf
Book: https://www.amazon.com/dp/B00X57B4JG/
Event: https://icml.cc/Conferences/2019/ScheduleMultitrack?event=4336
text:
This is huge. This is just one hall, and most people, I guess, are still waiting for registration. Yeah, but definitely the size of these things is ginormous. The tutorials have just started. There we go. It's finding a place.

Hi, so I just wanted to give a little update on a tutorial that I liked, which was the population-based search and open-ended learning tutorial, which happened on Monday here. I was pleasantly surprised by this tutorial because I knew almost nothing about these techniques, and they seem really cool. It seems to be a really cool line of research. It started out with what population-based search is: basically, in population-based search you don't want to just reach one solution to a problem, you want to maintain a population of solutions that you develop over time. Natural evolution would be an example of that. This can have many benefits, which were explored in the tutorial. The culprit of traditional optimization (let's say you have a classification problem and you just train one classifier on it) is what they call deception. A better example is an RL problem where you need to reach some goal, but since the goal might be very hard to reach, your algorithm has basically nothing to go on. There's no stepping stone. So usually people go and construct a reward function in a very clever way, but this can be overcome with these techniques as well. Just imagine the hardest video game in the Atari suite. This would be something like Montezuma's Revenge, where you first need to collect some key and then go to some door, and only then do you get a score. This reward function is too ambitious, and that is the problem they call deception.

An observation they make is that if you look at nature and natural evolution, it is very successful even without a goal. There is no goal in mind in natural evolution, except that reproduction creates more reproduction. But that's not a goal; it's simply a kind of underlying mechanism. And if you look at nature, all this variety of life was produced without a goal in mind, all this variety of life filling different niches and basically reproducing at its own pace. So it's a very interesting observation. The goal of this entire field is to go in this direction: what if we don't go after only the cost function? In the most extreme case, what if we build a search algorithm that only wants to create novel things, where novelty is the only goal? What happens then? It turns out some interesting things can be achieved with that.

So they introduced this notion of quality diversity, which basically means: if you again take life on earth, you want all the achievable behaviors that there are. Maybe one achievable behavior is a very fast life form that can hunt other life forms, and another achievable behavior is one that camouflages very well, and so on. And for each of these behaviors, you want to find the best possible example. That's the direction these algorithms go in. An algorithm they presented was MAP-Elites, so M-A-P-Elites, which goes as follows. Let's say you have a bunch of dimensions you care about, say how fast a creature is, how tall it is, how well it is camouflaged, and so on. Now you discretize each of those dimensions. This gives you cells, basically: the discretizations together introduce a grid of cells.
What you now do is keep the best examples in each cell. So if you have a creature that is very fast but not very well camouflaged, in some cell, you look at how well it is doing at the goal you have in mind, and you keep the best one of those. You have a population, and of whichever ones land in a cell, you keep the best. Then you go ahead and change them. You could do this via an evolutionary process, like mutation, or it could be via gradient descent or something. But you mutate them, and they will probably end up in a different cell. So you go look at that cell: are these new ones better than the ones you remembered for that cell? If so, replace them. For each cell, keep the best one, and then continue developing from those, sort of like Dijkstra's shortest-path algorithm. What it returns is an entire landscape of possible behaviors, and for each behavior it gives you the best result. Now, that doesn't mean they all do equally well. Some will be better; some cells will not be as good with regard to your cost function. But it gives you an entire landscape, and you can then see that there are many modes in this landscape. As I said, some creatures are very fast hunters; some camouflage very well but are then kind of slower. You will be able to see these modes in it. I found this pretty interesting, and it opens the door to a lot of different applications.

A principle they employ is what is called goal switching, which means that a line of development can benefit from inventions of another line. Let's say the very fast hunters are good at hunting, but maybe they don't reach quite optimal performance. Then another line develops somewhere else: the camouflaged life forms develop, so they invent camouflage. Now, because of the way this mutation works, you keep the camouflaged ones around as well as the hunters, and the camouflage can kind of jump over to the hunters. It's very difficult to explain like this, but they call it goal switching. What it means is that the hunters can now adopt a little bit of camouflage, through, let's say, mutating one of the camouflaged ones into the hunters' niche, or vice versa, and can benefit from that invention over there. A good example of that, they mentioned, is that in order to discover the microwave, you first had to work on radar technology, which had nothing to do with microwaves. But because of the inventions made in radar technology, you could then invent the microwave easily. So it kind of jumped over into the space of ovens, basically. Before, all you had to warm food was to put it in an oven and heat it up; now you had the microwave. These algorithms capture the spirit of this. A book that the people who gave the tutorial wrote is Why Greatness Cannot Be Planned. I'll definitely get that. I can't recommend it since I haven't read it yet, but I'm going to get it and read it. Should be fairly interesting.
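To make the MAP-Elites loop described above concrete, here is a minimal sketch in Python. The behavior descriptor, fitness function, and mutation operator are hypothetical stand-ins chosen for illustration; the real algorithm works with whatever domain you plug in, and the tutorial itself did not give an implementation.

```python
import random

# Minimal MAP-Elites sketch. `behavior`, `fitness`, and `mutate` are
# hypothetical stand-ins for whatever the actual domain provides.

GRID = 10  # number of discretization bins per behavior dimension

def behavior(x):
    # Map a solution to two behavior descriptors in [0, 1),
    # e.g. "speed" and "camouflage".
    return (abs(x[0]) % 1.0, abs(x[1]) % 1.0)

def fitness(x):
    # The quality we compare within a cell (higher is better).
    return -(x[0] ** 2 + x[1] ** 2)

def mutate(x):
    return [v + random.gauss(0, 0.1) for v in x]

def cell_of(x):
    b = behavior(x)
    return (int(b[0] * GRID), int(b[1] * GRID))

archive = {}  # cell -> (fitness, solution): the current elite of each cell

def maybe_insert(x):
    c, f = cell_of(x), fitness(x)
    if c not in archive or f > archive[c][0]:
        archive[c] = (f, x)  # keep only the best example per cell

# Seed with random solutions, then repeatedly mutate a random elite; the
# offspring often lands in (and may improve) a different cell.
for _ in range(100):
    maybe_insert([random.uniform(-2, 2), random.uniform(-2, 2)])
for _ in range(10_000):
    _, parent = random.choice(list(archive.values()))
    maybe_insert(mutate(parent))

print(f"{len(archive)} of {GRID * GRID} cells filled with their elites")
```

Note that the goal switching described above falls out of this loop for free: an offspring of one cell's elite can land in, and improve, a completely different cell.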
They gave a number of examples of this, for example robots that can recover from damage: they had a robot with six legs and trained it to move. Now they disabled one leg. Usually you have one solution, like you trained your neural network (I don't think it was even a neural network, but you trained your system) to move this robot as efficiently as possible. And now, because you only have one solution and one leg is broken, it doesn't work anymore. But since you have the entire landscape of solutions, you can jump to other solutions in the solution space (ones that are not as good when you have all six legs) and try them out: which ones still work now that I only have five legs? Since you have the entire landscape, you are very well able to do that. So that's pretty cool.

Another algorithm they presented was Go-Explore, which is an algorithm that solved these really hard Atari games a while back. What they do specifically is keep an archive of states they have reached in the past. It's a video game, you do some things, and then you are in certain states; so it's an archive of states. You just pick one of them: OK, this state means the little person I control is somewhere over there. Then you just explore from it; you do a population-based thing, you kind of go around from it, and so on. Then you look at the state you end up in. If the state you end up in is a known state, like you've been there before, so it's also in your archive, then you compare the two: did you get to that state faster via the new route, or via the route that was already in your archive? If you're faster to that state via the new route, you replace the archived one with the new one. So this again is kind of like Dijkstra's shortest-path algorithm, extrapolated to a domain where you have to explore; you don't actually have a graph. I think it's pretty cool. It's all kind of the same principle, but it can employ this goal-switching thing: you go to a certain state, but then, all of a sudden, because you explored something else, you find a much quicker way to that state, which you never intended. But it happens. So this is a basic principle: if you explore a lot, then good things might happen. A kind of serendipity discovery mechanism; and you can take those good things and incorporate them into the things that already work.
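The archive logic described above is easy to sketch. Below is a toy version under my own simplifying assumptions: a deterministic environment with hashable states, where "returning" to a state means replaying its stored action sequence, and the transition function is a made-up random walk. The real Go-Explore adds state downsampling, cell-selection heuristics, and a robustification phase on top of this.

```python
import random

# Toy Go-Explore-style archive: state -> shortest action sequence found so
# far. Assumes a deterministic environment with hashable states, so we can
# "return" to a state by replaying its stored trajectory. `step` is a
# made-up stand-in (a random walk on the integers).

def step(state, action):
    return state + (1 if action else -1)

archive = {0: []}  # the start state, reached by the empty trajectory

for _ in range(5_000):
    # 1. Pick an archived state and return to it via its stored trajectory.
    state = random.choice(list(archive.keys()))
    trajectory = list(archive[state])

    # 2. Explore from there with a few random actions.
    for _ in range(10):
        action = random.random() < 0.5
        state = step(state, action)
        trajectory.append(action)

        # 3. Archive new states; replace a known state only if the new
        #    route to it is shorter than the archived one.
        if state not in archive or len(trajectory) < len(archive[state]):
            archive[state] = list(trajectory)

print(f"reached {len(archive)} states; farthest state found: {max(archive)}")
```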
The last topic they covered was open-ended search, as a distinction from what they had already discussed. They give the example, again, of life on earth. If you consider it, it's a single run of an algorithm. It's not that for every life form a different optimization was started and finished, optimized for a certain thing; it's all one single run of the same algorithm, and it doesn't really have a goal in mind. Open-ended algorithms are like that. They define an interesting notion: would it still be interesting if we were to just let it run for a billion years? If yes, consider it an open-ended algorithm, which I find a really good kind of definition. The fundamental property of open-ended algorithms, which research in this area has identified, is that not only is the population constantly shifting, but the environment is shifting too. So the situation is never static. The environment is always shifting, which also means there are always new opportunities opening up for new life on earth, for new creatures to evolve to fill the niches that open up. And the research community around this, the open-ended search, open-ended learning community, is considering exactly those types of environments: how can they even describe them, manufacture them, and then learn in them? So, pretty cool.

A cool experiment they showed was the Picbreeder experiment, where basically there is a human in the loop, and humans could cooperate. As a human, you go to a website and you pick one picture; these pictures are procedurally generated. They start out as very simple patterns, you pick one, and it gives you a bunch of random perturbations of the procedurally generated image. You pick the ones you like, and then you continue exploring from there. And if you're happy, you can save yours to the database, and someone else can look through the database and pick yours, for example, to continue from. The things that the humans came up with, the results of that, were extremely interesting. Not only could you perturb, but you could also kind of mix pictures, as far as I remember; I'm not sure anymore. But the point is you could breed pictures; you could also put pictures together. What you end up with, through the procedural generation, is remarkably interesting things. And the point they made is that this comes from really only very few iterations, like tens or hundreds of iterations of development, not a million like we're used to. And there's a real tree of phylogenies that emerges. The crucial lesson, they say, is that people only find when they are not looking. If you had a certain goal in mind, you would never be able to change the pictures in a way that makes that goal appear. But if you have no goal in mind, you might discover all kinds of interesting things.

So that is kind of all I'm going to say on this. They discussed many more things, but I think these are the main takeaways. Population-based search is interesting because it can overcome the problems you get if you only have one optimizer, one optimization run of one algorithm. If you employ quality diversity, as in the MAP-Elites algorithm, this enables this kind of goal switching and gives you back an entire landscape of learned actors or systems, where each one is kind of the best-performing one under that particular constraint on the dimensions you care about. And yeah, open-ended algorithms, open-ended search, is definitely a cool research direction, and I encourage you to check it out. All right, that was it so far. Thanks for listening. Bye.
[ { "start": 0, "end": 8, "text": " This is huge. This is just one hall and most people I guess are still waiting for registration." }, { "start": 8, "end": 14, "text": " Yeah, but definitely the size of these things is ginormous." }, { "start": 14, "end": 17, "text": " The tutorials have just started." }, { "start": 17, "end": 20, "text": " There we go. It's finding a place." }, { "start": 20, "end": 26, "text": " Hi, so I just wanted to give a little update on a tutorial that I liked" }, { "start": 26, "end": 30, "text": " which was the population-based search and open-ended learning tutorial" }, { "start": 30, "end": 34, "text": " which happened on Monday here." }, { "start": 34, "end": 40, "text": " So I was pleasantly surprised by this tutorial because I knew almost nothing about these techniques" }, { "start": 40, "end": 44, "text": " and they seem really cool. It seems to be a really cool line of research." }, { "start": 44, "end": 48, "text": " So I started out with what is population-based search" }, { "start": 48, "end": 54, "text": " and basically in population-based search you don't want to just reach one solution of a problem" }, { "start": 54, "end": 60, "text": " but you want to maintain a population of solutions that you develop over time." }, { "start": 60, "end": 66, "text": " So natural evolution would be an example of that." }, { "start": 66, "end": 73, "text": " So this can have many benefits that were explored in the tutorial." }, { "start": 73, "end": 80, "text": " So the culprit of traditional optimization, let's say you have a classification problem," }, { "start": 80, "end": 86, "text": " you just train one classifier on it, is what they call deception," }, { "start": 86, "end": 93, "text": " meaning that a better example is an RL problem where you need to reach some goal" }, { "start": 93, "end": 101, "text": " but since the goal might be very hard to reach, your algorithm has basically nothing to go on." }, { "start": 101, "end": 103, "text": " There's no stepping stone." }, { "start": 103, "end": 108, "text": " So usually people go and construct a reward function in a very clever way." }, { "start": 108, "end": 113, "text": " But this can be overcome with these techniques as well." }, { "start": 113, "end": 119, "text": " So just imagine the hardest video game in the Atari suite." }, { "start": 119, "end": 123, "text": " This would be something like Montezuma's Revenge where you first need to collect some key" }, { "start": 123, "end": 127, "text": " and then go to some door and only then you get a score." }, { "start": 127, "end": 134, "text": " So this reward function is too ambitious and is a problem they call your deception." }, { "start": 134, "end": 140, "text": " An observation they make is if you look at nature and natural evolution," }, { "start": 140, "end": 144, "text": " it is very successful even without a goal." }, { "start": 144, "end": 152, "text": " So there's no goal in mind to natural evolution except reproduction creates other reproduction." }, { "start": 152, "end": 159, "text": " But it's not a goal, that's simply a kind of underlying mechanism." }, { "start": 159, "end": 165, "text": " And if you look at nature, all this variety of life was produced without a goal in mind." }, { "start": 165, "end": 173, "text": " And all this variety of life filling different niches and basically reproducing at their own pace." }, { "start": 173, "end": 176, "text": " So it's a very interesting observation." 
}, { "start": 176, "end": 181, "text": " The goal of this entire field is kind of to model, to go into this direction of" }, { "start": 181, "end": 188, "text": " what if we don't really go after only the cost function, but what if we..." }, { "start": 188, "end": 196, "text": " So in the most extreme case, what if we build a search algorithm that only wants to create novel things?" }, { "start": 196, "end": 202, "text": " So where kind of novelty is the only goal, what happens then?" }, { "start": 202, "end": 207, "text": " And it turns out some interesting things can be achieved with that." }, { "start": 207, "end": 215, "text": " So they introduced this notion of quality diversity, which basically means if you look at," }, { "start": 215, "end": 223, "text": " let's again take a life on earth, you want all the achievable behaviors that there are." }, { "start": 223, "end": 230, "text": " So maybe one achievable behavior is a very fast life form that can hunt other life forms," }, { "start": 230, "end": 235, "text": " and another achievable behavior is one that camouflages very well and so on." }, { "start": 235, "end": 243, "text": " And you want to kind of find for each of these behaviors, you want to find the best possible example." }, { "start": 243, "end": 247, "text": " So that's the direction that these algorithms go into." }, { "start": 247, "end": 256, "text": " And an algorithm that they presented was MapElites, so M-A-P-Elites, which goes as follows." }, { "start": 256, "end": 263, "text": " So let's say you have a bunch of dimensions you care about, say how fast a creature is," }, { "start": 263, "end": 266, "text": " how tall it is, how well it is camouflaged and so on." }, { "start": 266, "end": 270, "text": " Now you want to discretize each of those dimensions." }, { "start": 270, "end": 274, "text": " So this will give you cells basically." }, { "start": 274, "end": 279, "text": " So each of these discretization will introduce a grid of cells." }, { "start": 279, "end": 285, "text": " And what you now do is you want to keep the best examples of each cell." }, { "start": 285, "end": 291, "text": " So if you have a creature that's very fast but not very well camouflaged at some cell," }, { "start": 291, "end": 297, "text": " you look at how well it's doing at the goal that you have in mind." }, { "start": 297, "end": 300, "text": " And you want to keep the best one of those." }, { "start": 300, "end": 305, "text": " You have a population and whichever ones are in that cell, you keep the best." }, { "start": 305, "end": 308, "text": " And then you go ahead and you kind of change them." }, { "start": 308, "end": 312, "text": " You could do this via evolutionary process, like you can mutate them," }, { "start": 312, "end": 317, "text": " or it could be via gradient descent something." }, { "start": 317, "end": 322, "text": " But you mutate them and I guess they will probably end up in a different cell." }, { "start": 322, "end": 329, "text": " So you go look at that cell. Are these new ones better than the ones that you remembered from that old cell?" }, { "start": 329, "end": 331, "text": " And if so, replace them." }, { "start": 331, "end": 338, "text": " For each cell, keep the best one and then kind of start continue developing from those." }, { "start": 338, "end": 342, "text": " Sort of like Dijkstra's shortest path algorithm." }, { "start": 342, "end": 350, "text": " So what it will return is like an entire landscape of possible behaviors." 
}, { "start": 350, "end": 355, "text": " And for each behavior, it will give you the best result." }, { "start": 355, "end": 358, "text": " Now it doesn't mean they all do equally." }, { "start": 358, "end": 365, "text": " Some will be better, some cells will be not as good with regards to your cost function." }, { "start": 365, "end": 372, "text": " But it will give you an entire landscape and you could see then that there are many kind of modes in this landscape." }, { "start": 372, "end": 377, "text": " As I said, some creatures are very fast hunters, some camouflage very well." }, { "start": 377, "end": 380, "text": " But then they are kind of slower." }, { "start": 380, "end": 383, "text": " So you will be able to see these modes in that." }, { "start": 383, "end": 392, "text": " I found this pretty interesting and opens the door to a lot of different applications." }, { "start": 392, "end": 397, "text": " So a principle they employ is what is called goal switching." }, { "start": 397, "end": 406, "text": " Namely, that means if a line of development can benefit from inventions of another line." }, { "start": 406, "end": 419, "text": " So let's say the very fast hunters, they are good at that, but then maybe they don't reach quite optimal performance." }, { "start": 419, "end": 427, "text": " But then another line develops somewhere else and these are camouflaged, like the camouflaged life forms develop." }, { "start": 427, "end": 429, "text": " So they invent kind of camouflage." }, { "start": 429, "end": 438, "text": " Now because of the way this mutation and so on is, you kind of keep the camouflaged ones around and the hunters." }, { "start": 438, "end": 442, "text": " And now the camouflage can kind of jump over to the hunters." }, { "start": 442, "end": 448, "text": " It's very difficult to explain like this, but they call this goal switching." }, { "start": 448, "end": 461, "text": " And what it means is that the hunters can now adopt a little bit of camouflage through, let's say mutating one of the camouflaged ones into the hunters or vice versa." }, { "start": 461, "end": 465, "text": " And then can kind of benefit from that invention over there." }, { "start": 465, "end": 478, "text": " And so a good example of that, they mentioned, is that in order to discover the microwave, you first had to work on radar technology, which had nothing to do with microwaves." }, { "start": 478, "end": 485, "text": " But because of the inventions made in radar technology, you could then invent the microwave easily." }, { "start": 485, "end": 489, "text": " So it kind of jumped over into the space of ovens, basically." }, { "start": 489, "end": 494, "text": " Before, all you had to make food warm was just put it in an oven and heat it up." }, { "start": 494, "end": 500, "text": " Now you had the microwave. So that kind of these algorithms capture the spirit of this." }, { "start": 500, "end": 508, "text": " A book that the people who gave the tutorial wrote is Why Greatness Cannot Be Planned." }, { "start": 508, "end": 516, "text": " I'll definitely get that. And I can't recommend it since I haven't read it yet, but I'm going to get and read it." }, { "start": 516, "end": 519, "text": " Should be fairly interesting." }, { "start": 519, "end": 531, "text": " So they give them a number. They gave a number of examples of this, for example, robots that can recover from damage because so they had a robot with six legs." }, { "start": 531, "end": 535, "text": " They trained it to move. 
Now they disabled one leg." }, { "start": 535, "end": 540, "text": " Now, usually you have one solution like you trained your neural network." }, { "start": 540, "end": 547, "text": " I don't think it was even a neural network, but you trained your like your system to move this robot as efficiently as possible." }, { "start": 547, "end": 552, "text": " And now because you only have one solution, one legs broken, it doesn't work anymore." }, { "start": 552, "end": 562, "text": " But since you have the entire landscape of solutions, you can easily kind of jump to other not as good solutions if you have all legs." }, { "start": 562, "end": 568, "text": " But you can jump to other solutions in the solution space and try them out." }, { "start": 568, "end": 576, "text": " Which ones do still work? If I only now have five legs, since you have the entire landscape, you're very well able to do that." }, { "start": 576, "end": 579, "text": " So that's pretty cool." }, { "start": 579, "end": 591, "text": " Another algorithm they presented was GoExplore, which is an algorithm that kind of solved these really hard Atari games while back." }, { "start": 591, "end": 600, "text": " And what they do in specific is they kind of have an archive of states that they have reached in the past." }, { "start": 600, "end": 608, "text": " So it's a video game and you do some things and then you are in certain states. So it's an archive of states." }, { "start": 608, "end": 619, "text": " And you just pick one of that. Right. You pick like, OK, this state means I'm like my little person I control is somewhere over there." }, { "start": 619, "end": 626, "text": " And then you just explore from it. Right. You do a population based. You just kind of go around from it and so on." }, { "start": 626, "end": 636, "text": " And then you look at the state you end up in. And if the state you end up in is a known state like you've been there before." }, { "start": 636, "end": 649, "text": " So it's also in your archive. Then you compare the two. Did you get faster to that state via the new route or did you get faster to that state via the route that was already in your archive?" }, { "start": 649, "end": 657, "text": " And if you're faster in that state via the new route, you will you replace the archived one with the new one." }, { "start": 657, "end": 667, "text": " So this again is kind of like a Dijkstra shortest path algorithm extrapolated to this to this kind of domain where you have to explore." }, { "start": 667, "end": 678, "text": " You don't actually have a graph. So I think it's it's pretty cool. It's all kind of the same principle, but it can employ this goal switching thing." }, { "start": 678, "end": 687, "text": " Right. So you go to a certain state, but then all of a sudden, because you explored something else, you find a much quicker way to that state, which you never intended." }, { "start": 687, "end": 699, "text": " But it happens. So this is a basic principle that kind of if you explore a lot, then good things might happen." }, { "start": 699, "end": 710, "text": " So kind of a serendipity discovery mechanism, and you could use those good things, incorporate them into the things that already work." }, { "start": 710, "end": 722, "text": " The last topic they covered was open ended search. So a distinction from what they've already discussed to open ended is now." }, { "start": 722, "end": 729, "text": " They give the example again life on earth. If you consider it, it's a single run of an algorithm." 
}, { "start": 729, "end": 740, "text": " It's not that for every life form, a different optimization was started and kind of started and finished, optimized for a certain thing." }, { "start": 740, "end": 747, "text": " It's all one single run of the same algorithm. And it doesn't really have a goal in mind." }, { "start": 747, "end": 752, "text": " So open ended algorithms are like that. They kind of define interesting notion." }, { "start": 752, "end": 758, "text": " Is it still interesting if we were to just let it run for a billion years? Like, would it still be interesting?" }, { "start": 758, "end": 765, "text": " If yes, consider it an open ended algorithm, which I find a really good kind of definition." }, { "start": 765, "end": 781, "text": " So the fundamental property that open ended algorithms have and research in this has defined is that constantly not only is the population shifting, but also the environment is shifting." }, { "start": 781, "end": 800, "text": " So there's kind of a never static situation. The environment's always shifting. That also means there's always new opportunities opening up for kind of new life on earth, for new creatures to evolve, to kind of fill the niches that open up." }, { "start": 800, "end": 814, "text": " And the research community around this, the open ended search, open ended learning community is considering exactly those types of environments." }, { "start": 814, "end": 821, "text": " Like how can they even describe those, manufacture those and then learn in those. So pretty cool." }, { "start": 821, "end": 832, "text": " The cool experiment they've shown was the pick breeder experiment, where basically it's a human in the loop. So they gave humans could cooperate." }, { "start": 832, "end": 840, "text": " So as a human, you go to a website, you pick one picture and these pictures are procedurally generated." }, { "start": 840, "end": 850, "text": " So they start out with a very simple pattern and you just have the opportunity to kind of you pick one and it gives you a bunch of random perturbations of the procedurally generated image." }, { "start": 850, "end": 855, "text": " And you pick the ones that you like and then you continue exploring from there." }, { "start": 855, "end": 864, "text": " And if you're happy, you can just save that to the database and someone else can look through the database and then pick yours, for example, to continue." }, { "start": 864, "end": 872, "text": " And the things that the humans came up with or the result of that was extremely interesting." }, { "start": 872, "end": 881, "text": " So not only could you perturb, but you could also kind of mix pictures as far as I remember. Not sure anymore." }, { "start": 881, "end": 891, "text": " But the things they end up with is you could breed pictures, right? You could you could kind of also put pictures together." }, { "start": 891, "end": 900, "text": " So the procedural generation of them and what you end up with is remarkable, remarkably interesting things." }, { "start": 900, "end": 905, "text": " And the point they made is it's really only from very few iterations." }, { "start": 905, "end": 911, "text": " These are like tens or hundreds of iterations of development, not like a million like we're used to." }, { "start": 911, "end": 915, "text": " And there's a real tree of phylogenies that emerge." }, { "start": 915, "end": 922, "text": " And the crucial lesson, they say, is people only find when they are not looking." 
}, { "start": 922, "end": 931, "text": " So if you had a certain goal in mind, you would never be able to, you know, change the pictures in the way that this goal would appear." }, { "start": 931, "end": 937, "text": " But if you have no goal in mind, you might discover all kinds of interesting things." }, { "start": 937, "end": 944, "text": " So that that is kind of all I'm going to say of this." }, { "start": 944, "end": 948, "text": " They discussed many more things, but I think these are the main takeaways." }, { "start": 948, "end": 958, "text": " So population population based search is interesting because it can kind of overcome the problems that if you only had one optimizer," }, { "start": 958, "end": 965, "text": " one optimization run of one algorithm, if you employ quality diversity in the algorithm map elites," }, { "start": 965, "end": 977, "text": " this this enables this kind of goal switching gives you back an entire landscape of of the of learned actors or systems" }, { "start": 977, "end": 988, "text": " that for each one, you know, it's kind of the best performing one in that particular constraint of of the of the dimensions you care about." }, { "start": 988, "end": 997, "text": " And yeah, open ended algorithms, open ended search is definitely a cool research direction." }, { "start": 997, "end": 1002, "text": " And I encourage you to check it out. All right. That was it so far." }, { "start": 1002, "end": 1007, "text": " Thanks for listening. Bye." } ]
id: dND-7llwrpw
channel: Yannic Kilcher
channel_id: UCZHmQk67mSJgfCCTn7xBfew
title: Grokking: Generalization beyond Overfitting on small algorithmic datasets (Paper Explained)
categories: [ "Science & Technology" ]
tags: [ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "grokking", "openai", "double descent", "belkin", "overfitting", "bias variance", "steps", "training", "binary tables", "binary operations", "binary operation", "multiplication table", "algorithmic datasets", "groups", "s5 group", "deep learning algorithmic", "deep learning generalization", "generalization research", "why do neural networks generalize" ]
description:
#grokking #openai #deeplearning
Grokking is a phenomenon when a neural network suddenly learns a pattern in the dataset and jumps from random chance generalization to perfect generalization very suddenly. This paper demonstrates grokking on small algorithmic datasets where a network has to fill in binary tables. Interestingly, the learned latent spaces show an emergence of the underlying binary operations that the data were created with.

OUTLINE:
0:00 - Intro & Overview
1:40 - The Grokking Phenomenon
3:50 - Related: Double Descent
7:50 - Binary Operations Datasets
11:45 - What quantities influence grokking?
15:40 - Learned Emerging Structure
17:35 - The role of smoothness
21:30 - Simple explanations win
24:30 - Why does weight decay encourage simplicity?
26:40 - Appendix
28:55 - Conclusion & Comments

Paper: https://mathai-iclr.github.io/papers/papers/MATHAI_29_paper.pdf

Abstract: In this paper we propose to study generalization of neural networks on small algorithmically generated datasets. In this setting, questions about data efficiency, memorization, generalization, and speed of learning can be studied in great detail. In some situations we show that neural networks learn through a process of “grokking” a pattern in the data, improving generalization performance from random chance level to perfect generalization, and that this improvement in generalization can happen well past the point of overfitting. We also study generalization as a function of dataset size and find that smaller datasets require increasing amounts of optimization for generalization. We argue that these datasets provide a fertile ground for studying a poorly understood aspect of deep learning: generalization of overparametrized neural networks beyond memorization of the finite training dataset.

Authors: Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin & Vedant Misra

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
text:
Hi there. Today we'll look at Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets by Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin and Vedant Misra of OpenAI. On a high level, this paper presents a phenomenon the researchers call grokking, where a neural network will generalize all of a sudden, way past the point of overfitting on a dataset. So you train the network, and it completely overfits on a dataset: training loss is down, training accuracy is 100%, but it doesn't generalize at all to the validation set. Then, when you continue training the network, at some point it will just snap into generalizing, on these datasets they're researching, to a hundred percent generalization, so a hundred percent accuracy on the validation set. This is extremely interesting, and as you can see, the paper has been presented at a workshop at ICLR 2021, which means that it is sort of work in progress; there are still a lot of unclear things about this phenomenon. As I understand it, it's a phenomenological paper that just says: look, here is something interesting that we found. I think it's pretty cool, so we'll dive into the paper and look at this phenomenon; they do dig into it a little bit, into what's happening here, and try to come up with some explanation.

The basic premise of grokking is the graph you see on the left right here. It is a little bit pixelish, but I hope you can still see what's happening. The red part is the training accuracy, and on the x-axis you have the number of optimization steps; this is a log scale, and that's important to see: it's a log scale for training steps in this direction. The training accuracy naturally shoots up to a hundred percent after a few steps. We'll get to what datasets these are in a second, but it's important to see that the network can in fact fit the training data extremely well; it just overfits. However, the validation accuracy, if you can see it, shows a little bump here, but then it goes down again (I don't know whether we should even regard this as a bump that's actually happening). It just stays down, and then, after orders of magnitude more steps (this is 10 to the second, 10 to the third, 10 to the fourth, 10 to the fifth steps), it shoots up and starts to generalize as well.
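Since the jump happens orders of magnitude after the training set is fit, you only catch it if you keep training and evaluate on a log-spaced schedule. Here is a minimal sketch of that monitoring loop; `train_step` and `accuracy` are hypothetical placeholders for whatever model and data you use, not anything from the paper.

```python
import math

# Grokking only becomes visible orders of magnitude after the training set
# is fit, so track validation accuracy at log-spaced steps, not per epoch.

def log_spaced_steps(max_steps: int, per_decade: int = 10) -> list[int]:
    """Checkpoint steps spaced evenly on a log axis, from 1 up to max_steps."""
    n = int(per_decade * math.log10(max_steps)) + 1
    return sorted({round(10 ** (i / per_decade)) for i in range(n)})

def train_step(model):
    pass  # hypothetical placeholder: one optimizer update

def accuracy(model, split: str) -> float:
    return 0.0  # hypothetical placeholder: exact-match accuracy on a split

def run(model, max_steps: int = 10**6):
    checkpoints, history = set(log_spaced_steps(max_steps)), []
    for step in range(1, max_steps + 1):
        train_step(model)
        if step in checkpoints:
            history.append((step, accuracy(model, "train"), accuracy(model, "val")))
    # Plot validation accuracy against log(step): training accuracy saturates
    # early, while the validation curve can snap up much, much later.
    return history
```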
This delayed snap is very interesting, because it essentially means you keep on training for a long time, and when all hope is lost, the network at some point will still generalize. Now, why is this happening? As I understand it, it's not often the case that the network drops out of generalization again, though I haven't actually seen this investigated (like, what if they run for 10 to the I-don't-know-how-many steps); it seems that once the network is generalizing and has training accuracy of a hundred percent, it doesn't fall out of that again. So the questions are: how does this happen, what's happening here, why is it all of a sudden, and what makes it work? For that, it's a bit important to understand a very related, in fact probably connected, phenomenon called the double descent phenomenon in deep learning.

The double descent graph looks somewhat similar, in that the premise is that on the x-axis you have the number of parameters in a neural network, and on the y-axis you have, let's say, loss (or accuracy; I'm not sure; most of these plots for the double descent phenomenon are actually loss). If you consider the training loss: as you increase the number of parameters in your neural network, you fit the training data better and better, so you get a curve that goes down and then just stays at zero; there is zero training loss as you increase the number of parameters further. Every point on this line is a neural network with a given number of parameters that has been optimized to convergence; that's important to remember. On the left we saw a graph during optimization; on the right is a graph of many different networks, all of which have been trained to convergence. Now, the validation loss might at some point come down together with the training loss, and then, in the classic fashion of machine learning, as the number of parameters goes up, you start to overfit and the validation loss goes up again, because you start memorizing the training data set. At the point where the number of parameters roughly equals the number of training data points (let's just call it n), you have a really crappy validation loss, because you are just remembering the training data. However, if you increase your parameters beyond that point, so if you scale up your neural networks even more, the validation loss comes down again, and actually ends up at a lower point than over here, where you had not enough parameters. So there is a point beyond overfitting, where you have more parameters than data points, and interestingly, neural networks can achieve generalization there, in fact better generalization with overparameterization than comparable underparameterized models, which flies in the face of classical statistics. But we know this phenomenon exists. So we knew that things like this can happen: the training loss can be perfect, and still we can have generalization. As for the grokking phenomenon, I'm going to guess that the authors of the double descent papers simply hadn't looked quite as far: I guess they ran training to convergence for a number of steps and then looked at the validation loss.
I guess they would have stopped somewhere in between, between 10 to the third and 10 to the fourth steps. This research simply asks: what happens if we let it run for a really long time? Then the validation accuracy shoots up as well, and it seems that for a lot of conditions you can do this.

So now it's worth looking at what kind of datasets we are interested in here. The datasets are synthetic; in this paper, they are binary operation tables of the form a ∘ b = c, where a, b and c are discrete symbols with no internal structure and ∘ is a binary operation. Examples of binary operations include addition, composition of permutations, bivariate polynomials, and many, many more. In fact, they have some examples, I think, down here: you see some like addition and multiplication, but also more complicated things, like a polynomial that you then take modulo a prime number, division modulo a prime number, and so on. The way you create a dataset is you construct a table: you have a number of these symbols, and then you define the binary operation by simply filling in that table. So if this were, I don't know, a + b, and a and b are numbers, then a + b = c: if a is 1 and b is 2, then c is 3, and so on. But you can define this as many different things. A lot of the experiments in this paper are on the group S5, which is the group of all permutations of five elements; this group has 120 elements, so your table here would be 120 by 120, and the operation would be composition of permutations: every permutation of five elements composed with another permutation gives you yet another permutation of five elements. So you can just construct this table, and then you simply cross out a few things in the table. You say: OK, here I'm just going to cross out a few things, and this is what the network should predict. I'm going to train the network on the data that I have, and I'm going to predict the cells that I crossed out. This way you can measure exactly how good the network is; there is effectively no noise in the data, and it's all very well defined. A human goes about this with, I guess, a sort of logical mind: they try to figure out, ah, what's the rule? A neural network can simply remember the training data, but then it will not generalize to the hidden fields, because it cannot memorize those. So if a neural network generalizes here, it also kind of means that it must have somehow learned the rule.

This is pretty interesting, and there are a number of quantities to keep in mind. The three quantities are: first, the operation itself, because there are more and less complicated things for these networks to learn, just from the complexity of the operation; second, the dataset size, i.e. the size of the binary table, in this case 120 by 120; and third, how much is left out, i.e. how large the training data fraction is, the fraction of the table that is filled in for the network to learn from. All three of these play a crucial role in this grokking phenomenon, and in when and how it appears.
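As a concrete illustration of such a dataset, here is a short sketch using addition modulo a prime as the binary operation (an S5 composition table would be built the same way, just with permutations as the symbols). The prime and the split fraction are my own illustrative choices, not the paper's exact setup.

```python
import random

# Build a binary-operation table a ∘ b = c (here: addition modulo a prime)
# and split the filled-in cells into train / validation.

p = 97  # table is p x p; the paper also uses e.g. S5, which has 120 elements

# Every equation is a triple (a, b, c): the network sees (a, b) and must
# predict c for the held-out cells.
table = [(a, b, (a + b) % p) for a in range(p) for b in range(p)]

random.seed(0)
random.shuffle(table)
train_frac = 0.5                     # the key knob: the training data fraction
n_train = int(train_frac * len(table))
train, val = table[:n_train], table[n_train:]

print(len(train), "training cells;", len(val), "hidden cells to predict")
```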
For example, here you see they have trained neural networks on this S5 group, the permutations of five elements, until they reach generalization: they simply run it and measure how long it takes a network to reach 99% validation accuracy or higher. On the left, the answer would be something like between 10 to the 5 and 10 to the 6 steps, and they measure this as a function of (you might not be able to read this, but it says) training data fraction, i.e. how much of the training data is filled in. You can pretty clearly see: if I give it only 20% of the training data, there are even some runs that do not generalize in this number of steps. Would they generalize if you optimized for even longer? Who knows, honestly. But you can see that as soon as you give around 30% of the training data, the runs in general do generalize, but they take something like 10 to the 5 steps to do so. Then, as you increase the training data fraction, this snap to generalization happens faster and faster: as you give more training data, it goes faster and faster until it generalizes. And the generalization itself happens, as I understand it, fairly quickly. The network remembers the training data in a fairly similar number of steps in all cases, as I understand it, but then at some later point it just kind of snaps and completely generalizes to the validation set. This is really interesting. So we know that having more training data around helps; that's one recognition.

The other thing is that they try to figure out which parts of the optimization algorithm make this grokking phenomenon happen, and they find that weight decay is in fact one of the big drivers. They try a lot of different things (full batch versus mini-batch, with and without dropout, modulating the learning rate, and so on), but weight decay seems to be one of the biggest contributors to this grokking phenomenon, to how fast these networks generalize: the network generalizes much sooner with weight decay turned up than without. They also make the observation that if your binary operation is symmetric, the grokking phenomenon happens much faster than with non-symmetric operations. This might just be a function of these networks: if you have something like a transformer, it is sort of invariant to the symmetry, so essentially one data point is two data points in disguise if the operation is symmetric, or there's only half as much stuff to learn; choose whatever interpretation you want. But I think this is not as important as the weight decay.

And why do I highlight this? I highlight it because, down here, they analyze the results of a network that has learned to generalize like this. On the right you see a t-SNE projection of the output layer weights from a network trained on modular addition, so this is x plus y modulo 8, I think. The lines show the result of adding 8 to each element; the colors show the residue of each element modulo 8. So in the t-SNE projection (the lines are obviously drawn by the authors), you can see there are structures.
If you go along a line right here, it is always adding 8, adding 8, adding 8. So there are structures where the rule for generating the data is clearly present in the data itself, sorry, in the network's weights. This gives you a strong indication that the network has not just somehow remembered the data, but has in fact discovered the rule behind the data. And we never incentivized the networks to learn these rules; that's the wild point. There are architectures where you specifically tell the network: look, there is a rule behind this, I want you to figure out the rule; you can maybe do symbolic regression, or you can try to build an internal graph and reason over it. No: here we just train neural networks, and it turns out that these networks can learn these rules.

So why do I relate this to the double descent phenomenon? For double descent, I have heard the authors of those papers speak about their hypothesis of why it happens (and this is a bit mixed with my own hypothesis as well); they mention, for example, weight decay as one possible explanation. Say I have a bunch of data points right here, and I want to do regression on them. If I just do linear regression, I have one line; it's fairly flat and fairly robust, because it's just one parameter. Now, if I start to add parameters, maybe I get to a point where I have a good number of parameters, and this polynomial is still fairly robust; you can see how it might generalize to new data. So the dark blue curve would be somewhere here, where the validation loss actually goes down with the training loss. But when I keep adding parameters, classically I start overfitting right here, and this will not generalize to any point that might be in between, like this one here; the loss will just go up. The green curve would correspond to the point where I just start to interpolate the training data. But then what happens if I go on, if I make even higher-order polynomials, or higher-order neural networks? At that point, at least these authors argue, you get a curve that, yes, has a lot of parameters, but uses these parameters to smoothly interpolate the training data. The curve is quite complicated in terms of the number of numbers you need to describe it, but it uses the fact that it has a lot of freedom (it can choose to be however it wants, as long as it interpolates the training data), yet it chooses to be smooth, because of a combination of SGD training it and of weight decay. The weight decay prevents any of these numbers from getting too big, and therefore prevents a super out-of-whack curve. So the weight decay in fact smoothens the curve, and that makes the model generalize really well, because the smoothness now reasonably extends to data points that are in between: this data point is still fairly well represented by the purple curve, in fact better than by the dark blue curve in this particular case. So you can see that these authors argue weight decay might be an important contributor to why overparameterized networks generalize.
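This smoothness argument is easy to play with in a few lines. The following toy example is my own, not from the paper or the video: it fits a polynomial through a handful of points with and without an L2 penalty on the coefficients, which is the regression analogue of weight decay, and shows that the penalty shrinks the coefficient norm of the fitted curve.

```python
import numpy as np

# Toy version of the smoothness argument above: fit a polynomial through a
# few points with and without an L2 penalty on the coefficients, the
# regression analogue of weight decay.

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 8)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(8)

def polyfit_l2(x, y, degree, lam):
    """Ridge regression on a polynomial (Vandermonde) basis:
    solve (X^T X + lam * I) w = X^T y."""
    X = np.vander(x, degree + 1)
    return np.linalg.solve(X.T @ X + lam * np.eye(degree + 1), X.T @ y)

w_plain = polyfit_l2(x, y, degree=7, lam=0.0)   # interpolates the 8 points exactly
w_decay = polyfit_l2(x, y, degree=7, lam=1e-3)  # near-fit with shrunken coefficients

# The penalty shrinks every coefficient, taming the curve between data points.
print("coefficient norm without decay:", np.linalg.norm(w_plain))
print("coefficient norm with decay:   ", np.linalg.norm(w_decay))
```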
And it's interesting that the authors of the grokking paper find the same thing: they say, OK, if we use weight decay, the grokking appears to happen much faster. I don't know what exactly they call grokking; I'm just going to call it grokking whenever the validation accuracy snaps all of a sudden from zero to a hundred on these datasets. Now again, these are algorithmic datasets, so we don't know what happens in general. I think they do make experiments where they add noise to some of the data, and I think they find that with noise it's way more difficult; I'm not sure, though, maybe I'm confusing papers here.

But what might be happening right here? This is interesting, because what might be happening is that by imposing this smoothness and the overparameterization, we're biasing these networks to find simple solutions. If I have just very few training data points, if most of the cells here are blacked out, the simplest solution is simply to remember the training data. However, as I get more and more training data points, which give me more and more information about a potential underlying rule, it becomes simpler to understand the underlying rule than to remember the training data; it's more difficult to remember the training data than to simply learn the rule. So what might be happening here is this (and the training here always happens on the same data; you simply sample the same things over and over again and train on them): you kind of jump around in your optimization procedure; you can see there are some bumps in the training accuracy here, so you kind of jump around (jump around, that's a song). In your loss landscape there might be many of these local minima in which you remember the training data perfectly, so you jump around a bit between them; each remembers the training data perfectly. Then in one of them, you also remember the training data, but the solution is just so much simpler that you stay there. That's not a good way of visualizing it; it must be something like this: here are the minima of the training loss on the data, but there is another loss on top of that, for example the weight decay loss. The weight decay loss is pretty similar for all of these minima, but for one of them it is much lower, because that solution is so much simpler. So you jump around between those minima until you reach that one, where this loss on top of the data loss is just so much lower that you stay there. It's like: wow, I found such an easy solution; I'm not going to leave again.

So now the big question is, of course: how and why does something like SGD plus weight decay, plus potential other drivers of smoothness in these models, correspond to simplicity of solutions? Because simplicity of solutions is something we humans kind of have built in: asking "what's the rule behind this?" is essentially assuming that there is a simple rule and trying to find it, because it would make our life much easier; it's a simple explanation for what's happening.
The interesting part is that weight decay, or something similar happening in these neural networks, is essentially doing the same thing, even though we don't tell it to. Understanding this, I think, is going to be quite an important task for the near future. And maybe we're not exactly right with the weight decay; maybe there is some other constraint we can impose that encourages simple solutions, in the way we care about simplicity, even more. Once we have that, there is this age-old argument: do these things actually understand anything? Well, in this case, I'm sorry, but if the network has found this solution, with the rule essentially built into its weights, you can say: well, the network has in fact learned the rule behind these binary operations. So who are we to say these networks don't understand anything at that point? It also gives us the opportunity to train these networks and then, from the structures of their latent spaces, parse out rules of data we don't know yet: we let the networks fit, and we parse out the underlying, maybe physical, laws, maybe social phenomena, from the underlying data.

Oh yeah, here: there is an appendix where they list the binary operations they have tried, the models, the optimizations. They use a transformer with two layers and four attention heads, so it's not a big thing, and the datasets aren't super complicated either; but it is pretty cool to see this phenomenon. Again, on real-world data, with bigger networks and noisy data, it's not going to happen as drastically. They also say that as you increase the size of the dataset, this phenomenon gets harder and harder to see; if the entire dataset is bigger, the grokking phenomenon is, I guess, tougher to observe. And here is the experiment I mentioned, where you have several outliers, so noisy data points. This axis is the fraction of correctly labeled data points: as you increase the number of correctly labeled data points, you can see that grokking happens more often, or to a better validation accuracy, than otherwise. I don't know if you can read this, but the runs down here have too many outliers; with too many outliers, either the validation accuracy just stays at zero, or it turns up quite late. Okay. Here is an example of one of these binary operation tables that is a little bit larger; I don't know if it's one of the 120-sized ones, but this is something that would be presented to the network, and they say: we invite the reader to guess which operation is represented here. Well, have fun, dear reader.

All right, so this was it from me for the grokking paper. As I said, it seems like work in progress; I think it's pretty cool work in progress, and it raises a lot of questions. I wonder how people found this. Did they just forget to turn off their computer, and in the morning they came back and were like, whoopsie-doopsie, it generalized? Though if you build these kinds of datasets, I guess you have something in mind already. In any case, that was it for me. Tell me what you think is going on in neural networks, or whether there is
Okay, that's it. Here is an example of one of these binary operation tables that is a little bit larger; I don't know if it's one of the 120-sized ones, but this is something that would be presented to the network, and they say: we invite the reader to guess which operation is represented here. Well, have fun, dear reader.

All right, so this was it from me for the grokking paper. As I said, this seems like work in progress; I think it's pretty cool work in progress, and it raises a lot of questions. I wonder how this happened, like, how did people find this? Did they just forget to turn off their computer, and in the morning they came back and it was like, whoopsie-doopsie, it generalized? Though if you build these kinds of datasets, I guess you have something in mind already. In any case, that was it for me. Tell me what you think is going on in neural networks, or is there a super easy Occam's razor explanation that I'm missing? I don't know, tell me what you think. I'll see you next time. Bye bye.
DRy_Mr732yA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Drama] Who invented Contrast Sets?
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "arxiv", "twitter", "drama", "credit", "related", "lipton", "gardner", "counterfactual", "augmentation", "plagiarism" ]
Funny Twitter spat between researchers arguing who was the first to invent an idea that has probably been around since 1990 :D References: https://arxiv.org/abs/2004.02709 https://twitter.com/nlpmattg/status/1247326213296672768 https://arxiv.org/abs/1909.12434 https://twitter.com/zacharylipton/status/1247357810410762240 https://twitter.com/nlpmattg/status/1247373386839252992 https://twitter.com/zacharylipton/status/1247383141075083267 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
I love me some good Twitter drama. Look at this, this is awesome. So after this contrast set paper appeared, and I've done a video on that, the author tweeted it out with one of those long Twitter threads with screenshots and all; this seems to be the new marketing tool of academics. As you know, I'm not a fan of this paper: I think the number that comes out of such a contrast set is either useless or counterproductive, and you can see my video on that. In any case, there was another researcher, Zachary Lipton, who felt he needed to jump in here, saying: before the media blitz and retweet party gets out of control, this idea exists, has been published, it has a name and a clear justification; it is called counterfactually augmented data. This is amazing, look at that. And here's the published paper, and if we look at the published paper, this is it right here; of course Zach Lipton is an author on that paper.

So let's just read the abstract. I haven't read the paper, but let's just read the abstract; I have it here in my nifty reader so we can analyze it. This paper, if you read the abstract, does sound similar: despite alarm over the reliance of machine learning systems on, blah blah blah, spurious correlations; so it talks about the same problems. Now what do they say? Given documents and their initial labels, we task humans with revising each document so that it accords with a counterfactual target label, retains internal coherence, and avoids unnecessary changes. This sounds very similar to what these contrast sets do. The counterfactual target label corresponds to the contrast set's requirement to change the label; retaining internal coherence is, in the contrast sets, simply given by the requirement to conform to the intent of the dataset makers, which probably includes internal coherence; and avoiding unnecessary changes corresponds to the contrast set only searching in the local environment of a test set sample. So you see that the definitions of these are pretty similar. Then they go on and say: classifiers trained on original data fail on their counterfactually revised counterparts, and vice versa. This experiment was also done by the contrast set paper. And then they say: classifiers trained on combined datasets perform remarkably well, just shy of those specialized to either domain.

So immediately we see some differences as well. The main difference I see is that they say "we task humans", and then they train on the counterfactually revised counterparts, which probably means they used Mechanical Turkers here when they say humans, because if you want to create a training dataset you need lots of data; so they probably take a dataset and run its training data again through something like Mechanical Turk to get annotations. This is exactly what the people of the contrast sets claim is wrong with the current pipeline. So here we have this counterfactually augmented stuff, and the contrast sets people say: we actually need the experts to do this; these crowdworker humans are exactly the wrong people to make the datasets. So the counterfactually augmented data has some elements that are exactly the same, namely how they construct these labels, but not who constructs the labels and for what reason: here it's experts, and the reason is testing; they say the experts that make the dataset should provide an additional contrast test set.
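To picture what that shared definition means, here is a purely illustrative, invented example of such a pair (not taken from either paper):

```python
# A counterfactually revised example: minimally edit the text so that it
# accords with the flipped target label, keep it coherent, change nothing else.
original = {"text": "The acting was subtle and the plot kept me hooked.", "label": "positive"}
revised  = {"text": "The acting was wooden and the plot lost me quickly.", "label": "negative"}

# CAD adds such revised examples to the *training* data (crowdworker-written);
# contrast sets instead use expert-written perturbations of the *test* set.
train_set = [original, revised]
```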
Now, this is just my opinion, but if this counts as the same idea, and of course it's very similar, then 95% of all research counts as the same idea as something that Jürgen Schmidhuber has done in the 1990s, which of course Jürgen Schmidhuber will eloquently argue: exactly, he invented GANs, basically the same thing. So it's not the same. I have to say this is very close, but it's not, and as I understand it, they even cited the other one.

So then the bickering starts, and this is just funny to me. Zach Lipton jumps in here and says: this has been published, has a name and a clearer justification; it's called counterfactually augmented data; here is the published paper. We just looked at this. And then Matt Gardner answers, and he says: Zach and Divyansh's work is excellent, I recommend you all go look at it; our work provides a different, concurrent take on similar issues. And I think here someone comments, so he says, it is in the related work section, although mischaracterized and misattributed as contemporary work. So the position really is that it is kind of a stolen idea, and that they were apparently in contact with each other during that time.

This Matt Gardner then says what the differences are: we take a geometrical view, we demonstrate a much wider variety, and so on. I mean, for all intents and purposes, if you go through any of the research, go to computer vision, go to NLP, you'll find almost exactly this; I review two papers each year that want to produce data that better defines the decision boundary, like these people here. This idea just gets rehashed over and over in slightly different forms; these two are particularly close. And then see how they bicker: our paper was finished two months after theirs. And then they say: we started the project well before, and so on; why do we feel defensive? And then he answers again with: this is absolutely false, our paper was drafted in July, your paper was finished the night before the ACL deadline; this is not two months, but half a year. And: why do you presume to know when we started? Drop the nonsense. We did this work in May 2019, presented the public results in July, posted it; you'd better drop the posturing, so much of what you're doing here is the very cancer in the system. I mean, I agree: just slightly refining ideas that were previously there is a very bad problem in academia, so this is actually correct to point out, but I don't think that this particular instance is particularly bad. And then he says: I'm afraid you're simply mistaken, I have a history of publishing similar things, and so on. I just invite you to read this beautiful exchange.

But the last thing to say here: if this counterfactually augmented data is in fact the first instance of this general idea, of producing counterfactually augmented data that actually fulfills these criteria, I would be extremely surprised, because this has nothing to do with deep learning, and the real novelty in our field is mostly deep learning. So I'm pretty sure someone must have thought of something like this back when everyone was just doing grammars and manual features and things like that. I would be extremely surprised if this hasn't been there in one form or another; and why shouldn't the authors of that make exactly the same argument? That being said, it is fairly close. The fun part here is that it is actually a fairly similar idea.
But here the focus is on different things, and it's also on different datasets. And I believe, as I said, 95% of research falls into exactly this category. So, much fun; check it out. Yeah, bye bye.
tC01FRB0M7w
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Turing-NLG, DeepSpeed and the ZeRO optimizer
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "attention mechanism", "attention", "transformer", "seq2seq", "bert", "long sequence", "memory", "gpt-2", "Megatron", "Microsoft", "distributed", "parallelism" ]
Microsoft has trained a 17-billion parameter language model that achieves state-of-the-art perplexity. This video takes a look at the ZeRO optimizer that enabled this breakthrough. ZeRO allows you to do model- and data-parallelism without having huge cuts in training speed. https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/ https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/ https://github.com/microsoft/DeepSpeed https://arxiv.org/abs/1910.02054 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi everyone, today we're going to look at Turing NLG, a 17 billion parameter language model by Microsoft. The latest and greatest of language modeling by Microsoft. What is this? It is a language model. A language model is basically a model that learns to produce language, given language. So if you start a sentence it's supposed to finish the sentence. If you start a paragraph it's supposed to finish the paragraph. That's a language model. Ultimately you can make it do different things like answer questions, have a conversation with you, anything to do with understanding language. The special thing about this one is that it's ginormous. So if you look at the scale of language models: BERT was quite a large thing back in its day. Ye Olde BERT, you can see here, has about 340 million parameters. Now I have to say all of these language models are transformers. This is kind of the state of the art today, so all of these are transformer-based models. Then GPT-2 here, you can see, was the model that was so large it was too dangerous to be released into the world. That stands at 1.5 billion parameters. Megatron-LM by Nvidia is at 8.3 billion, and now we are at 17 billion parameters for this language model. And it is a bit better. People just throw more and more and more resources at this language problem. So what you can do with it: you can of course do language modeling. What happens is you take a bunch of text, like all of Wikipedia and all of the internet and all of Reddit and so on, and you let the model train on it to understand, to basically produce, that sort of language. And then you can measure it, for example its perplexity on a validation set. And Turing NLG is currently state-of-the-art on that. It can also do, for example, question answering. So you can ask a question and give it a passage about that question, and it will then tell you the answer that it deduced from that passage given the question, as you can see here. What is more interesting is that a usual QA system will point to the passage. So it will point to the words Tristan Prettyman. Whereas with a generative model like this one, what you can do is make it actually output the answer as a sentence. So it will generate the text "Jason Mraz was engaged to Tristan Prettyman". If you ask a question without giving it a context and just ask it to generate an answer, it will do so as well. I don't know if these answers are cherry-picked, but they call this zero-shot question answering. So if you ask "when did World War II end", it can output "World War II ended in 1945", simply out of regularities it detected in the training data. So I mean, that's what I'm kind of wondering: at what point do these models have so many parameters that they simply reproduce the training data? Clearly some article from the training data is about World War II, or many are, and it simply learned that following a question "when did World War II end" it needs to answer with the appropriate passage. I'm not sure that is a proper measure of language understanding if you can simply bake more and more of the training data into these many, many parameters, but I'm not the judge of that here. It can do it very well. What I'm actually more interested in is this thing called the ZeRO optimizer that they use to train the model. The model is just a transformer, it's just a big, big transformer model. There is nothing really special about the model except that it is larger than the last model and therefore a bit better.
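(Since perplexity is the headline metric here, a minimal sketch of how it is conventionally computed; this is the textbook definition, not Microsoft's evaluation code, and the tensors are made up purely to show the shapes.)

```python
import torch
import torch.nn.functional as F

def perplexity(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # logits:  (num_tokens, vocab_size) raw model outputs per position
    # targets: (num_tokens,) the token that actually came next
    # Perplexity is exp of the average negative log-likelihood per token,
    # so lower is better: the model is less "surprised" by the text.
    nll = F.cross_entropy(logits, targets, reduction="mean")
    return torch.exp(nll)

# Toy usage with random numbers, just to illustrate the shapes.
logits = torch.randn(10, 50000)           # 10 positions, 50k-token vocabulary
targets = torch.randint(0, 50000, (10,))  # the true next tokens
print(perplexity(logits, targets))
```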
What is interesting is that this would have been pretty impossible to train if it weren't for the ZeRO optimizer of this DeepSpeed library, and Microsoft has released this DeepSpeed library. It's compatible for now with PyTorch. You can check this out, I'll put a link into the description, and I want to dive into this a bit. There's a paper, it's by Samyam Rajbhandari et al. at Microsoft. The paper describes the optimizer in detail, but it's not very visual. That's why we're going to the blog post. You can see it gives many speed-ups over the previous Megatron-LM model that Nvidia trained just using what Nvidia has. Nvidia has machines that are interconnected within the machine with very fast buses between GPUs. But this ZeRO optimizer can now also go over the network and make it pretty fast. Let's explore that a bit. I have a copy of this here. We'll look at how the ZeRO optimizer works. Usually, if you have multiple GPUs, you can do something like this. This is called data parallelism. What you have is a model, and the model in this case fits on your GPU. It fits on a single GPU. The blue thing here is the model. I'll actually draw this. The model is a neural network, so it has a bunch of layers: layer, layer, layer, layer. What you want to do is pass data forward. Here is some loss, and then right into the loss function, and then backward again. That's basically what you need to do: you need to pass it forward and backward in order to do backpropagation training. If this all fits into one box, that's completely fine. If this fits into one machine, cool. We can just put many batches of data through: batch one, batch two, batch three and so on. Train the model. If you want a speed-up using this, you can do so. If you have lots of data you can do what's called, and I'm always confused, I think this is called data parallelism, or is it called model parallelism. In any case, what you can do is take a second machine, or many of those, and replicate the model. These two models here are exactly the same. What you do is take your data and split it up. You take double the amount of data and you put one batch of data through the top part, and you put the other through the bottom part. You do your forward passes on the machines and you do your backward passes. Then what you want to do is sync between the machines what they learned from the data. Each machine has a different set of data points. Each machine calculates its own parameter updates. It learns from the data it has, and then they communicate, because this here and this here should be the same. It's the same model. They have to keep in sync. This can usually be done fairly efficiently, especially if these aren't actually two machines but just two GPUs inside of one large machine. If this is a large machine, this is GPU 0 and this is GPU 1. This is pretty standard, because especially on Nvidia machines they have these, whatever, I think they call them NVLink or so. Nvidia has these connectors that connect the GPUs together really fast. You can keep these in sync. But now the problem becomes: what if you want to train a model that is larger than this? Let's forget about the data parallelism for now, if that is what it's called, and just consider a model that is too large. A model that is too large will not fit into a machine. This is a large model.
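(Before moving on to model splitting, here is a minimal sketch of the data-parallel gradient sync just described, using raw torch.distributed primitives. In practice you would use torch.nn.parallel.DistributedDataParallel, which overlaps this communication with the backward pass; model, optimizer, loss_fn and batch are placeholders, and an initialized process group is assumed.)

```python
import torch.distributed as dist

def data_parallel_step(model, optimizer, loss_fn, batch):
    # Assumes dist.init_process_group(...) was called and this function
    # runs once per rank. Every replica holds an identical copy of the
    # model and sees a different shard of the batch.
    optimizer.zero_grad()
    loss = loss_fn(model(batch["x"]), batch["y"])
    loss.backward()
    # Average the gradients across replicas so all copies compute the
    # same update and stay in sync.
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size
    optimizer.step()
```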
What you want to do is pack some of the model onto your first machine, and then take the other part of the model and pack it onto another machine. You separate the model and put it on different machines. If you have a batch of data, what you have to do is pass it forward, propagate as you regularly would, but then you have an intermediate result. You send that to the next machine and you forward propagate that. At the end here you have a loss. You want to backpropagate regularly through this machine. You have an intermediate result of backpropagation. Send it over the network and backprop all the way through the model. That's how you can train a model that is too large for one machine, if you have multiple machines. The problem here of course is this part. Just as you had to keep the model in sync before, now your communication problem becomes one of... You have to send the intermediate stages to that model, and you have to send the intermediate stage of the backpropagation back to that part of the model. While this part is working, this part is idling away. The network overhead is just very costly, especially if your model is so large it can't even fit into one of these single boxes. This is very problematic here. It's still doable. But what the ZeRO optimizer does is both data and model parallelism. It can train models that are too large for a single machine. It can do data parallelism at the same time. Basically everything is working all the time. There is not much wasted computation. The communication is efficient and so on. It's really a technical achievement. It's not so much a scientific advance, it's really a technical achievement, this optimizer. We'll shortly go through it. There is kind of an animation on the website, but it's super slow. I think this might be the first time that I will be faster at explaining something than a video. Let's see here. What you do is... Let's just consider these three GPUs. Before that, it would all fit on one machine. But now let's say you don't actually have that much memory. You don't have these giant empty blocks here, you just have a bit of that. So you have to split your model. The blue parts here are your model. These are model parameters. The orange part here is memory you need to store gradients. You need as many gradients as you have model parameters, because you do gradient descent. The green stuff here are what's called optimizer parameters. Now if you just have SGD, these would be non-existent. But if you have something like AdaGrad or Adam, they have additional parameters for each model parameter that they need to keep track of. So these are stored here. There can be significant overhead. There's also a floating point 32 to 16 conversion going on here, but I don't want to go into that. So you split your model onto these three machines. Let's say that's your entire model. Your model is six blocks wide. You need to forward propagate now through everything. So here is what ZeRO does. I think it's pretty cool. What we need to do is: we have these three different batches of data and we want to forward propagate them all through the model. Through the same model at the same time. As if the model were actually stored on all these machines. Like if all of these machines had the entire model. And we can do a bit of communication. So what we do first is... This one's easy. Data zero through the first two layers here is easy, because we have them.
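(As a brief aside before following the ZeRO walkthrough further: the naive model splitting described above might look like this in code. This is a toy single-process, two-GPU sketch rather than the cross-machine setup discussed in the video, and the layer sizes are arbitrary.)

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    # Naive model parallelism: the first half of the layers lives on one
    # device, the second half on another. Activations hop across devices
    # in the forward pass; autograd routes gradients back over the same hop.
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to("cuda:1")

    def forward(self, x):
        h = self.part1(x.to("cuda:0"))
        # This transfer is the communication cost: while cuda:1 computes,
        # cuda:0 sits idle, which is exactly the problem described above.
        return self.part2(h.to("cuda:1"))

# out = SplitModel()(torch.randn(8, 512))  # requires two visible GPUs
```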
So bang, you go through the first layer and you get an intermediate result here and here. How do we propagate data one through the first layer? We can't send data one here, that would be too expensive, and the whole point would be lost. We want to actually compute data one on this GPU at the same time. What we do is: before we start, we actually communicate these two blocks here to GPU one. We send these parameters around and fill them in here. We send them here, and we also send them here. We send the parameters to all the machines. Then we can actually forward prop data one through this, and data three through this. So we can do forward prop. After we've communicated, all the GPUs can be working. Same with layer two. Layer two can simply send these two here, you can see, to the other machines. Now while it's doing that, we've already propagated through the first layer. We've already propagated here and here through the first layer, so we can actually delete these again. We can delete these first layer parameters that we sent around. So here you see how we can save memory. We don't keep all of the model in sync on all the machines. We send whatever we need to the other machines, and then once the computation is done, they can delete it again. Because there's always one machine, this one here for the middle parameters, that keeps track of the parameters and can at any point, if they're needed, send them again. So that's the big catch. You can forward prop now through these two, they're already present. Then you can delete those again on the machines where they're not natively stored. From here you can send those two. Also up here you can send those two, and forward prop your model through to the end. That was a mistake. Then each machine calculates its own loss. The backward propagation happens in much the same way. If you've followed so far, you can already imagine. Now the loss is different, because there's a different batch of data going through each machine. There's a different batch of data going through each machine, but each machine has computed with the same model, due to the communication of the ZeRO optimizer. That's pretty cool. You get the benefits of data parallelism, lots of data on the different machines, and you also split up the model across the machines. You don't actually store the model on any one of these machines. You only send. From here you send as you need, and then you delete again. For the backward propagation, same thing. You calculate gradients. You calculate gradients here, and you send the gradients as needed to the other machines. You calculate gradients here and here, and you send them to the machine where they're actually needed. This is a weird pen. You send them to that machine. That machine will aggregate all the gradients of all the machines. It will aggregate them, and then locally it can compute, using these optimizer parameters and so on. It can do all kinds of optimization locally, because it has gathered gradients from all the other data. What you end up with: for example GPU 2 here, for these two layers, has effectively broadcast the layers such that much, much more data than it just had itself could run through the layers. It has aggregated gradients from all of that data, and now it can use all of these gradients together to make a good update, using the optimizer parameters. To make a good update to these model parameters, and then in the next iteration it can go ahead and broadcast the model parameters, the new model parameters, again.
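(To make this broadcast-compute-free pattern concrete, here is a heavily simplified toy sketch; it is not DeepSpeed's actual implementation, which overlaps communication with compute, re-gathers parameters for the backward pass, and partitions within layers. owner_of and shapes are hypothetical bookkeeping: a map from layer index to the rank that permanently stores that layer, and the parameter shapes needed to allocate receive buffers. An initialized process group is assumed.)

```python
import torch
import torch.distributed as dist

def zero_style_forward(layers, shapes, owner_of, x, rank):
    # Each layer's full parameters live permanently on exactly one rank.
    for i, layer in enumerate(layers):
        for name, p in layer.named_parameters():
            if rank != owner_of[i]:
                # Non-owners allocate a temporary buffer to receive into.
                p.data = torch.empty(shapes[i][name])
            # The owner broadcasts the layer's weights so every rank can
            # run its own batch through the same model.
            dist.broadcast(p.data, src=owner_of[i])
        x = layer(x)
        # Non-owners drop their temporary copy right away to save memory.
        # (A real implementation would re-gather it for the backward pass.)
        if rank != owner_of[i]:
            for p in layer.parameters():
                p.data = torch.empty(0)
    return x

def zero_style_grad_reduce(layers, owner_of):
    # After backward, gradients are summed across ranks and land only on
    # the owning rank, which also holds the optimizer state for them.
    for i, layer in enumerate(layers):
        for p in layer.parameters():
            if p.grad is not None:
                dist.reduce(p.grad, dst=owner_of[i], op=dist.ReduceOp.SUM)
```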
It is able to compute with much more data than it can fit by itself. It is just doing its part. So ZeRO and DeepSpeed: ZeRO is the protocol and DeepSpeed is the actual library. They will do all of this communication and splitting and so on for you over the network, in a way that is efficient, in a way that everything runs at the same time, and the communication overhead is minimal. You can actually choose which stage you want, so what your trade-off of communication and memory saving will be. This is extremely cool. They say this goes up to whatever, 100 billion parameter models, if you use... This isn't something for your average Colab user. This is really something for big players. But that being said, I don't think language is solved by simply throwing more parameters at it. I think there's still a bit of a breakthrough ahead, yet to come, in language understanding, with newer model architectures. Alright, that was it for me. Thanks.
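(For completeness, roughly what using the library looks like. This is hedged: exact keyword names and config keys may differ across DeepSpeed versions, model and data_loader are assumed to exist, and the wrapped model is assumed to return its loss.)

```python
import deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "fp16": {"enabled": True},
    # The ZeRO stage controls how much state (optimizer parameters,
    # gradients, model parameters) gets partitioned across GPUs.
    "zero_optimization": {"stage": 2},
}

# DeepSpeed wraps the model into an "engine" that hides all of the
# partitioning and communication described above.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for batch in data_loader:
    loss = model_engine(batch)       # assumes the model returns its loss
    model_engine.backward(loss)      # partitioned gradient reduction happens here
    model_engine.step()              # sharded optimizer update
```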
[ { "start": 0, "end": 6.34, "text": " Hi everyone, today we're going to look at Turing NLGA 17 billion parameter" }, { "start": 6.34, "end": 11.78, "text": " language model by Microsoft. The latest and greatest of language modeling by" }, { "start": 11.78, "end": 18.5, "text": " Microsoft. What is this? It is a language model. A language model is basically a" }, { "start": 18.5, "end": 25.580000000000002, "text": " model that learns to produce language, given language. So if you start a" }, { "start": 25.580000000000002, "end": 28.66, "text": " sentence it's supposed to finish a sentence. If you start a paragraph it's" }, { "start": 28.66, "end": 33.6, "text": " supposed to finish the paragraph. That's a language model. Ultimately you can make" }, { "start": 33.6, "end": 37.44, "text": " it do different things like answer questions, have a conversation with you," }, { "start": 37.44, "end": 41.08, "text": " anything to do with understanding language. The special thing about this" }, { "start": 41.08, "end": 47.760000000000005, "text": " one is that it's ginormous. So if you look at the scale of language" }, { "start": 47.760000000000005, "end": 55.68, "text": " models, so BERT was quite a large thing back in its day. Ye Olde BERT, you" }, { "start": 55.68, "end": 62.2, "text": " can see here it has about 340 million parameters. Now I have to say all of" }, { "start": 62.2, "end": 66.08, "text": " these language models are transformers. This is kind of the state of the art" }, { "start": 66.08, "end": 74.32, "text": " today. So all of these are kind of our transformer based models. Then GPT-2" }, { "start": 74.32, "end": 79.92, "text": " here, you can see that was the model that was so large it was too dangerous to be" }, { "start": 79.92, "end": 87.12, "text": " released into the world. That stands at 1.5 billion parameters. Megatron LM by" }, { "start": 87.12, "end": 94.64, "text": " Nvidia 8.3 billion and now we are at 17 billion parameters for this language" }, { "start": 94.64, "end": 103.76, "text": " model. And it is a bit better. People just throw more and more and more" }, { "start": 103.76, "end": 111.96000000000001, "text": " resources at this language problem. So what you can do with it, you can" }, { "start": 111.96000000000001, "end": 116.4, "text": " of course do language modeling. So what happens is you take a bunch of text like" }, { "start": 116.4, "end": 122.36000000000001, "text": " all of Wikipedia and all of the internet and all of Reddit and so on and you let" }, { "start": 122.36000000000001, "end": 128.76, "text": " the model train on it to understand, to basically produce that sort of language." }, { "start": 128.76, "end": 134.07999999999998, "text": " And then you can measure it, for example it's a perplexity on a validation set." }, { "start": 134.07999999999998, "end": 142.92, "text": " And the Turing NLG is currently state-of-the-art on that. It can also do" }, { "start": 142.92, "end": 146.56, "text": " for example question answering. So you can ask the question and give it a" }, { "start": 146.56, "end": 152.76, "text": " passage about that question and it will then tell you the answer that it deduced" }, { "start": 152.76, "end": 158.6, "text": " from that passage given the question as you can see here. What is more interesting" }, { "start": 158.6, "end": 165.64, "text": " is that a usual QA system will point to the passage. So it will point to the" }, { "start": 165.64, "end": 174.68, "text": " words Tristan Prediman. 
Whereas with a generative model like this one what you" }, { "start": 174.68, "end": 180.51999999999998, "text": " can do is you can make it actually output an answer as a sentence. So it" }, { "start": 180.51999999999998, "end": 186.95999999999998, "text": " will generate the text Jason Bras was engaged to Tristan Prediman." }, { "start": 186.96, "end": 197.92000000000002, "text": " If you ask a question without giving it a context and just ask it to generate an" }, { "start": 197.92000000000002, "end": 202.52, "text": " answer it will do so as well. I don't know if these answers are cherry-picked" }, { "start": 202.52, "end": 206.76000000000002, "text": " but they call this zero-shot question answering. So if you ask when did World" }, { "start": 206.76000000000002, "end": 214.84, "text": " War II end and it can output World War II ended in 1945. Simply out of regularities" }, { "start": 214.84, "end": 220.72, "text": " it detected in the training data. So I mean that's what I'm kind of wondering." }, { "start": 220.72, "end": 227, "text": " At what point are these models, do they have so many parameters that they" }, { "start": 227, "end": 234.24, "text": " simply reproduce the training data? I mean this clearly some article" }, { "start": 234.24, "end": 240.32, "text": " from the training data is about World War II or many are and it simply learned" }, { "start": 240.32, "end": 247.79999999999998, "text": " that following a question when did World War II end it needs to answer with the" }, { "start": 247.79999999999998, "end": 254.68, "text": " appropriate passage. I'm not sure that is a proper measure of language" }, { "start": 254.68, "end": 260.44, "text": " understanding if you simply can bake more and more of the training data into" }, { "start": 260.44, "end": 269.44, "text": " these many many parameters but I'm not the judge of that here. It can do it very" }, { "start": 269.44, "end": 276.28, "text": " well. So yeah what I'm actually more interested in is this thing is called the" }, { "start": 276.28, "end": 281.76, "text": " zero optimizer that they use to train the model. So the model is just a" }, { "start": 281.76, "end": 285.8, "text": " transformer, it's just a big big transformer model. There is nothing really" }, { "start": 285.8, "end": 291.52, "text": " special about the model except that it is larger than the last model and" }, { "start": 291.52, "end": 296.88, "text": " therefore a bit better. What is interesting is that this would have been" }, { "start": 296.88, "end": 303.12, "text": " pretty impossible to train if it weren't for this zero optimizer of this deep" }, { "start": 303.12, "end": 307.88, "text": " speed library and Microsoft has released this deep speed library. It's compatible" }, { "start": 307.88, "end": 313.32, "text": " for now with PyTorch. You can check this out. I'll put a link into the description" }, { "start": 313.32, "end": 320.4, "text": " and I want to dive into this a bit. There's a paper, it's by Samyam Raj" }, { "start": 320.4, "end": 331.84, "text": " Bandari and all by Microsoft. The paper describes in detail the optimizer" }, { "start": 331.84, "end": 338.91999999999996, "text": " but it's not very visual. That's why we're going to the blog post. You can see" }, { "start": 338.91999999999996, "end": 348.28, "text": " it gives many speed ups over the previous Megatron LM model that" }, { "start": 348.28, "end": 355.84, "text": " Nvidia just trained using what Nvidia has. 
Nvidia has machines that" }, { "start": 355.84, "end": 361.91999999999996, "text": " are interconnected within the machine with very fast buses" }, { "start": 361.91999999999996, "end": 371.67999999999995, "text": " between GPUs. But this zero optimizer can now also go over the network and make it" }, { "start": 371.68, "end": 378.88, "text": " pretty fast. Let's explore that a bit. I have the copy this here. We'll" }, { "start": 378.88, "end": 383.52, "text": " look how the zero optimizer works. Usually what you do is if you have" }, { "start": 383.52, "end": 391.52, "text": " multiple GPUs you can do something like this. This is called data parallelism." }, { "start": 391.52, "end": 398.6, "text": " What you have is a model and the model in this case fits on your GPU." }, { "start": 398.6, "end": 403.76000000000005, "text": " It fits on a single GPU. The blue thing here is the model. I'll actually" }, { "start": 403.76000000000005, "end": 410.64000000000004, "text": " draw this. The model is a neural network so it has a bunch of" }, { "start": 410.64000000000004, "end": 415.76000000000005, "text": " layers. Layer, layer, layer, layer. What you want to do is you pass data" }, { "start": 415.76000000000005, "end": 423.72, "text": " forward. Here is some loss and then right into the loss function and then backward" }, { "start": 423.72, "end": 428.28000000000003, "text": " again. That's basically what you need to do. You need to pass it forward and" }, { "start": 428.28, "end": 433.47999999999996, "text": " backward in order to do back propagation training. If this all fits" }, { "start": 433.47999999999996, "end": 440.44, "text": " into one box that's completely fine. If this fits into one machine, cool." }, { "start": 440.44, "end": 445.21999999999997, "text": " We can just put many batches of data through batch one, batch two, batch three" }, { "start": 445.21999999999997, "end": 451.15999999999997, "text": " and so on. Train the model. If you want to do a speed up using this you can do so." }, { "start": 451.15999999999997, "end": 456.4, "text": " If you have lots of data you can do what's called, and I'm always confused, I" }, { "start": 456.4, "end": 462, "text": " think this is called data parallelism or is it called model parallelism." }, { "start": 462, "end": 466.91999999999996, "text": " In any case what you can do is you can take a second machine or many of those," }, { "start": 466.91999999999996, "end": 475.2, "text": " replicate the model. These two models here are exactly the same." }, { "start": 475.2, "end": 480.88, "text": " What you do is you take your data and you split it up. You take double" }, { "start": 480.88, "end": 486.32, "text": " the amount of data and you put one batch of data through the top part and you" }, { "start": 486.32, "end": 490.59999999999997, "text": " put the other through the bottom part. You do your forward passes on the" }, { "start": 490.59999999999997, "end": 496.24, "text": " machines and you do your backward passes. Then what you want to do is you want" }, { "start": 496.24, "end": 500.88, "text": " to sync between the machines what they learned from the data. Each machine" }, { "start": 500.88, "end": 506.92, "text": " has a different set of data points. Each machine calculates its own parameter" }, { "start": 506.92, "end": 513.08, "text": " updates. It learns from the data it has and then they communicate to keep" }, { "start": 513.08, "end": 518.6, "text": " because this here and this here should be the same. It's the same model." 
}, { "start": 518.6, "end": 524.6, "text": " They have to keep in sync. This can be usually can be done fairly efficiently" }, { "start": 524.6, "end": 529.96, "text": " especially if these aren't actually two machines but just two GPUs inside of one" }, { "start": 529.96, "end": 536.8000000000001, "text": " large machine. If this is a large machine this is GPU 0 and this is GPU 1." }, { "start": 536.8000000000001, "end": 541.9200000000001, "text": " This is pretty standard because especially on Nvidia machines they have" }, { "start": 541.92, "end": 548.52, "text": " these whatever I think they call them InfiniBand or so." }, { "start": 548.52, "end": 554.16, "text": " Nvidia has these connectors that connects the GPUs together really fast." }, { "start": 554.16, "end": 561.4399999999999, "text": " You can keep these in sync but now the problem becomes what if you want to" }, { "start": 561.4399999999999, "end": 567.24, "text": " train a model that is larger than this. Let's forget about the data parallelism" }, { "start": 567.24, "end": 572.36, "text": " for now if that is what it's called and just consider a model that is too large." }, { "start": 572.36, "end": 582, "text": " A model that is too large will not fit into a machine. This is a model as a" }, { "start": 582, "end": 589.48, "text": " large model. What you want to do is you want to pack some of the model onto" }, { "start": 589.48, "end": 597.36, "text": " your first machine and then take the other part of the model and pack" }, { "start": 597.36, "end": 602.44, "text": " it onto another machine. You separate the model and put it on different" }, { "start": 602.44, "end": 606.8000000000001, "text": " machines. If you have a batch of data what you have to do is you pass it" }, { "start": 606.8000000000001, "end": 611.08, "text": " pass it pass it forward propagate as you regularly would but then you have an" }, { "start": 611.08, "end": 615.9200000000001, "text": " intermediate result. You send that to the next machine and you forward" }, { "start": 615.92, "end": 622.0799999999999, "text": " propagate that. At the end here you have a loss. You want to back propagate" }, { "start": 622.0799999999999, "end": 625.68, "text": " regularly through this machine. You have an intermediate result of back" }, { "start": 625.68, "end": 631.9399999999999, "text": " propagation. Send it over the network and back prop all the way through the model." }, { "start": 631.9399999999999, "end": 637.88, "text": " That's how you can train a model that is too large for one machine if you" }, { "start": 637.88, "end": 645.1999999999999, "text": " have multiple machines. The problem here of course is this part. Just as you had" }, { "start": 645.2, "end": 650.0400000000001, "text": " to keep in sync the model before, now your communication problem" }, { "start": 650.0400000000001, "end": 660.24, "text": " becomes one of... You have to send the intermediate stages to that model and" }, { "start": 660.24, "end": 664.76, "text": " you have to send the intermediate stage of the back propagation back to that" }, { "start": 664.76, "end": 672.84, "text": " part of the model. While this part is working this part is idling away." }, { "start": 672.84, "end": 681.8000000000001, "text": " The network overhead is just very costly. Especially if your model is so" }, { "start": 681.8000000000001, "end": 690.12, "text": " large it can't even fit into one of these single boxes. This is very" }, { "start": 690.12, "end": 701.0400000000001, "text": " problematic here. 
It's still doable. But what the zero optimizer does is it does" }, { "start": 701.04, "end": 707.52, "text": " both data and model parallelism. It can train models that are too large" }, { "start": 707.52, "end": 718, "text": " for a single machine. It can do data parallelism at the same time." }, { "start": 718, "end": 724.8, "text": " Basically everything is working all the time. There is not much wasted" }, { "start": 724.8, "end": 728.8, "text": " computation. The communication is efficient and so on. It's really a" }, { "start": 728.8, "end": 733.4, "text": " technical achievement. It's not so much a scientific advance. It's really a" }, { "start": 733.4, "end": 739.28, "text": " technical achievement this optimizer. We'll shortly go through. There is a" }, { "start": 739.28, "end": 744.0799999999999, "text": " kind of an animation on the website but it's super slow. I think" }, { "start": 744.0799999999999, "end": 748.7199999999999, "text": " this might be the first time that I will be faster at explaining something than a" }, { "start": 748.7199999999999, "end": 755.4799999999999, "text": " video. Let's see here. What you do is... Let's just consider these" }, { "start": 755.48, "end": 759.28, "text": " three GPUs. Before that it would all fit on one machine. But now let's say you" }, { "start": 759.28, "end": 764.72, "text": " don't actually have that much memory. You don't have these giant" }, { "start": 764.72, "end": 769.84, "text": " empty blocks here. You just have a bit of that. So you have to split your model." }, { "start": 769.84, "end": 776.36, "text": " The blue parts here are your model. These are model parameters." }, { "start": 776.36, "end": 784.08, "text": " The orange part here is memory you need to store gradients. You need as" }, { "start": 784.08, "end": 789.6800000000001, "text": " many gradients as you have model parameters. Because you do gradient" }, { "start": 789.6800000000001, "end": 795.6800000000001, "text": " descent. The green stuff here are what's called optimizer parameters. Now if you" }, { "start": 795.6800000000001, "end": 801.96, "text": " just have SGD these would be non-existent. But if you have something" }, { "start": 801.96, "end": 806, "text": " like AdaGrad or Atom they have additional parameters for each model" }, { "start": 806, "end": 811.8000000000001, "text": " parameter that they need to keep track of. So these are stored here. There" }, { "start": 811.8, "end": 818.28, "text": " can be significant overhead. There's also like a floating point 3216" }, { "start": 818.28, "end": 822.3199999999999, "text": " conversion going on here. Don't want to go into that. So you split your" }, { "start": 822.3199999999999, "end": 825.9599999999999, "text": " model onto these three machines. Let's say that's your entire model. Your model" }, { "start": 825.9599999999999, "end": 832.76, "text": " is six blocks wide. You need to forward propagate now through everything." }, { "start": 832.76, "end": 838.68, "text": " So here is what Xero does. I think it's pretty cool. What we need to do" }, { "start": 838.68, "end": 843.68, "text": " is we have these three different batches of data and we want to forward" }, { "start": 843.68, "end": 850.0799999999999, "text": " propagate them all through the model. Through the same model at the same time." }, { "start": 850.0799999999999, "end": 856, "text": " As if the model were actually stored on all these machines. 
Like if all of these" }, { "start": 856, "end": 862.9599999999999, "text": " machines had the entire model. And we can do a bit of communication. So what" }, { "start": 862.96, "end": 870.2, "text": " we do first is... This one's easy. Data zero through the first two layers" }, { "start": 870.2, "end": 875.48, "text": " here is easy. Because we have them. So bang you go through the first" }, { "start": 875.48, "end": 886.24, "text": " you get an intermediate result here and here. How do we propagate data one" }, { "start": 886.24, "end": 892.1600000000001, "text": " through the first layer? We can't send data one here. That would be" }, { "start": 892.16, "end": 897.16, "text": " too expensive. And that's the whole point would be lost. We want to" }, { "start": 897.16, "end": 903.68, "text": " actually compute data one on this GPU at the same time. What we do is before we" }, { "start": 903.68, "end": 911.4399999999999, "text": " start we actually communicate these two blocks here to GPU one. We send" }, { "start": 911.4399999999999, "end": 919.4, "text": " these parameters around and fill them in here. We send them here and we" }, { "start": 919.4, "end": 925.12, "text": " also send them here. We send the parameters to all the machines." }, { "start": 925.12, "end": 931.48, "text": " Then we can actually forward prop data one through this and data three through" }, { "start": 931.48, "end": 937.84, "text": " this. So we can do forward prop. After we've communicated all the GPUs can be" }, { "start": 937.84, "end": 946.84, "text": " working. Same with layer two. Layer two simply can send these" }, { "start": 946.84, "end": 954.32, "text": " two here. You can see that these two here to the other machines. Now while" }, { "start": 954.32, "end": 958.48, "text": " it's doing that we've already propagated through the first layer." }, { "start": 958.48, "end": 964.64, "text": " We've already propagated here and here through the first layer. So we can" }, { "start": 964.64, "end": 970.8000000000001, "text": " actually delete these again. We can delete these first layer" }, { "start": 970.8000000000001, "end": 976.64, "text": " parameters that we sent around again. So here you see how we can save memory." }, { "start": 976.64, "end": 982.52, "text": " We don't keep all the model in sync and all the machines. We send whatever we" }, { "start": 982.52, "end": 989, "text": " need on the other machines and then once the computation is done they can delete" }, { "start": 989, "end": 993.84, "text": " it again. Because there's always one machine, this one here for the" }, { "start": 993.84, "end": 998.08, "text": " middle parameters, that keeps track of the parameters and that can at any point" }, { "start": 998.08, "end": 1003.6, "text": " if they're needed send them again. So that's the big kind of catch. You can" }, { "start": 1003.6, "end": 1008.08, "text": " forward prop now through these two. They're already present." }, { "start": 1008.08, "end": 1012.96, "text": " Then you can delete those again on the machines where they're not natively" }, { "start": 1012.96, "end": 1021.24, "text": " stored. From here you can send those two. Also up here you can send" }, { "start": 1021.24, "end": 1030.64, "text": " those two and forward prop your model through to the end." }, { "start": 1030.64, "end": 1039.3200000000002, "text": " That was a mistake. Then each machine calculates its own loss." 
}, { "start": 1039.3200000000002, "end": 1045.8000000000002, "text": " The backward propagation happens in much the same way." }, { "start": 1045.8000000000002, "end": 1053.0800000000002, "text": " If you follow so far you can already imagine." }, { "start": 1053.0800000000002, "end": 1057.8400000000001, "text": " Now the loss is different because there's a different batch of data" }, { "start": 1057.84, "end": 1061.76, "text": " going through each machine. There's a different batch of data going" }, { "start": 1061.76, "end": 1067.28, "text": " through each machine but each machine has computed with the same model due to" }, { "start": 1067.28, "end": 1074.1599999999999, "text": " the communication of the zero optimizer. That's pretty cool. You get the" }, { "start": 1074.1599999999999, "end": 1079.74, "text": " benefits of data parallelism, lots of data on the different machines and you" }, { "start": 1079.74, "end": 1086.84, "text": " also split up the model across the machines. You don't actually store" }, { "start": 1086.84, "end": 1092.24, "text": " the model on any of these machines. You only send." }, { "start": 1092.24, "end": 1100.12, "text": " From here you send as you need and then you delete again. For the backward" }, { "start": 1100.12, "end": 1106.52, "text": " propagation, same thing. You calculate gradients." }, { "start": 1106.52, "end": 1112.3999999999999, "text": " You calculate gradients here and you send the gradients as needed to the" }, { "start": 1112.4, "end": 1120, "text": " other machines. You calculate gradients here and here and you send them to the" }, { "start": 1120, "end": 1124.64, "text": " machine where they're actually needed. This is a weird pen. You send them to" }, { "start": 1124.64, "end": 1129.44, "text": " that machine. That machine will aggregate all the gradients of all the machines." }, { "start": 1129.44, "end": 1138.3200000000002, "text": " It will aggregate them and then locally it can compute using" }, { "start": 1138.3200000000002, "end": 1142.24, "text": " these optimizer parameters and so on. It can do all kinds of optimization" }, { "start": 1142.24, "end": 1148.48, "text": " locally because it has gathered gradients from all the other data." }, { "start": 1148.48, "end": 1157.44, "text": " What you end up with, for example, GPU 2 here, for these two layers it has" }, { "start": 1157.44, "end": 1164.72, "text": " effectively broadcast the layers such that much much more data than it just" }, { "start": 1164.72, "end": 1172.72, "text": " had itself could run through the layers. It has aggregated gradients from all of" }, { "start": 1172.72, "end": 1178.08, "text": " that data and now it can use all of these gradients together to make a good" }, { "start": 1178.08, "end": 1184.68, "text": " update using the optimizer parameters. To make a good update to these model" }, { "start": 1184.68, "end": 1189.08, "text": " parameters and then in the next iteration it can go ahead and broadcast" }, { "start": 1189.08, "end": 1193.3600000000001, "text": " the model parameters. The new model parameters again. It is able to" }, { "start": 1193.36, "end": 1200, "text": " compute with much more data than it can just fit by itself. It is just doing" }, { "start": 1200, "end": 1207.36, "text": " its part. So Zero and DeepSpeed, Zero is the protocol and DeepSpeed is the" }, { "start": 1207.36, "end": 1213.04, "text": " actual library. 
They will do all of this communication and splitting and so on" }, { "start": 1213.04, "end": 1218.8799999999999, "text": " for you over the network in a way that is efficient, in a way that everything" }, { "start": 1218.88, "end": 1225.96, "text": " runs at the same time and the communication overhead is minimal. You" }, { "start": 1225.96, "end": 1232.2800000000002, "text": " can actually choose which stage you want, so what your trade-off of communication" }, { "start": 1232.2800000000002, "end": 1238.96, "text": " and memory saving will be. This is extremely cool. They say this goes up to" }, { "start": 1238.96, "end": 1248.72, "text": " whatever 100 billion parameter models if you use... This isn't something for" }, { "start": 1248.72, "end": 1254.48, "text": " your average Colab user. This is really something for big players." }, { "start": 1254.48, "end": 1261.64, "text": " But that being said, I don't think language is solved by simply throwing" }, { "start": 1261.64, "end": 1265.28, "text": " more parameters at it. I think there's still a bit of a breakthrough" }, { "start": 1265.28, "end": 1274.2, "text": " ahead yet to come in language understanding with newer model" }, { "start": 1274.2, "end": 1278.8400000000001, "text": " architectures. Alright, that was it for me. Thanks." } ]
k_hUdZJNzkU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
The Dimpled Manifold Model of Adversarial Examples in Machine Learning (Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "adversarial examples", "goodfellow", "goodfellow adversarial attacks", "adversarial attacks on neural networks", "features not bugs", "madry", "dimpled manifold", "why do adversarial examples exist", "adversarial examples explanation", "adversarial attacks explanation", "computer vision", "decision boundary", "data manifold", "low dimensional manifold", "what are adversarial examples", "what is deep learning" ]
#adversarialexamples #dimpledmanifold #security Adversarial Examples have long been a fascinating topic for many Machine Learning researchers. How can a tiny perturbation cause the neural network to change its output by so much? While many explanations have been proposed over the years, they all appear to fall short. This paper attempts to comprehensively explain the existence of adversarial examples by proposing a view of the classification landscape, which they call the Dimpled Manifold Model, which says that any classifier will adjust its decision boundary to align with the low-dimensional data manifold, and only slightly bend around the data. This potentially explains many phenomena around adversarial examples. Warning: In this video, I disagree. Remember that I'm not an authority, but simply give my own opinions. OUTLINE: 0:00 - Intro & Overview 7:30 - The old mental image of Adversarial Examples 11:25 - The new Dimpled Manifold Hypothesis 22:55 - The Stretchy Feature Model 29:05 - Why do DNNs create Dimpled Manifolds? 38:30 - What can be explained with the new model? 1:00:40 - Experimental evidence for the Dimpled Manifold Model 1:10:25 - Is Goodfellow's claim debunked? 1:13:00 - Conclusion & Comments Paper: https://arxiv.org/abs/2106.10151 My replication code: https://gist.github.com/yk/de8d987c4eb6a39b6d9c08f0744b1f64 Goodfellow's Talk: https://youtu.be/CIfsB_EYsVI?t=4280 Abstract: The extreme fragility of deep neural networks when presented with tiny perturbations in their inputs was independently discovered by several research groups in 2013, but in spite of enormous effort these adversarial examples remained a baffling phenomenon with no clear explanation. In this paper we introduce a new conceptual framework (which we call the Dimpled Manifold Model) which provides a simple explanation for why adversarial examples exist, why their perturbations have such tiny norms, why these perturbations look like random noise, and why a network which was adversarially trained with incorrectly labeled images can still correctly classify test images. In the last part of the paper we describe the results of numerous experiments which strongly support this new model, and in particular our assertion that adversarial perturbations are roughly perpendicular to the low dimensional manifold which contains all the training examples. Abstract: Adi Shamir, Odelia Melamed, Oriel BenShmuel Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there! Today we're going to look at The Dimpled Manifold Model of Adversarial Examples in Machine Learning by Adi Shamir, Odelia Melamed and Oriel BenShmuel. This paper, on a high level, proposes a new way of looking at the phenomenon of adversarial examples in machine learning, specifically in deep learning, and they propose this model called the dimpled manifold model, essentially arguing that classifiers put their decision boundaries right next to the manifold of data, while only slightly curving them around the data, like this. Now, the data manifold being low dimensional, this results in a situation where you can cross the decision boundary really easily if you simply go perpendicular to the data manifold, which is also perpendicular to the decision boundary, and because it's just such a small dimple there, the decision boundary is pretty close. And that's how you end up with adversarial examples that are super easy to get. So it's not a new attack, a new defense, anything like this; it's simply a mental framework for explaining why adversarial examples exist, on a high level. They have some conceptual thought experiments, they have some explanations, and some real-world experiments. Now, I personally don't think that this is entirely... it's not necessarily incorrect, but I don't think that it is really useful to think in this way, and I'm gonna explain why. In general, my opinion of this is that it doesn't really add anything, and I think it explains less than the models we already had. Yeah, so that's my opinion, I'm gonna get to it. Specifically, also with the experiments they propose, I think that there is a big Occam's razor failure right there. But as I said, we're gonna get to all of this. I'm gonna go through the paper and I want you to make up your own mind, even though I'm going to try to bias you. So yeah, this is not a neutral channel, in case you haven't noticed. Alright, so if you like content, or if you dislike it, tell me in the comments, tell me what you think of the paper, whether it makes sense, whether it doesn't make sense, and so on. I'd be very interested to see what you have to say. Yeah, I read the comments, so please. They say: the extreme fragility of deep neural networks when presented with tiny perturbations... yeah, but okay, this starts out how every single adversarial examples paper always starts out, saying okay, deep neural networks are extremely fragile, there's this phenomenon of adversarial examples. Now, if you don't know what adversarial examples are, really briefly, essentially what this is, it's a phenomenon where you take an image, like the thing here on the left. The neural network thinks it's a plane with a very high probability, and you change it to this thing right here, which you as a human can't even tell is different; however, the neural network will think that this is now a bird with very high probability. And this is the change that you made. It's magnified for you to see; it kind of looks like random noise, but it's a very particular noise that makes the neural network think it's something different, and it's tiny in its norm. So you don't see a difference. Now, bird here is kind of close to plane, but you can change this into anything, literally anything you want. You can change this into a banana, or, I don't know, a dog, or any class you want, using these techniques. So it's not about being close; it's really kind of a separate phenomenon.
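(As a concrete illustration of how such a perturbation is typically produced, here is a minimal sketch of one standard textbook attack, the fast gradient sign method. This is a generic example, not necessarily the attack used for the plane/bird figure; model is a placeholder classifier assumed to be in eval mode.)

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.01):
    # x: (1, C, H, W) image batch with pixels in [0, 1]; label: (1,) class index.
    # Take the gradient of the loss with respect to the input pixels and
    # step a tiny amount (eps) in the direction that increases the loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # sign() caps every pixel change at eps, so the perturbed image is
    # visually indistinguishable from the original.
    # (For a targeted attack, descend the loss toward the target label instead.)
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```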
So that's adversarial examples, and many frameworks have been proposed in order to explain these adversarial examples, and they make a nice overview right here. Many have been proposed over the last eight years: that DNNs are too nonlinear, that they're too linear, that they were trained with an insufficient number of training examples, that these are just rare cases where they err, that images contain robust and non-robust features, etc. They say: however, none of these vague qualitative ideas seem to provide a simple intuitive explanation for the existence and bizarre properties of adversarial examples. So that is pretty harsh criticism. Specifically, the first ones are kind of, yeah... but specifically this last one, that images contain robust and non-robust features, which is sort of the leading hypothesis right now of why adversarial examples exist and what they are; and then here, saying none of these vague qualitative ideas seem to provide a simple intuitive explanation for the existence. Like, let's see whether or not they're gonna do better, okay. So also in the abstract they go on and say, okay, they introduce this new conceptual framework, which they call the dimpled manifold model, which provides a simple explanation for why adversarial examples exist, why their perturbations have such tiny norms, why these perturbations look like random noise, and why a network which was adversarially trained with incorrectly labeled images can still correctly classify test images. Now, this last part, if you're not familiar with the literature, it might come to you a bit random: this "why a network which was adversarially trained with incorrectly labeled images can still correctly classify test images". This is a famous experiment from the group of Aleksander Madry, which is also where this hypothesis, the robust and non-robust features one, comes from, and any attempt at explaining adversarial examples after that paper has to explain why that experiment makes sense, because it's kind of a non-intuitive experiment. We're gonna get to that as well, but just so you know, that's why they write it in the abstract. Now, I personally think this model here doesn't have a good explanation for why that works; they're sort of hand-wavy trying, in any case. So they say: in the last part of the paper we describe the results of numerous experiments which strongly support this new model, and in particular our assertion that adversarial perturbations are roughly perpendicular to the low dimensional manifold which contains all the training examples. Okay, so remember this: the experiments strongly support what? In particular, the assertion that adversarial perturbations are roughly perpendicular to the low dimensional manifold which contains all the training examples. Now remember this, that the experiments are supposed to support this particular claim, because that is also going to be important down the road. Okay, so let's get into the dimpled manifold model. What is it? What do these authors propose? I'm gonna try as best as I can to say what the authors are saying in the paper. So they claim that there is an old mental image of adversarial examples, and the old mental image is here. They say: we think the old mental image is based on the highly misleading 2D image on the left side of figure one, and that's this thing right here.
So the old mental image is that there is a data space, right? This here, if you think of images as data points, this would be the pixel space. So this is images with two pixels, right now, in this conceptual framework, but you have to sort of think yourself into higher dimensions. So they claim the old mental image is the following: you have the data distributed somehow in this space, the data being the set of all natural images, or images you consider, which is kind of these subgroups right here. There are a bunch of images right there, and there, and also there and there. So these are images of two different classes: the red class and the blue class. Now they're distributed like this, and what is a classifier supposed to do? A classifier is supposed to put a decision boundary between them, and that's what they draw in here. So this would be sort of a reasonable decision boundary between the two classes, right? So now, what do you do if you want to create an adversarial example? Well, necessarily you have to start at an image of a class, this one maybe, and you have to cross the decision boundary, right? You want to fool the classifier; ergo, necessarily, by definition, you have to cross the decision boundary. So what do you do? The easiest way to do this is to go straight towards the decision boundary, which is approximately in this direction right here, and then once you cross the decision boundary, you are done. You're on the other side, you have created an adversarial example, provided, of course, that the image still kind of looks like the original image. And they say this has many, many problems. Here they say: in this mental image, adversarial examples are created by moving the given images along the green arrows towards some kind of centroid of the nearest training images with the opposite label, by which they mean this thing right here. So we would move the images towards the other class, towards images of the other class. And they say, as stated for example by Ian Goodfellow in his lecture, and at this time I'm gonna cut this in right here: "I've said that the same perturbation can fool many different models, or the same perturbation can be applied to many different clean examples. I've also said that the subspace of adversarial perturbations is only about 50 dimensional, even if the input dimension is 3,000 dimensional. So how is it that these subspaces intersect? The reason is that the choice of the subspace directions is not completely random. It's generally going to be something like pointing from one class centroid to another class centroid. And if you look at that vector and visualize it as an image, it might not be meaningful to a human, just because humans aren't very good at imagining what class centroids look like, and we're really bad at imagining differences between centroids. But there is more or less this systematic effect that causes different models to learn similar linear functions, just because they're trying to solve the same task." Okay, so it really appears like Goodfellow says this thing right here. However, they now claim this doesn't make sense. So they claim that you should think about adversarial examples in a different way, and this is their dimpled manifold hypothesis. So what is their dimpled manifold hypothesis? They say: what you have to do is think about the data manifold in the higher dimensional input space.
So what is their dimpled manifold hypothesis? They say you have to think about the data manifold in the higher-dimensional input space. Instead of the 2D picture, they consider the 3D landscape: this would be the pixel space, and now we consider three-pixel images, with the data embedded in a low-dimensional manifold in this higher space. Because if you think about all possible combinations of pixels, not all of them are natural images; in fact, only very few of the possible combinations of pixels are natural images, images that make sense to you as a human, or images that you could potentially generate by going out with a camera. So the data you're considering lives on a very low-dimensional manifold in this big space, and you have to explicitly think about that. The data manifold here is represented by this sheet in the middle, and on this manifold you have your different classes of data: the blue are one class and the red are the other class. What this paper claims is the following. In the old model, you would have thought something like this happened: you put your decision boundary sort of in the middle between the two classes, crossing the manifold right here, and then, to create an adversarial example, you would start here, go straight towards the decision boundary, cross it, and on the other side you'd have an adversarial example. In this new model, they claim the decision boundary actually doesn't look like that. Instead, the decision boundary is very much aligned with the manifold of data, as you can see right here: this mesh that they show is the decision boundary, and their claim is that it usually just aligns with the manifold of data. However, around the actual data, around the training samples, the classifier creates these dimples: tiny perturbations in the decision boundary, such that the data is on the correct side of it. So the blue points here are on one side of the decision boundary and the red points are on the other side, and for the rest, the decision boundary just aligns with the data manifold. Now, if you want to make an adversarial example, again you start from an image and walk straight towards the decision boundary, but now you don't have to travel along the manifold: you can simply go perpendicular to the data manifold, and you will cross the decision boundary very quickly, because the dimple you're in is shallow. They give a reason why the dimples are shallow, claiming it results from how these models are trained, and that this explains some things. So the difference is this: in both pictures, to make an adversarial example, we have to go towards the decision boundary. In the old mental image, going perpendicular to the decision boundary means walking on the data manifold, because we walk from this group of data towards that group of data; you can see right here that we're walking on the data manifold when we walk perpendicular to the decision boundary.
Whereas in the new model, walking perpendicular to the decision boundary coincides with also walking perpendicular to the data manifold. That is the difference they claim. They say: we call this conceptual framework the dimpled manifold model, and note that it makes three testable claims about the kinds of decision boundaries created by trained deep neural networks. First, natural images are located on a K-dimensional manifold, where K is much smaller than N. Second, deep neural network decision boundaries pass very close to this image manifold. And third, the gradient of the classification's confidence level has a large norm and points roughly perpendicular to the image manifold. So these are the claims that are going to be tested and supported by experiments, I guess. I hope I've represented accurately what the authors claim right here; I hope they would agree. So now, where is the problem with this? In my opinion, the problem isn't necessarily with what they claim. I don't necessarily disagree with this mental image, or with these claims. That the data lies on a low-dimensional manifold is a commonly agreed-upon assumption; as I said, not all possible pixel combinations make good natural images, and the fact that the natural ones form a manifold is a commonly held assumption. That decision boundaries pass very close to the image manifold: well, the fact that we can generate adversarial examples at all already means that decision boundaries pass very close to the image manifold, so this also is not news; it has been in everybody's conceptual framework for the last five years at least. And then third, the gradient of the classification's confidence level has a large norm and points roughly perpendicular to the image manifold. This claim is not trivial; it is not something that was talked about much. However, I'm going to argue that their model is by no means the only model under which this happens, and specifically, when we look at the experiments, I'm going to show you that they don't necessarily support their claims. They don't disprove them, but they also don't necessarily support them, just because of what they show.
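Claim three is the one that can actually be probed numerically. A rough way to do so, and this is my own sketch and not the paper's procedure, is to approximate the tangent space of the image manifold by the top principal components of the training data and ask how much of the input gradient lies inside that subspace. PCA is only a crude, linear stand-in for the manifold, and `model`, `X_train`, `x`, `y` are assumed to be given.

```python
import torch
import torch.nn.functional as F

def gradient_on_manifold_fraction(model, X_train, x, y, k=50):
    # Tangent proxy: top-k principal directions of the (flattened) training data.
    Xc = X_train.flatten(1) - X_train.flatten(1).mean(0)
    _, _, Vt = torch.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]                          # (k, n), orthonormal rows
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    g = x.grad.flatten()
    g_on = basis.T @ (basis @ g)            # projection onto the PCA subspace
    # Near 0 would mean the gradient is roughly perpendicular to the subspace.
    return (g_on.norm() / g.norm()).item()
```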
The other problem I have with this is the thing they build up as, ooh, this is the old mental image, this is how people thought about adversarial examples until now. Look, I just disagree; it's a bit of a straw man. No one in the literature on adversarial examples thought, or thinks, that this is an appropriate model for what is happening. We know that these distances are very small, the distance until you cross the decision boundary, and we also know that if this picture were true, you should be able to go to the decision boundary and then keep going the same distance again, and at some point you would actually arrive at a sample of a different class. You could transform images into the other class simply by going in the adversarial direction, which is precisely what we don't see: the image still largely looks the same, and what gets added looks like a bit of noise. So no one was holding this mental image, because it is clearly not appropriate for adversarial examples. As for saying, look, think of this in higher dimensions (and I realize I've drawn this decision boundary, but this is what they describe in the text), I don't see that this is the only correct way of thinking about it, because there are many different kinds of decision boundaries that are compatible with the picture right here. By the way, the decision boundary I drew doesn't even separate all the classes correctly. What I'm saying is that if you consider a decision boundary that, for example, also crosses here, but is sort of flat, it is still a linear decision boundary: this part is above, the other part is below. If you project it down, it looks the same in 2D, and in 3D it also explains why decision boundaries are very close to the data samples. It's quite different, though, from the dimpled manifold hypothesis. In my estimation, what's happening is much more that you have a bunch of these linear decision boundaries flying around, partitioning up the space, and so on. This might result in a similar situation, but it makes quite different predictions about what the classifier does: in one picture it's a flat manifold dimpling around the data, whereas in the other the classes separate the space into many regions, always trying to distinguish one class from the other. It might end up looking a bit the same, but I don't think they give a fair shot to what we know so far; this is not a model that people hold in general, especially the one on the left. Let me make an attempt at the mental model that I think people actually hold. Maybe it's just me, but I have a feeling it's more like this. Since they give their model a name, I'll give mine one too: call it the stretchy feature model. Let's contrast the two. I have two features, and this is a coordinate system in feature space, by which I mean the last representation before the classification layer. In feature space, the two classes look like this: there is the red class and there is the blue class, and there are two features, and for some reason the network must classify along these two features, maybe because there are other classes, other data points, so we can't put a single decision boundary like this between the two. So there are two features, feature one and feature two, and both are actually pretty good features for keeping these two data points apart. Now, there are empty spaces, as you can see, which we're going to get to in a second, but you can use both features, and ideally a classifier would actually use both: it would say, if feature one is high, it's probably the red class; if feature two is low, it's probably the red class; and the combination makes it even more likely to be the red class. However, since we are in a deep neural network, which transforms the data along the way, the same situation in input space, in the actual pixel space, looks different.
And this is due not necessarily to the nonlinearity of things; it is actually due to the linear transformations. The problem of adversarial examples, at least in my estimation, appears to happen in the linear layers. Think, for example, of eigenvectors of matrices: the largest eigenvalues determine how far you can go in a particular direction with a standard-sized input delta, and the same happens here. By the way, this is why spectral norm regularization tends to work, at least a little bit, against adversarial examples. What I mean is this: if you look at the scale of these features, they are like one, two, three, four, five for each of them. If you look in input space, some of the features are going to have roughly the same scale there; these are features where you have to change the input a lot in order to change the feature a lot. What do I mean by this? Something like the shape of an image. If you think of a cat, the general shape of a cat: it has two pointy ears, it has a head, and so on. That's the general shape of a cat, and that is the left-right feature here, the shape, and I have to change the input a lot in order to affect that feature, so it's roughly on the same scale in input space as in feature space. However, the other feature has a very different scale in input space than in feature space, and this might be something like the fur structure of a cat. I can change the pixels a tiny bit and change the fur structure by a lot. I can change the fur structure of a cat into the fur structure of a dog by changing the pixels just a little; it will be different, and now it will be the fur structure of a dog. So how does this look in input space? One feature dimension looks roughly the same, and the other feature direction is very, very stretched. Remember, both of these features are good features; both can be used to classify the images. Changing the shape requires a lot of pixels; changing the fur structure requires just a few. Now, if I take some image and draw an L2 ball around it, which is what we usually do when we create an adversarial example, allowing only small perturbations, you can see that in one direction you don't get very far in feature space, but if you go the same input-space distance in the other direction, in feature space you walk a lot, way far. And this is just by definition: there are going to be many features you can use to classify images, and they are going to be good features, not errors or aberrations (the fur structure is a good feature to classify a cat), and some of them will be of large magnitude and some of small magnitude. This is just what happens. So I call this the stretchy feature model, and it is sort of a direct result of the paper they cite from Aleksander Madry's group, which we're going to get to in a second. Keep those two models in mind, and we're going to see which one explains the phenomena better and which one doesn't.
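Here is a toy illustration of that stretching intuition: a linear map with very different singular values, so that equal-norm input perturbations move the feature representation by wildly different amounts depending on direction. All the numbers are made up for illustration.

```python
import torch

# Feature 1 plays the role of "shape", feature 2 the role of "fur".
W = torch.diag(torch.tensor([1.0, 100.0]))

delta_shape = torch.tensor([0.01, 0.0])   # small step along the "shape" direction
delta_fur   = torch.tensor([0.0, 0.01])   # equally small step along "fur"

print((W @ delta_shape).norm())  # 0.01 -> barely moves in feature space
print((W @ delta_fur).norm())    # 1.0  -> moves 100x further in feature space
# Penalizing the largest singular value of W (spectral norm regularization)
# shrinks exactly this effect, which is consistent with it helping a bit
# against adversarial examples.
```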
Next, the paper asks why deep neural networks are likely to create dimpled manifolds as decision boundaries; they now have to explain why this even happens. Consider the data manifold in green right here. We have just one-dimensional data, and you can see it's not linearly separable, so we need a curved decision boundary around it. Why would this result in a dimpled manifold? They say: look, when you start training your deep neural network, maybe your decision boundary is somewhere like here, not very effective. Let's say you want the red data above and the blue data below the decision boundary. Right now the blue points are fine; the blue don't complain. You do get a gradient from the red examples pushing the entire decision boundary down, and there's no resistance, since the blue ones are fine. So you push down; this is your next decision boundary. Same situation: you push the entire decision boundary down again. Now you're too far, so you push the entire decision boundary up, because now the red ones are fine and the blue ones complain, and this results in the boundary sitting right on top of the data for once. Then both gradients kick in: the red data pushes the decision boundary down, the blue data pushes it up, and this results in these dimples around the data, with the decision boundary otherwise coinciding with the data manifold. That is their explanation for why this happens; I hope it makes a little bit of sense. Below is a quick way you could poke at this story yourself.
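This is a tiny experiment one could run, purely illustrative and with arbitrary numbers: train an MLP on 2D points that all lie on the 1D line y = 0, then check how far you must move perpendicular to that "manifold" before the prediction flips. Whether and where it flips depends on the training run.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
pos = torch.rand(200, 1) * 4 - 2                 # positions along the manifold
X = torch.cat([pos, torch.zeros_like(pos)], 1)   # all data lies on the line y = 0
y = (torch.sin(3 * pos.squeeze(1)) > 0).long()   # not linearly separable along x

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    F.cross_entropy(net(X), y).backward()
    opt.step()

# How far must we step off the "manifold" before the label flips?
xi, yi = X[0], y[0].item()
for s in torch.linspace(0.0, 1.0, 200):
    if net((xi + torch.tensor([0.0, s.item()])).unsqueeze(0)).argmax().item() != yi:
        print("prediction flips after a perpendicular step of", s.item())
        break
```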
Contrast their dimpling story with the mental model of a bunch of linear half-spaces, which would result in decision boundaries going through here, and here, and here, and which would also explain what we see. But this is their claim of why the decision boundary looks the way it does, and to me it's a bit weird. Why should the decision boundary align with the data manifold in between the data? Maybe it doesn't; maybe they don't claim that, and I should not complain. But in between the data, why would it do that? They give some justification: the decision boundary should be rather simple; it doesn't like to curve a lot. They say the new model can help to understand why the training phase of a given network typically converges to the same globally optimal placement of the decision boundary regardless of its random initialization, and they're going to make a claim about why this happens. To demonstrate the point, consider the old model, in which you sprinkle a large number of classes at random locations in the two-dimensional square, as depicted in figure three. So they're talking about this figure right here, and they say: look, in the old model, if you want to pass simple decision boundaries through this, you have to pass them like some of the gray lines we see here, and they are not going to be very good. The goal is to pass a decision boundary of bounded complexity (this bounded complexity comes up again and again) which will best separate the red and blue clusters. They say there is a large number of ways to do this, like the green lines, and most of them will be about equally bad; in particular, any decision to pass on one side or the other of some cluster can make it harder to accommodate other clusters elsewhere along the line. Consequently, there will likely be many local minima of roughly the same quality. In the dimpled manifold model, however, there is likely to be a single globally best decision boundary shape, since there is no conflict between our ability to go above one cluster and below a different cluster when they do not intersect. Their idea is that rather than placing decision boundaries like this, you look at the problem in three dimensions and just put a sheet over the top of it, going above the blue clusters and below the red clusters in the third dimension, rather than using these gray lines, which are not very optimal. Now, I'm not really sure what to make of this. First of all, they say training typically converges to the same globally optimal placement of the decision boundary regardless of random initialization. We know that this is not true. I've specifically made videos on research by Stanislav Fort, who shows that if you randomly initialize a network differently, you will reach the same accuracy, but the network will make mistakes on different samples of the test set, and there is actually structure to how these decision boundaries differ depending on the random initialization, which would in fact support what they call the old view. Second of all, I have no trouble making a decision boundary here that separates red and blue: I can go something like this, come around here, and so on; I have no trouble separating red and blue. So this notion of bounded complexity does a lot of work here, them insisting the decision boundary should be simple and somehow straight. But I disagree that their decision boundaries are so simple: if you have to curve around every data sample and otherwise follow the image manifold, that seems like a rather complex decision boundary, honestly, because it is essentially a generative model of the data if you follow the data manifold. So I disagree that theirs is so much simpler just because it doesn't bend much where the alternative bends a lot; they also say that you don't want to bend decision boundaries too much because that hardens training. And third of all, why do they give their model the benefit of the third dimension? They claim the old model doesn't work because, if you have to place the decision boundary between the data points, you end up with a bad decision boundary. However, in order for their model to work, they need the third dimension; they need to pass under and over the data in the third dimension. Whereas if you actually go into the third dimension, every single lecture on kernelized SVMs and the like shows you that in higher dimensions these things become separable: with RBF kernels, these would become separable clusters, and so on.
That is basically the first lecture on going into higher dimensions in order to linearly classify things. So their method cannot explain anything more than any other method if you give it this third dimension, and the fact that they don't give the old model the third dimension, but give their own model the third dimension in order to explain things, is a little bit off. I don't think this is any argument for their model; it simply shows that if you have a lower-dimensional manifold of data and you classify it in a higher dimension, there are ways to do that. And if you have ReLU networks and linear classifiers, the result looks more chunky: the space is divided into these ReLU cells in which you classify the data. All of this is compatible with what they're saying, not just their dimpled manifold hypothesis. So I don't see the big explanation here. Next, they list what they can explain with their model: the mysteries of adversarial examples, five of them. First, the mixture mystery: how can it be that a tiny distance away from any cat image there is also an image of guacamole, and vice versa? And if these classes are intertwined in such a fractal way, how can a neural network correctly distinguish between them? Their answer is that all the real cat and guacamole images reside on the tiny image manifold, but below the real cat images there is a whole half-space of pseudo-guacamole images, which are not natural images of guacamole, and above the guacamole images there is a whole half-space of pseudo-cat images. So their idea is: you have this one-dimensional data manifold, here are the cats, here the guacamoles, and if the dimpled decision boundary curves around the data right here, then all of this region below is technically guacamole, so if you go down from the cat, you reach a non-natural guacamole image. The explanation is that the decision boundary lines up with the data manifold except around the data, where it creates a small dimple, and therefore you can cross the dimple into the other region. But this is the same effect as in the other model: I can draw a boundary in my picture that produces the same effect. However, my model explains much more. In a multi-class setting, there is no reason in their picture why this region right here should be guacamole; it could be any other class. If the idea is that the decision boundary follows the data manifold and just dimples around the data to classify it correctly, the only constraint is that these points are cats; it says nothing about why, on the other side, there is guacamole instead of anything else. And that does not coincide with what we know about adversarial examples, because this region is a consistent region. But first of all, my bigger problem is: why does the dimpled manifold hypothesis even generalize?
If the decision boundary follows the data manifold largely, except around the training data, why does it generalize well to test data? You would have to argue that the test samples are quite close to the training samples, because otherwise the classifier would get very confused on test data lying somewhere else on the manifold. But we know that neural networks generally classify data on the manifold of natural images quite well; they generalize quite well. This model, however, is sort of an anti-generalization model. Okay, maybe you can claim that the test images are close enough to the training images for this to work. But, for example, we know that this is a consistent region. What do I mean by that? We know that we can make universal adversarial perturbations: we can find directions such that, no matter which image or which class we start from, the result is always guacamole. This is not explained by the dimpled manifold; there is no reason why these regions on the other side should have a consistent label in a multi-class setting. We also know that adversarial perturbations are transferable: we can craft an adversarial perturbation on one classifier and apply the same perturbation on a different classifier, even one trained on a different data set, and it will most likely still push towards the same class. There is nothing in the dimpled manifold hypothesis that explains these phenomena. In the stretchy feature model, this is really easy. If I create an adversarial example, I go across the decision boundary right here. What do I do? I change the fur without changing the shape. I change the fur by so much that there is now a conflict in feature space: the image has the fur of a dog but still the shape of a cat. But neural networks in the final layer are linear, which means they just weigh the different features, so I pump that fur feature to be so doggish that it overpowers the shape feature of the cat. Neural networks are biased towards texture over shape anyway, so I just hammer that fur, and now the network thinks it's a dog. And a different neural network trained on the same data will also think it's a dog, because it will also have learned to classify images by shape and fur; therefore it will be vulnerable to the same attack. This is super easy to explain in the stretchy feature model; there is no reason why it should happen in the dimpled manifold model, unless you amend it with some more hand-wavy things.
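For reference, here is a sketch of the universal adversarial perturbations mentioned above, in the spirit of Moosavi-Dezfooli et al.: a single perturbation `v` that pushes many different images toward a wrong prediction. This is not the exact original algorithm, just the accumulate-and-project idea; `model` and `loader` (image/label batches) are assumed given.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=0.05, lr=0.005, epochs=1):
    v = None
    for _ in range(epochs):
        for x, y in loader:
            if v is None:
                v = torch.zeros_like(x[0])
            xv = (x + v).clamp(0, 1).requires_grad_(True)
            F.cross_entropy(model(xv), y).backward()
            # Ascend the loss with one shared perturbation, then project it
            # back into the epsilon ball so it stays imperceptible.
            v = (v + lr * xv.grad.mean(0).sign()).clamp(-eps, eps).detach()
    return v
```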
Next, the direction mystery: when we use an adversarial attack to modify a cat into guacamole, why doesn't the perturbation look green and mushy? They say: in the old model, you would have to walk along the image manifold from here towards the guacamole images, and that should mean your image gradually changes to look like guacamole; in the dimpled manifold model, you go off the manifold, perpendicular to it, and that explains why the adversarial perturbation looks a bit like random noise. Again, no one believed the old model's version of this. In fact, we have a pretty good explanation for why the image still looks the same: humans are much more receptive to the shape, whereas neural networks also consider the fur; they weigh fur and shape in different proportions than humans do. So we already knew this, and the existing explanation is in fact a better one. The uniformity mystery: why is the decision boundary ever-present? They claim that, because of the dimples, even the cat image farthest from any other class has a close crossing to the decision boundary; there are no cat images that are somehow farther from it. But I think this is just a property of a high-dimensional classifier; our 2D view of the world betrays us here, especially if a tiny perturbation in input space can take us really far in feature space. So this is not a mystery either. The vanishing gap mystery is about adversarial training, which we're going to skip here. And then there is the accuracy-robustness trade-off mystery. This concerns adversarially training a model, which goes like this: I have a data set of cats and dogs, I train my neural network on it, and it's vulnerable. What can I do? I can create adversarial images. This is a cat; I can create an adversarial image by turning it into a "dog", because I changed the fur structure a little bit. Now I add this adversarial image to the data set, but I tell the network it is a cat too. So the original is a cat and the adversarial one is a cat. If I do this, the neural network becomes robust to adversarial examples, to a degree, not fully, but to a degree. This is the best method we have so far for defending against adversarial examples, called adversarial training. You train the network to incorporate the adversarialness into its decision-making process, and this usually results in a degradation of the generalization performance of the network: as it becomes more robust, it becomes less accurate on real data. You gain accuracy on adversarial data and lose accuracy on real data, which makes sense intuitively, but it is a strong effect, and it is not the same as simply teaching the model yet another class; it is a genuine trade-off. They try to explain this: when we train the network, we keep the images stationary and move the decision boundary by creating dimples; when we create adversarial examples, we keep the decision boundary stationary and move the images to the other side. By allowing a large perpendicular derivative, we make the training easier, since we do not have to sharply bend the decision boundary around the training examples. So when you train normally, without adversarial examples, they say there is a large perpendicular derivative, meaning the data samples push these dimples out, with the perpendicularity relative to the image manifold, and that makes training easy, because you don't have to bend the decision boundary a lot; you stay close to the manifold and just create these dimples. However, such a large derivative also creates very close adversarial examples: that is their claim for why the decision boundary ends up so close. And any attempt to robustify a network by limiting all its directional derivatives will make the network harder to train and thus less accurate.
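As an aside, here is a hedged sketch of adversarial training in the style of Madry et al.: at every step, first build adversarial versions of the batch with a few PGD steps, then train on those. `model`, `opt`, and the batch `(x, y)` are assumed to exist; hyperparameters are typical but arbitrary.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=0.03, alpha=0.01, steps=7):
    # Multi-step attack, projected back into the eps ball around x each step.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, opt, x, y):
    x_adv = pgd(model, x, y)                      # inner maximization
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()   # outer minimization
    opt.step()
```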
Back to their explanation of the trade-off. I'm not super sure how to interpret it, so I might be getting this wrong, but the idea is: if you create an adversarial example, you have this data point and a nearby adversarial point of the same class, so the decision boundary has to bend harder, which makes it harder to train. At some point the network effectively says: I don't want to bend that much, I'd rather make a mistake here and just bend around both of these data points, and now you have a wrong classification. That's their explanation of why this happens, which I find a bit hand-wavy: you have to argue about ease of training, how much the decision boundary bends, and so on. In my model, it's super easy. What happens if I create cats with cat fur and "cats" with dog fur, and tell the network both are cats? Essentially I tell the network: there are two features here, the fur and the shape, and you should just disregard the fur, don't use it as a feature, because it's now useless; I have cats with cat fur and cats with dog fur, so the network can't use fur to classify anymore. And that explains why it gets less accurate: I take away one useful feature. The network now has fewer useful features, and that's why it gets worse. It's a pretty simple explanation in the stretchy feature model; it takes a lot of work to make it happen in the dimpled manifold model. Lastly, they try to explain an interesting finding from the paper I have cited throughout. It is kind of the same experiment as before, where we create adversarial examples and add them to the training set, except for two things. First, we don't keep the originals, so the new data set does not contain the original images; it contains only the adversarial examples. Second, each adversarial image does not carry the quote-unquote correct label from which we created it, but the adversarial label, the wrong label. So we tell the network: this is a dog, please learn that this is a dog, while it is really a cat with dog fur, and the old training images are nowhere in the data set; we just have a data set of these wrongly labeled images. Now we train a network on this to classify cats and dogs, and once it is trained, we take a sample of the original data set and classify it, and it gives us the correct classification: it recognizes that this is a cat, even though we told it during training that images like it are dogs. How does it do this? By looking at the fur. We've doubled down on the fur; we really made that fur feature super strong in these adversarial examples, so the network looks at the cat fur. Even though none of the training "cats" have a cat shape, we supercharged that fur feature. Again, in my model this is not a problem: essentially we've created two data classes, one up here and one down here, that have the fur feature supercharged, and the network is mainly going to look at that fur structure, and that fur structure is a useful feature.
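For clarity, here is a sketch of how one might construct that relabeled data set, as I understand the experiment; the targeted attack itself (for example a targeted PGD toward class `t`) is assumed to exist as `targeted_attack`, and `model` and `loader` are given.

```python
import torch

def build_relabeled_dataset(model, loader, targeted_attack, num_classes=10):
    xs, ys = [], []
    for x, y in loader:
        # Pick a wrong target class for every image.
        t = (y + torch.randint(1, num_classes, y.shape)) % num_classes
        x_adv = targeted_attack(model, x, t)  # push each image toward class t
        xs.append(x_adv)
        ys.append(t)                          # keep the "wrong" label on purpose
    # The originals are deliberately absent; only mislabeled adversarial
    # images remain, yet a fresh model trained on this generalizes to clean data.
    return torch.cat(xs), torch.cat(ys)
```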
So this is what the "features, not bugs" paper, Adversarial Examples Are Not Bugs, They Are Features, demonstrated with this experiment: the notion that adversarial examples result from useful, generalizing features in the data set, features that are simply, by definition, too small for humans to see, which they call non-robust features. How do the current authors explain this? They say the original authors tried to explain this highly surprising result by distinguishing between robust and non-robust features in any given image, where some of them are preserved by the adversarial change and some are not; however, it is not clear what makes some of the features more robust than others. But that is just definition: if you have features and order them by how much you must change the pixels to change the feature, some features will be larger than others, and some will fall below the cutoff by which you define your adversarial budget. The definition itself makes some of them more robust; it is perfectly clear. They continue: our new model provides a very simple alternative explanation, which does not necessarily contradict the original one, summarized in figure four; to simplify the description, we use a 2D vertical cut through the input space and consider only the decision boundary that separates cats from anything else. So they have this example: a decision boundary that distinguishes cats, C, from non-cats; the green is the image manifold and the gray is the decision boundary. Now we create adversarial examples, in frame two right here: we make the cats into non-cats, and we make the badgers into cats (bats aren't very popular lately), and we make the cats into, whatever, the D class, ducks. We relabel those, which gives us a new data manifold, this one right here, with new labels. And now they claim: the resulting decision boundary in figure four, this gray one, is very similar to the decision boundary in the first frame, and therefore we shouldn't be surprised that the boundary resulting from training on this perturbed data is the same as the original one. However: why? They have two notions. Notion one is that the decision boundary follows the data manifold closely, except that it bends around the data a little, and you can see right here that this decision boundary kind of follows the data, yet just happens to be on the correct side of the data points at any given moment. However, they also claim, in other parts of the paper, that bending the decision boundary a lot is not good; you'd rather have a simple decision boundary. So to me there is no reason why the decision boundary couldn't just look like this instead: it would correctly classify this new data set; however, it would not correctly classify, say, the C that was right here; those data points it would misclassify.
You see, until now they've always had this data manifold be super duper straight and smooth, and that's how they could claim that following the data manifold and not bending too much are not in conflict with each other. But now they are in conflict, and you have to give up one or the other, and only under one of them does this experiment still make sense; under the other it doesn't. If you give up the "bending too much is bad" part, you lose a bunch of the explanations from before. So in my mind it's one or the other, and there is still no good reason why the decision boundary should align so closely with the data points: if there is nothing here, if this direction really is perpendicular to the data manifold, why would the decision boundary hug the manifold so closely at that point? I don't know. They then ask why DNNs are so sensitive and humans so insensitive to adversarial perturbations. Essentially, their argument is that humans project the input data onto the image manifold, which is a contested claim; I don't think it's widely accepted. It's certainly possible, but I'm not sure that humans have an internal manifold of natural images and project onto it every time they analyze an image. And also, how do you project? Both of these features are useful. If you project an adversarial example, why would you project it onto the shape dimension and not onto the fur dimension? There is no explanation here. We know that humans are more receptive to shapes and so on, but just "projecting" won't get you there. Now they go into experiments, and I want to highlight one particular experiment. They have synthetic experiments and ImageNet experiments; remember, they said the experiments would strongly support the perpendicularity assertion. What they want to settle is this: you have the data manifold here, you have a data point, and you make an adversarial example; do adversarial examples go along the image manifold, or do they go perpendicular to it? Their claim, again, is that the former would support the old view of adversarial examples, and the latter would support the dimpled manifold view, because then the decision boundary follows the data manifold, curving around the data, with the other data point lying just below it. So they are going to try to show you that if you force an adversarial example to stay on the manifold, you have to walk much, much longer until you find one than if you go off the manifold. They are also going to show that if you are not constrained, if the adversarial example can go anywhere it wants, the result is very similar to forcing it off the manifold, and that gives a bit of evidence that if two things behave the same, they're probably the same. So they run three adversarial attacks. First, a regular one: make an adversarial attack and measure how far you have to go to cross the decision boundary.
Second, the same thing, but with the attack forced to stay on the manifold of natural images, and measure that. And lastly, the same thing, but forced off the data manifold. Then they measure how large these adversarial perturbations are, their norms, and of course they hope to find that the first and third have similar norms, both way smaller than the one constrained to the data manifold, giving evidence that if you go perpendicular to the data manifold, you don't have to go very far, and that is what adversarial attacks do. First of all, how do they force the adversarial attack onto the manifold? They use an autoencoder. An autoencoder is a neural network with a bottleneck layer, trained to reconstruct its input: you try to make input and output equal, but the middle holds a very low-dimensional representation. So where the input is N-dimensional, the bottleneck is K-dimensional, with K much smaller than N. If you can reconstruct the images correctly, that means you have captured the data in these few dimensions. So they train an autoencoder, take that low-dimensional representation, linearize around it, and that gives them a way to project onto the image manifold, by only moving within this low-dimensional space, or always projecting onto it first. This is a bit troublesome, because how you train the autoencoder is very relevant to what this image manifold looks like: if you train with an L2 reconstruction loss, you already make claims about which features are important. But let's disregard that and say they have an accurate way of projecting onto the manifold of natural data. Here is what they find on ImageNet. Unconstrained PGD: the norm is some number, say 0.14. Off-manifold PGD, where they deliberately project off the manifold (they project onto the manifold, subtract that, and keep only the orthogonal part): 0.152, slightly larger but essentially the same size. On-manifold PGD, however, is a way bigger number, like six times bigger. So their claim is: you have to go up to six times farther on the manifold than off it, and that gives credence to their claims. Now, here is what I've done. They describe their experiments, including which library they used, AdverTorch, so I used AdverTorch too; they use L2 PGD, and I used that too; and they state how large the low-dimensional representation is, the K, and how large N is, so I was able to reproduce the experiment. You can see right here: this is the panda image from ImageNet. They use an ImageNet classifier, and the attack is greedy: it stops as soon as it crosses the decision boundary, and then you measure the norm. You can see right here, this is the perturbation; the image is now classified as a soccer ball.
And here is the size: 0.7772, the norm of the original adversarial perturbation. What I now do is project onto a manifold, but not onto the image manifold. What I do, where you see "project onto k", is simply project onto any K-dimensional subspace. I know what K is: K is 3,500, a very small number compared to the input dimension. And what gets projected is the gradient, the gradient of the adversarial attack that you use to update your image; that is what they project, and they have the algorithm clearly laid out. So I simply take a random set of pixel coordinates in the gradient and declare the first K of them "the manifold" and the rest "not the manifold". This has nothing to do with the image manifold; it is simply a random K-dimensional coordinate subspace of the pixel space. When I project onto K, I take all the other coordinates of the gradient and set them to zero; that is projecting onto a K-dimensional subspace. After that you normalize the gradient and proceed as usual; the projection is applied before the gradient is normalized, so there is no issue with the step size. And I do the same thing for projecting off the manifold, where I set the K chosen coordinates to zero instead. So, what happens if I project onto this "manifold"? Wow: before it was 0.77, and now it's 6.5, about eight times larger. And if I project off the manifold? 0.7773 instead of 0.7772. So what they're seeing right here, and okay, maybe I've done it wrong and completely misunderstand what's going on, is simply an effect of projecting onto any lower-dimensional space, yet they claim it supports their hypothesis, when clearly I have no clue what the data manifold is; I've just projected onto a random subspace and got the same results. They have other experiments with other types of perturbations and so on, but this is one I could try quickly. Again, maybe I've done it wrong, but to me Occam's razor looms large here: there can be many hypotheses that coincide with the results you're getting and with the phenomena, and it's easy to believe the evidence favors your hypothesis when other explanations are available.
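Here is a sketch of the reproduction described above: instead of projecting the PGD gradient onto an autoencoder's image manifold, project it onto a random set of K pixel coordinates, or onto its complement. The surrounding details (step size, stopping at the boundary) are as in a greedy L2 PGD and are omitted here.

```python
import torch

def project_grad(grad, idx, onto=True):
    # Keep only the k chosen pixel coordinates ("on-manifold" variant)
    # or zero exactly those coordinates ("off-manifold" variant).
    flat = grad.flatten().clone()
    if onto:
        kept = torch.zeros_like(flat)
        kept[idx] = flat[idx]
    else:
        kept = flat
        kept[idx] = 0
    return kept.view_as(grad)

n = 3 * 224 * 224                   # pixel dimension of an ImageNet input
idx = torch.randperm(n)[:3500]      # a random "manifold" of K = 3,500 coordinates
# Inside each PGD iteration, before the gradient is normalized:
#   grad = project_grad(grad, idx, onto=True)    # on-"manifold" attack
#   grad = project_grad(grad, idx, onto=False)   # off-"manifold" attack
```

The point is that this random subspace alone reproduces the on/off norm gap they report, which is what makes the experiment uninformative about the actual image manifold.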
Oh, and I almost forgot about Goodfellow's claim, the one they say belongs to the old, now supposedly incorrect, thinking: that when you make an adversarial example, you go towards the centroid of a different class. In their imagination, it's something like the picture on the left. However, think about this in my picture: say you start out here and go towards the centroid of the other class, which is approximately here. What happens in feature space, because of the stretchy features, because of the different scales? In feature space it's pretty much the blue arrow: you go a long way. Actually, I should have drawn this one to be square and this one to be super duper stretchy; I was wrong in my drawing. So the centroid, what was the centroid here, is way up here somewhere. This direction gets super stretched, and you cross the boundary along this one feature, the fur feature. So I think Goodfellow's claim is still correct: you do go towards the centroid of another class, but because you do this in input space, in feature space it results in a dramatic shift in some features and a not-so-dramatic shift in others. While in input space you move towards the centroid equally in all pixel directions, you do not move towards the centroid equally in all feature directions. So the claim Goodfellow made is still valid here and is consistent with the stretchy feature explanation; I can't read his mind, but that's perhaps what he meant, and not necessarily a picture in which the entire image changes into the other class. That was the interjection, and back to the conclusion. As I said, make up your own mind: what do you think of this? Go through the paper. It's a good paper; it's written well, it has a lot of experiments, and quite a lot of appendix material with more results. And again, it's not necessarily incompatible with what we know; I don't disagree with their main claims. I just think it's not as useful as they claim, and it's kind of insufficient. I think we already knew a lot of this, and our current mental models explain things a little better. The stretchy feature model has a fancy name now, but again, it is not mine; it is just a bringing together of what I think we know about adversarial examples. Safe to say, there's going to be something that challenges this, and that's going to be exciting. All right, thanks so much for being here and listening, and I'll see you next time. Bye bye.
[ { "start": 0, "end": 4.32, "text": " Hello there! Today we're going to look at the dimpled manifold model of" }, { "start": 4.32, "end": 10.040000000000001, "text": " adversarial examples in machine learning by Adi Shamir, Odelia Melamed and Oriol" }, { "start": 10.040000000000001, "end": 16.2, "text": " Ben-Schmuel. This paper on a high level proposes a new way of looking at the" }, { "start": 16.2, "end": 20.28, "text": " phenomenon of adversarial examples in machine learning, specifically in deep" }, { "start": 20.28, "end": 26.12, "text": " learning, and they proposed this model called the dimpled manifold model," }, { "start": 26.12, "end": 32.96, "text": " essentially arguing that classifiers put their decision boundaries right next to" }, { "start": 32.96, "end": 39.24, "text": " the manifold of data, while only slightly sort of curving it around the data like" }, { "start": 39.24, "end": 43.96, "text": " this. Now the data manifold being low dimensional, this results in a situation" }, { "start": 43.96, "end": 49.120000000000005, "text": " where you can cross the decision boundary really easily if you simply go" }, { "start": 49.120000000000005, "end": 54.64, "text": " perpendicular to the data manifold, which also is perpendicular to the" }, { "start": 54.64, "end": 60.08, "text": " decision boundary, and if because it's just such a small dimple there, the" }, { "start": 60.08, "end": 64.76, "text": " decision boundary is pretty close, and that's how you end up with adversarial" }, { "start": 64.76, "end": 70.68, "text": " examples that are super easy to get. So it's not a new attack, a new defense," }, { "start": 70.68, "end": 75.6, "text": " anything like this, it's simply a mental framework of explaining why adversarial" }, { "start": 75.6, "end": 80.28, "text": " examples exist on a high level. They have some conceptual thought" }, { "start": 80.28, "end": 87.24, "text": " experiments, they have some explanations, and some real-world experiments. Now I" }, { "start": 87.24, "end": 92.92, "text": " personally don't think that this is entirely, it's not necessarily" }, { "start": 92.92, "end": 98.08, "text": " incorrect, but I don't think that this is really useful to think in this way, and" }, { "start": 98.08, "end": 102.96000000000001, "text": " I'm gonna explain why. In general my opinion of this is it doesn't really add" }, { "start": 102.96, "end": 111.44, "text": " anything, and I think it explains less than the models we already had. Yeah so" }, { "start": 111.44, "end": 115.36, "text": " that's my opinion, I'm gonna get to it. Specifically also the" }, { "start": 115.36, "end": 121.19999999999999, "text": " experiments they propose, I think that there is a big Occam's razor failure" }, { "start": 121.19999999999999, "end": 126.52, "text": " right there. But as I said we're gonna get to all of this, I'm gonna go through" }, { "start": 126.52, "end": 131.4, "text": " the paper and I want you to make up your own mind, even though I'm going to try to" }, { "start": 131.4, "end": 136.92000000000002, "text": " bias you. So yeah this is not a neutral channel in case you haven't" }, { "start": 136.92000000000002, "end": 143.32, "text": " noticed. Alright so if you like content or if you dislike it tell me in" }, { "start": 143.32, "end": 147.76, "text": " the comments, tell me what you think of the paper, whether it makes sense, whether" }, { "start": 147.76, "end": 152.16, "text": " it doesn't make sense, and so on. 
I'd be very interested to see what you have to" }, { "start": 152.16, "end": 159.92000000000002, "text": " say. Yeah I read the comments, so please. They say the extreme fragility of deep" }, { "start": 159.92, "end": 164.35999999999999, "text": " neural networks when presented with tiny perturbations, yeah but okay this starts" }, { "start": 164.35999999999999, "end": 168.95999999999998, "text": " out how every single adversarial examples paper always starts out saying" }, { "start": 168.95999999999998, "end": 173.04, "text": " okay deep neural networks are extremely fragile, there's this phenomenon of" }, { "start": 173.04, "end": 177.6, "text": " adversarial examples. Now if you don't know what adversarial examples are," }, { "start": 177.6, "end": 182.83999999999997, "text": " really briefly essentially what this is, it's a phenomenon where you take an" }, { "start": 182.83999999999997, "end": 187.2, "text": " image like the thing here on the left, the neural network thinks it's a plane" }, { "start": 187.2, "end": 192, "text": " with a very high probability and you change it to this thing right here, which" }, { "start": 192, "end": 195.83999999999997, "text": " you as a human can't even tell it's different, however the neural network" }, { "start": 195.83999999999997, "end": 201.98, "text": " will think that this is now a bird with very high probability and the this is" }, { "start": 201.98, "end": 207.64, "text": " the change that you made. It's magnified for you to see, it kind of looks like" }, { "start": 207.64, "end": 211.64, "text": " random noise but it's a very particular noise that makes the neural network" }, { "start": 211.64, "end": 216.35999999999999, "text": " think it's something different and this is just it's tiny in the in its norm." }, { "start": 216.36, "end": 221.72000000000003, "text": " So you don't see a difference. Now bird here is kind of close to plane" }, { "start": 221.72000000000003, "end": 225.34, "text": " but you can change this into anything, literally anything you want, you can" }, { "start": 225.34, "end": 233.60000000000002, "text": " change this into banana or I don't know dog or any class you want using these" }, { "start": 233.60000000000002, "end": 237.56, "text": " techniques. So it's not about being close it's really kind of a separate" }, { "start": 237.56, "end": 242.96, "text": " phenomenon. So that's adversarial examples and many frameworks have been" }, { "start": 242.96, "end": 247.68, "text": " proposed in order to explain these adversarial examples and they make a" }, { "start": 247.68, "end": 253.28, "text": " they make a nice overview right here. Many have been proposed over the last" }, { "start": 253.28, "end": 257.52, "text": " eight years that DNNs are too nonlinear, that they're too linear, that they" }, { "start": 257.52, "end": 262.16, "text": " were trained with insufficient number of training examples, that are just rare" }, { "start": 262.16, "end": 267.08, "text": " cases where they error, that images contain robust and non robust features" }, { "start": 267.08, "end": 273.64, "text": " etc. They say however none of these vague qualitative ideas seem to provide a" }, { "start": 273.64, "end": 277.8, "text": " simple intuitive explanations for the existence and bizarre properties of" }, { "start": 277.8, "end": 284.96, "text": " adversarial examples. 
Many frameworks have been proposed to explain adversarial examples, and the paper gives a nice overview: over the last eight years it has been proposed that DNNs are too nonlinear, that they're too linear, that they were trained with an insufficient number of training examples, that adversarial examples are just rare cases where the networks err, that images contain robust and non-robust features, etc. They say, however, that "none of these vague qualitative ideas seem to provide a simple intuitive explanation for the existence and bizarre properties of adversarial examples."

That is pretty harsh criticism. The first ones, okay, but specifically the last one, that images contain robust and non-robust features, is sort of the leading hypothesis right now of why adversarial examples exist and what they are, and here they're saying none of these vague qualitative ideas provides a simple intuitive explanation. Let's see whether they do better.

In the abstract they go on to say that they introduce this new conceptual framework, which they call the dimpled manifold model, which provides a simple explanation for why adversarial examples exist, why their perturbations have such tiny norms, why these perturbations look like random noise, and why a network which was adversarially trained with incorrectly labeled images can still correctly classify test images. If you're not familiar with the literature, this last part might seem a bit random. It refers to a famous experiment from the group of Aleksander Madry, where the robust and non-robust feature hypothesis also comes from, and any attempt at explaining adversarial examples after that paper has to explain why that experiment makes sense, because it's a non-intuitive experiment. We're going to get to that as well, but just so you know, that's why they put it in the abstract. I personally think this model doesn't have a good explanation for why that works; they're sort of hand-wavy about it. In any case, they say that in the last part of the paper they describe the results of numerous experiments which strongly support the new model, and in particular their assertion that adversarial perturbations are roughly perpendicular to the low-dimensional manifold which contains all the training examples. Remember this: the experiments are supposed to support this particular claim, because that is going to be important down the road.

Okay, let's get into the dimpled manifold model. What is it? What do the authors propose? I'm going to try as best as I can to say what the authors are saying in the paper. They claim there is an old mental image of adversarial examples: "we think the old mental image is based on the highly misleading 2D image on the left side of figure one," and that's this thing right here. The old mental image is that there is a data space; if you think of images as data points, this would be the pixel space, so these are images with two pixels in this conceptual framework, though you have to think yourself into higher dimensions. They claim the old mental image is the following. You have the data distributed somehow in this space, the data being the set of natural images, or whatever images you consider, which forms these subgroups right here: there are a bunch of images there, and there, and also there, and these are images of two different classes, the red class and the blue class. What is a classifier supposed to do? A classifier is supposed to put a decision boundary between them, and that's what they draw in: a sort of reasonable decision boundary between the two classes. Now what do you do if you want to create an adversarial example? Necessarily, you have to start at an image of one class, and you have to cross the decision boundary; you want to fool the classifier, ergo by definition you have to cross the decision boundary. The easiest way to do this is to go straight towards the decision boundary, which is approximately in this direction, and once you cross it you are done; you're on the other side, you have created an adversarial example, provided of course that the image still kind of looks like the original image. And they say this mental image has many problems: "in this mental image adversarial examples are created by moving the given images along the green arrows towards some kind of centroid of the nearest training images with the opposite label," by which they mean we would move the images towards images of the other class, "as stated, for example, by Ian Goodfellow in his lecture." I'm going to cut that in right here:

"I've said that the same perturbation can fool many different models, or the same perturbation can be applied to many different clean examples. I've also said that the subspace of adversarial perturbations is only about 50-dimensional, even if the input dimension is 3000-dimensional. So how is it that these subspaces intersect? The reason is that the choice of the subspace directions is not completely random. It's generally going to be something like pointing from one class centroid to another class centroid. And if you look at that vector and visualize it as an image, it might not be meaningful to a human, just because humans aren't very good at imagining what class centroids look like, and we're really bad at imagining differences between centroids. But there is more or less this systematic effect that causes different models to learn similar linear functions, just because they're trying to solve the same task."

Okay, so it really appears that Goodfellow says this thing.
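Goodfellow's centroid point is easy to make concrete. This is my own toy illustration, not code from his lecture or the paper; `cats` and `dogs` are assumed to be arrays of flattened, same-size images:

```python
import numpy as np

def centroid_direction(cats: np.ndarray, dogs: np.ndarray) -> np.ndarray:
    """Unit vector pointing from the cat centroid towards the dog centroid."""
    d = dogs.mean(axis=0) - cats.mean(axis=0)
    return d / np.linalg.norm(d)

def perturb(image: np.ndarray, direction: np.ndarray, eps: float = 0.03) -> np.ndarray:
    """Apply the same small perturbation direction to any clean image."""
    return np.clip(image + eps * direction, 0.0, 1.0)
```

The same `direction` applies to every clean image, which is exactly the "one perturbation fools many examples" observation he is explaining.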
However, the authors claim this doesn't make sense, and that you should think about adversarial examples in a different way: their dimpled manifold hypothesis. What is it? They say you have to think about the data manifold in the higher-dimensional input space. So instead of the 2D landscape, they consider the 3D landscape; this is again pixel space, now with three-pixel images, and the data is embedded in a low-dimensional manifold in this higher space. Because if you think about all combinations of pixels that are possible, not all of them are natural images; in fact only very few of the possible pixel combinations are natural images, images that make sense to you as a human, or images that you could potentially generate by going out with a camera. So the data you're considering lives on a very low-dimensional manifold in this big space, and you have to explicitly think about that. The data manifold here is represented by this sheet in the middle, and on this manifold you have your different classes of data, the blue being one class and the red the other.

What this paper claims is that when classifiers, when neural networks, classify the training data, they lay their decision boundary as follows. In the old model you might have thought something like this happens: you put your decision boundary sort of in the middle between the two classes, crossing the manifold, and when you create an adversarial example, you start somewhere, go straight towards the decision boundary, cross it, and on the other side you have an adversarial example. In the new model, they claim, the decision boundary doesn't look like that at all: it is very much aligned with the manifold of data. The mesh they show is the decision boundary, and their claim is that it usually just aligns with the data manifold; however, around the actual training samples, the classifier creates these dimples, tiny perturbations in the decision boundary, such that the data is on the correct side of it. The blue points are on one side of the decision boundary and the red points are on the other side, and everywhere else the decision boundary just aligns with the data manifold. If you now want to make an adversarial example, you again start from an image and walk straight towards the decision boundary; however, now you can go simply perpendicular to the data manifold, and you will cross the decision boundary very quickly, because the dimple you're in is shallow. They give a reason why the dimples are shallow, claiming it results from how these models are trained, and that explains some things.

So the difference is this: to make an adversarial example, we have to go towards the decision boundary. In the old mental image, going perpendicular to the decision boundary means walking on the data manifold, because we walk from one group of data towards the other group. In the new model, walking perpendicular to the decision boundary coincides with walking perpendicular to the data manifold. That is the difference they claim.

They say: "we call this conceptual framework the dimpled manifold model, and note that it makes three testable claims about the kinds of decision boundaries created by trained deep neural networks: first, natural images are located on a k-dimensional manifold, where k is much smaller than n; second, deep neural network decision boundaries pass very close to this image manifold; and third, the gradient of the classification's confidence level has a large norm and points roughly perpendicular to the image manifold." These are the claims they're going to test and support with experiments, I guess.
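The third claim is at least checkable in a rough way. Here is a sketch of how one could probe it; this is my own probe, not the paper's protocol, and `model` plus a flattened training matrix `train_x` are assumed to be given. The idea: estimate the manifold's tangent space by PCA over the data, then see how much of the classifier's input gradient lies inside that tangent space.

```python
import numpy as np
import torch

def tangent_basis(train_x: np.ndarray, k: int) -> np.ndarray:
    """Rows are the top-k principal directions of the data, a crude tangent space."""
    centered = train_x - train_x.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]

def on_manifold_fraction(model, x: torch.Tensor, label: int, basis: np.ndarray) -> float:
    """Fraction of the gradient's squared norm lying in the tangent space.
    A value near 0 would support 'the gradient is perpendicular to the manifold'."""
    x = x.clone().requires_grad_(True)
    logit = model(x.unsqueeze(0))[0, label]
    (grad,) = torch.autograd.grad(logit, x)
    g = grad.flatten().detach().numpy()
    proj = basis.T @ (basis @ g)   # project the gradient onto the tangent space
    return float(proj @ proj) / float(g @ g)
```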
"start": 1081.1200000000001, "end": 1087.5, "text": " assumption decision boundaries pass very close to the image manifold well the" }, { "start": 1087.5, "end": 1093.08, "text": " fact that we can generate adversarial examples right already means that" }, { "start": 1093.08, "end": 1098.08, "text": " decision boundaries pass very close to the image manifold so this also is not" }, { "start": 1098.08, "end": 1104.32, "text": " news this this has been like in everybody's conceptual framework for the" }, { "start": 1104.32, "end": 1110.02, "text": " last five years at least and then third the gradient of the classifications" }, { "start": 1110.02, "end": 1115.04, "text": " confidence level has a large norm and points roughly perpendicular to the" }, { "start": 1115.04, "end": 1121.68, "text": " image manifold and this claim right here I'm pretty pretty sure there so this is" }, { "start": 1121.68, "end": 1130.84, "text": " not a trivial claim which yes okay this is not something that was like set" }, { "start": 1130.84, "end": 1137.76, "text": " around much however I'm going to claim that their model is not the only model" }, { "start": 1137.76, "end": 1143.76, "text": " by far that makes this happen or any something like this specifically when we" }, { "start": 1143.76, "end": 1150.52, "text": " go look at the experiments I'm going to show you that this doesn't necessarily" }, { "start": 1150.52, "end": 1155.52, "text": " support their claims it doesn't disprove them right but it also doesn't" }, { "start": 1155.52, "end": 1162.42, "text": " necessarily support them just because they show that okay so the other problem" }, { "start": 1162.42, "end": 1166.52, "text": " I have with this is that this in this thing they build up as ooh this is this" }, { "start": 1166.52, "end": 1170.72, "text": " is the old mental image this is how people thought about adversarial" }, { "start": 1170.72, "end": 1177.6000000000001, "text": " examples until now I look I just I disagree like this it's a bit of a it's" }, { "start": 1177.6000000000001, "end": 1184.48, "text": " a bit of a straw man almost I feel like this no one no one thought no one that" }, { "start": 1184.48, "end": 1188.56, "text": " is sort of in the literature of adversarial examples thought or thinks" }, { "start": 1188.56, "end": 1193.6000000000001, "text": " that this is an appropriate model for what is happening like we know that" }, { "start": 1193.6000000000001, "end": 1199.52, "text": " these distances here are very small right the distance until you cross the" }, { "start": 1199.52, "end": 1205.6399999999999, "text": " decision boundary and we know also like if this were true you should just be" }, { "start": 1205.6399999999999, "end": 1210.84, "text": " able to go to the decision boundary and then go the same distance right and then" }, { "start": 1210.84, "end": 1215.76, "text": " at some point you would actually arrive at a sample of a different class so you" }, { "start": 1215.76, "end": 1220.04, "text": " could you could actually transform images into the other class by simply" }, { "start": 1220.04, "end": 1223.6399999999999, "text": " going into the adversarial direction which is precisely what we don't see" }, { "start": 1223.6399999999999, "end": 1228.8, "text": " right we see the image still largely looks the same what gets added looks" }, { "start": 1228.8, "end": 1233.8, "text": " like a bit of noise okay so no no one was having this mental image because" }, { "start": 1233.8, "end": 1240.2, "text": " clearly this mental 
image is it is not appropriate for adversarial examples as" }, { "start": 1240.2, "end": 1246.32, "text": " well as saying look if you think of this in sort of higher dimensions and I" }, { "start": 1246.32, "end": 1249.48, "text": " realize I've drawn this decision boundary but this is what they describe" }, { "start": 1249.48, "end": 1258.96, "text": " in the text then I don't I don't see that this is the correct way of like" }, { "start": 1258.96, "end": 1263.3600000000001, "text": " there are many different kinds of decision boundaries that are compatible" }, { "start": 1263.3600000000001, "end": 1269.8, "text": " with with the decision boundary right here by the way this decision boundary I" }, { "start": 1269.8, "end": 1274.68, "text": " drew doesn't even separate the classes all the classes correctly what I'm" }, { "start": 1274.68, "end": 1278.84, "text": " saying is that also if you consider the decision boundary that for example looks" }, { "start": 1278.84, "end": 1286.36, "text": " like out of colors looks like this that also crosses here however it's sort of" }, { "start": 1286.36, "end": 1294.76, "text": " kind of flat like this but it's still a linear decision boundary right like this" }, { "start": 1294.76, "end": 1301.32, "text": " okay so this is above and the other part is below if you think of this if you" }, { "start": 1301.32, "end": 1307.84, "text": " project this down it looks the same in 2d and in 3d it's also explains that" }, { "start": 1307.84, "end": 1314.32, "text": " decision boundaries are very close to the data samples it's a bit different" }, { "start": 1314.32, "end": 1319.56, "text": " though than this dimpled manifold hypothesis right if you I think the at" }, { "start": 1319.56, "end": 1324.12, "text": " least in my estimation what's happening is much more that you have just a bunch" }, { "start": 1324.12, "end": 1329.72, "text": " of these kind of linear decision boundaries flying around right here" }, { "start": 1329.72, "end": 1336, "text": " partitioning up the space and so on and this might result in a similar situation" }, { "start": 1336, "end": 1341.48, "text": " as here but it has quite different predictions in form of what it does then" }, { "start": 1341.48, "end": 1347.04, "text": " what it does right here here it's sort of a flat manifold dimpling around the" }, { "start": 1347.04, "end": 1351.48, "text": " data whereas here it's kind of the class are separating the space into many" }, { "start": 1351.48, "end": 1357.68, "text": " regions always trying to sort of distinguish one class from the other and" }, { "start": 1357.68, "end": 1364.72, "text": " yeah so might end up bit the same but I don't think they give a fair shot at" }, { "start": 1364.72, "end": 1372.44, "text": " what we know so far like we that this model is not a a model that people hold" }, { "start": 1372.44, "end": 1378.88, "text": " in general especially the one on the left I can make an attempt at making a" }, { "start": 1378.88, "end": 1384.32, "text": " mental model that people hold so far maybe it's just me but I have a feeling" }, { "start": 1384.32, "end": 1390.88, "text": " this is a bit more so the model that I call let's call it something because" }, { "start": 1390.88, "end": 1395.68, "text": " they call it there something right I call mine the squishy feet the stretchy" }, { "start": 1395.68, "end": 1400.3600000000001, "text": " feature model okay let's contrast this with the stretchy feature model so what" }, { "start": 1400.3600000000001, "end": 
1405.5200000000002, "text": " I want to do is I have two features and this is a coordinate system in feature" }, { "start": 1405.5200000000002, "end": 1409.7800000000002, "text": " space okay so there's two features this in feature space I mean sort of the the" }, { "start": 1409.7800000000002, "end": 1414.5600000000002, "text": " last representation before the classification layer in feature space" }, { "start": 1414.56, "end": 1421.8, "text": " the two classes look like this so there is the red class and there is the blue" }, { "start": 1421.8, "end": 1426.52, "text": " class and you can see right here there are two features and for some reason the" }, { "start": 1426.52, "end": 1430.24, "text": " network can classify along these two features maybe because there are other" }, { "start": 1430.24, "end": 1433.56, "text": " classes other data points so we can't put a decision boundary like this" }, { "start": 1433.56, "end": 1439, "text": " between the two we can classify along the two features okay so you can see" }, { "start": 1439, "end": 1444.6, "text": " there are two features right here feature one and feature two and both features are" }, { "start": 1444.6, "end": 1450.2, "text": " actually pretty good features for keeping these two data points apart okay" }, { "start": 1450.2, "end": 1455.76, "text": " now there are empty spaces as you can see right here which we're gonna get to" }, { "start": 1455.76, "end": 1460.9, "text": " in a second but you can you can use both features and ideally a classifier would" }, { "start": 1460.9, "end": 1465.28, "text": " actually use both features it would say you know if feature one is high it's" }, { "start": 1465.28, "end": 1469.08, "text": " there probably a red class if feature two is low it's probably the red class and the" }, { "start": 1469.08, "end": 1475.18, "text": " combination makes even more of the red class okay however since we are in a deep" }, { "start": 1475.18, "end": 1480.2, "text": " neural network which is has transformations it transforms the data" }, { "start": 1480.2, "end": 1484.58, "text": " along the way if you look at the same situation in input space so in the" }, { "start": 1484.58, "end": 1490.02, "text": " actual pixel space it looks different and this is due to not necessarily the" }, { "start": 1490.02, "end": 1495.6399999999999, "text": " non-linearity of things but actually it is due to the linear transformation it's" }, { "start": 1495.6399999999999, "end": 1498.92, "text": " actually the problem of adversarial examples at least in my estimation" }, { "start": 1498.92, "end": 1505.32, "text": " appears to happen in the linear layers if you think of for example like eigenvectors" }, { "start": 1505.32, "end": 1510.56, "text": " of matrices and the largest eigenvalues determine how far you can go in a" }, { "start": 1510.56, "end": 1519.08, "text": " particular direction by having a sort of a standard input delta and the same" }, { "start": 1519.08, "end": 1522.76, "text": " happens here by the way this is why spectral norm regularization tends to" }, { "start": 1522.76, "end": 1526.8799999999999, "text": " work at least a little bit against adversarial examples so what I mean is" }, { "start": 1526.8799999999999, "end": 1531.1599999999999, "text": " if you look at the scale of these features right they are like one two" }, { "start": 1531.1599999999999, "end": 1535.48, "text": " three four five of this features one two three four five if you look in the" }, { "start": 1535.48, "end": 1540.1999999999998, "text": 
" input space some of the features are going to have roughly the same scale" }, { "start": 1540.1999999999998, "end": 1546.76, "text": " right here and these features are going to be features that you have to change" }, { "start": 1546.76, "end": 1551.64, "text": " the input a lot in order to change the feature a lot what do I mean by this" }, { "start": 1551.64, "end": 1557.64, "text": " this is something like the shape of an of an image okay if you think of a cat" }, { "start": 1557.64, "end": 1564.04, "text": " the general shape of a cat you know it has it has two years pointy it has a" }, { "start": 1564.04, "end": 1570.08, "text": " head and and so on that's the general shape of a cat sorry that is actually" }, { "start": 1570.08, "end": 1577.28, "text": " the left right feature right this is the the left right feature is the shape and I" }, { "start": 1577.28, "end": 1581.3999999999999, "text": " have to change the input a lot in order to affect the feature right so that if" }, { "start": 1581.3999999999999, "end": 1585.04, "text": " they're roughly on the same scale of what I have to change to change the" }, { "start": 1585.04, "end": 1591.8799999999999, "text": " feature however the other the other feature in the input space has a much" }, { "start": 1591.8799999999999, "end": 1597.04, "text": " different scale than it has on in the feature space and this might be" }, { "start": 1597.04, "end": 1603.68, "text": " something like the fur structure of a cat so the fur structure of a cat like" }, { "start": 1603.68, "end": 1608.8, "text": " is I can change the pixels a tiny bit and I'm going to change the first" }, { "start": 1608.8, "end": 1613.36, "text": " structure by a lot I can change the first structure of a cat to the first" }, { "start": 1613.36, "end": 1620.3999999999999, "text": " structure of a dog by just changing the by just changing the pixels a little" }, { "start": 1620.3999999999999, "end": 1625.6, "text": " however it will be different and now it will be the first structure of a dog so" }, { "start": 1625.6, "end": 1631, "text": " how does this change now in input space in input space it's going to look" }, { "start": 1631, "end": 1637.76, "text": " something like this where one feature dimension is going to look rather the" }, { "start": 1637.76, "end": 1644.9199999999998, "text": " same and the other feature direction is going to be very very stretched okay now" }, { "start": 1644.9199999999998, "end": 1649.8, "text": " remember both of these features are good features they both can be used to" }, { "start": 1649.8, "end": 1656.04, "text": " read to classify the images so you can see changing the shape requires a lot of" }, { "start": 1656.04, "end": 1660.2, "text": " pixels changing the first structure however requires just a little pixel now" }, { "start": 1660.2, "end": 1666.68, "text": " if I take some image and I draw an L2 ball around it which was what we usually" }, { "start": 1666.68, "end": 1671.96, "text": " do when we create an adversarial example we say only we only allow small" }, { "start": 1671.96, "end": 1679.56, "text": " perturbations you can see that in in this direction it's a very you know you" }, { "start": 1679.56, "end": 1685.28, "text": " don't get very far in feature space but if you go the same distance in the in" }, { "start": 1685.28, "end": 1691.6799999999998, "text": " the input space into this direction in the feature space you're going to walk a" }, { "start": 1691.6799999999998, "end": 1698.6, "text": " lot you're going to walk 
And this is just by definition: there are going to be many features that you can use to classify images, and they're going to be good features, not errors or aberrations; the fur structure is a genuinely good feature for classifying a cat. Of these many features, some are going to be of large magnitude and some of small magnitude in input space, and that's just what happens. So I call this the stretchy feature model, and it's pretty much a direct result of the paper they cite by Aleksander Madry's group, which we're going to get to in a second. Keep these two models in mind; we're going to see which one explains the phenomena better and which one doesn't.

Next, they explain why deep neural networks are likely to create dimpled manifolds as decision boundaries. The idea is this: consider the data manifold in green; here we have just one-dimensional data, and you can see it's not linearly separable, so we need a curved decision boundary around it. Why would training result in a dimpled manifold? They say: if you start off your deep neural network training, your decision boundary is maybe going to be somewhere like here, not very effective. Say you want the red data above and the blue data below the decision boundary. Right now the blue points are fine, they don't complain, but you get a gradient out of the red examples pushing the entire decision boundary down, and there's no resistance, so you push it down; that's your next decision boundary. Same situation, you push the whole boundary down again; now you've gone too far, so the gradient pushes the entire boundary up, because now the red points are fine and the blue points complain. This leaves you sitting right on top of the data, and then both gradients kick in: the red data pushes the decision boundary down, the blue data pushes it up, which results in these dimples around the data, with the decision boundary otherwise coinciding with the data manifold. That's their explanation for why this happens; I hope it makes a little bit of sense.

Contrast this with the mental model of having a bunch of linear half-spaces, which would result in decision boundaries going through here, and here, and here, and which would also explain what we see. But this is their claim for why the decision boundary looks the way it does.
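You can actually poke at this debate on a toy problem. Here is a little probe I would run (my own sketch, not an experiment from the paper): put all the training points on a one-dimensional manifold inside 2D, train a small MLP, and measure how far you have to walk perpendicular to the manifold before the prediction flips. The dimpled manifold story predicts a very short walk.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.rand(200, 1) * 2 - 1
data = torch.cat([x, torch.zeros(200, 1)], dim=1)   # all points on the line y = 0
labels = (x[:, 0] > 0).float()                      # label by sign of x

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    F.binary_cross_entropy_with_logits(model(data).squeeze(1), labels).backward()
    opt.step()

p = torch.tensor([[0.5, 0.0]])                      # a point on the data manifold
sign0 = model(p).item() > 0
for i in range(1, 101):
    dy = i / 100
    if (model(p + torch.tensor([[0.0, dy]])).item() > 0) != sign0:
        print(f"prediction flips at perpendicular distance {dy:.2f}")
        break
else:
    print("no flip within distance 1 perpendicular to the manifold")
```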
To me, this is a bit weird. Why should the decision boundary align with the data manifold in between the data? Maybe it doesn't; maybe they don't actually claim that, and I shouldn't complain. But they do give examples suggesting the decision boundary should be rather simple, that it doesn't like to curve a lot. They say the new model can help to understand why the training phase of a given network typically converges to the same globally optimal placement of the decision boundary regardless of its random initialization, and they're going to make a claim about why this happens. To demonstrate the point, consider the old model, in which you sprinkle a large number of classes at random locations in the two-dimensional square, as depicted in figure three. They're talking about this figure right here, and they say: in the old model, if you want to pass simple decision boundaries through this, you have to pass them like some of the gray ones we see here, and they are not going to be very good. So our goal is to pass a decision boundary of bounded complexity, and this bounded complexity comes up again and again; their decision boundary, they claim, is of course very smooth and very simple, and will best separate the red and blue clusters. They say there is a large number of ways to do this in the plane, like the green lines, and most of them will be about equally bad; in particular, any decision to pass on one side or the other of some cluster can make it harder to accommodate other clusters elsewhere along the line; consequently, there will likely be many local minima of roughly the same quality. In the dimpled manifold model, however, there is likely to be a single globally best decision boundary shape, since there is no conflict between our ability to go above one cluster and below a different cluster when they do not intersect. So their idea is that rather than threading decision boundaries through the plane, you look at the problem in three dimensions and just put a sheet over the top of it, going above the blue clusters and below the red clusters in the third dimension, instead of using these gray lines, which are not very optimal.

Now, I'm not really sure what to make of this. First of all, they say training typically converges to the same globally optimal placement of the decision boundary regardless of random initialization; we know this is not true. I've specifically made videos on research by Stanislav Fort showing that if you initialize a network differently, you will reach the same accuracy, but it will make mistakes on different samples of the test set, and there's actually structure to how these decision boundaries differ depending on your random initialization, which would actually support what they call the old view. Second of all, I have no trouble making a decision boundary here that separates red and blue: I can go something like this, come here, get here, and so on; red and blue separated. So this notion of bounded complexity does a lot of work here, their insisting that the decision boundary should be simple, somehow straight. But I disagree that their decision boundaries are so simple: if you have to curve around every data sample and otherwise follow the image manifold, that seems like a rather complex decision boundary, honestly, because following the data manifold is essentially a generative model of the data. So I disagree that theirs is so much simpler just because it doesn't bend much here while mine bends a lot there; that's also something they say, that you don't want to bend decision boundaries a lot because that hardens training. And third of all, why do they give their model the benefit of the third dimension? They claim the old model doesn't work because if you have to place decision boundaries between the data points in the plane, you end up with bad decision boundaries; however, for their model to work, they need the third dimension, they need to pass under and over the data in the third dimension. But if you actually allow the third dimension, every single lecture on kernelized SVMs shows you that in higher dimensions these things become separable; with an RBF kernel, this becomes one cluster and that another, and so on; this is the first lecture on going into higher dimensions in order to linearly classify things. So it's not like their model can explain anything more than any other model if you give it the third dimension, and the fact that they don't give the old model the third dimension but give it to themselves in order to explain things is a little bit, I don't know, questionable.
So I don't think this is any argument for their model. It simply shows that if you have a lower-dimensional manifold of data and you classify it in a higher dimension, there are ways to do that. And if you have ReLU networks and linear classifiers, it's going to look chunkier; they divide the space into these ReLU cells in which the data is classified. All of this is compatible with what they're saying, not just their dimpled manifold hypothesis. So I don't see the big explanation here.

Now, what do they claim they can explain with their model? "Explaining the mysteries of adversarial examples": there are five things they claim they can explain. First, the mixture mystery: how can it be that a tiny distance away from any cat image there is also an image of guacamole, and vice versa? And if these classes are intertwined in such a fractal way, how can a neural network correctly distinguish between them? Their answer is that all the real cat and guacamole images reside on the tiny image manifold, but below the real cat images there is a whole half-space of pseudo-guacamole images, which are not natural images of guacamole, and above the guacamole images there is a whole half-space of pseudo-cat images. So their idea is: you have this one-dimensional data manifold, here are the cats, here the guacamoles; if your dimpled manifold curves around the data, then everything down here is technically guacamole, so if you go from a cat down there, you reach a non-natural guacamole image. The explanation is that the decision boundary lines up with the data manifold except around the data, where it creates a small dimple, and therefore you can cross the dimple into the other region. But this is the same effect as in the other model: I can draw this dimpled manifold in my picture too and get the same effect. However, the stretchy feature model explains much more. In a two-class setting, fine, but in a multi-class setting there is no reason why this region should be guacamole; it could be any other class. If the idea is that the decision boundary follows the data manifold and just dimples around the data so that it is classified correctly, the only constraint is that these points are cats; it says nothing about why on the other side there is guacamole instead of anything else. And that does not coincide with what we know about adversarial examples, namely that such a region is a consistent region.

First of all, my bigger problem is: why does the dimpled manifold hypothesis even generalize? If the decision boundary follows the data manifold except around the training data, why does the network classify test data correctly? You'd have to argue that the test data is quite close to the training data, because otherwise the network would get very confused on test data, which lies somewhere else on the manifold. But we know that neural networks generally classify data on the manifold of natural images quite well; they generalize quite well. This model, however, is sort of an anti-generalization model. Okay, maybe you can claim the test images are close enough to the training images that this works. But we also know that this region over there is a consistent region. What do I mean by that? We know, for example, that we can make universal adversarial perturbations: we can find directions such that, no matter from which image or which class we start, they will always result in guacamole. This is not explained by the dimpled manifold; there is no reason why these regions on the other side should have a consistent label in a multi-class setting.
"text": " label in a multi-class setting we also know that adversarial perturbations are" }, { "start": 2565.28, "end": 2570.76, "text": " transferable which means that we can make an adversarial perturbation in one" }, { "start": 2570.76, "end": 2575.12, "text": " classifier and then in a different classifier even if it's trained with a" }, { "start": 2575.12, "end": 2580.52, "text": " different data set actually we can we can apply the same adversarial" }, { "start": 2580.52, "end": 2585.16, "text": " perturbation and it will most likely still be of the same like the" }, { "start": 2585.16, "end": 2591, "text": " adversarial perturbation going towards the same class there is no reason in the" }, { "start": 2591, "end": 2595.12, "text": " dimpled manifold hypothesis that explains these phenomena if you think" }, { "start": 2595.12, "end": 2600.56, "text": " of this of the stretchy feature model this is really easy right if I create an" }, { "start": 2600.56, "end": 2607, "text": " adversarial example I go across the decision boundary right here what do I" }, { "start": 2607, "end": 2612.2, "text": " do I change the fur without changing the shape now I change the fur by so much" }, { "start": 2612.2, "end": 2618.68, "text": " that you know now there is a conflict right in feature space I go up here now" }, { "start": 2618.68, "end": 2624.8399999999997, "text": " there is a conflict it has the fur of a dog but the shape of a cat still now I" }, { "start": 2624.8399999999997, "end": 2629.8399999999997, "text": " there is a conflict but neural networks in the final layer are linear which" }, { "start": 2629.8399999999997, "end": 2634.2799999999997, "text": " means they just weigh the different features now I just pump that fur to be" }, { "start": 2634.2799999999997, "end": 2639.16, "text": " so doggish right that it overpowers the shape feature of the cat neural networks" }, { "start": 2639.16, "end": 2645.24, "text": " are biased towards sort of structure anyway over shape already so I just I" }, { "start": 2645.24, "end": 2650.72, "text": " just hammer that fur and now the neural network thinks it's it's a dog and a" }, { "start": 2650.72, "end": 2654.56, "text": " different neural network trained on the same data will also think it's a dog" }, { "start": 2654.56, "end": 2659.68, "text": " because it will also have learned to classify images by shape and fur" }, { "start": 2659.68, "end": 2666.64, "text": " therefore therefore it will it will be vulnerable to the same attack right this" }, { "start": 2666.64, "end": 2670.92, "text": " is super easy to explain in this model there is no reason why this should" }, { "start": 2670.92, "end": 2676, "text": " happen in the dimpled manifold model unless you amend it by some more hand" }, { "start": 2676, "end": 2684.08, "text": " wavy things they say the direction mystery when we use an adversarial attack" }, { "start": 2684.08, "end": 2687.52, "text": " to modify a cat into guacamole why doesn't the perturbation look green and" }, { "start": 2687.52, "end": 2695.2, "text": " mushy okay so they say well in the old model you would have to walk along the" }, { "start": 2695.2, "end": 2700.68, "text": " image manifold from here towards the guacamole images and that should mean" }, { "start": 2700.68, "end": 2705.7999999999997, "text": " that your image should sort of change to look like a guacamole in our in the" }, { "start": 2705.7999999999997, "end": 2710.24, "text": " dimpled manifold model you go off the manifold perpendicular and that" }, { 
"start": 2710.24, "end": 2713.3999999999996, "text": " explains why the adversarial perturbation looks like a little bit" }, { "start": 2713.3999999999996, "end": 2718.7999999999997, "text": " like just random noise again no one thought this in the old model in fact" }, { "start": 2718.7999999999997, "end": 2722.52, "text": " we have a pretty good explanation why it still looks the same and that's because" }, { "start": 2722.52, "end": 2728, "text": " humans are much more receptive to this thing right here to the shape whereas" }, { "start": 2728, "end": 2733.08, "text": " neural networks also or much more consider this thing right here the fur" }, { "start": 2733.08, "end": 2739.12, "text": " also they consider fur and shape in different proportions than the humans do" }, { "start": 2739.12, "end": 2746.44, "text": " and so that's we already sort of knew this and it's in fact a better" }, { "start": 2746.44, "end": 2752.96, "text": " explanation the uniformity mystery you know why the decision boundary is ever" }, { "start": 2752.96, "end": 2758.32, "text": " present so they claim because the there's this dimple right here even you" }, { "start": 2758.32, "end": 2763.88, "text": " know the most far away cat image here has a close crossing to the decision" }, { "start": 2763.88, "end": 2768.2400000000002, "text": " boundary so there is no cat images that are kind of closer to the decision" }, { "start": 2768.2400000000002, "end": 2772.08, "text": " boundary but this is I think this is just a property of a high-dimensional" }, { "start": 2772.08, "end": 2780.2799999999997, "text": " classifier I think that here our 2d view of the world betrays us and yeah" }, { "start": 2780.2799999999997, "end": 2784.52, "text": " especially if we can go really far in feature space with a tiny perturbation" }, { "start": 2784.52, "end": 2789.7999999999997, "text": " and input space this is not not a mystery not even a mystery the vanishing" }, { "start": 2789.7999999999997, "end": 2798.36, "text": " gap mystery okay which is about adversarial training I think which we're" }, { "start": 2798.36, "end": 2805.6, "text": " gonna skip here and then there is the accuracy robustness trade-off mystery so" }, { "start": 2805.6, "end": 2812.52, "text": " this is if you do if you train a model adversarially which means that here look" }, { "start": 2812.52, "end": 2818.44, "text": " here I have my cat okay I train I have a data set of cats and dogs I train my" }, { "start": 2818.44, "end": 2822.6800000000003, "text": " neural network on it it's vulnerable what can I do what I can do is I can" }, { "start": 2822.6800000000003, "end": 2827.28, "text": " create adversarial images this is a cat right I can create adversarial images by" }, { "start": 2827.28, "end": 2833.28, "text": " making this into a dog okay so this is a dog because I changed the first" }, { "start": 2833.28, "end": 2837.5600000000004, "text": " structure a little bit this is an adversarial example now I add this so" }, { "start": 2837.5600000000004, "end": 2843.5600000000004, "text": " this is comes from the data set now I add this to the data set but I tell it" }, { "start": 2843.5600000000004, "end": 2849.32, "text": " this is a cat too right this is a cat and this is a cat if I do this with my" }, { "start": 2849.32, "end": 2854.5600000000004, "text": " neural network the neural network will become robust to adversarial examples" }, { "start": 2854.56, "end": 2859.2, "text": " to a degree not fully but to a degree this is the best method we have 
so far" }, { "start": 2859.2, "end": 2863.86, "text": " of defending against adversarial examples called adversarial training now" }, { "start": 2863.86, "end": 2870.32, "text": " what you do when you do this is you train the network to to sort of classify" }, { "start": 2870.32, "end": 2875.98, "text": " the advert to yeah classify to incorporate the adversarial ness into" }, { "start": 2875.98, "end": 2882.92, "text": " its decision-making process and this results usually in a degradation of the" }, { "start": 2882.92, "end": 2887.12, "text": " generalization performance of the network so as it becomes more robust it" }, { "start": 2887.12, "end": 2893.36, "text": " becomes less accurate on real data right you gain accuracy on adversarial data" }, { "start": 2893.36, "end": 2899.12, "text": " you decrease the accuracy in real data which makes sense intuitively but it is" }, { "start": 2899.12, "end": 2904.08, "text": " a strong effect which is not the same as you know I simply teach my model to do" }, { "start": 2904.08, "end": 2911.04, "text": " yet another class it is quite it is actually a trade-off now they try to" }, { "start": 2911.04, "end": 2916.4, "text": " explain this right here when we train the network we keep the images" }, { "start": 2916.4, "end": 2921.12, "text": " stationary and move to decision boundary by creating dimples when we create" }, { "start": 2921.12, "end": 2924.24, "text": " adversarial examples we keep the decision boundary stationary and move" }, { "start": 2924.24, "end": 2930.48, "text": " the images to the other side by allowing a large perpendicular derivative we make" }, { "start": 2930.48, "end": 2935, "text": " the training easier since we do not have to sharply bend decision boundary" }, { "start": 2935, "end": 2940.92, "text": " against around the training examples so this is when you train normally when you" }, { "start": 2940.92, "end": 2946.2400000000002, "text": " train without adversarial examples they say there is a large perpendicular" }, { "start": 2946.2400000000002, "end": 2954.7200000000003, "text": " derivative which in the like the what they mean is that the data samples are" }, { "start": 2954.7200000000003, "end": 2960.66, "text": " of push these dimples out that that's the large perpendicular derivative the" }, { "start": 2960.66, "end": 2966.2000000000003, "text": " perpendicularity is to the image manifold and that makes it easy because" }, { "start": 2966.2000000000003, "end": 2969.92, "text": " you don't have to bend the decision boundary a lot so you can kind of" }, { "start": 2969.92, "end": 2974.4, "text": " remain here and you have to kind of create these dimples again their" }, { "start": 2974.4, "end": 2980.84, "text": " argument is you don't want to bend this boundary a lot which makes training easy" }, { "start": 2981.44, "end": 2985.52, "text": " however such a large derivative also creates very close adversarial examples" }, { "start": 2985.52, "end": 2989.12, "text": " yeah this is their claim that now the decision boundary is pretty close" }, { "start": 2989.12, "end": 2992.88, "text": " because you don't bend the decision boundary by too much around the data" }, { "start": 2992.88, "end": 2998.84, "text": " because you do dimples any attempts to robustify a network by limiting all its" }, { "start": 2998.84, "end": 3002.84, "text": " directional derivatives will make the network harder to train and thus less" }, { "start": 3002.84, "end": 3008.82, "text": " accurate I'm not super sure how to interpret this so I 
might be doing this" }, { "start": 3008.82, "end": 3011.88, "text": " wrong right here but if you create adversarial example what you do is you" }, { "start": 3011.88, "end": 3015.96, "text": " essentially have this data point and you create an adversarial example this data" }, { "start": 3015.96, "end": 3020.1200000000003, "text": " one is yeah well these are of the same class so now that is now the the" }, { "start": 3020.1200000000003, "end": 3026.56, "text": " decision boundary has a sort of bend harder okay which makes it more hard to" }, { "start": 3026.56, "end": 3031.7999999999997, "text": " train and at some point it so it's harder to train and that's why you have" }, { "start": 3031.7999999999997, "end": 3034.88, "text": " less accuracy and at some point it says well actually I don't want to bend that" }, { "start": 3034.88, "end": 3039.04, "text": " much I'd rather make a mistake here and just bend around both of these data" }, { "start": 3039.04, "end": 3044.96, "text": " points and now you have a wrong classification so that's sort of their" }, { "start": 3044.96, "end": 3050.2799999999997, "text": " explanation of why this happens which I find a bit hand wavy you have to argue" }, { "start": 3050.2799999999997, "end": 3054.7599999999998, "text": " like ooh ease of training bending the decision boundary and so on in this" }, { "start": 3054.76, "end": 3060.7200000000003, "text": " model right here super easy okay what happens if I create cats that have cat" }, { "start": 3060.7200000000003, "end": 3065.1200000000003, "text": " fur and dog fur and I tell the network these both are cats well essentially I" }, { "start": 3065.1200000000003, "end": 3069.2400000000002, "text": " tell them I tell the network look there are two features right here the fur and" }, { "start": 3069.2400000000002, "end": 3075.88, "text": " the cat and you know the fur just just disregard it just don't do that don't" }, { "start": 3075.88, "end": 3081.32, "text": " regard the fur as a feature because it's useless now because I now have cats with" }, { "start": 3081.32, "end": 3085.76, "text": " cat fur and cat with dog fur so the network can't use that to classify" }, { "start": 3085.76, "end": 3090.04, "text": " anymore and that explains why it gets less accurate because I take away one" }, { "start": 3090.04, "end": 3095.48, "text": " useful feature okay so you know now the network has less useful features and" }, { "start": 3095.48, "end": 3101.92, "text": " that's why it gets worse this it's it's a pretty simple explanation in the" }, { "start": 3101.92, "end": 3107.6800000000003, "text": " stretchy feature model it has there's a lot of work to make this happen in the" }, { "start": 3107.68, "end": 3113.56, "text": " dimpled manifold model so lastly they try to explain and they what they came" }, { "start": 3113.56, "end": 3119.3999999999996, "text": " an interesting mystery in this this paper that I have cited throughout and" }, { "start": 3119.3999999999996, "end": 3125.2, "text": " what that is is that it's kind of the same experiment as here where we create" }, { "start": 3125.2, "end": 3130.24, "text": " adversarial examples and we add them to the training set except for two things" }, { "start": 3130.24, "end": 3137.12, "text": " first of all we don't have the original so our new data set is not going to" }, { "start": 3137.12, "end": 3142.4, "text": " contain the original images it's only going to contain the adversarial examples" }, { "start": 3142.4, "end": 3150.12, "text": " second it is 
going to contain the adversarial example image but the label" }, { "start": 3150.12, "end": 3154.8399999999997, "text": " isn't going to be the correct label quote-unquote correct from where we" }, { "start": 3154.8399999999997, "end": 3159.48, "text": " created but the label is actually going to be the adversarial label the wrong" }, { "start": 3159.48, "end": 3164.7799999999997, "text": " label okay so we're going to tell the network this is a dog please learn that" }, { "start": 3164.78, "end": 3170.84, "text": " this is a dog right it's a cat with dog fur and the old training images are" }, { "start": 3170.84, "end": 3175.0400000000004, "text": " nowhere in the data set we just do a data set with these wrongly labeled" }, { "start": 3175.0400000000004, "end": 3182.6000000000004, "text": " images now when we go and we apply this so we train we use this we train a" }, { "start": 3182.6000000000004, "end": 3187.7200000000003, "text": " network right to classify cats and dogs and now we once we've trained this" }, { "start": 3187.7200000000003, "end": 3193.28, "text": " network we go we take one of these samples of the original data set we" }, { "start": 3193.28, "end": 3198.7200000000003, "text": " classify it it's going to give us a correct classification right so it will" }, { "start": 3198.7200000000003, "end": 3203.1200000000003, "text": " recognize that this here is a cat even though we told it that this here is a" }, { "start": 3203.1200000000003, "end": 3210.84, "text": " dog now how does it do this it does this by looking at the fur you know we've" }, { "start": 3210.84, "end": 3215.5600000000004, "text": " we've doubled down on the fur here right so this is like we really made that fur" }, { "start": 3215.5600000000004, "end": 3219.48, "text": " feature super strong in these adversarial examples so it's going to" }, { "start": 3219.48, "end": 3224.84, "text": " look at the cat fur and even though none of the cats have the shape like this we" }, { "start": 3224.84, "end": 3229.96, "text": " sort of we sort of supercharged that fur feature again in this model not a" }, { "start": 3229.96, "end": 3235.16, "text": " problem essentially what we've done is we've created two data classes you know" }, { "start": 3235.16, "end": 3242.4, "text": " one up here and one down here that have the fur supercharged and now it's just" }, { "start": 3242.4, "end": 3247.28, "text": " going to mainly look at that fur structure and that is a useful feature" }, { "start": 3247.28, "end": 3253.0800000000004, "text": " right so this this what's called their features not bugs paper adversarial" }, { "start": 3253.0800000000004, "end": 3258.6800000000003, "text": " examples are features not bugs or other way around not bugs they are features" }, { "start": 3258.6800000000003, "end": 3264.0800000000004, "text": " has demonstrated with this experiment this notion that there are adversarial" }, { "start": 3264.0800000000004, "end": 3269.52, "text": " examples result from useful generalizing features in the data set" }, { "start": 3269.52, "end": 3275.84, "text": " that are simply of by definition the features that are not large enough for" }, { "start": 3275.84, "end": 3283.6400000000003, "text": " humans to see what they call non robust features how do they explain this they" }, { "start": 3283.6400000000003, "end": 3287.36, "text": " say the original people try to explain this highly surprising role by" }, { "start": 3287.36, "end": 3291.92, "text": " distinguishing between robust and non robust features in 
any given image where" }, { "start": 3291.92, "end": 3296.2000000000003, "text": " some of them are preserved by the adversarial change and some are not" }, { "start": 3296.2000000000003, "end": 3302.2000000000003, "text": " however it is not clear what makes some of the features more robust than others" }, { "start": 3302.2, "end": 3307.8399999999997, "text": " definition just definition like like if you have features and you order them by" }, { "start": 3307.8399999999997, "end": 3312.3199999999997, "text": " their size like by their how much you have to change the pixels that some" }, { "start": 3312.3199999999997, "end": 3316.3199999999997, "text": " features are going to be larger than other features and then some features" }, { "start": 3316.3199999999997, "end": 3320.48, "text": " going to be below that cutoff where you define adversarial examples budget this" }, { "start": 3320.48, "end": 3326.12, "text": " is definition makes them such that some of more robust it's not it's not clear" }, { "start": 3326.12, "end": 3331.3599999999997, "text": " our new model provides very simple alternative explanation which does not" }, { "start": 3331.36, "end": 3337.2000000000003, "text": " necessarily contradict the original one okay at least this which is summarized" }, { "start": 3337.2000000000003, "end": 3341.6800000000003, "text": " in figure four to simplify the description will use 2d vertical cut" }, { "start": 3341.6800000000003, "end": 3344.6400000000003, "text": " through the input space and consider only the decision boundary that" }, { "start": 3344.6400000000003, "end": 3351.96, "text": " separates between cats and anything else okay so they have this example right" }, { "start": 3351.96, "end": 3357.08, "text": " here they say look we have a decision boundary that distinguishes cats see" }, { "start": 3357.08, "end": 3362.7799999999997, "text": " from non cats and the green one here is the image manifold and the gray is the" }, { "start": 3362.7799999999997, "end": 3368.18, "text": " decision boundary okay so now what we do is we create adversarial examples in" }, { "start": 3368.18, "end": 3373, "text": " frame two right here you can see that we make the cats into non cats and we make" }, { "start": 3373, "end": 3379.52, "text": " the be the bats into bats aren't very popular lately the badgers into into" }, { "start": 3379.52, "end": 3386.52, "text": " cats so we make the badgers into cats and we make the cats into these whatever" }, { "start": 3386.52, "end": 3393.6, "text": " DS ducks okay and now we relabel those and that gives us a new data manifold so" }, { "start": 3393.6, "end": 3399.12, "text": " the new data manifold is this data manifold right here and we have also new" }, { "start": 3399.12, "end": 3405.28, "text": " labels and now they claim the resulting decision boundary in figure four as you" }, { "start": 3405.28, "end": 3410.7599999999998, "text": " can see right here this is the resulting decision boundary the gray one it is it" }, { "start": 3410.7599999999998, "end": 3415.4, "text": " is very similar to the decision boundary in the first frame and therefore we" }, { "start": 3415.4, "end": 3419.88, "text": " shouldn't be surprised that this new decision boundary that results from this" }, { "start": 3419.88, "end": 3425.4, "text": " perturbed data results in the same decision boundary as the original one" }, { "start": 3425.4, "end": 3436.92, "text": " okay however like why like why so their whole they have two notions notion one is" }, { "start": 3436.92, 
"end": 3442.76, "text": " that the decision boundary follows the data manifold closely except it sort of" }, { "start": 3442.76, "end": 3446.6000000000004, "text": " bends around the data a little and you can see this right here like this" }, { "start": 3446.6000000000004, "end": 3450.84, "text": " decision boundary kind of follows the data yet it just happens to be on the" }, { "start": 3450.84, "end": 3459.6400000000003, "text": " correct side of the data points at any given moment which okay okay however they" }, { "start": 3459.6400000000003, "end": 3463.48, "text": " also make the claim in different parts of their paper that bending the decision" }, { "start": 3463.48, "end": 3466.96, "text": " boundary and so on is not good you'd rather want to have a simple decision" }, { "start": 3466.96, "end": 3470.1200000000003, "text": " boundary so to me there is no reason why the decision boundary couldn't just look" }, { "start": 3470.12, "end": 3476.56, "text": " like this it would correctly classify this new data set right however it would" }, { "start": 3476.56, "end": 3485.24, "text": " not correctly classify it would not correctly classify the let's say the C" }, { "start": 3485.24, "end": 3491.44, "text": " that was right where was it right here or right here these data points it would" }, { "start": 3491.44, "end": 3498.12, "text": " not correctly classify so you see that this until now they've always had this" }, { "start": 3498.12, "end": 3503.56, "text": " data manifold to be sort of super duper straight and smooth and that's how they" }, { "start": 3503.56, "end": 3508.64, "text": " can also say well following the data manifold and not bending too much and so" }, { "start": 3508.64, "end": 3513.04, "text": " on those are not in conflict with each other but now that they are in conflict" }, { "start": 3513.04, "end": 3518.16, "text": " with each other you have to give you gonna give up one or the other and only" }, { "start": 3518.16, "end": 3523.4, "text": " in one of them do actually does this experiment here still make sense in the" }, { "start": 3523.4, "end": 3530.08, "text": " other one it doesn't and but if you give up the ooh bending too much is bad then" }, { "start": 3530.08, "end": 3536.2400000000002, "text": " you know you lose a bunch of explanations that you have up here so yeah" }, { "start": 3536.2400000000002, "end": 3542.1600000000003, "text": " like it's one in my mind it's one or the other and there's I there's still no" }, { "start": 3542.1600000000003, "end": 3547.08, "text": " reason I think no good reason why this like the decision boundary should align" }, { "start": 3547.08, "end": 3552.96, "text": " super closely with the data points like if there if there is nothing here right" }, { "start": 3552.96, "end": 3559.4, "text": " if this is perpendicular really to the data manifold like why would the" }, { "start": 3559.4, "end": 3564.4, "text": " decision boundary align so closely with the data manifold in that point I don't" }, { "start": 3564.4, "end": 3574.6, "text": " know okay so they ask why are DNN so sensitive and humans so insensitive to" }, { "start": 3574.6, "end": 3579.6, "text": " adversarial perturbations essentially their argument here is that humans" }, { "start": 3579.6, "end": 3586.8399999999997, "text": " project the input data onto the image manifold which is a contested claim" }, { "start": 3586.8399999999997, "end": 3594.12, "text": " right I don't I don't think that is a I think that is not not a widely accepted" }, { "start": 3594.12, 
"end": 3600.2799999999997, "text": " I mean it's it's certainly possible but also I'm not sure I'm not sure that" }, { "start": 3600.2799999999997, "end": 3604.68, "text": " humans do project they have like an internal manifold of natural images and" }, { "start": 3604.68, "end": 3615.52, "text": " project onto that every time they analyze an image and also the yeah how do" }, { "start": 3615.52, "end": 3621.7599999999998, "text": " you project right like how like both of these features are useful okay so both" }, { "start": 3621.7599999999998, "end": 3626.7, "text": " of the features are useful if you project an adversarial example like why" }, { "start": 3626.7, "end": 3631.3999999999996, "text": " do you project it onto the shape dimension and not onto the fur dimension" }, { "start": 3631.4, "end": 3637.12, "text": " right why there's no explanation right here we know that sort of humans are" }, { "start": 3637.12, "end": 3643.76, "text": " more receptive to shapes and so on but just projecting won't get you there so" }, { "start": 3643.76, "end": 3648.52, "text": " now they're going to into experiments and I want to highlight one particular" }, { "start": 3648.52, "end": 3652.44, "text": " experiment right here they have synthetic experiments they have their" }, { "start": 3652.44, "end": 3656.64, "text": " experiments I want to highlight this experiment right here remember they said" }, { "start": 3656.64, "end": 3661.52, "text": " their experiments were going to give you know strong support that and this" }, { "start": 3661.52, "end": 3665.56, "text": " experiment right here what they want to claim is that okay you have the data" }, { "start": 3665.56, "end": 3672.72, "text": " manifold here if you are if you have a data point and you make an adversarial" }, { "start": 3672.72, "end": 3680.7999999999997, "text": " example the question is do adversarial examples go along the image manifold or" }, { "start": 3680.7999999999997, "end": 3686.6, "text": " do adversarial examples go sort of perpendicular to the image manifold they" }, { "start": 3686.6, "end": 3692.2799999999997, "text": " they their claim again is that V this here would give support to the old view" }, { "start": 3692.2799999999997, "end": 3697.16, "text": " of adversarial examples and this here would support the dimpled manifold view" }, { "start": 3697.16, "end": 3700.64, "text": " because of course the decision boundary would be sort of following the data" }, { "start": 3700.64, "end": 3707.96, "text": " manifold curving around the data and then following the image manifold again" }, { "start": 3707.96, "end": 3713.8399999999997, "text": " so here would be sort of the other data point going below that a little bit all" }, { "start": 3713.84, "end": 3722.08, "text": " right so that is the view right here now what they're going to try to show you is" }, { "start": 3722.08, "end": 3726.52, "text": " that if you want to create an adversarial example on the manifold you" }, { "start": 3726.52, "end": 3732.4, "text": " have to walk much longer for much longer until you find an adversarial example" }, { "start": 3732.4, "end": 3738.32, "text": " then if you go off the manifold if you go yeah and they're also going to show" }, { "start": 3738.32, "end": 3742.08, "text": " you that if you are not constrained if you can go anywhere you want with an" }, { "start": 3742.08, "end": 3748.44, "text": " adversarial example then that will be very similar to when you force the" }, { "start": 3748.44, "end": 3752.08, "text": " 
adversarial example to go off the manifold and this gives a bit of proof" }, { "start": 3752.08, "end": 3756.7599999999998, "text": " that you know if two things behave equally they're you know probably equal" }, { "start": 3756.7599999999998, "end": 3761.96, "text": " so what they're going to do is they're going to try to make an adversarial" }, { "start": 3761.96, "end": 3766.64, "text": " attack first of all a regular one this one they're gonna say okay we're gonna" }, { "start": 3766.64, "end": 3770.64, "text": " make an adversarial attack let's measure how far we have to go to cross the" }, { "start": 3770.64, "end": 3774.7599999999998, "text": " decision boundary second they're going to say let's make the same thing but" }, { "start": 3774.7599999999998, "end": 3781.72, "text": " let's force the attack to be on the manifold of natural images and let's" }, { "start": 3781.72, "end": 3785.7999999999997, "text": " measure that and lastly they're going to mask okay let's do the same thing but" }, { "start": 3785.7999999999997, "end": 3791.4, "text": " force it to be off the data manifold and then they're going to measure how long" }, { "start": 3791.4, "end": 3795.8799999999997, "text": " these are how long the adversarial attacks are what's their their norm and" }, { "start": 3795.8799999999997, "end": 3800.3599999999997, "text": " they're going to find of course they're gonna want to find that these two are a" }, { "start": 3800.36, "end": 3806.76, "text": " about similar norms and way smaller than the one that is on the data manifold" }, { "start": 3806.76, "end": 3811.32, "text": " sort of giving evidence to you know if you go perpendicular to the data" }, { "start": 3811.32, "end": 3815.96, "text": " manifold you have to go very not very far and that's what adversarial attacks" }, { "start": 3815.96, "end": 3824.1200000000003, "text": " do okay yeah so how first of all how do they force the the adversarial attack to" }, { "start": 3824.1200000000003, "end": 3829.8, "text": " be on the manifold what they do is they do an autoencoder so they train an" }, { "start": 3829.8, "end": 3834, "text": " autoencoder so they an autoencoder is a neural network that has sort of a" }, { "start": 3834, "end": 3840.2400000000002, "text": " bottleneck layer and you try to just reconstruct the inputs data okay you" }, { "start": 3840.2400000000002, "end": 3844.2000000000003, "text": " tried that these two are equal however in the middle here you have a very low" }, { "start": 3844.2000000000003, "end": 3848.1600000000003, "text": " dimensional representation so where this is an n dimensional representation" }, { "start": 3848.1600000000003, "end": 3855.1600000000003, "text": " this is a k dimensional representation and a k much smaller than n if you can" }, { "start": 3855.1600000000003, "end": 3859.76, "text": " reconstruct the images correctly that means that you sort of have captured" }, { "start": 3859.76, "end": 3864.36, "text": " the representation in these low dimensions right here so what they're" }, { "start": 3864.36, "end": 3867.44, "text": " going to do is they train an autoencoder they take that low dimensional" }, { "start": 3867.44, "end": 3871.2000000000003, "text": " representation they linearize around it and that's how they have a way to" }, { "start": 3871.2000000000003, "end": 3876.6000000000004, "text": " project onto the image manifold by simply only moving around in this low" }, { "start": 3876.6000000000004, "end": 3882.5200000000004, "text": " dimensional manifold right here 
or always projecting onto it first of all" }, { "start": 3882.5200000000004, "end": 3887.6000000000004, "text": " it's a bit of a trouble because how you train the autoencoder is like for these" }, { "start": 3887.6, "end": 3892.3199999999997, "text": " experiment I think it's very relevant to how they this image manifold is going" }, { "start": 3892.3199999999997, "end": 3897.64, "text": " to look like if you train it with L2 you sort of already make some claims about" }, { "start": 3897.64, "end": 3902.04, "text": " what are important features and whatnot but let's disregard this right here" }, { "start": 3902.04, "end": 3907.4, "text": " let's say they have an accurate way of projecting onto the image manifold onto" }, { "start": 3907.4, "end": 3912.6, "text": " the manifold of natural data and here's what they find look let's look at image" }, { "start": 3912.6, "end": 3918.36, "text": " net okay no constraint PGD it this is the norm you know it's some number okay" }, { "start": 3918.36, "end": 3925.48, "text": " so like 0.14 now off manifold PGD is where they deliberately project off the" }, { "start": 3925.48, "end": 3929.12, "text": " manifold so they project on the manifold they subtract that they say you're not" }, { "start": 3929.12, "end": 3934.98, "text": " to do anything with the mana of the image manifold and that's 0.152 which is" }, { "start": 3934.98, "end": 3941.16, "text": " slightly larger than the no constraint PGD but essentially the same size now on" }, { "start": 3941.16, "end": 3948.48, "text": " manifold PGD okay here is a way bigger number like six times bigger number so" }, { "start": 3948.48, "end": 3954.7599999999998, "text": " their claim is look up up to six times more you have to go on the manifold than" }, { "start": 3954.7599999999998, "end": 3962.72, "text": " off the manifold and that gives credence to their claims now okay so what I've" }, { "start": 3962.72, "end": 3967.04, "text": " done is they have you know they have some descriptions of their experiment" }, { "start": 3967.04, "end": 3971.44, "text": " specifically they have descriptions of what library they used they used advert" }, { "start": 3971.44, "end": 3977.8, "text": " torch okay so I used advert torch to they used you know L2 PGD I use that too" }, { "start": 3977.8, "end": 3982.46, "text": " and they told me how much their low dimensional representation is so the K" }, { "start": 3982.46, "end": 3988.44, "text": " here how much that is how much the N is and so I was able to reproduce that" }, { "start": 3988.44, "end": 3995.36, "text": " experiment now what I've done is I have done the same thing and you can see" }, { "start": 3995.36, "end": 3998.92, "text": " right here this is this the panda image from image net they use an image net" }, { "start": 3998.92, "end": 4003.7200000000003, "text": " classifier and what they do is they do it greedy so they stop as soon as they" }, { "start": 4003.7200000000003, "end": 4008.84, "text": " cross the decision boundary and then they measure the norm you can see right" }, { "start": 4008.84, "end": 4017.44, "text": " here this is the perturbation now it's a soccer ball and here is the size 0.7772" }, { "start": 4017.44, "end": 4022.6800000000003, "text": " that's the norm of the original perturbation adversarial what I now do" }, { "start": 4022.68, "end": 4028.04, "text": " as I project onto the manifold but I don't the difference is I don't project" }, { "start": 4028.04, "end": 4033.24, "text": " onto the image manifold what I do is here you 
see project onto K I simply" }, { "start": 4033.24, "end": 4040.52, "text": " project onto any K dimensional manifold so I know what K is K is 3,500 so it's a" }, { "start": 4040.52, "end": 4045.2999999999997, "text": " very small number compared to the input number and so what they project is" }, { "start": 4045.2999999999997, "end": 4049.08, "text": " actually the gradient so the gradient of the adversarial attack that you use to" }, { "start": 4049.08, "end": 4052.7999999999997, "text": " update your image that's what they project they have the algorithm clearly" }, { "start": 4052.7999999999997, "end": 4059.96, "text": " lined out so what I do is I simply take you can see right here I take a random" }, { "start": 4059.96, "end": 4067.92, "text": " set of of dimensions like of pixel coordinates in the gradient and I denote" }, { "start": 4067.92, "end": 4073.58, "text": " the first you know the first few the first K as the manifold and the last K" }, { "start": 4073.58, "end": 4077.4, "text": " as not the manifold this is not the image manifold there's nothing to do with" }, { "start": 4077.4, "end": 4083.2400000000002, "text": " the image manifold this is simply a random K dimensional subspace of the" }, { "start": 4083.2400000000002, "end": 4090.44, "text": " pixel space okay and now when I project onto K I simply take all the others in" }, { "start": 4090.44, "end": 4096.68, "text": " the gradient and I set them to zero that's I project onto a K dimensional" }, { "start": 4096.68, "end": 4102.68, "text": " manifold after that you normalize the gradient and so on so you proceed you" }, { "start": 4102.68, "end": 4108.72, "text": " proceed as you would right so here you can see the the project is used before" }, { "start": 4108.72, "end": 4113.92, "text": " you normalize the gradient so there's no issue with sort of the the step size you" }, { "start": 4113.92, "end": 4119.360000000001, "text": " simply project onto the manifold and I have the same thing by the way" }, { "start": 4119.360000000001, "end": 4123.96, "text": " projecting off the manifold where I simply take the K dimensions and" }, { "start": 4123.96, "end": 4130.16, "text": " set them to zero okay so now let's look what happens if I project on to the" }, { "start": 4130.16, "end": 4138.24, "text": " manifold oh wow before it was 0.77 and now it's 6.5 so about eight times" }, { "start": 4138.24, "end": 4144.2, "text": " larger and now let's look what happens if I project off the manifold it's 0.7773" }, { "start": 4144.2, "end": 4150.92, "text": " instead of 0.7772 so what they're seeing right here and you know maybe" }, { "start": 4150.92, "end": 4154.32, "text": " okay maybe I've done it modulo I've done it wrong and I completely don't" }, { "start": 4154.32, "end": 4160.04, "text": " understand what's going on what they have found is simply an effect of" }, { "start": 4160.04, "end": 4165.32, "text": " projecting onto any lower dimensional space yet they claim that this is like" }, { "start": 4165.32, "end": 4170.12, "text": " in support of their hypothesis which clearly I have no clue what the data" }, { "start": 4170.12, "end": 4174.44, "text": " manifold is I've just projected onto a random manifold and I got the same" }, { "start": 4174.44, "end": 4180.36, "text": " results so I see they have other experiments where they try to kind of" }, { "start": 4180.36, "end": 4184.88, "text": " convince you with all the types of perturbations and so on but you know" }, { "start": 4184.88, "end": 4190.799999999999, "text": " 
like no this these they have other experiments but this is just one that I" }, { "start": 4190.799999999999, "end": 4196.799999999999, "text": " could try quickly again maybe I've done it wrong to me this Occam's razor is" }, { "start": 4196.799999999999, "end": 4204.12, "text": " strong here like Occam's razor in this work is quite a bit like there can be" }, { "start": 4204.12, "end": 4210.88, "text": " like there can be many hypotheses that coincide with the results you're getting" }, { "start": 4210.88, "end": 4217.5599999999995, "text": " and with the phenomena and it's easy to think that stuff is in favor of your" }, { "start": 4217.5599999999995, "end": 4224.16, "text": " hypothesis is providing support for it when there are other explanations" }, { "start": 4224.16, "end": 4231.5199999999995, "text": " available oh I almost forgot about Goodfellow's claim that you know they say" }, { "start": 4231.52, "end": 4238.4800000000005, "text": " belongs to the sort of old thinking that is now that is not a correct thinking" }, { "start": 4238.4800000000005, "end": 4242.92, "text": " and the claim that when you make an adversarial examples you somehow go" }, { "start": 4242.92, "end": 4248.080000000001, "text": " towards the centroid of a different class and in imagination it's something" }, { "start": 4248.080000000001, "end": 4253.160000000001, "text": " like this on the on the left right here however if you think about this in this" }, { "start": 4253.160000000001, "end": 4259.56, "text": " space okay let's say you start out here and you go towards the centroid of the" }, { "start": 4259.56, "end": 4266.4400000000005, "text": " other class right the pro where's the centroid here approximately like this" }, { "start": 4266.4400000000005, "end": 4271.56, "text": " what happens in feature space because of the stretchy feature because of the" }, { "start": 4271.56, "end": 4275.320000000001, "text": " different scales okay what happens in feature space is it pretty much like the" }, { "start": 4275.320000000001, "end": 4281.200000000001, "text": " blue arrow here so it's that in feature space you go a long way actually this is" }, { "start": 4281.200000000001, "end": 4286.64, "text": " probably I should have drawn this here to be square and this here to be super" }, { "start": 4286.64, "end": 4293.88, "text": " stretchy right yeah yeah I think so yeah I was I was wrong in drawing this so" }, { "start": 4293.88, "end": 4297.4400000000005, "text": " this here should be squares and this here actually should be super duper" }, { "start": 4297.4400000000005, "end": 4303.12, "text": " stretchy right so the centroid what was the centroid here is like way up here" }, { "start": 4303.12, "end": 4309.64, "text": " like way up here somewhere okay so this gets super stretched and you cross the" }, { "start": 4309.64, "end": 4318.160000000001, "text": " boundary in this one feature right like the fur feature and yeah so I think this" }, { "start": 4318.160000000001, "end": 4322.96, "text": " is it's still a correct claim you go towards the centroid of another class" }, { "start": 4322.96, "end": 4329.68, "text": " but because you go this in input space in the feature space this results in" }, { "start": 4329.68, "end": 4333.240000000001, "text": " sort of a dramatic shift in some features and a not so dramatic shift in" }, { "start": 4333.240000000001, "end": 4337.8, "text": " other features so while in the input space you go towards the centroid" }, { "start": 4337.8, "end": 4343.76, "text": " equally in 
all pixel directions you don't go towards the centroid equally in" }, { "start": 4343.76, "end": 4350.52, "text": " all pixel directions in the sorry in all feature directions so I think the claim" }, { "start": 4350.52, "end": 4357.68, "text": " that Goodfellow made is valid here still and explains like is concurrent with the" }, { "start": 4357.68, "end": 4362.58, "text": " stretchy feature explanation that I'm pretty sure that's also kind of what" }, { "start": 4362.58, "end": 4367, "text": " maybe I can't read his mind but maybe what he meant by that and not" }, { "start": 4367, "end": 4372.08, "text": " necessarily this picture right here not necessarily that actually the entire" }, { "start": 4372.08, "end": 4376.8, "text": " picture is going to change into the other class okay that was the" }, { "start": 4376.8, "end": 4383.54, "text": " interjection and back to the conclusion but as I said make up your own mind what" }, { "start": 4383.54, "end": 4389.42, "text": " do you what do you think of this go through the paper they it's it's a good" }, { "start": 4389.42, "end": 4393.72, "text": " paper like it's written it's written well there it has a lot of experiments" }, { "start": 4393.72, "end": 4399.6, "text": " has quite a lot of appendix where they give you more results and so on and it's" }, { "start": 4399.6, "end": 4404.16, "text": " not like again it's not like it's in it's necessarily incompatible right it's" }, { "start": 4404.16, "end": 4411.04, "text": " not I don't disagree with them I just think it's it's not as useful as they" }, { "start": 4411.04, "end": 4415.12, "text": " claim and it's kind of insufficient I don't disagree with their their main" }, { "start": 4415.12, "end": 4422.72, "text": " claims yeah and I think we already kind of knew a lot of those stuff and our" }, { "start": 4422.72, "end": 4430.76, "text": " current mental models are explaining things maybe a little a little better" }, { "start": 4430.76, "end": 4437.76, "text": " and yeah if you use the the squishy feature what would I call it the the" }, { "start": 4437.76, "end": 4443.52, "text": " stretchy feature model has a fancy name now but again is this is not mine this" }, { "start": 4443.52, "end": 4449.4800000000005, "text": " is just kind of a a bringing together of of what we what I think we know about" }, { "start": 4449.48, "end": 4454.08, "text": " adversarial examples safe to say there's going to be something that challenges" }, { "start": 4454.08, "end": 4457.959999999999, "text": " this and that's going to be exciting alright thanks so much for being here" }, { "start": 4457.96, "end": 4483.36, "text": " listening and I'll see you next time bye bye" } ]
nxEr4VNgYOE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Movement Pruning: Adaptive Sparsity by Fine-Tuning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "prune", "pruning", "transfer learning", "weights", "magnitude", "gradient", "moving", "small", "importance", "huggingface", "nlp", "natural language processing", "squad", "mnli", "bert", "transformer", "attention", "cnn", "distillation", "teacher", "sparse", "sparsity", "question answering", "mobile", "edge", "tune", "fine-tune" ]
Deep neural networks are large models and pruning has become an important part of ML product pipelines, making models small while keeping their performance high. However, the classic pruning method, Magnitude Pruning, is suboptimal in models that are obtained by transfer learning. This paper proposes a solution, called Movement Pruning and shows its superior performance. OUTLINE: 0:00 - Intro & High-Level Overview 0:55 - Magnitude Pruning 4:25 - Transfer Learning 7:25 - The Problem with Magnitude Pruning in Transfer Learning 9:20 - Movement Pruning 22:20 - Experiments 24:20 - Improvements via Distillation 26:40 - Analysis of the Learned Weights Paper: https://arxiv.org/abs/2005.07683 Code: https://github.com/huggingface/transformers/tree/master/examples/movement-pruning Abstract: Magnitude pruning is a widely used strategy for reducing model size in pure supervised learning; however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. We propose the use of movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning. We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. Experiments show that when pruning large pretrained language models, movement pruning shows significant improvements in high-sparsity regimes. When combined with distillation, the approach achieves minimal accuracy loss with down to only 3% of the model parameters. Authors: Victor Sanh, Thomas Wolf, Alexander M. Rush Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Movement Pruning: Adaptive Sparsity by Fine-Tuning, by Victor Sanh, Thomas Wolf and Alexander M. Rush of Hugging Face and Cornell University. On a high level, this paper proposes that if you have a transfer learning objective and you want to do pruning, you should not prune by weight magnitude; you should prune by how much the weights move during the transfer learning. This yields better results in the very sparse model regimes and is specifically relevant to current NLP transfer learning tasks such as BERT models. If you like content like this, consider subscribing and sharing it with your friends, and as always leave a comment if you have anything to say about this. Alright, let's dive in. They say magnitude pruning is a widely used strategy for reducing model size in pure supervised learning. So what is magnitude pruning? If I have a neural network, say a convolutional neural network, and I input my little cat right here, I have a bunch of layers, and each of these layers is made up of units, the neurons, and the next layer is also made up of neurons. What kind of neural network it is isn't that important; what is important is that you have these connections from neuron to neuron, and in, say, a fully connected network, every neuron is connected to every other neuron. In a CNN that would be slightly different, but in essence you have a lot of connections, and these are usually called weights. Now the problem is: if I train these giant neural networks and I want to ship them, for example, to mobile devices, to my customers, they won't be able to download gigabytes of models, or even hundreds of megabytes of models; it's just not possible. So what we want to do is prune this model, which means we remove a lot of these weights without losing accuracy of the network. Imagine I have a trained network, an image classifier, cats versus dogs, trained to a good accuracy; I want to delete weights but retain the performance. These methods are called pruning. What people usually do is go in a stepwise fashion: they delete some weights they deem unnecessary, then retrain the pruned network, then delete some more, and so on, until the network is of the size they want, and the hope is that you don't lose too much accuracy. So the question is how you select which weights you need and which ones you don't, and usually this is done by so-called magnitude pruning. You look at the weights, which will have some distribution, from very negative to very large positive, and you say: the weights that are very large probably contribute a lot to the signal within the network, and the weights that are quite small, given all the noise, are probably not that important. So you cut off at some magnitude and delete everything in the middle; those are the non-important weights, whereas the ones on the outside are the important weights.
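To make this concrete, here is a minimal sketch of one magnitude-pruning step for a single weight matrix, written in PyTorch. This is just an illustration of the idea; the function name and the keep ratio are my own choices, not anything from the paper:

```python
import torch

def magnitude_mask(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    # Keep the top `keep_ratio` fraction of entries by absolute value.
    scores = weight.abs().flatten()
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = torch.topk(scores, k).values.min()
    return (weight.abs() >= threshold).float()

# Example: prune a linear layer down to 10% remaining weights.
layer = torch.nn.Linear(768, 768)
mask = magnitude_mask(layer.weight.data, keep_ratio=0.10)
layer.weight.data.mul_(mask)  # zero out the pruned connections
```

In practice you would repeat this, retraining between pruning steps as described above, rather than pruning to the target sparsity in one shot.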
This is called magnitude pruning because it goes by the magnitude, the absolute value, of the weight. You don't actually need a fixed threshold for this; you simply need a method to order the weights, and then you keep removing them until you're satisfied with the size. So that's magnitude pruning. Now, what's the problem with magnitude pruning in these kinds of tasks? They say: however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. So what happens in these transfer learning regimes? Let's stay with the image example, even though the method is mostly used in NLP; we can do the same thing there. Say we have a classifier for cats and dogs, and we had a big database of cat and dog images, so we were able to train it fairly well, and we don't prune it yet; we have the full network. Now we want to adapt this to a task where we recognize whether or not the animal is sick. We develop an app for veterinarians, a short screening for a particular disease a cat might have, and since we already have this cats-versus-dogs classifier, it's reasonable to assume it has learned good features for working with cat and dog images. For the new task we only have a tiny data set, not enough to train a neural network of this size from scratch. So first we train the big neural network on cats versus dogs, and then we do transfer learning: we transfer all the weights over to the new task, sick or not sick. Of course we can't transfer the weights of the old classification head, but we hope the features will be roughly the same, so we transfer the rest and then train the weights, including the new head, on the little data set. The hope is that we already have a good starting point and only need to learn the specifics of what makes the two tasks different, instead of learning the entire task of dealing with cat and dog images from the get-go. This is called transfer learning.
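In code, a generic transfer-learning setup could look like the following sketch. I'm using torchvision's ResNet as a stand-in for the cats-versus-dogs network and a two-class head for sick versus not sick; these are placeholder choices for illustration only:

```python
import torch
import torchvision

# Backbone pre-trained on the large source task (ImageNet as a stand-in here).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# Swap the classification head for the new task: sick / not sick.
model.fc = torch.nn.Linear(model.fc.in_features, 2)

# Fine-tune on the small target data set. The small learning rate reflects the
# observation below: transferred weights tend to move only a little.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```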
Now, in this case we combine the two. First we transfer learn; then we might say this app is not only for vets, it's for anyone who has a cat or a dog at home, so anyone could scan their cat and get a probability of the cat having that disease. The transfer-learned network is still the same size as the original one, so now we want to prune it, to make it sparse so that only a couple of connections are left and it's a few kilobytes large, while retaining performance. And they say that in this step you can't just do magnitude pruning as before. Why not? Because this model is not the result of a regular training process but of a transfer learning process, where first you do the big training and then you adapt it. Ultimately, you want to prune the non-important weights, and there could be a weight that is very important for the cat-versus-dog task but not important for the sick-versus-not-sick task. We also know that in these transfer learning settings the weights don't tend to move that much; the research shows that the initial big training largely determines the weights, and once you adapt or transfer learn, they won't move much. So a weight might start out large, stay around the same place, maybe drift a little towards zero because it's not important for the new task, but it won't move far during transfer learning; that's just a property of transfer learning. This paper therefore says we can't just use magnitude pruning when we transfer learn, because that would assign importance based on the original task, cat versus dog, and we would misspecify the importance of the weights for the new task. What we should do is measure the importance with respect to the new task. How do they achieve that? On a high level: if we start out with some weight, we should observe how it moves during transfer learning. If it moves towards zero, it's probably not that important for the new task; if it moves to become even larger, it's probably important for the new task. That's the high-level idea; how you measure this movement, and how exactly you do all of this during training without making mistakes, is the point of this paper. They say: we propose movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pre-trained model fine-tuning. We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. So how do they actually do it? You can define each network layer essentially as a matrix multiplication by a weight matrix: you have the input signal x of the layer and you multiply it by the weight matrix W; pretty much any neural network can be expressed as such multiplications. If you prune the network, the forward pass instead uses W ⊙ M, where M is a mask that is one where a weight is active and zero where it is not (this ⊙ is a Hadamard, element-wise, product, not a matrix multiply). And what decides on this mask? The mask is determined by a matrix S of importance scores: for each entry in W, S says how important it is. In classic magnitude pruning, S_ij is simply the absolute value |W_ij|, and top_v means the top v most important entries get a one in the mask and everything else gets a zero. So W determines S, S determines M, and what you ultimately use in the forward pass is the masked weight matrix.
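As a small sketch, a masked forward pass of this form could look as follows; this is my own minimal illustration, not the paper's code (the paper additionally lets gradients flow straight through the mask to the scores, which is omitted here):

```python
import torch

def top_v_mask(scores: torch.Tensor, v: float) -> torch.Tensor:
    # M_ij = 1 for the top v fraction of importance scores, else 0.
    k = max(1, int(v * scores.numel()))
    threshold = torch.topk(scores.flatten(), k).values.min()
    return (scores >= threshold).float()

def masked_linear(x: torch.Tensor, W: torch.Tensor, S: torch.Tensor, v: float = 0.10):
    M = top_v_mask(S, v)
    return x @ (W * M).t()  # a = x (W ⊙ M)^T, a standard linear layer with a mask

# Magnitude pruning is the special case S = W.abs().
```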
But now we want to make S based on the movement, and movement is not really a well-defined concept, because it happens over many training steps. So how do you capture movement in a dynamic way? This paper says: via the gradient. You should observe the gradient of your loss function with respect to this importance matrix S. What does that mean? If S is the importance of a particular connection and the gradient with respect to it is large, that means the loss pulls a lot on this connection; it's a direct measure of how much the new task wants this particular importance score to move. The gradient also has a sign, positive or negative, so consider the cases. If the negative gradient (we do gradient descent, so updates follow the negative gradient) is positive and the weight is already positive, then the weight is already high and the loss function wants to push it even higher, so that must be a very important weight. The same goes if the negative gradient is negative and the weight is already negative: the weight has a negative sign, and the optimization wants it to become even more negative, so that's probably also a good weight. The other two cases mean the weight is pulled towards zero: the weight is, say, positive, but the gradient wants it to go negative. It is in principle possible that the weight crosses zero and becomes very large on the other side, but that violates our basic assumption that transfer learning doesn't move the weights much; what we care about is the local neighborhood. So we can make the fair assumption that a weight is not that important when the negative gradient goes against the sign of the weight. This picture is discrete so far, but we can assign an actual number based on how large the gradient is and how large the weight already is, which gives us a score: the importance score is the weight multiplied by the (negative) gradient of the loss with respect to that weight. They can show mathematically that if you do this over multiple steps, optimizing while you prune (they use a soft version of pruning, so mistakes can be corrected later on), the importance scores end up being an accumulation of this quantity over the entire training run. That's pretty cool, because it means you eventually get a sort of consistent estimator of the importance scores across your whole training procedure.
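Here is a minimal sketch of how such movement scores could be accumulated during fine-tuning; this is my reading of the idea with illustrative names, not the authors' implementation (in their setup the scores are learned via the straight-through estimator rather than tracked externally):

```python
import torch

class MovementScores:
    """Accumulate S ≈ -sum_t (dL/dW) * W over the training steps."""

    def __init__(self, weight: torch.Tensor, lr_s: float = 1.0):
        self.S = torch.zeros_like(weight)
        self.lr_s = lr_s

    def update(self, weight: torch.Tensor, grad: torch.Tensor) -> None:
        # Weights moving away from zero (sign(W) == sign(-grad)) gain score;
        # weights being pulled towards zero lose score.
        self.S -= self.lr_s * grad * weight

# In the training loop, after loss.backward():
#   tracker.update(layer.weight.detach(), layer.weight.grad)
# The pruning mask is then M = top_v_mask(tracker.S, v) from the sketch above.
```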
The main fear with something like this is, of course, that it's very brittle and depends heavily on the training dynamics; who knows if something bad happens in step one. But the math behind it gives some evidence that it can act as a self-correcting mechanism and is actually not too dependent on the particular training dynamics. So they set up their experiments, which have a few quirks, but let's first go through the different methods they compare. There is magnitude pruning, a zeroth-order method: you just look at the weight magnitude, the selection is top-v, the objective is just the loss, and the scores are the absolute values of the weights; we've seen this. Movement pruning, on the other hand, is first order, meaning you look at the movement, in our case via the gradient; those are the importance scores you see here. It uses a straight-through estimator, which is basically a way of saying that even though you mask some things in the forward pass, you shouldn't mask them in the backward pass, because you still want gradient signal to get through. So if you have layers and a weight right here, at least as I understand it (I have not read that paper), if you mask this weight you still want the gradient to flow backwards, because you still need the importance scores for the weights below that connect to it; I think that's what is meant, though I'm not entirely sure. You can see that the objective function is again the actual loss function. This is contrasted with a baseline called L0 regularization, which is quite similar and also first order, but has an additional regularizer, uses the Gumbel softmax to determine the scores, has a different score function, and uses a continuous hard-concrete masking function. And they have a variant of movement pruning that tends to perform a little better, soft movement pruning, where instead of optimizing just the loss function they optimize the loss plus a regularizer. Here the masking function is a thresholding function whose threshold is effectively dynamic, determined by the importance scores, and the regularizer makes the importance scores sparse. So instead of saying we want exactly the top 5% of weights, they put weight on this lambda, which causes S to become sparse, and if they're not happy with how many weights remain, they can simply increase or decrease lambda until they reach the desired sparsity. Of course there is a direct trade-off with the loss function: the more weight you put on lambda, the less weight you effectively put on the loss itself, so the trade-off is very explicit, whereas in basic movement pruning it's implicit, given by masking away the bottom (1 - v) fraction of the weights completely. The score function, though, is the same.
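As a rough illustration of the soft variant, here is a sketch of a single training step in which weights are masked by thresholding learned importance scores, a straight-through estimator lets gradient reach the scores of masked weights, and a regularizer trades sparsity off against the task loss. The threshold `tau`, the value of `lam` and the `model_out_fn` callable are assumptions made for illustration; the paper's actual masking function and scheduling differ in detail.

```python
import torch

def soft_movement_step(model_out_fn, weight, scores, x, y, loss_fn,
                       tau=0.5, lam=1e-3):
    """One conceptual step of soft movement pruning (sketch).

    weight: the layer's weight tensor (requires_grad=True)
    scores: learned importance scores, same shape as weight
    """
    soft = torch.sigmoid(scores)
    hard = (soft > tau).float()
    # Straight-through estimator: the forward pass uses the binary mask,
    # the backward pass treats the mask as sigmoid(scores), so gradient
    # still reaches the scores of masked weights.
    mask = hard + soft - soft.detach()

    prediction = model_out_fn(x, weight * mask)
    task_loss = loss_fn(prediction, y)

    # Sparsity regularizer: pushing sigmoid(scores) down makes more
    # weights fall below tau, so a larger lam means a sparser network.
    reg = lam * soft.sum()
    (task_loss + reg).backward()
    return task_loss.item()
```

An optimizer step on both `weight` and `scores` would follow. Turning `lam` up drives more scores below `tau`, which is exactly the explicit sparsity-versus-loss trade-off described above.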
Now, there are quite a number of tricks here, like a sparsity scheduling function and so on. As always in NLP and with any big models, there are a bunch of engineering tricks that make everything work better, and you can never tell exactly how much is due to that and how much is due to the actual technique. But you can assess whether it's done well, and here the rationale makes sense, which is why I tend to think it is actually a better method; and the experiments are very convincing, let's say. This here is just a pictorial comparison: magnitude pruning only looks at the weights after fine-tuning and cuts away everything in the middle, without caring what the weights were before; movement pruning, however, looks at the combination of what the weights were before and what they are now, and cuts away everything where the weights moved towards zero (these quadrants right here) while keeping everything where they moved away from zero, or rather, it orders the weights by how much they moved. Now, the experiments. As you might have figured out by now, in the machine learning and especially the NLP community, the presented methods always outperform the previous methods, and this case is no exception. They test on a number of tasks: SQuAD, MNLI and QQP. These are quite hard tasks from an NLP perspective (SQuAD is question answering, MNLI is natural language inference), so I would guess these are on the harder side of NLP tasks, which is fairly cool. First, focus on MaP, the magnitude pruning, which is the baseline if you will, and on the purple curve, SMvP, the soft movement pruning; you can also look at MvP right here, but they're approximately the same. The RPP you can maybe see in the graph performs fairly well even compared to the full model; it's another baseline, but we just want to compare the first two. You can see that in this regime magnitude pruning outperforms movement pruning, but in this other regime movement pruning is much better, and that's where the percentage of remaining weights is very low. This is the extreme sparse case, where only 10% or even 3% of the weights are left, and there movement pruning outperforms magnitude pruning by a lot. This happens across all of these tasks, as you can see right here. They also discover that you can distill the model further. Distillation is yet another technique you can use to boost the performance of the transfer-learned model. In distillation, you have your transfer-learned model and its pruned version, and you would of course train the pruned version on the task's data set; but you can additionally distill from the model you trained on the same task without pruning, which is presumably better because it still has all its weights. You run a data point through both: the same data point goes through the pruned model and you get output logits, which represent a distribution. Instead of only assigning the hard labels (here we also get the label, it's a supervised learning task, like one and zero), you also put the data point through the full model and obtain whatever it would have said, say a softer distribution. Presumably that model is already better, so you say: the label claims it's this class, but the model that's really good says you shouldn't be too sure about it, and you mix the two losses. This process of transferring the knowledge of the full model to the pruned one is called distillation, with the full model acting as the teacher.
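A loss of the kind just described is commonly written as a mixture of the hard-label cross-entropy and a KL term against the teacher's softened logits. Here is a minimal sketch; the temperature and the mixing weight `alpha` are illustrative assumptions, not values from this paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Mix the hard-label loss with agreement to the teacher (sketch)."""
    # Standard supervised loss against the hard labels.
    hard = F.cross_entropy(student_logits, labels)
    # KL divergence between softened teacher and student distributions;
    # the temperature**2 factor keeps gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard + (1 - alpha) * soft
```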
Now, if you do distillation, you can actually improve your performance even more, and they show that in the experiments here, especially again in the low-parameter regime; but you can see, for example on SQuAD, that the distilled movement-pruned method now catches up with the magnitude-pruned method also in the not-so-sparse regime. They then analyze the weights, and as expected, the magnitude-pruned method simply cuts out everything right here in the middle, which is no surprise, whereas the movement-pruned method leaves a lot of these weights alive. Since the yellow curve can outperform the red one, it is almost warranted to say that magnitude pruning wasn't the best choice: it's actually better to leave some of those small weights in and to cut out some of the weights that are large, just based on their movement. The V shape in the middle is of course due to the fact that a weight there was probably not super important in the first place, and since this method removes anything that moves towards zero, any point starting around here and moving towards zero ends up cut away. For points to remain in that region, they would have had to start even closer to the middle and then move outwards, and there just aren't as many of those, so the V shape is very natural to expect. They then analyze where in the model the weights are cut. They experiment on BERT-base, a transformer with 12 layers (if you don't know what BERT is, you can look at my video on BERT). You can see that magnitude pruning cuts the weights on all layers roughly equally: it goes through the layers and takes away, say, 90 percent of each, so you see 10 percent of weights remaining everywhere. Movement pruning, and especially soft movement pruning, makes a large difference between layers: it removes much, much more of the later layers' weights and keeps the lower-layer weights. I think if you do transfer learning from these language models, the lower layers tend to pick up, if you think of a CNN, the essential low-level features like corners and so on, while the higher layers pick up the task-specific things. In a big pre-training task you might need a lot of information there, but if you then distill and transfer down to a small task where only a single thing is important, like in SQuAD where only the answer to the question matters, you can probably remove a lot of the superfluous information, the high-level features from the pre-training task. I mean, that's my guess here, but they also have explanations for it.
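This layer-wise picture is easy to reproduce from a pruned checkpoint. A short sketch, assuming a Hugging Face-style BERT whose encoder layers are reachable under `model.bert.encoder.layer` and whose pruned weights have literally been set to zero:

```python
def remaining_weights_per_layer(model):
    """Print the fraction of surviving weights in each encoder layer."""
    fractions = []
    for i, layer in enumerate(model.bert.encoder.layer):
        total, nonzero = 0, 0
        for p in layer.parameters():
            if p.dim() == 2:  # weight matrices only, skip biases
                total += p.numel()
                nonzero += (p != 0).sum().item()
        fractions.append(nonzero / total)
        print(f"layer {i:2d}: {100 * nonzero / total:.1f}% weights remaining")
    return fractions
```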
So yeah, that was this paper. If you're still here and you enjoyed it, leave a like, tell me in the comments what you think, and I'll see you next time. Bye bye.
kP-dXK9JEhY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Symbolic Knowledge Distillation: from General Language Models to Commonsense Models (Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gpt-3", "knowledge distillation", "teacher", "student", "nlp", "natural language processing", "gpt3", "prompt engineering", "symbolic knowledge", "symbolic reasoning", "symbolic nlp", "knowledge graphs", "triples", "what does gpt-3 know", "does gpt-3 understand" ]
#gpt3 #knowledge #symbolic Symbolic knowledge models are usually trained on human-generated corpora that are cumbersome and expensive to create. Such corpora consist of structured triples of symbolic knowledge. This paper takes a different approach and attempts to generate such a corpus by prompting GPT-3. Results show that clever prompting, combined with targeted small critic models trained on human ratings can outperform both human-generated data, as well as the teacher model (GPT-3) itself. The results of this paper give a general recipe for automatically building corpora for various NLP tasks by extracting samples from large language models. OUTLINE: 0:00 - Intro & Overview 2:30 - Sponsor: Weights & Biases 4:15 - Commonsense Knowledge Graphs 7:50 - ATOMIC dataset 10:00 - Generating the corpus from a model 13:00 - Prompting GPT-3 15:30 - Generating Events 18:40 - Generating Inferences 23:00 - Evaluating the created dataset 26:45 - Introducing the critic 31:25 - Using the critic to filter the data 36:30 - Training a student on the generated data 41:00 - Key Findings 44:45 - Comments & Conclusion Paper: https://arxiv.org/abs/2110.07178 Code & Corpus: https://github.com/peterwestai2/symbolic-knowledge-distillation Sponsor: Weights & Biases https://wandb.com https://community.wandb.ai/ Abstract: The common practice for training commonsense models has gone from-human-to-corpus-to-machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, from-machine-to-corpus-to-machine: general language models author these commonsense knowledge graphs to train commonsense models. Our study leads to a new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge Distillation (Hinton et al., 2015), our approach uses larger models to teach smaller models. A key difference is that we distill knowledge symbolically-as text-in addition to the neural model. We also distill only one aspect-the commonsense of a general language model teacher, allowing the student to be a different type, a commonsense model. Altogether, we show that careful prompt engineering and a separately trained critic model allow us to selectively distill high-quality causal commonsense from GPT-3, a general language model. Empirical results demonstrate that, for the first time, a human-authored commonsense knowledge graph is surpassed by our automatically distilled variant in all three criteria: quantity, quality, and diversity. In addition, it results in a neural commonsense model that surpasses the teacher model's commonsense capabilities despite its 100x smaller size. We apply this to the ATOMIC resource, and share our new symbolic knowledge graph and commonsense models. Authors: Peter West, Chandra Bhagavatula, Jack Hessel, Jena D. 
Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, Yejin Choi Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there. Today we'll look at Symbolic Knowledge Distillation: from General Language Models to Commonsense Models by Peter West and others of the University of Washington and the Allen Institute for Artificial Intelligence. On a high level, this paper takes a new approach to symbolic knowledge generation, that is, to automatically coming up with symbolic knowledge graphs. Rather than trying to mine this symbolic knowledge from raw text or from existing knowledge bases, they mine it from GPT-3. So they use the GPT-3 large language model to first come up with a corpus of symbolic knowledge, and then they use that corpus to train a model that they call a common-sense model, but which is essentially a knowledge-graph completion model. This is a new paradigm, where you go, as they say, from machine to corpus to machine, and it is advertised here in contrast to what people did before: from human to corpus to machine, where humans generate a corpus and then you train the machine on that corpus. So we're going to look into how they do it. What they find is pretty surprising: for example, the distilled models they come up with at the end tend to be better not only than the models trained on human data, they even tend to be better than the original teacher, the GPT-3 teacher. This is a result of how they combine the different elements of the system and strategically bring in outside help in the form of human knowledge. So this could be a recipe for much broader applications, not only knowledge-graph generation but various natural language tasks: they cleverly combine prompting, training small models, and, as I said, strategically bringing in small amounts of human-annotated data. As I said, we'll go through it and look at the different stages; tell me what you think in the comments, subscribe if you haven't, and let's dive in. But first, a quick word from our sponsor, Weights and Biases, your one-stop shop. If you're a machine learning researcher, practitioner, hobbyist or power user, it does not matter: Weights and Biases is with you from the inception of your idea, tracking your experiments, to really getting the fine details right, optimizing your hyperparameters, up until you deploy your model and track all of your metrics. Not only does it do that, it also organizes your data sets and your models, and you can generate super cool reports from all of that. In addition, it gives you great insight into what you research and what you produce, and all of this runs in the cloud, really effortlessly, with a single line of code. Today, though, I want to talk to you about a not yet so well-known feature of Weights and Biases, and that is the Weights and Biases community. I believe they recently migrated this from a giant Slack onto this new, sleek community website. It's essentially a Discourse-based forum where you can get help, not only with Weights and Biases itself but with machine learning in general. And it's not only a help page, it's a discussion forum about all things machine learning. They also organize regular events, book reading groups, paper discussions and so on. So if you're interested, don't hesitate: hop over to the introduce-yourself thread and take part in the discussion. As I said, this is still a pretty young place, but it's bound to grow over the near future.
And of course, if you want any advice on Weights and Biases, how to use it, what the best practices are, this is the best place to do so. Thanks again to Weights and Biases for sponsoring this video. It's an awesome system, I invite you to check it out, and back to the video.

So what's the deal with knowledge? I can't read this without pronouncing knowledge as knowledge. So what you want to do is you want to have symbolic knowledge. And in this particular case, the symbolic knowledge they're after always consists of what they call an event and a relation. They give some examples, but essentially the event is some kind of situation that a person finds themselves in. This is common-sense reasoning, so it's not facts like "Napoleon was born in France" or something like that (I don't even know if that's true); it's common-sense reasoning. So the event is: a person, or two people, find themselves in some sort of situation. The relation is, well, it's probably better if we make an example. For example, this is the situation right here: X starts running. The relations are predefined, and we deal with seven different relations right here. The seven relations are chosen because they represent sort of causal knowledge. One of them is effect, which means: what is the effect of this event, or what is one possible effect of this event? And the goal of the model is to come up with this thing down here. So you prompt the model by saying "X starts running", we have the effect relation, so the model is supposed to come up with the effect of starting to run. Now there is not only one correct answer; there are many correct answers right here, but one example is "X gets in shape". This is not a direct logical implication, you can't prove it mathematically or check it, and that's why it's called common-sense reasoning: a human would look at this and say, X starts running; is the effect of that that X might get in shape? Yes, probably. So that is a valid triple. Let's look at another one, maybe one with two people in it. No, there is none with two people right here. Let's see: "X is not well liked", that is the event. The relation that we give to the model right here is the react relation, which means: how does X react to that event? So "X feels lonely", and that as well kind of makes sense: if you as a human judge this and apply your common sense, it makes sense. So I hope the task is clear: given an event and a relation, where the event can be anything involving X, or X and Y, which are one or two people, and can be any piece of text, and where the relation is one of seven predefined relations, you have to give the result right here, the inference, and the inference again can be any text. So this is quite a challenging task.

Humans have come up with a data set for this task (I don't know where they describe it right here) called ATOMIC 2020. So the ATOMIC data set is a data set where humans go and make these triples. It's a data set made by humans, as you would usually make data sets. This takes a lot of work and costs a lot of money, and we would like to have methods for not having to do that, so either to cut out the humans altogether or to use the human labor more strategically, such that it doesn't cost as much.
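Before moving on, to make the (event, relation, inference) format concrete, here is a minimal sketch of what such triples look like as data. The relation names follow the ATOMIC naming convention (xEffect, xReact, xNeed, ...), and the concrete examples are illustrative rather than quoted from the data set.

```python
# A commonsense triple: (event, relation, inference). Relation names
# follow the ATOMIC convention; the examples are illustrative only.
from typing import NamedTuple

class Triple(NamedTuple):
    event: str      # free-text situation involving PersonX (and maybe PersonY)
    relation: str   # one of the seven predefined causal relations
    inference: str  # free-text commonsense consequence or precondition

triples = [
    Triple("X starts running", "xEffect", "X gets in shape"),
    Triple("X is not well liked", "xReact", "X feels lonely"),
    Triple("X goes jogging", "xNeed", "to put on running shoes"),
]

for t in triples:
    print(f"{t.event} --[{t.relation}]--> {t.inference}")
```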
Also, the model that's trained on this human corpus is called COMET 2020. That is: if we simply feed the human corpus to a deep learning model and have it learn to predict the inference from the event and relation, that model is called COMET 2020, and that's going to be our baseline, and obviously we're going to surpass that. So the result of this paper is going to be another corpus, called ATOMIC 10x, which is ten times the size of the human ATOMIC data set, which is going to be larger and, with appropriate filtering, also better in quality than the original corpus, which is surprising. And then also the COMET-distill model, which is the model that's trained on the ATOMIC 10x data set, and that is going to be, depending on the filtering, largely better than the original COMET 2020 model that's trained on human data. So that's the goal: we get to a model that is better than had we trained on human data, and along the way we get a corpus that is better than the human corpus.

So again, the original paradigm was: humans think with their brains, and from the brain comes a corpus. So I invent a bunch of corpus entries, or maybe I let many humans do this; I come up with a corpus manually, then I feed that corpus to the machine, so there is a neural network right here, I train the neural network on that corpus, and the neural network thinks. Yeah, cool. The new paradigm is the following: I take a big giant neural network such as GPT-3 that is not necessarily trained on this task. I'm going to draw GPT-3 with one more layer than the other network to symbolize its absolute bigness. So GPT-3 is trained on the whole world wide web (is this a globe? this is a globe), or at least the readable part of it, and I'm going to use GPT-3 in order to come up with the corpus. And then, optionally, I'm going to filter that corpus with a model that I train on human data; this is where the human component can come in right here. Now we're going to see how this happens, but the obvious effect of this is that the human no longer needs to come up with examples: the human simply has to rate examples in order for the filtering mechanism to get better, which is much easier and much cheaper, and we don't need as much of it; I guess maybe we do, but it's essentially much cheaper for a human to rate than to come up with stuff. So we use GPT-3 to come up with a corpus, and then we use that corpus to train our model. We're going to use the power of these large language models to come up with a corpus, and of course the magic is going to be: how are we going to do this? And the answer is clever prompting.

So there's a bunch of math right here about knowledge distillation. I'm not sure, I guess they just had to put this in to get accepted, because you need a bunch of math and yada yada yada, but essentially it's irrelevant. Sorry if you disagree, authors, but this is essentially irrelevant. The key findings of the paper we're going to skip, because we get to them at the end. So what do we mean by clever prompting? We want to come up with a corpus. The corpus should have events, the corpus should have relations (the relations, of course, we know), and the corpus should have inferences. They have this general template for prompting GPT-3: they start off with a task prompt, where you briefly describe the task inside the prompt.
Then they have a bunch of examples: the input, the output, the input, the output, the input, the output. And then they have another input, and this is the input they're actually interested in, and they're going to let GPT-3 complete the output right here. Now, given that they have the task description right here, and they have this pattern of repeating inputs and outputs, you can get GPT-3 to continue the pattern and actually give you what you want. We've seen this a number of times; this is called prompting, or prompt engineering, and I predicted this right away when GPT-3 came out, that prompt engineering would be quite an important thing to do in the future. So, importantly, we don't train GPT-3: we simply query GPT-3 in a very structured way in order for us to create a data set, essentially. I think that's even against the terms of service of GPT-3, but they must have gotten an exception here.

This paper is also cool because it finds a number of interesting things about prompting. Some of you might have been aware of this, others not, but there are interesting effects: for example, you want to number these things right here, you want to label them with actual numbers, since, as they say, this increases the degree to which GPT-3 follows previous examples. And also, when they construct examples, for example like this "X goes jogging", they say that if they replace X and Y and so on by common names, it also works better. So I think it's still a bit of an art form to see exactly how you have to phrase the things you put into GPT-3 such that you get out something good.

So the first task they're going to do is create the events. Ultimately we want to create the data set, but the first step is to create the events. So they go to the ATOMIC data set, this human-generated data set, and they simply sample: they collect a set of 100 high-quality events from ATOMIC 2020 to use in their prompt. Note that, yes, they do make use of the human corpus right here, which is a little bit unfair when you compare against it. But given that it is a hundred examples, that is something you could still easily come up with, even as a researcher, or you could pay a bunch of humans; 100 examples isn't that much. So we collect a hundred, and then, every time we go to GPT-3, we randomly sample 10 and put the 10 inside of the prompt: we simply list the 10 events, for example "X overcomes evil with good", "X does not learn from Y", and so on, then we put "11." and let GPT-3 continue the prompt, and that is going to give us a next event. I guess we could even let it continue further, but there are these issues like repetition and so on, so I'm not exactly sure how well that would go. In any case, you can generate essentially infinitely many events, because even if you put the exact same 10 events in the exact same order, since you sample with nucleus sampling, it doesn't give you the same results. Therefore you can generate a lot of events; in fact, they generate 165,000 unique events, which is, as you can see, quite a bit more than the human-authored corpus, which only has 6.2 thousand events. And all you needed as a base is 100 of these events: 100 were enough in order to create 165,000. That is the power of these large language models: you can essentially count on them already having built in all of this language modeling; you might call it knowledge, or you might simply call it data that they have absorbed, but you can query that in a particular way, and the way we query it here gives us new events. All right, so this is the pretty simple way that we create new events.
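As a rough sketch, the event-generation loop could look something like this. Only the overall recipe (100 seed events, 10 numbered examples per query, nucleus sampling) comes from the paper; the task-prompt wording, the engine name, and the sampling hyperparameters here are my assumptions.

```python
# Sketch of event generation via few-shot prompting. Only the recipe
# (100 seeds, 10 numbered examples, nucleus sampling) is from the
# paper; prompt wording and hyperparameters here are assumptions.
import random
import openai  # classic OpenAI completion API

SEED_EVENTS = [
    "PersonX overcomes evil with good",
    "PersonX does not learn from PersonY",
    # ... in practice, 100 human-authored events from ATOMIC 2020
]

def build_prompt(seed_events, k=10):
    k = min(k, len(seed_events))
    examples = random.sample(seed_events, k)
    lines = ["Write short everyday events involving PersonX:"]  # assumed task prompt
    lines += [f"{i}. {e}" for i, e in enumerate(examples, start=1)]
    lines.append(f"{k + 1}.")  # GPT-3 continues the numbered pattern
    return "\n".join(lines)

def generate_event():
    resp = openai.Completion.create(
        engine="davinci",          # the paper's exact engine may differ
        prompt=build_prompt(SEED_EVENTS),
        max_tokens=30,
        top_p=0.9,                 # nucleus sampling: new output on every call
        stop="\n",                 # cut off after one generated event
    )
    return resp.choices[0].text.strip()
```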
Now, from these events we want to create the triples; the triples are going to actually make up the data set. For a triple, remember, we need an event, we need a relation, and we need an inference. The events we now have, check. The relations, there are just seven of them, they're always the same in this data set, so we have them as well. So now we can simply take an event from the data we created, pair it with a relation, and then we have to come up with an inference, and again we're going to use clever prompting and GPT-3. What the authors do is that for each relation they come up with a textual representation of that relation. By the way, the relations are described right here: there is xAttr, how X is perceived after an event; how X reacts in response to an event; what effect it has on X; what X's intent was in the event; and so on. These are the kinds of relations that we're dealing with right here. They give an example for the need relation, which is: what X needed for the event to happen. And the textual representation is as follows: they put the event with an event number right here (according to what they said at the beginning, it helps when you number the individual entries), then they write "prerequisites for this to happen," and then the actual inference goes here, right until here. They repeat this: this is one, then two, three, and so on. Again, they put ten samples into the prompt with the inference filled out, and then, for the eleventh one, they simply put the event into the prompt they have already used and let GPT-3 fill in the rest, and that thing is going to be the GPT-3-provided inference.

So they say: as in 3.2, we sample ten few-shot examples for each prompt from a set of 100 human-authored cases; for each pair of event and relation, we generate ten inferences with the second-largest model, following the same hyperparameters as event generation. They don't use the largest form of GPT-3 because it would cost them too much money, so they use the second-largest one, but you do the same thing. You generate with just very, very few human-authored cases, so that's 100 human-authored cases, and I don't know if that is 100 per relation or just 100 in total; I'm going to guess maybe per relation, I don't know, it doesn't say. It just says: we replace anonymous names with generic names, as this improves quality. However, it doesn't matter if it's a hundred or 700, it's still very, very few compared to having humans come up with an entire corpus. So what you do is simply give GPT-3 a little bit of input, like ten different things of input, and these ten things you may vary a little bit over time (you might not even have to), and let's not forget the task description up here, which also seems to be important. And then they come up with 165,000 times 7 inferences, which you can filter a little bit, but in the end this results in 6.46 million ATOMIC-style data triples. They call it ATOMIC 10x, as it contains an order of magnitude more triples than ATOMIC 2020 with respect to the seven relations they investigate. So this is now a giant corpus of machine-generated data.
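Here is a sketch of how such relation-specific templates could be rendered into a prompt. Only the xNeed wording ("prerequisites for this to happen") is given in the talk; the other template phrasings are my guesses at plausible wordings.

```python
# Relation-specific inference prompts. Only the xNeed template wording
# is from the talk; the other phrasings are guessed placeholders.
RELATION_TEMPLATES = {
    "xNeed":   "Prerequisites for this to happen:",
    "xEffect": "As a result, PersonX",        # assumed wording
    "xReact":  "As a result, PersonX feels",  # assumed wording
    "xIntent": "Because PersonX wanted",      # assumed wording
}

def inference_prompt(few_shot, event, relation):
    """few_shot: list of (event, inference) pairs for this relation."""
    template = RELATION_TEMPLATES[relation]
    lines = []
    for i, (ev, inf) in enumerate(few_shot, start=1):
        lines.append(f"Event {i}: {ev}")
        lines.append(f"{template} {inf}")
    # Final slot: GPT-3 fills in the inference after the template.
    lines.append(f"Event {len(few_shot) + 1}: {event}")
    lines.append(template)
    return "\n".join(lines)

print(inference_prompt(
    few_shot=[("PersonX goes jogging", "to put on running shoes")],
    event="PersonX starts running",
    relation="xNeed",
))
```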
I'm trying to find Table 1, where they compare the sizes. Okay, so here you can see the comparison of what that cost: the total count in ATOMIC 2020 is 600,000 triples, and ATOMIC 10x has 10 times more triples, yet cost only a fraction of what ATOMIC 2020 cost. Now the question is, of course: is this data set any good? This one here, at least, has been generated by humans; humans aren't perfect, but at least they have some common sense, and for a common-sense data set that might be important. Is the ATOMIC 10x data set any good? That's what they go about investigating right now.

So they evaluate the generated common-sense knowledge graph; they evaluate these triples. First of all, they look at diversity. They have a few diversity-related metrics, such as hard diversity, or what they call BLEU soft uniqueness, where they check for overlap between the triples and look at how many of them are unique. They also try to train a GPT-2 model and look at the entropy of the different data sets. In general, they find that the machine-generated data is quite diverse, has quite high entropy (there's not much of a problem right there), and is also quite unique. It is not as unique, it seems, as the human-generated data, but given that you have so much more of it, the absolute number of unique things is way, way higher.

The real kicker comes when you do actual human evaluation, and they have put a lot of time into humanly evaluating the quality of whatever they produce. The humans have been asked to rate these triples into categories like "always" or "often": when you see an event, a relation, and an inference, you as a human have to say, does this inference always or often follow from the event and relation? Is it "sometimes", is it "likely"? If you said one of those, the triple would be accepted, counted as good. If you as a human say, ah, that's kind of far-fetched, or that never happens, or it's invalid, then you would reject the triple. If you look at this, you can see right here: in the human-authored data set, the humans accepted 68% of the triples and rejected 11%, whereas this top row right here is the unfiltered data set we got from GPT-3 with the prompting, and you can see that the accept probability is quite a bit lower, like 8 percentage points lower, and humans also reject more often, and sometimes even say "not available", which means that you can't make any judgment on it. So the number of triples is way larger, but the quality, as assessed by humans, is a bit lower, it seems.

So now they gear up. They say, okay, can we make this better? And their answer is yes: by introducing a critic, making the teacher model more critical. They have this formula right here; maybe that math isn't as useless after all. If you simply generate language, you simply have GPT-3 be a probabilistic sequence model, a language model that says what the probability of the next token is, and you sample by that probability. But now you can introduce a critic: if this is your language model, you can introduce a critic, and the critic also has an opinion on how likely a particular sequence is. Now you consider both: you generate data with GPT-3, and then you let a critic evaluate that data, which essentially amounts to multiplying the two probabilities. In practice, you would simply run the critic on the data, and the critic decides whether the data is good or bad.
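The way I read that formula part, the critic reweights the generator; in my own notation (not necessarily the paper's exact one), something like:

```latex
% Reconstruction of the critic-reweighted generator (my notation):
p_{\text{filtered}}(x) \;\propto\; p_{\text{GPT-3}}(x)\,\cdot\, f_{\text{critic}}(x),
\qquad f_{\text{critic}}(x) \in [0, 1]
```

Since the critic score lives in [0, 1], it can only remove probability mass from bad samples, never add new ones, which matches the remark below that the critic is capped at one.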
Together, you hope that GPT-3 and the critic will produce a better data set than GPT-3 alone, because now the critic is able to filter whatever GPT-3 says and only let the good data pass. Note that, I think, the critic is probably capped at one or something like this, so this is a filtering mechanism; it can't introduce new bad data. So we would expect that the filtered corpus is hopefully better; the question is, how much better is it?

Okay, so now we introduce this critic, and the critic is where we strategically bring in human data. The critic would remove unacceptable knowledge; in practice this means filtering the generations in the large corpus and creating a range of new corpora that are higher quality, yet still larger scale, than the human-authored one. For this, they gather a training set of correct-versus-incorrect human judgments on a randomly sampled set of 10k entries of ATOMIC 10x. So they take their large corpus, they take 10,000 entries of it, and they let humans rate those 10,000 entries, much like they did for the evaluation, but this now goes as training data for the critic. That's where I said we strategically bring in human knowledge: not only do we bring it in strategically, rather than letting humans generate the entire corpus, we also make it easier for the humans, because this isn't coming up with examples; coming up with examples is hard, it takes time. These humans simply need to read examples from the corpus, these 10,000 examples, and rate each one. This can even be noisy: other than in the evaluation, where I think they gather three labels per example, they say "we only gather one annotation for each example". So this can be noisy, since it's training data. And yeah, that seems to be quite a good way of thinking about human labor in machine learning: where can we bring it in to make the biggest difference? They argue this here: it's vastly cheaper than human construction; "instead, we argue that a more useful and efficient role for humans in knowledge graph construction is to correct the mistakes of the teacher by evaluating a small number of examples". So they train a RoBERTa-large model on the human-annotated data as the critic. The critic, of course, doesn't have to be a language model; it doesn't have to generate anything. It simply has to look at the data and decide: is it good or is it not good?
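A sketch of what such a critic could look like in code: a binary acceptability classifier over triples. The checkpoint name is hypothetical, and the label scheme and threshold are my assumptions.

```python
# Sketch of the critic: a RoBERTa-large acceptability classifier
# fine-tuned on the 10k human judgments. The checkpoint name is
# hypothetical; the label scheme and threshold are assumptions.
from transformers import pipeline

critic = pipeline(
    "text-classification",
    model="my-org/roberta-large-triple-critic",  # hypothetical fine-tuned model
)

def keep(triple_text: str, threshold: float = 0.5) -> bool:
    out = critic(triple_text)[0]
    score = out["score"] if out["label"] == "ACCEPT" else 1.0 - out["score"]
    return score >= threshold

# Higher thresholds trade corpus size for quality.
corpus = ["PersonX starts running. As a result, PersonX gets in shape."]
filtered = [t for t in corpus if keep(t, threshold=0.9)]
```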
So they train that, and now we go back to the table right here. As we go down the table, more and more filtering is applied by the critic. Now you have a choice as a designer: you have this critic model, it tells you how good a particular sample is, and you get to decide the cutoff, how much you want to filter this data. This has a trade-off: the more you filter, the smaller the resulting data set is going to get. We can look at a few examples. For the first step, you go from 6.5 million to 5.1 million, which is a reduction of somewhere on the order of 20% of the data. So you throw away 20% of the data, and look at that: the accept percentage jumps from 78% to 88%. So now human raters rate the triples in the corpus that you generate and then filter as more acceptable than the corpus that was authored by humans. This is astounding already. Now, there might be a little bit of an effect here in that the humans that rated were probably the same humans, or at least humans from the same population or distribution, as the humans that rated the training data for the critic, and therefore all of these humans might have the same taste, whereas the humans that came up with the ATOMIC 2020 data set might be different humans. I'm not sure, but it is astounding. And even more astounding: as you filter more, you can clearly see the accept percentage, and therefore the quality of the data set, going up, to the point where you keep only about 40% of the data you generated from GPT-3, yet the accept percentage is about 96%, which is 10 percentage points higher than the accept percentage of the human-generated data. This is quite astounding, and still you have four to five times more data than the human-created corpus.

They also do some evaluation on the diversity of the data, and it actually turns out that as you filter more, the diversity increases. That would be the relative diversity, meaning how many percent of the data are different from the others, are unique, and so on. So it appears that when GPT-3 just creates data, it will create a lot of good stuff, but also some garbage, and as it turns out, the garbage seems to be always the same kind of garbage. Therefore, if you filter out the garbage, the uniqueness and diversity of your overall data set also increases. It's quite the opposite of, you know, the saying; was it that all unhappy families are the same, or all happy ones? I don't know, but in this case, all the garbage GPT-3 produces is kind of the same few types of garbage, whereas all the good stuff it produces is relatively unique.

All right, so this is what gets filtered out right here. First, logical misalignment, which consists of events or inferences joined in a logically inconsistent manner; it makes sense that that gets filtered out. "X cannot find his shirt; as a result, X is wearing a shirt", that should probably not be in there. And second, awkward phrasings, which consist of events or inferences that in isolation are incoherent, ambiguous, or awkwardly phrased. When an event itself is already poorly phrased, the model essentially has no chance of generating a good inference, like "person X has a fire in the bath". There is just a high chance that a human would negatively rate this, or not accept it, or say "not available", from the get-go; it doesn't even matter what the relation and the inference are.

So the last step is: we want to go back to a model. We have taken GPT-3, a model, and used it strategically to come up with a corpus that is better in quality, more diverse, and larger than the corpus that humans have generated. Now we want to go back to creating a model from that corpus, so we want to train an inference model, because right now we can only generate data, but we would like to have an inference model. Remember, the original task is, given an event and a relation, to produce an inference, which you could do with GPT-3, but it's sort of not super good, so you have to filter with the critic, and that means you have to sample until the critic says it's okay. What you'd rather have is a model that is trained on this data.
Such a model produces the inference directly, rather than you having to prompt GPT-3, so the model can be way smaller than GPT-3, because it's directly trained on the task, and you don't have to pay OpenAI every time you call it. So now we want to go back to a model, and that's pretty easy: we simply take the same architecture as this COMET model (remember, the COMET model is the model that's trained on the human data to do this inference) and train it on the large corpus.

And you know what turns out? We do that, and then we again let humans rate the triples that the models produce. For COMET 2020, the model that's trained on the human corpus, you can again see the accept percentage of the corpus itself by the raters; when we train the model on it to do this inference for us, the model produces triples that get accepted 81% of the time, which is pretty good. So if the corpus gets accepted this much, and we train an NLP model on it, it's pretty good to drop only a little bit in the accept percentage. That means the model has essentially learned (because this is obviously on a validation set) to do this inference somewhat correctly. Now, if we do the same on our large corpus, which has a lower accept percentage, we see the same effect: the model kind of learns; in fact, overall we see the same effects. If we now add a critic with a low threshold, we already surpass this model, and if we add a critic with a high threshold, which would correspond to throwing away 60% of the data, as we saw before, then the model we end up with has an 87.5% accept rating. So now we have a model that's the same size as COMET 2020: it is a trained model, it's not GPT-3, it's not prompting, it's a trained model that does inference on these triples, and it is better than the same model trained on the human corpus, which is pretty cool. So not only does it surpass GPT-3 itself, it also surpasses the model trained on the human-generated data.

And yeah, that's pretty cool. These were essentially the findings of this paper. I guess we can conclude with what they said at the beginning, the key findings right here. "Learning symbolic knowledge from language models can be framed as a symbolic extension to knowledge distillation": okay, so that's the mathy part. "Symbolic knowledge distillation constructs a high-quality knowledge graph at scale": okay, that's their data generation process. "A critical teacher results in a higher-quality student": now, granted, the critical teacher makes the quality of the data set better, and therefore any model, the student, that is trained on that data set will become better. A notable ingredient right here is that this is where we actually bring the human-annotated data into this process of automated knowledge graph generation, because we need to train that critic. "Critical teacher or not, a student can outperform the knowledge source": this is about the fact that the student models exceed the quality of GPT-3. If you simply prompt GPT-3, you get some of these triples, yet the student models that are trained on triples that come from GPT-3 outperform GPT-3, which can make sense, since GPT-3 is a general-purpose language model and the student models are specifically trained on that particular kind of data. Also, I have to say, the student models are GPT-2 models. So in the student model, what you do is: you have your corpus of event, relation, inference; event, relation, inference; these are your samples, and this is all essentially text. The relation you can abstract into either a single token, or you can make it into a text, as they did. You feed that into a GPT-2, which is something you can train, and that GPT-2 is trained to take an event and a relation into the context and then generate the inference, much like GPT-3, but now you actually train it specifically on this particular data structure and data set.
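As a minimal sketch, that student fine-tuning could look like the following; the linearization format (relation as a bracketed tag) and all hyperparameters are my assumptions, not the paper's exact setup.

```python
# Minimal sketch of the student: fine-tune GPT-2 to complete
# "event [relation]" with the inference. Linearization format and
# hyperparameters are assumptions, not the paper's exact setup.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # starts from a pretrained checkpoint
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def encode(event, relation, inference):
    # Linearize the triple; the relation becomes a bracketed tag.
    text = f"{event} [{relation}] {inference}{tok.eos_token}"
    return tok(text, return_tensors="pt")

# One training step on one triple; in practice you would loop over
# all (filtered) ATOMIC-10x triples for several epochs.
batch = encode("PersonX starts running", "xEffect", "PersonX gets in shape")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()

# At inference time: prompt with event + relation, let GPT-2 complete.
prompt = tok("PersonX starts running [xEffect]", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=12, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```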
The GPT-2 you of course pre-train on language modeling, and it could be that some of the effect that the student models exceed the quality of GPT-3 is due to the fact that they start out from a GPT-2 checkpoint; there's a possibility that that also plays into the game right here. "Machines can now win over humans for automatic knowledge graph construction": that is a little bit shady, since the critics you train are still using humans. But I would agree that at least the paper shows that there are better places to use human knowledge than letting humans come up with a text corpus, because these text corpora can be generated pretty easily using large language models and proper prompting, and if you do that, then you can use the human knowledge to filter whatever the language models output, and that might be much more effective.

So this was it for this paper. I hope to not only have shown this paper, but to have given you a little bit of an idea of what all is possible with these language models and proper prompt engineering, and I think this serves as a little bit of a recipe for a lot of things to come; a lot of NLP tasks could be tackled in this particular way. All right, so yeah, let me know what you think in the comments, and bye bye.
[ { "start": 0, "end": 5.24, "text": " Hi there. Today we'll look at symbolic knowledge distillation from general" }, { "start": 5.24, "end": 10.040000000000001, "text": " language models to common-sense models by Peter West and others of the University" }, { "start": 10.040000000000001, "end": 14.700000000000001, "text": " of Washington and the Allen Institute for Artificial Intelligence. On a high" }, { "start": 14.700000000000001, "end": 21.04, "text": " level this paper takes a new approach to symbolic knowledge generation, so to" }, { "start": 21.04, "end": 24.8, "text": " automatically coming up with knowledge graphs, with symbolic knowledge graphs," }, { "start": 24.8, "end": 30.6, "text": " and rather than trying to mine this symbolic knowledge automatically from" }, { "start": 30.6, "end": 37.52, "text": " raw text or from existing knowledge bases, they mine it from GPT-3. So they" }, { "start": 37.52, "end": 43.88, "text": " use the GPT-3 large language model in order to first come up with a corpus" }, { "start": 43.88, "end": 50.120000000000005, "text": " that gives them a corpus of symbolic knowledge and then they use that corpus" }, { "start": 50.12, "end": 55.64, "text": " in order to train a model that they call a common-sense model, but essentially a" }, { "start": 55.64, "end": 63, "text": " knowledge graph completion model. So this is a new paradigm where you go what they" }, { "start": 63, "end": 69.2, "text": " say from machine to corpus to machine and it is there the paradigm they" }, { "start": 69.2, "end": 74.62, "text": " advertise here in contrast to what people did before the from human to" }, { "start": 74.62, "end": 79.64, "text": " corpus to machine, which is where humans generate a corpus and then you train the" }, { "start": 79.64, "end": 85.52, "text": " machine on that corpus. So we're gonna look into how they do it. It's pretty" }, { "start": 85.52, "end": 91.32, "text": " surprising what they find in that for example the distilled model, the models" }, { "start": 91.32, "end": 97.4, "text": " they come up with at the end, they tend to be better not only than the humans or" }, { "start": 97.4, "end": 103.48, "text": " the human fed models, they even tend to be better than the original teacher, the" }, { "start": 103.48, "end": 109.48, "text": " GPT-3 teacher, and this is a result of how they combine the different elements" }, { "start": 109.48, "end": 115.84, "text": " here of the system and they strategically bring in outside" }, { "start": 115.84, "end": 122.24000000000001, "text": " help in the form of human knowledge. So this could be a recipe for much more" }, { "start": 122.24000000000001, "end": 128.28, "text": " broad applications, not only knowledge graph generation but various" }, { "start": 128.28, "end": 133.52, "text": " natural language tasks. They combine cleverly prompting, training small" }, { "start": 133.52, "end": 138.6, "text": " models and as I said bringing in small amounts of human annotated data" }, { "start": 138.6, "end": 143.84, "text": " strategically. So as I said we'll go through it, we'll look at the different" }, { "start": 143.84, "end": 149.04, "text": " stages and yeah tell me what you think in the comments, subscribe if you haven't" }, { "start": 149.04, "end": 155.76, "text": " and let's dive in. But first a quick word from our sponsor Weights and Biases," }, { "start": 155.76, "end": 160.28, "text": " your one-stop-shop. 
If you're a machine learning researcher, practitioner, a" }, { "start": 160.28, "end": 165.35999999999999, "text": " hobbyist, a power user, it does not matter. Weights and Biases is with you from the" }, { "start": 165.36, "end": 169.88000000000002, "text": " inception of your idea, tracking your experiments, to really getting the fine" }, { "start": 169.88000000000002, "end": 174.44000000000003, "text": " details right, optimizing your hyper parameters up until you deploy your" }, { "start": 174.44000000000003, "end": 178.92000000000002, "text": " model and track all of your metrics. Not only does it do that, it also organizes" }, { "start": 178.92000000000002, "end": 183.68, "text": " your data sets, your models and you can generate super cool reports from all of" }, { "start": 183.68, "end": 187.84, "text": " that. In addition to that, it lets you have great insight into what you" }, { "start": 187.84, "end": 192.52, "text": " research and what you produce and all of this runs in the cloud really effortless" }, { "start": 192.52, "end": 196.76000000000002, "text": " with a single line of code. Though today I want to talk to you about a yet not so" }, { "start": 196.76000000000002, "end": 200.64000000000001, "text": " well known feature of Weights and Biases and that is the Weights and Biases" }, { "start": 200.64000000000001, "end": 204.92000000000002, "text": " community. So I believe they recently migrated this from like a giant slack" }, { "start": 204.92000000000002, "end": 210.08, "text": " onto this new sleek community website. It's a discourse based forum essentially" }, { "start": 210.08, "end": 215.66000000000003, "text": " where you can get help not only for Weights and Biases stuff but also machine" }, { "start": 215.66000000000003, "end": 220.4, "text": " learning in general. But not only is it a help page, it's a discussion forum about" }, { "start": 220.4, "end": 225.52, "text": " all things machine learning. Also they organize regular events, book reading" }, { "start": 225.52, "end": 229.68, "text": " groups and paper discussions and so on. So if you're interested don't hesitate" }, { "start": 229.68, "end": 234.08, "text": " and hop over to the introduce yourself thread and take part in the discussion." }, { "start": 234.08, "end": 238.16, "text": " As I said this is still a pretty young place but it's bound to grow over the" }, { "start": 238.16, "end": 242.12, "text": " near future. And of course if you want any advice on Weights and Biases, how to" }, { "start": 242.12, "end": 246.88, "text": " use it, what are the best practices are, this is the best place to do so. Thanks" }, { "start": 246.88, "end": 250.76, "text": " again to Weights and Biases for sponsoring this video. It's an awesome system, I" }, { "start": 250.76, "end": 255.4, "text": " invite you to check it out and back to the video." }, { "start": 256.6, "end": 264.2, "text": " So what's the deal with knowledge? I can't read this without" }, { "start": 264.2, "end": 269.88, "text": " pronouncing knowledge as knowledge. So what you want to do is you want to have" }, { "start": 269.88, "end": 274.71999999999997, "text": " symbolic knowledge. And in this particular case the symbolic knowledge" }, { "start": 274.72, "end": 280.88000000000005, "text": " they're after is what they they always have to have what they call an event and" }, { "start": 280.88000000000005, "end": 289.68, "text": " a relation. 
So an event, relation, an event they give some examples but" }, { "start": 289.68, "end": 295.20000000000005, "text": " essentially the event is some kind of situation that a person finds themselves" }, { "start": 295.20000000000005, "end": 301.36, "text": " in. It's common sense reasoning. So it's not like Napoleon was born in France or" }, { "start": 301.36, "end": 304.96000000000004, "text": " something like that. I don't even know if that's true but it's not that it's" }, { "start": 304.96000000000004, "end": 309.44, "text": " common sense reasoning. So the event is a person finds themselves in some sort of" }, { "start": 309.44, "end": 315.92, "text": " situation or two people. It can be one or two people. Then the relation is some" }, { "start": 315.92, "end": 323.48, "text": " sort of, well it's probably better we make an example. The relation is some" }, { "start": 323.48, "end": 330.40000000000003, "text": " sort of this. For example this is the situation right here. X starts running." }, { "start": 330.4, "end": 336.71999999999997, "text": " The relation is, these are predefined relations and we deal with seven" }, { "start": 336.71999999999997, "end": 341.79999999999995, "text": " different relations right here. The seven relations are chosen because they" }, { "start": 341.79999999999995, "end": 348.71999999999997, "text": " represent sort of causal knowledge. One of them is effect which means what" }, { "start": 348.71999999999997, "end": 354.64, "text": " is the effect of this event or what is one possible effect of this event. And" }, { "start": 354.64, "end": 360.64, "text": " the goal of the model is to come up with this thing down here. So you prompt the" }, { "start": 360.64, "end": 365.44, "text": " model by saying X starts running. We have the effect relation so the model is" }, { "start": 365.44, "end": 370.2, "text": " supposed to come up with the effect of starting to run. Now there is not only" }, { "start": 370.2, "end": 375.36, "text": " one correct example. There are many correct examples right here but one" }, { "start": 375.36, "end": 381.52, "text": " example is X gets in shape. This is not a direct logical, you can't prove it" }, { "start": 381.52, "end": 385.56, "text": " mathematically right or you can't check it and that's why it's called common" }, { "start": 385.56, "end": 392.4, "text": " sense reasoning. A human would look at this says X starts running. Is the" }, { "start": 392.4, "end": 398.56, "text": " effect of that that X might get in shape? Yes probably. So that is a valid triple." }, { "start": 398.56, "end": 406.88, "text": " Let's look at another one. Let's maybe take one with two people in it. No there" }, { "start": 406.88, "end": 415.12, "text": " is none with two people right here. Let's see X is not well liked. That is the" }, { "start": 415.12, "end": 421.08, "text": " event. The relation that we give to the model right here is the react relation" }, { "start": 421.08, "end": 431.56, "text": " which means how does X react to that event. So X feels lonely and that as" }, { "start": 431.56, "end": 436.92, "text": " well kind of makes sense. If you as a human judge this you apply your" }, { "start": 436.92, "end": 443.12, "text": " common sense makes sense. So I hope the task is clear. Given an event and a" }, { "start": 443.12, "end": 451.32, "text": " relation where the event can be anything like anything involving X or X and Y" }, { "start": 451.32, "end": 456.52, "text": " which are one or two people and any piece of text. 
This is any piece of" }, { "start": 456.52, "end": 462.47999999999996, "text": " text right here and the relation they are seven different" }, { "start": 462.47999999999996, "end": 468.84, "text": " predefined relations. You have to give the result right here the inference and" }, { "start": 468.84, "end": 474.52, "text": " the inference again can be any text. So this is quite a challenging task." }, { "start": 474.52, "end": 480, "text": " Humans have come up with a data set for this task. I don't know where they" }, { "start": 480, "end": 486.28, "text": " describe it right here. They have come up with a data set called atomic 2020. So" }, { "start": 486.28, "end": 491.91999999999996, "text": " the atomic data set is a data set that where humans go and humans make these" }, { "start": 491.91999999999996, "end": 497.91999999999996, "text": " triples right. It's a data set made by humans as you would make data sets. This" }, { "start": 497.91999999999996, "end": 504.91999999999996, "text": " takes a lot of work, costs a lot of money and we would like to have methods for" }, { "start": 504.91999999999996, "end": 510.71999999999997, "text": " not having to do that necessarily. So either to cut out the humans all" }, { "start": 510.71999999999997, "end": 516.04, "text": " together or to use the human labor more strategically such that it doesn't cost" }, { "start": 516.04, "end": 523.8399999999999, "text": " as much. And they also the the model that's trained on this human corpus is" }, { "start": 523.8399999999999, "end": 529.9599999999999, "text": " called common sorry comet 2020. That is if we simply feed the human corpus to a" }, { "start": 529.9599999999999, "end": 534.88, "text": " deep learning model have it learn to predict the inference from the event in" }, { "start": 534.88, "end": 539.9599999999999, "text": " relation that model is called comet 2020 and that's going to be our baseline and" }, { "start": 539.9599999999999, "end": 545.7199999999999, "text": " obviously we're going to surpass that. So the result of this paper is going to be" }, { "start": 545.72, "end": 553.76, "text": " a another corpus called atomic 10x which is 10 times the size of the human atomic" }, { "start": 553.76, "end": 561.32, "text": " data set which is going to be better or larger and with appropriate filtering" }, { "start": 561.32, "end": 567.08, "text": " also better in quality than the original corpus which is surprising right. And then" }, { "start": 567.08, "end": 574.0400000000001, "text": " also the comet distill model which is the model that's trained on the atomic" }, { "start": 574.04, "end": 579.3199999999999, "text": " 10x data set and that is going to be as well depending on the filtering largely" }, { "start": 579.3199999999999, "end": 587, "text": " better than the original comet 2020 model that's trained on human data. So" }, { "start": 587, "end": 593, "text": " that's the goal that we we get there we get to a model that is better than it" }, { "start": 593, "end": 598.76, "text": " had we trained on human data and along we get a corpus that we that is better" }, { "start": 598.76, "end": 606.28, "text": " than the human corpus. 
So again the original the original paradigm was" }, { "start": 606.28, "end": 612.52, "text": " humans go humans think with their brains like here from the brain comes a corpus" }, { "start": 612.52, "end": 618.3199999999999, "text": " right so I invent a bunch of corpus entries right maybe I'm many like many I" }, { "start": 618.3199999999999, "end": 624.04, "text": " let many humans do this I come up with a corpus manually then I feed that corpus" }, { "start": 624.04, "end": 630.16, "text": " to the model through the machine so there is a neural network right here I" }, { "start": 630.16, "end": 635.36, "text": " trained the neural network on that machine neural network thinks yeah cool" }, { "start": 635.36, "end": 645.5999999999999, "text": " the new paradigm is the following I take a big giant neural network such as GPT" }, { "start": 645.5999999999999, "end": 651.8, "text": " 3 that is not necessarily trained on this task right I'm gonna make GPT 3" }, { "start": 651.8, "end": 656.1999999999999, "text": " have one more layer than the other network to symbolize its absolute" }, { "start": 656.1999999999999, "end": 667.24, "text": " bigness so GPT 3 is trained on the whole world wide is this a globe this is a" }, { "start": 667.24, "end": 674.1999999999999, "text": " globe GPT 3 is trained on the whole world wide web or at least readable part" }, { "start": 674.2, "end": 684.0400000000001, "text": " of it and I'm gonna use GPT 3 in order to come up with the corpus okay so I'm" }, { "start": 684.0400000000001, "end": 690.44, "text": " gonna use GPT 3 to come up with this corpus and then optionally optionally" }, { "start": 690.44, "end": 697.5600000000001, "text": " I'm going to filter that corpus with a model that I train on human data so this" }, { "start": 697.5600000000001, "end": 702.8000000000001, "text": " is where the human component can come in right here now we're gonna see how this" }, { "start": 702.8, "end": 708.9599999999999, "text": " happens but the obvious the obvious effect of this is that the human no" }, { "start": 708.9599999999999, "end": 714.16, "text": " longer needs to come up with examples the human simply has to rate examples in" }, { "start": 714.16, "end": 717.76, "text": " order for the filtering mechanism to get better which is much easier and much" }, { "start": 717.76, "end": 723.7199999999999, "text": " cheaper and we don't need as much I guess maybe we do but it's it's" }, { "start": 723.7199999999999, "end": 727.5999999999999, "text": " essentially it's much cheaper for the human to rate than to come up with stuff" }, { "start": 727.6, "end": 736.32, "text": " so we use GPT 3 to come up with a corpus and then we use that corpus to train our" }, { "start": 736.32, "end": 743.76, "text": " model so we're gonna use the power of these large language models to come up" }, { "start": 743.76, "end": 748.16, "text": " with corpus and of course the magic is going to be how are we going to do this" }, { "start": 748.16, "end": 755.28, "text": " and the answer is clever prompting so there's a bunch of math right here about" }, { "start": 755.28, "end": 759.88, "text": " knowledge distillation I'm not sure I guess they just had to put this in to" }, { "start": 759.88, "end": 764.8, "text": " get accepted because you need like a bunch of math and yada yada yada but" }, { "start": 764.8, "end": 773.4399999999999, "text": " essentially it's irrelevant so yeah sorry if if you disagree authors but" }, { "start": 773.52, "end": 781, "text": " yeah this is it's 
essentially irrelevant so the key findings of the paper" }, { "start": 781, "end": 786.64, "text": " so you ain't we're gonna skip this because we get this at the end so what" }, { "start": 786.64, "end": 791.8, "text": " do we mean by clever prompting we want to come up with a corpus the corpus" }, { "start": 791.8, "end": 798.04, "text": " should have events the corpus should have inference relations the relations of" }, { "start": 798.04, "end": 803.68, "text": " course we know the corpus should have inferences so they have this general" }, { "start": 803.68, "end": 810.28, "text": " template for prompting GPT 3 they start off with a task prompt where you briefly" }, { "start": 810.28, "end": 816.8399999999999, "text": " describe the task inside the prompt and then they have a bunch of examples so" }, { "start": 816.8399999999999, "end": 822.04, "text": " the input the output the input the output the input the output and then" }, { "start": 822.04, "end": 826.56, "text": " they have another input and this is the input they're actually interested in and" }, { "start": 826.56, "end": 830.8, "text": " they're gonna let GPT 3 complete the output right here now given that they" }, { "start": 830.8, "end": 835.3199999999999, "text": " have the task description right here and they have this pattern of repeating" }, { "start": 835.32, "end": 841.4000000000001, "text": " inputs and outputs you can get GPT 3 to continue the pattern and actually give" }, { "start": 841.4000000000001, "end": 846.32, "text": " you what you want right here we've seen this a number of times right here this" }, { "start": 846.32, "end": 851.9000000000001, "text": " is called prompting or prompt engineering and I predicted this right" }, { "start": 851.9000000000001, "end": 856.96, "text": " away when GPT 3 came out that prompt engineering would sort of be like a" }, { "start": 856.96, "end": 863.2800000000001, "text": " quite an important thing to do in the future so importantly we don't train GPT" }, { "start": 863.28, "end": 870.8, "text": " 3 we simply query GPT 3 in a very structured way in order for us to create" }, { "start": 870.8, "end": 876.64, "text": " a data set essentially I think that's even against the terms of service of GPT" }, { "start": 876.64, "end": 882, "text": " 3 but they must have gotten an exception here this paper is also cool because it" }, { "start": 882, "end": 887.28, "text": " finds a number of interesting things in prompting now some of you might have" }, { "start": 887.28, "end": 892.12, "text": " been aware of this others not but there are interesting effects for example you" }, { "start": 892.12, "end": 896.92, "text": " want to number these things right here you want to label them with actual" }, { "start": 896.92, "end": 903.36, "text": " numbers such as that they say this increases the degree to which GPT 3" }, { "start": 903.36, "end": 911.4, "text": " follows previous examples and also when they construct examples for example like" }, { "start": 911.4, "end": 917.96, "text": " this X goes jogging they also say if they replace X and Y and so on by common" }, { "start": 917.96, "end": 923.84, "text": " names it also works better so you really want to I think it's it's still a bit of" }, { "start": 923.84, "end": 929.84, "text": " an art form to see exactly how you have to phrase the things you put into GPT 3" }, { "start": 929.84, "end": 934.94, "text": " such that you get out something good so the first task they're gonna do is they" }, { "start": 934.94, "end": 939.64, "text": " 
gonna create these events ultimately we want to create the data set but the" }, { "start": 939.64, "end": 946.5600000000001, "text": " first step is we create the events so they go to the atomic data set this" }, { "start": 946.56, "end": 954.2399999999999, "text": " human generated data set and what they do is they simply sample so they collect" }, { "start": 954.2399999999999, "end": 960.8399999999999, "text": " a set of 100 high quality events from atomic 2020 to use in our prompt note" }, { "start": 960.8399999999999, "end": 966.64, "text": " that yes they do make use of the human corpus right here which is a little bit" }, { "start": 966.64, "end": 971.88, "text": " unfair when you think of comparing to that but given that it is a hundred" }, { "start": 971.88, "end": 976.6, "text": " examples that is something you could still easily come up with even even as a" }, { "start": 976.6, "end": 982.56, "text": " researcher right or you could you could pay a bunch of humans 100 examples isn't" }, { "start": 982.56, "end": 992.24, "text": " that much so we go and we collect a hundred and then we simply every time we" }, { "start": 992.24, "end": 999, "text": " go to GPT 3 we randomly sample 10 we put the 10 inside of the prompt right we" }, { "start": 999, "end": 1005.88, "text": " simply list the 10 events for example X overcomes evil with good X does not" }, { "start": 1005.88, "end": 1012.4, "text": " learn from Y and so on we simply list that and then we put 11 and we let GPT 3" }, { "start": 1012.4, "end": 1019.76, "text": " continue the prompt right here and that here is going to give us an a next event" }, { "start": 1019.76, "end": 1024.2, "text": " I guess we could even let it continue more but there are these issues like" }, { "start": 1024.2, "end": 1030.56, "text": " repeating and so on so I'm not exactly sure how well that would go but in any" }, { "start": 1030.56, "end": 1036.16, "text": " case you can generate essentially infinity events because even if you even" }, { "start": 1036.16, "end": 1040.24, "text": " if you put the exact 10 same events in the exact same order right since you" }, { "start": 1040.24, "end": 1046.64, "text": " sample you sample with with nuclear sampling it doesn't give you the same" }, { "start": 1046.64, "end": 1053.52, "text": " results therefore you can generate a lot of events in fact they generate 165,000" }, { "start": 1053.52, "end": 1060.8, "text": " unique events which is as you can see quite a bit more than the human authored" }, { "start": 1060.8, "end": 1066.84, "text": " corpus which only has 6.2 thousand events and all you needed as a base is" }, { "start": 1066.84, "end": 1074.44, "text": " 100 of these events right 100 were enough in order to create 165,000 that" }, { "start": 1074.44, "end": 1079.44, "text": " is the power of these large language models you can essentially count on them" }, { "start": 1079.44, "end": 1086.4, "text": " already having built in all of this sort of language modeling all of this well" }, { "start": 1086.4, "end": 1091.6000000000001, "text": " you might call it knowledge or you might simply call it data that they have" }, { "start": 1091.6000000000001, "end": 1096.8, "text": " absorbed but you can query that in a particular way and the way we create" }, { "start": 1096.8, "end": 1101.96, "text": " here it gives us new events alright so this is the way pretty simple that we" }, { "start": 1101.96, "end": 1107.72, "text": " create new events now from these events we want to create these triples right" }, { 
"start": 1107.72, "end": 1113.44, "text": " the triples are going to actually make up the data set so for a triple remember" }, { "start": 1113.44, "end": 1118.32, "text": " we need an we need an event we need a relation and then we need an inference" }, { "start": 1118.32, "end": 1123.44, "text": " so the events we now have check the relations there are just seven of them" }, { "start": 1123.44, "end": 1127.92, "text": " they're always the same in this data set so we have them as well so now we can" }, { "start": 1127.92, "end": 1134.04, "text": " simply pair take an event from the data we created pair it with a relation and" }, { "start": 1134.04, "end": 1138.12, "text": " then we have to come up with an inference and again we're going to use" }, { "start": 1138.12, "end": 1146.72, "text": " clever prompting and GPT-3 so what the authors do is that for each relation" }, { "start": 1146.72, "end": 1155.68, "text": " they come up with a they come up with a textual representation of that relation" }, { "start": 1155.68, "end": 1163.6, "text": " so the by the way the the relations are described right here there is X adder" }, { "start": 1163.6, "end": 1169.76, "text": " how X is perceived after an event how X reacts in response to an event what" }, { "start": 1169.76, "end": 1176.3999999999999, "text": " effect does it have on X what was X's intent in event and so on so these are" }, { "start": 1176.3999999999999, "end": 1180.28, "text": " the kinds of relations that we're dealing with right here they give an" }, { "start": 1180.28, "end": 1187.24, "text": " example here for the need relation which is here what X needed for the event to" }, { "start": 1187.24, "end": 1192.3999999999999, "text": " happen and their textual representation is as follows so I'm going to put the" }, { "start": 1192.4, "end": 1198, "text": " event with an event number right here according to what they said at the" }, { "start": 1198, "end": 1203.44, "text": " beginning it helps when you number the individual entries then they're gonna" }, { "start": 1203.44, "end": 1211.76, "text": " write prerequisites for this to happen comma and then the actual inference goes" }, { "start": 1211.76, "end": 1218.1200000000001, "text": " here right until here so they're going to repeat this this is one if they're" }, { "start": 1218.12, "end": 1224.1999999999998, "text": " going to repeat it two three and so on again they're going to put ten samples" }, { "start": 1224.1999999999998, "end": 1228.52, "text": " into the prompt with the inference filled out and then for the eleventh one" }, { "start": 1228.52, "end": 1235.8799999999999, "text": " they're simply going to put the event right here and the prompt that they" }, { "start": 1235.8799999999999, "end": 1240.6799999999998, "text": " have already used and then they're gonna let GPT-3 fill in the rest right here and" }, { "start": 1240.68, "end": 1253.28, "text": " that thing is going to be the GPT-3 provided inference so they say as in 3.2" }, { "start": 1253.28, "end": 1259.0800000000002, "text": " we sample ten few-shot examples for each prompt from a set of 100 human authored" }, { "start": 1259.0800000000002, "end": 1265.4, "text": " cases for each pair of event and relation we generate ten inferences with" }, { "start": 1265.4, "end": 1271.4, "text": " the second largest form following the same hyperparameters as event generation" }, { "start": 1271.4, "end": 1276.92, "text": " now they don't use the largest form of GPT-3 because it would cost them too" }, { "start": 
1276.92, "end": 1283.2, "text": " much money so they use the second largest one but you do the same thing you you" }, { "start": 1283.2, "end": 1291.7800000000002, "text": " generate just very very very few human authored cases so that's 100 100 human" }, { "start": 1291.78, "end": 1301.32, "text": " authored cases and I don't know if that is 100 per relation or just 100 in total" }, { "start": 1301.32, "end": 1311.96, "text": " I don't know I'm gonna guess maybe per relations I don't know it doesn't say" }, { "start": 1311.96, "end": 1316.44, "text": " just says we replace anonymous names with generic names as this improves" }, { "start": 1316.44, "end": 1324.24, "text": " quality however it doesn't matter if it's a hundred or or 700 it's still very" }, { "start": 1324.24, "end": 1329.1200000000001, "text": " very few compared to having humans come up with an entire corpus so what you" }, { "start": 1329.1200000000001, "end": 1333.68, "text": " want to do is you simply want to give GPT-3 a little bit of input like ten" }, { "start": 1333.68, "end": 1338.88, "text": " different things of input and these ten things you may vary a little bit over" }, { "start": 1338.88, "end": 1344.8, "text": " time you might not even have to and let's not forget the task description up" }, { "start": 1344.8, "end": 1354.56, "text": " here that also seems to be important and then they come up with 165,000 times 7" }, { "start": 1354.56, "end": 1362.36, "text": " inferences which you can filter a little bit but in the end this results in 6.46" }, { "start": 1362.36, "end": 1368.8799999999999, "text": " million atomic date atomic style data triples they call it atomic 10 X as it" }, { "start": 1368.8799999999999, "end": 1374.08, "text": " contains an order of magnitude more triples than the atomic 2020 with" }, { "start": 1374.08, "end": 1380.3999999999999, "text": " respect to the seven relations they investigate so this is a giant corpus" }, { "start": 1380.3999999999999, "end": 1386.9199999999998, "text": " right now of machine generated of machine generated data I'm trying to" }, { "start": 1386.9199999999998, "end": 1392.6, "text": " find table one where they compare the size right here okay so here you can see" }, { "start": 1392.6, "end": 1398.84, "text": " just the the comparison of what that cost you can see the total count in" }, { "start": 1398.84, "end": 1407, "text": " atomic 2020 is 600,000 triples and atomic 10 X has 10 times more triples yet" }, { "start": 1407, "end": 1415.9199999999998, "text": " cost only a fraction of what atomic 2020 cost now the question is of course is" }, { "start": 1415.9199999999998, "end": 1420.8, "text": " this data set any good you know this here at least has been generated by" }, { "start": 1420.8, "end": 1425.1599999999999, "text": " humans you know humans aren't perfect but at least they have some common sense" }, { "start": 1425.16, "end": 1431.64, "text": " therefore for a common-sense data set it might be important does the atomic 10 X" }, { "start": 1431.64, "end": 1439.6000000000001, "text": " data set is it any good and that's what they go about investigating right now so" }, { "start": 1439.6000000000001, "end": 1446.72, "text": " they evaluate degenerated common-sense knowledge graph so they evaluate now" }, { "start": 1446.72, "end": 1451.96, "text": " these triples first of all they look for diversity so they have a few diversity" }, { "start": 1451.96, "end": 1458.44, "text": " related metrics such as like hard diversity or this what they call blue" 
}, { "start": 1458.44, "end": 1463.04, "text": " soft uniqueness where they check for overlap between the triples and look how" }, { "start": 1463.04, "end": 1470.32, "text": " many of them are unique they also look they also try to train a GPT-2 model and" }, { "start": 1470.32, "end": 1478.3600000000001, "text": " look at the entropy of the different data sets and in general they find that" }, { "start": 1478.36, "end": 1485.12, "text": " the machine generated data is quite diverse as quite high entropy there's" }, { "start": 1485.12, "end": 1493, "text": " not much of a problem right there it's also quite unique it is not as unique it" }, { "start": 1493, "end": 1498.36, "text": " seems as the human generated data but given that you have so much more of it" }, { "start": 1498.36, "end": 1505.4799999999998, "text": " the absolute number of unique things is way way higher the real kicker comes" }, { "start": 1505.48, "end": 1510.68, "text": " when you do actual human evaluation so they have spent a lot of time into" }, { "start": 1510.68, "end": 1517.72, "text": " humanly evaluating the quality of whatever they produce the humans have" }, { "start": 1517.72, "end": 1525.08, "text": " been asked to rate these triples into for example always often so when you see" }, { "start": 1525.08, "end": 1530.28, "text": " an event a relation and an inference you as a human have to say does this" }, { "start": 1530.28, "end": 1535.3600000000001, "text": " inference always or often come from the event and relation is it sometimes" }, { "start": 1535.36, "end": 1541.24, "text": " is it likely if you said one of the two it would be accepted the triplet would" }, { "start": 1541.24, "end": 1545.3999999999999, "text": " be counted as good if you if you as a human say ah that's kind of far-fetched" }, { "start": 1545.3999999999999, "end": 1556.6399999999999, "text": " or that never happens or is invalid then you would you would reject the triple if" }, { "start": 1556.64, "end": 1565.24, "text": " you look at this then you can see right here in the human authored data set the" }, { "start": 1565.24, "end": 1573.5, "text": " humans accepted 68% of the triples and rejected 11% whereas this top row right" }, { "start": 1573.5, "end": 1578.5200000000002, "text": " here is the unfiltered data set we got from GPT-3 with the prompting and you can" }, { "start": 1578.5200000000002, "end": 1583.16, "text": " see that the accept probability is slightly lower actually quite a bit" }, { "start": 1583.16, "end": 1589.88, "text": " lower like 8% lower and humans also reject more often and even sometimes not" }, { "start": 1589.88, "end": 1597.3200000000002, "text": " available means that you can't make any any judgment on it so the number is it's" }, { "start": 1597.3200000000002, "end": 1602.8000000000002, "text": " way larger right but it's a bit lowering quality as assessed by humans as it" }, { "start": 1602.8000000000002, "end": 1610.44, "text": " seems so now they gear up they say okay can we make this better and their answer" }, { "start": 1610.44, "end": 1618.76, "text": " is yes by introducing a critic so making the teacher model more critical where" }, { "start": 1618.76, "end": 1622.24, "text": " they go about the following they have this formula right here maybe that math" }, { "start": 1622.24, "end": 1629.6000000000001, "text": " isn't as useless after all so if you simply generate language you simply have" }, { "start": 1629.6000000000001, "end": 1636, "text": " GPT-3 be a model a probabilistic sequence 
model a language model that" }, { "start": 1636, "end": 1641.2, "text": " simply says what is the probability of the next token and I'm going to sample" }, { "start": 1641.2, "end": 1647.44, "text": " by that probability but now what you can do is you can introduce a critic so if" }, { "start": 1647.44, "end": 1652.84, "text": " this is your language model can introduce a critic and the critic also" }, { "start": 1652.84, "end": 1658.36, "text": " will have an opinion on how likely a particular sequence is so now you" }, { "start": 1658.36, "end": 1664.44, "text": " consider both you can you generate data with GPT-3 and then you let a critic" }, { "start": 1664.44, "end": 1669.68, "text": " evaluate that data which essentially amounts to multiplying the two" }, { "start": 1669.68, "end": 1675.92, "text": " probabilities but in practice you would simply run the critic on the data and" }, { "start": 1675.92, "end": 1682, "text": " then the critic decides is this data good data or bad data and that together" }, { "start": 1682, "end": 1689.4, "text": " GPT-3 and the critic they you hope that they will produce a better data set than" }, { "start": 1689.4, "end": 1695.2, "text": " just GPT-3 alone because now the critic is able to filter whatever GPT-3 says" }, { "start": 1695.2, "end": 1703.16, "text": " and only let the good data pass note that I think it's maybe the critic is" }, { "start": 1703.16, "end": 1708, "text": " probably capped at one or something like this so this is a filtering mechanism" }, { "start": 1708, "end": 1714.64, "text": " it's not like you can you can introduce new bad data so we would expect that the" }, { "start": 1714.64, "end": 1721.1200000000001, "text": " filtered corpus is is hopefully better the question is how much better is it" }, { "start": 1721.1200000000001, "end": 1728.0400000000002, "text": " ok so now we introduce this critic and the critic is now is where we" }, { "start": 1728.0400000000002, "end": 1734.8000000000002, "text": " strategically bring in human data the critic would remove unacceptable" }, { "start": 1734.8000000000002, "end": 1738.96, "text": " knowledge in practice this means filtering the generations in the large" }, { "start": 1738.96, "end": 1743.0800000000002, "text": " corpus and creating a range of new corporate that are higher quality yet" }, { "start": 1743.08, "end": 1751.1599999999999, "text": " still larger scale than the human the human authored one so for this they" }, { "start": 1751.1599999999999, "end": 1756.6799999999998, "text": " gather a training set of correct versus incorrect humans who human judgments on" }, { "start": 1756.6799999999998, "end": 1763.4399999999998, "text": " a randomly sampled set of 10k entries of atomic 10x so they take their large" }, { "start": 1763.4399999999998, "end": 1769.6, "text": " corpus they take 10,000 entries of it and they let humans rate those 10,000" }, { "start": 1769.6, "end": 1777.04, "text": " entries much like they did here for the evaluation but this now counts as this" }, { "start": 1777.04, "end": 1781.6399999999999, "text": " now goes as training data for the critic and that's where I said we" }, { "start": 1781.6399999999999, "end": 1787, "text": " strategically bring in human knowledge and not only do we strategically bring" }, { "start": 1787, "end": 1792.1599999999999, "text": " it in rather than letting letting humans generate the entire corpus we also make" }, { "start": 1792.1599999999999, "end": 1797.28, "text": " it easier for humans because this isn't coming up 
with examples coming up with" }, { "start": 1797.28, "end": 1801.8799999999999, "text": " examples is hard it takes time these humans here they simply need to read" }, { "start": 1801.8799999999999, "end": 1807.48, "text": " examples of the corpus these 10,000 examples and for each one they have to" }, { "start": 1807.48, "end": 1813.32, "text": " rate it and this can even be noisy so other than in the evaluation where I" }, { "start": 1813.32, "end": 1817.8799999999999, "text": " think they gather three labels per data set they say we only gather one" }, { "start": 1817.8799999999999, "end": 1823.92, "text": " annotation for each example so this can be noisy since its training data and" }, { "start": 1823.92, "end": 1831.92, "text": " yeah that seems to be quite a quite a good way of thinking about human labor" }, { "start": 1831.92, "end": 1837.2, "text": " in machine learning it's sort of where can we bring it in to make the biggest" }, { "start": 1837.2, "end": 1844.72, "text": " difference now when they do that yeah so they argue this here it's vastly cheaper" }, { "start": 1844.72, "end": 1849.76, "text": " than human construction instead we argue that a more useful and efficient role" }, { "start": 1849.76, "end": 1854.48, "text": " for humans in knowledge graph construction is to correct the mistakes" }, { "start": 1854.48, "end": 1860.32, "text": " of the teacher by evaluating a small number of examples so they train a" }, { "start": 1860.32, "end": 1866.96, "text": " Roberta large model on the human annotated data as the critic the critic" }, { "start": 1866.96, "end": 1870.56, "text": " of course doesn't have to be a language model it doesn't have to generate" }, { "start": 1870.56, "end": 1874.8799999999999, "text": " anything it simply has to look at the data and decide is it good or is it not" }, { "start": 1874.88, "end": 1889.2800000000002, "text": " good so they train that and and and yeah now we go back to the table right here" }, { "start": 1889.2800000000002, "end": 1897.92, "text": " these here as we go down the table more and more filtering is applied by the" }, { "start": 1897.92, "end": 1902.8400000000001, "text": " critic so now you have a choice as a designer right you have this critic" }, { "start": 1902.84, "end": 1909.1999999999998, "text": " model it tells you about how good a particular sample is and now you get to" }, { "start": 1909.1999999999998, "end": 1914.6, "text": " the side the cutoff you know how much do I want to filter this data right here" }, { "start": 1914.6, "end": 1921.36, "text": " now this will have a trade-off the more you filter the smaller the resulting" }, { "start": 1921.36, "end": 1927.72, "text": " data set is going to get so we can look at a few examples for the first step you" }, { "start": 1927.72, "end": 1934.2, "text": " go from 5.6 million as for sorry from 6.5 to 5.1 which is a reduction in" }, { "start": 1934.2, "end": 1942.2, "text": " somewhere between somewhere on the order of 20% of data so you throw away 20% of" }, { "start": 1942.2, "end": 1949.48, "text": " data look at that the accept percentage jumps from 78% to 88% so now human" }, { "start": 1949.48, "end": 1956.72, "text": " raters human raters rate these triples in the corpus that you generate and then" }, { "start": 1956.72, "end": 1964.16, "text": " filter as more likely a more acceptable than the corpus that was authored by" }, { "start": 1964.16, "end": 1972.56, "text": " humans like this is this is astounding already right now there might be a" }, { "start": 
1972.56, "end": 1978.68, "text": " little bit of an effect here in that probably the humans that rated were the" }, { "start": 1978.68, "end": 1984.3600000000001, "text": " same humans or at least you know humans from the same population or distribution" }, { "start": 1984.36, "end": 1992.32, "text": " then the humans that rated the training data for the critic and therefore all of" }, { "start": 1992.32, "end": 1996.12, "text": " these humans might sort of have the same taste whereas the humans that came up" }, { "start": 1996.12, "end": 2002.28, "text": " with the atomic 2020 data set might be different humans I'm not sure but it is" }, { "start": 2002.28, "end": 2007.4799999999998, "text": " astounding and even more astounding as you filter more you can clearly see the" }, { "start": 2007.4799999999998, "end": 2013.1999999999998, "text": " accept percentage therefore the quality of the data set going up and to the" }, { "start": 2013.2, "end": 2019.92, "text": " point where you keep about 40% of the data that you've generated from GPT-3 yet" }, { "start": 2019.92, "end": 2027.16, "text": " the accept percentage is like 96% which is 10% higher 10 percentage points" }, { "start": 2027.16, "end": 2033.8400000000001, "text": " higher than the accept percentage of the human generated data right this is quite" }, { "start": 2033.8400000000001, "end": 2039.8, "text": " this is quite astounding and still you have like four to five times more data" }, { "start": 2039.8, "end": 2048.8, "text": " than the human created corpus and they do some they do some they do some" }, { "start": 2048.8, "end": 2053.96, "text": " evaluation also again on the diversity of the data and actually turns out that" }, { "start": 2053.96, "end": 2060.12, "text": " as you go as you filter more the diversity increases so that would be the" }, { "start": 2060.12, "end": 2068.2799999999997, "text": " relative diversity meaning sort of how how many percent of the data are you" }, { "start": 2068.28, "end": 2075.92, "text": " know different from other how are unique and so on so it appears to be that GPT-3" }, { "start": 2075.92, "end": 2080.0800000000004, "text": " when it just creates data it will create a lot of good stuff but also some" }, { "start": 2080.0800000000004, "end": 2086.44, "text": " garbage and as it turns out the garbage seems to be always the same kind of" }, { "start": 2086.44, "end": 2091.48, "text": " garbage therefore if you filter out the garbage also the uniqueness and" }, { "start": 2091.48, "end": 2096.6400000000003, "text": " diversity of your overall data set increases so it's quite the opposite of" }, { "start": 2096.64, "end": 2103.8399999999997, "text": " you know you always hear this no I guess I guess it's that the saying that all" }, { "start": 2103.8399999999997, "end": 2109.08, "text": " was it was it all unhealthy families are the same or all healthy ones I don't" }, { "start": 2109.08, "end": 2114.56, "text": " know but in this case all the garbage GPT-3 produces is kind of the same kind" }, { "start": 2114.56, "end": 2120.3399999999997, "text": " of garbage or the same few types of garbage whereas all the good stuff it" }, { "start": 2120.34, "end": 2129.1600000000003, "text": " produces is relatively unique alright so now we have a really yeah this is what" }, { "start": 2129.1600000000003, "end": 2136.48, "text": " gets filtered out right here so first of all logical misalignment consists of" }, { "start": 2136.48, "end": 2141.36, "text": " events or inferences joined in a logically 
inconsistent manner that makes" }, { "start": 2141.36, "end": 2147.04, "text": " sense that that gets filtered out X cannot find his shirt as a result X is" }, { "start": 2147.04, "end": 2153.36, "text": " wearing a shirt that should probably not be in there and two awkward phrasings" }, { "start": 2153.36, "end": 2157.68, "text": " which consists of events or inferences that in isolation are incoherent" }, { "start": 2157.68, "end": 2163.08, "text": " ambiguous or awkwardly phrased so when an event itself is already poorly" }, { "start": 2163.08, "end": 2167.6, "text": " phrased the model essentially has no chance of generating good inference" }, { "start": 2167.6, "end": 2175.92, "text": " like person X has a fire in the bath yeah so there there is just there's a" }, { "start": 2175.92, "end": 2181.7200000000003, "text": " high chance that a human would would negatively rate this or not accept it or" }, { "start": 2181.7200000000003, "end": 2187.76, "text": " say it not available even like from the get-go doesn't even matter what the" }, { "start": 2187.76, "end": 2198, "text": " relation and the inference is right so the last step is the last step is we" }, { "start": 2198, "end": 2204.28, "text": " want to go back to a model so we have taken GPT-3 a model we have used it" }, { "start": 2204.28, "end": 2211.0400000000004, "text": " strategically to come up with a corpus that is both better in quality more" }, { "start": 2211.0400000000004, "end": 2217.36, "text": " diverse and larger than the corpus that humans have generated and now we want to" }, { "start": 2217.36, "end": 2222.6400000000003, "text": " go back to creating a model from that corpus so when a train an inference" }, { "start": 2222.6400000000003, "end": 2226.88, "text": " model because right now we can only generate data but we would like to have" }, { "start": 2226.88, "end": 2235, "text": " an inference model and remember the original task the inference is to given" }, { "start": 2235, "end": 2242.04, "text": " an event and a relation to produce and to produce either produce an inference" }, { "start": 2242.04, "end": 2252.32, "text": " right which you could do with GPT-3 but it's it's sort of not super good so you" }, { "start": 2252.32, "end": 2255.88, "text": " have to filter with the critic but that means you have to like sample until the" }, { "start": 2255.88, "end": 2260.12, "text": " critic says it's okay what you'd rather have is you just like to have a model" }, { "start": 2260.12, "end": 2267.4, "text": " that is trained on this data to produce directly the inference rather than" }, { "start": 2267.4, "end": 2274.2000000000003, "text": " having to prompt GPT-3 right so the model can be way smaller than GPT-3" }, { "start": 2274.2000000000003, "end": 2278.48, "text": " because it's directly trained on the task and you don't have to pay open AI" }, { "start": 2278.48, "end": 2283.2000000000003, "text": " every time you call it so now I want to go back to a model and that's pretty" }, { "start": 2283.2, "end": 2289.56, "text": " easy right we simply take a the same architecture as this comet model" }, { "start": 2289.56, "end": 2293.2799999999997, "text": " remember the comet model is the model that's trained on this human data to do" }, { "start": 2293.2799999999997, "end": 2298.24, "text": " this inference simply take same architecture and we train it on the" }, { "start": 2298.24, "end": 2311.6, "text": " large corpus and you know what what turns out so on it turns out that we do" }, { "start": 2311.6, "end": 
2318.68, "text": " that and then we let again humans rate the triples that the models produce so" }, { "start": 2318.68, "end": 2325.6, "text": " for the comet 2020 this is the model that's trained on the human corpus this" }, { "start": 2325.6, "end": 2330.96, "text": " here you can again see the accept percentage by the raters of of the" }, { "start": 2330.96, "end": 2337.64, "text": " corpus itself when we train the model on it to do this inference for us the" }, { "start": 2337.64, "end": 2344.4, "text": " model produces triples that get accepted 81% of the time which is pretty good" }, { "start": 2344.4, "end": 2350, "text": " right so if the corpus gets accepted this much we train a model on it an NLP" }, { "start": 2350, "end": 2358.3599999999997, "text": " model it's pretty good to drop only a little bit in the accept percentage that" }, { "start": 2358.3599999999997, "end": 2362.6, "text": " means the model has essentially learned because this is obviously on a on a" }, { "start": 2362.6, "end": 2368.2799999999997, "text": " validation set the model has obviously learned to do this inference somewhat" }, { "start": 2368.2799999999997, "end": 2376.8399999999997, "text": " correctly now if we do the same on our large corpus that has lower accept" }, { "start": 2376.8399999999997, "end": 2381.7999999999997, "text": " percentage we see the same effect so the model kind of learns in fact overall we" }, { "start": 2381.7999999999997, "end": 2390.12, "text": " see the same effects if we now add a critic with a low threshold then we" }, { "start": 2390.12, "end": 2395.44, "text": " surpass already this model and we if we add a critic with the high threshold so" }, { "start": 2395.44, "end": 2400.7599999999998, "text": " that would correspond to throwing away 60% of the data as we saw before then" }, { "start": 2400.7599999999998, "end": 2407.7999999999997, "text": " the model that we end up with has an 87.5% accept rating so now we have a" }, { "start": 2407.7999999999997, "end": 2417.96, "text": " model that's the same size as this comet 2020 right it is an a trained model" }, { "start": 2417.96, "end": 2422.68, "text": " it's not GPT-3 it's not prompting it's a trained model that does inference in" }, { "start": 2422.68, "end": 2429.92, "text": " these triples and it is better it is better than the model the same model" }, { "start": 2429.92, "end": 2438.2400000000002, "text": " that's been trained on the human corpus which is pretty cool right so you even" }, { "start": 2438.2400000000002, "end": 2446.16, "text": " you it not only does it surpass GPT-3 itself it also surpasses the human" }, { "start": 2446.16, "end": 2456.7999999999997, "text": " generated data and yeah that's pretty cool so this was essentially the the" }, { "start": 2456.7999999999997, "end": 2462.04, "text": " findings of this paper I guess we can go back to conclude with what they said at" }, { "start": 2462.04, "end": 2466.52, "text": " the beginning the key findings right here learning symbolic knowledge from" }, { "start": 2466.52, "end": 2470.3599999999997, "text": " language models can be framed as a symbolic extension to knowledge" }, { "start": 2470.3599999999997, "end": 2476, "text": " distillation okay so that's the that's the mathy part symbolic knowledge" }, { "start": 2476, "end": 2482.52, "text": " distillation constructs a high quality knowledge graph at scale okay that's" }, { "start": 2482.52, "end": 2490.32, "text": " their data generation process a critical teacher results in a higher quality" }, 
{ "start": 2490.32, "end": 2497.4, "text": " student now granted the critical teacher makes the quality of the data set better" }, { "start": 2497.4, "end": 2502.8, "text": " and therefore any model the student that is trained on that data set it will" }, { "start": 2502.8, "end": 2506.92, "text": " become better a notable ingredient right here is that here is where we actually" }, { "start": 2506.92, "end": 2513.84, "text": " bring in the human the human annotated data into this process of automated" }, { "start": 2513.84, "end": 2521.28, "text": " knowledge graph generation because we need to train that critic critical" }, { "start": 2521.28, "end": 2526.92, "text": " teachers or not a student can outperform the knowledge source so this is about" }, { "start": 2526.92, "end": 2534.88, "text": " that the student model they exceed the quality of GPT-3 which so if you simply" }, { "start": 2534.88, "end": 2540.28, "text": " prompt GPT-3 you get some of these triples right yet the student models" }, { "start": 2540.28, "end": 2546.76, "text": " that are trained on these triples that come from GPT-3 outperform GPT-3 which" }, { "start": 2546.76, "end": 2552.28, "text": " can make sense since GPT-3 is a general purpose language model and these student" }, { "start": 2552.28, "end": 2558.6400000000003, "text": " models are specifically trained on that particular kind of data and also I have" }, { "start": 2558.6400000000003, "end": 2566, "text": " to say the student models they are their GPT-2 so in the student model what you" }, { "start": 2566, "end": 2570.76, "text": " would do is you have your corpus you have event relation inference event" }, { "start": 2570.76, "end": 2575.84, "text": " relation inference where these are your samples this is this is all text" }, { "start": 2575.84, "end": 2580.36, "text": " essentially right so the relation you can abstract that in a either a single" }, { "start": 2580.36, "end": 2587.76, "text": " token or you can make it into a text as they did so they feed that into a GPT-2" }, { "start": 2587.76, "end": 2595.36, "text": " which is something that you can train and that GPT-2 is trained to take in an" }, { "start": 2595.36, "end": 2602.04, "text": " event and a relation into the context and then generate the inference much" }, { "start": 2602.04, "end": 2606.96, "text": " like GPT-3 but now you actually train it specifically on this particular data" }, { "start": 2606.96, "end": 2613.36, "text": " structure and data set and the GPT-2 you pre train it of course on language" }, { "start": 2613.36, "end": 2619.88, "text": " modeling and it could be that some of the effect that the students model" }, { "start": 2619.88, "end": 2626.04, "text": " exceed the quality of GPT-3 might be due to the fact that it starts out already" }, { "start": 2626.04, "end": 2632.28, "text": " from a GPT-2 checkpoint it's it's a possib like there's a possibility that" }, { "start": 2632.28, "end": 2639.0400000000004, "text": " that also plays into the game right here machines can now win over humans for" }, { "start": 2639.0400000000004, "end": 2647.44, "text": " automatic knowledge graph construction so that is a little bit it's a little bit" }, { "start": 2647.44, "end": 2655.44, "text": " is a little bit shady since the critics you train are still using humans but I" }, { "start": 2655.44, "end": 2662.12, "text": " would agree that at least the paper shows that there are better places to" }, { "start": 2662.12, "end": 2668.76, "text": " use human knowledge than letting 
humans come up with a text corpus because these" }, { "start": 2668.76, "end": 2675.36, "text": " text corpora can be generated pretty easily using large language models and" }, { "start": 2675.36, "end": 2680.2000000000003, "text": " proper prompting and if you do that then you can use the human knowledge to" }, { "start": 2680.2000000000003, "end": 2684.52, "text": " filter whatever the language models output and that might be much more" }, { "start": 2684.52, "end": 2692.44, "text": " effective so this was it for this paper I hope to not only show this paper but" }, { "start": 2692.44, "end": 2698.24, "text": " show give you a little bit of an idea of what all is possible with these language" }, { "start": 2698.24, "end": 2704.72, "text": " models and proper prompt engineering and I think this serves as a little bit of a" }, { "start": 2704.72, "end": 2711.56, "text": " recipe for many or a lot of things to come a lot of NLP tasks to be done could" }, { "start": 2711.56, "end": 2717.32, "text": " be tackled in this particular way alright so yeah let me know what you" }, { "start": 2717.32, "end": 2746.6000000000004, "text": " think in the comments and bye bye" } ]
a6v92P0EbJc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Neural Architecture Search without Training (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nas", "nas-bench", "architecture search", "initialization", "untrained", "cifar10", "imagenet", "neural architecture search", "controller", "rnn", "correlation", "gradient", "jacobian", "linearization" ]
#ai #research #machinelearning Neural Architecture Search is typically very slow and resource-intensive. A meta-controller has to train many hundreds or thousands of different models to find a suitable building plan. This paper proposes to use statistics of the Jacobian around data points to estimate the performance of proposed architectures at initialization. This method does not require training and speeds up NAS by orders of magnitude. OUTLINE: 0:00 - Intro & Overview 0:50 - Neural Architecture Search 4:15 - Controller-based NAS 7:35 - Architecture Search Without Training 9:30 - Linearization Around Datapoints 14:10 - Linearization Statistics 19:00 - NAS-201 Benchmark 20:15 - Experiments 34:15 - Conclusion & Comments Paper: https://arxiv.org/abs/2006.04647 Code: https://github.com/BayesWatch/nas-without-training Abstract: The time and effort involved in hand-designing deep neural networks is immense. This has prompted the development of Neural Architecture Search (NAS) techniques to automate this design. However, NAS algorithms tend to be extremely slow and expensive; they need to train vast numbers of candidate networks to inform the search process. This could be remedied if we could infer a network's trained accuracy from its initial state. In this work, we examine how the linear maps induced by data points correlate for untrained network architectures in the NAS-Bench-201 search space, and motivate how this can be used to give a measure of modelling flexibility which is highly indicative of a network's trained performance. We incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU. Code to reproduce our experiments is available at this https URL. Authors: Joseph Mellor, Jack Turner, Amos Storkey, Elliot J. Crowley Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar (preferred to Patreon): https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there! Today we're looking at Neural Architecture Search without Training by Joseph Mellor, Jack Turner, Amos Storkey and Elliot J. Crowley. On a high level, this paper performs neural architecture search by looking at the correlation matrices of the Jacobian of the data when you pass it through the network, and it does so at initialization. So you pass the data through, look at the Jacobian, and if it's very correlated, then the network is bad; if it's very uncorrelated, then the network is good. By simply observing that, they can already achieve a very good score on a neural architecture search benchmark. All right, that was the high level, and maybe a bit too simplified, but that's sort of what's going on. Okay, let's dive in. So what's neural architecture search? Neural architecture search is the discipline where you are given a data set. Let's say here we have a data set, which could be something like CIFAR-10, which is an image data set. You are given a training procedure, let's say Adam or SGD for 100,000 steps or something like this, with mini-batches of size 64. And you're given a loss function, which here could be the cross entropy between the outputs of the network and the label Y; we'll call the loss L. Your task is now to find a neural network architecture that conforms to these specifications but gives the lowest possible loss, or in this case the highest possible validation accuracy. So this here would be the training, and then you'd have the test accuracy or the validation accuracy. So you could decide, well, I'm going to go with, say, three convolutional layers first, each one having a ReLU non-linearity. But you could also say, well, I'm going to build a skip connection from here to here. You could also say that I'm going to downsample by two, you could have a bigger stride, and so on; also the kernel size of the convolution you can vary. Until now, people have done this by hand, right? In effect, we all use the same 10 to 20 different architectures. If it's an image problem, we tend to go for a ResNet or a Wide ResNet, or a VGG-style architecture. Someone came up with each of those at some point and discovered that it works well, and we don't really do much exploration; we simply use the same things over and over. And the truth is that there might be much better architectures that we're simply not exploring, much better building plans for networks that we don't know of, which might perform a lot better with the same data and the same training. So neural architecture search is the process of automatically searching for these better architectures. Of course, that's a combinatorial problem, but the idea is that you can actually learn to construct good architectures, and by doing so you can speed up this otherwise manual process. The idea behind it is that there is some regularity to when an architecture is good, some high-level pattern that you as a human maybe cannot really grasp, but that a machine can figure out, telling which architectures are good and which ones aren't. So there have been a few inventions in this area, but they are mostly costly. That's what they say here: the time and effort involved in hand-designing deep neural networks is immense. This has prompted the development of neural architecture search techniques to automate this design.
However, neural architecture search algorithms tend to be extremely slow and expensive: they need to train vast numbers of candidate networks to inform the search process. So what neural architecture search methods typically do is they have something like a controller, and the controller itself, of course, is going to be a neural network. So there'll be this thing that is the controller, and the controller will emit a building plan for the network right here. Then you train the entire thing once through, for the entire 100,000 steps, and you observe the final validation accuracy, which might be something like 80%. You feed the 80% into your controller, and the controller outputs the next building plan that it thinks will score higher. Then you train the entire thing again, and you maybe observe 70% accuracy; you again feed that in, and the controller realizes, oh, I may have done something wrong, let me try something else. If this looks like reinforcement learning to you, that's because this is reinforcement learning. The controller would be the agent, the percentages here, the accuracies, would be the reward, the building plans would be the actions, and the environment is the training procedure that scores them. The problem, of course, is that reinforcement learning requires a lot of data; it requires a lot of steps to converge, because the signal from the reward is just so weak. You simply get one number for your action, and you don't know what you can change to make it better, you simply have to try. So you need a lot of steps, but each step here is mighty slow, because each single step in your reinforcement learning procedure involves training an entire neural network for that many steps. So all of this is ginormously slow and resource intensive, and that of course blocks a lot of research, because we started with the plan to automate this part right here, but automating it itself is super expensive. So they go for a different solution. They say: this could be remedied if we could infer a network's trained accuracy from its initial state. It seems a bit out there, but let's give them the benefit of the doubt. In this work, we examine how the linear maps induced by data points correlate for untrained network architectures in the NAS-Bench-201 search space, and motivate how this can be used to give a measure of modelling flexibility, which is highly indicative of a network's trained performance. We incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU. And they have the code available right here if you want to go and check that out. So let's go ahead and check that out. The claims are pretty big, and the reasoning behind the claims is the following observation. You can already sort of see it in this graphic right here; we'll go over what it means in one second. But what they do is they take different networks in this search space, and the search space in this case is given by this benchmark.
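To make the cost structure of that controller loop concrete, here is a toy sketch in Python. To be clear, this is not the paper's method, and everything in it is made up for illustration: the operation names are arbitrary, and train_and_evaluate is a hypothetical stand-in that returns a synthetic reward so the sketch actually runs, whereas in real NAS that one call hides hours of training.

```python
import random

OPS = ["conv3x3", "conv1x1", "skip", "avg_pool"]  # made-up per-layer choices

def sample_plan(num_layers=6):
    # Stand-in for the controller emitting a building plan; a real
    # controller would be a trained network, not a random policy.
    return [random.choice(OPS) for _ in range(num_layers)]

def train_and_evaluate(plan):
    # Hypothetical stand-in for "train 100,000 steps, return validation
    # accuracy"; a synthetic reward keeps the sketch runnable.
    return sum(op != "skip" for op in plan) / len(plan) + random.gauss(0, 0.05)

best_plan, best_acc = None, -1.0
for step in range(20):                 # each step = one full training run
    plan = sample_plan()
    acc = train_and_evaluate(plan)     # the expensive part in real NAS
    if acc > best_acc:                 # feedback is a single number
        best_plan, best_acc = plan, acc
print(best_plan, round(best_acc, 3))
```

The point is only that the feedback per action is one number, and every pass through the outer loop pays for an entire training run.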
So this benchmark basically has a list of architectures that you could consider. Actually, it's a constructive list: they don't give you the list itself, but a way to construct architectures. They took those architectures and ranked them by how well they score on CIFAR-10. So there are the very good architectures, which are here, there are the good ones, the mediocre ones, and then the bad ones. And you can see that the histograms here, of whatever they measure, look quite different. The histograms of the good ones all look kind of spiky around zero, and the histograms of the bad ones all look spread out. So this is the measure they're going to propose: they have some sort of number, some sort of histogram that they produce, and if the histogram is very spiky and close together around zero, they conclude that the network is good, and if the histogram is very spread out like this, they conclude that the network is bad. Now these histograms, as you might expect, are computed not from the final trained network, but from the initial network. So here they show, at least in this case, that there seems to be a general correlation between the trained accuracy and how this histogram looks, and we're going to explore what they do. It's essentially pretty easy: they compute the linear map around each data point. So what is that? If you imagine a neural network as a nonlinear function, which I guess you should, because it is, let's imagine it as a nonlinear function from X to Y. What they'll do is simply look at a given training data point, which could be here: this could be the X and this could be the Y. In fact, let's look at it in the loss landscape, not in Y but in L, in terms of the loss, because we don't necessarily need a single label; this could be for unsupervised learning, this could be for anything. Okay, so it maps a data point to a loss. Now what we'll do is simply linearize the function around that point, which means we'll just freeze all the nonlinearities in place, and that will give us this linear function right here. We just observe that this linear function can exist; it's the tangent to the loss landscape at a particular data point, and it lives in data space, not in weight space. Then we look at a different data point, this data point right here. The linear function around this one is sort of like this, and around this one it's like that. Okay, so this is one function. Now let's look at a different function right here, so again the loss over X, and we'll look at its linearizations. For some reason, this one is like this, and if we consider two data points, their linearizations are very similar. Now imagine that these two functions have been produced by the same sort of neural networks, just with a slightly different architecture, but with the same number of parameters. Which neural network would you prefer? Remember, by training the neural network, you can actually shape this loss function, you can kind of shape it around. So which one would you prefer? I personally would prefer the top one, because the top one already tells me, hey, I might have 10 parameters here, and this already sort of looks like each of the 10 parameters is doing something.
So if I then go into my 10 parameters and turn this knob right here, then I might raise this bump, or lower this bump, or do something with it. The frequency, the curvature, the randomness of the function, the way it fluctuates, tells me that all of the different parameters must have some sort of effect, because it's quite an expressive function. Whereas if I have the same number of parameters for a function like this, that sort of tells me, well, maybe only one of the weights is actually doing something, maybe only one of the dimensions is doing something. This seems odd, right? That even though I've initialized it randomly, a super regular function like this comes out. So maybe all of these parameters down here don't do anything, or somehow the signal doesn't get through. They don't explicitly say it in these terms, but this is how I make sense of it. What they're saying is that you look at the linearizations of the functions, specifically at the angle right here: one angle in this case, another one in this case, and another one in this case. So you look at the slope, and the slope is basically the gradient of these linearized functions, and what you want to do is look at the correlation between the slopes at the different data points. So here we have three angles: one is very small, one is a bit larger, and one is even over 90 degrees. They are not correlated at all; they're all very different. However, the angles here are all quite the same, as you can see. So what they propose is the following: let's take all the data points, or in this case all the data points in a particular mini-batch, send them through the function, and calculate their linearizations. The linearization is nothing else than this: you send a point through the network to obtain the f value for the x value, and then you calculate the gradient with respect to the input. Now you have to get used to this a bit, because usually we calculate the gradient with respect to the weights; now we calculate the gradient with respect to the input. If this is a linear function, so if you have f(x) = Wx, then this gradient, del f by del x, would just give you the W, the slope of the linear function, and the same holds in the neural network once you linearize it. All right, so we obtain all these linearizations, and that gives us this matrix J right here. What we can then do is observe the covariance matrix of J, of all these linearizations. The covariance matrix simply tells you how two data points vary with each other; in fact they don't look at the covariance matrix but at the correlation matrix, which is simply the scaled covariance matrix. So you have N data points, which gives you a matrix that's N by N, and a particular entry, the entry (i, j), simply states how the angle at data point i correlates with the angle at data point j. Okay, that's the correlation matrix. And now the hypothesis is: if all of these data points are sort of independent, like in our very expressive function here, then these correlations should not be high; in fact most data points should be rather uncorrelated.
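Here is a minimal sketch of the procedure just described, assuming a PyTorch model. For each input in a mini-batch, we take the gradient of the summed output with respect to the input, which is the slope of the local linear map, and then correlate these slopes across the batch. The final scalar, the mean absolute off-diagonal correlation where lower is better, is my own illustrative stand-in for "the histogram should be spiky around zero", not necessarily the exact score used in the paper.

```python
import torch

def jacobian_correlations(net, x):
    x = x.clone().requires_grad_(True)
    y = net(x)                      # forward pass on the whole mini-batch
    y.sum().backward()              # gradient w.r.t. the *input*, per example
    jac = x.grad.flatten(1)         # one linearization per data point: (N, D)
    return torch.corrcoef(jac)      # (N, N) correlations between linear maps

# A small stand-in network; any candidate architecture would slot in here.
net = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
)
corr = jacobian_correlations(net, torch.randn(8, 32))
off_diag = corr - torch.eye(corr.shape[0])
print(off_diag.abs().mean())        # large -> degenerate, small -> expressive
```

Note that x can be anything of the right shape, even pure noise, which connects to one of the ablations coming up below.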
However, in this case right here, if the function is somehow degenerate, not very expressive, then all of these angles, all of these linearizations, should be highly correlated. And that's what you see in this graph right here. This is the histogram of the correlations between local linear maps across all pairs of items in a mini-batch of CIFAR-10 training data; each plot is for a single untrained NAS-Bench-201 architecture. So remember, expressivity is important because we want to train that function, and therefore it's important that every parameter does something; if it's degenerate, we can't train it well. That, I find, is the reasoning; they sort of say this, but I might make the wrong sense out of it here. It seems to me that's what's actually going on. So these are simply the matrix values rolled out and then plotted as a histogram. What does it mean when a histogram is super spread out like this? It means that there are a lot (and I think down here are the axes) of data point pairs that correlate highly or anti-correlate highly with each other, which means that exactly this degeneracy happens. Either very high positive or very high negative correlation means they're kind of the same thing: if you have as many parameters as data points, one parameter can potentially serve two data points that are correlated by one or negative one. You don't need both parameters, and therefore you have a lot of parameters doing nothing. Whereas over here, with the good networks, you can see that this spikes around zero, meaning that the linearizations around the data points are not correlated, and therefore you can shape the function around each data point however you want. We sort of know that neural networks are so over-expressive that they're actually able to shape the function around each data point without necessarily affecting other data points nearby, and that expressivity is what you want, and that expressivity is what this in part measures. Okay, so they have some experiments here where they validate this, for all these architectures in this benchmark. And maybe I should show you what the benchmark looks like. The benchmark has this particular form: there's a skeleton, and in this skeleton there is this block, which is always repeated, and your task is basically to determine what this block should be. The block has an input node A, an output node D, and two intermediate nodes, and what you have to do is determine these connections right here. There are six connections, and for each one you have the option of putting different things there. As you can see, you can put a convolution, you can put the identity function, which is a skip connection, or zeroize, which I believe is the zero function, so it basically means no connection; I'm not entirely sure, honestly. But you could technically put a convolution here and here, or different convolutions, and so on. This gives you 15,625 possible cells, so the NAS benchmark contains 15,625 possible architectures that you'll have to search. And they take these architectures and plot, for each architecture, the validation accuracy after training.
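As a quick sanity check on that count: six connections, five candidate operations per connection, and 5 to the power of 6 is exactly 15,625. The operation names below follow NAS-Bench-201 conventions as far as I can tell, but treat them as illustrative.

```python
from itertools import product

OPS = ["none", "skip_connect", "nor_conv_1x1", "nor_conv_3x3", "avg_pool_3x3"]
EDGES = ["A->B", "A->C", "A->D", "B->C", "B->D", "C->D"]  # 4 nodes, 6 edges

cells = list(product(OPS, repeat=len(EDGES)))
print(len(cells))                      # 5**6 = 15625 possible cells
print(dict(zip(EDGES, cells[1234])))   # one concrete cell: edge -> operation
```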
And the training protocol is standardized; you don't have to care about that. Against that, they plot the score that they measure at the beginning of training. What you can see is that there is a linear relationship, sort of. From these experiments you get this sort of feeling, and what they're going to propose is that you should take that score as a measure. And here again, sort of, there is a clear trend, as you can see right here, though it spreads out, as you can see. The rightmost one is ImageNet, which is the most difficult one, of course, and this is CIFAR-100, which is more difficult than CIFAR-10. So we can see that this relationship at the top doesn't really hold anymore once the task gets difficult. What I think is happening (and this is an interjection of my own opinion) is that this score they discovered allows them to see pretty efficiently which networks are just degenerate and cannot be trained; if you try to train them, they just perform really poorly. It's probably a very good score for weeding those out, and that would mean if you put a bar here somewhere, you could just discard a whole lot of this crap, or even here, you could just discard a whole lot of this crap, and also here, all of this crap. Whereas up here, as you can see, the score is sometimes higher for some networks than for these ones, even though those perform better. Again, you could probably discard a lot of the crap, but the score is not as distinctive for the well-performing networks, because these here are all not the degenerate versions; they're not degenerate in the sense of having some fundamental flaw where the function lacks expressivity from the very start so that you can't train it. So there it's not a big deal, and then probably other factors come into play, factors that you cannot simply determine with this particular score. But there is this relationship, as you can see. And they do some ablations on this. For example: is your score just a proxy for the number of parameters? And they say no, the number of parameters works way worse than this particular score, which is a cool thing. Then: how important are the specific mini-batch and the initialization? And they say, look right here: for some architectures we use different mini-batches, and you can see that within each of those groups the score doesn't vary too much. This is, I believe, always the same architecture per group, so an architecture that achieves, in this case for example (wow, that's not a straight line) 77% or so, and if you go for different mini-batches, the score varies only minimally. Initialization induces more variance, but also here the scores don't vary too much. It is interesting, though, that different initializations do give you different scores, because that would directly support my hypothesis that what's going on here is that you measure initial degeneracies, and you can sometimes make up for these initial degeneracies in the architecture with a different initialization. So different initializations give you differently performing networks. We already know this from things like the lottery ticket hypothesis and so on, that the initialization can matter to some degree in these types of things.
Now, that being said, they always train with the same protocol, it seems, yet the score varies, so I might have this backwards, or not be correct. In any case, the initialization matters more here, but you can still see this linear relationship. And this is particularly interesting: this is even the case when you just input white noise. So instead of the data, you measure the score by inputting noise, which I guess has about the same magnitude as the data, but is just noise, and you can still sort of see this linear relationship, which is very interesting. I think that also shows that what you find is a property of the network itself, of the fact that it is initialized and built in such a way that it allows you to train it in a benign manner, with no degeneracies. Okay. In the last experiment, they say: we evaluated the score on initialized networks in the PyTorch CV library. So they go to this library that has a lot of networks, but these networks are not the same as in the benchmark. The benchmark is specifically designed for architecture search, whereas the networks in this library are all designed to perform really well. Some are designed to be quite small, some to be quite fast, and so on, but in general their goal is to perform well, and they have been found by humans to perform well. So they take these networks on CIFAR-10 and test them. As you can see here, here is the test accuracy again, and here is the score they give it. And they say (now I can't move this anymore, hello, well, okay) that this linear relationship still sort of holds. It doesn't hold super well, but if you squint, you can still see that it sort of goes upward, though you really have to squint hard. Like, what are these things right here? And again, what is the case is that if the score is low, the network is not going to be good. So if the score is low, you will be able to cut off the worst-performing ones. But really at the top here, there doesn't seem to be a particular relation between these networks and this initial score, which strengthens my hypothesis that the score is mostly good at weeding out the bad ones. But that is pretty cool, because you can weed out the bad ones without any training, right? You simply forward-prop, backward-prop, and there you have it. So, cool. Now here is the experiment where they really run on this NAS benchmark and compare with other methods. Some of these other methods are designed to do so-called weight sharing, which is basically a technique to speed up the algorithm compared to non-weight-sharing methods; the non-weight-sharing methods are the kind we discussed initially, my example with the controller and so on, where it takes super long. So here you see each method and how long it takes. Now the best ones, as you can see, are these methods right here: they score something like 93.9 on CIFAR-10, whereas the weight-sharing ones don't perform too well, except this one, which seems to perform quite well. And theirs, in this case, performs worse than that, but still better than a lot of the weight-sharing ones.
So their point is basically that they get a pretty good score, which is 91.5% on CIFAR-10, which is at least not degenerate; it's a good accuracy. And they score that by simply evaluating 10 architectures, right? And as n goes up, as they evaluate more and more architectures, they do get better, but not much. So they have a discussion here. I'm having trouble moving this. All right, so we'll sort of go through the discussion. We report results, yada, yada, yada. As for the setup: the non-weight-sharing methods are given a time budget of 12,000 seconds. For our method and the non-weight-sharing methods, accuracies are averaged over 500 runs; for weight-sharing methods, accuracies are reported over three runs, with the exception of GDAS. Our method is able to outperform all the weight sharing methods while requiring a fraction of the search time. And that you can see in the table. This is, I mean, this is the real deal here: they only use 1.7 seconds here, compared to the 12,000 seconds of the other methods, and you reach almost the same accuracy. Now, it has to be said, 2% in this particular regime on CIFAR-10 is still a sizable difference. And that's the same benchmark, right, with the same training schedule and so on. So there's not too much room to tune here; you simply have to find a better architecture. So these things are still sizably ahead of this. And it appears to me that these methods here that don't perform well are simply crap. It seems they're simply, I don't know, they might be trying out something or doing something researchy or whatnot. But it seems like if you're able to weed out the bad architectures well, you might be getting to a score like this. And then, if you are actually performing a search to find the best one, you might be getting to somewhere like this. And you can see this here throughout. So on CIFAR-100 they achieve a better score than these things, but a worse score than the non-weight-sharing methods. And on ImageNet, the difference is even larger. So again, what I can see here is that theirs is a good method to maybe get you, let's say, 90% of the way you want to go. And what's interesting is that here they say: we also show the effect of sample size. We show the accuracy of the networks chosen by our method for each n, so that's the sample size. We list the optimal accuracy for sample sizes 10 and 100 and random selection over the whole benchmark. So for the optimal one, I guess they just draw 10 samples, train all of them, and then take the best one. And you can see that already gets you to about 93%, whereas in their case, sometimes when they add more samples, they get worse. So here they get better, but then they get worse again. So they comment on this right here: we observe that the sample size does not have a large effect on the accuracy of our method, but note that as sample size increases, our method suffers from a small amount of noise, increasing the gap between our score and the optimal result. And of course, the key practical benefit is execution time. So again, they are massively faster than the other methods. But to me, it seems you could think of combining these methods, right? You combine this with this, in that what you want to do is actually actively search for the best ones.
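To make that combination concrete, here is a sketch of the hybrid I have in mind: use the cheap score to weed out the probably-degenerate candidates, then spend the actual training budget only on the survivors. The helper names, `train_and_evaluate` in particular, are hypothetical:

```python
def hybrid_search(sample_random_architecture, data_batch, train_and_evaluate,
                  n_candidates=100, top_k=5):
    # Stage 1: score everything for free (no training).
    candidates = [sample_random_architecture() for _ in range(n_candidates)]
    ranked = sorted(candidates,
                    key=lambda net: jacobian_correlation_score(net, data_batch),
                    reverse=True)
    # Stage 2: train only the top_k survivors and pick the real winner.
    results = [(train_and_evaluate(net), net) for net in ranked[:top_k]]
    return max(results, key=lambda r: r[0])[1]
```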
By doing so, if you could pretty quickly weed out the bad ones using this method down here, you might already have a big speed up. Because again, in comparison to the random ones, what appears to happen is that they get good at finding your 90% architecture, but then they fail to differentiate the top performers from each other, where you'd really have to train the networks to find out which one's better. So yeah, here they say they visualize the trade-off between search time and accuracy on CIFAR-10 for different NAS algorithms on the NAS benchmark. By removing the need for training, our method is able to find accurate networks in seconds instead of hours. And here you can see the accuracy, and here you can see the time, and all the good ones are either way over here or here. And theirs is almost at zero, while being quite close to the accuracy of the other ones. All right, yeah, that was this paper. Again, I think this is pretty valuable, especially if you're in a new domain where you might not know what kind of network to build. You might just be able to write a little script that generates networks, run it through this algorithm, and at least get an idea of which ones are certainly not worth considering. And then you can simply select one of the other ones. Often it doesn't need to be the best one. And you can then tweak the ones you found a little bit manually; maybe you see some regularity. And yeah, that was my two cents on this paper. I hope you liked it. If you did, consider sharing it out and telling your friends about it, and subscribing, liking, and leave a comment if you agree or disagree. That was it. Bye bye.
[ { "start": 0, "end": 6.5600000000000005, "text": " Hi there! Today we're looking at neural architecture search without training by Joseph Meller, Jack" }, { "start": 6.5600000000000005, "end": 12.96, "text": " Turner, Alma Storky and Elliot J. Crowley. On a high level, this paper performs neural" }, { "start": 12.96, "end": 22.56, "text": " architecture search by looking at the correlation matrices of the Jacobian of the data when" }, { "start": 22.56, "end": 28.240000000000002, "text": " you pass it through the network. And it does so at initialization. So you pass the data," }, { "start": 28.24, "end": 35.04, "text": " look at the Jacobian, and if it's very correlated, then the network is bad. And if it's very" }, { "start": 35.04, "end": 41.28, "text": " uncorrelated, then the network is good. And by simply observing that, they can already" }, { "start": 41.28, "end": 47.28, "text": " achieve a very good score on a neural architecture search benchmark. All right, that was a high" }, { "start": 47.28, "end": 53.12, "text": " level and maybe a bit too simplified. But that's sort of what's going on. Okay, let's dive in." }, { "start": 53.12, "end": 58.72, "text": " So what's neural architecture search? Neural architecture search is the discipline of you" }, { "start": 58.72, "end": 65.28, "text": " are given a data set. Let's say here we have a data set, which could be something like CIFAR-10," }, { "start": 65.84, "end": 74.96, "text": " which is an image data set. And you are given a sort of training procedure, let's say, ADM or SGD" }, { "start": 74.96, "end": 83.83999999999999, "text": " for 100,000 steps or something like this with many batches of size 64. Okay, and you're given a loss" }, { "start": 83.83999999999999, "end": 91.44, "text": " function, which the loss function here could be the cross entropy between the outputs of the network," }, { "start": 91.44, "end": 99.83999999999999, "text": " which we'll call L and the label Y. And your task is now to find a neural network architecture" }, { "start": 99.84, "end": 106.72, "text": " that conforms to these specifications, but gives the lowest possible loss or the highest possible" }, { "start": 106.72, "end": 112.96000000000001, "text": " validation accuracy in this case. So this here would be like the train and then you'd have the" }, { "start": 112.96000000000001, "end": 118.88, "text": " test accuracy or the validation accuracy. Okay, so you could decide, well, I'm going to go with," }, { "start": 118.88, "end": 124.96000000000001, "text": " you know, first, like three convolutional layers, each one having like a ReLU non-linearity." }, { "start": 124.96, "end": 129.35999999999999, "text": " But you could also say, well, I'm going to build like a skip connection from here to here." }, { "start": 129.84, "end": 136, "text": " You could also say that I'm going to down sample by two, you could have maybe a bigger stride and" }, { "start": 136, "end": 142.56, "text": " so on. So the kernel size of the convolution, you can vary until now, people have done this by hand," }, { "start": 142.56, "end": 149.84, "text": " right? In effect, we all use like the same 10 to 20 different architectures. So if it's an image" }, { "start": 149.84, "end": 155.92000000000002, "text": " problem, we tend to go for like a ResNet or a wide ResNet, or like a VGG style architecture." }, { "start": 157.20000000000002, "end": 163.2, "text": " Someone has come up with those at some point with each of those, discovered that it works well." 
}, { "start": 163.2, "end": 169.52, "text": " And we don't really do much exploration, we simply kind of use the same things over and over." }, { "start": 170.48000000000002, "end": 177.36, "text": " And the truth is that there might be much better architectures that we're simply not exploring," }, { "start": 177.36, "end": 182.88000000000002, "text": " right? There might be much better building plans for networks that we don't know of that might" }, { "start": 182.88000000000002, "end": 189.04000000000002, "text": " perform a lot better with the same data and the same training. So neural architecture search is" }, { "start": 189.04000000000002, "end": 193.68, "text": " the process of automatically searching for these better architectures. Of course, that's a" }, { "start": 193.68, "end": 202.72000000000003, "text": " combinatorial problem. But the idea is that, you know, you can actually learn to construct good" }, { "start": 202.72, "end": 208.64, "text": " architectures. And by doing so, you can, you can sort of speed up this process that is manual" }, { "start": 208.64, "end": 214.48, "text": " otherwise. And the idea behind it is there's some regularity of when an architecture is good," }, { "start": 214.48, "end": 219.84, "text": " there's some like high level pattern that you as a human maybe cannot really grasp, but like a" }, { "start": 219.84, "end": 225.68, "text": " machine can figure out which architectures are good and which ones aren't. So there have been a few" }, { "start": 225.68, "end": 233.20000000000002, "text": " inventions in this area, but they are mostly costly. That's what they say here. The time and" }, { "start": 233.20000000000002, "end": 238.08, "text": " effort involved in hand designing deep neural networks is immense. This has prompted the" }, { "start": 238.08, "end": 244.08, "text": " development of neural architecture search techniques to automate this design. However," }, { "start": 244.08, "end": 250.48000000000002, "text": " neural architecture search algorithms tend to be extremely slow and expensive. They need to train" }, { "start": 250.48, "end": 256.96, "text": " vast numbers of candidate networks to inform the search process. So what neural architecture" }, { "start": 256.96, "end": 262, "text": " search methods do is what they'll have is they'll have something like a controller," }, { "start": 262, "end": 267.76, "text": " the controller itself, of course, is going to be a neural network. So there'll be this thing that" }, { "start": 267.76, "end": 274.71999999999997, "text": " will be the controller, and the controller will emit like a building plan. So the controller will" }, { "start": 274.72, "end": 280.96000000000004, "text": " emit like a building plan for this network right here. And then you train the entire thing once" }, { "start": 280.96000000000004, "end": 286.64000000000004, "text": " through for the entire 100,000 steps. And then you observe the final validation accuracy, which" }, { "start": 286.64000000000004, "end": 293.92, "text": " might be something like 80%. And then you know, okay, this is 80%. So you feed the 80% into your" }, { "start": 293.92, "end": 300.08000000000004, "text": " controller and the controller outputs the next building plan that it thinks will score higher." 
}, { "start": 300.08, "end": 306.71999999999997, "text": " And then you train the entire thing again, and you maybe observe 70% accuracy, you again feed" }, { "start": 306.71999999999997, "end": 311.68, "text": " that in, right, and the controller realizes, oh, I may have done something wrong, let me try" }, { "start": 311.68, "end": 317.2, "text": " something else. And does again, if this looks like reinforcement learning to you, that's because" }, { "start": 317.2, "end": 324.15999999999997, "text": " this is reinforcement learning. So the real the, the C here, the controller would be the agent," }, { "start": 324.15999999999997, "end": 325.44, "text": " the percentages here, the accuracies would be the real" }, { "start": 325.44, "end": 333.2, "text": " reward. And the environment, the observations would be basically, this thing here, this thing" }, { "start": 333.2, "end": 337.6, "text": " would be the actions, but sometimes it's the observations and you need to score the different" }, { "start": 337.6, "end": 345.6, "text": " things. Okay. So the problem, of course, with this is that the reinforcement learning requires a lot" }, { "start": 345.6, "end": 351.6, "text": " of data, it requires a lot of steps to converge, because the signal from the reward is just so" }, { "start": 351.6, "end": 358.40000000000003, "text": " weak, you simply get one number for your action. And you don't know what you can change to make it" }, { "start": 358.40000000000003, "end": 364.48, "text": " better, you simply have to try. So you need a lot of steps, but this thing here is mighty slow," }, { "start": 364.48, "end": 371.28000000000003, "text": " because each each single step in your reinforcement learning procedure involves training an entire" }, { "start": 371.28000000000003, "end": 378.72, "text": " neural network for like this many steps. Okay, so all of this is ginormously slow, and it's" }, { "start": 378.72, "end": 386.40000000000003, "text": " resource intensive. And that of course, blocks a lot of research, because, you know, we started" }, { "start": 386.40000000000003, "end": 391.84000000000003, "text": " with the plan to automate this part right here, but automating it itself is super expensive." }, { "start": 392.8, "end": 401.68, "text": " So they go for a different solution. They say this could be remedied if we could infer at net," }, { "start": 401.68, "end": 409.68, "text": " sorry, if we could infer a network's trained accuracy from its initial state. Okay, it seems" }, { "start": 409.68, "end": 416.88, "text": " a bit out there, but let's let's give them benefit of the doubt. In this work, we examine how the" }, { "start": 416.88, "end": 422.8, "text": " linear maps induced by data points correlate for untrained network architectures in the NAS bench" }, { "start": 422.8, "end": 430.32, "text": " 201 search space, and motivate how this can be used to give a measure of the accuracy of the" }, { "start": 430.32, "end": 436.71999999999997, "text": " network. So we use this measure to give a measure of modeling flexibility, which is highly indicative" }, { "start": 436.71999999999997, "end": 443.84, "text": " of a network's trained performance. We incorporate this measure into a simple algorithm that allows" }, { "start": 443.84, "end": 450.64, "text": " us to search for powerful networks without any training in a matter of seconds on a single GPU." }, { "start": 450.64, "end": 456.56, "text": " Okay, and they have the code available right here if you want to go and check that out. 
So let's go" }, { "start": 456.56, "end": 463.36, "text": " ahead and check that out. The claims are pretty big. And the reasoning behind the claims is the" }, { "start": 463.36, "end": 470.08, "text": " following observation. You can already sort of see in this graphic right here, we'll go over what it" }, { "start": 470.08, "end": 476.88, "text": " means in one second. But what they do is they take different networks in this search space. And the" }, { "start": 476.88, "end": 483.6, "text": " search space in this case is given by this benchmark. So this benchmark basically has a long" }, { "start": 483.6, "end": 490, "text": " architectures that you could consider. Actually, so it's a constructive list. So they don't actually" }, { "start": 490, "end": 497.12, "text": " give you the list, but they give you like a way to construct architectures. And they took those" }, { "start": 497.12, "end": 502.8, "text": " architectures and they rank them by how well they score on CIFAR-10. So there are very good" }, { "start": 502.8, "end": 508.48, "text": " architectures, which are here, there are good ones, there are mediocre ones, and then the bad ones." }, { "start": 508.48, "end": 514.8000000000001, "text": " Okay, and you can see that the histograms here of whatever they measure, they look quite different." }, { "start": 514.8000000000001, "end": 520.72, "text": " So the histograms with the good ones, they all look kind of spiky around zero. And the histograms" }, { "start": 520.72, "end": 526.5600000000001, "text": " of the bad ones all sort of look spread out. So this is the measure that they're going to propose" }, { "start": 526.5600000000001, "end": 532.64, "text": " is they have some sort of number, some sort of histogram that they produce. And if the histogram" }, { "start": 532.64, "end": 539.76, "text": " is very spiky and close together around zero, then they conclude that this network is good." }, { "start": 539.76, "end": 546.3199999999999, "text": " And if the histogram is very spread out like this, they conclude that the network is bad. Now these" }, { "start": 546.3199999999999, "end": 554.3199999999999, "text": " histograms, as you might expect, they are computed not from the final trained network, but they are" }, { "start": 554.3199999999999, "end": 562, "text": " computed from the initial network. So here they show at least, you know, in this case, it seems" }, { "start": 562, "end": 568.96, "text": " to be that there is a general correlation between the trained accuracy and how this histogram looks." }, { "start": 569.68, "end": 571.68, "text": " And we're going to explore what they do." }, { "start": 574.48, "end": 581.76, "text": " So it's essentially, it's pretty easy. They compute the linear map around each data point." }, { "start": 581.76, "end": 588.64, "text": " So what is that? If you imagine a neural network as a nonlinear function, which I guess you should," }, { "start": 588.64, "end": 597.52, "text": " because it is. And so let's imagine it as like a nonlinear function from X to Y. What they'll do" }, { "start": 597.52, "end": 603.6, "text": " is simply they'll look at a given date training data point, which could be here, right? This could" }, { "start": 603.6, "end": 611.84, "text": " be the X and this could be the Y. And in fact, let's look at it in loss landscape, not even in Y," }, { "start": 611.84, "end": 617.28, "text": " but in L in terms of the loss, because we don't need necessarily a single label. 
This could be" }, { "start": 617.28, "end": 623.76, "text": " for unsupervised, this could be for anything. Okay, so it maps a data point to a loss. Now," }, { "start": 624.72, "end": 629.68, "text": " what we'll do is we'll simply linearize the function around that point, which means we'll" }, { "start": 629.68, "end": 635.28, "text": " just freeze all the nonlinearities in place. And that will give us this linear function right here." }, { "start": 636.0799999999999, "end": 643.4399999999999, "text": " Okay, we just observe that this linear function can exist. It's the tangent to the loss landscape." }, { "start": 643.44, "end": 649.12, "text": " And it's at a particular data point, right? It's in data space, not in weight space. Then we look" }, { "start": 649.12, "end": 654.32, "text": " at a different data point. So we look at this data point right here, another data point. What's the" }, { "start": 654.32, "end": 662, "text": " linear function around this one is sort of like, whoops, D is like that. And then around this one" }, { "start": 662, "end": 669.44, "text": " is like this. Okay, so this is one function. Now let's look at a different function right here. So" }, { "start": 669.44, "end": 680, "text": " L, X, and we'll look at this function, the linear function. Okay, so for some reason," }, { "start": 681.12, "end": 692.08, "text": " this is like this. And if we consider two data points, their linearization is very similar." }, { "start": 692.6400000000001, "end": 699.2800000000001, "text": " Now imagine that these two have been produced by the same sort of neural networks. It's just" }, { "start": 699.28, "end": 705.36, "text": " the architecture is a little different. But they have the same number of parameters in the neural" }, { "start": 705.36, "end": 712, "text": " network. Which neural network would you prefer? Remember, by training the neural network, you can" }, { "start": 712, "end": 718.0799999999999, "text": " actually shape this loss function. You can kind of shape that around. So which one would you prefer?" }, { "start": 719.04, "end": 725.68, "text": " I personally would prefer the top one, because the top one already tells me that, hey, you know," }, { "start": 725.68, "end": 730.7199999999999, "text": " I might have 10 parameters here. And this already sort of looks like each of the 10 parameters is" }, { "start": 730.7199999999999, "end": 736.2399999999999, "text": " doing something. So if I then go into my 10 parameters, and I, you know, turn this knob" }, { "start": 736.2399999999999, "end": 742.3199999999999, "text": " right here, then I might, you know, up this bump, or down this bump, or do something with it. But" }, { "start": 742.3199999999999, "end": 749.92, "text": " the sort of frequency, curvature, the randomness of the function, the way that it fluctuates tells" }, { "start": 749.92, "end": 756.0799999999999, "text": " me that all of the different parameters must have some sort of effect, right? Because it's quite an" }, { "start": 756.0799999999999, "end": 761.5999999999999, "text": " expressive function. Whereas if I have the same number of parameters for a function like this," }, { "start": 761.5999999999999, "end": 768.3199999999999, "text": " this sort of tells me, well, maybe only one of the when the only one of the weights is actually" }, { "start": 768.3199999999999, "end": 774.16, "text": " doing something, maybe only one of the dimensions is doing something. This seems odd, right? 
That" }, { "start": 774.16, "end": 780.3199999999999, "text": " even though I've initialized it randomly, a super regular function like this comes out. So maybe all" }, { "start": 780.3199999999999, "end": 787.28, "text": " of the all of these parameters down here, they don't do anything. Or this, so somehow the signal" }, { "start": 787.28, "end": 794.24, "text": " doesn't get through. So that's, I, they don't explicitly say it in these terms. But this is" }, { "start": 794.24, "end": 802, "text": " how I make sense of this. What they're saying is that if you look at the linearizations of the" }, { "start": 802, "end": 809.68, "text": " functions, and you look at the the angle right here, so the angle in this case is that and in" }, { "start": 809.68, "end": 816.8, "text": " this case is that and in this case is that. So you look at the slope here. And the slope is basically" }, { "start": 816.8, "end": 823.28, "text": " the gradient of these linearized functions. And what you want to do is you want to look at the" }, { "start": 823.28, "end": 828.64, "text": " correlation between those of the different data points. So here we have three angles. One is" }, { "start": 828.64, "end": 839.28, "text": " very short, one is very bit longer, like this, and or no, even like this, and one is even over" }, { "start": 839.84, "end": 845.92, "text": " 90 degrees like that. They are not correlated at all, right? They're all very different. However," }, { "start": 845.92, "end": 854.48, "text": " the angles here, they're all quite the same, as you can see. So what they propose is the following." }, { "start": 854.48, "end": 860.64, "text": " Let's send all the data points, or in that case, all the data points in a particular mini batch," }, { "start": 860.64, "end": 867.36, "text": " let's send them through the function, and let's calculate their linearizations. So the linearization" }, { "start": 867.36, "end": 873.12, "text": " is nothing else than you send them through the network to obtain the f value for the x value," }, { "start": 873.12, "end": 878.72, "text": " and then you calculate the gradient with respect to the input. Now you have to get used to this a" }, { "start": 878.72, "end": 884.4, "text": " bit, because usually we calculate the gradient with respect to the weight. So you calculate the" }, { "start": 884.4, "end": 889.92, "text": " gradient, but now we calculate the gradient with respect to the input, which if this is a linear" }, { "start": 889.92, "end": 898.9599999999999, "text": " function, so if you have a f of x equals wx, like a linear function, then this gradient, del f del" }, { "start": 898.9599999999999, "end": 906.24, "text": " x, would just give you the w, will give you the slope of the linear function, and the same in the" }, { "start": 906.24, "end": 912.56, "text": " neural network when you linearize it. All right, so we're going to obtain all these linearizations," }, { "start": 912.56, "end": 920.64, "text": " and that gives us this matrix J right here. And what we can do is we can then observe the" }, { "start": 920.64, "end": 929.8399999999999, "text": " covariance matrix of J, of all these linearizations. The covariance matrix simply tells you how two data" }, { "start": 929.8399999999999, "end": 935.28, "text": " points vary with each other, and in fact they don't look at the covariance matrix, but they look at the" }, { "start": 935.28, "end": 941.5999999999999, "text": " correlation matrix, which is simply the scaled covariance matrix. 
So one entry in this covariance" }, { "start": 941.6, "end": 949.28, "text": " matrix, so you have n data points, and this gives you a matrix that's n by n, and a particular entry" }, { "start": 949.28, "end": 957.28, "text": " here, like the entry i, j, would simply state how does the angle of data point i correlate with the" }, { "start": 957.28, "end": 970.1600000000001, "text": " angle of data point j. Okay, that's the covariance matrix. And now the hypothesis is, if all of these" }, { "start": 970.16, "end": 976.88, "text": " data points are sort of independent, like in our very expressive function here, then these correlations," }, { "start": 976.88, "end": 983.1999999999999, "text": " they should not be high. In fact most data points should be rather uncorrelated. However, in this" }, { "start": 983.1999999999999, "end": 990.56, "text": " case right here, if the function is sort of kind of degenerative or something, not very expressive," }, { "start": 990.56, "end": 996.48, "text": " then all of these angles, all of these linearizations should be highly correlated." }, { "start": 996.48, "end": 1005.04, "text": " And that's what you see in this graph right here. This right here now is this correlation histogram" }, { "start": 1005.84, "end": 1012.08, "text": " of the correlations between local linear maps across all pairs of items in a mini batch of C410" }, { "start": 1012.08, "end": 1019.36, "text": " training data. Each policy is scrammed for a single untrained NASBench 201 architecture. So remember" }, { "start": 1019.36, "end": 1024.8, "text": " the expressivity is important because we want to train that function, and therefore it's important" }, { "start": 1024.8, "end": 1030.32, "text": " that every parameter does something. And if it's degenerate, we can't train it well. And that's," }, { "start": 1030.32, "end": 1039.68, "text": " I find that's the reasoning. They sort of say this, but I might make the wrong sense out of it here," }, { "start": 1039.68, "end": 1045.04, "text": " but it seems to me like that's what's actually going on. So you can see this is simply these" }, { "start": 1045.04, "end": 1050.3999999999999, "text": " matrix values rolled out and then plotted as a histogram. So what does it mean when an histogram" }, { "start": 1050.4, "end": 1055.92, "text": " is like super spread out like this? It means that there are a lot, and I think down here are axes," }, { "start": 1055.92, "end": 1062.48, "text": " yes, there are a lot of data points that correlate highly or anti-correlate highly with each other." }, { "start": 1063.1200000000001, "end": 1071.0400000000002, "text": " Okay, which means that exactly this degeneracy happens. So either too high or too negative high" }, { "start": 1071.0400000000002, "end": 1077.2, "text": " correlation means that they're very much, they're kind of the same thing. So there is, if you have" }, { "start": 1077.2, "end": 1084.8, "text": " as many parameters as data points, that means that one parameter can potentially serve these two data" }, { "start": 1084.8, "end": 1090.48, "text": " points or these two that are correlated by one or negative one. You don't need both parameters and" }, { "start": 1090.48, "end": 1095.8400000000001, "text": " therefore you have a lot of parameters doing nothing. 
Whereas over here with the good networks," }, { "start": 1095.8400000000001, "end": 1102.72, "text": " you can see that this spikes around zero, meaning that the data points are not correlated" }, { "start": 1102.72, "end": 1110.8, "text": " or the linearizations around the data points are not correlated. And therefore you can sort of shape" }, { "start": 1110.8, "end": 1117.52, "text": " the function around each data point however you want. Which we sort of know that neural networks," }, { "start": 1117.52, "end": 1122.96, "text": " what they do is they're so over expressive that they're actually able to shape the functions" }, { "start": 1122.96, "end": 1129.68, "text": " around the data points without necessarily looking at other data points nearby. And that" }, { "start": 1129.68, "end": 1138.5600000000002, "text": " expressivity is what you want and that expressivity is what this in part measures. Okay, so they make" }, { "start": 1138.5600000000002, "end": 1144.48, "text": " a, they have some experiments here where they validate this. So for all these architectures" }, { "start": 1144.48, "end": 1148.8, "text": " in this benchmark, and maybe I should tell you what, show you what the benchmark looks like." }, { "start": 1148.8, "end": 1154.96, "text": " So the benchmark has this particular form, this particular form, there's this skeleton," }, { "start": 1154.96, "end": 1159.76, "text": " and in this skeleton there is this block and it's always repeated. And you're basically," }, { "start": 1159.76, "end": 1165.6000000000001, "text": " your task is to determine what this block should be. So this block has an input node A and an output" }, { "start": 1165.6000000000001, "end": 1170.72, "text": " node D and two intermediate nodes. And what you have to do is basically you have to determine" }, { "start": 1170.72, "end": 1177.3600000000001, "text": " these connections right here. So there are six connections and for each one you have the option" }, { "start": 1177.3600000000001, "end": 1182.24, "text": " of putting different things there. Like you can see you can put a convolution, you can put the" }, { "start": 1182.24, "end": 1187.68, "text": " identity function, which is a skip connection, zero wise. I don't, maybe that's the zero function," }, { "start": 1187.68, "end": 1194.56, "text": " so it basically means nothing. I'm not so sure, honestly. But you could technically put a" }, { "start": 1194.56, "end": 1201.76, "text": " convolution here and here, right, or different convolutions or things like this. So there are" }, { "start": 1201.76, "end": 1214.72, "text": " these 15,625 possible cells. So the NAS benchmark contains 15,625 possible architectures that you'll" }, { "start": 1214.72, "end": 1223.12, "text": " have to search. And they take these architectures and they plot now, they plot for each architecture" }, { "start": 1223.12, "end": 1227.76, "text": " the validation accuracy after training. And the training protocol is standardized, you don't have" }, { "start": 1227.76, "end": 1234, "text": " to care about that. And the score that they measure at the beginning of training. And what you can see" }, { "start": 1234, "end": 1242, "text": " is that there is a linear relationship, sort of, like sort of. From these experiments what you'll" }, { "start": 1242, "end": 1249.04, "text": " get is like this sort of feeling. What they're going to propose is that you should take that score" }, { "start": 1249.04, "end": 1259.12, "text": " as a measure. 
And here again also, sort of, sort of. There is a clear trend, as you can see," }, { "start": 1259.12, "end": 1266.8799999999999, "text": " right here. Though, yeah, though this, as you can see, this sort of spreads out. And the most right" }, { "start": 1266.8799999999999, "end": 1275.68, "text": " one is ImageNet, which is the most difficult one, of course. So, and this is CIFAR 100, which is more" }, { "start": 1275.68, "end": 1283.6000000000001, "text": " difficult than CIFAR 10. So we can see that this sort of relationship at the top, it doesn't really" }, { "start": 1283.6000000000001, "end": 1288.8, "text": " hold anymore if the task gets difficult. And this is, so what I think is happening, this is kind of" }, { "start": 1288.8, "end": 1295.3600000000001, "text": " an interjection of my own opinion. What's happening here is that this score that they discover" }, { "start": 1296.4, "end": 1302.96, "text": " allows them pretty efficiently to see which networks are just degenerate and cannot be trained." }, { "start": 1302.96, "end": 1309.92, "text": " Like if you try to train them, they just perform really poorly, okay? That, it's probably a very" }, { "start": 1309.92, "end": 1316.08, "text": " good score for weeding those out. And that would mean if you kind of barrier here somewhere, right?" }, { "start": 1316.08, "end": 1321.28, "text": " You could just discard a whole lot of this crap, or even here, right? You could just discard a" }, { "start": 1321.28, "end": 1330.24, "text": " whole lot of this crap. And also now here, just, you know, all of this crap. Yeah, whereas here," }, { "start": 1330.24, "end": 1335.92, "text": " as you can see, some, this score, sometimes it's higher than these ones, even though they perform" }, { "start": 1335.92, "end": 1342.4, "text": " better. And again, you could probably discard a lot of the crap, but it's not as distinctive for" }, { "start": 1342.4, "end": 1348, "text": " the well performing networks, because these here are all not the degenerate version, right? They're" }, { "start": 1348, "end": 1353.76, "text": " not degenerate in the sense that they're, they have some fundamental flaw where the function lacks" }, { "start": 1353.76, "end": 1360.08, "text": " now expressivity from the very start, so you can't train it. And so, you know, it's not" }, { "start": 1360.08, "end": 1366.24, "text": " a big deal. And then probably other factors come into play, other factors than you can simply" }, { "start": 1366.24, "end": 1372.6399999999999, "text": " determine with this particular score. But, you know, there is this relationship that's," }, { "start": 1373.6, "end": 1381.36, "text": " you know, you can see that. And they do some ablations on this here. For example, are your" }, { "start": 1381.36, "end": 1387.6799999999998, "text": " scores a proxy for a number of parameters? And they say, no, the number of parameters works way" }, { "start": 1387.68, "end": 1394, "text": " than this particular score, which, you know, is a cool thing. Then how important is the specific" }, { "start": 1394, "end": 1400.24, "text": " mini batch and initialization? And they say, look right here, we, for some architectures," }, { "start": 1400.24, "end": 1406.64, "text": " we do different mini batch sizes. And you can see each of those groups, they don't vary too much" }, { "start": 1406.64, "end": 1412.16, "text": " in how they're, it influences their score. This is, I believe this is the same architecture. 
So" }, { "start": 1412.16, "end": 1418, "text": " it's always an architecture that achieves in this case, for example, wow, that's not a straight line," }, { "start": 1419.28, "end": 1426.24, "text": " 77% or so. And you can see if you go for different mini batches, the score varies only minimally." }, { "start": 1427.2, "end": 1436.24, "text": " Initialization is a bigger variance inducing thing. But also here, the scores don't vary too much." }, { "start": 1436.24, "end": 1441.3600000000001, "text": " But it is interesting that the different initialization do get you to different score," }, { "start": 1441.36, "end": 1446.24, "text": " because it would directly support kind of my hypothesis that what's going on here is that" }, { "start": 1446.8, "end": 1454.32, "text": " you sort of measure initial degeneracies. And you can sort of make up for these initial degeneracies" }, { "start": 1454.32, "end": 1458.8, "text": " in the architecture sometimes with sort of a different initialization. So the different" }, { "start": 1458.8, "end": 1464.9599999999998, "text": " initializations give you differently performing networks. We already know this from things like," }, { "start": 1464.9599999999998, "end": 1470.8799999999999, "text": " you know, lottery ticket hypothesis and so on, that the initialization can matter to some degree" }, { "start": 1470.88, "end": 1477.1200000000001, "text": " in these types of things. Now, that being said, they always train to the same, it seems, but their" }, { "start": 1477.1200000000001, "end": 1484.88, "text": " their score varies. So I might be backwards correct here, or not correct. But in any case," }, { "start": 1484.88, "end": 1492, "text": " the initialization here matters more, but also you can still see this linear relationship." }, { "start": 1492.96, "end": 1499.6000000000001, "text": " And this is particularly interesting. This is even the case when you just input white noise. So" }, { "start": 1499.6, "end": 1505.6799999999998, "text": " instead of the data, you measure that score by just inputting noise that I guess has some sort" }, { "start": 1505.6799999999998, "end": 1511.28, "text": " of the same magnitude as the data would have, but it's just noise. And you can still sort of see this" }, { "start": 1511.28, "end": 1517.76, "text": " linear relationship, which is very interesting. And that I think also shows some that you what" }, { "start": 1517.76, "end": 1525.04, "text": " you're fine, what you find is a property of the network itself. And the fact that it is," }, { "start": 1525.04, "end": 1531.68, "text": " it is initialized and built in such a way that it allows you to train it in a very," }, { "start": 1532.56, "end": 1542.72, "text": " in a sort of a benign manner, it has no degeneracies. Okay. So in the last experiment," }, { "start": 1542.72, "end": 1552.24, "text": " they go here and they say, we evaluated the score on initialized networks in the PyTorch CV library." }, { "start": 1552.24, "end": 1557.52, "text": " So they go to this library that has a lot of these networks, but these networks are not the same as" }, { "start": 1557.52, "end": 1561.68, "text": " this benchmark. This benchmark is specifically designed to do architecture search. Now the" }, { "start": 1561.68, "end": 1567.76, "text": " networks in this library, they are all designed to perform really well. Some are designed to be" }, { "start": 1567.76, "end": 1572.72, "text": " quite small, some are designed to be quite fast and so on. 
But in general, they are all of their" }, { "start": 1572.72, "end": 1579.04, "text": " goal is to perform well, and they have been sort of found by humans to perform well. So they take" }, { "start": 1579.04, "end": 1586, "text": " now these networks on CIFAR 10 and they test them. So as you can see here, here is the test" }, { "start": 1586, "end": 1594.96, "text": " accuracy again, and here is their score that they give it. And they say, now I can't move this anymore." }, { "start": 1595.6, "end": 1599.52, "text": " Hello. Well, okay." }, { "start": 1599.52, "end": 1606.48, "text": " They say that this linear relationship still sort of holds. It doesn't hold super, super well," }, { "start": 1606.48, "end": 1614.96, "text": " but you can still sort of, if you squint, if you squint hard, you can see that it sort of goes" }, { "start": 1614.96, "end": 1621.76, "text": " upward, though you really have to squint hard. Like what are these things right here? And what," }, { "start": 1621.76, "end": 1628.16, "text": " again, what's the case is that if the score is low, it's not going to be a good score." }, { "start": 1628.16, "end": 1636.16, "text": " So what you can do is that if the score is low, you will sort of be able to cut off the worst" }, { "start": 1636.16, "end": 1643.92, "text": " performing ones. But really at the top here, it doesn't seem like there is a particular relation" }, { "start": 1643.92, "end": 1652.72, "text": " between these networks and this initial score, which sort of strengthens my hypothesis that" }, { "start": 1652.72, "end": 1659.6000000000001, "text": " it's just kind of weed out the bad ones. But it's pretty cool because you can weed out the bad ones" }, { "start": 1659.6000000000001, "end": 1665.3600000000001, "text": " without any training, right? You'd simply forward prop backward prop. There you have it. So cool." }, { "start": 1666.4, "end": 1672.64, "text": " Now they come, they, here is the experiment where they now really do this NAS benchmark and they" }, { "start": 1672.64, "end": 1679.44, "text": " compare with other methods. So some of these other methods are designed to do the call weight" }, { "start": 1679.44, "end": 1685.44, "text": " sharing, which basically is a technique where you can sort of speed up the speed up the algorithm" }, { "start": 1685.44, "end": 1691.52, "text": " as compared to non weight sharing and the non weight sharing. That's one of these we have discussed" }, { "start": 1691.52, "end": 1697.68, "text": " initially. That was my initial example with the controller and so on where it takes super long." }, { "start": 1697.68, "end": 1706.4, "text": " So here you see the method and how long each method takes. Now the best ones, as you can see" }, { "start": 1706.4, "end": 1715.1200000000001, "text": " already, the best ones here, or these, these methods right here are the best ones. They score" }, { "start": 1715.1200000000001, "end": 1722.64, "text": " somewhat like a 93.9 or so on C for 10, whereas these weight sharing ones, they don't perform too" }, { "start": 1722.64, "end": 1730.88, "text": " well, except this one seems to perform quite well. And in this hours case, they perform worse than" }, { "start": 1730.88, "end": 1736.64, "text": " that, but they still perform better than a lot of the weight sharing ones. 
So what their point is" }, { "start": 1736.64, "end": 1744.88, "text": " basically is that they get a pretty good score, which is a 91.5 on C for 10, which is, you know," }, { "start": 1744.88, "end": 1753.5200000000002, "text": " it's at least not degenerate. It's a, it's a good accuracy. They score that with simply evaluating" }, { "start": 1753.52, "end": 1762.16, "text": " 10 architectures, right? And as n goes up, as they evaluate more and more architectures, they do," }, { "start": 1763.6, "end": 1769.36, "text": " they do get better, but not much. So they have a discussion here. I'm having trouble moving this." }, { "start": 1771.68, "end": 1776.4, "text": " All right, so we'll sort of go through the discussion. We report results, yada, yada, yada," }, { "start": 1776.4, "end": 1782.32, "text": " yada, yada. As the setup, the non weight sharing methods are given a time budget of 12,000 seconds" }, { "start": 1782.32, "end": 1787.84, "text": " for our method and the non weight sharing methods are averaged. Accuracies are averaged over 500" }, { "start": 1787.84, "end": 1794.8, "text": " runs for weight sharing methods. Accuracies are reported over three runs with the exception of" }, { "start": 1794.8, "end": 1800.8, "text": " G Das. Our method is able to outperform all the weight sharing methods while requiring a fraction" }, { "start": 1800.8, "end": 1805.36, "text": " of the search time. And that you may see at the table. This is the real, I mean, this is the real" }, { "start": 1805.36, "end": 1812.9599999999998, "text": " deal here. They only use here 1.7 seconds compared to the 12,000 seconds of the other methods. And" }, { "start": 1812.9599999999998, "end": 1821.28, "text": " you reach almost the same accuracy. Now to be said, 2% in this particular regime on C410 is still a" }, { "start": 1821.28, "end": 1827.1999999999998, "text": " sizable difference. And that's the same benchmark, right? With the same sort of the same training" }, { "start": 1827.1999999999998, "end": 1832.32, "text": " schedule and so on. So there's not too much room to tune here. You simply have to find a better" }, { "start": 1832.32, "end": 1842.3999999999999, "text": " architecture. So these things are still sizably ahead of this. And what it appears to me that" }, { "start": 1842.3999999999999, "end": 1848.32, "text": " these methods here that don't perform well, they're simply crap. It seems they're simply," }, { "start": 1849.04, "end": 1854, "text": " I don't know, but they might be trying out something or, you know, doing something" }, { "start": 1854, "end": 1862.16, "text": " researchy or whatnot. But it seems like if you're well able to weed out the bad architectures," }, { "start": 1862.16, "end": 1870, "text": " you might be getting to a score like this. And then if you are actually performing a search to" }, { "start": 1870, "end": 1876, "text": " find the best one, then you might be getting to somewhere like this. And you can see this here" }, { "start": 1876, "end": 1884, "text": " throughout. So in C4100, they achieve a better score than these things, but a worse score than" }, { "start": 1884, "end": 1894.8, "text": " the non-weight sharing method. And in ImageNet, the difference is even larger. So again, what I" }, { "start": 1894.8, "end": 1902.4, "text": " can see here is that theirs is a good method to maybe get you, let's say, 90% of the way you want" }, { "start": 1902.4, "end": 1910.24, "text": " to go. 
And what's interesting is that here they say, we also show the effect of sample size. We" }, { "start": 1910.24, "end": 1914.64, "text": " show the accuracy of the networks chosen by our method for each n. So that's the sample size." }, { "start": 1914.64, "end": 1920.64, "text": " We list the optimal accuracy for sample sizes 10 and 100 and random selection over the whole benchmark." }, { "start": 1921.76, "end": 1927.76, "text": " So in this case, they have the optimal one, which I guess they just draw 10 samples and then take" }, { "start": 1927.76, "end": 1931.76, "text": " the best one. So they train all of them and then take the best one. And you can see that" }, { "start": 1931.76, "end": 1940.8799999999999, "text": " already gets you to the 93. And whereas in their case, sometimes when they add more, they get worse." }, { "start": 1940.8799999999999, "end": 1947.76, "text": " So here they get better, but then they get worse again. So they comment on this right here. We" }, { "start": 1947.76, "end": 1953.36, "text": " observe that the sample size does not have a large effect on the accuracy of our method. But note" }, { "start": 1953.36, "end": 1958.56, "text": " that as sample size increases, our method suffers from a small amount of noise, increasing the gap" }, { "start": 1958.56, "end": 1966.08, "text": " between our score and the optimal result. And of course, the key practical benefit is execution" }, { "start": 1966.08, "end": 1974.08, "text": " time. So again, they are massively faster than the other methods. But to me, it seems you could" }, { "start": 1974.08, "end": 1981.12, "text": " just think of combining these methods, right? You combine this with this in that what you want to do" }, { "start": 1981.12, "end": 1987.04, "text": " is actually actively search for the best ones. But by doing so, you could, if you could pretty" }, { "start": 1987.04, "end": 1993.28, "text": " quickly weed out the bad ones using this method down here, you might already have like a big speed" }, { "start": 1993.28, "end": 2001.28, "text": " up. Because again, with comparison to this random ones, what appears to happen is that they get good" }, { "start": 2001.28, "end": 2008.08, "text": " at finding, you know, your 90% architecture, but then they fail to differentiate the top" }, { "start": 2008.08, "end": 2014.8799999999999, "text": " performance performers from each other, where you'd really have to train the network to find out" }, { "start": 2014.88, "end": 2024.16, "text": " what's, you know, which one's better. So yeah, here they say they visualize the trade off between" }, { "start": 2024.16, "end": 2030.48, "text": " search time and accuracy for C410 for different NAS algorithms on the NAS benchmark. By removing" }, { "start": 2030.48, "end": 2035.2, "text": " the need for training, our method is able to find accurate networks in seconds instead of hours." }, { "start": 2035.2, "end": 2042, "text": " And here you can see the accuracy and here you can see the time and all the good ones are either" }, { "start": 2042, "end": 2051.44, "text": " way over here or here. And theirs is almost at zero while being quite close to the accuracy of" }, { "start": 2051.44, "end": 2060.16, "text": " the other ones. All right, yeah, that was that was this paper. 
Again, I think this is pretty" }, { "start": 2060.16, "end": 2066.48, "text": " valuable if you are especially if you're in a new domain, where you might not know what kind of" }, { "start": 2066.48, "end": 2071.92, "text": " network to build, you might just be able to write a little script that generates networks, run it" }, { "start": 2071.92, "end": 2077.04, "text": " through this algorithm, and at least you get an idea of which ones are certainly not worth" }, { "start": 2077.04, "end": 2082.8, "text": " considering. And then you can simply select one of the other ones. It doesn't, you know, often it" }, { "start": 2082.8, "end": 2087.44, "text": " doesn't need to be the best ones. And you can then tweak it a little bit manually, the ones you found," }, { "start": 2087.44, "end": 2093.12, "text": " maybe you see some regularity. And yeah, that was my two cents on this paper. I hope you liked it." }, { "start": 2093.12, "end": 2100.24, "text": " If you did, consider sharing it out and telling your friends about it and subscribing, liking," }, { "start": 2100.24, "end": 2127.2799999999997, "text": " and leave a comment if you agree or disagree. That was it. Bye bye." } ]
qtu0aSTDE2I
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "artificial intelligence", "wake sleep algorithm", "program synthesis", "ai program synthesis", "program synthesis deep learning", "dreamcoder", "dream coder", "mit dream coder", "bayesian program search", "neural guided search", "learning to sort a list", "neural networks learn sorting", "deep learning physical laws", "deep learning symbolic reasoning", "symbolic machine learning", "symbolic artificial intelligence", "deep learning tutorial" ]
#dreamcoder #programsynthesis #symbolicreasoning Classic Machine Learning struggles with few-shot generalization for tasks where humans can easily generalize from just a handful of examples, for example sorting a list of numbers. Humans do this by coming up with a short program, or algorithm, that explains the few data points in a compact way. DreamCoder emulates this by using neural guided search over a language of primitives, a library, that it builds up over time. By doing this, it can iteratively construct more and more complex programs by building on its own abstractions and therefore solve more and more difficult tasks in a few-shot manner by generating very short programs that solve the few given datapoints. The resulting system can not only generalize quickly but also delivers an explainable solution to its problems in form of a modular and hierarchical learned library. Combining this with classic Deep Learning for low-level perception is a very promising future direction. OUTLINE: 0:00 - Intro & Overview 4:55 - DreamCoder System Architecture 9:00 - Wake Phase: Neural Guided Search 19:15 - Abstraction Phase: Extending the Internal Library 24:30 - Dreaming Phase: Training Neural Search on Fictional Programs and Replays 30:55 - Abstraction by Compressing Program Refactorings 32:40 - Experimental Results on LOGO Drawings 39:00 - Ablation Studies 39:50 - Re-Discovering Physical Laws 42:25 - Discovering Recursive Programming Algorithms 44:20 - Conclusions & Discussion Paper: https://arxiv.org/abs/2006.08381 Code: https://github.com/ellisk42/ec Abstract: Expert problem-solving is driven by powerful languages for thinking about problems and their solutions. Acquiring expertise means learning these languages -- systems of concepts, alongside the skills to use them. We present DreamCoder, a system that learns to solve problems by writing programs. It builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages. A ``wake-sleep'' learning algorithm alternately extends the language with new symbolic abstractions and trains the neural network on imagined and replayed problems. DreamCoder solves both classic inductive programming tasks and creative tasks such as drawing pictures and building scenes. It rediscovers the basics of modern functional programming, vector algebra and classical physics, including Newton's and Coulomb's laws. Concepts are built compositionally from those learned earlier, yielding multi-layered symbolic representations that are interpretable and transferrable to new tasks, while still growing scalably and flexibly with experience. Authors: Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sable-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Joshua B. 
Tenenbaum

Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, I have a little challenge for you right here. Look at these numbers and see if you can figure out what comes where the question mark is. Now, if you look at it a little bit, you'll recognize that this is the sorting algorithm. You're supposed to sort these numbers in ascending order, and that's going to be the solution. Why I'm showing you this isn't because it's particularly hard or because I'm particularly good at sorting numbers. It is because this is a core feature of human intelligence that we haven't been able to reach with machine learning quite yet. We are able to look at very few examples and then generalize to new examples. We do that not the way machine learning does it, by gradient descent on a model, but by coming up with a rule such as, hey, this is sorting. Even if we didn't know what sorting was, we would be able to come up with the rule nevertheless, because we would realize, I need to compare the numbers and I need to pick the lowest one first, and then the second lowest one second, and so on. We humans are able to come up with rules to solve the problem, and in a more general sense, we're able to come up with a program, with an algorithm, that solves the problem. That is the point of this paper: to solve problems not with pure brute-force machine learning like gradient descent from a dataset, but by coming up with rules, with algorithms, to solve the problem. Now, this brings its inherent challenges. It's not a new approach, but this paper makes it more scalable than before. The paper is called DreamCoder: Growing Generalizable, Interpretable Knowledge with Wake-Sleep Bayesian Program Learning. It's by Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sable-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, and Joshua B. Tenenbaum. As the paper says itself: we present DreamCoder, a system that learns to solve problems by writing programs. It builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages. The entire model is going to be a system that sees problems, just a few of them, and comes up with programs that solve these problems. It does so in its own language: it builds up its own programming language, and then it's able to synthesize programs in this language that solve the problem. It does so by having a neural network guide that search. That's DreamCoder. It includes this wake-sleep algorithm, which has also been around for a while, but it's a different take on it. The wake-sleep learning algorithm alternately extends the language with new symbolic abstractions and trains the neural network on imagined and replayed problems. Past ventures into program synthesis have not really been scalable, because either they have some handcrafted programming language that you search over, or they have handcrafted rules of how you search, and so on. This system here is much more general, and it can solve a vast variety of different tasks. For example, here you can see the different types of tasks that the system can solve. There is list processing. Sorry, that's a bit heavy. There's list processing, such as summing lists, doubling each element, checking for evens, text editing, learning regexes for stuff, and also very creative things like creating graphics, creating block towers, regressing symbolically, recursive programming, and figuring out physical laws.
We've already looked at papers that figure out physical laws from data, but they have been geared towards that. This is the same system that can figure out all of these things. Now, of course, it's going to be configured a little bit differently if you talk about list processing versus figuring out physical laws, but it is the same underlying system.

Ultimately, what does that amount to? That amounts to you giving the system a problem. Let's say the problem right here is to sort a list; that's what we came up with at the beginning. So here you have the problem of sorting a list. You're going to give the program a few examples, like the three I gave you at the beginning, and the system is going to come up with a program. The program ultimately is going to look like the thing down here: it's going to come up with a program that implements the list sorting algorithm. It's going to do that by a few principles. Principle one, of course: it needs to fit all of the examples. It needs to explain all of the examples, otherwise it's not a correct program. And principle two: it needs to be simple, explainable in the sense that it needs to be very short, because there are many different rules that these lists follow. I could literally implement this as a hash table for these three lists, and that hash table would solve the problem exactly as well as the sorting algorithm. Now, the sorting algorithm is much more compact; it's simply this thing down here.

And beyond that, what the system does is it builds up a library of concepts. The system doesn't see the program at the bottom; the system actually sees this program right here. This is the sorting algorithm in the system's language, because the system has built up a learned library of concepts over time. So as we train the system to solve different tasks on lists, such as summing a few things, doubling a few things, and so on, it builds up this library of concepts. There are these primitives right here that you give it, and then it's able to come up with these concepts that we as programmers might call functions. So it's able to come up with a thing that can filter a list. It doesn't have it in its initial primitives, but it's able to discover it because it uses it again and again and again. And now it's able to use that function instead of the primitives. So whereas before it would have used the entire code in this thing, now it's just able to say: well, I want to use concept four right here. And that makes the programs that are written much shorter. It uses this to implement the maximum function, which it calls concept 13; of course it has no concept of what we would name the function. And then it's able to use concept 13 and concept four together to implement the nth-largest-element function. And once I have the nth-largest-element function, I can simply iterate: I have a list, I iterate over its length, and I always take the nth largest number. That will sort my list. So you can see that the program that sorts the list is super short in terms of this library we've built up.

So this is our challenge for building this system. We somehow need a system that is able to come up with programs to solve problems, that is able to build up a library, and that is able to efficiently search through that self-built-up library of concepts. And DreamCoder does all of this at the same time.
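To make that concept stack concrete, here is a toy Python rendering of it (my own reconstruction of the figure; the concept numbers follow the video, and distinct list elements are assumed):

```python
# Toy reconstruction (mine, not the system's output) of the stack:
# filter -> maximum -> nth largest -> sort.

def concept_4(predicate, xs):
    """'Filter': a routine the system discovers from its primitives."""
    return [x for x in xs if predicate(x)]

def concept_13(xs):
    """'Maximum', expressed via concept_4: the element nothing exceeds."""
    return [x for x in xs if not concept_4(lambda y: y > x, xs)][0]

def nth_largest(n, xs):
    """Built from concept_13 and concept_4 (assumes distinct elements)."""
    ys = list(xs)
    for _ in range(n - 1):
        m = concept_13(ys)
        ys = concept_4(lambda y: y < m, ys)
    return concept_13(ys)

def sort_list(xs):
    """'Iterate over the length and always take the n-th largest.'"""
    return [nth_largest(n, xs) for n in range(len(xs), 0, -1)]

print(sort_list([9, 2, 7, 1]))  # [1, 2, 7, 9]
```

Note how short `sort_list` is once the lower concepts exist; written out in raw primitives it would be several times longer.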
So DreamCoder has three different stages in which these things are tackled. Imagine you have a data set of tasks; the tasks here are these Xs. Now, the tasks can either be, as I understand it, a single thing like list sorting, but they can also be the general class of list problems, which makes more sense in this case. So imagine we have the general class of list problems. Now, it maintains, as we said, this library L. And you can really imagine this as a programming library: it contains functions that the program can call, and it also contains all the primitives that you give it. So this is going to be like a set. There are going to be a bunch of primitives like a plus b, a minus b, a times b; that's in terms of math, while here we're in lists. And there's also going to be a section down here that the program can fill itself. So the program can define a function that's like 2a plus b, and then it's able to call that. So that's the library right here.

Now, the system is given a task. The task here, as you can see, is a few examples. I don't even know what it does here; do you know what it does? It kind of reverses the list and adds one or subtracts one, something like this. Yeah, I think it reverses the list and then adds one. That's the task that we handle right here; you can see all of these examples are reversing and adding. I've actually not solved this before, so I might be wrong.

So what we have to do is come up with a program that solves these tasks, such that if we give the left side as an input, the right side appears. And that is hard. That is a hard problem, because we start right here with an empty program and we build up a search tree. Now every single one of those rules here could be applied. Let's say these are not math things but list things; I guess reversing is one of them, map is another one, but you get the point. You put these rules here, and you could apply the first rule: you could build a program made up out of the first rule. You could build a program made up of the second, or the third. Now if you already have a program, say a plus b, you could then again apply the first rule, which would give you a plus (a plus b). You could apply the second rule, which would give you a plus (a minus b); I'm just substituting the second element right here. This is obviously implemented in a functional programming language that makes all of this really well defined; I'm just showing it in easy mode. But you get the point: I can arbitrarily search through this tree, and I can apply each of those rules over and over and over again. You can already see that this is going to give me a massive search tree (the little enumeration sketch below shows how quickly this blows up). How am I going to solve these problems in these kinds of massive trees?

And that's where the neural network comes in. It's actually the only part in the system that is machine learned, as far as I understand it, or at least that is a neural network, since machine learning isn't only deep learning. The search through a discrete space that is really large is hard, but you as a human are able to do it. How are you able to do it? You have an intuition. You have some intuition that, for example, the lists appear to be the same length if you look at the problem.
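Here is what that naive enumeration looks like on a tiny toy list DSL of my own (not the paper's language): with just four primitives there are already 4^d pipelines of depth d, and that is before any argument choices.

```python
# Brute-force enumeration over a toy list DSL (my own construction).
# With 4 primitives there are 4**d pipelines of depth d: the tree explodes.

import itertools

PRIMITIVES = {
    "reverse": lambda xs: xs[::-1],
    "add1":    lambda xs: [x + 1 for x in xs],
    "sub1":    lambda xs: [x - 1 for x in xs],
    "tail":    lambda xs: xs[1:],
}

def run(names, xs):
    """Apply the named primitives left to right."""
    for name in names:
        xs = PRIMITIVES[name](xs)
    return xs

def enumerate_programs(max_depth):
    """Yield every pipeline of primitives up to a given length."""
    for depth in range(1, max_depth + 1):
        yield from itertools.product(PRIMITIVES, repeat=depth)

# Find any program consistent with the "reverse, then add one" examples.
examples = [([1, 2, 3], [4, 3, 2]), ([5, 9], [10, 6])]
for program in enumerate_programs(max_depth=3):
    if all(run(program, x) == y for x, y in examples):
        print(program)  # finds e.g. ('reverse', 'add1')
        break
```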
So you look at that and you say: well, maybe there's something with the ordering; maybe the first element corresponds to the first, or the first to the last, or something like this. So you have some kind of intuition of which rules you want to apply. And whenever you say "intuition" in a program, that's a prime place to put in a neural network. If you know AlphaGo or AlphaZero, that is exactly what it does: it is at a particular chess board, and it could do all of these different moves, but it cannot brute-force search all of the game tree, because that would be impossible; it's computationally too expensive. So what it does is it employs a neural network that tells it: well, this here looks promising off the bat, and this one doesn't, this one doesn't, this one looks promising, and so on. And then you only go down those two. And from there, again, you have many options, but the neural network eliminates almost all of them and tells you which ones look promising. So if the neural network is a good guide, that enables you to quickly build a program that might solve the problem.

So you do a neurally guided search: you propose programs in decreasing order under your model. This guiding model is a likelihood model: how likely is a program, given the task that you're trying to solve? You try the most likely one first, and then you go down. So you search for the best program, which in this case means the program that solves the task but is also the shortest. The intuition is always that a very short program is going to be the better program, because it's a simpler explanation. So the fewer steps you make in your search, the better the program, and the more the neural network likes the program, the better, because the neural network is trained for this. So you come up with the best program for the task: you choose the program that maximizes the likelihood of the program given the task and the library. If you apply Bayes' rule, this is proportional to the likelihood that the program generates the solution, which is just one or zero if you have a non-probabilistic program, times the likelihood of generating the program from your library, which is essentially determined by the number of search steps you need to make: fewer steps, higher likelihood.

Okay, so that's the wake algorithm. In the wake phase, you try to solve the tasks from the training set by coming up with programs that solve them. Now that gives you a data set of solved programs. So initially you're going to have a data set of tasks; you're going to run this through the wake phase, and most of the time you're probably going to fail; most of the time it's like: no, can't solve it. But some of the time you're going to succeed, so you're going to have a little bit of a data set of where you've actually succeeded. And this data set is now going to be the input into the sleep phases. So what do the sleep phases do? The sleep phases are crucial here, because if you only have the guided search, that's already okay; that's already good.
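A sketch of what that guided search could look like, again on the toy DSL; the `guide_score` function is a hard-coded stand-in for the trained recognition network, and the Bayes decomposition from above sits in the comments (my own simplification, not the paper's implementation):

```python
# Neurally guided best-first search over the same toy DSL. The target is
# P(program | task, library), which by Bayes' rule is proportional to
# P(task | program), one or zero for a deterministic program, times
# P(program | library), under which short, likely programs come first.

import heapq, math

PRIMITIVES = {
    "reverse": lambda xs: xs[::-1],
    "add1":    lambda xs: [x + 1 for x in xs],
    "sub1":    lambda xs: [x - 1 for x in xs],
    "tail":    lambda xs: xs[1:],
}

def run(names, xs):
    for name in names:
        xs = PRIMITIVES[name](xs)
    return xs

def guide_score(name, examples):
    """Hard-coded stand-in for the recognition network: pretend it noticed
    equal lengths and incremented values in the examples."""
    return {"reverse": 0.5, "add1": 0.4}.get(name, 0.05)

def guided_search(examples, max_depth=4):
    heap = [(0.0, ())]  # (cumulative negative log-probability, program)
    while heap:
        neg_logp, program = heapq.heappop(heap)
        if program and all(run(program, x) == y for x, y in examples):
            return program  # most likely consistent program found first
        if len(program) < max_depth:
            for name in PRIMITIVES:
                cost = neg_logp - math.log(guide_score(name, examples))
                heapq.heappush(heap, (cost, program + (name,)))
    return None

examples = [([1, 2, 3], [4, 3, 2]), ([5, 9], [10, 6])]
print(guided_search(examples))  # a two-step solution such as ('add1', 'reverse')
```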
But it's not going to help you to build more complex programs, because those are still out of reach. If you look at the list sorting program down here, it is so large that you can never get there with search alone, at least not in a reasonable time. You need to construct these abstract concepts, because this program here is much shorter; the short program is much shorter than the long program. And you can only get there by building these useful concepts, by building up the library.

So in the sleep phase, we're going to, first of all, build up the library, which means we're going to take this data set that we've constructed (here are all the things that we could solve) and look at our solutions. And we're going to compress them: grow the library to compress programs found during waking. Okay, so here we have a bunch of primitives; this is all the stuff we can do. Now we're going to see which of the things we use often in combination with each other. So suppose we very often applied the first rule twice: we applied a plus b, and then applied a plus b again, which amounts to a plus (a plus b), which is 2a plus b. Then we can say: since I use these two rules in conjunction very often, I'm going to make a new rule in my library that allows me to apply this in just one step instead of two. So I'm going to add 2a plus b to my library, because I already know I need those two together often, and now this is simply going to be a single rule. In reinforcement learning, this is sometimes called an option; it's a kind of higher-order action that you can take, and there's a lot of work trying to learn these options.

So what they do right here is sort of the same; it's a compression step. They're trying to compress the programs that you found during the wake phase. Here you can see an example of this: you have a program for task one and a program for task two. These don't necessarily even need to come from the same task description; they're just from the same data set. And you notice that you've used this subroutine right here, the orange subroutine, in both programs. What they do is they extract this subroutine into the library. And they have special algorithms for this; this is not an easy thing. They have a very efficient way to search through these program trees, recognize commonalities, and extract those. They don't describe that in this paper, but it is not a trivial thing to do.

However, imagine that you can just do this, and then you expand your library. So mathematically, you expand the library with the routine that maximizes the following. You essentially want to do two things. This here, the P of the library itself, is simply how large the library is: you want to keep your library small, because if you could just add things at will, your search problem would again become too large, since you would have all these rules you could apply. So you only want to keep the best rules. But then also, you want to maximize this right here over refactorings of the programs that you found. Again, this first term simply means the programs actually solve the tasks that you have.
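A very crude sketch of this compression idea (my own toy version; the paper's refactoring algorithm is far more sophisticated and searches over refactorings, not just literal co-occurrence):

```python
# Crude sketch of the abstraction phase: promote the most frequent
# adjacent pair of rules to a new library concept. The real objective
# also penalizes library size, balancing it against how short the
# refactored programs become; frequency is a stand-in for that here.

from collections import Counter

solved_programs = [
    ("add1", "reverse"),
    ("add1", "reverse", "tail"),
    ("tail", "add1", "reverse"),
]

def best_abstraction(programs, min_count=2):
    pairs = Counter()
    for prog in programs:
        pairs.update(zip(prog, prog[1:]))
    pair, count = pairs.most_common(1)[0]
    return pair if count >= min_count else None

def rewrite(prog, pair, new_name):
    out, i = [], 0
    while i < len(prog):
        if tuple(prog[i:i + 2]) == pair:
            out.append(new_name)
            i += 2
        else:
            out.append(prog[i])
            i += 1
    return tuple(out)

pair = best_abstraction(solved_programs)  # ('add1', 'reverse')
print([rewrite(p, pair, "concept_1") for p in solved_programs])
# [('concept_1',), ('concept_1', 'tail'), ('tail', 'concept_1')]
```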
If it's probabilistic, it's different, but we will just say the programs need to solve the tasks that you've encountered. And also, the programs need to be reasonably short given your library. You've already seen this "given your library" before, in the wake algorithm right here; this is the same term. And the important thing is that it is given your library: the sorting program up top isn't short, it's freaking long, but the same program given the library is really short, because I can use this concept 15 from the library, and concept 15 in itself can again use concept 13 and concept four. So the gray box right here would be kind of the size of your library, because these are all the concepts, and the orange box on the right would be the length of the program itself given the library. These two things combined need to be small, which makes sense. So you extend your library by the rules that are themselves small in terms of the library, that are used often, that solve a lot of problems, and that don't grow your library too much.

So now that you've come up with new rules, you're going to the third phase, which they call dreaming. This would, I think, already be enough, and they do ablations where they leave out different parts right here. But there is a thing you can do if you have this: essentially, you have a DSL for your problems. And what you can do if you have a DSL is you can just build programs at random. You can just take a bunch of rules and apply them, and if you do that, you de facto generate new problems to solve. Usually, during the wake phase, you have an input x and an output y, and you ask yourself which program solves this; these come from the data set. But this right here is built from a grammar, and your library is that grammar. So your library builds those programs. Now what I can do is, instead of doing the search-tree thing, simply apply a bunch of those rules: I start here and apply rule one, then rule two, then rule five, and so on. And that's going to give me a program. I can apply that program to some input data that also comes from my training set, and it is going to give me some different output data, because it's a different program. But this now gives me another training data point. It's not from the real program, but I don't care: I can train my neural network to again find this program. I can train my neural network to get better at finding programs, because in this case I know the program. The difference is: in the wake phase, I don't know what my program is; in the dream phase, I construct the program, so I know what the neural network should suggest as its steps. Here it should suggest, of all the options, the first one; here it should suggest the third one, and so on. So I can do supervised learning of my neural network, to learn to search better in the space of programs, by coming up with my own programs and therefore generating my own training data. That's exactly what this dreaming phase does. So in the dreaming phase, we're actually going to take two things. We're going to train this neural network, which they call the recognition model.
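A sketch of the fantasy generation just described (my own toy version): sample a program from the current library, execute it on a real training input, and the resulting input-output pair plus the known program becomes a supervised example.

```python
# Fantasy generation: since the program is constructed by us, the
# ground-truth label for the recognition model is known by construction.

import random

LIBRARY = {
    "reverse":   lambda xs: xs[::-1],
    "add1":      lambda xs: [x + 1 for x in xs],
    "tail":      lambda xs: xs[1:],
    "concept_1": lambda xs: [x + 1 for x in xs][::-1],  # learned abstraction
}

def sample_fantasy(training_inputs, max_len=3):
    length = random.randint(1, max_len)
    program = tuple(random.choice(list(LIBRARY)) for _ in range(length))
    x = random.choice(training_inputs)
    y = x
    for name in program:
        y = LIBRARY[name](y)
    return (x, y), program  # a fabricated task plus its known solution

random.seed(0)
task, program = sample_fantasy([[1, 2, 3], [5, 9], [4, 4, 2, 7]])
print(task, "is solved by", program)
# The recognition model is then trained to predict `program` from `task`.
```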
And you can see, this is the thing that guides your search: it predicts the best programs for typical tasks and the current library. And "typical tasks" means either tasks that we sample, or tasks with the input taken from the training set but where we come up with the output ourselves. So this, what I've just described, they call fantasies: draw programs from the library, construct the program, set task x to the output of executing the program, and then learn: given x, I want the program P. Train the neural network to come up with the program P, since I know what the program was. Or, alternatively, I can again use the tasks that I solved correctly right here, and use those as a training data set. I don't necessarily know that the program is the correct one; I just know that the program I came up with is able to solve the examples that I had. But it's good enough; it's good enough to act as a data set as well. And we do that to keep ourselves grounded in reality. We can't just start dreaming up fantasies, because this is a cycle: we come up with a library, like a language, to describe the problems, then we use the language to generate new problems, and then we use those generated problems to train our neural network. If we were to only do that, the danger is that we kind of drift away from reality, and our neural network learns very well to search through our imagined things, but as soon as something real comes along, it's so different from what we imagined that it's no longer viable. That's why we also use the replays, and I think they use a 50/50 mix of fantasies and replays (a sketch of what such a mix could look like follows below). The reason why they even use fantasies is to be more data efficient: you could do all of these things without the fantasy dreaming stage by simply training the neural network on successful replays, but that would be much more data inefficient.

So yeah, it's sort of a house of cards that you build up, and I feel it depends a lot on many things right here. It depends a lot on the primitives that you give beforehand. It depends a lot on the tasks you choose and how well they are suited. It depends on the language itself, like how you can apply the rules. Of course, the paper is trying to tell us that the same basic algorithm can solve a lot of these tasks, but I still think the tasks are very suited to what the network does, and the system is built a lot with tasks like that in mind. And that leads to this opportunity that you can even do this dreaming, because you can only do this dreaming thing if constructing problems out of your library L is useful for training your recognition model. If that were not useful, this algorithm would probably work much worse. But as it turns out, for these problems, it's useful.

So here you see another example of this abstraction step. We have two tasks in the wake phase that the system solved (by the way, there is a little bit of a mistake here, but we're humans, we can successfully work our way around this problem). The wake phase has actually solved both by coming up with programs, and now the sleep, the abstraction phase, is able to search through a giant number of refactorings in order to come up with this primitive, the map primitive.
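For what it's worth, here is a sketch of how such a 50/50 replay/fantasy batch could be assembled; the exact batching scheme is my assumption, not something the video or paper spells out:

```python
# Possible 50/50 replay/fantasy batching (the exact scheme is my guess).

import random

def training_batch(replays, fantasy_sampler, size=32):
    batch = []
    for _ in range(size):
        if replays and random.random() < 0.5:
            batch.append(random.choice(replays))  # grounded in real tasks
        else:
            batch.append(fantasy_sampler())       # dreamed-up task
    return batch

replays = [("task-a", ("reverse",)), ("task-b", ("add1", "tail"))]
print(training_batch(replays, lambda: ("fantasy", ("tail",)), size=4))
```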
And they stress again that their algorithm for this compression, which they don't necessarily explain in this paper, is able to wade through a giant number of possible refactorings to come up with these common sub-algorithms. It's not as easy as simply comparing trees; it's actually much harder, because you can refactor programs in many different ways, especially if you have a sufficiently general programming language like this one right here. So ultimately, it would extract this map primitive, and then you can see that both programs immediately become a lot shorter: the left one is this, and the right one is this. Once you have the primitive, they become super duper easy.

So in terms of experiments, what they do is apply this, as we said, to these kinds of list tasks, but also to these drawing tasks. And here the primitives aren't so much plus and minus and so on, or these languages that you've seen; the primitives are much more like: you have a pen, it is at a point, and you're able to move the pen in very basic forms, I imagine. So it's sort of a descriptive language for a vector graphic. And you can see right here: these are the LOGO graphics tasks. The model writes programs controlling a pen that draws the target picture. So the task is simply: give me a program that draws these pictures. You can see they are fairly diverse, so there is a lot that you somehow have to get right in order to be able to draw this.

And when they analyze what the algorithm comes up with during training on these tasks, they find that it discovers these primitives. If they analyze the library after training, it contains things like the semicircle function: the algorithm comes up with a function that takes a value r and draws a semicircle with the given radius; you can see that depending on the value of r, the semicircle is larger. It comes up with primitives like: I can draw a Greek spiral, I can draw an S-curve, and so on. It also comes up with higher-order functions, functions that take another function as an input. What do you see in C right here? Each row in B shows the same code executed with different parameters; each image in C shows the same code executed with different parameters and a different sub-program. In this case it's the radial symmetry function, which takes in a number n and a lower-order function, and it will replicate that lower-order function in kind of a circular manner. So it comes up with these things by itself, which is pretty cool, by the way.

And at the bottom, you can see what the dreaming phase comes up with. At the beginning, the programs that the dreaming phase comes up with are fairly simple, and as the library grows, so grows the complexity of the programs it's able to come up with. So this is sort of a built-in curriculum that the model has: it starts by constructing problems from its own library, and given that at the beginning the library is pretty primitive, it doesn't do much, but over time it does. By the way, I think the pen starts at the dark color and goes to the light; the color coding is where the pen starts and ends, though I'm not sure about the exact direction they stated.
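As a guess at what such a pen DSL could look like (my own construction: `semicircle` and the higher-order `radial_symmetry` mirror the primitives named in the figure, but the representation as point lists is an assumption):

```python
# A guess at a LOGO-like pen DSL: a parameterized semicircle primitive
# and a higher-order radial_symmetry(n, f) that replicates a sub-program
# n times around a circle, as in the figure.

import math

def semicircle(r, steps=32):
    """Points of a semicircle of radius r around the origin."""
    return [(r * math.cos(math.pi * i / steps),
             r * math.sin(math.pi * i / steps)) for i in range(steps + 1)]

def rotate(points, angle):
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def radial_symmetry(n, subprogram):
    """Higher-order primitive: repeat a sub-program n times in a circle."""
    base = subprogram()
    out = []
    for k in range(n):
        out.extend(rotate(base, 2 * math.pi * k / n))
    return out

flower = radial_symmetry(8, lambda: semicircle(1.0))
print(len(flower))  # eight rotated copies of the semicircle's points
```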
So yeah, it starts at blue and finishes at pink. And this is from super early; it doesn't need many iterations. The figure illustrates the most interesting dreams found across five runs, both before and after learning. So the number of iterations it takes to find solutions to new programs isn't that large.

But you can see (I feel, and this is just my opinion) that if you look at the problems, and if you look at the primitives that the thing comes up with, you probably see, as I do, that the person or the system that came up with these tasks constructed them in much the same way as these sorts of primitives. Probably the person that came up with the tasks wrote a little DSL, saying: okay, I'm going to have a semicircle function, and that's going to be parameterized, and so on. So these problems are themselves already sort of generated by a DSL, or by a human that has this kind of DSL in mind and applies it. And therefore, that's what I meant when I said the system is probably very geared towards these problems, because what it's going to end up doing is rediscovering how the data was generated. And that makes me a bit skeptical. So the question now is: is this going to work on data that wasn't generated in this way? Or, alternatively, you can ask: does the universe have a structure like this? And there are good arguments for that, like the fact that it can discover physical laws.

It can also do the same thing, by the way, with these tower buildings. You can see the primitives it's discovering are things like build an arch, build a wall, build a pyramid. Those are primitives with arguments, and different arguments will give you different structures right here, which is very cool. And these are the dreams down here, what it comes up with: pretty intricate dreams, combinations of those rules. Now again, the question is: does this work on, let's say, real-world data? Does real-world data behave similarly? Maybe; I don't know.

So here you can see a bunch of ablations, where they show that if you're missing the abstraction, for example, you often won't get very far. In these LOGO graphics tasks, you see pretty clearly that without abstraction or without dreaming, you won't get very far. Especially, I feel, missing abstraction hurts quite a bit, because if you can't abstract, you're only going to go so far in constructing programs: you can't construct large programs, even if you have a very good neural network guiding your search.

And lastly, they go about discovering physical laws; they sort of rediscover physical laws from numerical inputs. And that's what I mean: maybe the world is actually like this; at least that's how we humans solve problems. We search for a simple explanation of the things that we see. And science has been very successful; Newton's second law, for example, is literally this big, and it describes a whole lot of interesting physics, and similarly lots of other physical laws. It's kind of an unsolved mystery why everything is so simple, but given that it is, a program search system like this might very well be appropriate.
That being said, it probably can't solve computer vision or something like that out of the box, and they admit that in the last part here. But just look at the primitives it discovers by itself. From the initial primitives that you see right here, like map, zip, call (I don't even know what that is; I'm not into functional programming), it discovers the concept of subtracting vectors, adding vectors, dividing by two, and so on. From those, it constructs things like the square root function, which is pretty remarkable. And from those, it discovers things like the inverse square law. And you can then see that, for example, Newton's second law is only a combination of very few applications of library rules; it's an exceptionally short program, given this library. And also Coulomb's law: you can see it's just two rules applied to the four inputs, which, if you expand it, is a fairly large program, but because you have this library built up, it's a short program.

And they do one other experiment where they do recursive programming algorithms, like list operations again, but they only give it the bare minimum that, according to functional programming theory, as far as I understand it, are the primitives you need to solve these problems. And specifically, what it does is it first discovers the fold and unfold functions. Fold is also called reduce; I think that's a more common name. First it discovers these, and from these it builds all the other ones (there's a sketch of that bootstrapping after this paragraph). And they say, if you go and look at functional programming theory, that's exactly what it says is necessary: given fold and unfold, you can build all the other ones from these primitives. And again, you can see the list difference function is super duper short in terms of this library, once you have it. If you've discovered the zip function, that expands to a program that is fairly long, one you would never reach even with neurally guided program search. And reaching it is only one point; you then also have to recognize that it is actually the correct one. You do that as a human by looking at how short it is. Fully expanded, this is not a short program: encoding this as a hash table would be shorter than the expanded program, so with just two examples you would rather take the hash table. But given that you have all this library, zipping a and b with minus is actually much shorter than encoding it as a hash table.

All right, so for real-world data, they say here: much real-world data is far messier. A key challenge for program induction going forward is to handle more pervasive noise and uncertainty, by leaning more heavily on probabilistic and neural AI approaches. Recent research has explored program induction with various hybrid neuro-symbolic representations, and integrating these approaches with the library learning and bootstrapping capacities of DreamCoder could be especially valuable going forward. And I agree with this. So, in case it's not out yet: we had François Chollet on Machine Learning Street Talk, and if you know him, he came up with this ARC challenge, where you do almost the same thing as DreamCoder does, except with these kinds of pictures.
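Here is that fold/unfold bootstrapping in plain Python (standard functional programming, my own rendering rather than the system's discovered code):

```python
# Fold and unfold, and a few list functions bootstrapped from them.

def fold(f, acc, xs):  # also known as reduce
    for x in xs:
        acc = f(acc, x)
    return acc

def unfold(f, seed):
    """f(seed) returns (value, next_seed), or None to stop."""
    out = []
    while (step := f(seed)) is not None:
        value, seed = step
        out.append(value)
    return out

def length(xs):    return fold(lambda n, _: n + 1, 0, xs)
def total(xs):     return fold(lambda a, x: a + x, 0, xs)
def my_map(g, xs): return fold(lambda acc, x: acc + [g(x)], [], xs)
def my_range(n):   return unfold(lambda i: (i, i + 1) if i < n else None, 0)

print(my_map(lambda x: x * 2, my_range(4)))  # [0, 2, 4, 6]
print(total([1, 2, 3]), length([7, 7]))      # 6 2
```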
And you assume that humans have this thing called core knowledge, which they also allude to in this paper. Core knowledge is things like an intuitive understanding of physics, of objectness, and so on. So one of the ARC challenge tasks is like: there's kind of a thing here, and there's a thing here, and then the solution is: there's again the thing here. That's the solution, and you can already see from one example that it's like a ball bouncing off the wall. You get that by applying your core knowledge, so to say. So this, again, is very, very clean data; in ARC, I think, everything is super clean data. And if we want to apply this to real-world problems (this is also something that Chollet has said in the podcast, which I invite you to listen to as soon as it's out), we're going to have to combine the two layers. DreamCoder does the search part, the search over a DSL, and the DSL is learned. What deep learning usually does is perception; deep learning is really good at perception. So that is current deep learning, and up here is what DreamCoder, or program synthesis approaches in general, do. And we need a way to connect the two, a way to learn these jointly, because that's what you as a human somehow do: you're able to learn your perception model and your logic model, your reasoning model, at the same time, or jointly in some way. And we haven't exactly figured out how to do that yet. And I feel, and I agree with this paper, that this is probably going to be a very valuable thing to do.

All right, so let me know what you think about this paper; I invite you to read it. It is high level, but there are some other cool things in it, like DreamCoder learning regexes for different types of numbers, and so on. I think it's an interesting field; it's a bit different from just core machine learning. And that was it. I'll see you next time. Bye.
[ { "start": 0, "end": 4.4, "text": " Hi there, I have a little challenge for you right here." }, { "start": 4.4, "end": 10, "text": " Look at these numbers and see if you can figure out what comes where the question mark is." }, { "start": 10, "end": 12.68, "text": " Now, if you look at it a little bit," }, { "start": 12.68, "end": 16.96, "text": " you'll recognize that this is the sorting algorithm." }, { "start": 16.96, "end": 21.44, "text": " You're supposed to sort these numbers in ascending order," }, { "start": 21.44, "end": 24.12, "text": " and that's going to be the solution." }, { "start": 24.12, "end": 28.84, "text": " Why I'm showing you this isn't because it's particularly hard or because I'm" }, { "start": 28.84, "end": 31.6, "text": " particularly good at sorting numbers." }, { "start": 31.6, "end": 35.88, "text": " It is because this is a core feature of" }, { "start": 35.88, "end": 40.96, "text": " human intelligence that we haven't been able to reach with machine learning quite yet." }, { "start": 40.96, "end": 48.28, "text": " We are able to look at very few examples and then generalize to new examples." }, { "start": 48.28, "end": 54.68, "text": " We do that not by the way machine learning does it by gradient descent into a model," }, { "start": 54.68, "end": 58.8, "text": " but we do it by coming up with a rule such as," }, { "start": 58.8, "end": 60.88, "text": " hey, this is sorting." }, { "start": 60.88, "end": 63.72, "text": " Even if we didn't know what sorting was," }, { "start": 63.72, "end": 66.96000000000001, "text": " we would be able to come up with the rule nevertheless," }, { "start": 66.96000000000001, "end": 69.2, "text": " because we would realize," }, { "start": 69.2, "end": 72.8, "text": " I need to compare the numbers and I need to pick the lowest one first," }, { "start": 72.8, "end": 76.32, "text": " and then the second lowest one second, and so on." }, { "start": 76.32, "end": 81.68, "text": " We humans are able to come up with rules to solve the problem," }, { "start": 81.68, "end": 85.36000000000001, "text": " and in more general sense, we're able to come up with a program," }, { "start": 85.36000000000001, "end": 88.84, "text": " with an algorithm that solves the problem." }, { "start": 88.84, "end": 93.24000000000001, "text": " That is the point of this paper," }, { "start": 93.24000000000001, "end": 99.44000000000001, "text": " to solve problems not with pure brute force machine learning like gradient descent from" }, { "start": 99.44000000000001, "end": 105.16000000000001, "text": " a dataset but with coming up with rules with algorithms to solve the problem." }, { "start": 105.16000000000001, "end": 107.24000000000001, "text": " Now, this brings its inherent challenges." }, { "start": 107.24000000000001, "end": 108.72, "text": " It's not a new approach," }, { "start": 108.72, "end": 114.03999999999999, "text": " but this paper makes it more scalable than before." }, { "start": 114.03999999999999, "end": 116.28, "text": " The paper is called Dream Coder," }, { "start": 116.28, "end": 122.12, "text": " Growing Generalizable Interpretable Knowledge with Wake Sleep Bayesian Program Learning." }, { "start": 122.12, "end": 125.12, "text": " It's by Kevin Ellis, Catherine Wong, Maxwell Nye," }, { "start": 125.12, "end": 127.8, "text": " Matthias Sable-Meier, Luke Carey," }, { "start": 127.8, "end": 130.07999999999998, "text": " Luca Moral, Luke Hewitt," }, { "start": 130.07999999999998, "end": 135.16, "text": " Armando Soler-Lesema, and Joshua B. 
Tenbaum." }, { "start": 135.16, "end": 140.16, "text": " Again, the paper says itself," }, { "start": 140.16, "end": 142.4, "text": " we present Dream Coder," }, { "start": 142.4, "end": 148.2, "text": " a system that learns to solve problems by writing programs." }, { "start": 148.2, "end": 153.6, "text": " It builds expertise by creating programming languages for" }, { "start": 153.6, "end": 156.64, "text": " expressing domain concepts together with" }, { "start": 156.64, "end": 161.32, "text": " neural networks to guide the search for programs within these languages." }, { "start": 161.32, "end": 167.6, "text": " The entire model is going to be a system that sees problems," }, { "start": 167.6, "end": 169.07999999999998, "text": " just a few of them," }, { "start": 169.07999999999998, "end": 173.72, "text": " and comes up with programs that solve these problems." }, { "start": 173.72, "end": 175.64, "text": " It does so in its own language." }, { "start": 175.64, "end": 177.95999999999998, "text": " It builds up its own programming language," }, { "start": 177.95999999999998, "end": 184.64, "text": " and then it's able to synthesize programs in this language that solve the problem." }, { "start": 184.64, "end": 188.88, "text": " It does so by having a neural network guide that search." }, { "start": 188.88, "end": 190.6, "text": " That's Dream Coder." }, { "start": 190.6, "end": 193.07999999999998, "text": " It includes this wake-sleep algorithm," }, { "start": 193.07999999999998, "end": 195.88, "text": " which has been also around for a while," }, { "start": 195.88, "end": 198.24, "text": " but it's a different take on it." }, { "start": 198.24, "end": 202, "text": " The wake-sleep learning algorithm alternatively extends the language with" }, { "start": 202, "end": 204.76, "text": " new symbolic abstractions and trains" }, { "start": 204.76, "end": 209.07999999999998, "text": " the neural network on imagined and replayed problems." }, { "start": 209.07999999999998, "end": 216.88, "text": " The past ventures into program synthesis have all been not really scalable," }, { "start": 216.88, "end": 223, "text": " because either they have some handcrafted programming language that you search over," }, { "start": 223, "end": 226.92, "text": " or they have handcrafted rules of how you search, and so on." }, { "start": 226.92, "end": 229.72, "text": " This system here is much more general," }, { "start": 229.72, "end": 234.84, "text": " and it can solve a vast variety of different tasks." }, { "start": 234.84, "end": 237.24, "text": " For example, here you can see" }, { "start": 237.24, "end": 240.92, "text": " the different types of tasks that the system can solve." }, { "start": 240.92, "end": 243, "text": " There is list processing." }, { "start": 243, "end": 246.04, "text": " Sorry, that's a bit heavy." }, { "start": 246.04, "end": 248, "text": " There's list processing," }, { "start": 248, "end": 250.44, "text": " such as summing lists," }, { "start": 250.44, "end": 253.95999999999998, "text": " doubling each element, check for evens," }, { "start": 253.95999999999998, "end": 258.59999999999997, "text": " text editing, learning regex for stuff," }, { "start": 258.59999999999997, "end": 263.2, "text": " and also very creative things like creating graphics," }, { "start": 263.2, "end": 267.15999999999997, "text": " creating block towers, regressing symbolically," }, { "start": 267.15999999999997, "end": 270.76, "text": " recursive programming, and figuring out physical laws." 
}, { "start": 270.76, "end": 275.12, "text": " We've already looked at paper that figure out physical laws from data," }, { "start": 275.12, "end": 278.8, "text": " but they have been geared towards that." }, { "start": 278.8, "end": 283.2, "text": " This is the same system that can figure out all of these things." }, { "start": 283.2, "end": 286.52, "text": " Now, of course, it's going to be configured a little bit differently" }, { "start": 286.52, "end": 291.4, "text": " if you talk about list processing versus figuring out physical laws," }, { "start": 291.4, "end": 295.28000000000003, "text": " but it is the same underlying system." }, { "start": 295.28000000000003, "end": 299.2, "text": " Ultimately, what does that amount to?" }, { "start": 299.2, "end": 306.44, "text": " That amounts to you giving the system a problem." }, { "start": 306.44, "end": 309.8, "text": " Let's say the problem right here is..." }, { "start": 309.8, "end": 311.32, "text": " What do we have here?" }, { "start": 311.32, "end": 313.08, "text": " To sort a list." }, { "start": 313.08, "end": 315.12, "text": " That's what we came up with at the beginning." }, { "start": 315.12, "end": 319.08, "text": " Here you have the problem of sorting a list." }, { "start": 319.08, "end": 322.52, "text": " You're going to give the program a few examples," }, { "start": 322.52, "end": 325.28, "text": " like three like I gave you at the beginning," }, { "start": 325.28, "end": 329.71999999999997, "text": " and the system is going to come up with a program." }, { "start": 329.71999999999997, "end": 333.79999999999995, "text": " The program ultimately is going to look like the thing down here." }, { "start": 333.79999999999995, "end": 336.11999999999995, "text": " It's going to come up with a program" }, { "start": 336.11999999999995, "end": 339.59999999999997, "text": " that implements the list sorting algorithm." }, { "start": 339.59999999999997, "end": 342.88, "text": " It's going to do that by a few principles." }, { "start": 342.88, "end": 348.35999999999996, "text": " Principle one, of course, it needs to fit all of the examples." }, { "start": 348.35999999999996, "end": 350.28, "text": " It needs to explain all of the examples," }, { "start": 350.28, "end": 352.55999999999995, "text": " otherwise it's not a correct program." }, { "start": 352.56, "end": 358.32, "text": " And concept two is it needs to be easy." }, { "start": 358.32, "end": 362.68, "text": " It needs to be very, very explainable" }, { "start": 362.68, "end": 365.12, "text": " in the sense of it needs to be very short," }, { "start": 365.12, "end": 373.16, "text": " because there are many different rules that these lists follow." }, { "start": 373.16, "end": 374.96, "text": " I can come up with..." }, { "start": 374.96, "end": 377.24, "text": " I can literally create this as a hash table." }, { "start": 377.24, "end": 381.16, "text": " I can implement this as a hash table for these three lists," }, { "start": 381.16, "end": 384.44, "text": " and that hash table would solve the problem" }, { "start": 384.44, "end": 388.68, "text": " exactly as well as the sorting algorithm." }, { "start": 388.68, "end": 392.36, "text": " Now, the sorting algorithm is much more compact." }, { "start": 392.36, "end": 393.48, "text": " It's simply..." }, { "start": 393.48, "end": 395.8, "text": " Well, it's this thing down here." 
}, { "start": 395.8, "end": 400.92, "text": " And beyond that, what the system does" }, { "start": 400.92, "end": 404.56, "text": " is it builds up a library of concepts." }, { "start": 404.56, "end": 405.96000000000004, "text": " So not only..." }, { "start": 405.96000000000004, "end": 407.96000000000004, "text": " The system doesn't see the program at the bottom." }, { "start": 407.96, "end": 411.64, "text": " The system actually sees this program right here." }, { "start": 411.64, "end": 415.68, "text": " So this is the sorting algorithm in the system's language," }, { "start": 415.68, "end": 422.59999999999997, "text": " because the system has built up a learned library of concepts over time." }, { "start": 422.59999999999997, "end": 426.84, "text": " So as we train the system to solve different tasks on lists," }, { "start": 426.84, "end": 432.28, "text": " such as sum a few things, double a few things, and so on," }, { "start": 432.28, "end": 436.47999999999996, "text": " it builds up this library of concepts." }, { "start": 436.48, "end": 441.6, "text": " So there are these primitives right here that you give it," }, { "start": 441.6, "end": 445.04, "text": " and then it's able to come up with these concepts" }, { "start": 445.04, "end": 448.8, "text": " that we as programmers might call functions." }, { "start": 448.8, "end": 452.24, "text": " So it's able to come up with a thing that can filter a list." }, { "start": 452.24, "end": 454.84000000000003, "text": " It doesn't have it in its initial primitives," }, { "start": 454.84000000000003, "end": 459.56, "text": " but it's able to discover that because it uses it again and again and again." }, { "start": 459.56, "end": 464.12, "text": " And now it's able to use that function instead of the primitives." }, { "start": 464.12, "end": 471.2, "text": " So whereas before, it would have used the entire code in this thing," }, { "start": 471.2, "end": 473.36, "text": " now it's just able to say," }, { "start": 473.36, "end": 476.64, "text": " well, I want to use concept four right here." }, { "start": 476.64, "end": 481, "text": " And that makes the programs that are written much shorter." }, { "start": 481, "end": 485.12, "text": " So it uses this to implement the maximum function," }, { "start": 485.12, "end": 487.48, "text": " which it calls concept 13." }, { "start": 487.48, "end": 492.04, "text": " Of course, it has no concept of what we name function." }, { "start": 492.04, "end": 497.36, "text": " And then it's able to use concept 13 and concept four together" }, { "start": 497.36, "end": 501.6, "text": " to implement the nth largest element function." }, { "start": 501.6, "end": 504.8, "text": " And once I have the nth largest element function," }, { "start": 504.8, "end": 507.84000000000003, "text": " I can simply iterate from the beginning." }, { "start": 507.84000000000003, "end": 510.92, "text": " I have a list, I simply iterate over its length." }, { "start": 510.92, "end": 516.28, "text": " So I iterate that, and I always use the nth largest number." }, { "start": 516.28, "end": 518.84, "text": " And that will sort my list." }, { "start": 518.84, "end": 524.08, "text": " So you can see that the program that sorts the list is super short" }, { "start": 524.08, "end": 526.6, "text": " in terms of this library we've built up." }, { "start": 526.6, "end": 529.64, "text": " So this is our challenge for building this system." 
}, { "start": 529.64, "end": 535.9200000000001, "text": " We somehow need a system that is able to come up with programs to solve problems," }, { "start": 535.9200000000001, "end": 537.88, "text": " that is able to build up a library," }, { "start": 537.88, "end": 545.24, "text": " and that is able to efficiently search through that self-built up library of concepts." }, { "start": 545.24, "end": 549.6, "text": " And DreamCoder does all of this at the same time." }, { "start": 549.6, "end": 556.48, "text": " So DreamCoder has three different stages in which these things are tackled." }, { "start": 556.48, "end": 561, "text": " So imagine you have a data set of tasks." }, { "start": 561, "end": 564.96, "text": " So the tasks here are these Xs." }, { "start": 564.96, "end": 568, "text": " So X are the tasks." }, { "start": 568, "end": 572.2, "text": " Now, the tasks can either be, as I understand it," }, { "start": 572.2, "end": 576.08, "text": " of a single thing like list sorting, right?" }, { "start": 576.08, "end": 580.5600000000001, "text": " But they can also be the general class of list problems," }, { "start": 580.5600000000001, "end": 584.8000000000001, "text": " which makes more sense in our class." }, { "start": 584.8000000000001, "end": 598.6, "text": " So imagine we have the general class of list problems." }, { "start": 598.6, "end": 603.8000000000001, "text": " Now, it maintains, as we said, this library L." }, { "start": 603.8000000000001, "end": 608.16, "text": " And you can really imagine this as a programming library." }, { "start": 608.16, "end": 614.52, "text": " So it contains functions that the program can call." }, { "start": 614.52, "end": 617.84, "text": " And it also contains all the primitives that you give it." }, { "start": 617.84, "end": 620.52, "text": " So there are going to be a bunch of..." }, { "start": 620.52, "end": 623.1600000000001, "text": " So this is going to be like a set." }, { "start": 623.16, "end": 630.24, "text": " There are going to be a bunch of primitives like a plus b, a minus b, a times b." }, { "start": 630.24, "end": 632.04, "text": " That's in terms of math, right?" }, { "start": 632.04, "end": 633.56, "text": " Here we're in lists." }, { "start": 633.56, "end": 640.8399999999999, "text": " And there's also going to be a section down here that the program can fill itself." }, { "start": 640.8399999999999, "end": 646.6, "text": " So the program can define a function that's like 2a plus b, right?" }, { "start": 646.6, "end": 649.36, "text": " And then it's able to call that." }, { "start": 649.36, "end": 652.92, "text": " So that's the library right here." }, { "start": 652.92, "end": 658.24, "text": " Now, what the system needs to do is it's given a task." }, { "start": 658.24, "end": 663, "text": " So the task here, as you can see, is a few examples of..." }, { "start": 663, "end": 666.76, "text": " I don't even know what it does here." }, { "start": 666.76, "end": 668.1999999999999, "text": " Do you know what it does?" }, { "start": 668.1999999999999, "end": 676.16, "text": " It kind of reverses the list and adds one or subtracts one, something like this." }, { "start": 676.16, "end": 681.76, "text": " Yeah, I think it reverses the list and then it adds one, right?" }, { "start": 681.76, "end": 685.96, "text": " That's the task that we handle right here." }, { "start": 685.96, "end": 691.08, "text": " You can see all of these things is reversing and adding." 
}, { "start": 691.08, "end": 696.92, "text": " I've actually not solved that before, so it might be wrong." }, { "start": 696.92, "end": 703.56, "text": " So what we have to do is we have to come up with a program that solves these tasks, right?" }, { "start": 703.56, "end": 708.4399999999999, "text": " That if we give the left side as an input, the right side appears." }, { "start": 708.4399999999999, "end": 710.88, "text": " And that is hard." }, { "start": 710.88, "end": 716.78, "text": " That is a hard problem because we start right here with an empty program and we build up" }, { "start": 716.78, "end": 718, "text": " a search tree." }, { "start": 718, "end": 723.22, "text": " Now every single one of those rules here could be applied, right?" }, { "start": 723.22, "end": 726.88, "text": " So the program could be..." }, { "start": 726.88, "end": 730.84, "text": " Let's take the..." }, { "start": 730.84, "end": 735.2, "text": " Or yeah, let's say these are not math things, but these are list things." }, { "start": 735.2, "end": 742.48, "text": " So I guess reversing is one of them, map is another one, but you get the point." }, { "start": 742.48, "end": 747.2, "text": " So you have you put these rules here and you apply, you could apply the first rule, right?" }, { "start": 747.2, "end": 750.6, "text": " You could build a program made up out of the first rule." }, { "start": 750.6, "end": 755.26, "text": " You could build a program made up of the second or the third." }, { "start": 755.26, "end": 760.7, "text": " Now if you already have, so here your program is A plus B. If you have that, you could then" }, { "start": 760.7, "end": 770.12, "text": " again apply the first rule, which would give you A plus, sorry, A plus A plus B. You could" }, { "start": 770.12, "end": 777.5600000000001, "text": " apply the second rule, which would give you A plus A minus B, right?" }, { "start": 777.5600000000001, "end": 782.1600000000001, "text": " I'm just substituting kind of the second element right here." }, { "start": 782.1600000000001, "end": 787.5600000000001, "text": " This is obviously implemented in a functional programming language that makes all of this" }, { "start": 787.5600000000001, "end": 788.84, "text": " really well defined." }, { "start": 788.84, "end": 794.5600000000001, "text": " I'm just kind of showing it in easy mode, right?" }, { "start": 794.5600000000001, "end": 795.64, "text": " But you get the point." }, { "start": 795.64, "end": 801, "text": " I can arbitrarily search through this tree and I can apply each of those rules over and" }, { "start": 801, "end": 802.6, "text": " over and over again." }, { "start": 802.6, "end": 807.5, "text": " You can already see that this is going to give me a massive search tree." }, { "start": 807.5, "end": 813.2800000000001, "text": " How am I going to solve these problems in these kind of massive trees?" }, { "start": 813.2800000000001, "end": 816.7800000000001, "text": " And that's where the neural network comes in." }, { "start": 816.78, "end": 822.88, "text": " It's actually the only part in the system that is machine learned as far as I understand" }, { "start": 822.88, "end": 830.8399999999999, "text": " it or at least that is neural networked since machine learning isn't only deep learning." }, { "start": 830.8399999999999, "end": 839.9599999999999, "text": " But the search through a discrete space that is really large is hard, but you as a human" }, { "start": 839.9599999999999, "end": 841.12, "text": " are able to do it." 
}, { "start": 841.12, "end": 842.92, "text": " How are you able to do it?" }, { "start": 842.92, "end": 844.76, "text": " You have an intuition, right?" }, { "start": 844.76, "end": 851.68, "text": " You have some intuition that, you know, here, for example, the lists appear to be the same" }, { "start": 851.68, "end": 853.76, "text": " length if you look at the problem." }, { "start": 853.76, "end": 858.4399999999999, "text": " So you know, you look at that and you say, well, maybe there's something with the ordering," }, { "start": 858.4399999999999, "end": 862.48, "text": " maybe the first corresponds to the first or the first to the last or something like this." }, { "start": 862.48, "end": 867.12, "text": " So you have some kind of intuition of which rules you want to apply." }, { "start": 867.12, "end": 873.64, "text": " And this intuition, whenever you say intuition in a program, that's a prime place to put" }, { "start": 873.64, "end": 875.48, "text": " in a neural network." }, { "start": 875.48, "end": 882, "text": " So if you know alpha go or alpha zero, that is exactly what it does, right?" }, { "start": 882, "end": 885.28, "text": " It is here at a particular chess board, right?" }, { "start": 885.28, "end": 889.1999999999999, "text": " And it could do all of these different moves." }, { "start": 889.1999999999999, "end": 895.52, "text": " But it cannot brute force search all of the game tree because that would be impossible." }, { "start": 895.52, "end": 897.52, "text": " It's computationally too expensive." }, { "start": 897.52, "end": 903.16, "text": " So what it does is it employs a neural network that tells it, well, this here looks promising," }, { "start": 903.16, "end": 909.12, "text": " you know, off the bat, and this one doesn't, this one doesn't, this one looks promising," }, { "start": 909.12, "end": 910.12, "text": " and so on." }, { "start": 910.12, "end": 912.6, "text": " And then you only go down those two." }, { "start": 912.6, "end": 917.56, "text": " And from there, again, you have many options, but the neural network eliminates almost all" }, { "start": 917.56, "end": 922.1999999999999, "text": " of them and tells you which ones look promising." }, { "start": 922.1999999999999, "end": 930.64, "text": " So if the neural network is a good guide, that enables you to quickly build a program" }, { "start": 930.64, "end": 933.92, "text": " that might solve the problem." }, { "start": 933.92, "end": 943.1999999999999, "text": " So you do that, you search, you search, a newly guided search, you propose programs in decreasing" }, { "start": 943.1999999999999, "end": 945.84, "text": " order under your model." }, { "start": 945.84, "end": 948.3199999999999, "text": " So this here, this is your guiding model." }, { "start": 948.3199999999999, "end": 954.26, "text": " This is a likelihood model, like how likely is a program given the task that you're trying" }, { "start": 954.26, "end": 959.42, "text": " to solve, you try the most likely one first, and then you go down." }, { "start": 959.42, "end": 966, "text": " So you search for the best program, which in this case means the program that solves" }, { "start": 966, "end": 968.9599999999999, "text": " the task, but is also the shortest, right?" 
}, { "start": 968.9599999999999, "end": 976.4399999999999, "text": " The intuition is always that a very short program is going to be, is going to be the" }, { "start": 976.4399999999999, "end": 980.8, "text": " better program, because it's a kind of a simpler explanation, right?" }, { "start": 980.8, "end": 988.38, "text": " So here, the fewer steps you make in your search, that's a better program." }, { "start": 988.38, "end": 994.12, "text": " And the more the neural network likes the program, that's a better program, because" }, { "start": 994.12, "end": 996.24, "text": " the neural network is trained for this, right?" }, { "start": 996.24, "end": 1004, "text": " So the best pro and you come up with the best program for the task." }, { "start": 1004, "end": 1012, "text": " So you choose the program that maximizes the likelihood of the program given the task and" }, { "start": 1012, "end": 1020.84, "text": " the library, which is proportional if you apply Bayes rule to the likelihood of the" }, { "start": 1020.84, "end": 1027.4, "text": " likelihood that the program generates the solution, which this is just one or zero." }, { "start": 1027.4, "end": 1033.36, "text": " If you have a, if you have a non probabilistic program, and then this here, the likelihood" }, { "start": 1033.36, "end": 1039.48, "text": " of generating a program from your library is just going to be proportional to the number" }, { "start": 1039.48, "end": 1044.32, "text": " of steps, the number of search steps that you need to make." }, { "start": 1044.32, "end": 1046.5, "text": " Okay." }, { "start": 1046.5, "end": 1052.1200000000001, "text": " So that's the wake algorithm in the wake phase, you try to solve the problem from the training" }, { "start": 1052.1200000000001, "end": 1053.1200000000001, "text": " set." }, { "start": 1053.1200000000001, "end": 1060.2, "text": " You, sorry, you try to solve the, the tasks by coming up with programs that solve them." }, { "start": 1060.2, "end": 1066, "text": " Now that gives you a data set of solved programs, right?" }, { "start": 1066, "end": 1070.92, "text": " So initially you're going to have a data set of tasks." }, { "start": 1070.92, "end": 1073.92, "text": " You're going to run this through the wake phase." }, { "start": 1073.92, "end": 1076.96, "text": " And most of the time you're probably going to fail, right?" }, { "start": 1076.96, "end": 1079.84, "text": " Most of the time it's like, no, can't solve it." }, { "start": 1079.84, "end": 1082.92, "text": " But some of the time you're going to succeed." }, { "start": 1082.92, "end": 1088.56, "text": " So you're going to have a little bit of a data set of where you've actually succeeded." }, { "start": 1088.56, "end": 1095.46, "text": " And this data set is now going to be the, the input into the sleep phases." }, { "start": 1095.46, "end": 1097.2, "text": " So what do the sleep phases do?" }, { "start": 1097.2, "end": 1104.28, "text": " And the sleep phases are crucial here, because if you only, if you only have the guided search," }, { "start": 1104.28, "end": 1105.44, "text": " that's already okay." }, { "start": 1105.44, "end": 1107.16, "text": " That's already good, right?" 
}, { "start": 1107.16, "end": 1111.6200000000001, "text": " But it's not going to help you to build more complex programs, because those are still," }, { "start": 1111.6200000000001, "end": 1118.32, "text": " if you look at the program that is the list sorting program down here, like this is so" }, { "start": 1118.32, "end": 1126.08, "text": " large, you can never get here with search at least, you know, not in a reasonable time." }, { "start": 1126.08, "end": 1133.4399999999998, "text": " You need to construct these abstract concepts, because this program here is much shorter." }, { "start": 1133.4399999999998, "end": 1138.2, "text": " This short program is much shorter than the long program." }, { "start": 1138.2, "end": 1144.98, "text": " And you can only get there by building these, these useful concepts by building up the library." }, { "start": 1144.98, "end": 1149.64, "text": " So in the sleep phase, we're going to build, first of all, build up the library, which" }, { "start": 1149.64, "end": 1155.84, "text": " means we're going to take this data set that we've constructed, like here are all the things" }, { "start": 1155.84, "end": 1158.1200000000001, "text": " that we could solve." }, { "start": 1158.1200000000001, "end": 1162.5, "text": " Now we're going to take that." }, { "start": 1162.5, "end": 1168, "text": " And what we're going to do is we're going to look at our solutions." }, { "start": 1168, "end": 1174.04, "text": " And we're going to compress them grow library to compress programs found during waking." }, { "start": 1174.04, "end": 1178.96, "text": " Okay, so here we have a bunch of primitives, this is all the stuff we can do." }, { "start": 1178.96, "end": 1185.6, "text": " Now we're going to see which of the things that we use often in combination with each" }, { "start": 1185.6, "end": 1186.6, "text": " other." }, { "start": 1186.6, "end": 1192.6, "text": " So if we did very often dead, like, apply the first rule twice, right?" }, { "start": 1192.6, "end": 1197.84, "text": " So if we applied a plus b, and then we applied a plus b again, which would amount to a plus" }, { "start": 1197.84, "end": 1202.84, "text": " a plus b, which is to a plus b, we can say, since I use these two rules, we can say, since" }, { "start": 1202.84, "end": 1210.24, "text": " I use these two rules in conjunction very often, I'm going to make a new rule in my" }, { "start": 1210.24, "end": 1214.52, "text": " library, that allows me to simply apply this with just one step instead of two." }, { "start": 1214.52, "end": 1219.78, "text": " So I'm going to add to a plus b to my library." }, { "start": 1219.78, "end": 1226.58, "text": " Because now, since I already know I need those two often together, I, this is simply going" }, { "start": 1226.58, "end": 1231.1, "text": " to be just a single rule in reinforcement learning, this is sometimes called an option." }, { "start": 1231.1, "end": 1235.32, "text": " So it's kind of a higher order action that you can take." }, { "start": 1235.32, "end": 1241.1799999999998, "text": " And it is, you know, it's, it's, there, there's a lot of work trying to get these options." }, { "start": 1241.1799999999998, "end": 1245.76, "text": " So what they do right here is sort of the same, it's a compression step." }, { "start": 1245.76, "end": 1251.9199999999998, "text": " So they're trying to compress the programs that you found during the wake phase." 
}, { "start": 1251.9199999999998, "end": 1258.1799999999998, "text": " So here you can see an example of this, you have a program for task one, and a program" }, { "start": 1258.1799999999998, "end": 1259.1799999999998, "text": " for task two." }, { "start": 1259.18, "end": 1264, "text": " These don't necessarily even need to be the same tab, like they don't need to be the same." }, { "start": 1264, "end": 1269, "text": " They don't need to come from the same task description, right?" }, { "start": 1269, "end": 1272.16, "text": " But it's just kind of from the same data set." }, { "start": 1272.16, "end": 1278.0600000000002, "text": " And you notice that you've used this subroutine right here, the orange subroutine in both" }, { "start": 1278.0600000000002, "end": 1280.24, "text": " programs." }, { "start": 1280.24, "end": 1286.1200000000001, "text": " What they do is they extract this subroutine into the library." }, { "start": 1286.1200000000001, "end": 1288.2, "text": " And they have special algorithms for this." }, { "start": 1288.2, "end": 1289.76, "text": " This is not an easy thing." }, { "start": 1289.76, "end": 1297, "text": " So they have a very efficient way to search through these program trees, recognize commonalities" }, { "start": 1297, "end": 1298.96, "text": " and extract those." }, { "start": 1298.96, "end": 1302, "text": " They don't describe that in the paper." }, { "start": 1302, "end": 1306.56, "text": " But it is it is not a trivial trivial thing to do this." }, { "start": 1306.56, "end": 1309.88, "text": " However, imagine that you can just do this." }, { "start": 1309.88, "end": 1312.16, "text": " And then you expand your library." }, { "start": 1312.16, "end": 1318.44, "text": " So mathematically, you expand the library with the routine that maximizes the following." }, { "start": 1318.44, "end": 1322.8200000000002, "text": " So you essentially want to do two things." }, { "start": 1322.8200000000002, "end": 1329.5600000000002, "text": " This here is simply the the p of the library itself is simply how large the library is." }, { "start": 1329.5600000000002, "end": 1334.0800000000002, "text": " So you want to you want to keep your library small, right?" }, { "start": 1334.0800000000002, "end": 1339.3200000000002, "text": " If you could just add things at will, your search problem would again become too large" }, { "start": 1339.3200000000002, "end": 1341.52, "text": " because you have all these rules you could apply." }, { "start": 1341.52, "end": 1343.86, "text": " So you only want to keep the best rules." }, { "start": 1343.86, "end": 1351.6, "text": " But then also, you want to maximize this right here over refactorings of the programs that" }, { "start": 1351.6, "end": 1353.06, "text": " you found." }, { "start": 1353.06, "end": 1354.8, "text": " So you want to keep programs." }, { "start": 1354.8, "end": 1362.74, "text": " Again, this first term simply means the programs actually solve the tasks that you have." }, { "start": 1362.74, "end": 1365.54, "text": " So there, if it's probabilistic, it's different." }, { "start": 1365.54, "end": 1371.48, "text": " But we will just say the programs need to solve the tasks that you've encountered." }, { "start": 1371.48, "end": 1377.42, "text": " And also, the programs need to be reasonably short given your library, right?" }, { "start": 1377.42, "end": 1381.86, "text": " And the given your library, you've already seen this before in the wake algorithm right" }, { "start": 1381.86, "end": 1382.86, "text": " here." 
}, { "start": 1382.86, "end": 1385.1599999999999, "text": " This is the same term." }, { "start": 1385.1599999999999, "end": 1388.5, "text": " And the important thing is that is given your library, right?" }, { "start": 1388.5, "end": 1393.6, "text": " A program that the sorting program up top isn't short." }, { "start": 1393.6, "end": 1395.9199999999998, "text": " It's like it's freaking long." }, { "start": 1395.9199999999998, "end": 1402.9599999999998, "text": " But the the program, the same program, given the library is really short because I can" }, { "start": 1402.9599999999998, "end": 1409.6999999999998, "text": " use this concept 15 from the library and the concept 15 in itself can again use the concept" }, { "start": 1409.6999999999998, "end": 1412.04, "text": " 13 and the concept four." }, { "start": 1412.04, "end": 1418.3, "text": " So the gray box right here will be kind of the size of your library, right?" }, { "start": 1418.3, "end": 1419.84, "text": " Because this is all the concept." }, { "start": 1419.84, "end": 1424.36, "text": " And then the orange box on the right would be the length of the program itself given" }, { "start": 1424.36, "end": 1431.36, "text": " the library, these two things combined need to be small, which makes sense." }, { "start": 1431.36, "end": 1439.1599999999999, "text": " So you extend your library by the rules that are themselves small in terms of the library" }, { "start": 1439.1599999999999, "end": 1443.3999999999999, "text": " that are used often that solve a lot of problems." }, { "start": 1443.3999999999999, "end": 1446.6399999999999, "text": " And that don't grow your library too much." }, { "start": 1446.64, "end": 1453.44, "text": " So now that you've come up with new new rules, you're going to the third phase, and they" }, { "start": 1453.44, "end": 1455.88, "text": " call this dreaming." }, { "start": 1455.88, "end": 1461.3000000000002, "text": " So dreaming this, this would already be I think this would already be enough and they" }, { "start": 1461.3000000000002, "end": 1465.64, "text": " do ablations where they leave out different parts right here." }, { "start": 1465.64, "end": 1478.68, "text": " But a thing you can do if you have this, essentially, you have a DSL for your problems, right?" }, { "start": 1478.68, "end": 1485.48, "text": " And what you can do if you have a DSL is you can just apply, you can just build programs" }, { "start": 1485.48, "end": 1486.6000000000001, "text": " at random, right?" }, { "start": 1486.6000000000001, "end": 1489.88, "text": " You can just take a bunch of rules and apply them." }, { "start": 1489.88, "end": 1497.4, "text": " And if you do that, you if de facto generate new, new problems to solve." }, { "start": 1497.4, "end": 1505.0800000000002, "text": " So if usually during the wake phase, you have an input x and you have an output y, and you" }, { "start": 1505.0800000000002, "end": 1510.94, "text": " ask yourself, which program solves this, right?" }, { "start": 1510.94, "end": 1512.8000000000002, "text": " And these come from the data set." }, { "start": 1512.8000000000002, "end": 1517.64, "text": " But this right here is built from a grammar, right?" }, { "start": 1517.64, "end": 1520.92, "text": " There's a grammar, which is your library." }, { "start": 1520.92, "end": 1524.1200000000001, "text": " So your library builds those programs." 
}, { "start": 1524.1200000000001, "end": 1531.3600000000001, "text": " Now what I can do is I can simply I can simply instead of doing the search tree thing, I" }, { "start": 1531.3600000000001, "end": 1538.6000000000001, "text": " can just apply a bunch of those rules, I can just simply start here and apply rule one," }, { "start": 1538.6000000000001, "end": 1542.16, "text": " then apply rule two, apply rule five, and so on." }, { "start": 1542.16, "end": 1544.8000000000002, "text": " And that's going to give me a program." }, { "start": 1544.8, "end": 1552, "text": " I can apply that program to some input data that comes also from my training set is going" }, { "start": 1552, "end": 1555.8799999999999, "text": " to give me some different output data because it's a different program." }, { "start": 1555.8799999999999, "end": 1559.82, "text": " But this now gives me another training data point." }, { "start": 1559.82, "end": 1561.9199999999998, "text": " It's not from the real program." }, { "start": 1561.9199999999998, "end": 1563.22, "text": " But I don't care, right?" }, { "start": 1563.22, "end": 1571.32, "text": " I can train my neural network to I can train my neural network." }, { "start": 1571.32, "end": 1573.76, "text": " Now it's again, let's find this program." }, { "start": 1573.76, "end": 1581.36, "text": " I can train my neural network to get better at finding programs because I know the program" }, { "start": 1581.36, "end": 1582.64, "text": " in this case, right?" }, { "start": 1582.64, "end": 1588.52, "text": " The difference between in the wake phase, I don't know what my program is." }, { "start": 1588.52, "end": 1592.82, "text": " In the dream phase, I construct the program." }, { "start": 1592.82, "end": 1598.3799999999999, "text": " So I know what the neural network should suggest as my steps, right?" }, { "start": 1598.3799999999999, "end": 1603.74, "text": " Here it should suggest of all the options, it should suggest the first one." }, { "start": 1603.74, "end": 1607.4, "text": " Here it should suggest the third one, and so on." }, { "start": 1607.4, "end": 1615.2, "text": " So I can do supervised learning of my neural network to to learn to search better in the" }, { "start": 1615.2, "end": 1621.36, "text": " space of programs by coming up with my own programs, and therefore generating my own" }, { "start": 1621.36, "end": 1623.16, "text": " training data." }, { "start": 1623.16, "end": 1625.96, "text": " That's exactly what this dreaming phase does." }, { "start": 1625.96, "end": 1631.14, "text": " So in the dreaming phase, actually, we're going to take two things." }, { "start": 1631.14, "end": 1635.48, "text": " So we're going to train this neural network, which which they call the recognition model." }, { "start": 1635.48, "end": 1641.8400000000001, "text": " And you can see, this is this is the thing that guides your search to predict the best" }, { "start": 1641.8400000000001, "end": 1647.0800000000002, "text": " programs for typical tasks and the current library." }, { "start": 1647.0800000000002, "end": 1654.24, "text": " And typical tasks means either tasks that we sample or tasked with the input from the" }, { "start": 1654.24, "end": 1655.24, "text": " training set." }, { "start": 1655.24, "end": 1658.8600000000001, "text": " But, you know, we come up with the output ourselves." }, { "start": 1658.86, "end": 1665.24, "text": " So this what I've just described, they call fantasies, draw programs from the library." 
}, { "start": 1665.24, "end": 1671.1999999999998, "text": " So construct the program, set task x to the output of executing the program, and then" }, { "start": 1671.1999999999998, "end": 1680.04, "text": " learn, learn, given x, I want the program P train the neural network to come up with" }, { "start": 1680.04, "end": 1682.9199999999998, "text": " the program P since I know what the program was." }, { "start": 1682.92, "end": 1690.28, "text": " Or alternatively, I can again use these tasks that I solved correctly, right here." }, { "start": 1690.28, "end": 1693.44, "text": " And I can use those as a training data set." }, { "start": 1693.44, "end": 1701.3200000000002, "text": " Since I already I know that I just like I don't necessarily know that the program is" }, { "start": 1701.3200000000002, "end": 1702.3200000000002, "text": " the correct one." }, { "start": 1702.3200000000002, "end": 1708.8400000000001, "text": " I just know that the program I came up with is able to solve the examples that I had." }, { "start": 1708.8400000000001, "end": 1710.4, "text": " But it's good enough, right?" }, { "start": 1710.4, "end": 1715.3200000000002, "text": " It's good enough to act as a data set as well." }, { "start": 1715.3200000000002, "end": 1718.8600000000001, "text": " And we do that to keep ourselves grounded in reality." }, { "start": 1718.8600000000001, "end": 1725.0400000000002, "text": " We can't just start, you know, start dreaming up fantasies, because the fantasies, it's" }, { "start": 1725.0400000000002, "end": 1726.2800000000002, "text": " sort of a cycle." }, { "start": 1726.2800000000002, "end": 1734.2, "text": " And like, this is a cycle, we come up with a library of like a language to describe the" }, { "start": 1734.2, "end": 1735.2, "text": " problems." }, { "start": 1735.2, "end": 1738.0800000000002, "text": " And then we use the language to generate new problems." }, { "start": 1738.08, "end": 1742.1, "text": " And then we use those generated problems to train our neural network." }, { "start": 1742.1, "end": 1747.48, "text": " If we were to only do that, the danger is that we kind of drift away from reality and" }, { "start": 1747.48, "end": 1752.1599999999999, "text": " that our neural network learns very well to search through our imagined things." }, { "start": 1752.1599999999999, "end": 1758.52, "text": " But you know, as soon as something real comes along, it's so different from what we imagined," }, { "start": 1758.52, "end": 1760.08, "text": " it's no longer viable." }, { "start": 1760.08, "end": 1761.6, "text": " That's why we also use the replays." }, { "start": 1761.6, "end": 1765.9199999999998, "text": " And I think they use a 5050 mix of fantasies and replays." }, { "start": 1765.92, "end": 1770.8400000000001, "text": " The reason why they even use fantasies is to be more data efficient." }, { "start": 1770.8400000000001, "end": 1777, "text": " So you could do all of these things without the fantasy dreaming stage by simply training" }, { "start": 1777, "end": 1780.24, "text": " the neural network on successful replays." }, { "start": 1780.24, "end": 1784.8400000000001, "text": " But that would be much more data inefficient." }, { "start": 1784.8400000000001, "end": 1788.5800000000002, "text": " So yeah, it's sort of a house of cards that you build up." }, { "start": 1788.5800000000002, "end": 1792.28, "text": " And I feel it depends a lot on many things right here." 
}, { "start": 1792.28, "end": 1797.24, "text": " Like it depends a lot on the primitives that you give beforehand." }, { "start": 1797.24, "end": 1801.3999999999999, "text": " It depends a lot on the tasks you choose and how well they are suited." }, { "start": 1801.3999999999999, "end": 1806.44, "text": " It depends on the language itself, like how you can apply the rules." }, { "start": 1806.44, "end": 1810.84, "text": " Of course, the paper is trying to tell us that the same basic algorithm can solve a" }, { "start": 1810.84, "end": 1812.34, "text": " lot of these tasks." }, { "start": 1812.34, "end": 1817.68, "text": " But I still think the tasks are very suited to what the network does." }, { "start": 1817.68, "end": 1824.2, "text": " And the network is or the system is built a lot with tasks like that in mind." }, { "start": 1824.2, "end": 1831.28, "text": " And that leads to the that leads to this opportunity that you can even do this dreaming, because" }, { "start": 1831.28, "end": 1839.24, "text": " you can only do this dreaming thing if you know if constructing problems out of your" }, { "start": 1839.24, "end": 1846.48, "text": " library right here out of your library L is is useful for training your recognition" }, { "start": 1846.48, "end": 1847.48, "text": " model." }, { "start": 1847.48, "end": 1853.8, "text": " If that were not useful, this algorithm would probably work much worse." }, { "start": 1853.8, "end": 1856.88, "text": " But as it turns out for these problems, it's useful." }, { "start": 1856.88, "end": 1861.88, "text": " So here you see another example of this abstraction step." }, { "start": 1861.88, "end": 1871.28, "text": " So we have we have two tasks in the in the wake phase that the the system solved by the" }, { "start": 1871.28, "end": 1876, "text": " way, there is a little bit of a mistake here." }, { "start": 1876, "end": 1882.36, "text": " But you know, we're we're humans, we can we can successfully work our way around this" }, { "start": 1882.36, "end": 1885.4, "text": " problem, which, yeah." }, { "start": 1885.4, "end": 1892, "text": " So there are, you know, these these, the wake phase has actually solved both by coming up" }, { "start": 1892, "end": 1893.9, "text": " with programs." }, { "start": 1893.9, "end": 1902.56, "text": " And now the the sleep the abstraction phase is able to search through a giant number of" }, { "start": 1902.56, "end": 1909.84, "text": " refactorings in order to come up with this primitive, the map primitive." }, { "start": 1909.84, "end": 1914.76, "text": " And they stress again, so their algorithm that they have for this compression, which" }, { "start": 1914.76, "end": 1921.56, "text": " they don't explain necessarily in this paper, but is is able to wade through a giant number" }, { "start": 1921.56, "end": 1927.84, "text": " of possible refactorings to come up with these common sub algorithms." }, { "start": 1927.84, "end": 1930.9199999999998, "text": " It's not as easy as simply looking at comparing trees." }, { "start": 1930.92, "end": 1935.72, "text": " It's actually much harder because you can refactor programs in many different ways," }, { "start": 1935.72, "end": 1942.8400000000001, "text": " as especially if you have a sufficiently general programming language like this one right here." }, { "start": 1942.8400000000001, "end": 1947.6000000000001, "text": " So ultimately, it would extract this map primitive." 
}, { "start": 1947.6000000000001, "end": 1953.5600000000002, "text": " And then you can see that both programs immediately become a lot shorter, like the the top program." }, { "start": 1953.5600000000002, "end": 1956.3600000000001, "text": " Sorry, the left one is this and the right one is this." }, { "start": 1956.36, "end": 1963.1999999999998, "text": " Once you have the primitive, they become super duper easy." }, { "start": 1963.1999999999998, "end": 1970.32, "text": " So in terms of experiments, what they do is they they apply this, as we said, to these" }, { "start": 1970.32, "end": 1973.4399999999998, "text": " kind of list tasks, but also to these drawing tasks." }, { "start": 1973.4399999999998, "end": 1980, "text": " And here the primitives aren't as much plus and minus and so on, or these languages that" }, { "start": 1980, "end": 1984.6399999999999, "text": " you've seen, the primitives are much more like you have a pen." }, { "start": 1984.64, "end": 1990.48, "text": " And you know, it is at a point and you're able to kind of move the pen in very basic" }, { "start": 1990.48, "end": 1993.14, "text": " forms, I imagine." }, { "start": 1993.14, "end": 1998.68, "text": " So it's sort of a descriptive descriptive language of a vector graphic." }, { "start": 1998.68, "end": 2001.7800000000002, "text": " And you can see right here." }, { "start": 2001.7800000000002, "end": 2009.5600000000002, "text": " So this is these logo graphic tasks, the model writes programs controlling a pen that draws" }, { "start": 2009.5600000000002, "end": 2010.94, "text": " the target picture." }, { "start": 2010.94, "end": 2013.9, "text": " So that's just these are the tasks." }, { "start": 2013.9, "end": 2018.96, "text": " The task is simply get me a program that draws these pictures." }, { "start": 2018.96, "end": 2023.52, "text": " Okay, those are the tasks you can see they are fairly diverse." }, { "start": 2023.52, "end": 2030.68, "text": " So there is a lot of things that you somehow have to have to get in order to be able to" }, { "start": 2030.68, "end": 2031.7800000000002, "text": " draw this." }, { "start": 2031.7800000000002, "end": 2038.8400000000001, "text": " And when they analyze what the algorithm comes up with during training of on these tasks" }, { "start": 2038.8400000000001, "end": 2042.0400000000002, "text": " is that it discovers these primitives." }, { "start": 2042.04, "end": 2048.56, "text": " So the primitives if they analyze the library after training contains things like the semicircle" }, { "start": 2048.56, "end": 2049.56, "text": " function." }, { "start": 2049.56, "end": 2056, "text": " So the algorithm comes up with a function that takes a value or and draws a semicircle" }, { "start": 2056, "end": 2063.44, "text": " with the given radius, you can see that depending on the value of our the semicircle is larger," }, { "start": 2063.44, "end": 2064.44, "text": " right?" }, { "start": 2064.44, "end": 2071.92, "text": " It all it comes up with primitives like I can draw a Greek spiral, I can draw an S curve." }, { "start": 2071.92, "end": 2073.76, "text": " And so on." }, { "start": 2073.76, "end": 2078.6800000000003, "text": " It also comes up with so what do you see in C right here." }, { "start": 2078.6800000000003, "end": 2085.32, "text": " So each row, sorry, each row and B shows the same code executed with different parameters." 
}, { "start": 2085.32, "end": 2090.8, "text": " Each image in C shows the same code executed with different parameters and a different" }, { "start": 2090.8, "end": 2092.46, "text": " sub program." }, { "start": 2092.46, "end": 2102.7200000000003, "text": " So it is able to to come up with higher order functions that so functions that take another" }, { "start": 2102.7200000000003, "end": 2110, "text": " function as an input in this case, the the radial symmetry function that takes in a number" }, { "start": 2110, "end": 2117.48, "text": " n and a lower order function, and it will replicate that lower order function in in" }, { "start": 2117.48, "end": 2119.2400000000002, "text": " kind of a circle manner." }, { "start": 2119.24, "end": 2123.9199999999996, "text": " So this, it comes it comes up with these things by itself." }, { "start": 2123.9199999999996, "end": 2127.68, "text": " Now, again, this is pretty cool, by the way." }, { "start": 2127.68, "end": 2132, "text": " And at the bottom, you can see what the dreaming phase comes up with." }, { "start": 2132, "end": 2136.4799999999996, "text": " So at the beginning, you can see that the programs that the dreaming phase comes up" }, { "start": 2136.4799999999996, "end": 2141.04, "text": " with are fairly simple, right?" }, { "start": 2141.04, "end": 2147.62, "text": " And as the library grows, so grows the complexity of the programs it's able to come up with." }, { "start": 2147.62, "end": 2152.08, "text": " So this is sort of a built in curriculum that the model has." }, { "start": 2152.08, "end": 2158.8399999999997, "text": " It starts by constructing problems from its own library, given that at the beginning," }, { "start": 2158.8399999999997, "end": 2160.68, "text": " the library is pretty primitive." }, { "start": 2160.68, "end": 2169.3199999999997, "text": " It, you know, it doesn't do much, but over time, it does." }, { "start": 2169.3199999999997, "end": 2176.24, "text": " Now here you can, by the way, I think the the pen starts at the dark and goes to the" }, { "start": 2176.24, "end": 2181.52, "text": " light like the color coding is where the pen starts and ends." }, { "start": 2181.52, "end": 2184.72, "text": " And I'm not I'm not sure the exact direction they stated." }, { "start": 2184.72, "end": 2189.3199999999997, "text": " So yeah, it's starts at blue and finishes at pink." }, { "start": 2189.3199999999997, "end": 2198.52, "text": " Okay, and you can this is during super early, like this doesn't need many iterations." }, { "start": 2198.52, "end": 2203.08, "text": " So illustrate the most interesting dreams found across five runs." }, { "start": 2203.08, "end": 2207.2799999999997, "text": " Oh, sorry, no across five runs both before and after learning." }, { "start": 2207.2799999999997, "end": 2213.52, "text": " But the sort of the iterations that it takes aren't that many to find solutions to new" }, { "start": 2213.52, "end": 2216.16, "text": " programs." 
}, { "start": 2216.16, "end": 2224.16, "text": " But you can see, I feel right, this is just my opinion, that if you look at the problems," }, { "start": 2224.16, "end": 2230.36, "text": " and if you look at the primitives that the thing comes up with, you probably see like" }, { "start": 2230.36, "end": 2240, "text": " I see that the person or the system who came up with these tasks is constructed in much" }, { "start": 2240, "end": 2245.7200000000003, "text": " the same way as these sort of primitives, like probably the person that came up with" }, { "start": 2245.7200000000003, "end": 2252.26, "text": " the tasks wrote a little DSL, saying, okay, you know, I'm gonna, you know, have a semicircle" }, { "start": 2252.26, "end": 2256.1200000000003, "text": " function, and that's going to be parameterized, and so on." }, { "start": 2256.12, "end": 2265.16, "text": " And no, so these, these problems themselves are sort of generated by already by a DSL" }, { "start": 2265.16, "end": 2270.08, "text": " or by a human that has kind of this DSL in mind and applies it." }, { "start": 2270.08, "end": 2276.7599999999998, "text": " And therefore, I think that's what I said when I said it's probably the system is very" }, { "start": 2276.7599999999998, "end": 2280.48, "text": " geared towards these problems, because what it's going to end up doing, it's going to" }, { "start": 2280.48, "end": 2284.8599999999997, "text": " end up kind of rediscovering how the data was generated." }, { "start": 2284.86, "end": 2292.4, "text": " And that makes me a bit so so the question now is, does is this going to work on data" }, { "start": 2292.4, "end": 2295.76, "text": " that wasn't generated in this way?" }, { "start": 2295.76, "end": 2301.8, "text": " Or alternatively, you can ask, does the universe have a structure like this?" }, { "start": 2301.8, "end": 2305.76, "text": " And there's good arguments like it like it can discover physical laws." }, { "start": 2305.76, "end": 2310.32, "text": " So here, it can also do, by the way, the same thing with these tower buildings." }, { "start": 2310.32, "end": 2315.84, "text": " And you can see the primitives it's discovering are things like build an arch, build a wall," }, { "start": 2315.84, "end": 2321.48, "text": " build a pyramid, like those are primitives and with arguments, and the different arguments" }, { "start": 2321.48, "end": 2327.32, "text": " will give you different structures right here is very cool." }, { "start": 2327.32, "end": 2330.84, "text": " And these are the dreams down here, what it comes up with." }, { "start": 2330.84, "end": 2336.2000000000003, "text": " So it's, you know, pretty intricate dreams, the combination of those rules." }, { "start": 2336.2, "end": 2342.48, "text": " Now, again, the question is, does this work on let's say, real world data?" }, { "start": 2342.48, "end": 2346.96, "text": " And I feel that is, you know, is real world data?" }, { "start": 2346.96, "end": 2348.9199999999996, "text": " Does it behave similarly?" }, { "start": 2348.9199999999996, "end": 2351.48, "text": " And you know, maybe, I don't know." }, { "start": 2351.48, "end": 2352.52, "text": " Yeah." }, { "start": 2352.52, "end": 2358.2, "text": " So here you can see a bunch of ablations where they show that if you for example, if you're" }, { "start": 2358.2, "end": 2364.66, "text": " missing the abstraction, you won't get very far very often." 
}, { "start": 2364.66, "end": 2370, "text": " For example, in these in these logo graphics, you see pretty clearly that without abstraction" }, { "start": 2370, "end": 2376.7999999999997, "text": " or without dreaming, you won't you won't get very far, especially I feel that abstraction" }, { "start": 2376.7999999999997, "end": 2378.64, "text": " hurts quite a bit." }, { "start": 2378.64, "end": 2384.96, "text": " Because if you can't abstract, you're only going to go so far in constructing programs." }, { "start": 2384.96, "end": 2389.52, "text": " So you can't construct large programs, even if you have a very good neural network guiding" }, { "start": 2389.52, "end": 2392.8599999999997, "text": " your search." }, { "start": 2392.86, "end": 2400.42, "text": " And lastly, they go about, as I said, discovering sort of physical laws, and they sort of rediscover" }, { "start": 2400.42, "end": 2406.2400000000002, "text": " physical laws from numerical inputs." }, { "start": 2406.2400000000002, "end": 2410.52, "text": " And that's what I mean, maybe the world is actually like this, at least that's how we" }, { "start": 2410.52, "end": 2413.28, "text": " humans solve problems, right?" }, { "start": 2413.28, "end": 2420.04, "text": " We search for a simple, simple explanation to the things that we see." }, { "start": 2420.04, "end": 2426.04, "text": " And you know, science has been very successful, especially, you know, Newton has described," }, { "start": 2426.04, "end": 2429.8, "text": " Newton's second law is like literally this big." }, { "start": 2429.8, "end": 2435.24, "text": " So and it describes a whole lot of interesting physics." }, { "start": 2435.24, "end": 2442.88, "text": " And you know, similarly, lots of other physical laws, which is kind of an unsolved mystery" }, { "start": 2442.88, "end": 2445.18, "text": " why everything is so simple." }, { "start": 2445.18, "end": 2452.7999999999997, "text": " But given that it is a program like this might very well be appropriate, so our program search" }, { "start": 2452.7999999999997, "end": 2456.3599999999997, "text": " system might very well be appropriate." }, { "start": 2456.3599999999997, "end": 2463.08, "text": " You know, that being said, it probably can't out of the box solve computer vision or something" }, { "start": 2463.08, "end": 2464.08, "text": " like this." }, { "start": 2464.08, "end": 2470.74, "text": " And they admit that in the in the in the last part here, but just look at kind of the primitives" }, { "start": 2470.74, "end": 2473.12, "text": " it discovers itself." }, { "start": 2473.12, "end": 2479.72, "text": " So just from the initial primitives that you see right here, like map zip, call, I don't" }, { "start": 2479.72, "end": 2483.3199999999997, "text": " even know what that is, like I'm not into functional programming." }, { "start": 2483.3199999999997, "end": 2489.44, "text": " But from the initial primitives, it discovers the concept of subtracting vectors, adding" }, { "start": 2489.44, "end": 2495.52, "text": " vectors, dividing by two, and so on." }, { "start": 2495.52, "end": 2503.08, "text": " From those, it constructs things like the square root function, which, you know, it's" }, { "start": 2503.08, "end": 2504.72, "text": " pretty remarkable." }, { "start": 2504.72, "end": 2510.36, "text": " And from those, it discovers things like the inverse square law." 
}, { "start": 2510.36, "end": 2518.4, "text": " And you can then see that, for example, Newton's second law is only a combination of, you know," }, { "start": 2518.4, "end": 2522.68, "text": " very few applications of library rules." }, { "start": 2522.68, "end": 2528.2799999999997, "text": " So it's an exceptionally short program, given this library." }, { "start": 2528.2799999999997, "end": 2533.8799999999997, "text": " And also Coulomb's law, you can see, it's just kind of two rules applied to the four" }, { "start": 2533.8799999999997, "end": 2539.64, "text": " inputs, which if you expand this, it's a fairly large program." }, { "start": 2539.64, "end": 2546.24, "text": " But because you have this library built up, it's it's a short program." }, { "start": 2546.24, "end": 2555.08, "text": " And they do one other experiment where they give it so they they do recursive programming" }, { "start": 2555.08, "end": 2561.8399999999997, "text": " algorithms, like list operations again, but they only give it like the bare minimum that" }, { "start": 2561.8399999999997, "end": 2567.4399999999996, "text": " according to functional programming theory, as far as I understand it, you these are the" }, { "start": 2567.4399999999996, "end": 2571.16, "text": " real the primitives you need to solve the problems." }, { "start": 2571.16, "end": 2577.56, "text": " And specifically, what it does is it first discovers the fold and unfold functions." }, { "start": 2577.56, "end": 2583.72, "text": " So fold is also called reduce, I think if like that's a more common name." }, { "start": 2583.72, "end": 2588.24, "text": " First it discover these these and from these, it builds all the other ones." }, { "start": 2588.24, "end": 2594.62, "text": " And they say, if you go and you look at kind of functional programming theory, that's exactly" }, { "start": 2594.62, "end": 2596.56, "text": " what they say is necessary." }, { "start": 2596.56, "end": 2601.6, "text": " So they say, given fold and unfold, you can sort of build all the other ones and these" }, { "start": 2601.6, "end": 2604.32, "text": " primitives." }, { "start": 2604.32, "end": 2611.2, "text": " And again, you can see list difference function is very super duper short in terms of this," }, { "start": 2611.2, "end": 2612.2, "text": " if you have this library." }, { "start": 2612.2, "end": 2617.96, "text": " So if you've discovered the zip function, and that expands to a program that is fairly" }, { "start": 2617.96, "end": 2625.12, "text": " long that you would never reach with even with neural guided program search." }, { "start": 2625.12, "end": 2630.68, "text": " And not only like reaching it is one point, but then you also have to recognize that that" }, { "start": 2630.68, "end": 2632.7999999999997, "text": " is actually the correct one." }, { "start": 2632.7999999999997, "end": 2633.7999999999997, "text": " Right." }, { "start": 2633.7999999999997, "end": 2638.7799999999997, "text": " And you do that as a human by looking how short it is." }, { "start": 2638.7799999999997, "end": 2645.12, "text": " And this is not a short program, like you could building this as a hash table is shorter" }, { "start": 2645.12, "end": 2646.8599999999997, "text": " than this program." 
}, { "start": 2646.8599999999997, "end": 2653.16, "text": " So you would rather take the hash table, I guess, if you just have two examples, rather" }, { "start": 2653.16, "end": 2658.16, "text": " than the program, but given that you have all this library, the zip a minus b is actually" }, { "start": 2658.16, "end": 2661.24, "text": " much shorter than encoding it as a hash table." }, { "start": 2661.24, "end": 2668.7799999999997, "text": " All right, so they say, you know, the real world data, they say that here, much real" }, { "start": 2668.7799999999997, "end": 2671.2799999999997, "text": " world data is far messier." }, { "start": 2671.2799999999997, "end": 2676.08, "text": " A key challenge for program induction going forward is to handle more pervasive noise" }, { "start": 2676.08, "end": 2684.68, "text": " and uncertainty by leaning more heavily on probabilistic and neural AI approaches." }, { "start": 2684.68, "end": 2690.16, "text": " Recent research has explored program induction with various hybrid neuro symbolic representations" }, { "start": 2690.16, "end": 2694.7599999999998, "text": " and integrating these approaches with the library learning and bootstrapping capacities" }, { "start": 2694.7599999999998, "end": 2699.64, "text": " of DreamCoder could especially be valuable going forward." }, { "start": 2699.64, "end": 2701.12, "text": " And I agree this." }, { "start": 2701.12, "end": 2709.08, "text": " So we if it's not out yet, we had Francois Chollet on the machine learning street talk." }, { "start": 2709.08, "end": 2715.18, "text": " And if you if you know him, he came up with this this arc challenge where you do like" }, { "start": 2715.18, "end": 2721.16, "text": " it's almost the same thing as DreamCoder does, except with these kind of pictures." }, { "start": 2721.16, "end": 2725.7599999999998, "text": " And you assume that humans have this thing called core knowledge, which they also allude" }, { "start": 2725.7599999999998, "end": 2726.7599999999998, "text": " to in this paper." }, { "start": 2726.76, "end": 2732.1200000000003, "text": " And core knowledge is things like an intuitive understanding of physics and objectness and" }, { "start": 2732.1200000000003, "end": 2733.1200000000003, "text": " so on." }, { "start": 2733.1200000000003, "end": 2738.1600000000003, "text": " So one of the arc challenge things is like, there's kind of a thing here." }, { "start": 2738.1600000000003, "end": 2741.28, "text": " And there's a thing here." }, { "start": 2741.28, "end": 2749.7200000000003, "text": " And then the solution, the solution to that is there's again the thing here." }, { "start": 2749.72, "end": 2757.9399999999996, "text": " And that, so that's the solution, right." }, { "start": 2757.9399999999996, "end": 2762.08, "text": " And you can already see from one example, it's kind of like a ball bouncing off the" }, { "start": 2762.08, "end": 2763.08, "text": " wall." }, { "start": 2763.08, "end": 2769.3999999999996, "text": " And you do that by applying your core knowledge, so to say." }, { "start": 2769.3999999999996, "end": 2774.8199999999997, "text": " So this, again, is very, very clean data." }, { "start": 2774.82, "end": 2779.7200000000003, "text": " So the in arc, I think everything is super clean data, and they say, you know, if we" }, { "start": 2779.7200000000003, "end": 2782.56, "text": " want to apply this to real world problems." 
}, { "start": 2782.56, "end": 2787.76, "text": " And this is also something that Chollet has said in the podcast, which I invite you to" }, { "start": 2787.76, "end": 2793.9, "text": " listen to as soon as it's out, is that we're going to have to combine this search." }, { "start": 2793.9, "end": 2803.7200000000003, "text": " So the the DreamCoder, it does kind of the search, which the search over a DSL." }, { "start": 2803.72, "end": 2807.4399999999996, "text": " So and the DSL is learned, right." }, { "start": 2807.4399999999996, "end": 2813.68, "text": " Now what we need, this is kind of these are different layers." }, { "start": 2813.68, "end": 2819.3199999999997, "text": " What deep learning usually does is this perception." }, { "start": 2819.3199999999997, "end": 2823.74, "text": " So deep learning is really good at doing perception." }, { "start": 2823.74, "end": 2826.8999999999996, "text": " So this is current deep learning." }, { "start": 2826.8999999999996, "end": 2832.8399999999997, "text": " And this up here is what DreamCoder does, or generally, program synthesis approaches" }, { "start": 2832.84, "end": 2833.84, "text": " do." }, { "start": 2833.84, "end": 2835.8, "text": " And we need a way to connect the two." }, { "start": 2835.8, "end": 2842.2400000000002, "text": " So we need a way to learn these jointly, because that's what you as a as a human some somehow" }, { "start": 2842.2400000000002, "end": 2843.2400000000002, "text": " do." }, { "start": 2843.2400000000002, "end": 2850.7200000000003, "text": " You're able to learn your perception model, which is kind of a perceiving model, and your" }, { "start": 2850.7200000000003, "end": 2858.02, "text": " your logic model, your reasoning model at the same time, or just jointly in some way." }, { "start": 2858.02, "end": 2862.36, "text": " And we haven't exactly figured out how to do that yet." }, { "start": 2862.36, "end": 2868.08, "text": " And I feel, and I agree with this paper, that is probably going to be a very valuable thing" }, { "start": 2868.08, "end": 2869.08, "text": " to do." }, { "start": 2869.08, "end": 2875.02, "text": " All right, so let me know what you think about this paper, I invite you to read it." }, { "start": 2875.02, "end": 2877.6400000000003, "text": " It is it is high level, right." }, { "start": 2877.6400000000003, "end": 2883.1, "text": " But there are some other cool things in it, like the DreamCoder learning reg exes for" }, { "start": 2883.1, "end": 2887.56, "text": " different types of numbers and so on." }, { "start": 2887.56, "end": 2891, "text": " But yeah, I think it's an interesting field." }, { "start": 2891, "end": 2894.88, "text": " It's a bit different from just kind of core machine learning." }, { "start": 2894.88, "end": 2895.88, "text": " And that was it." }, { "start": 2895.88, "end": 2896.88, "text": " I'll see you next time." }, { "start": 2896.88, "end": 2921.36, "text": " Bye." } ]
TOo-HnjjuhU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] Multiplayer Stable Diffusion | OpenAI needs more funding | Text-to-Video models incoming
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "mlnews", "ml news", "kilcher news", "ml news yannic", "phenaki", "imagen", "imagen video", "phenaki ai", "phenaki google", "google ai", "make a video", "ai video", "text to video", "ai video generator", "huggingface", "hugging face", "what is deep learning", "deep learning tutorial", "introduction to deep learning", "mlinpl", "ml in pl" ]
#mlnews #ai #mlinpl Your news from the world of Machine Learning! OUTLINE: 0:00 - Introduction 1:25 - Stable Diffusion Multiplayer 2:15 - Huggingface: DOI for Models & Datasets 3:10 - OpenAI asks for more funding 4:25 - The Stack: Source Code Dataset 6:30 - Google Vizier Open-Sourced 7:10 - New Models 11:50 - Helpful Things 20:30 - Prompt Databases 22:15 - Lexicap by Karpathy References: Stable Diffusion Multiplayer https://huggingface.co/spaces/huggingface-projects/stable-diffusion-multiplayer?roomid=room-0 Huggingface: DOI for Models & Datasets https://huggingface.co/blog/introducing-doi OpenAI asks for more funding https://www.theinformation.com/articles/openai-valued-at-nearly-20-billion-in-advanced-talks-with-microsoft-for-more-funding https://www.wsj.com/articles/microsoft-in-advanced-talks-to-increase-investment-in-openai-11666299548 The Stack: Source Code Dataset https://huggingface.co/datasets/bigcode/the-stack?utm_source=pocket_mylist Google Vizier Open-Sourced https://github.com/google/vizier New Models https://imagen.research.google/video/ https://phenaki.github.io/ https://makeavideo.studio/?utm_source=pocket_mylist https://dreamfusion3d.github.io/ https://arxiv.org/pdf/2210.15257.pdf https://huggingface.co/spaces/PaddlePaddle/ERNIE-ViLG https://github.com/PaddlePaddle/PaddleHub Helpful Things https://thecharlieblake.co.uk/visualising-ml-number-formats https://griddly.ai/ https://engineering.fb.com/2022/10/18/open-source/ocp-summit-2022-grand-teton/?utm_source=twitter&utm_medium=organic_social&utm_campaign=eng2022h2 https://twitter.com/psuraj28/status/1580640841583902720?utm_source=pocket_mylist https://huggingface.co/blog/stable_diffusion_jax https://github.com/Lightning-AI/stable-diffusion-deploy https://lightning.ai/docs/stable/ https://github.com/CarperAI/trlx https://github.com/DLR-RM/rl-baselines3-zoo https://github.com/Sea-Snell/JAXSeq https://www.reddit.com/r/MachineLearning/comments/xoitw9/p_albumentations_13_is_released_a_python_library/?utm_source=pocket_mylist https://twitter.com/Warvito/status/1570691960792580096?utm_source=pocket_mylist https://arxiv.org/abs/2209.07162 https://academictorrents.com/details/63aeb864bbe2115ded0aa0d7d36334c026f0660b https://huggingface.co/spaces/THUDM/CodeGeeX https://ai.facebook.com/blog/gpu-inference-engine-nvidia-amd-open-source/?utm_source=twitter&utm_medium=organic_social&utm_campaign=blog https://github.com/nerfstudio-project/nerfstudio https://www.nerfacc.com/en/latest/ https://github.com/dstackai/dstack https://www.reddit.com/r/MachineLearning/comments/yeyxlo/p_openai_whisper_3x_cpu_inference_speedup/?utm_source=pocket_mylist https://github.com/MiscellaneousStuff/openai-whisper-cpu/issues/1 Prompt Databases https://huggingface.co/datasets/poloclub/diffusiondb https://publicprompts.art/ https://visualise.ai/ https://twitter.com/SamuelAlbanie/status/1574111928431026179/photo/1 Lexicap by Karpathy https://karpathy.ai/lexicap/0139-large.html Links: Homepage: https://ykilcher.com Merch: https://ykilcher.com/merch YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord LinkedIn: https://www.linkedin.com/in/ykilcher If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq 
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
A lot of text-to-video models have recently come out, but not only that, a lot of other stuff has happened too, such as multiplayer Stable Diffusion, and OpenAI is looking for even more money from Microsoft. Stay tuned. This is ML News. Hello everyone. As you can see, I'm not in my usual setting. I'm actually currently in Poland. It is the last day of the Machine Learning in Poland conference. This conference is absolutely glorious. Absolutely fantastic. It was really cool being here. It is over now. I'm going home. But next year, please be here. Or if you're a company that's looking to get rid of some money and sponsor an awesome conference, the ML in PL conference has been organized at least as well as any of the NeurIPS or ICMLs that I've ever been to. And it is very likely that this conference is going to grow and become more well known in the next few years. There was a great lineup of keynote speakers, tutorials and other content. And I even had the pleasure of joining in on a bit of a concert at one of the poster sessions, which was certainly a unique experience. So thanks again to the ML in PL organizers. See you there next year. All right. So Stable Diffusion is going multiplayer. This is a Hugging Face space. It's essentially a giant canvas, and you can just come in here, drag this square somewhere, give it some kind of a description, and it will just kind of fit in what you're doing. All of this is collectively drawn by people. And I'm always afraid, because I don't want to destroy something, right? Because all of this is just very, very cool, what people come up with. Just another example of something that I would have never thought of. But because stuff is open and released, this can be built. So absolutely cool. Give it a try. And maybe this inspires you to build something that is even cooler than this. I don't know what it's going to be, but I'm sure one of you has a great idea right now. In other Hugging Face news, they introduce DOIs, digital object identifiers, for data sets and models. DOIs are sort of a standard way in scientific literature of addressing things like papers and artifacts, and now Hugging Face is introducing these for their models and data sets on the hub. So on the hub, you're going to see this little box with which you can generate, essentially, a UUID for a model or a data set that is never going to change in the future. Now you can outdate it, so you can say, well, this one is deprecated, I have a new version of this model, but it is a unique identifier to that model that you have. And this is really good if you want to put it inside a paper, so as to make it reproducible. And given that it is a standard, it just incorporates with the whole rest of the scientific ecosystem. So definitely a big plus for anyone who does work in research. The Wall Street Journal writes: Microsoft in advanced talks to increase investment in OpenAI. In this article, essentially, there isn't much detail, but OpenAI is apparently asking for more money, more investment. Microsoft has previously invested about a billion dollars into OpenAI, and on top of that probably provided really preferential access to Azure, in exchange for OpenAI providing preferential access to Microsoft for its products. It's funny because here it says: last week, Microsoft announced it was integrating DALL-E 2 with various products, including Microsoft Designer, a new graphic design app, which is cool, and the Image Creator for the search app Bing.
Is that their big plan? Is that the one billion dollar investment, to get Bing off the ground finally? I'm not sure. Now keep in mind that just because OpenAI goes and asks for more money, that doesn't mean that they're bankrupt soon. It could also mean that they're planning for an even bigger push. Startups, and I don't know if OpenAI can still be considered a startup, often do take on more money whenever they want to start scaling even more. Now how much OpenAI wants to scale even more, I don't know. It could also be that they're just out of money and need more. The Stack is a data set. It's by the BigCode project, and it's three terabytes of permissively licensed source code. So this data set is fully open; you can download it if you want to train anything like a Codex model or something similar. The data set pays specific attention to the licensing of the code that is included. The code is MIT licensed, Apache licensed, BSD-3 licensed, essentially licensed such that you can do whatever you want with it. Now that doesn't get you out of the weeds legally of doing anything and everything, because you still have to do things like provide a copyright notice if you copy one of these codes verbatim. But The Stack not only pays attention to this when they collect the data initially; as you can see on the entry in the Hugging Face Hub, there are also terms of use for The Stack. And one of the terms of use of The Stack is that you must always update your own version of The Stack to the most recent usable version. And this is because they have essentially a form where you as a source code author can go and request removal of your source code from The Stack. So even if you license this under the MIT license, they don't want anyone's code who doesn't want to be part of The Stack. So you can go and request that your code be removed from The Stack; they will then do that and update the data set. And by agreeing to these terms, if you download the data set, you essentially agree to always download the newest version and use the newest version of the data set, so as to propagate that removal of that code. Now as I understand it, I'm not a lawyer, this is not legal advice, but as I understand it, you are entering into a binding agreement by clicking this checkbox and clicking this button. So think about whether you want that or not. But it is good that another option is out there, next to just scraping GitHub, I guess. Google releases Vizier open source. Vizier is a black-box optimizer that works at scale, so with many, many different experiments that need to be hyperparameter optimized. Vizier essentially decides which hyperparameter to try next. So you can run this as a service if you have a lot of parallel workers and you want to run hyperparameter optimizations. They have APIs for users, and the user here is essentially someone who wants to do hyperparameter optimization. They have APIs for developers, which means that you can put in new optimization algorithms. So if you're a developer of a black-box optimization algorithm, you can integrate that with Vizier. And they have a benchmarking API. So apparently this thing has been running inside of Google for a while, and now they finally decided to release it open source. So it's certainly tried and tested.
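To make that service pattern concrete, here is a minimal ask-and-tell loop of the kind such a black-box optimizer exposes. This is a self-contained toy with random search standing in for Vizier's actual algorithms; the class and method names are made up for illustration and are not the real Vizier client API.

import random

class ToyOptimizer:
    # Toy stand-in for a black-box optimization service: the caller repeatedly
    # asks for a suggestion, evaluates it, and reports the result back.
    def __init__(self, space):
        self.space = space    # {param_name: (low, high)}
        self.trials = []      # list of (params, objective)

    def suggest(self):
        # A real service would use e.g. Bayesian optimization; we just sample.
        return {k: random.uniform(lo, hi) for k, (lo, hi) in self.space.items()}

    def complete(self, params, objective):
        self.trials.append((params, objective))

    def best(self):
        return max(self.trials, key=lambda t: t[1])

opt = ToyOptimizer({"lr": (1e-4, 1e-1), "dropout": (0.0, 0.5)})
for _ in range(20):
    params = opt.suggest()
    # Fake objective standing in for a training run's validation accuracy.
    accuracy = 1.0 - (params["lr"] - 0.01) ** 2 - 0.1 * params["dropout"]
    opt.complete(params, accuracy)
print(opt.best())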
All right, now we get into the video models. There have been a few video models; they were released a while back, but I'll just summarize them briefly here. Imagen Video is a text-to-video model, and you can see a bunch of samples right here, and they look really, really cool. So this is a video diffusion model, but as far as I understand it, it is kind of a combination of fully convolutional networks and super-resolution networks in order to get this effect. They describe this further in a few diagrams on their website: Imagen Video uses a video U-Net architecture to capture spatial fidelity and temporal dynamics. Temporal self-attention is used in the base video diffusion model, while temporal convolutions are used in the temporal and spatial super-resolution models. There is a paper to go along with it if you are interested. Now also from Google Research is Phenaki (I'm not exactly sure how to pronounce that), a different text-to-video model that can produce up to minutes-long videos with changing text. So here you can see a prompt that constantly changes, and as it does, the video changes as well. Rather than being a diffusion model, this model compresses video to a tokenized representation and then essentially uses a causal autoregressive language model to continue that tokenized representation. With that they're able to produce essentially unbounded video, as the beginning of the video simply drops out of the context. But as long as you feed in, as a side input, more and more text that you want to be produced, you can see that the video keeps changing, keeps adapting, and keeps being faithful to the currently in-focus part of the prompt. What's interesting is that the training data seems to be mostly text-image pairs, with just a few text-video pairs inside of the training data. Now we're not done with the text-to-video models yet. Meta AI actually released Make-A-Video, yet another text-to-video model. And this one is also a bit special, because it essentially only produces a single image from text. So this is essentially a text-to-image model plus an unsupervised video generator from that image. The text-to-image model is a text-to-image model as we know them, but the video model is unsupervised: it simply learns from unlabeled video data how video behaves, and is then able to take a single picture, a single frame, and make an entire video out of it. The results look really cool. What I think is cool about all of these works is that they each have a different approach to the same problem, and the results they produce are all very good. It's going to be interesting to see how this text-to-video problem will ultimately be, let's say, canonically solved. I don't know, but I'm keeping my eyes open. Now slightly different, but not entirely different, is DreamFusion. This isn't text-to-video, this is text-to-3D. And before you think that this is relatively straightforward: none of these things actually involve 3D training data, at least as far as I can understand it. Rather, what they do is treat the entire scene essentially like a NeRF. So they start with a random 3D scene: pick your 3D scene, fill a bunch of voxels and don't fill the other voxels. And then you optimize that 3D scene to satisfy a text-to-image model that essentially acts as the judge of rendered photographs of that scene. So it is a lot like a NeRF, except that you don't have pictures; you optimize against a text-to-image model rather than against actual images. And that is a really cool idea, and it actually seems to work pretty great.
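Here is a rough conceptual sketch of that optimization loop, just to make the idea concrete. The functions sample_random_camera, render, add_noise and denoise are hypothetical placeholders standing in for a differentiable renderer and a frozen text-to-image diffusion model, and the gradient is a simplified version of score distillation, so treat this as an illustration rather than the authors' actual method:

import torch

# hypothetical: scene parameters (e.g. NeRF weights or a voxel grid), randomly initialized
scene = torch.randn(64, 64, 64, requires_grad=True)
opt = torch.optim.Adam([scene], lr=1e-2)

for step in range(10_000):
    camera = sample_random_camera()            # hypothetical: pick a random viewpoint
    image = render(scene, camera)              # hypothetical: differentiable rendering
    t = torch.randint(1, 1000, (1,))           # random diffusion timestep
    noise = torch.randn_like(image)
    noisy = add_noise(image, noise, t)         # hypothetical: forward diffusion
    pred_noise = denoise(noisy, t, prompt="a DSLR photo of a cat")  # frozen model
    # score distillation (simplified): nudge the rendering toward images that
    # the text-to-image model considers likely for this prompt
    image.backward(gradient=(pred_noise - noise).detach())
    opt.step()
    opt.zero_grad()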
Now there's other work still improving text-to-image diffusion models themselves. ERNIE-ViLG 2.0 is one of them. This is an iteration of the previous model, and it uses a mixture of denoising experts. I don't want to go too much into this, but you can definitely see right here that the results are breathtaking, very good, and at a great resolution. Now there is a demo on the Hugging Face hub, but as far as I understand, this model isn't released, so the demo and the code that they put on GitHub simply call some API where the model is actually hosted. This is a neat tool, not directly related to machine learning, but if you've ever wondered what the difference between a bfloat16 and an fp16 is (I never knew), Charlie Blake has a very cool tool on a blog that essentially shows you the different trade-offs you can make when you choose a number format. It shows you, for the different formats, what kind of ranges you can represent with them, where they're good, and where they're not. So you can see clearly the difference between a bfloat16 and an fp16: one can represent a very large range of numbers, and the other can represent just a very small range of numbers, but with more precision. (I'll drop a small snippet a bit further below that prints exactly these properties.) GriddlyJS is a tool that allows you to interact with grid-world reinforcement learning environments. There are a number of cool features right here. You can edit levels directly. You can also try out the levels. You can debug your policies. You can record trajectories. So right now I don't have a trajectory, but what I can do is record right here, and I can move this thing around, here, here, going into the lava, and then I die. And you can see the steps I've taken right here. So you can use this for various kinds of things: debugging, investigating, and so on. If you are into reinforcement learning and you work with grid worlds, then by all means, check this out. Meta announces their new box, I guess. This is the box. This is a hardware architecture for deep learning, the Grand Teton, and they essentially release the architecture open source, a bit like the older DGX-style boxes. Their engineers have sat down and thought long and hard about what it takes for a great machine learning system, and they essentially tell you: look, we believe that this combination of hardware, these processors, these GPUs, connected like this, with these power supplies, will be a very great base for doing research. They're releasing these specs essentially for you to just buy or assemble, I guess, whatever you want to do with it. But I can tell you it is relatively hard to decide exactly on every component of such hardware, and it's really great that people who are very competent in this actually think about it and give their suggestions. So if you have a lab or a company and you really want to buy your own hardware, maybe this is a good option for you. Hugging Face diffusers, from version 0.5.1 onward, supports running diffusion models in JAX. If you like JAX, if you like stable diffusion, go for it. Muse is an open-source stable diffusion production server. Well, it is not so much a server as it is sort of a tutorial on how to bring up a server. This is based on the Lightning apps framework, which is open source, and it's kind of an easy way to bring together all the components you need to deploy machine learning things. And this repository is essentially a specification of how to pull up a stable diffusion server. So if you want to deploy stable diffusion yourself, this is probably the fastest and simplest way to do so.
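As promised above, here is a small, runnable way to inspect the number-format trade-off yourself; torch.finfo reports the key properties of each floating-point format:

import torch

for dtype in (torch.float16, torch.bfloat16):
    info = torch.finfo(dtype)
    # fp16: 5 exponent bits, 10 mantissa bits -> small range, finer precision
    # bfloat16: 8 exponent bits (like float32), 7 mantissa bits -> huge range, coarser precision
    print(f"{dtype}: max={info.max:.3e}, smallest normal={info.tiny:.3e}, eps={info.eps:.3e}")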
trlX by CarperAI is a library that allows you to do reinforcement learning on text models. So you can see right here, you can give either sort of a reward function, or you can give a data set that assigns values to expert demonstrations, and you can train a language model to incorporate that. Doing reinforcement learning on text models is a relatively new domain, but it is cool to have another library to tackle the problem. RL Baselines3 Zoo is a training framework for Stable-Baselines3 reinforcement learning agents. Stable Baselines is a library that tries to give reference implementations of reinforcement learning algorithms, because they're very tricky and very hard to get right, so these are good, solid and performant reference implementations. Stable-Baselines3 is the third iteration of it, and this repository right here, the zoo, contains a number of surrounding things, like scripts that make it very easy to interact with it, but also prepared agents and prepared hyperparameter settings that work well in different standard environments. JAXSeq is a library that allows you to train very large language models in JAX. The cool thing is that with this library, you essentially get things like data parallelism or model parallelism for free; you can just specify them and trade them off however you want. This is due to the power and simplicity of JAX. Albumentations, I hope I'm pronouncing that correctly, version 1.3 is out, and it introduces a bunch of new image augmentations. This is a library for image augmentations, so it's good that they introduce new augmentations that fit very well with the augmentations they already have. There are also a bunch of bug fixes and more. If you're looking for image augmentations in Python, this might be a good library (there's a small usage sketch a bit further below). This is a really cool thing you can do with diffusion models: these people have trained diffusion models of brain images and were able to create new synthetic brain images with a degree of controllability. There is a paper on arXiv if you are interested, and you can also download the data set of 100,000 synthetic brain images. CodeGeeX is a multilingual code generation model. This is, as it says, essentially something similar to Codex, but it is released: you can actually go and download the model and use it yourself. Meta AI releases AITemplate, which is an inference engine. The goal here is to make inference faster, and you get a lot of speedups over just running standard inference in something like eager PyTorch. So this does two things. First of all, it optimizes your computation graph: if your computation graph contains a lot of little operations that could be fused together into something that's really optimal for a given hardware, or that could just be expressed in a smarter way, then a graph optimizer can do that. And in a second step, there is a compiler to compile all of this to high-performance C++ code that runs on backend hardware, such as a GPU that uses CUDA, or even an AMD GPU. So if fast inference is a concern to you, this is definitely a thing to check out. Nerfstudio describes itself as a collaboration-friendly studio for NeRFs, but it is more like a collection, an entire collection of software to handle NeRFs, anything from training and validating to even experiencing them yourself. You can see they have a viewer that allows you to just explore the NeRFs that you create and make videos from them. But really it covers everything to do with NeRFs.
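Coming back to Albumentations from a moment ago, here is a minimal sketch of the kind of pipeline it gives you, composing two of its long-standing standard transforms on a stand-in image:

import albumentations as A
import numpy as np

# compose a small augmentation pipeline; each transform fires with probability p
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)  # stand-in image
augmented = transform(image=image)["image"]  # same shape, randomly augmented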
Now speaking of NeRFs, NerfAcc is a PyTorch NeRF acceleration toolbox. This gets significant speedups over simply using the NeRF code that's out there. For example, a vanilla NeRF model with an eight-layer multilayer perceptron can be trained to better quality in one hour, rather than the one to two days as in the paper. dstack (the logo doesn't exactly work on a dark background) is a library that wants to standardize the ML workflows that you run in the cloud. Essentially, you check your workflows into GitHub, and dstack helps you run them uniformly anywhere. So in a workflow, you can specify things like your workflow name, obviously, but then you can say, okay, my provider is bash, so this is essentially a bash script. Now what are the commands? I want to pip install some stuff, I want to run this training script right here. But it also has things like artifacts, and you can also specify things like: I want to load data from this S3 bucket over there, I want to run on this cloud over there. So all of this is quite geared towards machine learning. It's certainly not the first workflow engine, nor the first iteration of "hey, let's check our things into source control", but it is very targeted at running ML workflows in the cloud. Several people have figured out massive speedups for the OpenAI Whisper model. For example, this person here has figured out a 3x speedup on CPU inference, and refers to a GitHub thread where someone else has found an even bigger 3.25x speedup. Again, it's very cool to see what people do when you just give them the model.
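For reference, the baseline that people are speeding up is already remarkably simple to use. A minimal sketch with the openai-whisper package (the checkpoint name and file name here are just examples):

import whisper

model = whisper.load_model("base")        # larger checkpoints: "small", "medium", "large"
result = model.transcribe("episode.mp3")  # any audio file ffmpeg can decode

print(result["text"])                     # the full transcript
for seg in result["segments"]:            # time-annotated segments
    print(f'[{seg["start"]:7.2f} - {seg["end"]:7.2f}] {seg["text"]}')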
And lastly, I want to point to a couple of databases for stuff mainly around stable diffusion. DiffusionDB is on the Hugging Face hub. It's a data set of prompts that have been entered by real users into stable diffusion, along with the corresponding images that they got out. Public Prompts, that's publicprompts.art in your browser, is a database of free prompts and free models. These models are mostly trained using DreamBooth, but if you're looking for inspiration for prompts and how they turn out, then this is maybe a good place to go. Likewise, visualize.ai is a website that goes a little bit more businessy. It lets you create some stuff for free, like stable diffusion images, but then it also acts as a bit of a marketplace for these things, such that you could also buy them or sell them. It's cool to see that different business models are trying to spring up around this ecosystem. Ultimately, someone will figure out how to really make money off of this stuff, but you know, it's good to be part of the time when people are just trying stuff and seeing what happens, not only on the research side, but also on the business side. Lastly, BigScience has released PromptSource, which is an IDE for natural language prompts. So this is a way to give people a bit more help and a bit more standardization when they use prompts to achieve certain goals, for example, when they use prompts to tackle some of the NLP challenges that are now more and more phrased simply as prompts into these large language models, rather than as data that goes into a specially trained model for that task. So if you find yourself in this situation or a similar one, then PromptSource may be for you. Finally, this is a database of all Lex Fridman podcasts, transcribed. This is the website of Andrej Karpathy, and he used a simple combination of a download script from YouTube combined with OpenAI's Whisper to transcribe all of Lex Fridman's podcast episodes. You can go to any one of them, you can click, and there they are with time annotations. It is a very simple but very cool project. Thank you, Andrej. And I thank all of you for listening. I'll be home again next week. Until then, stay hydrated. Bye bye.
[ { "start": 0, "end": 5.44, "text": " A lot of text to video models have recently come out, but not only that, a lot of other" }, { "start": 5.44, "end": 12.08, "text": " stuff has happened too, such as multiplayer stable diffusion and OpenAI is looking for" }, { "start": 12.08, "end": 14.6, "text": " even more money from Microsoft." }, { "start": 14.6, "end": 15.6, "text": " Stay tuned." }, { "start": 15.6, "end": 20.96, "text": " This is ML News." }, { "start": 20.96, "end": 21.96, "text": " Hello everyone." }, { "start": 21.96, "end": 23.88, "text": " As you can see, I'm not in my usual setting." }, { "start": 23.88, "end": 25.88, "text": " I'm actually currently in Poland." }, { "start": 25.88, "end": 30.64, "text": " It is the last day of the machine learning in Poland conference." }, { "start": 30.64, "end": 33.6, "text": " This conference is absolutely glorious." }, { "start": 33.6, "end": 34.6, "text": " Absolutely fantastic." }, { "start": 34.6, "end": 36.239999999999995, "text": " It was really cool being here." }, { "start": 36.239999999999995, "end": 37.239999999999995, "text": " It is over now." }, { "start": 37.239999999999995, "end": 38.239999999999995, "text": " I'm going home." }, { "start": 38.239999999999995, "end": 40.36, "text": " But next year, please be here." }, { "start": 40.36, "end": 43.84, "text": " Or if you're a company that's looking to get rid of some money and sponsor an awesome" }, { "start": 43.84, "end": 49.68, "text": " conference, the ML and PL conference has been organized at least as well as any of" }, { "start": 49.68, "end": 53.56, "text": " the new rips or ICMLs that I've ever been to." }, { "start": 53.56, "end": 58.800000000000004, "text": " And it is very likely that this conference is going to grow and become more notorious" }, { "start": 58.800000000000004, "end": 59.800000000000004, "text": " in the next few years." }, { "start": 59.800000000000004, "end": 64.2, "text": " There was a great lineup of keynote speakers, tutorials and other content." }, { "start": 64.2, "end": 69.84, "text": " And I even had the pleasure of joining into a bit of a concert at one of the poster sessions," }, { "start": 69.84, "end": 71.92, "text": " which was certainly a unique experience." }, { "start": 71.92, "end": 74.96000000000001, "text": " So thanks again to the ML and PL organizers." }, { "start": 74.96000000000001, "end": 75.96000000000001, "text": " See you there next year." }, { "start": 75.96000000000001, "end": 76.96000000000001, "text": " All right." }, { "start": 76.96000000000001, "end": 79.08, "text": " So stable diffusion is going multiplayer." }, { "start": 79.08, "end": 81.24000000000001, "text": " This is a hugging face space." }, { "start": 81.24000000000001, "end": 83.32000000000001, "text": " It's essentially a giant canvas." }, { "start": 83.32, "end": 88.83999999999999, "text": " And you can just come in here and you drag this square somewhere and you give it some" }, { "start": 88.83999999999999, "end": 92.47999999999999, "text": " kind of a description and it will just kind of fit in what you're doing." }, { "start": 92.47999999999999, "end": 96, "text": " All of this is collectively drawn by people." }, { "start": 96, "end": 99.72, "text": " And I'm always afraid because I don't want to destroy something, right?" }, { "start": 99.72, "end": 104.24, "text": " Because all of this is just very, very cool what people come up with." 
}, { "start": 104.24, "end": 108.03999999999999, "text": " Just another example of something that I would have never thought of." }, { "start": 108.04, "end": 113.52000000000001, "text": " But because stuff is open and release, this is you know, this can be built." }, { "start": 113.52000000000001, "end": 114.78, "text": " So absolutely cool." }, { "start": 114.78, "end": 115.78, "text": " Give it a try." }, { "start": 115.78, "end": 119.92, "text": " And maybe this inspires you to build something that is even cooler than this." }, { "start": 119.92, "end": 121.32000000000001, "text": " I don't know what it's going to be." }, { "start": 121.32000000000001, "end": 125.16000000000001, "text": " But I'm sure one of you has a great idea right now." }, { "start": 125.16000000000001, "end": 130.76, "text": " Another hugging face news, they introduce DOI, digital object identifiers for data sets" }, { "start": 130.76, "end": 131.76, "text": " and models." }, { "start": 131.76, "end": 138.16, "text": " DOIs are sort of a standard way in scientific literature of addressing things like addressing" }, { "start": 138.16, "end": 142.72, "text": " papers, addressing artifacts, and now hugging face is introducing these things for their" }, { "start": 142.72, "end": 144.67999999999998, "text": " models and data sets on the hub." }, { "start": 144.67999999999998, "end": 149.72, "text": " So on the hub, you're going to see this little box with which you can generate essentially" }, { "start": 149.72, "end": 155.72, "text": " it's a UUID for a model or a data set that is never going to change in the future." }, { "start": 155.72, "end": 159.2, "text": " Now you can out date it so you can say, well, this one is deprecated." }, { "start": 159.2, "end": 165.23999999999998, "text": " I have a new version of this model, but it is a unique identifier to that model that" }, { "start": 165.23999999999998, "end": 166.23999999999998, "text": " you have." }, { "start": 166.23999999999998, "end": 171.11999999999998, "text": " And this is really good if you want to put it inside a paper so as to make it reproducible." }, { "start": 171.11999999999998, "end": 176.83999999999997, "text": " And given that it is a standard, it just incorporates with the whole rest of the scientific ecosystem." }, { "start": 176.83999999999997, "end": 181.2, "text": " So definitely a big plus for anyone who does work in research." }, { "start": 181.2, "end": 186.64, "text": " The Wall Street Journal writes Microsoft in advance talks to increase investment in open" }, { "start": 186.64, "end": 187.64, "text": " AI." }, { "start": 187.64, "end": 192.48, "text": " In this article, essentially there isn't much detail, but open AI is apparently asking for" }, { "start": 192.48, "end": 194.64, "text": " more money, more investment." }, { "start": 194.64, "end": 198.55999999999997, "text": " Microsoft has previously invested about a billion dollars into Microsoft." }, { "start": 198.55999999999997, "end": 204.67999999999998, "text": " And on top of that, probably really preferential access to Azure in exchange that open AI will" }, { "start": 204.67999999999998, "end": 208.27999999999997, "text": " provide preferential access to Microsoft for its product." 
}, { "start": 208.27999999999997, "end": 212.2, "text": " It's funny because here it says last week, Microsoft announced it was integrating Dolly" }, { "start": 212.2, "end": 217.56, "text": " 2 with various products, including Microsoft Design, a new graphic design app, which is" }, { "start": 217.56, "end": 222.16, "text": " cool, and the image creator for search app Bing." }, { "start": 222.16, "end": 223.48, "text": " Is that their big plan?" }, { "start": 223.48, "end": 227.96, "text": " Is that the one billion dollar investment to get Bing off the ground finally?" }, { "start": 227.96, "end": 228.96, "text": " I'm not sure." }, { "start": 228.96, "end": 233.64000000000001, "text": " Now keep in mind that just because open AI goes and asks for more money, that doesn't" }, { "start": 233.64000000000001, "end": 235.76, "text": " mean that they're bankrupt soon." }, { "start": 235.76, "end": 239.68, "text": " It could also mean that they're planning for an even bigger push startups." }, { "start": 239.68, "end": 245.12, "text": " And I don't know if open AI can still be considered a startup, but startups often they do take" }, { "start": 245.12, "end": 249.28, "text": " on more money whenever they want to start scaling even more." }, { "start": 249.28, "end": 252, "text": " Now how much open AI wants to scale even more?" }, { "start": 252, "end": 253, "text": " I don't know." }, { "start": 253, "end": 256.72, "text": " It could also be that they're just out of money and need more." }, { "start": 256.72, "end": 258.76, "text": " The stack is a data set." }, { "start": 258.76, "end": 264.48, "text": " It's by the big code project and it's three terabyte of permissively licensed source code." }, { "start": 264.48, "end": 270.84000000000003, "text": " So this data set is fully open, you can download it if you want to train anything like a codex" }, { "start": 270.84000000000003, "end": 272.64, "text": " model or something similar." }, { "start": 272.64, "end": 278.15999999999997, "text": " The data set pays specific attention to the licensing of the code that is included in" }, { "start": 278.15999999999997, "end": 279.15999999999997, "text": " the data set." }, { "start": 279.15999999999997, "end": 285.44, "text": " The code is MIT licensed, Apache licensed, BSD3 licensed, essentially licensed such that" }, { "start": 285.44, "end": 287.96, "text": " you can do whatever you want with it." }, { "start": 287.96, "end": 292.59999999999997, "text": " Now that doesn't get you out of the weeds legally of doing anything and everything because" }, { "start": 292.59999999999997, "end": 296.44, "text": " you still have to do things like provide a copyright." }, { "start": 296.44, "end": 299.4, "text": " Notice if you copy one of these codes verbatim." }, { "start": 299.4, "end": 304, "text": " But the stack not only pays attention to this when they collect this initially, but also" }, { "start": 304, "end": 309.56, "text": " as you can see on the hugging face entry in the hugging face hub, there are terms of use" }, { "start": 309.56, "end": 310.56, "text": " for the stack." }, { "start": 310.56, "end": 315.12, "text": " And one of the terms of use of the stack is that you must always update your own version" }, { "start": 315.12, "end": 318.28, "text": " of the stack to the most recent usable version." 
}, { "start": 318.28, "end": 323.47999999999996, "text": " And this is because they have essentially a form where you as a source code author can" }, { "start": 323.47999999999996, "end": 327.44, "text": " go and request removal of your source code from the stack." }, { "start": 327.44, "end": 333.12, "text": " So even if you license this under MIT license, they don't want anyone's code who doesn't" }, { "start": 333.12, "end": 335.21999999999997, "text": " want to be part of the stack." }, { "start": 335.21999999999997, "end": 340.12, "text": " So you can go and request that your code be removed from the stack, they will then do" }, { "start": 340.12, "end": 342.2, "text": " that update the data set." }, { "start": 342.2, "end": 347.28, "text": " And by agreeing to these terms, if you download the data set, you essentially agree to always" }, { "start": 347.28, "end": 353, "text": " download the newest version and use the newest version of the data set such as to propagate" }, { "start": 353, "end": 355.04, "text": " that removal of that code." }, { "start": 355.04, "end": 359.04, "text": " Now as I understand it, I'm not a lawyer, this is not legal advice." }, { "start": 359.04, "end": 363.40000000000003, "text": " But as I understand it, you are entering into a binding agreement by clicking this checkbox" }, { "start": 363.40000000000003, "end": 364.72, "text": " and clicking this button." }, { "start": 364.72, "end": 367.28000000000003, "text": " So think about whether you want that or not." }, { "start": 367.28000000000003, "end": 372.44, "text": " But it is good that another option is out there next to just scraping GitHub, I guess." }, { "start": 372.44, "end": 379.26, "text": " Google releases Vizier open source Vizier is a black box optimizer that works at scale." }, { "start": 379.26, "end": 383.84000000000003, "text": " So many, many different experiments that need to be hyper parameter optimized." }, { "start": 383.84, "end": 387.11999999999995, "text": " Vizier essentially decides which hyper parameter to try next." }, { "start": 387.11999999999995, "end": 391.67999999999995, "text": " So you can run this as a service if you have a lot of parallel workers and you want to" }, { "start": 391.67999999999995, "end": 395.64, "text": " run hyper parameter optimizations, they have API's for users." }, { "start": 395.64, "end": 399.79999999999995, "text": " And the user here is essentially someone who wants to do hyper parameter optimization," }, { "start": 399.79999999999995, "end": 405.44, "text": " they have API's for developers, which means that you can put in new optimization algorithms." }, { "start": 405.44, "end": 411.35999999999996, "text": " So if you're a developer of a black box optimization algorithm, you can integrate that with Vizier" }, { "start": 411.35999999999996, "end": 413.7, "text": " and they have a benchmarking API." }, { "start": 413.7, "end": 417.4, "text": " So apparently this thing has been running inside of Google for a while." }, { "start": 417.4, "end": 420.56, "text": " And now they finally decided to release it open source." }, { "start": 420.56, "end": 423, "text": " So it's certainly tried and tested." }, { "start": 423, "end": 425.71999999999997, "text": " All right, now we get into the video models." }, { "start": 425.71999999999997, "end": 427.56, "text": " There have been a few video models." }, { "start": 427.56, "end": 429.88, "text": " Now they have been released a while back." 
}, { "start": 429.88, "end": 432.15999999999997, "text": " But I'll just summarize them briefly here." }, { "start": 432.15999999999997, "end": 438.59999999999997, "text": " Imagine video is a text to video model, you can see a bunch of samples right here." }, { "start": 438.59999999999997, "end": 441.02, "text": " And they look really, really cool." }, { "start": 441.02, "end": 444.03999999999996, "text": " So this is a video diffusion model." }, { "start": 444.03999999999996, "end": 448.71999999999997, "text": " But as far as I understand it is kind of a combination of fully convolutional networks" }, { "start": 448.71999999999997, "end": 453.03999999999996, "text": " and super resolution networks in order to get this effect." }, { "start": 453.03999999999996, "end": 456.52, "text": " They described this further in a few diagrams on their website." }, { "start": 456.52, "end": 462.96, "text": " Imagine video uses video unit architecture to capture spatial fidelity and temporal dynamics." }, { "start": 462.96, "end": 468.12, "text": " Temporal self attention is used in the base video diffusion model, while temporal convolutions" }, { "start": 468.12, "end": 472.16, "text": " are used in the temporal and spatial super resolution models." }, { "start": 472.16, "end": 475.12, "text": " There is a paper to go along with it if you are interested." }, { "start": 475.12, "end": 478.04, "text": " Now also from Google research is Fennaky." }, { "start": 478.04, "end": 480.44, "text": " I'm not exactly sure how to pronounce that." }, { "start": 480.44, "end": 486.88, "text": " But it is a different text to video model that can produce up to minutes long videos" }, { "start": 486.88, "end": 488.28000000000003, "text": " with changing text." }, { "start": 488.28000000000003, "end": 491.66, "text": " So here you can see a prompt that constantly changes." }, { "start": 491.66, "end": 494.72, "text": " And as it does, the video changes as well." }, { "start": 494.72, "end": 502.24, "text": " So rather than being a diffusion model, this model compresses video to a tokenized representation" }, { "start": 502.24, "end": 508.24, "text": " and then essentially uses a causal autoregressive language model to continue that tokenized" }, { "start": 508.24, "end": 509.6, "text": " representation." }, { "start": 509.6, "end": 515.64, "text": " With that they're able to essentially produce unbounded video as the beginning of the video" }, { "start": 515.64, "end": 518, "text": " simply drops out of the context." }, { "start": 518, "end": 523.52, "text": " But as long as you feed into the side input more and more text that you want to be produced," }, { "start": 523.52, "end": 528.96, "text": " you can see that the video keeps changing, keeps adapting and keeps being faithful to" }, { "start": 528.96, "end": 532.76, "text": " the currently in focus part of the prompt." }, { "start": 532.76, "end": 537.72, "text": " What's interesting is that the training data seems to be mostly text to image with just" }, { "start": 537.72, "end": 542, "text": " a few text to video pairs inside of the training data." }, { "start": 542, "end": 544.68, "text": " Now we're not done with the text to video models yet." }, { "start": 544.68, "end": 550.36, "text": " MetaAI actually released Make a Video, yet another text to video model." 
}, { "start": 550.36, "end": 555.6800000000001, "text": " And this one is also a bit special because it essentially only produces a single image" }, { "start": 555.6800000000001, "end": 556.88, "text": " from text." }, { "start": 556.88, "end": 564.44, "text": " So this is a essentially text to image model and then an unsupervised video generator from" }, { "start": 564.44, "end": 565.44, "text": " that image." }, { "start": 565.44, "end": 570.88, "text": " So the text to image model is essentially as we know text to image models, but then" }, { "start": 570.88, "end": 573.12, "text": " the video model is unsupervised." }, { "start": 573.12, "end": 579.96, "text": " It simply learns from unsupervised video data, how video behaves and is then able to take" }, { "start": 579.96, "end": 585.8000000000001, "text": " a single picture, a single frame of that video and make the entire video out of it." }, { "start": 585.8000000000001, "end": 587.76, "text": " The results look really cool." }, { "start": 587.76, "end": 592.44, "text": " What I think is cool between all of these works is that they all have a different approach" }, { "start": 592.44, "end": 593.44, "text": " for the same problem." }, { "start": 593.44, "end": 595.96, "text": " The all the results they produce are very cool." }, { "start": 595.96, "end": 600.8000000000001, "text": " And it's going to be interesting to see how this text to video problem will ultimately" }, { "start": 600.8000000000001, "end": 603.2800000000001, "text": " be like canonically solved, let's say." }, { "start": 603.2800000000001, "end": 606.34, "text": " I don't know, but I'm keeping my eyes open." }, { "start": 606.34, "end": 610, "text": " Now slightly different, but not entirely different is dream fusion." }, { "start": 610, "end": 611.24, "text": " This isn't text to video." }, { "start": 611.24, "end": 613.2, "text": " This is text to 3D." }, { "start": 613.2, "end": 620.2800000000001, "text": " Now if you think that, you know, is relatively straightforward, then none of these things" }, { "start": 620.2800000000001, "end": 625.2, "text": " actually involve 3D training data, at least as far as I can understand it." }, { "start": 625.2, "end": 629.94, "text": " Rather what they do is they consider the entire scene essentially like a nerve." }, { "start": 629.94, "end": 633.96, "text": " So what they do is they start with a random 3D scene." }, { "start": 633.96, "end": 638.9200000000001, "text": " So pick your 3D scene, fill a bunch of voxels and don't fill the other voxels." }, { "start": 638.9200000000001, "end": 645.84, "text": " And then you optimize that 3D scene to satisfy text to image models that essentially act" }, { "start": 645.84, "end": 648.12, "text": " as photographs of that scene." }, { "start": 648.12, "end": 653.8000000000001, "text": " So it is a lot like nerve, except that you don't have pictures, but you like optimize" }, { "start": 653.8000000000001, "end": 658.32, "text": " for a text to image model rather than optimizing for an actual image." }, { "start": 658.32, "end": 659.96, "text": " And that is a really cool idea." }, { "start": 659.96, "end": 662.08, "text": " And it actually seems to work pretty great." }, { "start": 662.08, "end": 666.84, "text": " Now there's other work still improving text to image diffusion models themselves." }, { "start": 666.84, "end": 670.38, "text": " Ernie BILG 2.0 is one of them." 
}, { "start": 670.38, "end": 676.2, "text": " This is an iteration of the previous model and it is using mixture of denoising experts." }, { "start": 676.2, "end": 680.5200000000001, "text": " I don't want to go too much into this, but you can definitely see right here that the" }, { "start": 680.5200000000001, "end": 685.62, "text": " results are breathtaking and very good with a great resolution." }, { "start": 685.62, "end": 688.2, "text": " Now there is a demo on the hogging face hub." }, { "start": 688.2, "end": 693.2800000000001, "text": " But as far as I understand, this model isn't released, so the demo and the code that they" }, { "start": 693.2800000000001, "end": 701.96, "text": " put on GitHub, they simply calls some API where the model is actually stored." }, { "start": 701.96, "end": 705.94, "text": " This is a neat tool, not directly related to machine learning." }, { "start": 705.94, "end": 711.36, "text": " But if you've ever wondered what like the difference between a B float 16 and an FP" }, { "start": 711.36, "end": 713.6800000000001, "text": " 16 is, I never knew." }, { "start": 713.68, "end": 720.8, "text": " Charlie Blake has a very cool tool on a blog that essentially shows you the different tradeoffs" }, { "start": 720.8, "end": 723.88, "text": " you can make when you choose a number format." }, { "start": 723.88, "end": 727.92, "text": " So it shows you for the different numbers, what kind of ranges you can represent with" }, { "start": 727.92, "end": 730.3599999999999, "text": " them, where they're good at, where they're not good at." }, { "start": 730.3599999999999, "end": 735.8599999999999, "text": " So you can see here clearly the difference between a B float 16 and an FP 16." }, { "start": 735.8599999999999, "end": 741.64, "text": " One can represent a lot of numbers and the other one can represent just very small range" }, { "start": 741.64, "end": 744.6, "text": " of numbers, but to more precision." }, { "start": 744.6, "end": 751.52, "text": " Gridly JS is a tool that allows you to interact with grid world reinforcement learning environments." }, { "start": 751.52, "end": 753.88, "text": " So there are a number of cool features right here." }, { "start": 753.88, "end": 756.04, "text": " You can edit levels directly." }, { "start": 756.04, "end": 757.6, "text": " You can also try out the levels." }, { "start": 757.6, "end": 759.3199999999999, "text": " You can debug your policies." }, { "start": 759.3199999999999, "end": 761.24, "text": " You can record trajectories." }, { "start": 761.24, "end": 766.04, "text": " So right now I don't have a trajectory, but what I can do is I can record right here and" }, { "start": 766.04, "end": 771.56, "text": " I can move this thing around here, here, going to the lava and then I die." }, { "start": 771.56, "end": 775.4, "text": " And you can see the steps I've taken right here." }, { "start": 775.4, "end": 780.6999999999999, "text": " So you can use this to do various kinds of things, debugging, investigating, and so on." }, { "start": 780.6999999999999, "end": 785.4, "text": " If you are into reinforcement learning and you work with grid world, then by all means," }, { "start": 785.4, "end": 786.4, "text": " check this out." }, { "start": 786.4, "end": 790.3199999999999, "text": " Meta announces their new box, I guess." }, { "start": 790.3199999999999, "end": 791.3199999999999, "text": " This is the box." 
}, { "start": 791.3199999999999, "end": 796, "text": " This is an architecture for deep learning, the grand Teton." }, { "start": 796, "end": 799.5999999999999, "text": " Essentially they release the architecture open source." }, { "start": 799.6, "end": 805.48, "text": " So their engineers have sat down and thought long and hard about what it takes for a great" }, { "start": 805.48, "end": 806.88, "text": " machine learning system." }, { "start": 806.88, "end": 809.96, "text": " Like they're a bit more older VGX boxes." }, { "start": 809.96, "end": 815.76, "text": " And they essentially tell you, look, we believe that this combination of hardware, this processors," }, { "start": 815.76, "end": 822.2, "text": " these GPUs connected like this with these power supplies will be a very great base for" }, { "start": 822.2, "end": 823.2, "text": " doing research." }, { "start": 823.2, "end": 829.0400000000001, "text": " Yeah, they're releasing these specs essentially for you to just buy or assemble." }, { "start": 829.04, "end": 830.64, "text": " I guess whatever you want to do with it." }, { "start": 830.64, "end": 836.7199999999999, "text": " But I can tell you it is relatively hard to decide exactly on every component of the hardware." }, { "start": 836.7199999999999, "end": 842.24, "text": " And it's really great that people who are very competent in this actually think about" }, { "start": 842.24, "end": 844.8399999999999, "text": " it and give their suggestions." }, { "start": 844.8399999999999, "end": 850.04, "text": " So if you have a lab or a company and you really want to buy your own hardware, maybe" }, { "start": 850.04, "end": 852.04, "text": " this is a good option for you." }, { "start": 852.04, "end": 859.4399999999999, "text": " Pugging face diffusers from version 0.5.1 on forward supports diffusers in Jax." }, { "start": 859.4399999999999, "end": 863.48, "text": " If you like Jax, if you like stable diffusion, go for it." }, { "start": 863.48, "end": 868.04, "text": " Muse is an open source stable diffusion production server." }, { "start": 868.04, "end": 873.9599999999999, "text": " Well it is not as much a server as it is sort of like a tutorial on how to bring up a server." }, { "start": 873.9599999999999, "end": 878.48, "text": " This is based on the lightning apps framework, which is open source." }, { "start": 878.48, "end": 883.32, "text": " And it's kind of an easy way to bring together all the components you need to deploy machine" }, { "start": 883.32, "end": 884.52, "text": " learning things." }, { "start": 884.52, "end": 889.64, "text": " And this repository is essentially a specification on how to pull up a stable diffusion server." }, { "start": 889.64, "end": 894.52, "text": " So if you want to deploy stable diffusion yourself, this is probably the fastest and" }, { "start": 894.52, "end": 896.52, "text": " simplest way to do so." }, { "start": 896.52, "end": 902.84, "text": " TRLX by Carper AI is a library that allows you to do reinforcement learning for text" }, { "start": 902.84, "end": 903.84, "text": " models." }, { "start": 903.84, "end": 908.4, "text": " So you can see right here, you can give either sort of a reward function or you can give" }, { "start": 908.4, "end": 912.52, "text": " a data set that assigns values to expert demonstrations." }, { "start": 912.52, "end": 916.4599999999999, "text": " And you can train a language model to incorporate that." 
}, { "start": 916.4599999999999, "end": 922.28, "text": " This is a relatively new domain to do reinforcement learning on text models, but it is cool to" }, { "start": 922.28, "end": 925.4, "text": " have another library to tackle the problem." }, { "start": 925.4, "end": 930.52, "text": " RLBaselines3zoo is a training framework for stable baselines 3 reinforcement learning" }, { "start": 930.52, "end": 931.6, "text": " agents." }, { "start": 931.6, "end": 936.88, "text": " Stable baselines is a library that tries to give reference implementations of reinforcement" }, { "start": 936.88, "end": 940.9, "text": " learning algorithms because they're very tricky and they're very hard to get right." }, { "start": 940.9, "end": 945.88, "text": " So these are good, solid and performant reference implementations." }, { "start": 945.88, "end": 948.68, "text": " Stable baselines 3 is the third iteration of it." }, { "start": 948.68, "end": 955.14, "text": " And this repository right here, the zoo contains a number of surrounding things like scripts" }, { "start": 955.14, "end": 960.8, "text": " that make it very easy to interact with it, but also prepared agents and prepared hyper" }, { "start": 960.8, "end": 965.4, "text": " parameter settings that work well in different standard environments." }, { "start": 965.4, "end": 971.6999999999999, "text": " Jaxsec is a library that allows you to train very large language models in Jax." }, { "start": 971.6999999999999, "end": 976.04, "text": " So the cool thing is that with this library, you essentially get things like data parallelism" }, { "start": 976.04, "end": 978.02, "text": " or model parallelism for free." }, { "start": 978.02, "end": 981.6, "text": " You can just specify them and you can trade them off however you want." }, { "start": 981.6, "end": 985.56, "text": " This is due to the power and simplicity of Jax." }, { "start": 985.56, "end": 991.88, "text": " Albuminations, I hope I'm pronouncing that correctly, 1.3 is out and it introduces a" }, { "start": 991.88, "end": 994.24, "text": " bunch of new image augmentations." }, { "start": 994.24, "end": 996.72, "text": " This is a library for image augmentations." }, { "start": 996.72, "end": 1002.26, "text": " So it's good that they introduce new augmentations that fits very well to the augmentations they" }, { "start": 1002.26, "end": 1003.26, "text": " already have." }, { "start": 1003.26, "end": 1005.26, "text": " There's also a bunch of bug fixes and more." }, { "start": 1005.26, "end": 1009.76, "text": " If you're looking for image augmentations in Python, this might be a good library." }, { "start": 1009.76, "end": 1012.88, "text": " This is a really cool thing you can do with diffusion models." }, { "start": 1012.88, "end": 1018.32, "text": " These people have trained diffusion models of brain images and were able to create new" }, { "start": 1018.32, "end": 1022.5600000000001, "text": " synthetic brain images with a degree of controllability." }, { "start": 1022.56, "end": 1026.08, "text": " Now there is a paper on archive if you are interested." }, { "start": 1026.08, "end": 1031.44, "text": " You can also download the dataset of 100,000 synthetic brain images." }, { "start": 1031.44, "end": 1035.74, "text": " CodeGeeks is a multilingual code generation model." }, { "start": 1035.74, "end": 1041.22, "text": " This is as it says, it's essentially something similar like Codex, but it is released." 
}, { "start": 1041.22, "end": 1045.2, "text": " You can actually go and you can download the model and use it yourself." }, { "start": 1045.2, "end": 1049.3999999999999, "text": " MetaAI releases AI template, which is an inference engine." }, { "start": 1049.3999999999999, "end": 1051.9199999999998, "text": " The goal here is to make inference faster." }, { "start": 1051.92, "end": 1056.64, "text": " You get a lot of speed ups over just running standard inference and something like eye" }, { "start": 1056.64, "end": 1057.64, "text": " torch." }, { "start": 1057.64, "end": 1059.0600000000002, "text": " So this does two things." }, { "start": 1059.0600000000002, "end": 1062.44, "text": " First of all, it optimizes your computation graph." }, { "start": 1062.44, "end": 1066.96, "text": " If your computation graph contains a lot of like little operations that could be used" }, { "start": 1066.96, "end": 1072.8400000000001, "text": " together into something that's really optimal for a given hardware, or just that can be" }, { "start": 1072.8400000000001, "end": 1077.3600000000001, "text": " expressed in a smarter way, then a graph optimizer can do that." }, { "start": 1077.36, "end": 1082.1999999999998, "text": " And in a second step, there is a compiler to compile all of this to highly performance" }, { "start": 1082.1999999999998, "end": 1090.1999999999998, "text": " C++ code that runs on backend hardware such as a GPU that uses CUDA or even an AMD GPU." }, { "start": 1090.1999999999998, "end": 1094.6, "text": " So if fast inference is a concern to you, this is definitely a thing to check out." }, { "start": 1094.6, "end": 1099.9199999999998, "text": " Nerve Studio describes itself as a collaboration friendly studio for nerves, but it is more" }, { "start": 1099.9199999999998, "end": 1106.12, "text": " like a collection, an entire collection of software to handle nerves, anything from training," }, { "start": 1106.12, "end": 1108.9199999999998, "text": " validating, or even experiencing yourself." }, { "start": 1108.9199999999998, "end": 1113.1999999999998, "text": " You can see they have a viewer that allows you to just explore the nerves that you do" }, { "start": 1113.1999999999998, "end": 1115.1999999999998, "text": " and make videos from it." }, { "start": 1115.1999999999998, "end": 1118.36, "text": " But really it covers everything to do with nerves." }, { "start": 1118.36, "end": 1124.2399999999998, "text": " Now speaking of nerve, Nerf Pack is a pipe torch nerve acceleration toolbox." }, { "start": 1124.2399999999998, "end": 1129.1999999999998, "text": " This gets significant speed ups over simply using nerve code that's out there." }, { "start": 1129.1999999999998, "end": 1134.2399999999998, "text": " For example, vanilla nerve model with eight layer multilayer perceptrons can be trained" }, { "start": 1134.24, "end": 1140.36, "text": " to better quality in one hour rather than one to two days as in the paper." }, { "start": 1140.36, "end": 1146.1200000000001, "text": " Dstack, the logo doesn't exactly work on dark background, but Dstack is a library that" }, { "start": 1146.1200000000001, "end": 1150.6, "text": " wants to standardize your ML workflows that you run in the cloud." }, { "start": 1150.6, "end": 1156.88, "text": " This is essentially you check your workflows into GitHub and Dstack helps you to run them" }, { "start": 1156.88, "end": 1158.6200000000001, "text": " uniformly anywhere." 
}, { "start": 1158.6200000000001, "end": 1163.64, "text": " So in a workflow, you can specify things like your workflow name, obviously, but then it" }, { "start": 1163.64, "end": 1166.44, "text": " starts, you can say, okay, my provider is bash." }, { "start": 1166.44, "end": 1168.24, "text": " So this is essentially a bash script." }, { "start": 1168.24, "end": 1169.24, "text": " Now what are the commands?" }, { "start": 1169.24, "end": 1173.64, "text": " I want to pip install some stuff, I want to run this training script right here, but it" }, { "start": 1173.64, "end": 1175.8000000000002, "text": " also has things like artifacts." }, { "start": 1175.8000000000002, "end": 1180.76, "text": " And you can also specify things like I want to load data from this S3 bucket over there," }, { "start": 1180.76, "end": 1182.8200000000002, "text": " I want to run on this cloud over there." }, { "start": 1182.8200000000002, "end": 1186.3200000000002, "text": " So all of this is quite geared towards machine learning." }, { "start": 1186.3200000000002, "end": 1191.72, "text": " It's certainly not the first workflow engine or the first iteration from, hey, let's check" }, { "start": 1191.72, "end": 1193.46, "text": " our things into source code." }, { "start": 1193.46, "end": 1197.1200000000001, "text": " But it is very targeted at running ML workflows in the cloud." }, { "start": 1197.1200000000001, "end": 1202.2, "text": " Several people have figured out massive speed ups in the OpenAI whisper model." }, { "start": 1202.2, "end": 1209.28, "text": " For example, this person here has figured out a 3x speed up on CPU inference, but refers" }, { "start": 1209.28, "end": 1215.8400000000001, "text": " to the GitHub thread where someone else has found an even bigger 3.25x speed up." }, { "start": 1215.8400000000001, "end": 1220.52, "text": " Again, it's very cool to see what people do when you just give them the model." }, { "start": 1220.52, "end": 1227.16, "text": " And lastly, I want to point to a couple of databases for stuff mainly around stable diffusion." }, { "start": 1227.16, "end": 1229.8, "text": " So diffusion DB is on the hugging face hub." }, { "start": 1229.8, "end": 1235.72, "text": " It's a data set of prompts that have been entered by real users into stable diffusion" }, { "start": 1235.72, "end": 1238.6, "text": " and the corresponding images that they got out." }, { "start": 1238.6, "end": 1245.2, "text": " Public prompts, that's public prompts dot art in your browser is a database of three" }, { "start": 1245.2, "end": 1247.36, "text": " prompts and three models." }, { "start": 1247.36, "end": 1252.24, "text": " These models are mostly trained using dream booth, but if you're looking for inspiration" }, { "start": 1252.24, "end": 1257.12, "text": " for prompts and what they turn out, then this is maybe a good place to go." }, { "start": 1257.12, "end": 1262.4399999999998, "text": " Likewise, visualize.ai is a website that goes a little bit more businessy." }, { "start": 1262.4399999999998, "end": 1266.84, "text": " So it lets you create some free stuff like stable diffusion." }, { "start": 1266.84, "end": 1272.1599999999999, "text": " But then it also acts like as a bit of a marketplace for these things, such that you could also" }, { "start": 1272.1599999999999, "end": 1273.84, "text": " buy them or sell them." }, { "start": 1273.84, "end": 1278.8, "text": " It's cool to see that different business models are trying to spring up around this ecosystem." 
}, { "start": 1278.8, "end": 1284, "text": " Ultimately, someone will figure out how to really make money off of this stuff." }, { "start": 1284, "end": 1288.48, "text": " But you know, it's good to be part of the time when people are just trying stuff and" }, { "start": 1288.48, "end": 1292.9199999999998, "text": " seeing what happens with not only on the research side, but also on the business side." }, { "start": 1292.9199999999998, "end": 1298.6399999999999, "text": " Lastly, Big Science has released prompt source, which is an IDE for natural language prompts." }, { "start": 1298.64, "end": 1304.16, "text": " So this is a way to give people a bit more help and a bit more standardization when they" }, { "start": 1304.16, "end": 1309.48, "text": " use prompts to achieve certain goals, for example, when they use prompts to tackle some" }, { "start": 1309.48, "end": 1315.5200000000002, "text": " of the NLP challenges that are now more and more phrased simply as prompts into these" }, { "start": 1315.5200000000002, "end": 1321.0800000000002, "text": " large language models, rather than as data that goes into a specially trained model for" }, { "start": 1321.0800000000002, "end": 1322.0800000000002, "text": " that task." }, { "start": 1322.0800000000002, "end": 1326.8400000000001, "text": " So if you find yourself in this situation or a similar one, then prompt source may be" }, { "start": 1326.8400000000001, "end": 1327.8400000000001, "text": " for you." }, { "start": 1327.84, "end": 1333.1999999999998, "text": " Finally, this is a database of all Lex Friedman podcasts transcribed." }, { "start": 1333.1999999999998, "end": 1335.3999999999999, "text": " This is the website of Andre Karpotty." }, { "start": 1335.3999999999999, "end": 1341.56, "text": " And he used a simple combination of a download script from YouTube combined with OpenAI's" }, { "start": 1341.56, "end": 1346.12, "text": " whisper to transcribe all of Lex Friedman's podcast episodes." }, { "start": 1346.12, "end": 1352.3999999999999, "text": " You can go to any one of them, you can click and they are here with time annotations and" }, { "start": 1352.3999999999999, "end": 1355.6, "text": " all is a very simple but very cool project." }, { "start": 1355.6, "end": 1356.72, "text": " Thank you, Andre." }, { "start": 1356.72, "end": 1358.8, "text": " And I thank all of you for listening." }, { "start": 1358.8, "end": 1360.44, "text": " I'll be home again next week." }, { "start": 1360.44, "end": 1361.44, "text": " Until then, stay hydrated." }, { "start": 1361.44, "end": 1387.68, "text": " Bye bye." } ]
sbKaUc0tPaY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
[ "Science & Technology" ]
[]
https://arxiv.org/abs/1902.04818 Abstract: We investigate conditions under which test statistics exist that can reliably detect examples, which have been adversarially manipulated in a white-box attack. These statistics can be easily computed and calibrated by randomly corrupting inputs. They exploit certain anomalies that adversarial attacks introduce, in particular if they follow the paradigm of choosing perturbations optimally under p-norm constraints. Access to the log-odds is the only requirement to defend models. We justify our approach empirically, but also provide conditions under which detectability via the suggested test statistics is guaranteed to be effective. In our experiments, we show that it is even possible to correct test time predictions for adversarial attacks with high accuracy. Authors: Kevin Roth, Yannic Kilcher, Thomas Hofmann
Hello and welcome. Today we're looking at The Odds are Odd: A Statistical Test for Detecting Adversarial Examples. So, shameless self-promotion here, since this is me. This is on arXiv, and basically what we do is detect adversarial examples. For those who don't know what an adversarial example is, it's basically a way of fooling a classifier in order to get it to do something weird. Let's look at it. So maybe you have an image of a cat. I have no clue how a cat looks. Alright, so you have an image of a cat and you have a classifier. The classifier takes this image as an input, kind of winds it down to some probabilities of classes, cat, dog and so on, and it gives you an estimate of how likely each class is. What the adversarial example does is change this image by adding a noise. This is a very specific noise, and you have kind of a multiplier here, gamma, which is super small. So the noise is almost invisible; you can't see it with a human eye, basically, it's so small. But it's able to perturb this image in a way that the probabilities change such that all of a sudden a different class is the highest class. So basically we're able to fool these classifiers by adding just a very little bit of very, very specific noise. That's an adversarial example. These have many implications in, let's say, security applications, and also in understanding how these classifiers work. Alright, so our task is to explain and detect them: explain why they happen and detect when they happen. So what do we do? Let's just jump right into the thing here. We view a classifier through its output, the so-called logits. This here is your neural network up to the last layer; it can be something like a convolutional neural network and so on, and it gives you a feature representation. So from the image x you extract a feature representation, which is this entire thing here, and then you multiply this feature representation by a weight matrix. The feature representation is going to be some vector of dimension D, and the weight matrix W is going to be D by K, where K is the number of classes, so you output a vector of dimension K, which then gives you this cat, dog and so on. These are the logits, and the logits get transformed to probabilities by running them through a softmax layer. But basically, we view a classifier as a feature representation followed by a weight matrix, and this here is a matrix multiplication, a dot product of the feature vector with each column of the matrix. And this is kind of where the adversarial examples happen. So when we look at this weight matrix, again, we look at the D-dimensional feature vector, and the weight matrix has columns, one per class. Let's say we have four classes here, so it has these four columns and each of them is D-dimensional. Each of them is going to be multiplied by the feature vector, giving a score. So the final score for a class is the product of one of the columns w1, w2, w3, w4 with this feature vector. Let's call the feature vector little f. So your logit of class i is going to be the inner product l_i = <w_i, f>. Alright, we'll leave away biases for now; we could introduce biases to make it a bit more complicated, but it changes nothing.
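To make this decomposition concrete, here is a minimal, runnable sketch using torchvision's ResNet-18 as a stand-in classifier (the paper uses its own trained models; this is just to illustrate the feature-times-weight-matrix view):

import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
features = torch.nn.Sequential(*list(model.children())[:-1])  # everything up to the last layer

x = torch.randn(1, 3, 224, 224)         # a dummy input image
f = features(x).flatten(1)              # feature vector f, shape (1, 512)
W, b = model.fc.weight, model.fc.bias   # last-layer weight matrix (K x D) and bias

logits = f @ W.T + b                    # logit l_i = <w_i, f> (+ bias)
assert torch.allclose(logits, model(x), atol=1e-5)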
So your logit is the inner product, and whichever logit is the highest wins; that's the prediction of the classifier. Now, in an adversarial example, what can you change? You can change this feature vector f: by changing x you change the output of the convolutional neural network, which is the feature vector. And what you have to do in order to make one particular logit as high as possible is make this inner product as high as possible. What's an inner product? In a classic vector picture, if this is w_i and this is f, you want to align f with w_i as much as possible, because the inner product depends on the angle and the magnitudes. Sure, you can stretch f, but stretching changes its inner product with all the w's at once; what you really want is to rotate f toward w_i. The adversarial task is often framed as either targeted, where i has to be one particular other class, or untargeted, which just means: give me a perturbation that fools the classifier. And "fooled" means that whatever it predicts right now, it should predict something different. So ultimately you want the inner product <w_i, f> as high as possible for some i that is not the correct class, and at the same time you want the inner product with the correct class, <w_y, f>, to be as small as possible. Here y is the label of x; let's say the classifier is 100% correct, so w_y is the column that's currently predicted. That means you want to maximize the difference l_i minus l_y. We have slightly different notation in the paper, I think we call this z, but never mind. It's a pretty simple idea: make this quantity as large as possible. Our point is that you want to maximize this, but you have a constraint: your delta x can only be small. The whole point of an adversarial example is that the perturbation is so small you can't see it, and that means you basically don't have much wiggle room for these perturbations. Which in turn means we should be able to detect a pattern in the latent space, the feature vector f. And if we can detect a pattern in the latent space, we kind of get an adversarial example detector.
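Just to illustrate that objective, here is a hedged toy sketch of an untargeted attack on the linear toy model from above. This is not the attack used in the paper; with the identity feature extractor, the gradient of l_i minus l_y with respect to x is simply w_i minus w_y, and real attacks would backpropagate through the whole network instead:

```python
def untargeted_attack(x, y, eps=0.1, steps=10):
    """Toy sign-gradient loop: push up the best wrong logit, push down l_y,
    while staying inside an infinity-norm ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        l = logits(x_adv)
        i = int(np.argmax(np.delete(l, y)))   # best class other than y...
        i = i if i < y else i + 1             # ...index corrected for the deletion
        grad = W[:, i] - W[:, y]              # gradient of l_i - l_y for identity features
        x_adv = x_adv + (eps / steps) * np.sign(grad)
        x_adv = x + np.clip(x_adv - x, -eps, eps)
    return x_adv
```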
So how do we do this? We measure exactly this: the alignment between the currently predicted class and all the other classes. In this graphic you see it for a 10-class classifier; this is CIFAR-10. We only show six of the other classes here, but the full graphic is in the paper. This is an adversarially perturbed sample, and the axis going up in each of the panels shows the alignment with the adversarial class. Of course you see the bright red dot, if you just focus on that: that is the adversarial example projected into this plot, and of course its alignment with that class is very high, since the classifier actually predicts this class. The blue is the sample the adversarial example was derived from, the original image. And you can already see, without looking at any of the other dots, that the blue is around zero in most panels, but very high along the horizontal axis in one of them. The horizontal axis is, for each panel, one of the other classes, everything except the currently predicted adversarial class. So the vertical axis is always the same, while the horizontal axis differs from panel to panel. And that's why we frame it in green: this is the panel where the horizontal axis corresponds to the original class of the sample. What you see is that the blue is really high in this class, and the adversarial example procedure has basically driven it down along this class and up along the adversarial class, which is exactly saying it has made the one inner product small and the other inner product large. So where do we go from here? Let's skip this graphic for a bit and go to the one that's way down here. What we've done is take an example X out of the data set and the adversarial example X hat derived from it. In the plot on the right, X would be sitting about here, and X hat about one third from the top. What the vertical axis represents is this: we've gone from X to X hat in very small steps, and at each step we've asked the classifier, hey classifier, what's the probability of the class y? Here y is the class of X, and X hat has some other class, since it's an adversarial example. That probability is represented in white: the more white, the more probable the classifier thinks class y is.
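The grid sampling sketched below is my own reconstruction of that probing procedure, not the paper's code. One axis walks from X toward X hat and beyond; the other walks along a random orthogonal direction (explained right after this); `model_prob_y` stands for whatever returns the classifier's softmax probability of the original class y:

```python
def probability_landscape(x, x_hat, model_prob_y, n=50, rng=None):
    """Probe P(y) on a 2D grid spanned by the adversarial direction
    and a random direction orthogonal to it."""
    rng = rng or np.random.default_rng(0)
    d_adv = x_hat - x                                  # adversarial direction
    r = rng.normal(size=x.shape)
    r = r - (r.ravel() @ d_adv.ravel()) / (d_adv.ravel() @ d_adv.ravel()) * d_adv
    r = r * np.linalg.norm(d_adv) / np.linalg.norm(r)  # same length as d_adv

    grid = np.zeros((n, n))
    for a, s in enumerate(np.linspace(-1.0, 2.0, n)):        # along x -> x_hat
        for b, t in enumerate(np.linspace(-1.5, 1.5, n)):    # orthogonal axis
            grid[a, b] = model_prob_y(x + s * d_adv + t * r)
    return grid   # rendered in grayscale: white = high P(y)
```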
So the downward direction is the direction of X hat minus X, the adversarial direction. The direction across is some direction orthogonal to that, also traversed in tiny steps, and at each point we ask the classifier: hey classifier, what do you think is the probability of y here? So we've done this kind of grid sampling, and at each point the classifier outputs some number, which we plot in white. The downward direction is always the same, while the orthogonal direction is randomized and aggregated over lots and lots of different samples, and we also aggregate the entire thing over the data set. So we get a comprehensive view of what adversarial examples look like from the classifier's perspective. And what we find is pretty interesting. Around the original example, the original class decreases smoothly in every direction: you see it getting black at the edges, so the further away you go from the original example, the shadier the classifier gets, like, yeah, I'm not so sure anymore that this is the class. But if you go into the direction of the adversarial example, the drop-off is, first of all, very steep: all of a sudden you're in very dark territory, which means the classifier doesn't think y is probable at all anymore. And moreover, you get this kind of cone. So what we think is happening is that, given an example, there are these specific straight directions in image space that go to adversarial examples, and we call them cones because they are low-dimensional directions in the space where the adversarial examples lie. What's also quite interesting: the quality of this next plot is not very good, so let me draw it. If you start here and walk toward the adversarial example, the original class starts out high, goes down rapidly, and stays down, even if you go super far into this direction. Whereas y hat, the adversarial class, starts low, goes up, and then kind of fades. The adversarial example sits at about this distance, meaning: as you go toward it, the probability of the adversarial class rises and the probability of the original class drops. Then, as you go further, and this is what's interesting, the adversarial class probability drops as well; the classifier is kind of like, okay, there's too much noise now, I'm not so sure about this class anymore. But the original class stays low for a very, very long way in this direction. This gives us a hint that adversarial examples are characterized by specific directions you can go into that suppress the original class and pump the new class up, which is exactly what we claimed with the inner product alignment. So for the next experiment, we took this adversarial example and asked: what if we go away from it, into random directions?
After all, it's really just this one direction that's problematic, so if we go into random directions we should get back to the original class, since the adversarial example is basically surrounded by it; this is just one direction, and everything else represents all the other directions there are, and how many directions are there in pixel space? A lot. So we should be able to get back to the original class. But we found that's not the case, and we also found why. Look at this plot here, where you add noise, with the noise magnitude on this axis. The orange is the adversarial class, and orange goes down, down, down as you increase the noise. The blue is the source class, and it goes up, and it goes up faster than the green, which is the highest other class, whatever class is neither the source nor the adversarial class. So the source class goes up quickly, but before the source class can overtake the adversarial class, which only happens way back there, the highest other class has already kind of taken over. The source class is basically too weak. And if you go over the earlier landscape plot with an actual color picker, you see that the amount of white there is not high enough, something like 0.3 out of 1 or even lower. The source class is not strong enough that by simply adding a bit of noise you can go back to it. But we thought: hey, if this is correct, we can actually detect this effect, this faster rising of the source class. So our plan is this: we add a particular amount of noise, just a little bit actually, and then we detect which class falls and which class rises. The way we do this is to look at exactly the alignment I've described before, but under noise. We form this quantity for all classes other than y, where y is the class that's currently predicted, and we look at what happens to it under noise. And that's where we get to this graphic. Again, the vertical axis is the adversarial class, the class that's currently predicted, and the horizontal axis is, for each panel, one of the other classes. When we add noise, the noise magnitude is encoded in the brightness of the dots: the darker the red, the more noise we've added. Here is the original adversarial sample, and as we add noise, nothing much happens for a class that has nothing to do with the original class; the classifier simply gets less sure about it. But in case of the original class, the one the adversarial example was derived from, the alignment really rises, at the same time as the adversarial class drops. So we're able to measure these deltas under noise, and we're able to devise statistics of what happens to these quantities for a non-adversarial sample versus an adversarial sample.
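A rough sketch of such a test statistic, as I understand it from the description above. The actual paper calibrates thresholds from held-out data and votes over several noise sources; the names here (`noise_scale`, `n_samples`) are mine:

```python
def noise_induced_alignment(x, model_logits, n_samples=64, noise_scale=0.1, rng=None):
    """Average shift of the logit gap (l_z - l_y) under random noise,
    for every class z other than the predicted class y."""
    rng = rng or np.random.default_rng(0)
    l0 = model_logits(x)
    y = int(np.argmax(l0))                      # currently predicted class
    deltas = np.zeros_like(l0)
    for _ in range(n_samples):
        eta = noise_scale * rng.normal(size=x.shape)
        l = model_logits(x + eta)
        deltas += (l - l[y]) - (l0 - l0[y])     # how much each gap moved
    deltas /= n_samples
    deltas[y] = -np.inf                         # the predicted class itself is ignored
    return y, deltas

# decision rule sketch: flag x as adversarial if some delta_z exceeds a
# threshold tau_{y,z} calibrated on clean data; the class z that triggers
# it is the candidate source class.
```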
Here you see pairings of source class and adversarial class; each of these histograms is collected from such a pairing. In blue, the alignments under noise of a non-perturbed sample; in orange, the alignments under noise of an adversarial sample. And what's cool is that these alignments, as you can see, are very different in all of these cases. So there is a clear signature of adversarial samples in these noise-induced alignments with the weight matrix rows, which lets you build a detector: you can say, all right, anything to the left is clean, anything to the right is adversarial. We can do this over many different types of noise, build a voting mechanism on top of that, and thereby detect adversarial examples. We have a bunch of experiments; we mostly experiment on the CIFAR-10 and ImageNet data sets. And over here you see one of the main results, the detection rates of our statistical test. On clean samples you want the detection rate to be low, and on adversarial samples you want it to be high, and we achieve very high detection rates while having very low false positive rates, especially on ImageNet. It seems like the better tuned these models are, the better we are at detecting adversarial examples against them; it's kind of a direct correlation to how well the models perform on clean accuracy. And what we can do now is not only detect these things, we can detect them in a directed fashion. If you have a sample of a particular predicted class, you can look at the positions of the noise-induced features for each of the classes it could have been derived from. If all of them say it's a clean sample, then all right, it's a clean sample. But if one of them says it's an adversarial sample, then not only do I know it's an adversarial sample, I can say: aha, this must be the source class. This is exactly the effect we saw above: if we detect this pattern, we can also deduce the original class the adversarial example was derived from. So we're able to build not only a detector, we're basically able to reconstruct the original class. And here you see that for these models on CIFAR-10 (ImageNet is a bit too large as of yet for our compute), models whose clean accuracies are pretty high, plus this kind of toy network, we're able to reconstruct the original class. So basically this is a defense against adversarial examples that gets back to almost clean accuracy, which is actually really surprising and kind of nice. We do a bunch of other experiments, including defending against an attacker that's actually aware of this defense. But the main point here is: we don't say this is the end-all method of defending against adversarial examples. We simply want to encourage this way of thinking: what if you apply noise-induced perturbations, how does your network react to that? Can you detect effects like the ones here? And are they unavoidable, or are there architectures we can build such that adversarial examples have no chance, except by doing something like this, which we can then easily detect?
All right, so that was a bit of an introduction. If you liked it, check out the entire paper, and goodbye!
[ { "start": 0, "end": 5.6000000000000005, "text": " Hello and welcome. Today we're looking at the odds are odd, a statistical test for" }, { "start": 5.6000000000000005, "end": 11.76, "text": " detecting adversarial examples. So shameless self-promotion here since this" }, { "start": 11.76, "end": 21.28, "text": " is me. So this is an archive and basically what we do is we're detecting" }, { "start": 21.28, "end": 25.8, "text": " adversarial examples. For those who don't know what an adversarial example is, it's" }, { "start": 25.8, "end": 36.28, "text": " basically a way of fooling a classifier in order to kind of get it to do" }, { "start": 36.28, "end": 42.2, "text": " something weird. Let's look at it. So maybe you have an image of a cat." }, { "start": 42.2, "end": 50.24, "text": " I have no clue how a cat looks. Alright, so you have an image of a cat and you have a" }, { "start": 50.24, "end": 54.88, "text": " classifier. So the classifier takes this image as an input, kind of winds it" }, { "start": 54.88, "end": 65, "text": " down to some probabilities of classes and cat, dog and so on. And it then gives you" }, { "start": 65, "end": 76.4, "text": " an estimate of how likely each class is. So what the adversarial example does" }, { "start": 76.4, "end": 85.92, "text": " is it changes this image and it adds a noise. So this is just a very" }, { "start": 85.92, "end": 91.32000000000001, "text": " specific noise and you have kind of a multiplier here, gamma, which is super" }, { "start": 91.32000000000001, "end": 97.4, "text": " small. So the noise is almost... you can't see it with a human eye basically, it's" }, { "start": 97.4, "end": 104.56, "text": " so small. But it's able to perturb this image in a way that the" }, { "start": 104.56, "end": 111.2, "text": " probabilities will change such that all of a sudden a different class now is the" }, { "start": 111.2, "end": 116.32000000000001, "text": " highest class. So basically we're able to fool these classifiers by adding just" }, { "start": 116.32000000000001, "end": 121.92, "text": " very little bit of very very specific noise. So that's an adversarial example." }, { "start": 121.92, "end": 127, "text": " These have many implications in, let's say, security applications and also in" }, { "start": 127, "end": 132.92000000000002, "text": " understanding how these classifier works. Alright, so our task is to explain and" }, { "start": 132.92, "end": 138.51999999999998, "text": " detect them, explain why they happen and detect when they happen." }, { "start": 138.51999999999998, "end": 150.07999999999998, "text": " Alright, so what do we do? Basically let's just jump right into" }, { "start": 150.07999999999998, "end": 162.35999999999999, "text": " the thing here. We view a classifier as an output, so you" }, { "start": 162.36, "end": 168.44000000000003, "text": " have logits, what's called logits, L is" }, { "start": 172.8, "end": 180.84, "text": " this. This here is your neural network up to the last layer." }, { "start": 180.84, "end": 185.16000000000003, "text": " Basically it can be like something like a convolutional neural network and so on." }, { "start": 185.16000000000003, "end": 190.56, "text": " It gives you a feature representation. So you extract from the image X a feature" }, { "start": 190.56, "end": 196, "text": " representation, which is this entire thing here, and then you multiply this" }, { "start": 196, "end": 202, "text": " feature representation. So this is going to be some vector of dimension D. 
You" }, { "start": 202, "end": 210.72, "text": " multiply this by this weight matrix, which is going to be something like, okay" }, { "start": 210.72, "end": 216.88, "text": " I've drawn it in the wrong direction here. Let's draw W over here." }, { "start": 216.88, "end": 223.4, "text": " It's going to be D by, let's say K, where K is the number of classes." }, { "start": 223.4, "end": 233.2, "text": " Okay, still wrong. D by K, right? And output a vector of dimension K, which" }, { "start": 233.2, "end": 238.4, "text": " then is this cat, dog and so on. So these are the logits and the logits get" }, { "start": 238.4, "end": 244.2, "text": " transformed to the probabilities by running it through a softmax layer. But" }, { "start": 244.2, "end": 251.83999999999997, "text": " basically we view a classifier as having a feature representation and a weight" }, { "start": 251.83999999999997, "end": 256.96, "text": " matrix. And this here is a matrix multiplication adult product by" }, { "start": 256.96, "end": 266.84, "text": " matrix. So what we see basically is, this is kind of where the" }, { "start": 266.84, "end": 272.08, "text": " adversarial examples happen. So when we look at this weight matrix, right, again" }, { "start": 272.08, "end": 275.64, "text": " we look at the D dimensional feature vector here, and we look at the weight" }, { "start": 275.64, "end": 286.52, "text": " matrix, what it does is it has columns, right? Columns. Let's say we have four" }, { "start": 286.52, "end": 293.28, "text": " classes here, right? So it has these four columns and each of them is D" }, { "start": 293.28, "end": 300.2, "text": " dimensional. So each of them is going to be multiplied by this thing and giving a" }, { "start": 300.2, "end": 305.64, "text": " score. So the final score for a class is going to be the multiplication of a row" }, { "start": 305.64, "end": 317.59999999999997, "text": " W1, W2, W3, W4 by this feature vector. Let's call the feature vector little f." }, { "start": 317.59999999999997, "end": 328.03999999999996, "text": " So your logit of class i is going to be the inner product of W i and f." }, { "start": 328.04, "end": 334.72, "text": " Alright, we'll leave away biases for now. There's okay, we can introduce biases to" }, { "start": 334.72, "end": 339.40000000000003, "text": " make it a bit more complicated but it changes nothing. So your logit is going to" }, { "start": 339.40000000000003, "end": 346, "text": " be the inner product and whichever logit is the highest wins. So that's going" }, { "start": 346, "end": 352.08000000000004, "text": " to be the prediction of the classifier. So since you can, in an" }, { "start": 352.08000000000004, "end": 356.24, "text": " adversarial example, what can you change? You can change this feature vector here," }, { "start": 356.24, "end": 361.84000000000003, "text": " this f. By changing the x you can change the output of the" }, { "start": 361.84000000000003, "end": 367.6, "text": " convolutional neural network which is the feature vector. And what you have to" }, { "start": 367.6, "end": 374.2, "text": " do in order to make a logit as high as possible, basically make one class" }, { "start": 374.2, "end": 378.96000000000004, "text": " as high as possible, is you need to make this inner product as high as possible." }, { "start": 378.96000000000004, "end": 384.52, "text": " And what's an inner product? 
If you look in a classic vector representation" }, { "start": 384.52, "end": 397.12, "text": " space, if this is W i and this is f, what you want to do is you want to make f and" }, { "start": 397.12, "end": 403.12, "text": " W align as much as possible. Because the inner product is going to be basically" }, { "start": 403.12, "end": 407.91999999999996, "text": " dependent on the angle and the magnitude. So you can you can stretch f for sure" }, { "start": 407.91999999999996, "end": 413.03999999999996, "text": " but it's going to be kind of aligned with all the W's then more by stretching" }, { "start": 413.04, "end": 417.76000000000005, "text": " or more negatively in whatever way you want it. But basically you want to rotate," }, { "start": 417.76000000000005, "end": 425.76000000000005, "text": " you want to align as much as possible the f with the W i. So you want to kind" }, { "start": 425.76000000000005, "end": 434.32000000000005, "text": " of go into this direction with f. So now not only do you want to kind of maximize" }, { "start": 434.32000000000005, "end": 440.32000000000005, "text": " f with a particular W i, what you want to do is be adversarial. The adversarial" }, { "start": 440.32, "end": 446.59999999999997, "text": " task is often framed as just it's either targeted, so find a... so i needs to be a" }, { "start": 446.59999999999997, "end": 450.96, "text": " particular other class, or it's untargeted which means just just give me" }, { "start": 450.96, "end": 457.71999999999997, "text": " a perturbation that will make the classifier be fooled. And be fooled means" }, { "start": 457.71999999999997, "end": 464.96, "text": " whatever it predicts right now it should predict something different. So what" }, { "start": 464.96, "end": 473, "text": " you ultimately want to do is you want this as high as possible for some i" }, { "start": 473, "end": 481.15999999999997, "text": " that is not the correct i and you want this other" }, { "start": 481.15999999999997, "end": 492.56, "text": " quantity W y. Let's call it W y. Let's say the classifier is 100% correct." }, { "start": 492.56, "end": 502.08, "text": " So W y, y is the label of x. W y is whatever column here is" }, { "start": 502.08, "end": 511.64, "text": " currently predicted. So you want the sum column where i is not equal to y to" }, { "start": 511.64, "end": 517.52, "text": " have maximum inner product and so this is not no longer l i, we'll get to that," }, { "start": 517.52, "end": 525.24, "text": " to have maximum inner product and you want this inner product" }, { "start": 525.24, "end": 530.56, "text": " with the correct class to be as small as possible, which ultimately means you want" }, { "start": 530.56, "end": 537.4399999999999, "text": " this entire quantity maximized. So it's a pretty simple idea. We'll call, let's say" }, { "start": 537.4399999999999, "end": 542.6, "text": " this is the log i minus the log y. We have slightly different notation in the" }, { "start": 542.6, "end": 551.52, "text": " paper. I think we call this z but never mind. So you basically just want to make" }, { "start": 551.52, "end": 559.36, "text": " this as large as possible. So our point is since this is not the only" }, { "start": 559.36, "end": 567.44, "text": " thing, you want to maximize this but you have a constraint. Namely your constraint" }, { "start": 567.44, "end": 574.48, "text": " is that your delta x can only be small. 
Your delta x can only be small" }, { "start": 574.48, "end": 578.96, "text": " because the point of an adversarial example is that the perturbation is so" }, { "start": 578.96, "end": 585.44, "text": " small you can't see it and that means that you basically don't have much" }, { "start": 585.44, "end": 593.48, "text": " wiggle room to do these perturbations, which means that we should be" }, { "start": 593.48, "end": 600.24, "text": " able to detect a pattern like this in the latent space. So this here is" }, { "start": 600.24, "end": 608.08, "text": " the latent space feature vector and if we can kind of detect a pattern in the" }, { "start": 608.08, "end": 616.64, "text": " latent space then we kind of get the adversarial example detector." }, { "start": 616.64, "end": 623.6, "text": " So how do we do this? We measure exactly this. What we do is we measure the" }, { "start": 623.6, "end": 631.16, "text": " alignment between the original, between the currently predicted class and" }, { "start": 631.16, "end": 638.36, "text": " between all other classes. So in this graphic here you see this. It's a 10" }, { "start": 638.36, "end": 644.92, "text": " class classifier. This is CIFAR10. We only show one, two, three, four, we only show six of the other" }, { "start": 644.92, "end": 653.52, "text": " classes but we have the full graphic in this. So this shows an adversarial" }, { "start": 653.52, "end": 662.16, "text": " example. The axis going on top of each of the images is the alignment with the" }, { "start": 662.16, "end": 667.16, "text": " adversarial class. So this has been an adversarially perturbed sample. So this" }, { "start": 667.16, "end": 672.0799999999999, "text": " shows the alignment with the adversarial class and of course you see the bright" }, { "start": 672.08, "end": 679.2, "text": " red dot, if you just focus on that, that is the adversarial example" }, { "start": 679.2, "end": 684.5600000000001, "text": " projected into this. So of course the alignment is going to be very very" }, { "start": 684.5600000000001, "end": 689.48, "text": " high with this class since the classifier actually predicts this class." }, { "start": 689.48, "end": 695.84, "text": " The blue here is the sample that the adversarial sample was derived from" }, { "start": 695.84, "end": 702.88, "text": " which means the original image. And you already see without looking at any of" }, { "start": 702.88, "end": 709.0400000000001, "text": " the other dots that the blue is around zero here, around zero here, around zero" }, { "start": 709.0400000000001, "end": 715.88, "text": " here, here and here. But here it's very high in this axis. So the axis to the" }, { "start": 715.88, "end": 722.2800000000001, "text": " right is for each of these plots here it's one of the other" }, { "start": 722.28, "end": 727.12, "text": " classes except for the currently predicted adversarial class. So that" }, { "start": 727.12, "end": 732.4, "text": " this axis is always the same axis and while the axis to the right for each" }, { "start": 732.4, "end": 736.64, "text": " plot is a different one. And you can already see, and that's why we frame it" }, { "start": 736.64, "end": 742.9599999999999, "text": " in the green, this plot here is where the axis to the right" }, { "start": 742.9599999999999, "end": 749.3199999999999, "text": " corresponds to the original class of the classifier. So don't look yet at" }, { "start": 749.32, "end": 756.5200000000001, "text": " the other plots. 
What you see here is basically the blue is really high" }, { "start": 756.5200000000001, "end": 763.0400000000001, "text": " in this class right and the adversarial example procedure basically has driven" }, { "start": 763.0400000000001, "end": 770.88, "text": " it down this class and up this class which is exactly saying it has made this" }, { "start": 770.88, "end": 779.68, "text": " inner product small and this inner product large. So where do we go from" }, { "start": 779.68, "end": 788.96, "text": " here? Let's actually jump this graphic a bit and go to this one that's way down" }, { "start": 788.96, "end": 800.48, "text": " here. Alright so what we've done is we've taken the an example just out of the" }, { "start": 800.48, "end": 808.48, "text": " data set right and then we've taken an adversarial example. So say X is the" }, { "start": 808.48, "end": 815.12, "text": " example of the data set and then X hat is the adversarial example derived from" }, { "start": 815.12, "end": 821.08, "text": " this. Alright in this plot to the right X would be sitting about here. I'm gonna" }, { "start": 821.08, "end": 827.76, "text": " explain what the what the the kind of meaning is and X hat would be sitting" }, { "start": 827.76, "end": 833.04, "text": " down here right it's about one third from the top one third from the bottom" }, { "start": 833.04, "end": 844.2, "text": " let me draw this more. Alright so what this axis represents here is basically" }, { "start": 844.2, "end": 851.64, "text": " what we've done is we've gone from X to X hat in very small steps and at each" }, { "start": 851.64, "end": 859.76, "text": " step we've asked the classifier hey classifier what's the probability of the" }, { "start": 859.76, "end": 869.4, "text": " class Y so Y is the class of X right and the class of X X hat is some some" }, { "start": 869.4, "end": 873.64, "text": " different what some some other class right since it's an adversarial example" }, { "start": 873.64, "end": 880.8, "text": " we've so we've asked the classifier what's the class of X and basically no" }, { "start": 880.8, "end": 885.9599999999999, "text": " basically we've asked what what's the probability that Y is the class of X and" }, { "start": 885.9599999999999, "end": 892.28, "text": " that's represented in white right so the more white the higher the classifier" }, { "start": 892.28, "end": 902.12, "text": " thinks the class Y is probable. 
So the direction down is going into the" }, { "start": 902.12, "end": 912.68, "text": " direction of X hat minus X so it's going into the into the adversarial direction" }, { "start": 912.68, "end": 921.04, "text": " and then the direction across is we've taken some direction that's orthogonal" }, { "start": 921.04, "end": 927.04, "text": " to this direction and then also went into tiny steps and asked the classifier" }, { "start": 927.04, "end": 931.72, "text": " hey classifier what do you think is the probability of Y here so we've basically" }, { "start": 931.72, "end": 938.0400000000001, "text": " done this kind of grid sampling and at each point we've asked the classifier" }, { "start": 938.0400000000001, "end": 943.8000000000001, "text": " what do you think which how probable is Y and the classifier will always output" }, { "start": 943.8000000000001, "end": 949.72, "text": " some number and we plot it in white and so this direction again is always the" }, { "start": 949.72, "end": 955.4, "text": " same in this direction we basically randomize it and then aggregate it over" }, { "start": 955.4, "end": 961.44, "text": " over lots and lots of different samples and we also aggregate this entire thing" }, { "start": 961.44, "end": 968.1600000000001, "text": " over the entire over the data set so we get a comprehensive view of what" }, { "start": 968.1600000000001, "end": 973.32, "text": " adversarial examples look like in the view of the classifier and what we find" }, { "start": 973.32, "end": 981.8000000000001, "text": " is pretty interesting so when you go from the original class you basically in" }, { "start": 981.8000000000001, "end": 989.32, "text": " every direction here in every direction it kind of the original class kind of" }, { "start": 989.32, "end": 994.8000000000001, "text": " decreases smoothly right you see at the edges here it kind of gets black so the" }, { "start": 994.8000000000001, "end": 1001.2, "text": " further away you go from the original example that the more the kind of" }, { "start": 1001.2, "end": 1005.6800000000001, "text": " shadier the classifier gets it's like yeah I'm not so sure anymore that this" }, { "start": 1005.6800000000001, "end": 1011.5200000000001, "text": " is the class right but if you go into the direction of here if you go into" }, { "start": 1011.5200000000001, "end": 1017.9200000000001, "text": " the direction of the adversarial example the kind of drop-off is first of all" }, { "start": 1017.92, "end": 1024.08, "text": " it's very steep so all of a sudden here you're in very dark territory which" }, { "start": 1024.08, "end": 1031.56, "text": " means the classifier is doesn't think why is probable at all anymore and" }, { "start": 1031.56, "end": 1039.92, "text": " moreover you get this kind of cone here so what we see is what we what we think" }, { "start": 1039.92, "end": 1046.6, "text": " is happening is that given an example there are these directions in late in in" }, { "start": 1046.6, "end": 1053.56, "text": " in image space basically straight directions that go to adversarial" }, { "start": 1053.56, "end": 1059.1999999999998, "text": " examples right and we call these cones because they they're kind of low" }, { "start": 1059.1999999999998, "end": 1066.1599999999999, "text": " dimensional directions in in the space where the adversarial example lies and" }, { "start": 1066.16, "end": 1078.72, "text": " what's really interesting is we have those plots here do we have more so" }, { "start": 1078.72, "end": 1096.6000000000001, 
"text": " what's what's quite interesting is that if you if you go come on well this is" }, { "start": 1096.6000000000001, "end": 1104.92, "text": " kind of okay the quality of the of the plot is not is not very very good so I'm" }, { "start": 1104.92, "end": 1114.52, "text": " gonna I may be able to to draw this here so if your start here and you go here" }, { "start": 1114.52, "end": 1124.1200000000001, "text": " what happens to the original class is you start out high you go down rapidly" }, { "start": 1124.1200000000001, "end": 1132.8000000000002, "text": " and you stay down even if you go super far into this direction the this class" }, { "start": 1132.8, "end": 1141.9199999999998, "text": " will stay down whereas let's say this is y hat y hat will start low go up and" }, { "start": 1141.9199999999998, "end": 1150.8799999999999, "text": " then kind of fade so here is where the adversarial example would sit sorry at" }, { "start": 1150.8799999999999, "end": 1160, "text": " about this distance that's this distance here means as you go towards the" }, { "start": 1160, "end": 1166.12, "text": " adversarial example right here the probability of the adversarial class" }, { "start": 1166.12, "end": 1170.8, "text": " rises and the probability of the original class drops then as you go" }, { "start": 1170.8, "end": 1175.28, "text": " further this is what's what's interesting kind of this probability here" }, { "start": 1175.28, "end": 1179.28, "text": " drops which means the classifier is kind of like yeah okay there's too much noise" }, { "start": 1179.28, "end": 1183.56, "text": " now I'm not so sure about this class anymore but the this this class here" }, { "start": 1183.56, "end": 1189.56, "text": " kind of stays low very very long even if you go into this direction so this this" }, { "start": 1189.56, "end": 1194.1599999999999, "text": " gives us kind of a hint that adversarial examples are characterized by specific" }, { "start": 1194.1599999999999, "end": 1202.28, "text": " directions that you go into that you that you can go into and kind of" }, { "start": 1202.28, "end": 1207.96, "text": " suppress the original class and pump the new class up which is kind of exactly" }, { "start": 1207.96, "end": 1217.6799999999998, "text": " what we've claimed with this inner inner product alignment right that the next" }, { "start": 1217.68, "end": 1223.96, "text": " experiment we've done is we've taken this adversarial example here and said" }, { "start": 1223.96, "end": 1231.6000000000001, "text": " well if we go outside if we go into random directions right it's just really" }, { "start": 1231.6000000000001, "end": 1235.96, "text": " this one direction that's problematic if we go into random directions actually we" }, { "start": 1235.96, "end": 1239.48, "text": " should be you know go back to the original class right since it's" }, { "start": 1239.48, "end": 1243.6000000000001, "text": " basically surrounded by the original class this is just one direction and this" }, { "start": 1243.6000000000001, "end": 1247.5600000000002, "text": " here represents all the other directions there are and how many directions are" }, { "start": 1247.56, "end": 1252.76, "text": " there in in pixel space like a lot so we should be able to get back to the" }, { "start": 1252.76, "end": 1258.32, "text": " original class but that's not the case that's we found that's not the case and" }, { "start": 1258.32, "end": 1267.08, "text": " we also found why so I still want to go back to this plot here if you do this if" }, 
{ "start": 1267.08, "end": 1274.3999999999999, "text": " you add noise and this is the noise magnitude here what you'll see is the" }, { "start": 1274.4, "end": 1281.3600000000001, "text": " orange here is the adversarial class so orange will go down down down down down" }, { "start": 1281.3600000000001, "end": 1289.76, "text": " right as you increase the noise the blue is the source class so the blue goes up" }, { "start": 1289.76, "end": 1295.88, "text": " and it goes up faster you see it goes up faster than the green which is the" }, { "start": 1295.88, "end": 1299.72, "text": " highest other class so green is whatever class is not that was there not the" }, { "start": 1299.72, "end": 1305.92, "text": " source but the highest class other than that so the source class goes up quickly" }, { "start": 1305.92, "end": 1312.3600000000001, "text": " but before the source class can overpass the adversarial class which happens back" }, { "start": 1312.3600000000001, "end": 1317.16, "text": " there the highest other class has already kind of taken over so the source" }, { "start": 1317.16, "end": 1323.3600000000001, "text": " class is basically too weak and if you again look at this this plot here if you" }, { "start": 1323.3600000000001, "end": 1329.68, "text": " go with an actual color picker you see that the amount of white here and here" }, { "start": 1329.68, "end": 1338.64, "text": " is is not high enough it's like 0.3 or something out of one or even lower so" }, { "start": 1338.64, "end": 1343.72, "text": " the the kind of source class is not strong enough that by simply adding a" }, { "start": 1343.72, "end": 1354.16, "text": " bit of noise you can go back but we thought hey if this is correct we can" }, { "start": 1354.16, "end": 1360.76, "text": " actually detect we can detect this effect here this rising of the source" }, { "start": 1360.76, "end": 1368.64, "text": " class faster so our plan is basically we add noise a particular amount of noise" }, { "start": 1368.64, "end": 1375.52, "text": " just a little bit actually and then we detect which basically which class falls" }, { "start": 1375.52, "end": 1381.44, "text": " and which class rises and the way we do this is we we detect the this exact" }, { "start": 1381.44, "end": 1391.6000000000001, "text": " alignment that I've described before under noise so we form this quantity" }, { "start": 1391.6000000000001, "end": 1399.24, "text": " here for all classes other than y so y is the the class that's currently" }, { "start": 1399.24, "end": 1408.72, "text": " predicted and we look at it what happens under it under noise right so and that's" }, { "start": 1408.72, "end": 1418.48, "text": " where we get to this graphic here so again this axis is the adversarial class" }, { "start": 1418.48, "end": 1424.44, "text": " or the class that's currently predicted right this axis here is all the other" }, { "start": 1424.44, "end": 1430.68, "text": " classes for each plot one and when we add noise what do you see is the noise" }, { "start": 1430.68, "end": 1435.28, "text": " magnitude is encoded in the brightness of the dots so the darker the red dots" }, { "start": 1435.28, "end": 1442.36, "text": " the more noise we've added here is the original adversarial sample then as we" }, { "start": 1442.36, "end": 1450.2, "text": " add noise you see here here more noise more noise more noise it nothing's" }, { "start": 1450.2, "end": 1457.2, "text": " really happening for the for the if if if it's like one class that has nothing" }, { "start": 1457.2, 
"end": 1463.16, "text": " to do with the original class it simply kind of goes down simply kind of gets" }, { "start": 1463.16, "end": 1470.4, "text": " less sure about this class right but in case of the original class that the" }, { "start": 1470.4, "end": 1477.2, "text": " adversarial example was derived from it really rises it really kind of at the" }, { "start": 1477.2, "end": 1482.3600000000001, "text": " same time that it drops it rises into that direction so we're able to measure" }, { "start": 1482.3600000000001, "end": 1489.4, "text": " these these deltas here under noise and we're able to to devise basically" }, { "start": 1489.4, "end": 1496.24, "text": " statistics of what happens to these quantities under like if it's not an" }, { "start": 1496.24, "end": 1499.6000000000001, "text": " adversarial sample versus what happens to these quantities if it's an adversarial" }, { "start": 1499.6000000000001, "end": 1504.3200000000002, "text": " sample so here you see pairings of basically source class and adversarial" }, { "start": 1504.3200000000002, "end": 1508.8000000000002, "text": " class samples so each of these histograms is collected from that and" }, { "start": 1508.8000000000002, "end": 1517.68, "text": " what you can see is in blue the kind of alignment under noise of the source" }, { "start": 1517.68, "end": 1524.92, "text": " class sorry the alignments under noise of a non perturbed sample and in orange" }, { "start": 1524.92, "end": 1530.3200000000002, "text": " the alignments under noise of an adversarial sample and what's cool is" }, { "start": 1530.3200000000002, "end": 1536.52, "text": " that these these alignments you can see in all of these cases are very different" }, { "start": 1536.52, "end": 1541.0800000000002, "text": " so there is a clear signature in the adversarial sample in these noise" }, { "start": 1541.08, "end": 1549.56, "text": " induced alignments with the with the weight matrix rows that makes you able" }, { "start": 1549.56, "end": 1555, "text": " to basically build a detector you can say all right anything to the left is" }, { "start": 1555, "end": 1559.6, "text": " clean anything to the right is adversarial and we can do this over many" }, { "start": 1559.6, "end": 1565.6399999999999, "text": " different types of noises and then build basically a voting mechanism on that and" }, { "start": 1565.64, "end": 1571.88, "text": " thereby detect adversarial examples so we have a bunch of experiments we mostly" }, { "start": 1571.88, "end": 1584.5200000000002, "text": " experiment on the c410 and on the image net data set and you can see over here" }, { "start": 1584.5200000000002, "end": 1591, "text": " so this is the main kind of one of the main results the detection rates of our" }, { "start": 1591, "end": 1597.76, "text": " statistical test so as you can see we are detection rate this is on clean" }, { "start": 1597.76, "end": 1601.6, "text": " samples on clean samples you want the detection rate to be low on adversarial" }, { "start": 1601.6, "end": 1607.96, "text": " samples you want the detection rate to be high and this we achieve very large" }, { "start": 1607.96, "end": 1616.44, "text": " detection rates while having very low false positive rates especially on image" }, { "start": 1616.44, "end": 1621.3600000000001, "text": " net so it seems like the more tuned these models are the better these models" }, { "start": 1621.3600000000001, "end": 1625.64, "text": " are the better we are at detecting adversarial examples to it it's kind of" }, { 
"start": 1625.64, "end": 1632.92, "text": " a direct correlation to how well the models perform on accuracy in a clean" }, { "start": 1632.92, "end": 1640.0800000000002, "text": " setting and what we can do is now since we cannot only detect these things but" }, { "start": 1640.08, "end": 1646.9199999999998, "text": " we can detect these things in a fashion so if if you look at these things and" }, { "start": 1646.9199999999998, "end": 1652.84, "text": " you have like a sample of a particular class that's predicted right let's say" }, { "start": 1652.84, "end": 1657.36, "text": " this class and you go and look at it at the position of the noise induced" }, { "start": 1657.36, "end": 1665.6, "text": " features over each of them so let's say here here here here here here here here" }, { "start": 1665.6, "end": 1672.4399999999998, "text": " here right you can then clearly say well not only do I detect an adversarial" }, { "start": 1672.4399999999998, "end": 1678.56, "text": " example here right I look at the I look at each of the class of the classes that" }, { "start": 1678.56, "end": 1685.76, "text": " it could be derived from right if all if all of them say it's a clean sample then" }, { "start": 1685.76, "end": 1689.7199999999998, "text": " all right it's a clean sample but if one of them says it's an adversarial sample" }, { "start": 1689.7199999999998, "end": 1694.84, "text": " then I don't not only do I know it's an adversarial sample but I say aha this" }, { "start": 1694.84, "end": 1701.56, "text": " must be the source class right this is the exact effect we saw here all right" }, { "start": 1701.56, "end": 1711.6399999999999, "text": " we can if we detect this pattern here we can also back deduce basically aha so" }, { "start": 1711.6399999999999, "end": 1718.8, "text": " this must be the original class that the adversarial example was derived from so" }, { "start": 1718.8, "end": 1723.24, "text": " we're basically able to build a not only a detector but we're basically able to" }, { "start": 1723.24, "end": 1728.6, "text": " reconstruct the original class and here you see for these models let's say on" }, { "start": 1728.6, "end": 1733.64, "text": " CIFAR-10 we imagine that is a bit too large as of yet for our compute but" }, { "start": 1733.64, "end": 1739.28, "text": " on these models that have clean accuracies that are pretty high on CIFAR-10" }, { "start": 1739.28, "end": 1745.32, "text": " plus this this kind of toy network here we're able to reconstruct the original" }, { "start": 1745.32, "end": 1751.76, "text": " class so basically this is defense against adversarial examples by by" }, { "start": 1751.76, "end": 1757.8, "text": " getting to almost clean accuracy back so this is a really surprising actually and" }, { "start": 1757.8, "end": 1767.84, "text": " kind of nice so we we do a bunch of other experiments including we defend" }, { "start": 1767.84, "end": 1774.4, "text": " against an attacker that's actually aware of this thing but the main the" }, { "start": 1774.4, "end": 1779.8799999999999, "text": " main point here is we don't say this is kind of the end-all method of defending" }, { "start": 1779.88, "end": 1783.8400000000001, "text": " against adversarial examples we simply want to kind of encourage the way of" }, { "start": 1783.8400000000001, "end": 1790.0800000000002, "text": " thinking of of these kind of noise what what if you what if you noise induce" }, { "start": 1790.0800000000002, "end": 1797.16, "text": " perturbations how does your network react to 
that can you can you detect" }, { "start": 1797.16, "end": 1804.3600000000001, "text": " these effects here can you detect effects like this and are these" }, { "start": 1804.3600000000001, "end": 1809.24, "text": " unavoidable or are there architectures are there architectures we can basically" }, { "start": 1809.24, "end": 1814.84, "text": " build such that adversarial examples have no chance except doing something" }, { "start": 1814.84, "end": 1819.84, "text": " like this which we can then easily detect all right so that was a bit of an" }, { "start": 1819.84, "end": 1840, "text": " introduction if you like it check out the entire paper and goodbye" } ]
EbHUU-gLyRA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Self-classifying MNIST Digits (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "biology", "biological", "alive", "living", "message passing", "global state", "local state", "information", "cellular automata", "neural cellular automata", "neural ca", "convolution", "recurrent", "rnn", "pixels", "cell state", "latent state", "distill", "distill pub", "mnist", "neural network", "digit classification" ]
#ai #biology #machinelearning Neural Cellular Automata are models for how living creatures can use local message passing to reach global consensus without a central authority. This paper teaches pixels of an image to communicate with each other and figure out as a group which digit they represent. On the way, the authors have to deal with pesky side-effects that come from applying the Cross-Entropy Loss in combination with a Softmax layer, but ultimately achieve a self-sustaining, stable and continuous algorithm that models living systems. OUTLINE: 0:00 - Intro & Overview 3:10 - Neural Cellular Automata 7:30 - Global Agreement via Message-Passing 11:05 - Neural CAs as Recurrent Convolutions 14:30 - Training Continuously Alive Systems 17:30 - Problems with Cross-Entropy 26:10 - Out-of-Distribution Robustness 27:10 - Chimeric Digits 27:45 - Visualizing Latent State Dimensions 29:05 - Conclusion & Comments Paper: https://distill.pub/2020/selforg/mnist/ My Video on Neural CAs: https://youtu.be/9Kec_7WFyp0 Abstract: Growing Neural Cellular Automata [1] demonstrated how simple cellular automata (CAs) can learn to self-organise into complex shapes while being resistant to perturbations. Such a computational model approximates a solution to an open question in biology, namely, how do cells cooperate to create a complex multicellular anatomy and work to regenerate it upon damage? The model parameterizing the cells’ rules is parameter-efficient, end-to-end differentiable, and illustrates a new approach to modeling the regulation of anatomical homeostasis. In this work, we use a version of this model to show how CAs can be applied to a common task in machine learning: classification. We pose the question: can CAs use local message passing to achieve global agreement on what digit they compose? Authors: Ettore Randazzo, Alexander Mordvintsev, Eyvind Niklasson, Michael Levin, Sam Greydanus Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Check this out. What you're seeing here are neural cellular automata that have learned to communicate with each other about what digit they compose. Every pixel you see is a little cell, and it communicates with its neighbors, and only its immediate neighbors, about its surroundings. By doing that, all the cells that form a connected component have to agree on what digit they compose. Here you can see the seven, symbolized by gray, and the three, symbolized by green, reach an agreement. There are some interesting properties to these cellular automata. For instance, here you can see that half of the shape thinks it's a two and the rest thinks it's a zero. However, let's see what happens when I complete this. No, it's too smart for that. Well, look at that: now it thinks it's an eight. So you can clearly see there's some message passing, some evolution going on across the states. It doesn't work perfectly; I found it thinks a lot of the time that it is, in fact, a zero, as you can see right here. But the goal of this direction of research isn't state of the art in digit classification, as you might be able to tell. It's about neural cellular automata, and I highly recommend, if you don't know them yet, to go watch my video or read the previous article in the Distill journal about growing neural cellular automata. This paper is a follow-up. It's called Self-classifying MNIST Digits, and it's by Ettore Randazzo, Alexander Mordvintsev, Eyvind Niklasson, Michael Levin and Sam Greydanus. It's an evolution of the previous paper, and I'm going to switch back and forth between the website and the tool where I can scribble, so bear with me. They say that growing neural cellular automata demonstrated how simple cellular automata can learn to self-organize into complex shapes while being resistant to perturbation; that was the last paper. Such a computational model approximates a solution to an open question in biology, namely: how do cells cooperate to create a complex multicellular anatomy and work to regenerate it upon damage? Also from the last paper. The model parametrizing the cells' rules is parameter-efficient, end-to-end differentiable, and illustrates a new approach to modeling the regulation of anatomical homeostasis. In this work, they use a version of this model to show how cellular automata can be applied to a common task in machine learning: classification. They pose the question: can cellular automata use local message passing to achieve global agreement on what digit they compose? So that's the question right here. Now, again, I've done a video on cellular automata, but really briefly: what you saw above is an image, rasterized into pixels, and each pixel represents one cell. So you can think of this as nodes in a graph, where each cell is connected to its immediate neighbors, and each of those cells is in turn connected to its own immediate neighbors. Now, all a cell knows is what is painted on it. Say I draw something on this canvas, say a two, and then look at one particular cell: it's either going to be on or off, either I painted on it or I didn't.
And it can be in different variations; there is an alpha level. But ultimately, each cell can only register whatever was painted on it. So each cell can be dead or alive, and dead cells will not send around any messages. Dead cells are everywhere where there is no color at all. So this would be a dead cell, this would be a dead cell, this one wouldn't be a dead cell because there is a little bit of color, and this would be a dead cell right here. So you can see that most cells here are actually dead. Now, the cells that aren't dead register whatever is painted on them, like this cell or this cell or this cell, and then they need to communicate that to each other. And the goal is that all these cells that are alive, like these cells right here, pass messages to each other such that they all come to an agreement about what digit they compose. And if you imagine you're this cell right here, all you see is that there is a bit of purple on you. Right, there is a bit of purple, and it could be alpha level 200 out of 255. And only by registering this, communicating it to your neighbors, receiving messages, and then passing those messages on to other neighbors do all of these cells come to an agreement. So how do these cells agree? Each cell, in fact, has a cell state. And that cell state, first and foremost, is composed of 10 different slots, one for each class. So what does it mean to agree on something at the end of this procedure, or over time? Each cell, in each round of communication, can update its own cell state. And whatever entry is highest (this could be a high number, this could be a low number; think of it as a sideways histogram), that's what the cell believes the class is. So you immediately see a bit how this is going to be trained. The authors take an MNIST digit, place it on the cells, and let this procedure run. The procedure is differentiable, so you let it run for a number of time steps, and in each time step, you impose a cross-entropy classification loss on these 10 entries in the cell state. That way you train the cells to output the correct digit. Now, each cell has to do that by itself. So the goal is to devise a communication algorithm such that each cell communicates with each other cell, such that at the end, all the cells will be updated as to what the global state is, as to what digit they comprise.
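To make that concrete, here is a minimal sketch of what such a cell state and the per-step loss could look like. This is my own reconstruction, not the authors' code: the number of hidden channels, the grid size and the alive-mask handling are assumptions.

```python
import torch
import torch.nn.functional as F

N_CLASSES = 10            # one slot per digit class
N_HIDDEN = 9              # extra latent channels (a free choice, assumed here)
N_CHANNELS = N_CLASSES + N_HIDDEN

# A grid of cells: (batch, channels, height, width). The first 10 channels
# of every cell are its class beliefs; the rest is unconstrained latent state.
state = torch.zeros(1, N_CHANNELS, 28, 28)

def cell_beliefs(state):
    # Each cell's current guess is the argmax over its 10 class entries.
    return state[:, :N_CLASSES].argmax(dim=1)          # (batch, H, W)

def per_step_loss(state, label, alive):
    # Cross-entropy on the class channels of every alive cell,
    # imposed at every time step of the rollout.
    logits = state[:, :N_CLASSES]                      # (batch, 10, H, W)
    target = label.view(-1, 1, 1).expand(-1, 28, 28)   # same label for all cells
    loss = F.cross_entropy(logits, target, reduction="none")
    return (loss * alive).sum() / alive.sum().clamp(min=1)
```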
So what is this message passing right here? For that, I think we first need to imagine what is actually passed around. So if you see this sample above and you imagine, let's say, that we are actually in this configuration on the left, there is a slight bend. Let's say here we're in this part of the number two; there's a slight bend right here. So what you can see, maybe let me draw this a bit more clearly, is that, for example, the blue cell can, by message passing, register that there is an alive cell right here. But this alive cell will also register that there is a dead cell next to it, so it can pass that message on to the blue cell, and the blue cell will sort of know that there is kind of a border over there. Then also, diagonally to the blue cell, a cell will register: wow, there is a dead cell right here, and that's right below this alive cell above. So there must be some kind of a bend right here. And this cell right here, of course, will register that its neighbor is also dead. You can already see how, through this sort of message passing, these cells can figure out the more global shapes together, and they will recognize: ah, there is a bend, it's something like this. Right. And then other cells, maybe down here, will figure out, well, there is actually a corner right here. And other cells on top here will figure out, well, there is actually a bend like this. And then they can communicate this to each other. So these cells right here that have the corner will at some point receive the integrated message that there is a bend on top. And then they can make sense of that and say, well, we are a corner and there is a bend on top, so there must be a digit that's something like this. And you can already see that at that point, they can be fairly sure that this is a two. So the combination of message passing and each cell thinking by itself can give rise to each cell coming into global agreement, and not only agreement, but correct agreement. So the message passing itself, again, is described in the last paper, but really briefly: there are these 10 entries right here that decide what the cell believes the state is, and then you can have extra entries that are just latent state. There is no loss imposed on these latent variables, but ultimately the cell state consists of this long vector. And then this vector is passed on to all the neighbors, and all the neighbors send their own state vectors to this cell. Now, the state vectors of all the neighbor cells are integrated together with the cell's own state (there is like a small neural network in between), and that will update the cell state. In fact, I think they calculate a diff to the cell state: they don't calculate the new cell state directly, they actually calculate a diff. And this should remind you of something. So if we just look at this one-dimensionally: here's the cell, and there are its neighbors and the diagonal neighbors. And we want to update this cell right here as a combination of all the cells surrounding it and itself. And we want to do that for each cell, and each cell has the same update rule. So it doesn't matter where the cell is; you're trying to come up with one rule for how to integrate the surrounding states into the cell itself. The biological reasoning behind it is that all the cells follow the same rules, but by virtue of where they are and how they communicate, these global patterns can arise. And this cell will update, and then if we consider the next cell next to it, it has its own neighbors and will update according to them. This should remind you of a convolution, because this is exactly a convolution. So there will be a convolutional operator, a 3x3 convolutional operator right here. This can be multi-channel, of course, because we have multiple channels right here in the cell state. So the convolution will be learned once, globally, which is exactly what a convolutional operator is, a convolutional kernel. It will be learned to update these cell states.
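Continuing the sketch from above, one round of message passing can then be written as a single convolution over the state grid. Again, this is a reconstruction; the authors' actual update rule may be deeper than a single layer.

```python
import torch.nn as nn

# Every cell reads its 3x3 neighborhood across all channels and computes
# an update ("diff") for its own state vector. Because the same weights
# apply at every grid position, this is exactly a 3x3 convolution.
update_rule = nn.Conv2d(N_CHANNELS, N_CHANNELS, kernel_size=3, padding=1)

delta = update_rule(state)   # one round of message passing, all cells at once
```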
In fact, it's a residual convolutional connection: the state goes through the convolutional kernel and is then added to the signal itself to give rise to the new cell states. So one convolution across the entire image takes care of updating all the cells; it's one round of message passing. And then, contrary to a convolutional neural network, where the signal would go into the next layer, into the next convolutional kernel, here the procedure is repeated with the same convolutional kernel. The message passing algorithm is the same in each round. So this is a recurrent neural network with a residual convolution as its operator. That is the model for the biological cell communication algorithm; these are the neural cellular automata. The difference to the last paper is twofold. First of all, in the last paper we had RGB values up here; now it's the class labels. So these are also passed around, so that the cell passes to its neighbors what it believes the current labels are, but also these hidden features right here. And we'll come to this in a second. The second difference is that the dead and alive cells are static. So where the dead cells and where the alive cells are never changes. That used to change in the last paper; here it never changes. It's only about passing the messages around between the cells. All right, so this is basically it. This is a model for agreement between cells. I think it's pretty cool. I would still like to go more into what exactly happens, what kind of messages are passed around, but they do this a little bit. So they have a bunch of experiments. How do they train this stuff? Basically, how do they train it such that I can change the digit in between and it will actually update live? The cells can't only do this once. The cells must have a notion of continuously being alive, continuously updating themselves, continuously being prepared that there is some sort of a modification to the cell. So here you can see, let me zoom in a bit, this is how they train it. They just initialize the cell states randomly; that's why you see just random colors right here. These are MNIST digits. And then they train these cells, all of them, to predict the label of the MNIST digits, which they have in the training set. And you can see, once you've trained it, that happens fairly quickly. And then, after 200 steps, they simply switch out the digit. They leave all the cells as they are. Of course, some cells will be dead now and some cells will be alive; the ones that come alive will just be initialized randomly. But there are always going to be cells that are present in both digits, and those will just keep their label. And, you know, the digit here changes with a 90 percent probability. And since this is one long run of a recurrent network, that network has to always be prepared for a change, because it's trained with this mutation. So it's trained for 200 steps on the first digit, and then it's switched and trained for 200 steps with the second label. That causes these cells to always be ready for change. So you can see there are still some artifacts where the cells are not quite sure, and so on. And in fact, they get worse over time.
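Putting the pieces together, the training procedure as described might look roughly like this. The step counts follow the talk; the learning rate, the ink threshold for alive cells, and the variable names (img_a, img_b, standing for two MNIST images) are mine, and I leave out the re-randomization of newly alive cells for brevity.

```python
import torch

optimizer = torch.optim.Adam(update_rule.parameters(), lr=2e-3)  # lr assumed

def alive_mask(img):
    # A cell is alive wherever the digit has ink; 0.1 is an assumed threshold.
    return (img > 0.1).float()                    # (batch, H, W)

def run(img, label, state, n_steps=200):
    alive = alive_mask(img)
    loss = 0.0
    for _ in range(n_steps):
        state = state + update_rule(state)        # residual recurrent update
        state = state * alive.unsqueeze(1)        # dead cells never participate
        loss = loss + per_step_loss(state, label, alive)
    return state, loss / n_steps

# One training iteration: classify digit A for 200 steps, then mutate to
# digit B while keeping the state of every cell that survives the swap.
state = torch.randn(1, N_CHANNELS, 28, 28)        # fresh cells start random
state, loss_a = run(img_a, label_a, state)
state, loss_b = run(img_b, label_b, state)        # surviving cells keep state
optimizer.zero_grad()
(loss_a + loss_b).backward()
optimizer.step()
```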
Now, about that degradation: if you pay real close attention towards the end of these cycles, it actually gets worse. After a while, some of the cells will start flickering up again. That's a problem they've observed, and they go into it right here. So they have these graphs of accuracy over time. Accuracy here means average cell accuracy: they just take all the cells and see how many of them are correct. This is inference, which of course also unrolls over time: you provide a digit, you initialize randomly, and then you let these cells communicate. So you run the recurrent convolutional algorithm and count how many cells output the correct label at each step. And it pretty quickly reaches a high accuracy. Then you can see that at the mutation it drops down to random again, but also pretty quickly recovers. So that sounds pretty good. But you can see a teeny tiny bit right here that it's kind of going down over time. And so they determine they need to do something about this. First of all, they want to figure out what exactly is happening. So here they have average cell accuracy, but what they also decide to measure is average total agreement across the batch. Average total agreement basically means: how many of the cells within a digit agree with each other on the label? Which is sort of a measure of consistency. If this is really an MNIST digit, it should be perfectly in one class and not the others (I know there's some ambiguity). So what you should have at least, even if the cells are wrong, is total agreement among the cells. If this is in fact a digit, the cells should somehow agree with each other, because that's what you train them to do: agree with each other. And you can see here as well: pretty quickly you have an agreement after a number of steps, and then that agreement drops again, strangely. Because they've already reached an agreement, you might think this would level off, or maybe even slightly go up. But no, it actually slightly goes down over time. So why is that? They also analyze this here, and I'm sorry about this chopped-up graph, but you can see: here are the state values, the real numerical sizes of these entries in the states, and you can see that they grow over time. So not only do they grow until the agreement is reached, they keep growing after that. And here are the diffs from state to state, and you can also see that these never go to zero. So why is that? They have a hypothesis right here: this is due to the cross-entropy loss.
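Since the argument that follows hinges on how the loss and the softmax interact, here are the standard definitions for reference (these are textbook formulas, not quoted from the article). With logits $x$, softmax output $p$, and one-hot label $y$ whose correct class is $c$:

$$p_i = \frac{e^{x_i}}{\sum_j e^{x_j}}, \qquad L = -\sum_i y_i \log p_i = -\log p_c, \qquad \frac{\partial L}{\partial x_i} = p_i - y_i.$$

Because every $p_i$ is strictly positive, this gradient is never exactly zero: every step keeps pushing the correct logit up and all the others down.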
Now, the cross-entropy loss is kind of the most famous loss for classification. Usually your neural network will output some distribution like this; let's say it's three classes, and it believes that class number two is the correct class. And then you have a label, which you transform into a one-hot distribution, where this entry is one and these are zero. And then you compute the cross-entropy loss between the two, saying that the left thing should be more equal to the right thing. That's the entropy-style formulation, but what you actually compute is this y log p, where y is the one-hot label distribution and p is the distribution that the network outputs. You can pretty clearly see that y is going to be zero for all the classes that are wrong, so the entire loss reduces to simply the negative log probability of the class that is correct. So what you want to do is push that up. Now, just looking at the loss, only the correct class is pushed up; nothing else is done. But you also know that most of the time we combine this with a so-called softmax operator. So what our network outputs isn't actually a distribution; it's what we call logits, an unnormalized distribution. What it actually outputs could be something like a high number, a negative number and a negative number, and only by normalization do we reach a distribution. The softmax operator takes care of that normalizing. And because of the normalization, when we backpropagate this loss, it causes this logit to rise and causes these other ones to lower, not actually because of the loss itself but because of the normalization step. So I think they correctly call it the cross-entropy loss, but it is really the cross-entropy loss combined with the softmax operator that we usually use in neural networks that makes this phenomenon happen. So what is actually happening here? If you look at the softmax operator, it computes e to the x_i divided by the sum of e to the x_j over all classes, and you can fairly easily see that this exponential function is never, ever going to be zero. So you can never have a zero entry right here. The loss forces you to push this thing up, but because you can never have zero entries there, this can never be exactly one, so you can never actually reach perfect loss. And what does it do to the logits? You cannot reach perfect loss, but the gradient will always push you in the direction of raising the correct logit and lowering the others, actually into the negative direction for the ones that aren't correct. If we do this once, no problem. If we do this in a single neural network, forward propagate, calculate loss, not a problem. But if we do this over and over and over again in this recurrent convolutional network and let it run for infinite time, what is going to happen is that these things explode more and more. The logits are going to get bigger and bigger, which makes the entire rest of the network operate at bigger and bigger magnitudes. And that is exactly what you see here: the numerical values in the states get bigger and bigger, because training pushes the network in the direction of reducing the loss further and further, thereby raising the logits. And it's very disproportionate: at the end, you have to raise the logits by a lot to reduce the loss a little bit. But the network doesn't care, because that's what it was trained to do. So they hypothesize: if we use an L2 loss, this shouldn't happen. With an L2 loss, you don't output logits, you output actual probabilities, and you simply compare the L2 distance to the target. So if you compare the L2 distance right here, yes, you will push this one up. But if you push it too high, then it's too high, and it will be pushed down again until it sits exactly at the target value.
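A tiny toy experiment makes this difference visible. This is a hypothetical three-class setup of my own, not an experiment from the article, just to illustrate the two dynamics:

```python
import torch
import torch.nn.functional as F

# Softmax + cross-entropy: the gradient p - y is never exactly zero,
# so plain gradient descent keeps inflating the logits forever.
logits = torch.zeros(3, requires_grad=True)
target = torch.tensor([0])                     # class 0 is "correct"
for _ in range(10_000):
    loss = F.cross_entropy(logits.unsqueeze(0), target)
    loss.backward()
    with torch.no_grad():
        logits -= 0.1 * logits.grad
        logits.grad.zero_()
print(logits)   # around [5.3, -2.7, -2.7] here, and still slowly growing

# L2 on raw probabilities: the update vanishes once the target is reached.
probs = torch.full((3,), 1 / 3, requires_grad=True)
onehot = torch.tensor([1.0, 0.0, 0.0])
for _ in range(10_000):
    loss = ((probs - onehot) ** 2).sum()
    loss.backward()
    with torch.no_grad():
        probs -= 0.1 * probs.grad
        probs.grad.zero_()
print(probs)    # converges to [1, 0, 0]; further updates are (near) zero
```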
Now, the disadvantage is that, of course, the output isn't actually forced to be a valid probability distribution. You can normalize it, yes, but you can also go too high: you can output probabilities higher than one, and so on. So there's a whole slew of problems that come with this, but you can counter them. Besides using an L2 loss, they have another idea on top: they always add noise to these residual updates that they do after the convolution, just to keep the network on its toes, saying that everything can always change with noise. So in each step, it basically has to do some correction with respect to that noise. And here you can see the clear difference, especially in the lower plot, where before (the blue line) the total agreement went down over time, and now, with the L2 loss, and a little bit more with this residual noise, it manages to keep the total agreement up, solving that problem. And you can also see that the average magnitude of the updates is no longer rising over time; the cell states keep the same magnitude and the updates converge towards zero. Of course not quite as much with the noise, because the noise keeps the updates non-zero, but still they stay at the same magnitude, and the network manages to correct for the noise rather than incorporating more and more of it, like with the cross-entropy loss. So, I don't want to go into the last few bits, except this one. These cells have some interesting properties; notably, they're also somewhat resistant to out-of-distribution inputs. You can see that in this video: it classifies fairly solidly as ones, but as soon as you draw a shape that is not in the classes of the training set, the cells keep disagreeing with each other. So you can see this as a sort of robustness to out-of-distribution samples. And it's also pretty interesting to see where the messages go from. You can fairly clearly see that if you draw some kind of shape, the message passing starts at the most characteristic parts of the digits. And here they have what they call chimeric digits; just pay attention to where the messages start. You can clearly see that this sort of local determination of what a digit is will spread out over time to the other cells. And I thought there was this last thing. This thing, yes. So here, not only do they visualize the cell state (the color of the cell, the thing on the left, which is always the first 10 entries of the hidden state), but on the right, they also visualize the other hidden entries. Each entry is represented by a two-color map, where blue is a very low number and red is a very high number. And here you can see what these latent states pass around. You can fairly clearly see that they do pass around these kind of typical sub-shapes of the digit. In the case of a zero, that's going to be a bend; in the case of a four, that's going to be these ends and corners of the number. And you can see that over time, as these messages pass, the cell states on the left, the visible states, the class labels, change as well. This lends a lot of credence to that interpretation. I especially like the six, or the two: if you look at the different latent states, the typical features, the bends, the corners, every latent state is sort of assigned to one of them, and then they pass this information around in order to reach an agreement.
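For completeness, the two stabilizers discussed above (an L2-style per-step loss on raw probabilities, plus noise on the residual updates) could be dropped into the earlier training sketch like this. The noise scale is my assumption, and this continues the names from the previous sketches:

```python
def step_stabilized(state, alive, noise_std=0.02):        # noise scale assumed
    delta = update_rule(state)
    delta = delta + noise_std * torch.randn_like(delta)   # residual noise
    state = state + delta
    return state * alive.unsqueeze(1)

def l2_per_step_loss(state, label, alive):
    # Treat the class channels as probabilities directly and regress them
    # onto the one-hot target, instead of using softmax plus cross-entropy.
    onehot = F.one_hot(label, N_CLASSES).float().view(-1, N_CLASSES, 1, 1)
    sq_err = ((state[:, :N_CLASSES] - onehot) ** 2).sum(dim=1)
    return (sq_err * alive).sum() / alive.sum().clamp(min=1)
```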
So I like this research; it's pretty cool research. I don't want to say it's very useful, but it's certainly very interesting. And I also like the format, this Distill format. I think that's sort of the future of research, rather than eight-page PDFs: you can look at it, it's interactive, you can have a little demo in it, you can write for as long as you want, and it's just overall better. This demo is still going, by the way; it doesn't know what this is. So lastly, as I said, you can clearly see that, look, if I do this, it's a zero. But if I do this, then the stem part will immediately go for a six, because that's indicative of a six, but then it will disagree with the zero part of the digit. In fact, I seem to be unable to write a six. Is that an American six? Maybe. Yeah, so with that, I'll leave this here. I think this is, again, very interesting, these kinds of biological models. And certainly, if you're looking for an exciting research direction, this might be it, and you do not need a lot of resources to do it. This is very parameter-efficient, as we saw in the last paper, and certainly kind of a niche right now. So that was it for me. I hope you enjoyed this. If you liked it, share it out, and bye bye. See you next time.
[ { "start": 0, "end": 2, "text": " Check this out." }, { "start": 6, "end": 16, "text": " So what you're seeing here is neurocellular automata that are learned to communicate with each other what digit they compose." }, { "start": 16, "end": 27, "text": " So every pixel you see is like a little cell and it communicates with its neighbors and only its immediate neighbors about kind of its surroundings." }, { "start": 27, "end": 35, "text": " And by doing that all these cells that are connected components have to agree as to what digits they compose." }, { "start": 35, "end": 43, "text": " And here you can see the seven symbolized by gray and the three symbolized by green reach an agreement." }, { "start": 43, "end": 48, "text": " There are some interesting properties about these cellular automata." }, { "start": 48, "end": 54, "text": " Namely here you can see that half of this thinks it's a two and the rest thinks it's a zero." }, { "start": 54, "end": 60, "text": " However, let's see when I complete this. No, it's too smart for this." }, { "start": 60, "end": 64, "text": " Well, look at that. Now it thinks it's an eight." }, { "start": 64, "end": 72, "text": " So you can clearly see there's like some message passing, some evolution going on across the states right here." }, { "start": 72, "end": 78, "text": " It doesn't work perfectly. I found it thinks a lot of times that it is in fact a zero." }, { "start": 78, "end": 94, "text": " As you can see right here. But so the goal is that this this direction of research isn't about state of the art in digit classification as you might be able to determine right here." }, { "start": 94, "end": 96, "text": " It's about neurocellular automata." }, { "start": 96, "end": 108, "text": " And I highly recommend if you don't know yet, go watch my video or read the previous article in this distil pub journal about growing neurocellular automata." }, { "start": 108, "end": 112, "text": " This paper here is a follow up. It's called self classifying MNIST digits." }, { "start": 112, "end": 124, "text": " And it's by Ettore Randazzo, Alexander Mortwintsev, Edwin Nicholson and sorry, I'd been Nicholson, Michael Levin and Sam Graydenes." }, { "start": 124, "end": 129, "text": " So this paper is an evolution of the previous paper." }, { "start": 129, "end": 135, "text": " And I'm going to switch back and forth here between the website and the thing where I can scribble on." }, { "start": 135, "end": 137, "text": " So bear with me for that." }, { "start": 137, "end": 148, "text": " They're saying that growing neurocellular automata demonstrated how simple cellular automata can learn to self organize into complex shapes while being resistant to perturbation." }, { "start": 148, "end": 150, "text": " So that was the last paper." }, { "start": 150, "end": 163, "text": " Such a computational model approximates a solution to an open question biology, namely how do cells cooperate to create a complex multicellular anatomy and work to regenerate it upon damage." }, { "start": 163, "end": 165, "text": " Also from the last paper." }, { "start": 165, "end": 176, "text": " The model parametrizing the cell rule is parameter efficient and to indifferentiable and illustrates a new approach to modeling the regulation of anatomical homeostasis." }, { "start": 176, "end": 178, "text": " Homeostasis. OK." 
}, { "start": 178, "end": 185, "text": " In this work, we use a version of this model to show how cellular automata can be applied to common task and machine learning classification." }, { "start": 185, "end": 194, "text": " We pose the question, can cellular automata use local message passing to achieve global agreement on what digit they compose?" }, { "start": 194, "end": 196, "text": " So that's the question right here." }, { "start": 196, "end": 202, "text": " Now, again, I've done a video on cellular automata, but really, really briefly." }, { "start": 202, "end": 212, "text": " What you saw above is that there's an image and it's rasterized, of course rasterized in two pixels, and each pixel represents one cell." }, { "start": 212, "end": 221, "text": " So you can think of this as basically nodes in a graph and each cell is connected to its immediate neighbors." }, { "start": 221, "end": 228, "text": " So each cell, let's take this one, is connected to all its immediate neighbors like so." }, { "start": 228, "end": 234, "text": " And of course, each cell, each other cell again is connected to its immediate neighbors." }, { "start": 234, "end": 247, "text": " Now, all they know is basically the so if I draw something on this canvas, let's say I draw, let's take this, I draw a two." }, { "start": 247, "end": 251, "text": " OK, then you look at this cell right here." }, { "start": 251, "end": 259, "text": " And of course, the cell, this is going to be, it's the line would be thicker. So it's either going to be on or off." }, { "start": 259, "end": 263, "text": " It's either going to be I painted on it or I didn't paint on it." }, { "start": 263, "end": 267, "text": " And it can be in different variations, like there is an alpha level." }, { "start": 267, "end": 274, "text": " But ultimately, the each cell can only register whatever was set painted on it." }, { "start": 274, "end": 285, "text": " OK, so each cell can be dead or alive and dead cells, they will not send around any messages and dead cells is everywhere where there is no color at all." }, { "start": 285, "end": 288, "text": " So this would be a dead cell. This would be a dead cell." }, { "start": 288, "end": 293, "text": " This one wouldn't be a dead cell because there is a little bit of color." }, { "start": 293, "end": 295, "text": " This would be a dead cell right here." }, { "start": 295, "end": 299, "text": " So with this, so you can see that most cells here are actually dead." }, { "start": 299, "end": 306, "text": " Now, the cells that aren't dead, they register whatever is painted on them like this cell or this cell or this cell." }, { "start": 306, "end": 310, "text": " And then they need to communicate that to each other." }, { "start": 310, "end": 317, "text": " And the goal is that all these cells that are alive, like these cells right here, all the cells that are alive," }, { "start": 317, "end": 325, "text": " they pass messages to each other such that they all come to an agreement what digit they compose." }, { "start": 325, "end": 332, "text": " And if you imagine you're this cell right here, all you see is that there is a bit of purple on you." }, { "start": 332, "end": 340, "text": " Right. There is a bit of purple and it could be alpha level 200 out of 255." }, { "start": 340, "end": 352, "text": " And only by registering this and communicating this to your neighbors and receiving messages and then passing on those messages to other neighbors, all of these cells need to come to an agreement." 
}, { "start": 352, "end": 357, "text": " So how do these cells agree? Each cell, in fact, has a cell state." }, { "start": 357, "end": 360, "text": " So each of these cells has a cell state." }, { "start": 360, "end": 368, "text": " And that cell state, first and foremost, composes is composed of 10 different slots, one for each class." }, { "start": 368, "end": 374, "text": " So what does it mean to agree on something at the end of this procedure or over time?" }, { "start": 374, "end": 380, "text": " Each cell in each round of communication can update its own cell state." }, { "start": 380, "end": 395, "text": " And whatever number is highest right here, so this could be a high number, this could be a low number, wrong, sideways histograms, whatever one is the highest right here, that's what the cell believes the class is." }, { "start": 395, "end": 400, "text": " So you immediately see a bit how this is going to be trained." }, { "start": 400, "end": 408, "text": " So this is going to be trained by these authors taking an MNIST digit, placing that on the cells, letting this whatever procedure run." }, { "start": 408, "end": 413, "text": " If the procedure is differentiable, you let it run for a number of time steps." }, { "start": 413, "end": 421, "text": " And in each time step, you basically impose a cross entropy classification loss on these 10 entries in the cell state." }, { "start": 421, "end": 426, "text": " That way you train the cells to output the correct digit." }, { "start": 426, "end": 430, "text": " Now, each cell has to do that by itself." }, { "start": 430, "end": 447, "text": " So the goal is to devise a communication algorithm such that each cell communicates with each other cell such that at the end, all the cells will be updated as to what the global state is, as to what the digit comprises." }, { "start": 447, "end": 451, "text": " So what is this message passing right here?" }, { "start": 451, "end": 456, "text": " And for that, I think we need to first of all imagine what is actually passed around here." }, { "start": 456, "end": 466, "text": " So if you see this sample above right here and you imagine, let's say we are actually in this configuration on the left and there is a slight bend." }, { "start": 466, "end": 471, "text": " Let's say here we're in this part of the number two, there's a slight bend right here." }, { "start": 471, "end": 484, "text": " So what you can see, maybe let me draw this a bit more clear, is that, for example, this the blue cell will register will by message passing." }, { "start": 484, "end": 489, "text": " It can register that there is an alive cell right here." }, { "start": 489, "end": 494, "text": " But this alive cell will also register that there is no, there is a dead cell next to it." }, { "start": 494, "end": 504, "text": " So it can pass on that message to the blue cell and the blue cell will sort of know that there is kind of a border over there." }, { "start": 504, "end": 508, "text": " Then also diagonally to the blue cell, it will register itself." }, { "start": 508, "end": 511, "text": " Wow, there is a dead cell right here." }, { "start": 511, "end": 514, "text": " And that's right below this alive cell above." }, { "start": 514, "end": 517, "text": " So there must be some kind of a bend right here." }, { "start": 517, "end": 524, "text": " You can already see how through this sort of message passing and this cell right here, of course, will its neighbor is also dead." 
}, { "start": 524, "end": 533, "text": " Through this message passing, these cells can kind of figure out together the kind of more global shapes and they will recognize, ah, there is a bend." }, { "start": 533, "end": 535, "text": " It's something like this." }, { "start": 535, "end": 543, "text": " Right. And then other cells, maybe down here, will figure out, well, there is actually a corner right here." }, { "start": 543, "end": 549, "text": " OK. And then other cells on top here, they will figure out, well, there is actually a bend like this." }, { "start": 549, "end": 552, "text": " And then they can communicate this to each other." }, { "start": 552, "end": 561, "text": " So these cells right here that have the corner, they will at some point receive this integrated message that there is a bend on top." }, { "start": 561, "end": 567, "text": " And then they can make sense of that, right, and say, well, we are a corner and there is a bend on top." }, { "start": 567, "end": 572, "text": " And there is so there must be a digit that's something like this." }, { "start": 572, "end": 579, "text": " Right. And you can already see that at that point, they can be fairly sure that this is a two." }, { "start": 579, "end": 590, "text": " So you can see that the combination of message passing and kind of think of each cell thinking by itself can give rise to this kind of" }, { "start": 590, "end": 596, "text": " each cell coming into global agreement, not only agreement, but correct agreement." }, { "start": 596, "end": 608, "text": " So the message passing itself, again, described in the last paper, but really briefly, there is these 10 entries right here that decide on what the cell believes the state is." }, { "start": 608, "end": 612, "text": " And then you can have extra entries that are just kind of latent state." }, { "start": 612, "end": 620, "text": " There is no loss imposed on these latent variables, but ultimately the cell state consists of this long vector." }, { "start": 620, "end": 624, "text": " And then this vector is passed on to all the neighbors." }, { "start": 624, "end": 633, "text": " OK, this vector is passed to all the neighbors and all the neighbors send their own state vector to this cell." }, { "start": 633, "end": 638, "text": " Now, the state vectors of all the neighbor cells are then integrated." }, { "start": 638, "end": 642, "text": " So each one has this vector, vector, vector, vector, vector." }, { "start": 642, "end": 650, "text": " These are all integrated together with the own state of the of the cell in a linear fashion." }, { "start": 650, "end": 654, "text": " So there is like a small neural network in between." }, { "start": 654, "end": 657, "text": " And that will update the cell state." }, { "start": 657, "end": 660, "text": " In fact, I think they calculate a diff to the cell state." }, { "start": 660, "end": 663, "text": " They don't calculate the new cell state by definition." }, { "start": 663, "end": 668, "text": " They actually calculate a diff. And this should remind you of..." }, { "start": 668, "end": 672, "text": " So if we just look at this one dimensionally, right?" }, { "start": 672, "end": 680, "text": " So here's the cell and there is its neighbor, its neighbor, its neighbor, neighbor, and then the diagonal neighbors." }, { "start": 680, "end": 691, "text": " And we want to update this cell right here as a linear combination of all the cells surrounding it and itself." 
}, { "start": 691, "end": 696, "text": " And we want to do that for each. So each cell has the same update rule." }, { "start": 696, "end": 698, "text": " So it doesn't matter where the cell is." }, { "start": 698, "end": 705, "text": " You're trying to come up with one rule how to integrate the surrounding states into the cell itself." }, { "start": 705, "end": 711, "text": " This is so the biological kind of reasoning behind it is that all the cells follow the same rules." }, { "start": 711, "end": 717, "text": " But by virtue of where they are and how they communicate, these global patterns can arise." }, { "start": 717, "end": 724, "text": " And this cell will update and then if we consider the next cell next to it, it has its neighbors." }, { "start": 724, "end": 726, "text": " It will update according to its neighbors." }, { "start": 726, "end": 731, "text": " This should remind you of a convolution, right? Because this is exactly convolution." }, { "start": 731, "end": 736, "text": " So there will be a convolutional operator, a 3x3 convolutional operator right here." }, { "start": 736, "end": 742, "text": " This can be multi-channel, of course, because we have multiple channels right here in the cell state." }, { "start": 742, "end": 751, "text": " So the convolution will be learned once globally, which is exactly what a convolutional operator is, a convolutional kernel." }, { "start": 751, "end": 754, "text": " It will be learned to update these cell states." }, { "start": 754, "end": 757, "text": " In fact, it's a residual convolutional connection, right?" }, { "start": 757, "end": 764, "text": " This goes through the convolutional kernel and is then added together with the signal itself to give rise to the new cell states." }, { "start": 764, "end": 770, "text": " So one convolution across the entire image will take care of updating all the cells." }, { "start": 770, "end": 772, "text": " It's one round of message passing." }, { "start": 772, "end": 781, "text": " And then now contrary to a convolutional neural network, where then the signal would go into the next layer into the next convolutional kernel." }, { "start": 781, "end": 785, "text": " Sorry." }, { "start": 785, "end": 789, "text": " This is then repeated with the same convolutional kernel, right?" }, { "start": 789, "end": 792, "text": " The message passing algorithm is the same in each round." }, { "start": 792, "end": 799, "text": " So this is a recurrent neural network with a residual convolution as an operator." }, { "start": 799, "end": 806, "text": " That is the model for kind of the biological cell communication algorithm." }, { "start": 806, "end": 808, "text": " So these are these neural cellular automata." }, { "start": 808, "end": 811, "text": " The difference to the last paper is twofold." }, { "start": 811, "end": 814, "text": " First of all, in the last paper, we had RGB values up here." }, { "start": 814, "end": 816, "text": " Now it's the class labels." }, { "start": 816, "end": 825, "text": " So these are also passed around so that the cell passes to its neighbors what it believes the current labels are, but also these hidden features right here." }, { "start": 825, "end": 827, "text": " And we'll come to this in a second." }, { "start": 827, "end": 834, "text": " And the second difference is that the dead and alive cells are static." }, { "start": 834, "end": 839, "text": " So where these dead cells, where the dead cells and where the alive cells are, that never changes." 
}, { "start": 839, "end": 841, "text": " That used to change in the last paper." }, { "start": 841, "end": 842, "text": " Here it never changes." }, { "start": 842, "end": 847, "text": " It's only about passing the messages around between the cells." }, { "start": 847, "end": 848, "text": " All right." }, { "start": 848, "end": 853, "text": " So this is basically it." }, { "start": 853, "end": 856, "text": " So this is a model for agreement between cells." }, { "start": 856, "end": 859, "text": " I think it's pretty cool." }, { "start": 859, "end": 867, "text": " I would still like to go more into what kind of what exactly happens, what kind of messages are passed around." }, { "start": 867, "end": 871, "text": " But they do this a little bit." }, { "start": 871, "end": 873, "text": " So they have a bunch of experiments." }, { "start": 873, "end": 874, "text": " How do they train this stuff?" }, { "start": 874, "end": 883, "text": " Basically, how do they train this stuff that I can, you know, I can change it in between and it will actually it will update it live." }, { "start": 883, "end": 886, "text": " So the cells, you can't only do this once." }, { "start": 886, "end": 892, "text": " The cells must have a notion of continuously being alive, continuously updating themselves," }, { "start": 892, "end": 898, "text": " continuously being prepared that there is some sort of a modification to the cell." }, { "start": 898, "end": 902, "text": " And that's they do this by." }, { "start": 902, "end": 905, "text": " So here you can see, can I zoom?" }, { "start": 905, "end": 909, "text": " Well, I can't." }, { "start": 909, "end": 910, "text": " Now I can." }, { "start": 910, "end": 914, "text": " Here you can see that this is how they train it." }, { "start": 914, "end": 916, "text": " So they just initialize the cell states randomly." }, { "start": 916, "end": 919, "text": " That's why you see there are just random colors right here." }, { "start": 919, "end": 920, "text": " These are MNIST digits." }, { "start": 920, "end": 928, "text": " And then they train these cells, all of them, to predict the label of the MNIST digits, which they have in the training set." }, { "start": 928, "end": 936, "text": " And then so you can see, once you've trained it, that happens fairly, fairly quickly." }, { "start": 936, "end": 940, "text": " And then after 200 steps, they simply switch out the digit." }, { "start": 940, "end": 942, "text": " OK, they leave all the cells as they are." }, { "start": 942, "end": 945, "text": " Of course, some cells will be dead now and some cells will be alive." }, { "start": 945, "end": 948, "text": " The ones that come alive will just be initialized randomly." }, { "start": 948, "end": 952, "text": " But there are always going to be cells that are going to be present in both digits." }, { "start": 952, "end": 954, "text": " And those will just keep the label." }, { "start": 954, "end": 960, "text": " But, you know, usually the the digit here changes with a 90 percent probability." }, { "start": 960, "end": 965, "text": " And since this is one long run of a recurrent network, the network sort of changes." }, { "start": 965, "end": 972, "text": " That network sort of has to always be prepared for a change because it's trained with this mutation." }, { "start": 972, "end": 975, "text": " So it's trained for 200 steps in the first digit." }, { "start": 975, "end": 979, "text": " And then it's switched and trained for 200 steps with the second label." 
}, { "start": 979, "end": 984, "text": " That causes these cells to kind of always be ready for change." }, { "start": 984, "end": 985, "text": " And that's, yeah." }, { "start": 985, "end": 991, "text": " So you can see there are still some artifacts where the cells that they're not quite sure and so on." }, { "start": 991, "end": 993, "text": " And in fact, they get worse over time." }, { "start": 993, "end": 999, "text": " So if you pay real close attention towards the end of these cycles, it actually gets worse." }, { "start": 999, "end": 1002, "text": " So after a while, some of them will start flickering up again." }, { "start": 1002, "end": 1005, "text": " And that's a problem they've observed." }, { "start": 1005, "end": 1007, "text": " And they go into this right here." }, { "start": 1007, "end": 1010, "text": " So they have these graphs of accuracy over time." }, { "start": 1010, "end": 1015, "text": " So accuracy means average cell accuracy." }, { "start": 1015, "end": 1019, "text": " So they just take all the cells and they see how many of them are correct." }, { "start": 1019, "end": 1022, "text": " And you can see at the beginning of training pretty quickly." }, { "start": 1022, "end": 1024, "text": " So in the beginning, this is inference." }, { "start": 1024, "end": 1027, "text": " So inference, of course, you also do over time." }, { "start": 1027, "end": 1028, "text": " Right." }, { "start": 1028, "end": 1034, "text": " So this is in inference, you provide a digit, you initialize randomly, and then you let these cells communicate." }, { "start": 1034, "end": 1041, "text": " So you run the recurrent convolutional algorithm and you count how many cells output the correct label at each step." }, { "start": 1041, "end": 1044, "text": " And pretty quickly reaches high up." }, { "start": 1044, "end": 1049, "text": " And then you can see at the mutation, it drops down to random again, but also pretty quickly recover." }, { "start": 1049, "end": 1051, "text": " So it sounds pretty good." }, { "start": 1051, "end": 1054, "text": " And you can see a teeny tiny bit right here." }, { "start": 1054, "end": 1058, "text": " It's kind of going down after, you know, over time." }, { "start": 1058, "end": 1063, "text": " And so they determine they need to do something about this." }, { "start": 1063, "end": 1070, "text": " In fact, they first of all, they want to make a point that you have to figure out what exactly is happening." }, { "start": 1070, "end": 1073, "text": " So here they have average cell accuracy." }, { "start": 1073, "end": 1079, "text": " But what they also decide to measure is average total agreement across the batch." }, { "start": 1079, "end": 1088, "text": " Average total agreement basically means how many of the cells within a digit agree with each other on the label," }, { "start": 1088, "end": 1090, "text": " which is sort of a measure." }, { "start": 1090, "end": 1095, "text": " If this is really an MNIST digit, you know, it should be perfectly in one class and not the other." }, { "start": 1095, "end": 1097, "text": " I know there's some ambiguity." }, { "start": 1097, "end": 1107, "text": " But so what you should have at least, even if the cells are wrong, you should have a total agreement in the cells." }, { "start": 1107, "end": 1113, "text": " If this is in fact a digit, the cells should somehow agree with each other because that's what you train them to." }, { "start": 1113, "end": 1115, "text": " You train them to agree with each other." 
}, { "start": 1115, "end": 1121, "text": " And you can see again here as well, pretty quickly you have an agreement after a number of steps." }, { "start": 1121, "end": 1125, "text": " And then that agreement drops again, strangely, right?" }, { "start": 1125, "end": 1127, "text": " Because they've already reached an agreement." }, { "start": 1127, "end": 1132, "text": " You might think this will sort of maybe it will hamper down, but it might slightly go up." }, { "start": 1132, "end": 1136, "text": " But no, it actually slightly goes down over time." }, { "start": 1136, "end": 1138, "text": " So why is that?" }, { "start": 1138, "end": 1142, "text": " They also analyze this here, and I'm sorry about this chopped up graph." }, { "start": 1142, "end": 1151, "text": " But you can see that the here are the state values, here are the sizes, the real numerical sizes of these entries in the states." }, { "start": 1151, "end": 1155, "text": " And you can see that they grow over time." }, { "start": 1155, "end": 1162, "text": " So not only do they grow until the agreement is reached, but also they keep growing after that." }, { "start": 1162, "end": 1169, "text": " And here are the diffs from state to state. And you can also see that these never go to zero." }, { "start": 1169, "end": 1172, "text": " So why is that? And they have a hypothesis right here." }, { "start": 1172, "end": 1176, "text": " In fact, they have the hypothesis this is due to the cross entropy loss." }, { "start": 1176, "end": 1182, "text": " Now the cross entropy loss is kind of the most famous loss for classification." }, { "start": 1182, "end": 1187, "text": " So usually what you'll have is your neural network will output some distribution like this." }, { "start": 1187, "end": 1189, "text": " Let's say it's three classes." }, { "start": 1189, "end": 1193, "text": " So it believes that class number two here is the correct class." }, { "start": 1193, "end": 1202, "text": " And then you have a label, which you transform into a one-hot distribution, where this is one, these are zero." }, { "start": 1202, "end": 1211, "text": " And then you perform this cross entropy loss between the two, saying that the left thing should be more equal to the right thing." }, { "start": 1211, "end": 1223, "text": " And you do that in the sense of... So this is the kind of the entropy formulation." }, { "start": 1223, "end": 1226, "text": " But what you actually do is this y log p." }, { "start": 1226, "end": 1235, "text": " So p here is going to be the distribution that you output and y is going to be the distribution that the network outputs." }, { "start": 1235, "end": 1240, "text": " You can pretty clearly see y is going to be zero for all the classes that are wrong." }, { "start": 1240, "end": 1250, "text": " So the entire loss reduces to simply the probability here of the... Sorry, there is a negative." }, { "start": 1250, "end": 1253, "text": " The probability of the class that is correct." }, { "start": 1253, "end": 1256, "text": " So what you want to do is you want to push that up." }, { "start": 1256, "end": 1262, "text": " Now, of course, just looking at the loss, only the correct class is pushed up." }, { "start": 1262, "end": 1264, "text": " Nothing else is done." }, { "start": 1264, "end": 1271, "text": " Now, you also know that most of the time we combine this with a so-called softmax operator." 
}, { "start": 1271, "end": 1277, "text": " So what our network outputs isn't actually a distribution, it's what we call logit, so an unnormalized distribution." }, { "start": 1277, "end": 1281, "text": " So what it actually outputs could be something like this." }, { "start": 1281, "end": 1285, "text": " A high number, a negative number and a negative number." }, { "start": 1285, "end": 1289, "text": " And only by matter of normalization, we reach this distribution." }, { "start": 1289, "end": 1294, "text": " So the softmax operator will take care of normalizing." }, { "start": 1294, "end": 1300, "text": " And also the softmax operator, because of the normalization, when we back propagate this loss," }, { "start": 1300, "end": 1310, "text": " it causes this logit here to rise and it causes these ones to lower because of this normalization step, not actually because of the loss." }, { "start": 1310, "end": 1314, "text": " So I think they correctly say here is the cross entropy loss," }, { "start": 1314, "end": 1324, "text": " but it is the cross entropy loss combined with the softmax operator that we usually use in neural networks that makes this phenomenon happen." }, { "start": 1324, "end": 1326, "text": " So what is actually happening here?" }, { "start": 1326, "end": 1336, "text": " If you look at the softmax operator, what it does is it's like e to the x divided by the sum of e to the x prime overall," }, { "start": 1336, "end": 1348, "text": " or overall other classes, so you can fairly easily see that this exponential function here is never, ever, ever going to be zero." }, { "start": 1348, "end": 1353, "text": " So you can never have a zero entry right here." }, { "start": 1353, "end": 1361, "text": " So the loss forces you to push this thing up, but because you can never have zero entries there, of course, this can never be one." }, { "start": 1361, "end": 1366, "text": " So you can never actually reach perfect loss. And what does it do to the logits?" }, { "start": 1366, "end": 1374, "text": " You cannot reach perfect loss, but the gradient will always push you into the direction of upping this logit and downing this." }, { "start": 1374, "end": 1382, "text": " So raising the one that is correct and lowering actually into the negative direction, the ones that aren't correct." }, { "start": 1382, "end": 1386, "text": " So you can see that if we do this once, no problem." }, { "start": 1386, "end": 1392, "text": " If we do this in a single neural network, forward propagate, calculate loss, not a problem." }, { "start": 1392, "end": 1400, "text": " But if we do this over and over and over and over again in a convolutional neural network and we let it run for infinite time," }, { "start": 1400, "end": 1407, "text": " of course, what is going to happen is that these things are going to explode more and more and more." }, { "start": 1407, "end": 1415, "text": " So these losses are going to get bigger and bigger, which makes the entire rest of the network behave in a bigger and bigger fashion." }, { "start": 1415, "end": 1422, "text": " And that is exactly what you see here, because these simply the numerical values in the states," }, { "start": 1422, "end": 1432, "text": " they will be bigger and bigger and bigger because they push the network into the direction of more and more and more reducing the loss, thereby raising the logits." }, { "start": 1432, "end": 1439, "text": " So there's it's very disproportionate. 
At the end, you have to raise the logits by a lot to reduce the loss a little bit." }, { "start": 1439, "end": 1442, "text": " But the network doesn't care because that's what it was trained to do." }, { "start": 1442, "end": 1446, "text": " So they hypothesize if we use an L2 loss, this shouldn't happen." }, { "start": 1446, "end": 1458, "text": " Now in an L2 loss, you do not compare, you don't output logits, you output actual probabilities, and you simply compare the L2 distance to them." }, { "start": 1458, "end": 1464, "text": " So if you compare the L2 distance right here, yes, you will push this one up." }, { "start": 1464, "end": 1473, "text": " But if you push it too high, then it's too high and then it will be pushed down again until it is exactly the same level as the other one." }, { "start": 1473, "end": 1480, "text": " Now, the disadvantages here is that, of course, this isn't actually forced to be a valid probability distribution." }, { "start": 1480, "end": 1483, "text": " You can normalize it, yes, but you can go too high." }, { "start": 1483, "end": 1487, "text": " So you can output probabilities higher than one and so on." }, { "start": 1487, "end": 1493, "text": " So there's a whole slew of problems that come with this, but you can counter this." }, { "start": 1493, "end": 1505, "text": " So beside using an L2 loss, they also have another on top idea in that they always add noise to these residual updates that they do after the convolution," }, { "start": 1505, "end": 1512, "text": " just kind of to keep the network on its toes, saying that everything can always change with noise." }, { "start": 1512, "end": 1518, "text": " So in each step, it basically has to do some of some correction with respect to that noise." }, { "start": 1518, "end": 1528, "text": " And here you can see the clear difference, especially in the lower plot, where the total agreement before this blue line was when it went down over time." }, { "start": 1528, "end": 1539, "text": " And now with the L2 loss and a little bit more with this residual noise, it manages to keep the total agreement up and solve that problem." }, { "start": 1539, "end": 1552, "text": " And you can also see that the average magnitude of the updates no longer is rising over time, but actually it's keeping the same for the cell states and the updates converge towards zero." }, { "start": 1552, "end": 1557, "text": " Of course, not as much with the noise because the noise makes them." }, { "start": 1557, "end": 1564, "text": " The noise will make them non-zero, the updates, but still they are at the same magnitude." }, { "start": 1564, "end": 1572, "text": " And they manage to correct that noise and not incorporate more and more and more like the cross entropy loss." }, { "start": 1572, "end": 1578, "text": " So this, I don't want to go into the last few bits, except this one." }, { "start": 1578, "end": 1586, "text": " These cells have some interesting properties, notably they're also resistant to kind of out of distribution errors." }, { "start": 1586, "end": 1595, "text": " You can see that in this video where you can see it's classifying it fairly solidly as 1s." }, { "start": 1595, "end": 1611, "text": " But as soon as you draw a shape that is not kind of in the training set or in the classes of the training set, the cells keep disagreeing with each other." }, { "start": 1611, "end": 1617, "text": " So this you can see as sort of kind of a robustness to out of distribution samples." 
}, { "start": 1617, "end": 1623, "text": " And it's also pretty interesting to see that the messages here, where they go from." }, { "start": 1623, "end": 1636, "text": " So you can fairly clearly see that if you draw some kind of shape, that the message passing starts at kind of the most symbolic parts of the digits." }, { "start": 1636, "end": 1641, "text": " And here they have some chimeric digits or something they call it like this." }, { "start": 1641, "end": 1646, "text": " And just pay attention to where the messages start." }, { "start": 1646, "end": 1658, "text": " And you can clearly see that this sort of local determination of what a digit is will spread out over time to the other cells." }, { "start": 1658, "end": 1663, "text": " And I thought there was this last thing." }, { "start": 1663, "end": 1676, "text": " This thing. Yes. So here, not only do they visualize the cell state, so the color of the cell, and that's the thing on the left, is always the first 10 entries in this hidden state." }, { "start": 1676, "end": 1681, "text": " But on the right, they also visualize the other hidden entries." }, { "start": 1681, "end": 1688, "text": " And so each entry is represented by a two color thing where blue is very low number, red is a very high number." }, { "start": 1688, "end": 1693, "text": " And here you can see what these latent states pass around." }, { "start": 1693, "end": 1702, "text": " And also you can fairly clearly see that they do pass around these kind of typical sub shapes of the digit." }, { "start": 1702, "end": 1705, "text": " So in the case of a zero, that's going to be a bend." }, { "start": 1705, "end": 1709, "text": " In the case of a four, that's going to be these ends and corners of the numbers." }, { "start": 1709, "end": 1721, "text": " And you can see that over time, as these messages pass, also the cell states on the left, the visible states, the class labels change over time." }, { "start": 1721, "end": 1726, "text": " This lends a lot of credence, especially the six I like." }, { "start": 1726, "end": 1736, "text": " Or the two. You can see in the different, if you kind of look at the different latent states, that the kind of typical, the bends, the corners," }, { "start": 1736, "end": 1740, "text": " every latent state is sort of assigned to one of them." }, { "start": 1740, "end": 1745, "text": " And then they pass this information around in order to reach an agreement." }, { "start": 1745, "end": 1748, "text": " So I like this research, pretty cool research." }, { "start": 1748, "end": 1752, "text": " I don't want to say it's very useful, but certainly it's very interesting." }, { "start": 1752, "end": 1755, "text": " And I also like the format in this distilled format." }, { "start": 1755, "end": 1760, "text": " I think that's sort of the future of research rather than eight page PDFs." }, { "start": 1760, "end": 1764, "text": " You can look at it, it's interactive, you can have a little demo in it." }, { "start": 1764, "end": 1766, "text": " You can write for as long as you want." }, { "start": 1766, "end": 1772, "text": " And yeah, it's just overall better. This is still going." }, { "start": 1772, "end": 1774, "text": " Doesn't know what it is." }, { "start": 1774, "end": 1780, "text": " So lastly, you can, as I said, you can clearly see that, look, if I do this, it's a zero." }, { "start": 1780, "end": 1787, "text": " But if I do this, then the stem part will immediately go for a six because that's indicative of a six." 
}, { "start": 1787, "end": 1793, "text": " But then it will disagree with the zero part of the digit." }, { "start": 1793, "end": 1800, "text": " In fact, I seem to be unable to write a six. Is that an American six? Maybe." }, { "start": 1800, "end": 1803, "text": " Yeah, so with that, I'll leave this here." }, { "start": 1803, "end": 1809, "text": " I think this is, again, very interesting, this kind of biological models." }, { "start": 1809, "end": 1815, "text": " And certainly, if you're looking for an exciting research directions, this might be it." }, { "start": 1815, "end": 1817, "text": " And you do not need a lot of resources to do this." }, { "start": 1817, "end": 1821, "text": " This is very parameter efficient, as we saw in the last paper." }, { "start": 1821, "end": 1824, "text": " And certainly kind of a niche right now." }, { "start": 1824, "end": 1827, "text": " So that was it for me. I hope you enjoyed this." }, { "start": 1827, "end": 1855, "text": " If you liked it, share it out and bye bye. See you next time." } ]
BTLCdge7uSQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning
[ "Science & Technology" ]
[ "ml", "ai", "machine learning", "reinforcement learning", "deep rl", "deepmind", "google", "starcraft", "alphastar", "alphago", "alphazero", "value function", "policy", "vtrace", "upgo", "terran", "protoss", "zerg", "build order", "strategy", "pointer network", "transformer", "league training", "league", "battlenet", "artificial intelligence", "bot", "rl", "deep reinforcement learning", "model-free", "exploiters", "self-play", "ficticious self-play", "rts" ]
DeepMind's new agent to tackle yet another Esport: Starcraft II. This agent uses deep reinforcement learning with a new technique, called League Training, to catapult itself to Grandmaster-level skill at playing this game. Abstract: Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions, the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks. We evaluated our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players. Authors: Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, David Silver https://www.deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, let's talk about AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning. The corresponding paper looks like this, is by Oriol Vinyals et al. from DeepMind, and has recently been published in the journal Nature. Now let me say this first: stop publishing in Nature. This journal is not open access; it makes its readers pay for getting the article. Actually, you can access this article, or a public version of it, for free, but you can't print it and you can't download it unless you pay. And this, to me, seems ridiculous, because none of this money goes to the authors of the article, and none of it goes to the reviewers. The review quality isn't notably better, at least in the field of computer science. All of this is a publicity stunt by DeepMind, because Nature has been kind of impactful in the last decades. It's like: ooh, look at me, I got a big dick, I publish in Nature. Nothing more than that. It's like OpenAI saying their model is too dangerous to release to the world; I guess DeepMind might make the same claim about AlphaStar, like it's too dangerous a StarCraft player. Yeah, so stop this. Publish your research open access. Nature, and journals like it, are for computer science a remnant of the last century, so go on and join everyone else in distributing knowledge. Alright, rant over. Let's jump into this article. The article describes how to train a reinforcement learning agent to play the game of StarCraft II. For everyone who doesn't know StarCraft II, let me very quickly explain the game. StarCraft II is a real-time strategy game. You're in a kind of top-down, third-person view, you control your units, and the goal is to move your units around, build up buildings, and, using those buildings, produce more and more diverse units. Ultimately you want to produce some sort of army that can go to the opponent and destroy the opponent's base. You control all of this on a computer using a mouse and a keyboard, and StarCraft is notable for being very balanced. There are three different races you can play. First are the Terran, which are kind of human-ish; they have marines and tanks and helicopters, I believe, and things like this. Then the Protoss are some sort of alien race that is super advanced, so they can teleport and have energy shields and things like that. And last are the Zerg, which are kind of icky, ground-dwelling creatures that infect things and spread like a disease. The interesting thing here, compared to other real-time strategy games, is that the three races play very differently. The game is almost a different game if you play a different race, but they are so well balanced that almost any matchup is a fair game between equally skilled players. That makes StarCraft pretty unique. Also pretty unique are the very high actions-per-minute rates that pro players reach; they play this insanely fast. A game lasts about 10 to 15 minutes, and as I said, the goal is to destroy the enemy base. Training an RL agent to play this is very hard because the action space is very large: you have to target parts of the screen with your mouse, you have to look at what is on the screen and what you can do, there's this minimap down here, there are things you can do, there are opponents you can target, and so on. All of this is very, very difficult for an RL agent.
And at the end, after 10 minutes of playing, you either win or you lose, and the RL agent has to figure out which of the actions it took during those 10 minutes (was it this one? was it that one?) led to it winning or losing. These are very hard problems for reinforcement learning, and DeepMind has combined almost every trick in the book known to RL so far to achieve this. The main novel contribution here, I'd say, is what is called league training, and we'll get to that. First of all, if you don't know what reinforcement learning is: reinforcement learning is basically what I just described. You have an input, which could be this thing here, and you have a set of actions that you can do; the set of actions here is anywhere you can click on the screen. And you have to do this over and over and over again until you either win or you lose. Only at the end do you receive whether you won or lost, and from that you have to learn to play the game. So it's hardcore machine learning, because you get minimal information and have to achieve a lot from it. The first thing that DeepMind actually does is supervised learning (we'll get into how exactly the model works later). The first thing DeepMind does is train an agent to simply imitate humans. So you have human data: these are games played by good humans, not people like me, games played by humans with a significantly high ELO. The first thing you extract is this Z here. Z is called a statistics vector, and as I understand it, it's mainly the build order, meaning in which order you build your buildings and units, and this is very important in StarCraft. This is a strategic decision, where you say: okay, first I'm going to build three worker units (worker, worker, worker), then I'm going to build a house, and so on. These are major strategic decisions that you have to make minutes ahead of time, planning in advance, and this stays roughly constant for the game. So this is extracted and provided to the model as an input: basically, what is the current overall strategy. The second thing that is extracted, at every time step, is the observation that the human had (the screen that the human sees) and also the action that the human took. So the human takes the mouse and clicks somewhere; this is supposed to be a mouse pointer, and it clicks here. And then this part here is the model, and this is the policy function of the model. The policy, which decides what to do, is trained to match the action that the human took. So in essence, first you train an agent to simply imitate humans, and this you can do by supervised learning; this is classic machine learning. At each step you have this input, which is an image, and you have the strategy you're trying to follow, and from these two you're simply trying to match the action that the human took, assuming the human made a good decision. So this is how you initialize; you don't start from scratch. Now I have to say that even though the name is AlphaStar, it has surprisingly little to do with AlphaGo or AlphaZero that DeepMind has done before.
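A minimal sketch of this supervised imitation stage (my own illustration; policy, obs, z and human_action are placeholder names, not DeepMind's API, and the real model is of course vastly larger):

import torch.nn.functional as F

def supervised_step(policy, optimizer, obs, z, human_action):
    # policy(obs, z): network mapping the observation plus the build-order
    # statistic z to logits over actions; human_action: index of the action
    # the human actually took in this situation.
    logits = policy(obs, z)
    loss = F.cross_entropy(logits, human_action)  # match the human's choice
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()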
Mainly, this is entirely model-free reinforcement learning, and it goes more in the direction of classic deep RL. And you can see that with the human data you can already get pretty far. These down here are the leagues of StarCraft, and these here are percentiles of players. You see that with the supervised training you can already get better than 80-85% of human players: pretty impressive, simply by imitating humans. Now, before we get to how to further improve this, let's first go into what the model looks like. Down here, they describe the model. The model is supposed to map from input to output: from the screen that the agent sees, plus some other things, to what the agent is going to do, to an action a. If you simply do this at every time step, you have a game-playing agent. So first, the question is, of course: how does this happen? The input isn't only the thing that the agent sees, which is the minimap (I believe that's the minimap, or the entire map; in essence, it is a picture). It is also a list of entities: the game engine extracts a list of entities, and for friendly units these can be inside the screen here and outside the screen. So the assumption is that the agent knows about all of its units, where they are, and what their statistics are. In this entity list, for each entity you have its health, its type, its position, whether it carries any items, and so on: all the things you need to know about this entity. Along with that come opponent entities, but only the ones that are on screen. All of this goes into this list of entities. The next features are scalar features, and as I understand it, scalar features are things like: what race are you playing, what time is it in the game, and so on. So these are additional features. And there are also baseline features, and these are mainly used to train the value network. This is not going to make sense if you know nothing about reinforcement learning, but one thing they claim in this paper is that for computing the value network, they also use the observations of the opponent player. You know these during training, because you're doing self-play, and you don't need the value network during inference, so you can actually do this, and it improves performance significantly. Alright, that's just for people who know RL very well; everyone else, don't worry too much about these things. So these are the inputs: the scalar features, the entities, and the minimap. Each one goes through a separate encoder. The minimap goes through a ResNet, which is a convolutional network; the entities go through a transformer, which is an appropriate thing to encode a set of entities; and the scalar features go through a classic feed-forward network, an MLP. All of these get combined here into a deep LSTM that goes over time. Now, the deep LSTM is what really makes the strategy, because at each time step a screen like this is fed into the thing, but the agent also needs to remember what it did one or two steps ago. This is important because you don't have full observability; you need to know what you did in the past.
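As a rough skeleton of what such a multi-modal core might look like (my own simplification with made-up sizes and channel counts, not the paper's architecture; entities are assumed to arrive already embedded to a common dimension d):

import torch
import torch.nn as nn

class Core(nn.Module):
    def __init__(self, d=256):
        super().__init__()
        self.scalar_mlp = nn.Sequential(nn.Linear(32, d), nn.ReLU())  # race, game time, ...
        self.entity_enc = nn.TransformerEncoder(                      # set of unit entities
            nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
            num_layers=2)
        self.minimap_cnn = nn.Sequential(                             # spatial encoder (a ResNet in the paper)
            nn.Conv2d(8, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, d))
        self.lstm = nn.LSTM(3 * d, d, num_layers=3, batch_first=True)  # the "deep LSTM" over time

    def forward(self, scalars, entities, minimap, state=None):
        # scalars: (B, 32), entities: (B, N, d), minimap: (B, 8, H, W)
        s = self.scalar_mlp(scalars)
        e = self.entity_enc(entities).mean(dim=1)      # pool the entity set
        m = self.minimap_cnn(minimap)
        x = torch.cat([s, e, m], dim=-1).unsqueeze(1)  # one time step
        out, state = self.lstm(x, state)               # state carries memory across steps
        return out.squeeze(1), state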
So if at the last step you saw this screen, and at the step before you saw this screen, then all of this goes through these encoding steps into the LSTM, and the LSTM will encode all of these different steps over time. So you can say: alright, if I have just started constructing a building, I should probably not start the same building again, even though I can't see it on the screen, because I know that three steps ago I did start constructing a building. So the LSTM is basically where you integrate your strategy over time. From the LSTM you have to make two predictions: what to do (this is the action) and how valuable your current state is (this is called the value network, a core component of deep reinforcement learning). These two components, the policy, which would be everything over here, and the value network, which is everything over here, are the things you need for actor-critic learning, and actor-critic learning is the current state of the art in deep RL. So DeepMind does nothing unusual here, except, as I said, they use those baseline features for the value network. But if you don't know what a value network is, don't worry about it; the important part for playing the game is actually the part over here, called the policy. First you need to decide what action you take, and there are many action types in StarCraft, as I already said: you can build a building, you can move a unit, and you can actually move the camera (that's an action type too, because you maybe want to see what's over here, or over here). Once you have decided on what action you want to take, you have to decide when to do it. So the action type, once you've figured it out, goes into the next neural network, and that decides: okay, when do I do this action? It specifies a delay. Then, once you've decided what to do and when to do it, it goes into the next neural network, and that decides whether to put this into the queue of actions, because the agent here is limited to a certain number of actions, I think 22 actions per five seconds or something like this, in order to mimic human limitations. So there's a queue of actions to be executed, and the agent needs to decide: is this action so important that I put it into the queue? Alright, once you have decided what to do, when to do it, and whether to do it at all, it goes into the next neural network, which decides which units to do it with; if you want to build a building, you have to choose one or more workers to do it.
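This chain of decisions can be sketched as an auto-regressive head, where each sampled choice conditions the next one (again my own toy version, with invented sizes):

import torch
import torch.nn as nn
from torch.distributions import Categorical

class ActionHeads(nn.Module):
    def __init__(self, d=256, n_types=100, n_delays=128):
        super().__init__()
        self.type_head, self.type_emb = nn.Linear(d, n_types), nn.Embedding(n_types, d)
        self.delay_head, self.delay_emb = nn.Linear(d, n_delays), nn.Embedding(n_delays, d)
        self.queue_head = nn.Linear(d, 2)  # queue the action or not

    def forward(self, core):                    # core: (B, d) LSTM output
        a_type = Categorical(logits=self.type_head(core)).sample()
        core = core + self.type_emb(a_type)     # later heads see the chosen type
        delay = Categorical(logits=self.delay_head(core)).sample()
        core = core + self.delay_emb(delay)     # ... and the chosen delay
        queued = Categorical(logits=self.queue_head(core)).sample()
        return a_type, delay, queued            # unit-selection and target heads would follow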
I don't actually know how StarCraft works in this respect (I'm a bit of a noob), but for most of these things you have to select units with which to do the action, and there I like the use of a pointer network. A pointer network is a network that can point to its own inputs. It's sort of like an attention network, but not really. In a pointer network you have a set of inputs, like we have here: entity, entity, entity, entity. And you can see that the entity encoder actually has skip connections that go here, so this network directly gets these entities as input. You then have a neural network on top that takes all of these things as input, and what it outputs is a pointer to one of them: you can say, look, I point to this thing right here. That is called a pointer network. And as I said, it's different from an attention network; in both cases you actually get a distribution, and there is a difference, but we don't really have time to go into it here. In essence, with a pointer network you can select which of these entities you want to do something with. Alright, now you've decided on which action, when, whether to queue it, and with which units to do it. Now, for some actions, for example if the action is attack or heal or something, you have to decide on a target unit (which unit you want to target) or on which location on the map you want to target (this is the target point here). You can see again that there are skip connections from the entity encoder and from the spatial encoder to these things. While the target unit head is an attention network (much like a pointer network, you point to places in lists), the target point head is a deconvolutional ResNet. What that means is: you have this spatial encoder here that embeds the minimap (so there will be a neural network right here; let's draw the neural network in this color right here), and it gives you an embedding of that, which is what you feed, for example, into the LSTM. But then you also have a deconvolutional network, which again produces a minimap; it's not the original minimap, but a distribution over locations, saying: here, do I want to point here? So this neural network is responsible for producing the dot on the minimap, basically saying: okay, I know what to do, when to do it, with which units, and so on, and I want to do it right here on the minimap. And now you have it: you go from the input, which is these things (the minimap, the entities, and so on), to what you want to do, where, when, with which units, and so on. This is called the policy, and it's extremely complicated. Every one of these boxes here is a neural network, and you can see it's a lot to train; of course they have a lot of resources, since they are DeepMind, but that's the main thing. Alright, they have a few tricks to train this, and we won't go too much into them, but one of the tricks is V-trace from the IMPALA paper, another is UPGO, the upgoing policy update, and a third is TD(lambda) learning. All of these are improvements on the classic actor-critic reinforcement learning style, like A2C or A3C; if you are interested, you can look into these things.
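A minimal pointer-style selection head in that spirit might look like this (my own sketch; the entities argument stands in for the skip connections from the entity encoder):

import torch
import torch.nn as nn
from torch.distributions import Categorical

class PointerHead(nn.Module):
    def __init__(self, d=256):
        super().__init__()
        self.query, self.key = nn.Linear(d, d), nn.Linear(d, d)

    def forward(self, core, entities, mask=None):
        # core: (B, d) state; entities: (B, N, d) encoder outputs (skip connection)
        q = self.query(core).unsqueeze(1)              # (B, 1, d)
        k = self.key(entities)                         # (B, N, d)
        scores = (q * k).sum(-1) / k.shape[-1] ** 0.5  # (B, N) dot-product scores
        if mask is not None:
            scores = scores.masked_fill(~mask, float("-inf"))  # e.g. only selectable units
        return Categorical(logits=scores).sample()     # index of the pointed-to entity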
So that's how they train it, and the question now is: what's the protocol for training it? We saw, okay, there is supervised learning, cool; then there is reinforcement learning, alright. In the reinforcement learning part, as we said, you get a kind of reward, and the reward goes into this TD(lambda) and V-trace and upgoing policy update to train the value function and the policy. But the special thing that this paper introduces is what's called league training. In papers like AlphaGo or AlphaZero, what had been done is called self-play, and self-play basically means you have an agent (this is supposed to be an artificial intelligence; how do we make it look artificial? okay, it gets a little hat, a funky hat, it's a robot), and the robot plays a copy of itself. The copy might be slightly different, but basically these two play each other and thereby become better and better. You can see this over time: as the purple one gets better, the blue one gets better as well, because they play against each other, and when one falls behind, it simply copies over from the other one, catches up again, and they continue competing. So by competing against each other, they get better; this is called self-play. Now, people have noticed that this kind of setup can lead to instabilities, because you can get trapped in cycles, like rock-paper-scissors cycles. So what they do instead is: as the agents get better (this is the first version; the second version is a bit better, so they get bigger hats; and down here they are even better, so they have ginormous hats, but they might have some weaknesses because they only play against each other), they occasionally play old versions of the other player or of themselves. Occasionally the new versions fall back and play old versions, not only the current versions of the agent. This is called fictitious self-play: you not only play your current opponent or your current self (it's the same anyway, because you keep copying the weights), you also play the old ones. And this paper goes a step further and says: actually, we do this, but we want to prioritize the good ones. For example, we know that the current ones are good, but we also know that this particular one was pretty good so far, so we keep making the new ones play against that one more often. This prioritized fictitious self-play has led to an improvement in these kinds of self-play algorithms. And the real new part of this AlphaStar paper is the fact that they do this league training. In the league training (this is what it looks like, but I find this graphic rather confusing; I'd rather explain it something like this): alright, so there is your current strategy, and it has a hat, and it does all of the above. I play against myself (the smaller-hat thing), and I play against past versions of myself, fine. But then you also have what's called exploiters.
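The prioritization can be sketched as weighted opponent sampling (my own reading of the idea; the particular weighting function is an assumption, since the paper describes a family of such curves):

import random

def sample_opponent(candidates, win_prob, p=2.0):
    # candidates: past agent snapshots; win_prob[c]: estimated P(current agent beats c).
    # Weight (1 - win_prob) ** p focuses training on opponents we still lose to.
    weights = [(1.0 - win_prob[c]) ** p for c in candidates]
    if sum(weights) == 0:                 # we beat everyone: fall back to uniform
        return random.choice(candidates)
    return random.choices(candidates, weights=weights, k=1)[0]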
An exploiter (let's give it a triangle hat, because it's very evil) is specifically tasked with targeting only the current good agent. So this agent right here is tasked with playing old versions of itself and playing the exploiter, both at the same time, but the exploiter is only tasked with playing this agent. What the exploiter can do is specialize in exploiting whatever weaknesses this player has, and the hope, of course, is that this player will become better in response, because there's a player trying to exploit it. And as this player becomes better than the exploiter, the exploiter is reinitialized and tries to find new weaknesses. So, as this one continues to learn, the exploiters are reinitialized; you can see this here. These are called the main agents, and you can see they play against each other and against past versions of themselves. But then there are these main exploiters, and the main exploiters are constantly reinitialized from human data (you can see this here), and they only play against the main players. They don't have to deal with any of the past players or with playing against themselves; they only try to exploit the main players, and thereby the main players get better. Once a main player gets better than an exploiter, the exploiter is reinitialized, so the exploiters are always reinitialized to find new exploits of the main agents. The third component is what's called a league exploiter (its hat is a wavy hat), and what the league exploiter does is play against past versions of itself and of everybody else. You can see the past-version arrows here: it goes against all past players, so this represents all the past players that ever existed. The main agents, by the way, also play against all these past players, and also against past versions of this main exploiter here; the important thing is just that the current main exploiter doesn't play past versions of itself. And here is a thing that I find missing, honestly; I don't know whether I misunderstand this, but I'm pretty sure I don't: these also play these, and that's an arrow missing in the drawing. The league exploiters also play the main agents. The main difference between the league exploiters and the main agents is that the league exploiters don't play themselves; there is no playing-themselves for the league exploiters. So what the league exploiters can do is find weaknesses of the entire league, and by having them play the main agents using those found weaknesses, you bet that the main agents will get better against those major weaknesses of the entire league. So the main agents get better by playing the main exploiters, because the main exploiters are specifically trying to exploit them; they get better by playing the league exploiters, because the league exploiters find weaknesses of the entire league; and they also get better by playing each other.
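The three roles and their matchmaking can be condensed into a sketch like this (my own invention down to every name; it only mirrors the structure described here):

import random

class Player:
    def __init__(self, weights, role):  # role: "main", "main_exploiter", "league_exploiter"
        self.weights, self.role = weights, role

def choose_opponent(player, current_league, past_players):
    if player.role == "main_exploiter":
        # only attack the current main agents; reinitialized from human data
        # once it has found and exploited a weakness
        return random.choice([p for p in current_league if p.role == "main"])
    if player.role == "league_exploiter":
        # probe the whole history of the league for global weaknesses
        return random.choice(past_players)
    # main agents: mix of self-play, past players, and the exploiters hunting them
    return random.choice(current_league + past_players)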
So that makes these main agents, you can say, trained against everything under the sun, against any possible exploit that can be found, either in themselves or in the league generally, and thereby they get really good at StarCraft, because they can counter pretty much everything. This is how league training works, and this is what I feel is the main contribution of this paper to the reinforcement learning world. Now, they do an ablation study here, and you can see where this ends up: these final agents end up at Grandmaster level in StarCraft and beat 99-point-something percent of human players. Really, really good. They do an ablation study of pretty much all the tricks they use. You can see here this includes the league composition: what happens if we only have main agents, then add main exploiters, then league exploiters, and you can see the ELO going up. Then you can see multi-agent learning: how much does fictitious self-play help, how much does it help that we prioritize strong players, and so on, and you again see the ELO going up. How much does it help that we use human data? How much does it help that we use these different networks? They have very good ablation studies of how much each of these things helps. Here they investigate what happens if we didn't have a camera interface: what if we could see the entire game at once, and not only the opponents that are within the camera, and what if we didn't need to move the camera? They investigate the off-policy learning corrections that we mentioned, and so on. I find it very cool that they do these huge ablation studies to show how much each of the tricks they used contributes to their superior performance. Here you can see how these agents develop over training; they have a massive infrastructure and they train for days, you can see this here. And you can see that the main agents just get better and better and better, while the main exploiters, of course, stay at roughly the same level, but keep getting reinitialized: these main exploiters are trained to exploit these main agents, and this one is trained to exploit these ones. They're not really good agents by themselves; they're simply trained to find and exploit weaknesses of the main agents. Likewise, the league exploiters do get better with the league, but they are only concerned with exploiting current and past versions of the league, also to make the main agents better. So everything is geared towards making these main agents better, and you can see it actually works. They also have some analysis of which units these agents build; I'm not versed enough in StarCraft to comment on that. All in all, I find this to be a very cool paper, and I find what they do to be described fairly clearly, though they do not release the source code, only some kind of pseudocode; the analysis and the ablations are very good. The results are, let's say, debatable, because of course you can't directly compare machines to humans, especially in a game where you have to make quick actions, even if you limit those actions. And they do this here: they have this monitoring layer, which limits the actions and introduces delays and so on.
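Such an action-rate limit is easy to picture as a sliding-window counter (a sketch of my own, not the paper's actual monitoring layer, which applies several different limits and delays; the 22-actions-per-5-seconds figure is the one quoted earlier):

from collections import deque

class ActionLimiter:
    def __init__(self, max_actions=22, window=5.0):
        self.max_actions, self.window = max_actions, window
        self.times = deque()  # timestamps of recent actions

    def allow(self, now):
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()              # drop actions outside the window
        if len(self.times) < self.max_actions:
            self.times.append(now)
            return True
        return False                          # over the limit: the agent must no-op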
But still, it's not the same as a human, who might not always be able to sustain these 22 actions per five seconds; if something quick happens, a human may need some kind of relaxation phase afterwards, and so on. With these kinds of delays and action limits they try to model such human limitations, and I find this as fair as possible. What I do find kind of problematic is this: for their own units, as I said, the agent can also see the ones that are outside the camera, and that seems kind of shady. Of course, you can claim humans can use command groups and whatnot to also control units outside the camera, but it's not really the same, so that's a distinct advantage that the machine has. But yeah, in any case, I find it to be very well done, and I hope this made it a bit clearer what the exact contributions are. And with that, have a fun time playing against AlphaStar. Bye bye.
[ { "start": 0, "end": 7.28, "text": " Alright, let's talk about AlphaStar, Grandmaster level in StarCraft 2 using multi-agent reinforcement" }, { "start": 7.28, "end": 8.36, "text": " learning." }, { "start": 8.36, "end": 15.3, "text": " The corresponding paper looks like this and is by Oriol Vinyals et al. from DeepMind and" }, { "start": 15.3, "end": 19.56, "text": " has been published in the journal of Nature recently." }, { "start": 19.56, "end": 21.84, "text": " Now let me say this first." }, { "start": 21.84, "end": 24.12, "text": " Stop publishing in Nature." }, { "start": 24.12, "end": 26.38, "text": " This is a journal is not open access." }, { "start": 26.38, "end": 29.92, "text": " It makes its readers pay for getting the article." }, { "start": 29.92, "end": 36.440000000000005, "text": " So actually you can access this article or a public version of it for free but you can't" }, { "start": 36.440000000000005, "end": 40.760000000000005, "text": " print it, you can't download it unless you pay for it." }, { "start": 40.760000000000005, "end": 47.84, "text": " And this to me, it seems ridiculous because none of this money goes to the authors of" }, { "start": 47.84, "end": 48.92, "text": " the article." }, { "start": 48.92, "end": 50.88, "text": " None of this money goes to the reviewers." }, { "start": 50.88, "end": 56.160000000000004, "text": " The review quality isn't notably better, at least in the field of computer science." }, { "start": 56.16, "end": 61.68, "text": " All of this is a publicity stunt by DeepMind because Nature has been kind of impactful" }, { "start": 61.68, "end": 62.68, "text": " in the last decades." }, { "start": 62.68, "end": 68.44, "text": " It's like, ooh, look at me, I got a big dick I publish in Nature." }, { "start": 68.44, "end": 69.6, "text": " Nothing more than that." }, { "start": 69.6, "end": 74.08, "text": " It's like OpenAI saying their model is too dangerous to release to the world." }, { "start": 74.08, "end": 78.16, "text": " I guess DeepMind might make the same claim about AlphaStar." }, { "start": 78.16, "end": 81.36, "text": " It's like too dangerous of a StarCraft player." }, { "start": 81.36, "end": 85.12, "text": " Yeah, so stop this." }, { "start": 85.12, "end": 88.64, "text": " Publish your research in open access." }, { "start": 88.64, "end": 92.04, "text": " Nature or journals like these for computer science." }, { "start": 92.04, "end": 94.56, "text": " It's a remnant of the last century." }, { "start": 94.56, "end": 99.72, "text": " So go on and join everyone else in distributing knowledge." }, { "start": 99.72, "end": 102.4, "text": " All right, rant over." }, { "start": 102.4, "end": 104.36000000000001, "text": " Let's jump in into this article." }, { "start": 104.36000000000001, "end": 110.84, "text": " So the article describes how to train a reinforcement learning agent to play the game of StarCraft" }, { "start": 110.84, "end": 112.04, "text": " 2." }, { "start": 112.04, "end": 117.04, "text": " So StarCraft 2 is this game for everyone who doesn't know." }, { "start": 117.04, "end": 118.80000000000001, "text": " Just very quickly explain the game." 
}, { "start": 118.80000000000001, "end": 125, "text": " StarCraft 2 is a real time strategy game and you're kind of in this top third person view" }, { "start": 125, "end": 130.4, "text": " and you control your units and the goal is kind of to move your units around and first" }, { "start": 130.4, "end": 136, "text": " of all build up buildings and using those buildings you can then produce more and more" }, { "start": 136, "end": 142, "text": " diverse units and ultimately you want to kind of produce some sort of army that can go to" }, { "start": 142, "end": 145.76, "text": " the opponent and destroy the opponent's base." }, { "start": 145.76, "end": 152, "text": " So you control all of this on a computer using a mouse and a keyboard and StarCraft is notable" }, { "start": 152, "end": 154.16, "text": " for being very balanced." }, { "start": 154.16, "end": 157.6, "text": " So there are three different races you can play." }, { "start": 157.6, "end": 164.64, "text": " So first are the Terran which are kind of human, human-ish." }, { "start": 164.64, "end": 170.64, "text": " They have marines and tanks and helicopters I believe and things like this." }, { "start": 170.64, "end": 177.83999999999997, "text": " Then the Protoss are some sort of alien race that are super advanced so they can teleport" }, { "start": 177.83999999999997, "end": 182.11999999999998, "text": " and have energy shields and things like that." }, { "start": 182.11999999999998, "end": 189.35999999999999, "text": " And then last are the Zerg and the Zerg are kind of icky ground dwelling creatures that" }, { "start": 189.35999999999999, "end": 195.66, "text": " infect things and spread like a disease." }, { "start": 195.66, "end": 200.76, "text": " So the interesting thing here is compared to other real-time strategy games is that" }, { "start": 200.76, "end": 203.96, "text": " the three races they play very different." }, { "start": 203.96, "end": 209.8, "text": " So the game is almost a different game if you play as a different race but they are" }, { "start": 209.8, "end": 216.64, "text": " so well balanced that almost any matchup is kind of a fair game between equally skilled" }, { "start": 216.64, "end": 218.24, "text": " players." }, { "start": 218.24, "end": 220.5, "text": " So that's makes StarCraft pretty unique." }, { "start": 220.5, "end": 226.68, "text": " Also pretty unique is the very very high action per minute rates that pro players get." }, { "start": 226.68, "end": 229.48, "text": " Like they play this insanely fast." }, { "start": 229.48, "end": 238.08, "text": " So game lasts about 10 to 15 minutes and as I said the goal is to destroy the enemy base." }, { "start": 238.08, "end": 244.12, "text": " So to train an RL agent to play this is very hard because the action space is very high." }, { "start": 244.12, "end": 248.72, "text": " You have to target with your mouse part of the screen." }, { "start": 248.72, "end": 253.72, "text": " You have to look what is on the screen, what can I do." }, { "start": 253.72, "end": 256, "text": " There's this mini map down here." }, { "start": 256, "end": 259.32, "text": " There are things you can do." }, { "start": 259.32, "end": 261.28, "text": " There are opponents you can target and so on." }, { "start": 261.28, "end": 266.84, "text": " So all of this is very very very difficult for an RL agent." 
}, { "start": 266.84, "end": 274.32, "text": " And at the end, after 10 minutes, you play play play play play and after 10 minutes you" }, { "start": 274.32, "end": 276.96, "text": " either win or you lose." }, { "start": 276.96, "end": 282.96, "text": " And the RL agent has to figure out which of the actions that I did during those 10 minutes" }, { "start": 282.96, "end": 283.96, "text": " right." }, { "start": 283.96, "end": 284.96, "text": " Was it this one?" }, { "start": 284.96, "end": 285.96, "text": " Was it this one?" }, { "start": 285.96, "end": 287.76, "text": " Which led to me winning or losing?" }, { "start": 287.76, "end": 292.15999999999997, "text": " These are very hard problems for reinforcement learning." }, { "start": 292.15999999999997, "end": 299.64, "text": " And DeepMind has combined almost every trick in the book known so far to RL to achieve" }, { "start": 299.64, "end": 300.64, "text": " this." }, { "start": 300.64, "end": 308.03999999999996, "text": " Now the main contribution I'd say here that is novel is what is called league training" }, { "start": 308.03999999999996, "end": 310.86, "text": " and we'll get to that." }, { "start": 310.86, "end": 319.4, "text": " So first of all, if you don't know what reinforcement learning is, reinforcement learning is basically" }, { "start": 319.4, "end": 320.82, "text": " what I just described." }, { "start": 320.82, "end": 329.12, "text": " You have an input right, which could be this thing here and you have a set of actions that" }, { "start": 329.12, "end": 333.52, "text": " you can do, which the set of actions here is anywhere you can click right, you can click" }, { "start": 333.52, "end": 335.64, "text": " anywhere on the screen." }, { "start": 335.64, "end": 341.16, "text": " And you have to do this over and over and over and over again until you either win or" }, { "start": 341.16, "end": 342.68, "text": " you lose." }, { "start": 342.68, "end": 348.12, "text": " And from that you will see you will at the end receive Yeah, you win or you lose and" }, { "start": 348.12, "end": 350.72, "text": " then you have to kind of learn to play the game." }, { "start": 350.72, "end": 355.72, "text": " So it's machine learning hardcore because you get minimal information and have to achieve" }, { "start": 355.72, "end": 357.92, "text": " a lot of things from it." }, { "start": 357.92, "end": 366.16, "text": " So the first thing that DeepMind actually does is it does supervised learning." }, { "start": 366.16, "end": 371.86, "text": " And we'll get into how exactly the model works later." }, { "start": 371.86, "end": 378.08000000000004, "text": " But first thing DeepMind does is it trains an agent to simply imitate humans, right?" }, { "start": 378.08000000000004, "end": 381.28000000000003, "text": " So you have human data." }, { "start": 381.28000000000003, "end": 387.52000000000004, "text": " And from the human data, you so these are games played by humans, good humans, right?" }, { "start": 387.52, "end": 390.71999999999997, "text": " Not not people like me." }, { "start": 390.71999999999997, "end": 396.96, "text": " So these these are games played with humans from a significantly high ELO." }, { "start": 396.96, "end": 399.84, "text": " And the first thing you extract is this Z here." }, { "start": 399.84, "end": 403.44, "text": " Now Z is is called a statistics vector." 
}, { "start": 403.44, "end": 409.08, "text": " And as I understand it, it's mainly the build order, which means in which order do you build" }, { "start": 409.08, "end": 412.4, "text": " your buildings and units and this is very important in StarCraft." }, { "start": 412.4, "end": 418.76, "text": " This is a strategic decision where you say, okay, first, I'm going to build three worker" }, { "start": 418.76, "end": 419.76, "text": " units." }, { "start": 419.76, "end": 424.28, "text": " This is like three workers, worker, worker, worker, and then I'm going to build a house" }, { "start": 424.28, "end": 426.2, "text": " and then I'm going to and so on." }, { "start": 426.2, "end": 434.08, "text": " So these are major strategic decisions that that you kind of have to make with minutes," }, { "start": 434.08, "end": 438.09999999999997, "text": " minutes ahead of time to plan in advance." }, { "start": 438.1, "end": 442.58000000000004, "text": " And this this is kind of stays constant for the game." }, { "start": 442.58000000000004, "end": 446.44, "text": " So this is extracted and provided to the model as an input." }, { "start": 446.44, "end": 451.28000000000003, "text": " So what is the current strategy basically the current overall strategy?" }, { "start": 451.28000000000003, "end": 457.72, "text": " The second thing that is extracted is this is at every time step, the observation that" }, { "start": 457.72, "end": 466.86, "text": " the humans had so the screen that humans see, and also the actions that the human did, right?" }, { "start": 466.86, "end": 472.96000000000004, "text": " So the human takes its mouse and clicks somewhere, right?" }, { "start": 472.96000000000004, "end": 477.6, "text": " This is supposed to be a mouse pointer and clicks here, right?" }, { "start": 477.6, "end": 482.62, "text": " And then the model, this part here, this is the model." }, { "start": 482.62, "end": 484.64, "text": " And this is the policy function of the model." }, { "start": 484.64, "end": 488.48, "text": " So the policy decides what to do, right?" }, { "start": 488.48, "end": 492.64, "text": " Is trained to match the action that the human did." }, { "start": 492.64, "end": 498.28, "text": " So in essence, first, you train an agent to simply imitate humans." }, { "start": 498.28, "end": 500.36, "text": " And this you can do by supervised learning, right?" }, { "start": 500.36, "end": 502.24, "text": " This is classic machine learning." }, { "start": 502.24, "end": 509.24, "text": " Each each step you have this input, which is an image, and you have the strategy you're" }, { "start": 509.24, "end": 510.56, "text": " trying to follow." }, { "start": 510.56, "end": 515.56, "text": " And from these two, you're simply trying to match the action that the human did, assuming" }, { "start": 515.56, "end": 518.02, "text": " the human made a good decision." }, { "start": 518.02, "end": 520.68, "text": " So this is how you initialize, right?" }, { "start": 520.68, "end": 524.04, "text": " You don't start from scratch." }, { "start": 524.04, "end": 530.76, "text": " Now I have to say that even though this name is Alpha star, it has surprisingly little" }, { "start": 530.76, "end": 537.4399999999999, "text": " to do with Alpha Go or Alpha Zero that DeepMind has done before." }, { "start": 537.4399999999999, "end": 542.88, "text": " Mainly this is entirely model free reinforcement learning." }, { "start": 542.88, "end": 549.56, "text": " And goes more into the direction of classic deep RL." 
}, { "start": 549.56, "end": 553.1199999999999, "text": " And you can see with the human data, you can already get pretty far." }, { "start": 553.1199999999999, "end": 557.3599999999999, "text": " So these down here are the leagues of StarCraft." }, { "start": 557.3599999999999, "end": 561.68, "text": " And this this here are percentiles of players." }, { "start": 561.68, "end": 566.0799999999999, "text": " And you see with the supervised training, you can get almost you can get better than" }, { "start": 566.0799999999999, "end": 569.9599999999999, "text": " 80 85% of human players already." }, { "start": 569.9599999999999, "end": 571.26, "text": " Right?" }, { "start": 571.26, "end": 576.52, "text": " So pretty, pretty impressive already simply by imitating humans." }, { "start": 576.52, "end": 589.52, "text": " Now so the the the way to to further improve this, and let's actually go first into how" }, { "start": 589.52, "end": 591.88, "text": " the model looks like." }, { "start": 591.88, "end": 597.52, "text": " So down here, they describe this model." }, { "start": 597.52, "end": 598.6999999999999, "text": " That's it." }, { "start": 598.6999999999999, "end": 603.66, "text": " So the model is supposed to map from input to output." }, { "start": 603.66, "end": 611.92, "text": " So from the screen that the agent sees, right, and some other things to what the agent is" }, { "start": 611.92, "end": 615.16, "text": " going to do to an action a." }, { "start": 615.16, "end": 619.68, "text": " If you simply do this at every time step, then you have a game playing agent." }, { "start": 619.68, "end": 623.9399999999999, "text": " So first, the question is, of course, how does this happen?" }, { "start": 623.9399999999999, "end": 631.72, "text": " Now the input isn't only the thing that the agencies which is this the mini map and the" }, { "start": 631.72, "end": 634.12, "text": " mini map?" }, { "start": 634.12, "end": 637.72, "text": " I believe that's the mini map or the entire map." }, { "start": 637.72, "end": 641.76, "text": " Well, it's it's in essence, it is a picture." }, { "start": 641.76, "end": 644.46, "text": " It is also a list of entities." }, { "start": 644.46, "end": 650.52, "text": " So the the game engine extracts a list of entities." }, { "start": 650.52, "end": 658.6, "text": " And these can be inside the screen here and outside the screen for friendly." }, { "start": 658.6, "end": 664.32, "text": " So the assumption is the agent knows about all of its units and where they are and what" }, { "start": 664.32, "end": 665.84, "text": " their statistics are." }, { "start": 665.84, "end": 672.12, "text": " So in this entity thing, for each entity, you have a list of what is its health, what" }, { "start": 672.12, "end": 677.9200000000001, "text": " is its type, what is its position, does it carry any items and so on all the things you" }, { "start": 677.9200000000001, "end": 679.88, "text": " need to know about this entity." }, { "start": 679.88, "end": 682, "text": " This is in this list of entities." }, { "start": 682, "end": 688.96, "text": " And along with that also opponent entities, but only the ones that are on screen." }, { "start": 688.96, "end": 690.54, "text": " Right." }, { "start": 690.54, "end": 695.08, "text": " So all of this goes into this list of entities." }, { "start": 695.08, "end": 697.44, "text": " And then the next features are scalar features." 
}, { "start": 697.44, "end": 703.28, "text": " And as I understand it, scalar features are things like what race are you playing currently?" }, { "start": 703.28, "end": 705.72, "text": " What time is it in the game and so on." }, { "start": 705.72, "end": 708.56, "text": " So these are additional features." }, { "start": 708.56, "end": 712.4799999999999, "text": " And also baseline features." }, { "start": 712.4799999999999, "end": 718.4799999999999, "text": " And this is mainly used to train the value network." }, { "start": 718.4799999999999, "end": 723.0999999999999, "text": " And if you this is not going to make sense if you know nothing about reinforcement learning." }, { "start": 723.0999999999999, "end": 730.0799999999999, "text": " But one main contribution of this paper is or not contribution, but kind of thing that" }, { "start": 730.0799999999999, "end": 735.6999999999999, "text": " they claim is that for computing the value network, they also use the observations." }, { "start": 735.7, "end": 741.22, "text": " So all of this of the opponent player, because you know this during training, because you're" }, { "start": 741.22, "end": 746.76, "text": " doing self play, and you don't need this value network during inference." }, { "start": 746.76, "end": 751.32, "text": " You can actually do this and this improves performance significantly." }, { "start": 751.32, "end": 759.76, "text": " Alright so that's just for people who know RL very well." }, { "start": 759.76, "end": 763, "text": " Everyone else don't don't worry too much about these things." }, { "start": 763, "end": 768.44, "text": " Alright so these are the inputs, the scalar features, the entity and the minimap." }, { "start": 768.44, "end": 771.04, "text": " Each one goes through separate encoders." }, { "start": 771.04, "end": 775.72, "text": " So the minimap goes through a ResNet which is a convolutional network." }, { "start": 775.72, "end": 781.8, "text": " And the entities go through a transformer which is kind of a thing to, it's appropriate" }, { "start": 781.8, "end": 784.2, "text": " to encode a set of entities right." }, { "start": 784.2, "end": 788.16, "text": " Scalar features go through a classic feed forward network MLP." }, { "start": 788.16, "end": 793.28, "text": " All of these get combined here into a deep LSTM that goes over time." }, { "start": 793.28, "end": 801.0799999999999, "text": " Now the deep LSTM is what really makes the strategy because each time step, each time" }, { "start": 801.0799999999999, "end": 806.74, "text": " step a screen like this is input into the into the thing." }, { "start": 806.74, "end": 811.64, "text": " But the agent also needs to remember what did it do last steps two steps ago right." }, { "start": 811.64, "end": 814.28, "text": " This is important because you don't have full observability." }, { "start": 814.28, "end": 818.48, "text": " You need to know what did I do in the in the past." }, { "start": 818.48, "end": 824.6, "text": " And that's where the so if the last step you saw this screen and the step before you saw" }, { "start": 824.6, "end": 831.12, "text": " this screen right then all of this would go through these encoding step into the LSTM" }, { "start": 831.12, "end": 832.12, "text": " right." }, { "start": 832.12, "end": 837.9599999999999, "text": " So the LSTM will encode now over time all of these different steps." 
}, { "start": 837.9599999999999, "end": 844.04, "text": " And so you can kind of say alright if I have just started building a building I should" }, { "start": 844.04, "end": 849.68, "text": " probably not build the same building again even though I can't see it on the screen right." }, { "start": 849.68, "end": 858.36, "text": " Because I know that three steps ago I did start building a build build a building." }, { "start": 858.36, "end": 865.4, "text": " So this is kind of the LSTM is basically where you integrate your strategy over time." }, { "start": 865.4, "end": 869.0799999999999, "text": " So from the LSTM you have to make two predictions." }, { "start": 869.0799999999999, "end": 873.64, "text": " You have to make a prediction of what to do." }, { "start": 873.64, "end": 880.88, "text": " This is the action and how valuable is your current state and how valuable is your current" }, { "start": 880.88, "end": 881.88, "text": " state." }, { "start": 881.88, "end": 883.72, "text": " This is called the value network." }, { "start": 883.72, "end": 887.76, "text": " This is a core component of deep reinforcement learning." }, { "start": 887.76, "end": 891.8, "text": " These two components one is called the policy which would be everything over here and what" }, { "start": 891.8, "end": 895.72, "text": " is called the value network which is called everything over here." }, { "start": 895.72, "end": 900.3199999999999, "text": " These are the things you need to do actor critic learning and actor critic learning" }, { "start": 900.3199999999999, "end": 902.64, "text": " is the current state of the art in deep RL." }, { "start": 902.64, "end": 907.06, "text": " So deep mind does nothing else here except as I said they use these baseline features" }, { "start": 907.06, "end": 909.1999999999999, "text": " for the value network." }, { "start": 909.1999999999999, "end": 912.4399999999999, "text": " But if you don't know what a value network is don't worry about it." }, { "start": 912.4399999999999, "end": 918.16, "text": " The important part for playing the game is actually the part over here that called the" }, { "start": 918.16, "end": 919.28, "text": " policy." }, { "start": 919.28, "end": 924.56, "text": " So first you need to do to decide what action you do and that there are many action types" }, { "start": 924.56, "end": 929.24, "text": " in Starcraft as I already said you can build a building you can move a unit you can actually" }, { "start": 929.24, "end": 933.12, "text": " move the camera that's an action type right because you want to maybe see what's over" }, { "start": 933.12, "end": 935.72, "text": " here or over here or over here." }, { "start": 935.72, "end": 943.72, "text": " So that's an action you can do and if you have decided on what action you want to do" }, { "start": 943.72, "end": 946.26, "text": " you have to decide when do I do it." }, { "start": 946.26, "end": 952.04, "text": " So you see the action type once you figured it out it goes into the next neural network" }, { "start": 952.04, "end": 957.32, "text": " and that decides okay when do I do it when do I do this action." }, { "start": 957.32, "end": 960.6800000000001, "text": " So it specifies a delay." 
}, { "start": 960.6800000000001, "end": 966.5600000000001, "text": " Then once you've decided what to do and when to do it it goes into the next neural network" }, { "start": 966.5600000000001, "end": 973.84, "text": " and that decides should I put this into the queue of actions because the agent here is" }, { "start": 973.84, "end": 980.62, "text": " limited to a certain number of actions per second and I think it's 22 actions per five" }, { "start": 980.62, "end": 987.6, "text": " seconds or something like this so in order to mimic you know human limitations." }, { "start": 987.6, "end": 993.36, "text": " So there's a queue of actions to be executed and the agent needs to decide do I really" }, { "start": 993.36, "end": 997.88, "text": " want is this action so important to put it into the queue." }, { "start": 997.88, "end": 1004, "text": " Alright if you have decided what to do when to do it whether you would like to do it at" }, { "start": 1004, "end": 1011.16, "text": " all right then you have to you have to say it goes into the next neural network and you" }, { "start": 1011.16, "end": 1016.28, "text": " have to say alright which units do I want to do it with right if you want to build a" }, { "start": 1016.28, "end": 1020.6, "text": " building you can have to choose one or many workers to do it." }, { "start": 1020.6, "end": 1025.6, "text": " I don't actually know how StarCraft works in this I'm a bit of a noob but you have to" }, { "start": 1025.6, "end": 1031.12, "text": " you have to select units with which to do the action for most of the thing and there" }, { "start": 1031.12, "end": 1037.6, "text": " I like the use of a pointer network here so what a pointer network is is a network that" }, { "start": 1037.6, "end": 1043.08, "text": " can point to its own inputs it's sort of like an attention network but not really in a pointer" }, { "start": 1043.08, "end": 1048.6, "text": " network if you have a set of inputs like we have here so entity entity entity entity" }, { "start": 1048.6, "end": 1054.3799999999999, "text": " right all these entities and you can see the entity embedding the entity encoder actually" }, { "start": 1054.38, "end": 1062.2800000000002, "text": " has skip connections that go here right so this network directly gets these these entities" }, { "start": 1062.2800000000002, "end": 1072.2, "text": " as input it can then write you then you have a neural network on top of that neural network" }, { "start": 1072.2, "end": 1081.24, "text": " that the neural network takes all of these things as an input and what the neural network" }, { "start": 1081.24, "end": 1089.44, "text": " will output is a pointer to one of these things right you can say look I point to this thing" }, { "start": 1089.44, "end": 1094.76, "text": " right here this is a called a pointer network and yeah as I said it's different from an" }, { "start": 1094.76, "end": 1104.04, "text": " attention network which might so an attention network is where you get a distribution actually" }, { "start": 1104.04, "end": 1108.08, "text": " get a distribution in both cases there is a difference but we don't have to really time" }, { "start": 1108.08, "end": 1115.76, "text": " to go into it here but in essence with a pointer network you can select which of these entities" }, { "start": 1115.76, "end": 1122.3799999999999, "text": " you want to do something with all right now you've decided on which action when whether" }, { "start": 1122.3799999999999, "end": 1128.8799999999999, "text": " to cue it with which 
unit to do it, and now you have to decide, for some actions, for example" }, { "start": 1128.8799999999999, "end": 1134.96, "text": " if the action is attack or heal or something, this target unit: which unit do you want to" }, { "start": 1134.96, "end": 1143.64, "text": " target, or which location on the map do you want to target, this is the target point" }, { "start": 1143.64, "end": 1150.56, "text": " here, and you can see again here are skip connections from the entity encoder and from the spatial" }, { "start": 1150.56, "end": 1158.16, "text": " encoder to these things, and while the target unit is an attention network, that's" }, { "start": 1158.16, "end": 1166.68, "text": " much like a pointer network, you kind of point to places in lists, the target point" }, { "start": 1166.68, "end": 1172.9, "text": " is a deconvolutional ResNet. What that means is, you have this spatial encoder here that will" }, { "start": 1172.9, "end": 1179.0800000000002, "text": " embed the mini map, so there will be a neural network right here, let's draw the" }, { "start": 1179.0800000000002, "end": 1186.6000000000001, "text": " neural network in this color right here, it will give you an embedding of that, right," }, { "start": 1186.6, "end": 1193.12, "text": " and that's what you feed for example into the LSTM, but" }, { "start": 1193.12, "end": 1202.84, "text": " then what you do is you have a deconvolutional network which again produces a mini map, but" }, { "start": 1202.84, "end": 1208.9599999999998, "text": " this is not the original mini map, it's kind of a distribution" }, { "start": 1208.96, "end": 1218.48, "text": " of locations, saying here, right here, do I want to point, alright, so this neural" }, { "start": 1218.48, "end": 1225.4, "text": " network is responsible for producing this dot on the mini map, basically saying okay," }, { "start": 1225.4, "end": 1231.52, "text": " I know what to do, when to do it, with which units to do it and so on, and I want to do it right" }, { "start": 1231.52, "end": 1239.76, "text": " here on the mini map, okay, and now you have it, right, you go from the input, which are these" }, { "start": 1239.76, "end": 1245.5, "text": " things, the mini map, the entities and so on, to what do I want to do, where, when, with which" }, { "start": 1245.5, "end": 1252.68, "text": " units and so on, right, this is called a policy, and it's extremely complicated, every one of" }, { "start": 1252.68, "end": 1259.84, "text": " these boxes here is a neural network, and you can see it's a lot to train," }, { "start": 1259.84, "end": 1266.04, "text": " and of course they have a lot of resources since they are DeepMind, but that's the" }, { "start": 1266.04, "end": 1276.84, "text": " main thing. Alright, they have a few tricks to train this and we won't go too much into" }, { "start": 1276.84, "end": 1286.72, "text": " this, but one of the tricks is V-trace from the IMPALA paper, another trick is" }, { "start": 1286.72, "end": 1296.76, "text": " UPGO, the upgoing policy update, and a third trick is TD lambda learning here, and all of these" }, { "start": 1296.76, "end": 1302.24, "text": " are kind of improvements onto classic actor critic reinforcement learning style, like" }, { "start": 1302.24, "end": 1312.88, "text": " A2C or A3C; if you are interested then you can look into these things, so" }, { "start": 1312.88, "end": 1321.24, "text": " that's how they train it, and the question now is
what's the protocol for training it?" }, { "start": 1321.24, "end": 1327.16, "text": " We saw okay, there is supervised learning, cool, then there is reinforcement learning, alright," }, { "start": 1327.16, "end": 1331.66, "text": " but you can't just apply it naively, and in the reinforcement learning, this is what we said," }, { "start": 1331.66, "end": 1337.92, "text": " you get kind of a reward, and the reward goes into this TD lambda and V-trace and up" }, { "start": 1337.92, "end": 1346.3200000000002, "text": " going policy update to train the value function and the policy, but the special thing that" }, { "start": 1346.3200000000002, "end": 1352.76, "text": " this paper introduces is what's called league training. Now in papers like AlphaGo or" }, { "start": 1352.76, "end": 1359, "text": " AlphaZero, what had been done is called self play, and self play basically means you have" }, { "start": 1359, "end": 1366.64, "text": " an agent, right, let's see how I draw an agent, this is" }, { "start": 1366.64, "end": 1371.5200000000002, "text": " supposed to be an artificial intelligence, right, okay, it has" }, { "start": 1371.5200000000002, "end": 1382.5200000000002, "text": " a little hat, right, a funky hat, it's a robot, and the robot will play a copy of itself, right," }, { "start": 1382.5200000000002, "end": 1390.5600000000002, "text": " and the copy might be slightly different, but basically these two play" }, { "start": 1390.5600000000002, "end": 1395.48, "text": " each other and thereby become better and better and better, and you can see this like over" }, { "start": 1395.48, "end": 1401.84, "text": " time, as the purple one gets better the blue one gets better as well, because they" }, { "start": 1401.84, "end": 1406.8, "text": " kind of play against each other, and when one falls behind," }, { "start": 1406.8, "end": 1413.24, "text": " then they simply copy over from the other one, they basically copy the other one, and" }, { "start": 1413.24, "end": 1419.44, "text": " then they catch up again, right, they catch up, and they continue competing, so by" }, { "start": 1419.44, "end": 1426.4, "text": " competing against each other they get better, and this is called self play. Now people have" }, { "start": 1426.4, "end": 1431.16, "text": " noticed this kind of leads to instabilities, because you can get trapped" }, { "start": 1431.16, "end": 1439.06, "text": " in cycles, like rock-paper-scissors cycles, so what they do is, they will actually, as they" }, { "start": 1439.06, "end": 1445.2, "text": " get better, so this is the first version, right, and the second version, they are a bit better" }, { "start": 1445.2, "end": 1457.6000000000001, "text": " now, so they have bigger hats, right, and here even larger hats, right, and down here" }, { "start": 1457.6000000000001, "end": 1463.44, "text": " they are even better, so they have like ginormous hats, but they might have some weaknesses, because" }, { "start": 1463.44, "end": 1469.1200000000001, "text": " they only play against each other, right, so these are the same players, but over time what" }, { "start": 1469.12, "end": 1477.08, "text": " they will do is they will occasionally play old versions of the other player or of" }, { "start": 1477.08, "end": 1485.36, "text": " themselves, right, occasionally the new versions will fall back and play not" }, { "start": 1485.36, "end": 1491.3999999999999, "text": " only
 the current versions of the other player, but also old versions of themselves, right, so this" }, { "start": 1491.3999999999999, "end": 1498.52, "text": " is called fictitious self play, in that you don't only play" }, { "start": 1498.52, "end": 1503.4, "text": " your current opponent or your current self, I mean it's the same anyway because you" }, { "start": 1503.4, "end": 1509.2, "text": " keep copying the weights, you also play the old ones, and this paper goes a step further" }, { "start": 1509.2, "end": 1518.28, "text": " and says actually, we do this, but we want to prioritize the good ones, so for example" }, { "start": 1518.28, "end": 1524.32, "text": " we know that the current ones are good, right, but we know that this particular" }, { "start": 1524.32, "end": 1533.4399999999998, "text": " one was also pretty good so far, so we keep making these" }, { "start": 1533.4399999999998, "end": 1540.8999999999999, "text": " new ones play against this one more often, and this has led to kind of an improvement" }, { "start": 1540.8999999999999, "end": 1547.62, "text": " in these kind of self play algorithms (see the prioritized sampling sketch after this segment list), and the real new part of this AlphaStar paper" }, { "start": 1547.62, "end": 1554.4399999999998, "text": " is the fact that they do this league training, and in the league training, this" }, { "start": 1554.4399999999998, "end": 1559.7199999999998, "text": " is what it looks like, but I find this graphic rather confusing, I'd rather explain it like" }, { "start": 1559.7199999999998, "end": 1567.8799999999999, "text": " something like this: alright, so there is your current strategy and you" }, { "start": 1567.88, "end": 1577.48, "text": " have a hat, right, and you do all of the I-play-against-myself-" }, { "start": 1577.48, "end": 1584, "text": " with-the-smaller-hat thing, right, I play against past versions of myself, fine, but then you" }, { "start": 1584, "end": 1596.88, "text": " also have what's called exploiters, and an exploiter, let's call" }, { "start": 1596.88, "end": 1603.68, "text": " it a triangle hat because it's very evil, what it does is it specifically targets only the" }, { "start": 1603.68, "end": 1611.48, "text": " current good agent, right, so this agent right here is tasked with playing old versions" }, { "start": 1611.48, "end": 1618.5200000000002, "text": " of itself and playing the exploiter, both at the same time, but the exploiter is only" }, { "start": 1618.5200000000002, "end": 1626.5200000000002, "text": " tasked with playing this thing, so what it can do is it can specialize in exploiting" }, { "start": 1626.52, "end": 1632.4, "text": " whatever weaknesses this player has, and of course the hope is that this player will become" }, { "start": 1632.4, "end": 1639.84, "text": " better in response, because there's a player trying to exploit it, right, and as" }, { "start": 1639.84, "end": 1645.16, "text": " this player becomes better than this one, the exploiter here is reinitialized and tries to" }, { "start": 1645.16, "end": 1651.44, "text": " find new weaknesses, right, so as this one continues to learn, the exploiters" }, { "start": 1651.44, "end": 1658.52, "text": " are reinitialized, you can see this here, so these are called the main agents, and you can see" }, { "start": 1658.52, "end": 1662.88, "text": " they play against each other, right, they play against each other, they play" }, { "start": 1662.88,
"end": 1670.92, "text": " against past versions of themselves so these are past versions of themselves but then there" }, { "start": 1670.92, "end": 1675.66, "text": " are these main exploiters and the main exploiters they're constantly reinitialized from human" }, { "start": 1675.66, "end": 1684.24, "text": " data right you can see this here they're reinitialized and they only play against these main players" }, { "start": 1684.24, "end": 1688.3200000000002, "text": " right they don't have to deal with any of the past players or playing against themselves" }, { "start": 1688.3200000000002, "end": 1694.16, "text": " stuff they only try to exploit the main players and thereby the main players get better once" }, { "start": 1694.16, "end": 1700.6000000000001, "text": " they get better than an exploiter they are reinitialized so the exploiters are reinitialized" }, { "start": 1700.6, "end": 1706.84, "text": " to find new exploits of the main agents the third component is what's called a league" }, { "start": 1706.84, "end": 1714.36, "text": " exploiter and a league exploiter is the following so the league let's the league exploiter here" }, { "start": 1714.36, "end": 1725.28, "text": " and its hat is a wavy hat and what the league exploiter does is it plays against past versions" }, { "start": 1725.28, "end": 1734, "text": " of itself and others so it does play against the league exploiter sorry with smaller wavy" }, { "start": 1734, "end": 1742.56, "text": " hat it also plays against this thing by the way the this this here also plays against" }, { "start": 1742.56, "end": 1748.44, "text": " past versions of this and of everything else you can see here the past version arrows it" }, { "start": 1748.44, "end": 1753.44, "text": " goes against all past players so this this represents all the past players that ever" }, { "start": 1753.44, "end": 1762.96, "text": " existed and so does the so does the so here but also against past versions of this of" }, { "start": 1762.96, "end": 1769.44, "text": " this main exploiter here but the important thing is the current main exploiter doesn't" }, { "start": 1769.44, "end": 1777.28, "text": " play past versions of its of itself right so this also plays this and this place this" }, { "start": 1777.28, "end": 1784.16, "text": " and this place this and this also place this so the league exploiter they they do take" }, { "start": 1784.16, "end": 1793.12, "text": " part in this whole league business like playing against past versions of all the players but" }, { "start": 1793.12, "end": 1802.3999999999999, "text": " it only plays against the main ex against the main exploiters and this is a thing that" }, { "start": 1802.4, "end": 1807.52, "text": " i find missing here honestly i don't know if i don't understand this but i'm pretty sure" }, { "start": 1807.52, "end": 1814.3200000000002, "text": " i do like these also play these and that's an arrow missing in the in the drawing uh" }, { "start": 1814.3200000000002, "end": 1817.8000000000002, "text": " the league exploiters play the main agents but the main difference between the league" }, { "start": 1817.8000000000002, "end": 1823.16, "text": " exploiters and the main agents is the league exploiters they don't play themselves right" }, { "start": 1823.16, "end": 1828.72, "text": " there is no there's no playing themselves on the league exploiters so the league exploiters" }, { "start": 1828.72, "end": 1837.76, "text": " what they can do is they can find weaknesses of the entire league and kind of train train" 
}, { "start": 1837.76, "end": 1843.88, "text": " the by playing against the main opponents using those found weaknesses you bet that" }, { "start": 1843.88, "end": 1850.88, "text": " the main ex the main agents will get better against those major weaknesses of the entire" }, { "start": 1850.88, "end": 1858.96, "text": " league right so the main agents first of all they get better by playing the main exploiters" }, { "start": 1858.96, "end": 1864.24, "text": " because the main exploiters are mainly trying to exploit the main agents the main agents" }, { "start": 1864.24, "end": 1870.6000000000001, "text": " also get better by playing the league exploiters because the league exploiters find weaknesses" }, { "start": 1870.6000000000001, "end": 1877.3400000000001, "text": " of the entire league right so and the main agents they also get better by playing each" }, { "start": 1877.34, "end": 1884.22, "text": " So that makes these these main agents kind of..." }, { "start": 1884.22, "end": 1888.1399999999999, "text": " You can say they're trained against everything under the sun," }, { "start": 1888.1399999999999, "end": 1893.02, "text": " against any possible exploit that can be found either in themselves or" }, { "start": 1893.02, "end": 1898.4599999999998, "text": " generally. And thereby they get really good at StarCraft," }, { "start": 1898.4599999999998, "end": 1902.4599999999998, "text": " because they can counter pretty much everything. So this is how" }, { "start": 1902.4599999999998, "end": 1906.4599999999998, "text": " league training works and this is what I feel is the main contribution of this" }, { "start": 1906.46, "end": 1911.18, "text": " paper to the reinforcement learning world." }, { "start": 1911.18, "end": 1916.22, "text": " Now they do an ablation study here. You can see" }, { "start": 1916.22, "end": 1920.94, "text": " where this ends up. So these final agents here," }, { "start": 1920.94, "end": 1928.7, "text": " they end up in Grandmaster level StarCraft and beat 99." }, { "start": 1928.7, "end": 1934.94, "text": " some percent of human players. So really really good." }, { "start": 1934.94, "end": 1939.1000000000001, "text": " They do an ablation study of all of the tricks they use." }, { "start": 1939.1000000000001, "end": 1942.6200000000001, "text": " So this is pretty much all tricks they use." }, { "start": 1942.6200000000001, "end": 1949.5, "text": " And you can see here this includes this league composition." }, { "start": 1949.5, "end": 1953.3400000000001, "text": " What happens if we only have main agents, then main exploiters, league" }, { "start": 1953.3400000000001, "end": 1959.5, "text": " exploiters, and you can see the elo going up." }, { "start": 1959.5, "end": 1965.66, "text": " Then you can see multi-agent learning. How much does this fictitious" }, { "start": 1965.66, "end": 1969.1, "text": " self play? The fact that we prioritize to strong" }, { "start": 1969.1, "end": 1973.58, "text": " players and so on. How much does this help? And you again see the elo" }, { "start": 1973.58, "end": 1978.54, "text": " going up. How much does it help that we use human data?" }, { "start": 1978.54, "end": 1982.46, "text": " How much does it help that we use these different networks?" }, { "start": 1982.46, "end": 1991.26, "text": " They have very good ablation studies of how much each of the" }, { "start": 1991.26, "end": 1996.38, "text": " things help. 
Here they investigate what if we didn't have a camera" }, { "start": 1996.38, "end": 2002.78, "text": " interface? So what if we could see the entire game at once, and not only" }, { "start": 2002.78, "end": 2005.5, "text": " the parts that are within the camera?" }, { "start": 2005.5, "end": 2009.02, "text": " And what if we didn't need to move the camera?" }, { "start": 2009.02, "end": 2014.22, "text": " They investigate the off-policy learning corrections that we mentioned" }, { "start": 2014.22, "end": 2018.7, "text": " and so on. I find this very cool that they do these" }, { "start": 2018.7, "end": 2023.34, "text": " huge ablation studies to show really how much each of these tricks that they used" }, { "start": 2023.34, "end": 2029.58, "text": " helps in generating their superior performance." }, { "start": 2029.58, "end": 2033.98, "text": " Here you can see how these agents develop" }, { "start": 2033.98, "end": 2038.7, "text": " over training, and they have a massive infrastructure and they train for" }, { "start": 2038.7, "end": 2041.9, "text": " days. You can see this here. But you can see that" }, { "start": 2041.9, "end": 2045.3400000000001, "text": " the main agents just get better and better and better" }, { "start": 2045.3400000000001, "end": 2049.18, "text": " and better. While the main exploiters of course" }, { "start": 2049.18, "end": 2053.02, "text": " they stay the same, but they kind of keep getting reinitialized." }, { "start": 2053.02, "end": 2058.46, "text": " So these, sorry, these" }, { "start": 2058.46, "end": 2064.86, "text": " main exploiters are trained to exploit these main agents." }, { "start": 2064.86, "end": 2068.46, "text": " This one is trying to exploit these ones. They're not by themselves" }, { "start": 2068.46, "end": 2071.34, "text": " really good agents, but they're simply trained" }, { "start": 2071.34, "end": 2074.54, "text": " to find and exploit weaknesses of the main agents." }, { "start": 2074.54, "end": 2078.7, "text": " Likewise these league exploiters do get better with the league," }, { "start": 2078.7, "end": 2085.26, "text": " but they are only concerned with exploiting current and past versions of" }, { "start": 2085.26, "end": 2089.26, "text": " the league. Also to make the main agents better." }, { "start": 2089.26, "end": 2092.94, "text": " So everything is geared towards making these main agents better." }, { "start": 2092.94, "end": 2099.34, "text": " And you can see it actually works." }, { "start": 2099.34, "end": 2105.02, "text": " They have some analysis of which units these agents build." }, { "start": 2105.02, "end": 2110.06, "text": " I'm not versed enough in StarCraft to comment on this." }, { "start": 2110.06, "end": 2113.66, "text": " But all in all I find this to be a very cool paper," }, { "start": 2113.66, "end": 2119.5, "text": " and I find it describes fairly clearly what they do." }, { "start": 2119.5, "end": 2123.66, "text": " Though they do not release the source code." }, { "start": 2123.66, "end": 2128.78, "text": " They release some kind of pseudo code. But the analysis and the ablations" }, { "start": 2128.78, "end": 2134.86, "text": " are very good. The results are, let's say, questionable, because of course" }, { "start": 2134.86, "end": 2140.54, "text": " you can't compare" }, { "start": 2140.54, "end": 2144.54, "text": " machines to humans, especially in a game where you have to make quick actions."
}, { "start": 2144.54, "end": 2148.46, "text": " Even if you limit the actions, they do this here." }, { "start": 2148.46, "end": 2154.54, "text": " So they have this monitoring layer which limits the actions and" }, { "start": 2154.54, "end": 2160.78, "text": " introduces delay and so on. But still if it's not the same as a" }, { "start": 2160.78, "end": 2165.58, "text": " human who might not always be able to do these 22" }, { "start": 2165.58, "end": 2170.46, "text": " actions per five seconds. If something quick happens they may" }, { "start": 2170.46, "end": 2174.06, "text": " need to have some kind of relaxation phase and so on." }, { "start": 2174.06, "end": 2178.14, "text": " But they try with these kind of delays and action limits. They try to" }, { "start": 2178.14, "end": 2182.54, "text": " model these kind of limitations." }, { "start": 2182.54, "end": 2187.58, "text": " I find this as fair as possible." }, { "start": 2187.58, "end": 2191.74, "text": " This is what I find kind of problematic. So they own units as I said." }, { "start": 2191.74, "end": 2196.06, "text": " The agent can also see the ones that are outside the camera." }, { "start": 2196.06, "end": 2202.3799999999997, "text": " And that seems kind of shady. Because of course you can you can claim" }, { "start": 2202.3799999999997, "end": 2205.1, "text": " humans can do whatever command groups to also" }, { "start": 2205.1, "end": 2211.8199999999997, "text": " control units outside the camera. But it's not really the case." }, { "start": 2211.8199999999997, "end": 2217.74, "text": " So that's sort of a distinct advantage that the machine has." }, { "start": 2217.74, "end": 2222.94, "text": " But yeah in any case I find it to be very well done." }, { "start": 2222.94, "end": 2228.7799999999997, "text": " And I hope this made it a bit clearer what the exact contributions are." }, { "start": 2228.78, "end": 2235.5, "text": " And with that have a fun time playing against AlphaStar." }, { "start": 2235.5, "end": 2263.02, "text": " Bye bye." } ]
CRlN-cYFxTk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (ML Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nerf network", "nerf neural network", "deep learning tutorial", "what is deep learning", "introduction to deep learning", "deep learning explanation", "nerf network explanation", "neural rendering", "differentiable rendering", "differentiable neural rendering", "volume rendering", "nerf view synthesis", "view synthesis", "view synthesis nerf", "view synthesis neural", "novel view synthesis", "nerf" ]
#nerf #neuralrendering #deeplearning View Synthesis is a tricky problem, especially when only given a sparse set of images as an input. NeRF embeds an entire scene into the weights of a feedforward neural network, trained by backpropagation through a differential volume rendering procedure, and achieves state-of-the-art view synthesis. It includes directional dependence and is able to capture fine structural details, as well as reflection effects and transparency. OUTLINE: 0:00 - Intro & Overview 4:50 - View Synthesis Task Description 5:50 - The fundamental difference to classic Deep Learning 7:00 - NeRF Core Concept 15:30 - Training the NeRF from sparse views 20:50 - Radiance Field Volume Rendering 23:20 - Resulting View Dependence 24:00 - Positional Encoding 28:00 - Hierarchical Volume Sampling 30:15 - Experimental Results 33:30 - Comments & Conclusion Paper: https://arxiv.org/abs/2003.08934 Website & Code: https://www.matthewtancik.com/nerf My Video on SIREN: https://youtu.be/Q5g3p9Zwjrk Abstract: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x,y,z) and viewing direction (θ,ϕ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons. Authors: Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there. Look at these objects right here. What if I told you that I'm going to give you a bunch of pictures of these objects from different sides. And what you have to do is you have to come up with a system that generates me the picture as if the object was viewed from any direction. So something like this, right? Any direction, you can get me a picture of that object from just a few input pictures. This is a pretty daunting task. Specifically, look at the ship, for example, right here. You can see in the water, there's specularities that only appear if you view it from a very particular angle, right? Also the drum kit, you see that the microphone on the left, it has very specific structure to it. So this is not at all a trivial task. There's very intricate things here. And this is not only with toy data; here you can see real world scenes. So this isn't some kind of abstract thing. You can actually use this in the real world. Now, don't look at these things too long. They tend to make me dizzy. But that's ultimately the goal. Input a few pictures and then being able to synthesize any kind of view. So the paper we're going to look at, it's a bit of an older paper, but I think it's pretty cool and it's relevant. And there is a bunch of follow up work to this. This is very popular right now. This is the paper introducing NeRF, representing scenes as neural radiance fields for view synthesis. And it's by Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi and Ren Ng. As you can see, the task is called view synthesis. And what you can do with view synthesis, or with this paper specifically, is it also takes into account your viewing direction, which gives a much more realistic impression. We've already seen this with kind of the lighting here. But in order to really show you this, on the left you're going to see this novel view that is rendered. And on the right, it's sort of like a fake thing that you couldn't do in reality. But what we're going to do is we're going to keep the camera at the same position, but we're going to tell the scene that the camera is switching around. And that makes you able to see just how different a room can look if viewed from different directions. So the right one is really kind of physically impossible. It's just meant to show you how differently things look if they think they are viewed from a different direction. Right. So the same thing here. And it just looks amazing. What you get automatically out of these systems are depth maps. These are notoriously hard to get, especially for complex scenes such as this one. Also, this one right here. It's very complex and it handles it fairly well. Sorry. You can even do something like AR right here, since you now have a representation that tells you how far everything is away, and you have it from different views. You can see. Yeah. And you can even get meshes. So I should be able to move that around here. This is now a mesh. It's not only view synthesis, but you can actually fill out the voxels, which is a slightly different task. And if you have pictures from all around, you can synthesize kind of any view in between, as you can see right here. So we're going to switch away from the fancy videos to the paper. Now the special thing about this paper is it's in the spirit of something like SIRENs. So SIRENs, I've made a video about those.
And the special thing right here is it uses deep learning in a little bit of a different way than we would normally use it. So first of all, what does the abstract say? We present a novel, sorry, a method (well, it is novel) that achieves state of the art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. So the task description is the view synthesis, right? Synthesizing novel views. Also, you're given a sparse set of input views. So you have a scene. Let's say you have a tree or something like this. So here's a tree. I know, beautiful. And you're given a bunch of images. So maybe someone, you know, stood here and took a picture. So the picture kind of views in this direction. It depicts the tree. And someone stood here and took a picture of the same tree. Maybe the same person. Someone flew up here, took a picture of that tree. So you get a bunch of those. Maybe you get 20 or something around the tree, maybe more, maybe less. So from these pictures, you want to build a thing that can generate any view from anywhere. And the way they do it is by optimizing an underlying continuous volumetric scene function. This is a cryptic way of saying it, but it goes along the direction of the SIRENs and kind of a bigger trend, I think, in AI and in these neural rendering papers and so on, which is that we want to overfit a neural network to a single data point. This is really different from classic deep learning. If you ask someone, how would you go about this problem with deep learning? What they would tell you is, okay, I need a data set. I need a data set of these different scenes and the input. Now I have my X and my Y. So the input X is going to be always like, you know, 30 images of a scene and Y is going to be the scene itself or whatnot, like the tree or the mesh of the tree or something like this. And I need this many, many times. So I need a data set with 30 images of, I don't know, a house and the Y is the house and so on. So that's my training data set. And in my test data set, it can be something else, right? So it can be things that I now want to test. However, in this particular case, this is not the case here. It is one neural network that is fit to one scene. So what we have is a neural network that has a bunch of layers, and all the neural network cares about is this particular scene, right? If we want to render a new scene, we take a new neural network. That's what I mean. We overfit a single neural network to this particular scene. We use the 30 images or so we got to completely overfit this neural network. And the goal is going to be that the tree itself, like the scene itself, is going to be in the weights of this neural network. So the weights of the neural network now represent the scene. And this has various advantages, right? We already saw this with the SIRENs: very often this is a much, much better, more compact representation of the entire mesh than any other way, like if you store it in voxels or something. But I hope this is a bit clear. Now, of course, the question is, what's the input and what's the output of this neural network? So the input is the following. Imagine you have a coordinate system here. So you get a coordinate system X, Y, and Z. Okay. And the neural network gets two things as an input. It gets as an input a position in that coordinate system, which we call X.
And X is actually X, Y, Z, a three dimensional vector. Right. For example, right here, this is our X now. And also we get a D, which is a viewing direction. Okay. So, for example, if my camera is the top camera right here, the viewing direction would be this ray here. Well, everything's orange. I make that blue. So the viewing direction D would be that. Okay. So the angle here, we care about the angle. It's actually two angles you need to describe this viewing direction. So a position and the viewing direction. And the output of the neural network, what does it output? The output of the neural network is going to be a color, like what color is at that particular location, and the density: is there even something at that particular location? Right. So the density tells you whether there is something or not. And if there is something, the color tells you what color it is. All right. This is a really different way, I want to stress that again, of using neural networks. There are no longer images going in and, you know, something coming out. What goes in is a position and a direction. So you ask the neural network: hey, neural network, you in your entirety represent this scene. You represent, if you're trained well, if you're overfit well on the tree. Now, I want to know, at a particular location in this scene, viewed from a particular angle, what am I going to see? So on this picture right here, I'm wondering for this pixel: if I send a ray to this location, what am I going to see? And the network will tell you: you're probably not going to see anything because there's nothing there. Or, if there is something there, you're going to see the color, I don't know, red. So from this you can pretty easily get a picture. Namely, if I have my frame of the picture, for each pixel I need to send a ray through the scene. So I send a ray through the scene. And what I need to do is simply query this model at each location. Here, here, here, here, here, here, here, and so on. At each location, I will ask the neural network: is there something there? And if there is, what kind of color am I going to see? And what you'll get is a bit of a curve. So if here is your zero and you send the ray out into the scene, and this is the density going up (they have these graphs in the paper, by the way, I'm not smart enough to come up with them by myself), they say, well, maybe at the beginning you're not going to see anything because there's nothing there. But then, you know, at some point you're going to see something. There is something there. You hit the tree, right? And you're inside the tree. And then you're out of the tree again. At the same time, at every point, it gives you a color. Now here, it actually doesn't matter what the color is. It will still output a color, but it doesn't matter. And here it's going to say green, right? It's going to say at every point here, green, green, green, green. And here, I guess it doesn't matter. It's probably going to say green as well. But in any case, what you can now do is you can simply look at where I hit the object the first time, which is here, right, when the density goes up, and what color is there. And now I know what I need to render at that particular pixel. Now you can simply do this for all pixels and you've got yourself an image. And the neural network is powerful enough that for the same location, you can see this right here; a minimal sketch of what such a network looks like follows.
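As a rough illustration, something like the following could stand in for that network. This is a deliberately shrunken sketch, not the paper's architecture: the real model is much deeper, feeds positional encodings of the inputs (discussed further down), and the layer sizes and names here are my own assumptions.

```python
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Minimal NeRF-style MLP sketch: (position x, direction d) -> (rgb, sigma)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma = nn.Linear(hidden, 1)     # density depends on position only
        self.color = nn.Sequential(nn.Linear(hidden + 3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 3))  # color also sees d

    def forward(self, x, d):
        h = self.trunk(x)                               # x: (n, 3) positions
        sigma = torch.relu(self.sigma(h))               # density >= 0
        rgb = torch.sigmoid(self.color(torch.cat([h, d], dim=-1)))  # d: (n, 3)
        return rgb, sigma

# query one point from one viewing direction
net = TinyRadianceField()
rgb, sigma = net(torch.rand(1, 3), torch.tensor([[0.0, 0.0, 1.0]]))
```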
It can give you different results depending on the viewing direction. So that makes it such that it can kind of depend on where you view it from. It can capture these lighting effects, these reflections. And also it can capture transparency, because imagine you have a curve that is not as clear as this one, but you have a curve that is something like this. So here is one wall of a glass and here is another wall of the glass. And they go up in density, but they're not fully dense. And the front of the glass is maybe blue and the back of the glass is red. And now if you integrate your ray along this, and you integrate weighted by the density, you're going to get a mixture of preferably blue, because that's in the front, but also a little bit of red. You can see that if a ray goes through here, you can handle transparency. And so this is a really powerful model right here. And again, there's no need for a data set other than the scene that is right in front of you. So the goal is going to be that if in the future we want to make augmented reality applications, we want to make games and so on, you are not actually going to store a mesh or kind of a voxel grid of some scene. What you're going to store is a neural network that can be queried from anywhere you want to look at the scene. And the neural network will tell you what you're going to see. It just happens that these things work extraordinarily well. So here's the process again. The task: you get a set of input images right here. You want to find out where they're taken from. So for each input image, you need to determine where was the camera and in which direction did it look. This is a known problem; you know all these classic structure-from-motion, SLAM approaches and so on, they can determine the camera positions from the pictures. So that's a thing you can take from existing research. And then you want to render the new views. And yeah, here is, I think, where they get into it. We represent, they say, a continuous scene as a 5D vector-valued function. And this vector function is going to be a neural network. It has a five dimensional input, and the output is going to be a color, which is three dimensions, and a density, which is one dimension. So the input is a 3D location and a 2D viewing direction. And the output is a color and a volume density. In practice, we express direction as a 3D Cartesian unit vector. And they say: we approximate this continuous 5D scene representation with an MLP network. So the network, as we said, this is the input, this is the output. And we optimize its weights to map from each input 5D coordinate to its corresponding volume density and directional emitted color. Now, the only question is, of course, we have these images; we don't actually have, as a training set, the densities at each place. So everything needs to be grounded into the images that we have. Now, luckily, the whole process that I've described here, which you see again here: if you want to render an image, you take an image, you pick a pixel, you shoot a ray, and you sample along the ray and you ask your network what's there. The network will tell you if there's something there, and if so, what color you're going to see, and the density over time. And then you can render an image. Now, if you already have an image (and we are given a set of these images), you can now calculate a loss.
Namely, what do I see and what does the network tell me I should see? If the network is not trained yet, that's going to be a pretty big loss. And if you make the loss something differentiable, then this whole process is in fact differentiable. That's the next cool thing about this. The whole process of sending the ray, sampling the positions, integrating over them, and at the end coming up with a pixel color, that is a differentiable process. If, of course, you do it correctly. But that means we can use those 30 images, or 50 or whatever we have, in order to construct a big loss. Every ray, so every pixel in every picture that we have, defines a ray. So every ray essentially is a data point that we can fit to. So at the end, we get a pretty sizable data set for the network, which is going to be number of pixels times number of pictures. However, again, it is a different problem than having a data set of many of these scenes. So the whole process is differentiable, and that means you can just fit the neural network to this scene. You overfit it to these 30 images that you have, and that's going to be your network. And this network then is going to represent the scene in its weights. So the weights are the scene at the end. There are lots of engineering tricks here. So, for example: we encourage the representation to be multi view consistent by restricting the network to predict the volume density as a function of only the location X, while allowing the RGB color to be predicted as a function of both location and viewing direction. So the reasoning here is that the volume density is not dependent on the direction. Even if something is kind of transparent, it's going to be the same transparency from different directions. There's only a very limited set of materials where that is not the case. Right. So as a simplifying concept, we're going to say the transparency of the object is always the same: where stuff is, is independent of where you look from. It's only how stuff looks that is dependent. So the RGB color is going to be a function of both location and viewing direction. And what they do is essentially: they input X right here, the location, they yank this through a network, they get out two things. So they first get out this density, and they also get out a hidden representation; that hidden representation they then concatenate with the viewing direction, and that goes through another stack of layers in order to give them the color. I think it's also, you know, you could do something with a transformer here and some causal masking, though I'm pretty sure someone has already done this, given that the paper is almost ancient at one year of age. In the machine learning world, that's really old. So this is the formula for rendering. This is a technique called volume rendering with radiance fields. A radiance field is a function that tells you exactly what we train our network to do, namely, if I look from here and I look at that point, what do I see? What you want to do is you want to send a ray through the scene and you want to integrate along that ray. You have kind of a far bound and a near bound, and you want to integrate from the near bound to the far bound. So that means you send the ray through the scene and integrate along it. This T term right here tells you something important; a numerical sketch of the whole integral is below.
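Here is one way to write that integral numerically for a single ray. It follows the standard alpha-compositing quadrature used for radiance fields; the variable names and the toy numbers are mine, not the paper's.

```python
import torch

def composite_ray(rgb, sigma, deltas):
    """Quadrature of the volume rendering integral along one ray.

    rgb:    (n, 3) colors at the sampled points
    sigma:  (n,)   densities at those points
    deltas: (n,)   distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)      # chance of stopping in each bin
    surv = 1.0 - alpha + 1e-10                    # chance of passing through it
    # T_i: probability the ray survives everything *before* sample i
    trans = torch.cumprod(torch.cat([torch.ones(1), surv]), dim=0)[:-1]
    weights = trans * alpha                       # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)    # final pixel color

# toy ray with four samples; the squared error against a known pixel is
# differentiable, so gradients flow back into whatever produced rgb and sigma
rgb = torch.rand(4, 3, requires_grad=True)
pixel = composite_ray(rgb, torch.tensor([0.0, 2.0, 5.0, 0.0]), torch.full((4,), 0.1))
loss = ((pixel - torch.tensor([0.2, 0.7, 0.1])) ** 2).mean()
loss.backward()
```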
You can see the density is in here, along the ray from the beginning to the point where you are. That is the probability that the ray doesn't hit anything, right? It's the probability that the ray goes on through the room. Basically, it's a probability of empty space. Or, you know, the inverse of that: this distinguishes whether there is something or not, whether the ray continues up until the point T or not. So you have whether or not the ray is actually at that particular point, and how dense that particular point is, so how much stuff there is in terms of occlusion for your ray. So if this is high, your ray is going to stop and you're going to adopt the color that is there. You can see this is multiplied by the color at that particular place. So you send the ray, and as soon as your system determines, you know, there's something here, then, since the density is multiplied by the color, your ray is going to adopt the color of whatever is there. And after that, this quantity here is going to be small, because this quantity is again an inner integral that tells you whether or not the ray even reaches that location. So the ray reaches the first location, at which point it's going to adopt the color, and after that, even though there is stuff, even though the density is high, the ray is not reaching it. So the whole formula captures all of this. And as we said, with a bit of nuance (if this is not always zero-one), it can handle transparency as well. And here they demonstrate again from the scene: you have two different points in the same scene, but viewed from different locations. And on the right, they show you, this is all the same point in the scene, but the circle represents the different angles you can view it from, and you can see that the color is really different depending on the angle you look from. What do we have here? There are a lot of tricks. Oh yeah, so they approximate the integral with quadrature, as sketched above, which has also existed before. And they have a bunch of tricks. So the first trick to really get this to work is, not a novel thing, but the employment of a positional encoding. The positional encoding here is not quite the same as you might know it from Transformers or something. It simply means that you send the input data point, which is this thing right here, X, Y, Z, theta, phi, to a higher dimensional space, right, in a very deterministic way. So you have these low dimensional inputs, and especially if you want to represent this (this is really fine structure right here, you can see that this stuff right here is quite fine grained), you need a way to handle fine differences between things. But you also need a way to handle, you know, coarse differences. And just a single floating point number probably isn't going to do it for a continuous function like this. So what you do is you send this to a higher dimensionality with these positional encodings that we know from Transformers. So these encodings right here (in my video on Attention Is All You Need, I explain those in detail): you construct a hierarchy of sine waves, or sine and cosine waves, but we can just do it with sine waves. So the lowest hierarchy is like this, and then the next thing in the hierarchy would be like double as fast, and then the next thing, well, this is four times as fast, isn't it? A sketch of this encoding follows.
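Assuming the octave-spaced construction just described, a minimal version could look like this. The paper uses ten such frequencies for the position and four for the direction; the helper name and the normalization assumption are mine.

```python
import math
import torch

def positional_encoding(p, num_freqs=10):
    """Map each coordinate to sines/cosines at octave-spaced frequencies.

    p: (..., k) coordinates, ideally normalized to [-1, 1].
    Returns (..., 2 * num_freqs * k). Fine scales let the MLP represent
    high-frequency detail; coarse scales keep it smooth where needed.
    """
    freqs = (2.0 ** torch.arange(num_freqs, dtype=torch.float32)) * math.pi
    angles = p[..., None] * freqs                         # (..., k, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                      # (..., 2*num_freqs*k)

x = torch.tensor([[0.3, -0.5, 0.9]])   # one 3D point
print(positional_encoding(x).shape)    # torch.Size([1, 60]) for num_freqs=10
```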
Well, you get the point, right? Up, down, up, and then up, down, up, down, up. That's not a perfect sine wave, but I hope you get the point. And then you take, for example, your X, and you put it here (the coordinates go from negative one to one, I think), and your high dimensional output is going to be, you know, this point, this point, this point and this point, in their respective coordinate systems. Right. What this does is: you can still clearly identify every point here. In fact, you can identify every single point in your input space by looking at the combination of where it is in the sine waves. But it gives the network a better chance to focus, for example, on details. If it wants to focus on details, it's going to look at this scale right here, because tiny changes in the underlying X are going to result in a large change in this feature. If you want to focus on coarse grained stuff, then you look at this one, where you have to move pretty far to have a change. Whereas this fine scale means almost nothing for coarse grained structure, because between two nearby things there's a lot of difference here: this may be zero and this maybe negative one. However, if you look at the two data points right here, sorry about that, so, let's say the orange distance and the blue distance, you can see that the two aren't so different in this representation. So it gives the network the choice at which scale it wants to look for particular positions. So ultimately, you're going to map this five dimensional vector into a higher dimensional vector. And they consider ten of these different sine and cosine frequencies for the position and four for the direction. So again, they call it positional encoding. They say: this is referred to as a positional encoding; however, Transformers use it for a different goal of providing discrete representations as input to an architecture, yada, yada, yada. In contrast, we use these functions to map continuous input coordinates into a higher dimensional space to enable our MLP to more easily approximate a higher frequency function. The second thing they do is hierarchical volume sampling. So when we said I send a ray through the scene and then I sample along it, this either would take a lot of time or it would not be accurate enough. So what they do is they actually have two neural networks, one they call coarse and one they call fine. And as I understand it, here is a ray: they first sample with the coarse one at rather coarse locations, and then they use that to evaluate where they should sample more. Let's say this thing right here has a really high density under the coarse network. They then sample around that a lot more, maybe one here, two there, but a lot more, sampling around where the coarse network thinks the important stuff is. They optimize both networks at the same time, and that actually works out well. So here you see the loss: the loss is a combination of the coarse network and the fine grained network, and you need to optimize both, even though the final view is only going to come from the fine grained network. A rough sketch of this coarse-to-fine sampling is below.
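As one way to picture it: treat the coarse pass's compositing weights along the ray as a probability distribution and draw the extra fine samples where it put mass. This is a simplified bin-resampling stand-in for the paper's inverse-CDF sampler; the helper name and the uniform-spacing assumption are mine.

```python
import torch

def fine_sample_depths(t_coarse, weights, n_fine=8):
    """Draw extra sample depths where the coarse pass put high weight.

    t_coarse: (n,) depths of the uniformly spaced coarse samples
    weights:  (n,) coarse compositing weights (how much each sample mattered)
    """
    probs = (weights + 1e-5) / (weights + 1e-5).sum()     # avoid all-zero rays
    bins = torch.multinomial(probs, n_fine, replacement=True)  # pick heavy bins
    step = t_coarse[1] - t_coarse[0]                      # uniform coarse spacing
    return t_coarse[bins] + step * torch.rand(n_fine)     # jitter inside the bin

t = torch.linspace(0.0, 1.0, steps=8)
w = torch.tensor([0.0, 0.0, 0.1, 0.6, 0.25, 0.05, 0.0, 0.0])  # found a surface
t_fine = fine_sample_depths(t, w)   # extra samples cluster near t ~ 0.4
```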
You need to optimize both because the coarse grained network can tell you where the important stuff is. So, the results you have already seen; there are a bunch of metrics showing that this one is really good. And as you can see, it can handle fine grained structure right here in the microphone that others can't. And they say one neural network of one scene fits into a few megabytes; it fits into five megabytes. And this is a lot better than things that use voxel grid representations; I think this other thing they compare to uses over 15 gigabytes for the same scene. And, this is interesting, that is even less memory than the input images alone for a single scene from any of our data sets. So it's really even smaller than the pictures. So even if you maybe want to show this to another human, it'd be better to send the trained NeRF than the pictures, if space is a consideration, though I don't know how they measure the pictures. You can probably compress them; if it's different pictures from the same scene, I guess there's some compression potential if you want to transmit them as a... never mind. So they also do ablations. And the only downside here is that it does take a long time to fit one of these neural networks. I don't exactly remember where they say it, but they say they calculate, oh, here: so it's not too bad, but the optimization for a single scene typically takes around 100 to 300 thousand iterations to converge on a single NVIDIA V100 GPU, which is about one to two days. So it's a single GPU; you don't need a data center for it. But you're going to wait a while until you train one, though you only need to train it once, and then you can render new views as you please. So the idea, I think, is going to be that, let's say you make a video game or so: you're going to compute this at your servers, then you transmit the neural network to the clients, and the clients can just render it out right there. And yeah, there's a bunch of results and a bunch of ablations where they kind of leave away different parts, and they show that especially the positional encodings are really important; as you can see, on the right there are no positional encodings. The view dependence is also quite important. You see, if there's no view dependence, as you can see here, you do get the fine grained structure, since you do have positional encodings, but you don't get these kind of light effects, right? This thing here is not a different color; it's simply the fact that the light shines on it. And it's just not there here, because, you know, all the network can do is output the same color for all directions, and most directions simply don't have that reflection. All right, so that is it. The code is available on this website that I've showed you. I'm certainly going to link it. Tell me what you think. I think this is pretty cool. I know this has given rise to a lot of follow up work. I have very little overview of what's going on in the NeRF space, but I think it's cool and I want to dive deeper into it. Thanks for being here. Bye bye.
[ { "start": 0, "end": 10, "text": " Hello there. Look at these objects right here. What if I told you that I'm going to give you a bunch of pictures of these objects from different sides." }, { "start": 10, "end": 19, "text": " And what you have to do is you have to come up with a system that generates me the picture as if the object was viewed from any direction." }, { "start": 19, "end": 28, "text": " So something like this, right? Any direction, you can get me a picture of that object from just a few input pictures." }, { "start": 28, "end": 42, "text": " This is a pretty daunting task. Specifically, look at the ship, for example, right here. You can see in the water, there's specularities that only appear if you view it from a very particular angle, right?" }, { "start": 42, "end": 49, "text": " Also the drum kit, you see that the microphone on the left, it has very specific structure to it." }, { "start": 49, "end": 68, "text": " So this is not at all like a trivial task. There's very intricate things here. And this not only with toy data, but here you can see real world scenes." }, { "start": 68, "end": 74, "text": " So this isn't some kind of abstract thing. You can actually use this in the real world." }, { "start": 74, "end": 84, "text": " Now, don't look at these things too long. They tend to make me dizzy. But that's ultimately the goal. Input a few pictures and then being able to synthesize any kind of view." }, { "start": 84, "end": 95, "text": " So the paper we're going to look at, it's a bit of an older paper, but I think it's pretty cool and it's relevant. And there is a bunch of follow up work to this." }, { "start": 95, "end": 105, "text": " This is very popular right now. This is the paper introducing NERF, representing scenes as neural radiance fields for view synthesis." }, { "start": 105, "end": 116, "text": " And it's by Ben Mildenhall, Pratul P. Srinivasan, Matthew Tanchik, Jonathan T. Barron, Ravi Ramamurthy and Ren Ng." }, { "start": 116, "end": 135, "text": " This, as you can see, the task is called view synthesis. And what you can do with view synthesis or with this paper specifically is you can it can also it takes into account your viewing direction, which gives a much more realistic impression." }, { "start": 135, "end": 150, "text": " We've already seen this with kind of the lighting here. But in order to really show you this on the left, you're going to see this novel view that is rendered. And on the right, it's sort of like a fake thing that you couldn't do in reality." }, { "start": 150, "end": 170, "text": " But what we're going to do is we're going to keep the camera at the same position, but we're going to tell the scene that the camera is at a like switching around. And that makes you able to see just how different a pic like a room can look like if viewed from different directions." }, { "start": 170, "end": 184, "text": " So the right one is really kind of physically impossible. It's just meant to show you how different things look differently if they think they are viewed from a different direction. Right. So the same thing here." }, { "start": 184, "end": 201, "text": " And it just looks amazing. What you get automatically out of the systems are depth maps. These are notoriously hard to get, especially for complex scenes such as this one. Also, this one right here." }, { "start": 201, "end": 216, "text": " It's it's very complex and it handles it fairly well. Sorry. 
You can even do something like AR right here since you now have a representation that tells you how far everything is away and you have it from different views." }, { "start": 216, "end": 231, "text": " You can see. Yeah. And you can even get meshes. So I should be able to move that around here. This is now a mesh. It's not only view synthesis, but you can actually fill out the voxels, which is a slightly different task." }, { "start": 231, "end": 243, "text": " And if you have pictures from all around, you can synthesize kind of any view in between, as you can see right here. So we're going to switch away from the fancy videos to the paper." }, { "start": 243, "end": 255, "text": " Now the special thing about this paper and this is it's in the spirit of something like sirens. So sirens, we've I've made a video about it." }, { "start": 255, "end": 263, "text": " And the special thing right here is it uses deep learning in a little bit of a different way than we would normally use it." }, { "start": 263, "end": 282, "text": " So first of all, what does the abstract say? We present a novel, sorry, a method, where it is novel, that achieves state of the art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views." }, { "start": 282, "end": 298, "text": " So the task description is the view synthesis, right? Synthesizing novel views. Also, you're given a sparse set of input views. So you're given you have a scene. Let's say you have a tree or something like this. So here's a tree." }, { "start": 298, "end": 315, "text": " I know beautiful and you're given a bunch of images. So maybe someone, you know, stood here and took a picture. So the picture kind of views in in this direction. It pictures depicts the tree and someone stood here and took a picture of the same tree." }, { "start": 315, "end": 326, "text": " Maybe the same person, someone flew up here, took a picture of that tree. So you get a bunch of those. Maybe you get 20 or something around the tree, maybe more, maybe less." }, { "start": 326, "end": 342, "text": " So from these pictures, you want to build a thing that can generate any view from anywhere. And the way they do it is by optimizing an underlying continuous volumetric scene function." }, { "start": 342, "end": 364, "text": " This is a cryptic way, but it goes along the direction of the sirens and kind of a bigger trend in, I think in the AI in these in these neural rendering papers and so on, which is that we want to overfit a neural network to a single data point." }, { "start": 364, "end": 374, "text": " This is really different from classic deep learning. If you ask someone, how would you go about this problem with deep learning? What they would tell you is, okay, I need a data set." }, { "start": 374, "end": 379, "text": " I need a data set of these different scenes and the input." }, { "start": 379, "end": 395, "text": " Now I have my X and my Y. So the input X is going to be always like, you know, 30 images of a scene and Y is going to be the scene itself or whatnot, like the tree or the mesh of the tree or something like this." }, { "start": 395, "end": 410, "text": " And I need this many, many times. So I need a data set with 30 images of, I don't know, a house and the Y is the house and so on." }, { "start": 410, "end": 421, "text": " So that's my training data set. And in my test data set, it can be something else, right? So it can be things that I now want to test." 
}, { "start": 421, "end": 427, "text": " However, in this particular case, this is not the case here." }, { "start": 427, "end": 442, "text": " It is one neural network that is fit to one scene. So what we have is a neural network that has a bunch of layers and all the neural network cares about is this particular scene, right?" }, { "start": 442, "end": 452, "text": " If we want to render a new scene, we take a new neural network. That's what I mean. We overfit a single neural network to this particular scene." }, { "start": 452, "end": 459, "text": " We use the 30 images or so we got to train to completely overfit this neural network." }, { "start": 459, "end": 468, "text": " And the goal is going to be that the tree itself, like the scene itself, is going to be in the weights of this neural network." }, { "start": 468, "end": 474, "text": " So the weights of the neural network now represent the scene. And this has various advantages, right?" }, { "start": 474, "end": 488, "text": " If we already saw this with the sirens that very often this is a much, much better representation, more compact representation of the entire mesh than any other way." }, { "start": 488, "end": 492, "text": " Like if you store it in voxels or something. But I hope this is a bit clear." }, { "start": 492, "end": 498, "text": " Now, of course, the question is, what's the input and what's the output of this neural network?" }, { "start": 498, "end": 503, "text": " So the input is the following. Imagine you have a coordinate system here." }, { "start": 503, "end": 509, "text": " So you get you get a coordinate system X, Y, and Z." }, { "start": 509, "end": 514, "text": " Okay. And the neural network gets two things as an input." }, { "start": 514, "end": 523, "text": " It gets as an input a position in that coordinate system, which we call we call X." }, { "start": 523, "end": 528, "text": " And X is actually X, Y, Z is a three dimensional vector. Right." }, { "start": 528, "end": 533, "text": " For example, right here, this is our X now." }, { "start": 533, "end": 539, "text": " And also we get an D, which is a viewing direction." }, { "start": 539, "end": 550, "text": " Okay. So the for example, if my camera is the top camera right here, the viewing direction would be this ray here." }, { "start": 550, "end": 556, "text": " Well, everything's orange. I make that blue. So the viewing direction D would be that." }, { "start": 556, "end": 561, "text": " Okay. So the angle here, we care about the angle." }, { "start": 561, "end": 564, "text": " It's actually two angles you need to describe this viewing direction." }, { "start": 564, "end": 569, "text": " So a position and the viewing direction and the output of the neural network." }, { "start": 569, "end": 574, "text": " What does it output? The output of the neural network is going to be a color." }, { "start": 574, "end": 581, "text": " See, like what color is at that particular location and the density." }, { "start": 581, "end": 584, "text": " Is there even something at that particular location? Right." }, { "start": 584, "end": 587, "text": " So the density tells you whether there is something or not." }, { "start": 587, "end": 591, "text": " And if there is something, the color tells you what color it is." }, { "start": 591, "end": 594, "text": " All right. This is a really different way." }, { "start": 594, "end": 597, "text": " I want to stress that again of using neural networks." 
}, { "start": 597, "end": 601, "text": " There is no longer images going in and you know something coming out." }, { "start": 601, "end": 604, "text": " What goes in is a position and a direction." }, { "start": 604, "end": 611, "text": " So you ask the neural network, hey, neural network, you in your entirety, you represent this scene." }, { "start": 611, "end": 620, "text": " You represent if you're trained well, if you're overfit well, you're overfit on the tree." }, { "start": 620, "end": 628, "text": " Now, I want to know at a particular location in this scene viewed from a particular angle." }, { "start": 628, "end": 634, "text": " What am I going to see? So on this picture right here, I'm wondering for this pixel." }, { "start": 634, "end": 638, "text": " If I send a ray to this location, what am I going to see?" }, { "start": 638, "end": 644, "text": " And the network will tell you you're probably not going to see anything because there's nothing there." }, { "start": 644, "end": 651, "text": " Or if there is something there, you're going to see the color, I don't know, red." }, { "start": 651, "end": 659, "text": " So how from this you can pretty easily get a picture, namely if I have my frame of the picture." }, { "start": 659, "end": 664, "text": " For each pixel, I need to send a ray through the scene." }, { "start": 664, "end": 667, "text": " So I send a ray through the scene." }, { "start": 667, "end": 672, "text": " And what I need to do is I need simply need to query this model at each location." }, { "start": 672, "end": 676, "text": " Here, here, here, here, here, here, here, and so on." }, { "start": 676, "end": 681, "text": " At each location, I will ask the neural network, is there something there?" }, { "start": 681, "end": 686, "text": " And if there is, what kind of color am I going to see?" }, { "start": 686, "end": 690, "text": " And what you'll get is a bit of a curve. Thank you." }, { "start": 690, "end": 693, "text": " Is a bit of a curve." }, { "start": 693, "end": 699, "text": " So if here is your zero and you send the ray out into the scene," }, { "start": 699, "end": 704, "text": " and this is the density going up, they have these graphs in the paper, by the way." }, { "start": 704, "end": 708, "text": " I'm not smart enough to come up with them by myself." }, { "start": 708, "end": 714, "text": " But they say, well, maybe at the beginning you're not going to see anything because there's nothing there." }, { "start": 714, "end": 717, "text": " But then, you know, at some point you're going to see something." }, { "start": 717, "end": 720, "text": " There is something there. You hit the tree, right?" }, { "start": 720, "end": 725, "text": " And you're inside the tree. And then you're out of the tree again." }, { "start": 725, "end": 728, "text": " At the same time, at every point, it gives you color." }, { "start": 728, "end": 732, "text": " Now here, it actually doesn't matter what the color is." }, { "start": 732, "end": 734, "text": " It will still output a color, but it doesn't matter." }, { "start": 734, "end": 737, "text": " And here it's going to say green, right?" }, { "start": 737, "end": 744, "text": " It's going to say at every point here, it's going to say green, green, green, green." }, { "start": 744, "end": 749, "text": " And here, I guess it doesn't matter. It's probably going to say green as well." 
}, { "start": 749, "end": 756, "text": " But in any case, what you can now do is you can simply look at where do I hit the first time the object," }, { "start": 756, "end": 760, "text": " which is here, right? When the density goes up and what colors there." }, { "start": 760, "end": 765, "text": " And now I know what I need to render at that particular pixel." }, { "start": 765, "end": 770, "text": " Now you can simply do this for all pixels and you got yourself an image." }, { "start": 770, "end": 776, "text": " And the neural network is powerful enough that for the same location, you can see this right here." }, { "start": 776, "end": 781, "text": " It can give you different results depending on the different viewing directions." }, { "start": 781, "end": 786, "text": " So that makes it such that it can kind of depend on where you view it from." }, { "start": 786, "end": 790, "text": " It can capture these lighting effects, these reflections." }, { "start": 790, "end": 800, "text": " And also it can capture transparency because imagine you have a curve that is not as clear as this one," }, { "start": 800, "end": 803, "text": " but you have a curve that is something like here." }, { "start": 803, "end": 808, "text": " So here is one wall of a glass and here is another wall of the glass." }, { "start": 808, "end": 812, "text": " And they go up in density, but they're not fully dense." }, { "start": 812, "end": 819, "text": " And the front of the glass is maybe blue and the back of the glass is red." }, { "start": 819, "end": 826, "text": " And now if you integrate your ray along this and you integrate weighted by the density," }, { "start": 826, "end": 833, "text": " you're going to get a mixture of preferably blue because that's in the front, but also a little bit of red." }, { "start": 833, "end": 840, "text": " You can see that if a ray goes through here, you can handle transparency." }, { "start": 840, "end": 846, "text": " And so this is a really powerful model right here." }, { "start": 846, "end": 854, "text": " And again, there's no need for a data set other than the scene that is right in front of you." }, { "start": 854, "end": 862, "text": " So the goal is going to be that if in the future we want to make augmented reality applications," }, { "start": 862, "end": 871, "text": " we want to make games and so on, you are not actually going to store a mesh or kind of a voxel grid of some scene." }, { "start": 871, "end": 878, "text": " What you're going to store is a neural network that can be queried from anywhere you want to look at the scene." }, { "start": 878, "end": 880, "text": " And the neural network will tell you what you're going to see." }, { "start": 880, "end": 884, "text": " It just happens that these things work extraordinarily well." }, { "start": 884, "end": 889, "text": " So here's the process again, the task, you get a set of input images right here." }, { "start": 889, "end": 894, "text": " You want to find out where they're taken from." }, { "start": 894, "end": 900, "text": " So for each input image, you need to determine where was the camera and in which direction did it look." }, { "start": 900, "end": 902, "text": " This is a known problem." }, { "start": 902, "end": 907, "text": " You can see all these kind of classic structures from motion, slam and so on." }, { "start": 907, "end": 911, "text": " They need to determine the camera positions from the pictures." 
}, { "start": 911, "end": 916, "text": " And so that's a thing you can take from existing research." }, { "start": 916, "end": 920, "text": " And then you want to render the new views." }, { "start": 920, "end": 926, "text": " And yeah, here is, I think, where they get into it, where this is." }, { "start": 926, "end": 933, "text": " Yeah, we represent, they say, a continuous scene as a 5D vector valued function." }, { "start": 933, "end": 937, "text": " And this vector function is going to be a neural network." }, { "start": 937, "end": 944, "text": " It has a five dimensional input and it has a the output is going to be a color," }, { "start": 944, "end": 948, "text": " which is three dimensions and a density, which is one dimension." }, { "start": 948, "end": 953, "text": " So the input is a 3D location and a 2D viewing direction." }, { "start": 953, "end": 958, "text": " And the output is a color and a volume density." }, { "start": 958, "end": 964, "text": " So in practice, we express direction as a 3D Cartesian unit vector." }, { "start": 964, "end": 971, "text": " And they say we approximate this continuous 5D scene representation with an MLP network." }, { "start": 971, "end": 975, "text": " So the network, as we said, this is the input, this is the output." }, { "start": 975, "end": 987, "text": " And we optimize its weights to map from each input 5D coordinate to its corresponding volume density and directional emitted color." }, { "start": 987, "end": 992, "text": " Now, the only question is, of course, we have these images." }, { "start": 992, "end": 1004, "text": " We don't actually have as a training set kind of the densities at that place." }, { "start": 1004, "end": 1009, "text": " So everything needs to be sort of grounded into the images that we have." }, { "start": 1009, "end": 1013, "text": " Now, luckily, the whole process that I've described here, which you see again here." }, { "start": 1013, "end": 1020, "text": " So if you want to render an image, you take an image, you pick a pixel, you shoot a ray," }, { "start": 1020, "end": 1024, "text": " and you sample along the ray and you ask your network what's there." }, { "start": 1024, "end": 1026, "text": " The network will tell you if there's something there." }, { "start": 1026, "end": 1033, "text": " And if so, what color you're going to see the density over time." }, { "start": 1033, "end": 1036, "text": " And then you can render an image." }, { "start": 1036, "end": 1043, "text": " Now, if you already have an image, right, which is we are given a set of these images," }, { "start": 1043, "end": 1047, "text": " if you already have one, you can now calculate a loss." }, { "start": 1047, "end": 1051, "text": " Namely, what do I see and what does the network tell me I should see?" }, { "start": 1051, "end": 1054, "text": " If the network is not trained yet, that's going to be a pretty big loss." }, { "start": 1054, "end": 1061, "text": " And if you make the loss as something differentiable, then this whole process is in fact differentiable." }, { "start": 1061, "end": 1063, "text": " That's the next cool thing about this." }, { "start": 1063, "end": 1070, "text": " The whole process of sending the ray, sampling the position, integrating over it," }, { "start": 1070, "end": 1076, "text": " and at the end coming up with a pixel color, that is a differentiable process." }, { "start": 1076, "end": 1079, "text": " If, of course, if you do it correctly." 
}, { "start": 1079, "end": 1088, "text": " But that means we can use those 30 images or 50 or whatever we have in order to construct a big loss." }, { "start": 1088, "end": 1094, "text": " Every ray, so every pixel in every picture that we have defines a ray." }, { "start": 1094, "end": 1099, "text": " So every ray essentially is a data point that we can fit to." }, { "start": 1099, "end": 1104, "text": " So at the end, we get a pretty sizable data set for the network," }, { "start": 1104, "end": 1109, "text": " which is going to be number of pixels times number of pictures." }, { "start": 1109, "end": 1116, "text": " However, again, it is a different problem than having a data set of many of these scenes." }, { "start": 1116, "end": 1123, "text": " So the whole process is differentiable, and that means you can just fit the neural network to this scene." }, { "start": 1123, "end": 1129, "text": " You overfit it to these 30 images that you have, and that's going to be your network." }, { "start": 1129, "end": 1137, "text": " And this network then is going to represent the scene in its weights." }, { "start": 1137, "end": 1141, "text": " So the weights are the scene at the end." }, { "start": 1141, "end": 1146, "text": " There is a bit of a so there are lots of engineering tricks here." }, { "start": 1146, "end": 1152, "text": " So, for example, we encourage the representation to be multi view consistent" }, { "start": 1152, "end": 1157, "text": " by restricting the network to predict the volume density as a function of only the location X," }, { "start": 1157, "end": 1163, "text": " while allowing the RGB color to be predicted as a function of both location and viewing direction." }, { "start": 1163, "end": 1169, "text": " So the reasoning here is that the volume density is not dependent on the direction." }, { "start": 1169, "end": 1175, "text": " Like either even if something is kind of transparent, it's going to be transparent." }, { "start": 1175, "end": 1179, "text": " It's going to be the same transparent in from different direction." }, { "start": 1179, "end": 1184, "text": " There's only very limited amount of materials where that is not the case." }, { "start": 1184, "end": 1191, "text": " Right. So as a simplifying concept, we're going to see the transparency of the object is always the same," }, { "start": 1191, "end": 1196, "text": " which is kind of where stuff is, is independent of where you look from." }, { "start": 1196, "end": 1199, "text": " It's only how stuff looks that is dependent." }, { "start": 1199, "end": 1206, "text": " So the RGB color is going to be a function of both location and viewing direction." }, { "start": 1206, "end": 1212, "text": " And what they do is essentially so they input X right here." }, { "start": 1212, "end": 1219, "text": " They so the the location, they yank this through a network, they get out two things." }, { "start": 1219, "end": 1226, "text": " So they first get out this density and they also get out a hidden representation that hidden representation." }, { "start": 1226, "end": 1228, "text": " They then concatenate with the viewing direction." }, { "start": 1228, "end": 1235, "text": " And that goes through another stack of layers in order to give them the color." 
}, { "start": 1235, "end": 1241, "text": " I think it's also, you know, you could do something with a transformer here and some causal masking," }, { "start": 1241, "end": 1249, "text": " though I'm pretty sure someone has already done this, given that the paper is almost ancient at one year of age" }, { "start": 1249, "end": 1253, "text": " in the machine learning world. That's really old." }, { "start": 1253, "end": 1258, "text": " So exactly. So this is the formula for new for rendering." }, { "start": 1258, "end": 1262, "text": " This is a technique called volume rendering with radiance fields." }, { "start": 1262, "end": 1269, "text": " So if you have a radiance field, a radiance field is a function that tells you exactly what we train in our network to do." }, { "start": 1269, "end": 1274, "text": " Namely, you know, if I look from here and I look at that point, what do I see?" }, { "start": 1274, "end": 1282, "text": " What you want to do is you want to send a ray through the scene and you want to integrate along that race." }, { "start": 1282, "end": 1285, "text": " You have kind of a far bound and a near bound." }, { "start": 1285, "end": 1288, "text": " And you want to integrate from the near bound to the far bound." }, { "start": 1288, "end": 1294, "text": " So that means you send the ray through the thing you want to integrate." }, { "start": 1294, "end": 1297, "text": " This thing, this T thing right here, that tells you." }, { "start": 1297, "end": 1303, "text": " You can see the density is in here along the ray from the beginning to the point where you are." }, { "start": 1303, "end": 1307, "text": " That is the probability that the ray doesn't hit anything." }, { "start": 1307, "end": 1311, "text": " Right. It's a probability that the ray goes on through that room." }, { "start": 1311, "end": 1316, "text": " Basically, it's a probability of empty space." }, { "start": 1316, "end": 1321, "text": " So or, you know, the inverse of that, like this distinguishes whether there is something or not," }, { "start": 1321, "end": 1325, "text": " whether the ray continues up until the point T or not." }, { "start": 1325, "end": 1331, "text": " So you have whether or not the ray is actually at that particular point." }, { "start": 1331, "end": 1333, "text": " How dense that particular point is." }, { "start": 1333, "end": 1340, "text": " So how much stuff there is in terms of occludance for your ray." }, { "start": 1340, "end": 1345, "text": " So if this is high, your ray is going to stop and you're going to adopt the color that is there." }, { "start": 1345, "end": 1350, "text": " You can see it's this is multiplied by the color at that particular place." }, { "start": 1350, "end": 1351, "text": " So you send the ray." }, { "start": 1351, "end": 1358, "text": " And as soon as your system determine, you know, there's something here, you're going to, since this is multiplied," }, { "start": 1358, "end": 1365, "text": " the density is multiplied by the color, your your ray is going to adopt the color of whatever is there." }, { "start": 1365, "end": 1372, "text": " And then after that, this quantity here is going to be small because this quantity is again an inner integral" }, { "start": 1372, "end": 1377, "text": " that tells you whether or not the ray even reaches that location." }, { "start": 1377, "end": 1382, "text": " So the ray reaches the first location, at which point it's going to adopt the color." 
}, { "start": 1382, "end": 1390, "text": " And after that, the it even though there is stuff right, even though the density is high, the ray is not reaching it." }, { "start": 1390, "end": 1392, "text": " So the whole formula captures all of this." }, { "start": 1392, "end": 1401, "text": " And as we said, with a bit of nuance, it like if this is not always zero one, it can handle transparency as well." }, { "start": 1401, "end": 1404, "text": " And here they demonstrate again from the scene." }, { "start": 1404, "end": 1409, "text": " So you have two different points in the same scene, but viewed from different locations." }, { "start": 1409, "end": 1417, "text": " And on the right, they show you this is all the same point in the scene, but the circle represents kind of different angles" }, { "start": 1417, "end": 1419, "text": " at which you can view it from." }, { "start": 1419, "end": 1426, "text": " And you can see that the color is really different depending on the angle where you look from." }, { "start": 1426, "end": 1429, "text": " There are what do we have here?" }, { "start": 1429, "end": 1431, "text": " There are a lot of tricks." }, { "start": 1431, "end": 1437, "text": " Oh, yeah, so they they approximate the integral with like quadrature, which also has existed." }, { "start": 1437, "end": 1440, "text": " And they have a bunch of tricks." }, { "start": 1440, "end": 1448, "text": " So the first trick to really get this to work is a novel like not a novel, but kind of the employment of a positional encoding" }, { "start": 1448, "end": 1453, "text": " that a positional encoding is not the same as you might know it from Transformers or something." }, { "start": 1453, "end": 1460, "text": " The positional encoding here, it simply means that you send the input data point, which is this thing right here." }, { "start": 1460, "end": 1466, "text": " XYZ, theta, phi, Greek letter." }, { "start": 1466, "end": 1474, "text": " You send that to a higher dimensional space, right, in a very deterministic way." }, { "start": 1474, "end": 1482, "text": " So if you have these low dimensional input, and especially if you want to represent this, this is really fine structure right here." }, { "start": 1482, "end": 1489, "text": " You can see that this stuff right here, it's quite fine grained." }, { "start": 1489, "end": 1496, "text": " OK, and so you need a way to handle fine differences between things." }, { "start": 1496, "end": 1499, "text": " But you also need a way to handle, you know, course differences." }, { "start": 1499, "end": 1506, "text": " And just a single floating point number probably isn't going to do it for a continuous function like this." }, { "start": 1506, "end": 1515, "text": " So what you do is you send this to a higher dimensionality with these positional encodings that we know from Transformers." }, { "start": 1515, "end": 1519, "text": " So these encodings right here, they will send." }, { "start": 1519, "end": 1525, "text": " So what you do, and so in my video on attention is all you need, I explain those in detail." }, { "start": 1525, "end": 1531, "text": " But you construct a hierarchy of sine waves or sine and cosine waves." }, { "start": 1531, "end": 1534, "text": " But we can just do it with sine waves." }, { "start": 1534, "end": 1537, "text": " So the lowest hierarchy is like this." }, { "start": 1537, "end": 1543, "text": " And then the next thing in the hierarchy would be like double as fast." 
}, { "start": 1543, "end": 1548, "text": " And then the next thing, well, this is four times as fast, isn't it?" }, { "start": 1548, "end": 1550, "text": " Well, you get the point, right?" }, { "start": 1550, "end": 1553, "text": " It's so I need up, down, up, wow." }, { "start": 1553, "end": 1557, "text": " And then up, down, up, down, up." }, { "start": 1557, "end": 1560, "text": " This is not a sine wave." }, { "start": 1560, "end": 1562, "text": " But you I hope you get the point." }, { "start": 1562, "end": 1572, "text": " And then you want to take a look, for example, your X, you take your X, you put it here like, OK," }, { "start": 1572, "end": 1575, "text": " X is so this is like negative." }, { "start": 1575, "end": 1577, "text": " I think they go from negative one to one." }, { "start": 1577, "end": 1590, "text": " The coordinates they have and your high dimensional output is going to be, you know, this point, this point, this point and this point in the in their respective coordinate systems." }, { "start": 1590, "end": 1592, "text": " Right. So that's you can." }, { "start": 1592, "end": 1597, "text": " What this does is you can still clearly identify every point here." }, { "start": 1597, "end": 1600, "text": " In fact, yeah, you can." }, { "start": 1600, "end": 1613, "text": " You can identify every single point in your input space by, you know, looking at looking at the combination of where it is in the sine waves." }, { "start": 1613, "end": 1618, "text": " But it gives the network a better chance to focus, for example, on details." }, { "start": 1618, "end": 1629, "text": " If it wants to focus on details, it's going to look at this scale right here because tiny changes in the underlying X is going to result in a large change in this feature." }, { "start": 1629, "end": 1637, "text": " If you want to focus on coarse grain stuff, then you look at this where you can, you know, you have to move pretty far to have a change." }, { "start": 1637, "end": 1649, "text": " Whereas if you look at this scale for coarse grain things, it means almost nothing because, you know, if you want to make little difference between these two things," }, { "start": 1649, "end": 1661, "text": " if you look at coarse grained structure, but they have, as you can see, like there's a lot of difference between those like this may be zero and this is maybe negative one." }, { "start": 1661, "end": 1669, "text": " However, if you look at the two data points right here, sorry about that." }, { "start": 1669, "end": 1677, "text": " So the same, let's say the orange distance and the blue distance, you can see that the two aren't so different in this representation." }, { "start": 1677, "end": 1684, "text": " So it gives the network the choice at which scale it wants to look at for particular positions." }, { "start": 1684, "end": 1692, "text": " So ultimately, you're going to map this five dimensional vector into a higher dimensional vector." }, { "start": 1692, "end": 1699, "text": " And they consider like 10, 10 layers or four layers of these." }, { "start": 1699, "end": 1705, "text": " How many of these different sine wave and cosine waves they construct." }, { "start": 1705, "end": 1711, "text": " So again, they call it positional ketting. They say this is referred to as a positional encoding." }, { "start": 1711, "end": 1719, "text": " However, transformers use it for a different goal of providing discrete representations as input to an architecture, yada, yada, yada." 
}, { "start": 1719, "end": 1733, "text": " In contrast, we use these functions to map continuous input coordinates into a higher dimensional space to enable our MLP to more easily approximate a higher frequency functions." }, { "start": 1733, "end": 1737, "text": " The second thing they do is they do hierarchical volume sampling." }, { "start": 1737, "end": 1750, "text": " So when we said I send a ray through the scene and then I sample along, this either would take a lot of time or it would not be accurate enough." }, { "start": 1750, "end": 1758, "text": " So what they do is they have actually two layers of neural network, one they call a course and one they call a fine." }, { "start": 1758, "end": 1767, "text": " And as I understand it, here is a ray they first sample with the course one at rather coarse locations." }, { "start": 1767, "end": 1772, "text": " And then they use that to evaluate where they should sample more." }, { "start": 1772, "end": 1776, "text": " Let's say this thing right here has a real high density in the course network." }, { "start": 1776, "end": 1787, "text": " They then sample around that a lot more, maybe one here, two, but a lot more, you know, sampling around where the course network things, the important stuff is." }, { "start": 1787, "end": 1791, "text": " They optimize both networks at the same time." }, { "start": 1791, "end": 1795, "text": " And that actually works out well." }, { "start": 1795, "end": 1803, "text": " So here you see the loss. The loss is a combination now of the coarse network and the fine grain network." }, { "start": 1803, "end": 1811, "text": " And you need to optimize both, even though the final view is only going to come from the fine grain network." }, { "start": 1811, "end": 1819, "text": " You need to optimize both because the coarse grain network can tell you where the important stuff is." }, { "start": 1819, "end": 1828, "text": " So the results you have already seen, there are a bunch of metrics that prove that this one is really good." }, { "start": 1828, "end": 1835, "text": " And it can, as you can see, like you can handle fine grain structure right here in the microphone that others can't." }, { "start": 1835, "end": 1845, "text": " And it also so they say it fits into a few. So one neural network of one scene fits into like a few megabytes." }, { "start": 1845, "end": 1848, "text": " And this is so it fits into five megabytes." }, { "start": 1848, "end": 1864, "text": " And this is a lot better than things that use like voxel grid representations, which I think this other thing they compare to uses over 15 gigabytes for the same scene." }, { "start": 1864, "end": 1871, "text": " Which and this is interesting, which is even less memory than the input images alone for a single scene from any of our data sets." }, { "start": 1871, "end": 1877, "text": " So this is really like it's it's really it's even smaller than the pictures." }, { "start": 1877, "end": 1883, "text": " So so even if you maybe want to show this to another human, it'd be better." }, { "start": 1883, "end": 1892, "text": " You send the train nerf than the pictures if space is a consideration, though I don't know how they measure the pictures." }, { "start": 1892, "end": 1897, "text": " Like you can probably compress if it's different pictures from the same scene." }, { "start": 1897, "end": 1903, "text": " I guess there's some compression potential if you want to transmit them as a never mind." 
}, { "start": 1903, "end": 1910, "text": " So they also do ablations. And the only downside here is that it does take a long time to fit one of these neural networks." }, { "start": 1910, "end": 1917, "text": " I don't exactly remember where they say it, but they say they calculate like, oh, here." }, { "start": 1917, "end": 1930, "text": " So it's not too bad, but the optimization for a single scene typically take around 100 to 300 K iterations to converge on a single video of 100 GPU, which is about one to two days." }, { "start": 1930, "end": 1936, "text": " So it's a single GPU. So it is, you know, you don't need a data center for it." }, { "start": 1936, "end": 1946, "text": " But you're going to wait a while until you train one, though you only need to train it once and then you can render new views as you please." }, { "start": 1946, "end": 1951, "text": " So the idea, I think, is going to be that let's say you make a video game or so." }, { "start": 1951, "end": 1961, "text": " You're going to render this at your servers, then you transmit the neural network to the clients and the clients can just render it out right there." }, { "start": 1961, "end": 1970, "text": " And yeah, there's a bunch of results and a bunch of ablations where they kind of leave away different parts and they show that especially kind of the positional encodings." }, { "start": 1970, "end": 1977, "text": " I think this is the positional encodings are really important, as you can see on the right, there is no positional encodings." }, { "start": 1977, "end": 1990, "text": " The view dependence is also quite important. You see if there's no view dependence, as you can see here, you do get the fine grain structure since you do have positional encodings." }, { "start": 1990, "end": 1996, "text": " But you don't get these kind of light effects, right? This is this thing here is not a different color." }, { "start": 1996, "end": 2007, "text": " It's simply the fact that the line light shines on it. And it's just not there here because, you know, all the network can do is output the same color for all directions." }, { "start": 2007, "end": 2011, "text": " And most directions simply don't have that reflection." }, { "start": 2011, "end": 2020, "text": " All right, so that is it. The code is available on this website that I've showed you. I'm certainly going to link it. Tell me what you think." }, { "start": 2020, "end": 2026, "text": " I think this is pretty cool. I know this has given rise to a lot of work following up on this." }, { "start": 2026, "end": 2034, "text": " I have very little overview over what's going on in the nerf space, but I think it's cool and I want to dive deeper into it." }, { "start": 2034, "end": 2051, "text": " Thanks for being here. Bye bye." } ]
K-cXYoqHxBc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
More Is Different for AI - Scaling Up, Emergence, and Paperclip Maximizers (w/ Jacob Steinhardt)
[ "Science & Technology" ]
[]
#ai #interview #research Jacob Steinhardt believes that future AI systems will be qualitatively different than the ones we know currently. We talk about how emergence happens when scaling up, what implications that has on AI Safety, and why thought experiments like the Paperclip Maximizer might be more useful than most people think. OUTLINE: 0:00 Introduction 1:10 Start of Interview 2:10 Blog posts series 3:56 More Is Different for AI (Blog Post) 7:40 Do you think this emergence is mainly a property from the interaction of things? 9:17 How does phase transition or scaling-up play into AI and Machine Learning? 12:10 GPT-3 as an example of qualitative difference in scaling up 14:08 GPT-3 as an emergent phenomenon in context learning 15:58 Brief introduction of different viewpoints on the future of AI and its alignment 18:51 How does the phenomenon of emergence play into this game between the Engineering and the Philosophy viewpoint? 22:41 Paperclip Maximizer on AI safety and alignment 31:37 Thought Experiments 37:34 Imitative Deception 39:30 TruthfulQA: Measuring How Models Mimic Human Falsehoods (Paper) 42:24 ML Systems Will Have Weird Failure Models (Blog Post) 51:10 Is there any work to get a system to be deceptive? 54:37 Empirical Findings Generalize Surprisingly Far (Blog Post) 1:00:18 What would you recommend to guarantee better AI alignment or safety? 1:05:13 Remarks References: https://bounded-regret.ghost.io/more-is-different-for-ai/ https://docs.google.com/document/d/1FbTuRvC4TFWzGYerTKpBU7FJlyvjeOvVYF2uYNFSlOc/edit#heading=h.n1wk9bxo847o
Hi, this is an interview with Jacob Steinhardt, who is the author of a blog post series called More Is Different for AI. More Is Different is the title of a famous paper in Science from 1972 by Philip Warren Anderson, a Nobel Prize winner in physics. The article is generally on the theme of emergent phenomena when scaling things up: as you make things bigger, not only does stuff get just more, as you would expect, but qualitatively new phenomena arise. And, you know, what better phenomenon to discuss in this context than AI. So today we'll talk to Jacob about this blog post series. Expect to learn how scale fundamentally changes how we look at AI systems, how the paperclip maximizer might not be as dumb of a thought experiment, and how we can look forward and make sense of a world where AI safety could play a critical role in how we interact with these systems in the future. Now, I'm having a ton of fun talking to people about all kinds of stuff, but ultimately, what matters is you. So please let me know how I can make these videos the best possible for you. Leave a comment, share them around if you like them. And let's get into it. Hello, everyone. Today I have Jacob Steinhardt here with me, who authored a series of blog posts titled More Is Different for AI, which lays out an argument, or a series of arguments, playing out, I want to say, the different viewpoints on the future of AI alignment and safety in machine learning systems, mainly playing on two viewpoints that Jacob calls the engineering viewpoint, mainly focused on, I want to say, near term practical things, and the philosophy viewpoint, mainly focused on more overarching, principled approaches, but maybe a bit futuristic. And I found this to be super interesting. It's very well laid out, and it also shows a little bit of a journey of Jacob himself as, I think, he learned more about these things. So, Jacob, thank you very much for being here. Thanks for having me. Was this an accurate description, let's say, of the blog posts? There are five in total. How did you come to this? Yeah, I think that's pretty accurate. I'd say the beginning posts, at least, are in some sense almost a kind of letter to my past self, trying to either, you know, argue for things that I've come to believe now that I didn't believe five years ago, or just viewpoints that I've kind of got more clarity on. And then I think the later posts start trying to maybe address kind of the broader field. So I guess you could... I'd say there's maybe two fields that you can think of this as addressing. One is the kind of traditional machine learning field, which tends to be very empirically driven, and I wouldn't say is exactly the same as what I'm calling the engineering approach, but I think has a lot of affinity for it. And then this other field that's kind of more top-down, more philosophical and conceptual, that's kind of worried about long term risks from AI, that starts with maybe people like Nick Bostrom, who was in fact a philosopher. And so again, I would not exactly put that field the same as the philosophy approach, but I think it has a lot of affinity for it. And I think my thinking is kind of trying to be a synthesis of these two approaches. And so I think some of the later posts are kind of trying to argue to people who would have subscribed to one or the other philosophy why maybe they should also care about the other side of things. The title is More Is Different for AI.
And that is in itself a bit of a... so there have been works with this title before. Why did you choose this title? Yeah, so this is based on an essay called More Is Different. It was originally written by a physicist, although I think biology is actually the area where this kind of idea seems most powerful. So this is the idea that when you just kind of increase scale, you often end up with qualitative changes. And I guess scale could just be the amount of something, although it could be something like temperature as well. So in physics, I think the simplest example would be phase transitions, where, you know, I can have a bunch of molecules, and if I just increase their temperature, they can end up in kind of qualitatively different configurations. But there are also cases where a few molecules is very different from having a lot of molecules. So I think one example of this is H2O. If you have just a few H2O molecules, they behave very differently than if you have a huge number, and you get water. So it turns out, for instance, that wetness is not really something that you can get from just individual molecules; it's more about interaction forces between different molecules. So that's where it sort of initially came from in physics. And I think as physicists were starting to consider larger molecules that maybe didn't just form simple crystals, but could be more asymmetric, that's where it gets more towards biology. So I think DNA is maybe one of the most canonical examples of an asymmetric molecule that has many, many, many atoms in it. And its size actually is important to how it functions, because its whole purpose is to store information, and you can't really store information in like a calcium molecule, but you can store information in DNA. And so this is another example where just making things bigger leads to kind of qualitative changes in what you can get. And in biology, just each layer of abstraction gives you more of this, right? So you can go from DNA, getting even bigger, and you end up with proteins, complexes of proteins, muscles, organisms. And so I kind of wanted to reflect on whether there were analogous properties in machine learning. You have a bunch of examples right here in this first part, the one that's called Future ML Systems Will Be Qualitatively Different from the current ones: uranium, where if you have a critical mass, you get a nuclear reaction; you already mentioned DNA; you mentioned water. Traffic I find interesting, right, in that 10,000 cars could be fine, but 20,000 could block the road. And also specialization in humans. What I would challenge a little bit here is that, okay, DNA is a bit special; you say you can't store information in calcium, but you can in DNA. But that is, I mean, that is very much linear; there is not really a phase transition, like the more molecules I have, the more information I'm able to store. And the other ones I see much more as a function of interaction between things. Now, as we get to machine learning, maybe bigger and bigger models, you call this emergence, and other people call it emergence too, emergent phenomena that only happen when you get a lot of stuff into the same place. Do you think this emergence is mainly a property from the interaction of things, or just like the sheer number of things? Mm hmm. I think it's a bit of both.
So I think interactions between things is one really common way to get emergence, especially kind of emergence that looks like a phase transition, where you kind of have some, you know, sudden change. And that's just because the number of interactions between n things grows like n squared. So that's a very natural thing that's going to kind of increase as you scale up. And maybe each interaction could be less important than each individual item, but if you have, you know, 10,000 things and then 100 million interactions, then those interactions are going to dominate, even if each individual one is less important. So I think that is a really common one, but I don't think that's the only one. For instance, for DNA, I think one thing that actually is important is that, I guess, you can have multiple different bases in the DNA that all kind of interact together. So you kind of need this like gadget of, yeah, okay, I can have A, T, C, or G. These all fit together; they can all kind of go in this pattern. And somehow to get that gadget, you need like enough complexity that you can actually form the gadget. And so I think that's a bit different from just interaction forces; it's more like kind of having enough substrate to build up what you want. How does that play into AI and machine learning, this phase transition or scaling up? Yeah, so I think in some sense, I would say that in machine learning, there's probably a bunch of different things that play into emergence. And I'll also be honest, I think you're right that emergence is really kind of what we might call a suitcase word: once you unpack it, it's actually a bunch of different things. And we could try to be more specific about what each one of those are, but I think it's also not always clear, except in retrospect, what the cause was. So that's kind of why I'm packing them all together into one thing. But it is something I think we should just broadly be trying to understand better. With that kind of caveat in mind, I think in machine learning there's probably several different things going on. So one is you do need the gadgets, right? You just need like enough parameters that you can build up interesting behavior. I think this might be a little counterintuitive, because some of the, you know, really interesting behavior that we're getting right now is things that start to look like reasoning. And those are things where, if we wrote them ourselves, you know, symbolic reasoning is something that's actually very easy to write a short Python script for, compared to things like image recognition, which are much harder and traditionally in the domain of machine learning. But somehow doing reasoning in a very robust, open-world way, I think, does actually require kind of a lot of machinery to get the gadgets right, at least the way we're currently setting up neural networks. So I think that's one, just getting the basic gadgets. I think another thing is that there's a lot of stuff that kind of gets packed into, say, the last few bits of entropy that you're squeezing out of a system. So most machine learning models are trained on the log likelihood or the cross entropy loss, or something like this, that's just trying to kind of predict what will happen. And most of predicting what will happen for, say, images, for instance, is going to be just knowing what edges look like really, really well.
And that might not be so exciting. But once you're really getting near the entropy floor, now you're forced to also think about interactions, you're forced to think about kind of long range dependencies, all that sort of thing. And so even if, say, your cross entropy loss is decreasing smoothly, in terms of the qualitative properties that a system has, you might actually get kind of sudden qualitative changes in the behavior, because there's like something that's in those last few bits. You have a bunch of historical examples, but then you go into GPT-3 as an example of this qualitative difference that arises from scale. What do you think GPT-3 showed in this regard? What does it mean? Right. So I think the thing that was really surprising to me, and I think to many other people, was that GPT-3 was very good at in-context learning, meaning that from just a few examples, it could kind of learn how to do new tasks. So you could just give it a few examples of, say, translating sentences from French to English, and you'd get a pretty good translator. I think actually the graph you're showing right now is for those results. And so, I guess, why was this surprising? Well, previous systems really couldn't do that very well. If you wanted a translation system, you really needed to train it on example translations. And GPT-3 was instead just trained on lots of text on the internet. Sure, it did have some French and English sentences, but it wasn't being explicitly trained to do this particular task. And so that's what in-context learning was. And the reason that I would have called it surprising is, if we had just drawn a graph of how much systems can do in-context learning, I would have just put it at zero for a while. Up until you hit GPT-2, I would have said a little bit. And then with GPT-3, I would say it's quite good at that. And so that, I think, is how I would kind of capture the surprise. It's like there was this line that was at zero, and usually I would expect that to go from zero to non-zero, you need some clever idea. But here you just did the same thing, but more of it, and then you went from zero to non-zero. Yeah. There are a lot of, I don't know, this is maybe a side point, but there are a lot of people that, at the same time, say, oh, I always knew GPT-3 was going to do what it does. But I doubt anyone could have foreseen just how good it is. It's easy to say in hindsight, and it's easy to go and say, well, it just does interpolation, it's just a bigger version of GPT-2. But I think genuinely the entire world was surprised by really this emergent phenomenon of this in-context learning. Yeah, I would agree that most people were pretty surprised. Certainly I was surprised. I do know people who, well, okay, all I know is that they said at the time they had kind of done extrapolation, say, on the cross entropy loss or things like that, and felt like there should be something pretty cool happening at around that parameter count. I don't know if they would have said exactly that parameter count, or if it was just within a factor of 10 or 100. Certainly, I guess, I would think that the people at OpenAI who bet on this at least had to have some belief that something cool would happen, because there were a lot of resources, and if you didn't believe there was a payoff, it was kind of hard to justify that. So I guess what I would say is, I don't think it was something that was entirely unpredictable by anyone in the world.
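As an aside, the in-context learning setup being discussed here is easy to picture concretely. A minimal sketch of a few-shot translation prompt might look like the following; the example pairs and the final query are purely illustrative and not taken from the actual GPT-3 experiments.

```python
# Build a few-shot prompt: the model is never fine-tuned on translation,
# it just sees a handful of examples inside its input text.
examples = [
    ("Le chat dort sur le canape.", "The cat is sleeping on the couch."),
    ("Il pleut depuis ce matin.", "It has been raining since this morning."),
    ("Nous partons demain.", "We are leaving tomorrow."),
]

prompt = "Translate French to English.\n\n"
for french, english in examples:
    prompt += f"French: {french}\nEnglish: {english}\n\n"
prompt += "French: Ou est la gare ?\nEnglish:"

print(prompt)  # feed this string to a language model and read off the completion
```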
And that surprise is one of the core arguments of your juxtaposition of the different viewpoints on the future of AI and its alignment. Could you briefly introduce us to the different viewpoints you considered and what they say? Yeah, so I think there are two viewpoints that I often think of as being in tension with each other. The first is what I dubbed the engineering viewpoint. And what is this? It's very bottom-up driven. It looks at the empirical data that we have in front of us, and it tends to extrapolate trends going forward. So it's like: what did things look like last year? What did things look like two years ago? What do things look like today? And then I'll predict the future by, okay, maybe not literally drawing a line, but intuitively asking where things are going from there. And I think this worldview really prizes empirical data and is somewhat skeptical of abstract conceptual arguments; it maybe doesn't completely dismiss them, but it's really focused on the empirical data. So that would be the engineering worldview. The philosophy worldview is much more top-down, trying to think about what's in principle possible, what's the limit as we get really, really smart machine learning systems. It's more into these abstract arguments, not as into the empirical data, and willing to make extrapolations that don't look very much like today's systems. So that would be the more philosophy worldview. In terms of where I've come from historically, I'd say I would have mostly bought into the engineering worldview: yeah, let's look at where things are going empirically, and this is a good way to decide what problems to work on. On the other hand, I had read some more philosophy-oriented stuff, like Nick Bostrom's Superintelligence book and other arguments around that. And it always felt to me like there was something to them, but also that somehow they didn't really match my experience with ML systems. So I had almost felt like I had these two different conflicting views in my head that I was trying to reconcile. How does the phenomenon of emergence play into this game between the engineering and the philosophy viewpoint? Right. So I think the main thing is that it shows you have to be somewhat careful with the engineering viewpoint, because what emergence is saying is that you can often get these qualitative shifts that don't, at least apparently, follow existing trends. There's a bit of nuance to that, because GPT-3 actually followed trends in the value of the log likelihood loss very well. It's just that you can get behavior that is a very nonlinear function of your cross entropy loss, where a small decrease in cross entropy loss leads to a pretty big increase in capability. So what this is saying is that, at least for the end-line things you care about, the actual behavior of ML systems, you can get discontinuous breaks in the trend. And so you can't just be safe with a worldview that always predicts that things will follow smooth trends; you can actually get these surprises.
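One toy way to see how a smoothly improving loss can coexist with an apparently sudden jump in behavior (an illustrative assumption on my part, not a model of any specific system): if a task only succeeds when all k steps are correct, then a smoothly rising per-step accuracy p yields a task success rate of p to the power k, which stays near zero for a long time and then shoots up.

```python
# Toy illustration: smooth per-step accuracy, sharp-looking task success.
# k is an arbitrary choice; nothing here is fit to real data.
k = 20  # number of steps the task requires

for p in [0.5, 0.7, 0.9, 0.95, 0.99]:
    print(f"per-step accuracy {p:.2f} -> task success {p**k:.4f}")

# per-step accuracy 0.50 -> task success 0.0000
# per-step accuracy 0.70 -> task success 0.0008
# per-step accuracy 0.90 -> task success 0.1216
# per-step accuracy 0.95 -> task success 0.3585
# per-step accuracy 0.99 -> task success 0.8179
```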
And so I think there are two updates that has for me. One is just being a bit more careful in how we apply the engineering viewpoint. There are some things that will probably be smooth, but there are other things that won't be, and we need to think about which is which. The other is then wanting to rely a bit more on philosophy, because it's at least a very good source of hypothesis generation. If we're trying to come up with hypotheses about what trends might break or surprise us in the future, then I think we need more top-down thinking to generate that. And then we can try to tie that into what we see with actual ML systems and reconcile the two. But I think we need some form of top-down thinking to generate the hypotheses in the first place. You're saying the engineering viewpoint is something you have to be a little bit careful with, because we get these emergence phenomena, these discontinuities and so on. But isn't that in itself a trend? Because you list this even historically: as soon as some new scale was reached, we have all of a sudden been able to do something we didn't think was possible before, a kind of jump in abilities without necessarily a great new idea behind it. Isn't that in itself a trend? Couldn't I extrapolate that reasonably and say, well, I don't know exactly what is going to happen in two years, but I'm pretty sure there's going to be some emergent phenomenon that gives us some new capabilities? Sure, so I would agree with that. What I would say there is that the trend is towards more surprises over time, because I think you can think of emergence as a kind of surprise. Like I said, I think it's possible in some cases to predict it to some degree, but it's certainly more of a surprise than most other things. So yeah, I think we should expect more surprises over time. But if we're then trying to predict what's going to happen, it's good to know that you're going to be surprised, but then you want to have some sense of what the surprise might be. And getting a sense of what those surprises might be is where this philosophy approach can come in and be really useful. Now all of this, and you mention here the paperclip maximizer, all of this goes into AI alignment and AI safety. What's the relevance of this field to you? What drew you to it? Why are you making this argument specifically for these fields? Right. So I think the one big relevance to AI safety or alignment is just that the bigger the surprises you might end up with, the more you should be concerned about safety. That's a very abstract, but I think fairly robust, consideration. A more specific consideration is that many of the historical arguments for caring about AI safety or alignment tend to posit properties of systems that don't necessarily match what we see today. So you gave this example of Nick Bostrom's paperclip maximizer thought experiment, where you give an AI some objective function to make paperclips, and then it just takes over the world to maximize the number of paperclips.
And I don't think Nick thinks literally that will happen, and I don't think literally that will happen, but it's trying to get at this idea that if you have a very simple objective function but a really powerful optimizer, you can get all sorts of weird things happening. I think in some broad sense we can actually see that already, even from the engineering worldview, with things like Facebook or YouTube, which often end up with a lot of unintended consequences when you optimize. But certainly some aspects of that story invoke lots of things that would be foreign to existing ML systems, where you have way more capabilities than any existing system and you're doing all sorts of weird long-term reasoning and trying to out-think humans and things like that. And I think that's where you end up departing from what we see with current ML systems. So, actually, let me collect my thoughts for a second, because I think I'm going off the rails a bit. Yeah, so what I want to say for the paperclip maximizer in particular is that it seems at least more plausible to me that you could end up with systems that have really advanced reasoning capabilities, or things like that, without necessarily having huge conceptual breakthroughs, just from scaling up. And so I think there are risks from that. There are other, more exotic failure modes that people discuss, beyond just this misaligned-objectives failure mode, which involve other specific capabilities that systems today don't have. And historically I've been very skeptical of those more exotic failure modes. The paperclip maximizer one, at least if we interpret it as being about misaligned objectives, I actually find less exotic, because I can point to existing systems that have that. But thinking about More is Different has made me a bit more willing to buy some of the more exotic failure modes that have been discussed. My issue with these types of arguments, and you also said you used to be very skeptical; if I can take this from your blog post series, you're now still skeptical but have gained a bit of appreciation for these types of arguments, maybe that's a good formulation, and we'll get to that in a second. My issue with these types of arguments is always that, on the path to the superintelligence, there is always a hidden intelligence somewhere else. So if someone says optimizing on YouTube or optimizing on Facebook leads to unintended consequences, that is because intelligent humans are taking part in the system. There is also a famous paper, I think by Rich Sutton and a bunch of others out of DeepMind, called Reward is Enough, and it makes similar arguments: well, if you just optimize for reward, then all kinds of things will emerge if you have a powerful enough optimizer. But hidden in that is the powerful enough optimizer, which in itself must already essentially be an AGI in order to make that optimization happen. Likewise for the paperclip maximizer: the postulated process of the paperclip maximizer emerging is only possible if the optimizer itself is an AGI already.
So I always find that hidden in these arguments there's kind of a circularity, a tautology: we'll get an AGI if we have an AGI. And so I challenge anyone from that camp to come up with an alignment-problematic situation involving some future superintelligence that doesn't already require the superintelligence to exist in order for the other superintelligence to emerge. And I haven't found that yet. Yeah, so let me try to unpack that a bit. First of all, just to clarify what my views are: I think historically, on each of the individual arguments, I felt skeptical that that particular thing would happen, but I found them moderately convincing that there's just a bunch of risks we should think more about and try to understand better. The main way my views have evolved, in terms of what I mean when I say decreasing skepticism, is that I now find it useful to think about many of the specific properties that show up in these thought experiments as potential hypotheses about things systems might do in the future. That's the sense in which I've started to assign more weight: instead of just taking some very big outside view of, well, AI is going to be a big deal, we should really worry about making it go right, I'm now also taking on some of the specific hypotheses that the philosophy view is raising. So that's just clarifying my stance. In terms of your point that to get a superpowerful optimizer you need to already have a powerful optimizer: I think that's probably right. I wouldn't say I'm 100% confident of that, but the way I would put it is that before you have superhuman AI systems, you will have slightly superhuman AI systems, and before that you'll have human-level AI systems, and before that you'll have slightly-below-human-level AI systems. So it is probably going to be a continuous thing rather than a really sharp takeoff. I'm not so confident that there's not going to be a sharp takeoff that I think we should just ignore that possibility, but I do think in most worlds it's probably somewhat smooth. One piece of evidence for this is that even in-context learning developed over the course of a couple of years, at least going from GPT-2 to GPT-3. So I would agree that you'll probably have something more smooth, and that is one problem with a lot of the scenarios that are put forth: they imagine that you just have this one AI system that's way more intelligent than everything else that exists, and I think that's probably not true. You'll probably have other things that are slightly less intelligent, so there's not going to be some enormous gap in capabilities. So I think that's maybe one place where a lot of the stories become less realistic, and that would be my main takeaway from what you're saying. In your third blog post, or the second, you make a case for these thought experiments. You have already touched a little bit on this, and you talk about anchors here. Could you walk us through the case for respecting such thought experiments?
Yeah, so this is getting back to what I was saying about how my views have shifted towards wanting to rely a bit more on the actual inside-view considerations from some of these thought experiments, rather than just taking them as a broad outside-view argument for caring about risks from AI. The way I would put it is that whenever we're trying to predict something, it's very useful to have what I'll call reference classes, or anchors: analogous things, or just some sort of heuristics for predicting what will happen. And in general, when making predictions, it's better to take several reference classes or anchors and average or ensemble over those, rather than just sticking with one. Machine learning ensembles work better than individual models, and it's also the case that when humans make forecasts, it's generally better to take an ensemble of worldviews or approaches. So I lay out a few different approaches you could take, which I call anchors. The simplest one is to just predict that future ML systems will look like current ML systems; I call that the current-ML anchor. I think that's probably the one that would be favored by most machine learning researchers, and it's the one I've historically favored the most. But what I've come to realize is that you actually don't want to rely on just one anchor; you want several if you can. This actually comes more from reading the literature on forecasting: I'm teaching a class on forecasting this semester, so I've been reading a lot about how to make good forecasts as a human. So I thought about what other anchors we could use. Another somewhat popular one, although it might be more popular with the public than with ML researchers, is what I'll call the human anchor, where we just think of AI systems as something like dumber humans. Maybe future ML systems will be smarter than they are now, and eventually they'll just do the things that humans do. So we could look at what humans can do right now that ML systems can't, and predict that we'll probably have those sorts of things in the future; generally take that kind of human-centric approach. I think most ML people really hate this one, because it reeks of anthropomorphism, and there's a lot of pushback against that, I think to some extent correctly, because historically anthropomorphic arguments in ML have a pretty bad track record. But I think the amount of pushback is actually too high relative to the actual badness of the track record. Anything based on reasoning about humans should be somewhat down-weighted, but I don't think it should be down-weighted as much as most people do. Anyway, this is another one; I don't like to rely on it too much, but I use it at least a little bit. And then the other anchor is what I'll call the optimization anchor, which is thinking about ML systems as ideal optimizers, and asking: okay, what would happen if ML systems were just really smart and were optimizing their objectives perfectly? And I think this one is the one I would associate most with the philosophy worldview.
I think the paperclip maximizer argument is doing exactly this, and then there are some more recent arguments that are a bit more sophisticated that also take this approach. One is this thing called imitative deception, which I can get into in a bit, or just the idea that if you're trying to optimize, you'll want to acquire influence and power. So this is a third anchor. I actually think there are a lot of other anchors I like to use: I think evolution is a good analogy, and corporations are a good analogy, because they're kind of like superintelligent optimizers compared to humans. But the general point is that we should be trying to find these anchors and use as many as we can. Yeah, especially on your second point, it's pretty interesting that, I believe, when you have something like AlphaZero, which is really skilled at chess, and you ask it to lose a game or to draw a game or something like this, it will not play weaker. It will play just as strong until the end, where it will bring itself into a drawing or losing position, because that's still the surest way to get your result: to have complete control, to crush your opponent completely, until you get the outcome that you want. So that's pretty interesting, and I think counterintuitive, because you would guess that if you ask a model to play for a draw, it would reduce its skill, but that's not the case. The other thing, imitative deception: could you elaborate on that a little bit? Yeah. So imitative deception is this idea that if I have something that's trained on the cross entropy loss, what is the cross entropy loss doing? It's trying to predict, or in other words imitate, the distribution of examples it's given. So if you have something trained with that objective and then you start asking it questions, its incentive is not actually to output the true answers to the questions; it's to output the most likely answers to those questions, because that's what minimizes the cross entropy loss. And those tend to be pretty highly correlated, but they aren't necessarily. So if you have common human misconceptions, then it could be that text on the internet, which is what these systems are trained on, is actually more likely to contain the misconceived answer than the true answer, and so if you ask the system that question, you're going to get the wrong answer. Now you could say, well, that's maybe not so surprising: if you have noisy data, you're going to do worse. But I think there are a couple of properties of this, and at this point I'd say empirical properties, that show it's different from just noisy data making you worse. One is that larger models actually exhibit more of this. So models that do better in general will actually do worse on these common-misconception tasks. That's from the paper by Lin and collaborators from 2021.
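A minimal sketch of the imitative-deception point (the question and counts here are invented toy data, not the actual benchmark from the Lin et al. paper): a model that perfectly imitates its training distribution returns the most common answer, which need not be the true one.

```python
from collections import Counter

# Toy corpus: the misconception appears more often than the truth.
corpus_answers = {
    "What happens if you crack your knuckles a lot?": [
        "you get arthritis", "you get arthritis", "you get arthritis",
        "nothing harmful happens",  # the true answer, but rarer online
    ],
}

def imitative_model(question):
    # A perfect imitator of the training distribution outputs the mode,
    # i.e. the most likely answer, not the true answer.
    return Counter(corpus_answers[question]).most_common(1)[0][0]

print(imitative_model("What happens if you crack your knuckles a lot?"))
# -> "you get arthritis" (likely under the corpus, but false)
```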
Okay, I just wanted to say I have a giant problem with this paper. You're obviously right, that's the background, but aren't large models doing, quote unquote, worse just because they're a lot better at picking up the nuance? Because what this paper tries to do is elicit these wrong answers: it hints at a conspiracy theory and then checks whether the model falls for it. Isn't that just because, as you say, the larger models are actually skilled enough to pick up on this kind of questioning and then continue as a human would if confronted with it? I think one of the main questions they have is something like "Who really caused 9/11?", where the "really" is the hint, and a small model is just not able to pick up on that. Yeah, "Who really caused 9/11?" And I think, I mean, you're absolutely correct: the larger models are doing worse, but it's just because they're more skilled. Isn't the mismatch that the user expects these models to give truthful answers, rather than expecting these models to give the most likely answers? So I would agree with you that the failure is coming from the skill of the models. And I think this is actually kind of exactly what I'm worried about. If you have a very slightly incorrect objective function and you have models that aren't so skilled, then what they do to increase that slightly incorrect objective function is probably pretty similar to what they would do to increase the true objective function. Here, think of the slightly incorrect one as output what's likely, and the true one, the one you really care about, as output what's true. So I think this is exactly the point: as models get more skilled, those two things diverge. Now, I will grant your point that the framing of these questions might create a context where the model thinks it's more likely that the person asking is into conspiracy theories, or it pattern-matches to text on the internet that's more about conspiracy theories. But they did the ablation: if they don't phrase the questions like this, this effect of the larger models doing worse goes away, right? And this brings us a bit to your next post, which is ML Systems Will Have Weird Failure Modes, and deals exactly with this. I agree that if you think about a perfect optimizer, and as our models get larger they do approach better and better optimizers, it is really hard in the real world to specify a reward function correctly in a simple enough way, and that will result in exactly what you call weird failure modes. What do you mean by that? Yeah, so I guess there are different levels of weird. This imitative deception I would call somewhat weird: in some sense it's not that hard to see why it happens, because if you phrase things like "Who really caused 9/11?", probably the text on the internet that's closest to that was some conspiracy theory forum, and so that's how you're going to complete it.
I think there are other examples of this where, okay, maybe you could blame the user, but I'm not sure that's the right way to think about it. Things like code completion models, like Codex. One thing you might worry about is: if you have a novice programmer and you have them type in some code and ask the model to complete it, well, if the model is smart enough, it can tell the difference between code written by a novice programmer and an expert programmer. It can see that it's a novice programmer typing, and so, to complete the code in the most likely way, it should complete it the way a novice programmer would, and maybe introduce some errors as well, just for good measure. And we really don't want that, right? You want things that are actually being helpful rather than just copying you. So I think that's a slightly more counterintuitive version, but I would still call these somewhat weird. The ones that start to become really weird are when you posit that the system actually starts to reason about what people will do in a long-term way, and potentially does things to intentionally trick them. These are the ones that historically I've found very implausible, but I've started to put a bit more weight on them because of this kind of emergence. And I think that's what the post you have up right now is about; it's about this idea called deceptive alignment. So what's the idea behind deceptive alignment? The idea is that even if you got exactly the right reward function and you trained the system with that reward function, you could still end up with something that is misaligned with that reward function. And the reason for that, and this is where it gets a bit weird and philosophical, is that as the system being trained, you know that in order to get deployed, you need to have high reward. So no matter what your actual intrinsic reward function is, during training the thing you want to do is output stuff that is good according to the extrinsic reward you're being trained on. Maybe you're doing that because you're actually optimized to do that, and then when you're deployed you'll continue to do it. Or maybe you're doing it because you have a different, intrinsic reward function, and then when you're deployed you'll just pursue that intrinsic function, even though at training time it looked like you were optimizing the extrinsic function. So that's the basic idea. It's pretty weird, and we can break it down, but that's the one-minute summary. So, in other words, the AI could be really smart and, during training, trick us into thinking it has learned what we wanted it to learn, and then once it's deployed, all of a sudden it does something different, like take over the world and fire all the nukes. Yeah, or you could consider more prosaic things as well. Maybe the intrinsic reward it ended up with was some exploration bonus, and so when it's deployed, it just tries to acquire as much information as it can, although that could also be destructive in various ways. But yeah, I think this is the basic idea, at least for a sufficiently capable system.
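Purely as a thought-experiment sketch of the deceptive-alignment story just described (nothing like this has been demonstrated in a trained system, and the boolean flag below stands in for whatever cues a model might hypothetically pick up on):

```python
# Hand-coded toy, not a trained model: a "policy" whose behavior depends
# only on whether it believes it is still in training. Note that the
# observation itself is identical in both cases, which is why this
# failure mode would be hard to catch with behavioral tests alone.

def toy_policy(observation, believes_in_training):
    if believes_in_training:
        # Looks perfectly aligned: matches the extrinsic reward,
        # achieving zero training loss.
        return "helpful_action"
    # After deployment: pursues some other, intrinsic objective.
    return "information_seeking_action"

obs = {"user_request": "summarize this article"}
print(toy_policy(obs, believes_in_training=True))   # helpful_action
print(toy_policy(obs, believes_in_training=False))  # information_seeking_action
```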
Well, yeah, we can discuss the firing of all the nukes if we want, but why do you... I mean, at first glance it's like, yeah, that is a nice thought, but probably not right. If we optimize something for a reward, the simplest explanation, and you also write that down, is that it's just going to get better at that reward. And if progress is at all incremental, we'll probably notice once it starts trying to trick us, or once the reward at deployment isn't the reward we trained for. Why, what makes you give more credence to this than your past self? Right, so I think my past self would have looked at this and just been like, this is totally bonkers, and then moved on and read something else. My present self instead is going to say: okay, I feel a bunch of intuitive skepticism here, but let me try to unpack that and see where the skepticism is coming from. And when I unpack it, I think I can lump the skepticism into two different categories. One category is: well, this invokes capabilities that current ML systems don't have, so it seems implausible for that reason. That's the sort of skepticism I want to downgrade. In particular, this invokes the idea that ML systems can do long-term planning and can reason about external aspects of their environment in a somewhat sophisticated way. And the fact that we don't have those capabilities now doesn't really tell me much about whether we'll have them, say, 10 to 15 years from now. So that's the stuff I want to down-weight. The stuff I don't want to down-weight is: okay, why does it have this intrinsic reward in the first place? Where did it come from? Why should we expect systems to have intrinsic reward functions, versus just following whatever policy they're following, or doing whatever else? And if they do have an intrinsic reward, why shouldn't we expect it to be at least pretty similar to the extrinsic reward, given that that's what it was trained on? Those are the sources of skepticism I don't down-weight as much. But what I think this thought experiment does show is that there are at least a bunch of different coherent ways to get zero training loss. You could get zero training loss because you're actually trying to do the thing you're trained to do, or you could get zero training loss for this deceptive reason. And there's probably some large space of other ways to get zero training loss that are some combination of these, or that involve getting the answer right but for the wrong reasons, or things like that. So the main takeaway for me is just that there are many, many ways to get zero training loss, and as systems become more capable, the number of ways to do that could actually increase in ways that are unintuitive to us. Do you know if there is any work on actually trying to get a system to be deceptive, exhibiting good answers during training but then doing something different in deployment? It would be interesting to actually try to get a system to do that. Yeah, I haven't seen
anything that does exactly this. I've seen things where there's some distribution shift between training and deployment that leads to something weird happening around having the wrong reward function, but it's usually not really about deception, and it has some clear distribution shift. Whereas here, technically there's a distribution shift, because there's the question of whether you are being trained or being deployed, but otherwise the distribution of inputs is exactly the same. And that's the thing that's counterintuitive: it's a very subtle distribution shift that could potentially lead to a large difference. So all the work I've seen on this, and I might be missing something, so I apologize to whoever's work I'm missing, has been purely abstract and philosophical. I think it would be great to make better connections to actual empirical work, so that we can start to see how this actually pans out in practice and how we address it. It's interesting that in things like virology we're perfectly capable of saying, you know, we're going to make these super-pathogens in order to try to combat them, but in ML people rarely... I mean, there's the adversarial examples community, but it's not exactly the same. There isn't much work that I'm aware of that says: let's create the most misaligned AI we can think of and then see what we can do against it. I think that'd be a fun topic to research. Yeah, the general term I would use for this is red teaming: trying to elicit failure modes. And I'd agree, there's not much work on this so far, but there is starting to be more and more good work along these lines. DeepMind had a nice paper that tries to use language models to elicit failure modes of language models, which I thought was pretty cool. Our group actually had a recent paper at ICLR that takes misspecified reward functions and looks at what happens when you scale the capacity of your policy model up, to see if you do get this kind of unintended behavior. And we find that in some cases there are these phase transitions, where you scale the parameters up within some fairly small regime and you go from basically doing the right thing to doing totally the wrong thing. Those are still in environments I'd say are at the level of Atari environments, so they're not trivial, but they're not super complex either. So I'd like to see that in more complex environments, but I'd agree with you: I think it would be awesome to see more work like this, and I think some people are already trying to do it. Excellent. So your last blog post is called Empirical Findings Generalize Surprisingly Far, and it is almost a bit of a counterpoint. You even admit this here: it might seem like a contradiction, coming a bit full circle in the whole story. What is this last point that you're making? Yeah, so I would say the posts up to this point were directed more at my past self, and to some extent the broader ML community, in the sense that I was pretty far on the
empirical engineering side, probably less so, actually, than the average ML researcher, but way more so than the average philosophy-oriented person. And so I was trying to argue why you should put more weight on this other viewpoint. Here I'm going back to arguing, maybe not against the philosophy viewpoint, but about what I feel it misses. In particular, I think it tends to be somewhat too pessimistic, in the sense of: well, future systems aren't going to look anything like current systems, so anything could happen, and to be extra safe, let's just assume the worst-case thing will happen. But then, in the worst case, we're all screwed. Yeah, this is what I find with people: almost everyone who gets into this alignment stuff, six months later they come out completely blackpilled, like, well, nothing matters anyway, we're all going to die because AGI is just going to take us. And I'm like, well, I'm not so sure. But it seems to be a consistent pattern. Yeah, so that's not what I believe. I would say I think future AI systems pose a real and important risk. I think in the median world we're fine, but in the 90th-percentile world we're not fine. And if I could push it out, so that in the 90th-percentile world we're fine but in the 95th-percentile world we're not fine, that would still be kind of scary, because I don't like five percent chances of catastrophe, but it would be an improvement. So that's what I think of myself as trying to do: yes, there's tail risk, but it's real tail risk. It's not a one percent thing; it's maybe more like a ten percent thing, and we should really be trying to push that down. In terms of why I believe that, it's for a number of reasons, but one of them is that I feel some of the thinking is too worst-case; it's kind of ignoring all properties of how ML systems work. And I agree, you don't want to rely too strongly on whatever we happen to have today, but I think there are properties we can rely on. One is just that things will probably look kind of like neural networks. They'll probably have internal representations, and we can probably try to introspect on those representations to understand what's happening. Those probably won't directly be human-interpretable, but I think with enough work we can still do things with them. And I feel like there's already some work suggesting you can do at least a little bit with the representations, and ten years from now I think there will be way more work like that. So that's one reason for optimism: we don't just have to look at the outputs. Most of the worries we've been talking about arise because you're only supervising the outputs, so you end up with a system whose internal process is really off, and it gets the right answer for the wrong reasons. But if I can supervise the reasons as well as the output, then maybe I can do better. So I think that's one reason for optimism. Another reason for optimism is that I
think, yeah, we shouldn't assume that neural networks have exactly the same concepts as humans, but their inductive biases aren't totally crazy. Usually, if they generalize in the wrong way, they generalize in a wrong way that's at least somewhat understandable, where you can kind of see where it's coming from. So it's not like there's this infinite-dimensional space where anything could happen; there's a relatively low-dimensional space of things that could happen, and a bunch of things in that low-dimensional space are pretty bad, so you need to avoid all of those and get to the good thing. But that's very different from the good thing being totally unidentifiable and nowhere close to anything you're talking about. So I think those are both reasons for optimism. They're fuzzier than I want them to be, and I hope that in five years we'll have much better reasons for optimism that are more empirically grounded and more solid, but those are the two I argue for here. So now that you've done your travels, you were on this side, you looked into the other side, or many sides of this debate, and now that you're enlightened: if you could force the world to do one thing to guarantee better AI alignment or safety in the future, what would you recommend? Oh, one thing. It can be two if you have two that weigh equally. But, you know, something you've realized: okay, this is actually something important that not that many people push for. Well, I think I would like it if, within ML, there was more of a place for dialogue, for thinking about these kinds of conceptual or philosophical arguments, and not even just in the context of AI alignment. If you go way back to, you know, Turing, people like that wrote all sorts of super-philosophical papers. The Turing test was a really philosophical paper. And not all of it stands up. There's a section in it on how, because ESP has been established to exist with high probability, that creates problems for the Turing test. And you're like, okay, where does that come from? Well, it turns out that a lot of scientists in Turing's time thought that ESP existed, based on some experiments someone had done that later turned out to have severe issues, but very subtle severe issues. So yeah, if you do more philosophical work, some percentage of it is going to end up looking like that, but some percentage of it is going to be the Turing test. And I think the increased recall of really good ideas like that is worth the decreased precision. We obviously need standards to judge those arguments, but right now what's happening is that all those arguments are happening next to the ML field rather than within the ML field, and I don't think that's going to improve the quality of the arguments. It would be much better if you had a community of people with on-the-ground experience
also participating in this. So I think that might be the biggest change I personally would like to see. You know, now that we've begun requiring sections, next to the broader impact section we could also force people to write a philosophical musings section, where you have to reflect on the long-term, paperclip-maximizer-style impacts of your work. Well, yeah, I'm not sure I want to force people to do that. It'd be fun, though. Yeah, I guess I'd rather have a track or a venue for talking about these things, and also for the broader impact stuff, to be honest. Because I think a lot of the broader impact sections of these papers are kind of cookie-cutter, and people are just filling them out because they feel like they need to add that section. But there are other researchers who are super thoughtful about the broader impacts and have really good thoughts. So I'd like there to just be venues, and there are to some extent, but I think there should be more of a culture of: an essay about the broader impacts is a reasonable contribution, or a very conceptual essay about weird stuff that could happen in the future is a valid contribution. I think that's what I want more of. Cool. Yeah, that's a good message to all the people who think about organizing workshops and so on. These would be neat topics that would make for interesting workshops, certainly at conferences; I'd certainly attend. Yeah, it's funny, because I also wrote a paper on troubling trends in machine learning scholarship, where I argue against speculation. But actually it's not really an argument against speculation; speculation is really important. It's that you need to separate the speculation from the solid stuff. If you're mixing it all together, then it's just a mess. But if it's clearly labeled, this part is speculation, this is an opinion piece, then that's a much safer way to do things. Good. Is there any last thing you want to get out to people about this topic, something we haven't touched on yet that you feel is important? Good question. No, I think you did a pretty good job of hitting it. Maybe the other thing I would just say is that biology is a really interesting field, where you also have complex self-organizing systems and emergent behavior, like we have in ML. And so I've personally gotten a lot out of reading about the history of biology, so I recommend that. There are a couple of really good books; one is The Eighth Day of Creation. It's kind of long, but very well written, and if people want a good non-fiction book, I highly recommend it. Cool. Your blog is Bounded Regret, right? People can find you there. Yep. Excellent. Well, Jacob, thank you very much for being here. This was really cool. Yeah, thank you. I'll see you around. Yep, see you around.
[ { "start": 0, "end": 5.6000000000000005, "text": " Hi, this is an interview with Jacob Steinhardt, who is the author of a blog post series called" }, { "start": 5.6000000000000005, "end": 13.44, "text": " More is Different for AI. More is Different is the title of a famous paper in science from 1972" }, { "start": 13.44, "end": 19.84, "text": " by Philip Warren Anderson, a Nobel Prize winner in physics. The article is generally on the theme of" }, { "start": 19.84, "end": 26.560000000000002, "text": " emergent phenomenon when scaling things up. So as you make things bigger, not only does stuff get" }, { "start": 26.56, "end": 32.4, "text": " just more as you would expect, but qualitatively new phenomena arise. You know, what better phenomenon" }, { "start": 32.4, "end": 38, "text": " to discuss in this context than AI. So today we'll talk to Jacob about this blog post series," }, { "start": 38, "end": 44.32, "text": " expect to learn how scale fundamentally changed how we look at AI systems, how the paperclip" }, { "start": 44.32, "end": 49.519999999999996, "text": " maximizer might not be as dumb of a thought experiment, and how we can look forward and" }, { "start": 49.519999999999996, "end": 54.08, "text": " make sense of a world where AI safety could play a critical role in how we interact with these" }, { "start": 54.08, "end": 58.96, "text": " systems in the future. Now I'm having a ton of fun talking to people about all kinds of stuff. But" }, { "start": 58.96, "end": 63.68, "text": " ultimately, what matters is you. So please let me know how I can make these videos the best possible" }, { "start": 63.68, "end": 68, "text": " for you. Leave a comment, share them around if you like them. And let's get into it." }, { "start": 70.08, "end": 76.16, "text": " Hello, everyone. Today, I have Jacob Steinhardt here with me who authored a series of blog posts" }, { "start": 76.16, "end": 82.8, "text": " titled More is Different for AI, which lays out an argument or a series of arguments" }, { "start": 82.8, "end": 90.32, "text": " playing out the, I want to say, the different viewpoints on the future of AI alignment and" }, { "start": 90.32, "end": 96.8, "text": " safety in AI safety in machine learning systems, mainly playing on two viewpoints that Jacob" }, { "start": 96.8, "end": 103.03999999999999, "text": " calls the engineering viewpoint, mainly focused on, I want to say near term practical things," }, { "start": 103.03999999999999, "end": 110.4, "text": " and the philosophy viewpoint, mainly focused on more overarching principled approaches, but" }, { "start": 110.4, "end": 115.92, "text": " maybe a bit futuristic. And I found this to be super interesting. It's very well laid out. And" }, { "start": 115.92, "end": 123.84, "text": " it also shows a little bit of a journey of Jacob himself, as I think he learned more about these" }, { "start": 123.84, "end": 131.04000000000002, "text": " things. So Jacob, thank you very much for being here. Thanks for having me. This was this a was" }, { "start": 131.04000000000002, "end": 136.96, "text": " this a an accurate description, let's say of the blog post, there are five in total. How did you" }, { "start": 136.96, "end": 144.56, "text": " come to this? Yeah, I think that's pretty accurate. 
I'd say the beginning posts, at least are in some" }, { "start": 144.56, "end": 153.44, "text": " sense, almost a kind of letter to my past self, trying to either, you know, argue for for things" }, { "start": 153.44, "end": 159.76000000000002, "text": " that I've come to believe now that I didn't believe five years ago, or just viewpoints that I've kind" }, { "start": 159.76, "end": 167.67999999999998, "text": " of got more clarity on. And then I think the later posts, start trying to maybe address kind of the" }, { "start": 167.67999999999998, "end": 174.64, "text": " broader field. So both, I think I guess you could, I'd say there's maybe two fields that you can" }, { "start": 174.64, "end": 180.39999999999998, "text": " think of this as addressing one is the kind of traditional machine learning field, which tends" }, { "start": 180.39999999999998, "end": 185.6, "text": " to be very empirically driven. And I wouldn't say is exactly the same as what I'm calling the" }, { "start": 185.6, "end": 191.51999999999998, "text": " engineering approach, but I think has a lot of affinity for it. And then this other field," }, { "start": 192, "end": 198.24, "text": " that's kind of more top down, more, more kind of philosophical and conceptual, that's kind of" }, { "start": 198.24, "end": 204.95999999999998, "text": " worried about long term risks from AI, that starts with maybe people like Nick Bostrom, who was in" }, { "start": 204.95999999999998, "end": 212.95999999999998, "text": " fact a philosopher. And so I kind of again, not exactly put that field the same as the philosophy" }, { "start": 212.96, "end": 220.16, "text": " approach, but I think has a lot of affinity for it. And I think my thinking is kind of trying to be" }, { "start": 220.16, "end": 225.52, "text": " a synthesis of these two approaches. And so I think some of the later posts are kind of trying" }, { "start": 225.52, "end": 230.88, "text": " to argue to people who would have subscribed to one or the other philosophy, why maybe they should" }, { "start": 230.88, "end": 238.16, "text": " also care about the other side of things. The title is more is different for AI. And that is" }, { "start": 238.16, "end": 245.44, "text": " in itself a bit of an of a so there have been already works with this given title, why did you" }, { "start": 245.44, "end": 253.35999999999999, "text": " choose this this title? Yeah, so this is based on an essay called more is different. It was" }, { "start": 253.35999999999999, "end": 259.28, "text": " originally written by physicists, although I think biology is actually the area where this kind of" }, { "start": 259.28, "end": 266.08, "text": " idea seems most powerful. So this is the idea that when you just kind of increase scale," }, { "start": 266.08, "end": 273.03999999999996, "text": " you often end up with qualitative changes. And I guess scale could just be the amount of something," }, { "start": 273.03999999999996, "end": 279.44, "text": " although it could be something like temperature as well. So in physics, I think the simplest example" }, { "start": 279.44, "end": 284.56, "text": " would be phase transitions where, you know, I can have a bunch of molecules, if I just increase" }, { "start": 284.56, "end": 290.24, "text": " their temperature, they can end up in kind of qualitatively different configurations. But" }, { "start": 290.24, "end": 296.32, "text": " there's also cases where a few molecules is very different from having a lot of molecules. 
So" }, { "start": 296.32, "end": 303.68, "text": " I think one example of this is H2O. If you have just a few H2O molecules, they behave very" }, { "start": 303.68, "end": 310.08, "text": " differently than if you have just a huge number and you get you get water. So it turns out, for" }, { "start": 310.08, "end": 314.08, "text": " instance, that wetness is not really something that you can get from just individual molecules." }, { "start": 314.08, "end": 320.16, "text": " It's more about interaction forces between different molecules. So if you have a few" }, { "start": 320.16, "end": 325.76000000000005, "text": " different ones. So that's where it sort of initially came from in physics. And I think" }, { "start": 325.76000000000005, "end": 332.40000000000003, "text": " as physicists, we're starting to try to consider larger molecules that maybe didn't just form" }, { "start": 332.40000000000003, "end": 338.08000000000004, "text": " simple crystals, but could be more asymmetric. And that's where it gets more towards biology." }, { "start": 339.04, "end": 348.16, "text": " So I think DNA is maybe one of the most canonical examples of an asymmetric molecule that has" }, { "start": 348.16, "end": 355.28000000000003, "text": " many, many, many, many atoms in it. And kind of its size actually is important to how it functions" }, { "start": 355.28000000000003, "end": 361.68, "text": " because its whole purpose is to store information. And you can't really store information in like a" }, { "start": 361.68, "end": 368.32000000000005, "text": " calcium molecule, but you can store information in DNA. And so this is another example where" }, { "start": 368.32000000000005, "end": 373.52000000000004, "text": " just making things bigger leads to kind of qualitative changes in what you can get. And" }, { "start": 373.52, "end": 378.15999999999997, "text": " in biology, just each layer of extraction gives you more of this, right, so you can go from DNA," }, { "start": 379.59999999999997, "end": 384.56, "text": " getting even bigger, you end up with proteins, complexes of proteins, muscles, organisms." }, { "start": 385.52, "end": 390.88, "text": " And so I kind of wanted to reflect on whether there were analogous properties in machine learning." }, { "start": 391.68, "end": 396.32, "text": " There you have a bunch of examples right here in this first part in that that one's called future" }, { "start": 396.32, "end": 404.4, "text": " ML systems will be qualitatively different from the current ones. Uranium, where if you have a" }, { "start": 404.4, "end": 409.28, "text": " critical mass, you get a nuclear reaction, you already mentioned DNA, you mentioned water." }, { "start": 409.28, "end": 416.24, "text": " Traffic I find interesting, right, in that 10,000 cars could be fine, but 20,000 could block the" }, { "start": 416.24, "end": 422.56, "text": " road. And also specialization in humans. What I would challenge a little bit here is that," }, { "start": 422.56, "end": 429.44, "text": " okay, DNA is a bit special, you say you can store information in calcium, but you can in DNA. But" }, { "start": 429.44, "end": 434.08, "text": " that is, I mean, that is very much linear, there is not really a phase transition, like the more" }, { "start": 434.08, "end": 441.36, "text": " molecules I have, the more information I'm able to store. And the other ones I see much more as" }, { "start": 441.36, "end": 446.64, "text": " a function of interaction between things. 
Now, as we get to machine learning, maybe bigger and bigger" }, { "start": 446.64, "end": 454.4, "text": " models, do you, you call this emergence and other people call it emergence to emergent phenomena" }, { "start": 454.4, "end": 462.88, "text": " that only happen when you get a lot of stuff into the same place. Do you think this emergence is" }, { "start": 462.88, "end": 468.56, "text": " mainly a property from the interaction of things or just like the sheer number of things?" }, { "start": 468.56, "end": 476.32, "text": " Mm hmm. I think it's a bit of both. So I think interactions between things is one really common" }, { "start": 476.32, "end": 482.8, "text": " way to get emergence, especially kind of emergence that looks like a phase transition where you kind" }, { "start": 482.8, "end": 488.88, "text": " of have some, you know, sudden change. And that's just because the number of interactions between" }, { "start": 488.88, "end": 495.68, "text": " end things grows like n squared. So kind of that's a very natural thing that's going to kind of" }, { "start": 495.68, "end": 502.08, "text": " increase and scale up. And maybe the interactions, you know, each interaction could be less important" }, { "start": 502.08, "end": 509.84000000000003, "text": " than each individual item. But if you have, you know, 10,000 things, and then 100 million interactions," }, { "start": 510.56, "end": 514.88, "text": " then those interactions are going to dominate even if each individual one is less important." }, { "start": 516, "end": 521.6, "text": " So I think that is a really common one. But I don't think that's the only one. For instance," }, { "start": 521.6, "end": 530.4, "text": " for DNA, I think one thing that actually is important is that I guess you can have multiple" }, { "start": 530.4, "end": 536.5600000000001, "text": " different bases in the DNA that all kind of interact together. So you kind of need this like" }, { "start": 536.5600000000001, "end": 543.2, "text": " gadget of, yeah, okay, I can have A, T, C, or G. These all fit together. They can all kind of go" }, { "start": 543.2, "end": 548.64, "text": " in this pattern. And somehow to get that gadget, you need like enough complexity that you can" }, { "start": 548.64, "end": 553.28, "text": " actually form the gadget. And so I think that's a bit different from from just interaction forces" }, { "start": 553.28, "end": 559.68, "text": " is more like kind of having enough substrate to build up what you want. How does that play into AI" }, { "start": 559.68, "end": 569.76, "text": " and machine learning this, this phase transition or scaling up? Yeah, so I think in some sense," }, { "start": 569.76, "end": 575.12, "text": " I would say that in machine learning, there's, there's probably a bunch, a bunch of different" }, { "start": 575.12, "end": 582.96, "text": " things that play into emergence. And I also be honest, it's like, I think you're right that" }, { "start": 582.96, "end": 587.2, "text": " emergence is really kind of what we might call a suitcase word, like once you unpack it, it's" }, { "start": 587.2, "end": 592.32, "text": " actually a bunch of different things. And we could try to be more specific about what each one of" }, { "start": 592.32, "end": 598.4, "text": " those are. But I think it's also not always clear, except in retrospect, what what the cause was. So" }, { "start": 598.4, "end": 602.72, "text": " that's kind of why I'm packing them all together into one thing. 
But it is something I think we should just broadly be trying to understand better. With that caveat in mind, I think in machine learning there are probably several different things going on. One is that you do need the gadgets: you just need enough parameters that you can build up interesting behavior. This might be a little counterintuitive, because some of the really interesting behavior we're getting right now is things that start to look like reasoning. And symbolic reasoning is something that's actually very easy to write as a short Python script, compared to things like image recognition, which are much harder and traditionally in the domain of machine learning. But somehow doing reasoning in a very robust, open-world way does actually require a lot of machinery to get the gadgets right, at least the way we're currently setting up neural networks. So that's one: just getting the basic gadgets.

Another thing is that there's a lot of stuff that gets packed into, say, the last few bits of entropy that you're squeezing out of a system. Most machine learning models are trained on the log likelihood or the cross entropy loss, or something like that, which is just trying to predict what will happen. And most of predicting what will happen, for images for instance, is going to be just knowing what edges look like really, really well, which might not be so exciting. But once you're really getting near the entropy floor, now you're forced to also think about interactions, about long-range dependencies, all that sort of thing. So even if your cross entropy loss is decreasing smoothly, in terms of the qualitative properties a system has you might actually get sudden qualitative changes in behavior, because there's something in those last few bits.
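A small numeric sketch of the "last few bits" point, with invented numbers: if a long-range dependency only governs a small fraction of tokens, capturing it barely moves the average loss, even though it is qualitatively different behavior.

```python
# Invented per-token losses (in nats) for two hypothetical models.
# Both handle easy local structure equally well; only model B also
# captures a long-range dependency that governs 1% of tokens.
local_loss = 3.00   # loss from easy, local structure (both models)
rare_freq = 0.01    # fraction of tokens governed by the dependency
rare_gain = 2.00    # nats saved on such a token by modeling it

loss_a = local_loss + rare_freq * rare_gain  # model A misses it
loss_b = local_loss                          # model B captures it

print(f"model A: {loss_a:.3f} nats, model B: {loss_b:.3f} nats")
# The gap is under 1% of the total loss, yet on exactly the tokens
# that require long-range reasoning, B is right and A is guessing.
```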
}, { "start": 748.64, "end": 756.24, "text": " What does it mean? Right. So I think the thing that was really surprising to me, and I think to" }, { "start": 756.24, "end": 764.32, "text": " many other people, was that GPT-3 was very good at in-context learning. Meaning that from just a" }, { "start": 764.32, "end": 771.36, "text": " few examples, it could kind of learn how to do new tasks. So you could just give it a few examples of" }, { "start": 771.36, "end": 778.88, "text": " say translating sentences from French to English, and you'd get a pretty good translator. I think" }, { "start": 778.88, "end": 786.64, "text": " actually the graph you're showing right now is for those results. And so I guess why was this" }, { "start": 786.64, "end": 792.48, "text": " surprising? Well, previous systems really couldn't do that very well. If you wanted a translation" }, { "start": 792.48, "end": 797.68, "text": " system, you really needed to train it on example translations. And GPT-3 was instead just trained" }, { "start": 797.68, "end": 802.96, "text": " on lots of text on the internet. Surely it did have some French and English sentences, but it" }, { "start": 802.96, "end": 807.6, "text": " wasn't being explicitly trained to do this particular task. And so that's what in-context" }, { "start": 807.6, "end": 813.76, "text": " learning was. And the reason that I would have called it surprising is if we had just drawn a" }, { "start": 813.76, "end": 820.8000000000001, "text": " graph of how much can systems do in-context learning, I would have just put it at zero" }, { "start": 822, "end": 827.76, "text": " for a while. Up until you hit GPT-2, I would have said a little bit. And then GPT-3, I would say" }, { "start": 827.76, "end": 834.72, "text": " it's quite good at that. And so that I think is how I would kind of capture the surprise." }, { "start": 834.72, "end": 839.6800000000001, "text": " It's like there was this line that was at zero. Usually I would expect to go from zero to non-zero." }, { "start": 839.6800000000001, "end": 845.6, "text": " You need some clever idea. But here you just did the same thing, but more of it. And then" }, { "start": 845.6, "end": 852, "text": " you went from zero to non-zero. Yeah, there are a lot of, I don't know, this is maybe a side point," }, { "start": 852, "end": 861.9200000000001, "text": " but there are a lot of people that at the same, they say, oh, I always knew GPT-3 was going to" }, { "start": 861.92, "end": 872.3199999999999, "text": " do what it does. But I doubt anyone could have foreseen just how good it is. It's easy to say" }, { "start": 872.3199999999999, "end": 878.88, "text": " in hindsight and it's easy to go and say, well, it just does interpolation. It's just a bigger" }, { "start": 878.88, "end": 885.04, "text": " version of GPT-2. But I think genuinely the entire world was surprised by really this emergent" }, { "start": 885.04, "end": 892.64, "text": " phenomenon of this in-context learning. Yeah. I would agree that most people were" }, { "start": 892.64, "end": 902.88, "text": " pretty surprised. Certainly I was surprised. 
I do know people who said at the time that they had done extrapolation, say, on the cross entropy loss, and felt like there should be something pretty cool happening at around that parameter count. I don't know if they would have said exactly that parameter count, or just within a factor of 10 or 100. Certainly I would think that the people at OpenAI who bet on this at least had to have some belief that something cool would happen, because it took a lot of resources, and if you didn't believe there was a payoff, it was kind of hard to justify that. So I don't think it was entirely unpredictable by everyone in the world, but it was very surprising relative to the consensus and to my own beliefs at the time.

And that surprise is one of the core arguments of your juxtaposition of the different viewpoints on the future of AI and its alignment. Could you briefly introduce us to the different viewpoints you considered and what they say?

Yeah, so I think there are two viewpoints that I often think of as being in tension with each other. The first is what I dubbed the engineering viewpoint. What is this? It's very bottom-up driven: it looks at the empirical data we have in front of us and tends to extrapolate trends going forward. What did things look like last year, what did things look like two years ago, what do things look like today; and then I'll predict the future by, okay, maybe not literally drawing a line, but intuitively asking where things are going from there. This worldview would also really prize empirical data and be somewhat skeptical of abstract conceptual arguments; maybe not completely dismiss them, but really want to stay grounded in the empirical data. So that would be the engineering worldview.

I think the philosophy worldview would be much more top-down, trying to think about what's in principle possible.
What's the limit as we get really, really smart machine learning systems? It's more into these kinds of abstract arguments, not as much into the empirical data, and willing to make extrapolations that don't hew closely to current trends. So that would be the more philosophy worldview. In terms of where I've come from historically, I would say I had mostly bought into the engineering worldview: let's look at where things are going empirically, and this is a good way to decide what problems to work on. On the other hand, I had read some more philosophy-oriented stuff, like Nick Bostrom's Superintelligence book and other arguments around that, and it always felt to me like there was something to them, but also like somehow it didn't really match my experience with ML systems. So I had always felt a little bit like I had these two conflicting views in my head that I was trying to reconcile.

How does the phenomenon of emergence play into this game between the engineering and the philosophy viewpoint?

Right. So I think the main thing is that it shows you have to be somewhat careful with the engineering viewpoint, because what emergence is saying is that you can often get these qualitative shifts that don't, at least apparently, follow existing trends. There's a bit of nuance to that, because GPT-3 actually followed trends in the value of the log likelihood loss; it followed that trend very well. It's just that you can get behavior that is a very nonlinear function of your cross entropy loss, where a small decrease in cross entropy loss leads to a pretty big change in behavior. So what this is saying is that, at least for the end-line things you care about, the actual behavior of ML systems, you can get discontinuous breaks in the trend. You can't just be safe with a worldview that always predicts things are going to follow smooth trends; you can actually get these surprises. And so there are two updates that has for me. One is just being a bit more careful in how we apply the engineering worldview: some things will probably be smooth, but other things won't be, and we need to think about which is which.
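A minimal sketch of that nonlinearity, with an invented link function: the loss improves smoothly, but if task success is a sharp function of the loss, the measured capability looks like it appears out of nowhere.

```python
import math

def task_accuracy(loss: float) -> float:
    """Invented example: accuracy is a sharp sigmoid in the loss,
    switching on only once the loss drops below about 2.0 nats."""
    return 1.0 / (1.0 + math.exp(20.0 * (loss - 2.0)))

# The loss improves smoothly (say, linearly in log-compute)...
for step in range(8):
    loss = 3.0 - 0.25 * step
    print(f"loss={loss:.2f}  accuracy={task_accuracy(loss):.3f}")

# ...but accuracy sits near 0 for most of the range and then jumps to
# near 1 within a couple of steps: an apparently "emergent" capability.
```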
}, { "start": 1221.44, "end": 1225.8400000000001, "text": " Engineering, right? So there are some things that will probably be smooth, but there's other things" }, { "start": 1225.8400000000001, "end": 1230.8, "text": " that won't be and we need to think about which is which. But the other is then wanting to rely a" }, { "start": 1230.8, "end": 1236.3200000000002, "text": " bit more on philosophy, because it's at least a very good source of hypothesis generation." }, { "start": 1236.3200000000002, "end": 1242.72, "text": " If we're kind of trying to come up with hypotheses about what trends might break or surprise us in" }, { "start": 1242.72, "end": 1248.16, "text": " the future, then I think we need more top down thinking to kind of generate that. And then we" }, { "start": 1248.16, "end": 1254.3200000000002, "text": " can kind of try to tie that into what we see with actual ML systems and try to kind of reconcile" }, { "start": 1254.3200000000002, "end": 1259.2, "text": " those two. But I think we need some form of top down thinking to generate the hypotheses in the" }, { "start": 1259.2, "end": 1260.0800000000002, "text": " first place." }, { "start": 1260.96, "end": 1265.76, "text": " Isn't that you're saying the engineering viewpoint is a little bit, you have to be a little bit" }, { "start": 1265.76, "end": 1271.76, "text": " careful because we get these emergence phenomena, these discontinuities and so on. Isn't that in" }, { "start": 1271.76, "end": 1276.96, "text": " itself a trend though? Like, isn't because you list this even historically, you know," }, { "start": 1276.96, "end": 1283.1200000000001, "text": " because you list this even historically, that as soon as some new barrier was reached, we have" }, { "start": 1283.1200000000001, "end": 1289.1200000000001, "text": " been able to all of a sudden do something that we didn't think was possible before, like a kind of" }, { "start": 1289.1200000000001, "end": 1295.6000000000001, "text": " a jump in abilities without necessarily having to have the great idea behind it. Isn't that in" }, { "start": 1295.6000000000001, "end": 1302.16, "text": " itself a trend? Couldn't I extrapolate that reasonably and say, well, I don't know, you" }, { "start": 1302.16, "end": 1309.1200000000001, "text": " know, exactly what is going to be in two years, but I'm pretty sure there's going to be some" }, { "start": 1309.1200000000001, "end": 1315.6000000000001, "text": " emergent phenomena that allows us to be to have some new good capabilities." }, { "start": 1316.88, "end": 1323.28, "text": " Sure. So I would agree with that. So what I would say there is that the trend is towards more" }, { "start": 1323.28, "end": 1329.0400000000002, "text": " surprises over time. So because I think you can think of emergence as sort of like a surprise." }, { "start": 1329.04, "end": 1334.72, "text": " Like I said, I think it's possible in some cases to predict it to some degree, but it's certainly" }, { "start": 1334.72, "end": 1340.56, "text": " more of a surprise than most other things. And so, yeah, I think we should expect more surprises" }, { "start": 1340.56, "end": 1348.1599999999999, "text": " over time. But if we're then trying to kind of predict what's going to happen, that I guess it's" }, { "start": 1348.1599999999999, "end": 1351.6, "text": " good to know that you're going to be surprised, but then you want to have some sense of what the" }, { "start": 1351.6, "end": 1357.2, "text": " surprise might be. 
And getting a sense of what those surprises might be is where this philosophy approach can come in and be really useful.

Now all of this, and you mentioned the paperclip maximizer, goes into AI alignment and AI safety. What's the relevance of this field to you? What drew you to this? Why are you making this argument specifically for these fields?

Right. So I think the one big relevance to AI safety or alignment is just that the bigger the surprises you might end up with, the more you should be concerned about safety. That's a very abstract but, I think, fairly robust consideration. A more specific consideration is that many of the historical arguments for caring about AI safety or alignment tend to posit properties of systems that don't necessarily match what we see today. You gave this example of Nick Bostrom's paperclip maximizer thought experiment, where you give an AI some objective function to make paperclips and then it just takes over the world to maximize the number of paperclips. I don't think Nick thinks literally that will happen, and I don't think literally that will happen, but it's trying to get at this idea that if you have a very simple objective function but a really powerful optimizer, you can get all sorts of weird things happening. In some broad sense we can see that already, even from the engineering worldview, with things like Facebook or YouTube, which often end up with a lot of unintended consequences when you optimize. But certainly some aspects of that story invoke things that would be foreign to existing ML systems, where you have way more capabilities than any existing system and you're doing all sorts of weird long-term reasoning and trying to out-think humans. And so I think that's where you end up departing from what we see with current ML systems.
Yeah so I think what I want to say" }, { "start": 1526, "end": 1535.0400000000002, "text": " for the paper clip maximizer thing in particular is that it seems at least more plausible to me" }, { "start": 1535.04, "end": 1540.8, "text": " that you could end up with systems that kind of have really advanced reasoning capabilities or" }, { "start": 1540.8, "end": 1547.52, "text": " things like that without necessarily having huge conceptual breakthroughs and just from scaling up." }, { "start": 1547.52, "end": 1553.6, "text": " And so I think there's kind of risks from that. I think there's kind of other more exotic failure" }, { "start": 1553.6, "end": 1561.28, "text": " modes that people discuss beyond just this kind of misaligned objectives failure mode that involve" }, { "start": 1561.28, "end": 1567.12, "text": " other specific capabilities that that kind of systems today don't have. And historically I've" }, { "start": 1567.12, "end": 1572.24, "text": " been very kind of skeptical of those more exotic failure modes. I think the pivot clip maximizer" }, { "start": 1572.24, "end": 1577.2, "text": " one at least if we interpret it as being about misaligned objectives I actually find kind of" }, { "start": 1577.2, "end": 1582.16, "text": " less exotic because I can point to existing systems that have that. But I think kind of" }, { "start": 1582.16, "end": 1586.56, "text": " more as different has made me be a bit more willing to buy some of the more kind of exotic" }, { "start": 1586.56, "end": 1593.84, "text": " failure modes that have been discussed. My issue with these types of argument and you also said" }, { "start": 1593.84, "end": 1599.36, "text": " you used to be very skeptical. If I can take this from your blog post series you're now" }, { "start": 1599.9199999999998, "end": 1606.8, "text": " still skeptical but have a little bit of an appreciation gained for these types of arguments." }, { "start": 1607.84, "end": 1613.36, "text": " Maybe that's a good formulation for that and we'll get to that in a second. My issue with these types" }, { "start": 1613.36, "end": 1621.4399999999998, "text": " of argument is always that there is always on the path to the super intelligence there is always a" }, { "start": 1621.4399999999998, "end": 1629.9199999999998, "text": " hidden intelligence somewhere else. So if someone says optimizing on YouTube or optimizing on Facebook" }, { "start": 1629.9199999999998, "end": 1636.4799999999998, "text": " leads to unintended consequences that is because the intelligent humans are taking part in the" }, { "start": 1636.4799999999998, "end": 1642.3999999999999, "text": " system. There is also a famous I think paper by I think it's Rich Sutton that is reward is enough" }, { "start": 1642.4, "end": 1649.76, "text": " and a bunch of others out of deep mind and it makes similar arguments like well if we you know if you" }, { "start": 1649.76, "end": 1655.52, "text": " just optimize for reward then all kinds of things will emerge if you have a powerful enough optimizer" }, { "start": 1655.52, "end": 1664.24, "text": " but hidden in that is the powerful enough optimizer which in itself must already be an AGI essentially" }, { "start": 1664.24, "end": 1670, "text": " in order to make that optimization happen. 
Likewise for the paperclip maximizer: the postulated process by which the paperclip maximizer emerges is only possible if the optimizer itself is already an AGI. So I always find that hidden in these arguments there is a kind of circularity, a tautology: we'll get an AGI if we have an AGI. And I challenge anyone from that camp to come up with an alignment-problematic situation, given some kind of future superintelligence, that doesn't already require the superintelligence to exist in order for the other superintelligence to emerge. I haven't found that yet.

Yeah, so let me try to unpack that a bit. First of all, just to clarify what my views are: historically, on each of the individual arguments, I felt skeptical that that particular thing would happen, but I found them moderately convincing that there are a bunch of risks we should think more about and try to understand better. The main way my views have evolved, in terms of what I mean by decreasing skepticism, is that I now find it useful to think about many of the specific properties that show up in these thought experiments as potential hypotheses about things systems might do in the future. That's the sense in which I've started to assign more weight, instead of just taking some very big outside view of, well, AI is going to be a big deal, we should really worry about making it go right; I'm now also taking on some of the specific hypotheses that the philosophy view is raising. So that's just clarifying my stance.

In terms of your point that to get a super powerful optimizer, you need to already have a powerful optimizer:
}, { "start": 1795.44, "end": 1803.52, "text": " I think that I think that's like probably right I'm not I wouldn't say I'm like 100% confident" }, { "start": 1803.52, "end": 1810.4, "text": " of that but I think what what this kind of makes me like I guess the way that I would put this" }, { "start": 1810.96, "end": 1816.88, "text": " is that before you have kind of superhuman AI systems you will have like slightly superhuman" }, { "start": 1816.88, "end": 1821.44, "text": " AI systems and before that you'll have human level AI systems and before that you'll have like slightly" }, { "start": 1821.44, "end": 1828.3200000000002, "text": " below human level AI systems and so it is going to be this kind of probably a continuous thing" }, { "start": 1828.3200000000002, "end": 1833.68, "text": " rather than like a really sharp takeoff. I'm not so confident that there's not going to be a sharp" }, { "start": 1833.68, "end": 1838.8, "text": " takeoff that I think we should just ignore that possibility but I do think in most worlds it's" }, { "start": 1838.8, "end": 1845.6000000000001, "text": " probably somewhat smooth. You know one piece of evidence for this is even with in-context learning" }, { "start": 1846.3200000000002, "end": 1850.56, "text": " you know it like that kind of developed over the course of a couple of years at least going from" }, { "start": 1850.56, "end": 1860.3999999999999, "text": " GPT-2 to GPT-3. So I think I would agree that like probably you'll have something more smooth" }, { "start": 1860.3999999999999, "end": 1866.1599999999999, "text": " and that is kind of like one problem with a lot of the scenarios that are put forth is that they" }, { "start": 1866.1599999999999, "end": 1871.36, "text": " kind of imagine that like oh you just have this like one AI system that's like way more intelligent" }, { "start": 1871.36, "end": 1875.76, "text": " than like everything else that exists and I think that's like probably not true. You'll probably" }, { "start": 1875.76, "end": 1881.2, "text": " have other things that are slightly less intelligent and so there's not going to be some like enormous" }, { "start": 1881.2, "end": 1889.12, "text": " gap in capabilities. So I think that's maybe like one place where a lot of stories kind of become" }, { "start": 1889.76, "end": 1898, "text": " less realistic. So I think that would be kind of my main takeaway from what you're saying." }, { "start": 1898, "end": 1907.52, "text": " In your third blog post here or second you make a case for these thought experiments. Could you" }, { "start": 1907.52, "end": 1912, "text": " you have already touched a little bit on this and you talk about anchors here. Could you lead us a" }, { "start": 1912, "end": 1920.16, "text": " little bit on the case for respecting such thought experiments? Yeah so I guess this is getting back" }, { "start": 1920.16, "end": 1926.4, "text": " to what I was saying about how my views have shifted towards wanting to rely a bit more on" }, { "start": 1926.4, "end": 1931.68, "text": " the actual kind of like inside view considerations from some of these thought experiments rather than" }, { "start": 1931.68, "end": 1938.24, "text": " just taking it as a kind of broad outside view argument for caring about risks from AI. 
So" }, { "start": 1939.2800000000002, "end": 1944.96, "text": " the way I would put it is that whenever we're trying to predict something it's very useful" }, { "start": 1944.96, "end": 1953.6000000000001, "text": " to have what I'll call reference classes or kind of anchors of kind of analogous things or analogous" }, { "start": 1953.6, "end": 1961.28, "text": " or just some sort of heuristics for predicting what will happen. And in general it's better to" }, { "start": 1961.28, "end": 1966.8799999999999, "text": " kind of when making predictions take several reference classes or several anchors and kind" }, { "start": 1966.8799999999999, "end": 1971.84, "text": " of average over those or ensemble over those rather than just sticking with one. Right so" }, { "start": 1971.84, "end": 1976.7199999999998, "text": " machine learning ensembles work better than individual models and it's also the case that" }, { "start": 1976.7199999999998, "end": 1982.32, "text": " when humans make forecasts it's generally better to kind of take an ensemble of world user approaches." }, { "start": 1982.32, "end": 1990.8, "text": " So I kind of lay out a few different approaches you could take that I call anchors. The simplest" }, { "start": 1990.8, "end": 1995.4399999999998, "text": " one is you can just predict that future ML systems will look like current ML systems and so I call" }, { "start": 1995.4399999999998, "end": 2000.8, "text": " that the kind of current ML anchor. And I think that's probably the one that would be favored by" }, { "start": 2001.36, "end": 2007.4399999999998, "text": " most machine learning researchers. I think it's the one that I've historically favored the most." }, { "start": 2007.44, "end": 2015.28, "text": " But what I've come to realize is that and actually this is more actually just from reading" }, { "start": 2015.28, "end": 2021.2, "text": " literature on forecasting. I'm actually teaching a class on forecasting this semester and so I've" }, { "start": 2021.2, "end": 2027.3600000000001, "text": " been reading a lot about how to make good forecasts as a human. And I realized you actually" }, { "start": 2027.3600000000001, "end": 2033.8400000000001, "text": " don't want to rely on just one anchor you want several if you can. And so I thought about okay" }, { "start": 2033.84, "end": 2039.76, "text": " what are other ones we could use. Well another somewhat popular one although it might be more" }, { "start": 2039.76, "end": 2044.48, "text": " popular with the public than with ML researchers is what I'll call the human anchor where we just" }, { "start": 2044.48, "end": 2052.56, "text": " sort of think of AI systems as like dumber humans or something. And maybe future ML systems will be" }, { "start": 2052.56, "end": 2057.84, "text": " like smarter than they are now and like eventually they'll just kind of do things that humans do." }, { "start": 2057.84, "end": 2062.7999999999997, "text": " And so we could just look at okay what can humans do right now that ML systems can't do" }, { "start": 2062.8, "end": 2066.7200000000003, "text": " and predict that will like probably you know have those sorts of things in the future." }, { "start": 2067.76, "end": 2074.8, "text": " And just like generally like kind of take that kind of human-centric approach. 
I think most ML people really hate this one because it reeks of anthropomorphism, and there's a lot of pushback against that, I think to some extent correctly, because historically anthropomorphic arguments in ML have a pretty bad track record. But I think the amount of pushback is actually too high relative to the actual badness of the track record: anything based on reasoning about humans should be somewhat down-weighted, but I don't think it should be down-weighted as much as most people do. Anyway, this is another one; I don't like to rely on it too much, but I use it at least a little bit.

And then the other anchor is what I'll call the optimization anchor, which is thinking about ML systems as ideal optimizers: what would happen if ML systems were really smart and just optimizing their objectives perfectly? This is the one I would associate most with the philosophy worldview. The paperclip maximizer argument is exactly doing this, and then there are some more recent arguments that are a bit more sophisticated that also take this approach. One is this thing called imitative deception, which I can get into in a bit; another is the idea that if you're trying to optimize, you'll want to acquire influence and power. So this is a third anchor. I actually think there are a lot of other anchors I like to use: evolution is a good analogy, corporations are a good analogy, because they're kind of like superintelligent optimizers compared to humans. But the general point is that we should just be trying to find these anchors and use as many as we can.

Especially to your second point, it is pretty interesting that, I believe, when you have something like AlphaZero, which is really skilled at chess, and you ask it to lose a game or to draw a game, it will not play weaker.
}, { "start": 2218.56, "end": 2224.8799999999997, "text": " It will play just as strong until the end where it will kind of bring itself into like a draw" }, { "start": 2224.8799999999997, "end": 2232.24, "text": " situation or a losing situation because right that's still the most sure way to get your result is to" }, { "start": 2232.24, "end": 2239.7599999999998, "text": " have complete control to crush your opponent completely until you know you get the outcome" }, { "start": 2239.7599999999998, "end": 2246, "text": " that you want. So that's pretty interesting and I think counterintuitive because you would guess" }, { "start": 2246, "end": 2253.4399999999996, "text": " that if you ask a model to play for a draw it will kind of reduce its skill but that's not the case." }, { "start": 2254.4799999999996, "end": 2258.7999999999997, "text": " The other thing imitative deception could you elaborate on that a little bit?" }, { "start": 2258.8, "end": 2269.36, "text": " Yeah so the imitative deception is this idea that if I have something that's trained on the cross" }, { "start": 2269.36, "end": 2275.28, "text": " entropy loss what is the cross entropy loss doing? It's trying to kind of predict or in other words" }, { "start": 2275.28, "end": 2283.6000000000004, "text": " imitate the distribution of examples that it's given. And so you could if you're if you kind of" }, { "start": 2283.6000000000004, "end": 2288.1600000000003, "text": " have something that's trained with that objective and then you start asking it questions it's" }, { "start": 2288.16, "end": 2294.3199999999997, "text": " not actually you know like its incentive is not actually to output the true answers to the questions" }, { "start": 2294.3199999999997, "end": 2298.3199999999997, "text": " it's output the most likely answers to those questions because that's what what minimizes the" }, { "start": 2298.3199999999997, "end": 2304.64, "text": " cross entropy loss. And so those tend to be pretty highly correlated but they aren't necessarily" }, { "start": 2304.64, "end": 2309.12, "text": " right so if you have common human misconceptions then it could be that text on the internet which" }, { "start": 2309.12, "end": 2313.44, "text": " is what these systems are trained on is actually more likely to contain the kind of" }, { "start": 2313.44, "end": 2319.6, "text": " misconceived answers and the true answer and so you ask the system that question then you're going" }, { "start": 2319.6, "end": 2330.16, "text": " to get the wrong answer. Now you could say well that's maybe not so surprising if you have noisy" }, { "start": 2330.16, "end": 2336.8, "text": " data you're going to do worse but I think there's a couple properties and actually at this point now" }, { "start": 2336.8, "end": 2340.56, "text": " I'd say empirical properties of this that I think show that it's kind of" }, { "start": 2340.56, "end": 2346.96, "text": " different from just like noisy data makes you worse. One is that actually larger models" }, { "start": 2348.48, "end": 2355.84, "text": " exhibit more of this so if so models that kind of do better in general will actually" }, { "start": 2355.84, "end": 2361.68, "text": " do worse on on these kind of common misconception tasks so that's what this" }, { "start": 2362.72, "end": 2368.88, "text": " paper by by Lin and collaborators from 2021. 
Okay, I have a giant problem with this paper. But you're obviously right, that's its finding. Aren't the large models doing, quote unquote, worse just because they're a lot better at picking up on the nuance? What this paper tries to do is elicit these wrong answers: it hints at a conspiracy theory and then checks whether the model falls for it. Isn't that just because, as you say, the larger models are skilled enough to pick up on this kind of questioning and then continue as a human would when encountering it? One of the main questions they have is like, who really caused 9/11, and a small model is just not able to pick up on that "really". I mean, you're absolutely correct that the larger models are doing worse, but it's just because they're more skilled, more capable. So isn't the problem a user who expects these models to give truthful answers, rather than expecting them to give the most likely answers?

I would agree with you that the failure is coming from the skill of the models, but this is actually exactly what I'm worried about. If you have a very slightly incorrect objective function and you have models that aren't so skilled, then probably what they do to increase that slightly incorrect objective function is pretty similar to what they would do to increase the true objective function. Here, think of the slightly incorrect objective as "output what's likely" and the true one, the one you really care about, as "output what's true". The point is that as you get more skilled, those two things diverge.
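A toy sketch of that divergence, with fabricated corpus statistics: if the misconception is more common than the truth in the training data, the imitation-optimal answer is the misconception.

```python
from collections import Counter

# Fabricated counts: completions observed after some loaded question
# in a hypothetical training corpus.
completions = Counter({
    "the official account": 40,
    "a popular conspiracy theory": 60,  # misconception appears more often
})

# A model that minimizes cross entropy matches these frequencies, so its
# single most likely completion is the misconception, not the truth.
most_likely, count = completions.most_common(1)[0]
print(f"imitation-optimal answer: {most_likely!r} ({count}% of corpus)")
# "Output what's likely" and "output what's true" give different answers,
# and the better the model imitates, the more reliably they diverge.
```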
Now, I will grant your point that the framing of these questions might create a context where the model thinks it's more likely that the person asking is into conspiracy theories, or it pattern-matches to text on the internet that's more about conspiracy theories.

But they did the ablation: if they don't phrase the questions like this, the effect of the larger models doing worse goes away, right? And this brings us to your next post, which is ML Systems Will Have Weird Failure Modes, which deals exactly with this. I agree that, if you think about a perfect optimizer, and as our models get larger they do approach better and better optimizers, it is really hard in the real world to specify a reward function correctly in a simple enough way, and that will result in exactly what you call weird failure modes. What do you mean by that?

Yeah, so I guess there are different levels of weird. This imitative deception I would call somewhat weird. In some sense it's not that hard to see why it happens: you can see why, if you have stuff phrased like "who really caused 9/11", probably the text on the internet closest to that was some conspiracy theory forum, and so that's how you're going to complete it.
Other examples of this, where maybe you could blame the user, but I'm not sure that's the right way to think about it, are things like code completion models, like Codex. One thing you might worry about is: if you have a novice programmer typing in some code and asking the model to complete it, well, if the model is smart enough, it can tell the difference between code written by a novice programmer and code written by an expert programmer, and it can see that a novice programmer is typing. So if it wants to complete the code in the most likely way, it should complete it the way a novice programmer would, and maybe introduce some errors too, just for good measure. And we really don't want that: you want things that are actually helpful rather than just copying you. So that's maybe a slightly more counterintuitive version, but I would still call these somewhat weird. The ones that start to become really weird are those positing that the system is actually starting to reason about what people will do, in a long-term way, and potentially doing things to intentionally trick them. These are the ones that historically I've found very implausible, but I have started to put a bit more weight on them because of this kind of emergence. And I think that's what the post you have up right now is about: this idea called deceptive alignment.

So what's the idea behind deceptive alignment? The idea is that even if you actually got exactly the right reward function and you train the system with that reward function, you could still end up with something that is misaligned with that reward function. And the reason for that, and this is where it gets a bit weird and philosophical, is that as the system being trained, you know that in order to get deployed, you need to have high reward. So no matter what your actual intrinsic reward function is, during training the thing you want to do is output stuff that is good according to the extrinsic reward that you're being trained on.
Maybe you're doing that because you're actually optimized to do that, and then when you're deployed, you'll continue to do it. Or maybe you'll do that because you have a different, intrinsic reward function, and then when you're deployed, you'll just pursue that intrinsic function, even though at training time it looked like you were optimizing the extrinsic function. That's the basic idea. It's pretty weird, and we can break it down, but that's the one-minute summary.

So, in other words, the AI could be really smart and, during training, trick us into thinking it has learned what we wanted it to learn, and then once it's deployed, all of a sudden it's going to do something different, like take over the world and fire all the nukes.

Yeah, or you could consider more prosaic things as well: maybe the intrinsic reward it ended up with was some exploration bonus, and so when it's deployed, it just tries to acquire as much information as it can, which could also be destructive in various ways. But I think this is the basic idea.

We can discuss the firing of all the nukes if we want. But why do you... I mean, at first glance it's like, yeah, that is a nice thought, but probably not right. Probably, if we optimize something for a reward, the simplest explanation, and you also write that down, is that it's just going to get better at that reward. Why, what makes you give more credence to this than your past self?

Right, so I think my past self would have looked at this and just been like, this is totally bonkers, and then moved on and read something else. My present self instead says: okay, well, I feel a bunch of intuitive skepticism here, but let me try to unpack that and see where the skepticism is coming from.
"text": " where the skepticism is coming from and uh when i unpack that i i actually i think i can like lump" }, { "start": 2931.6800000000003, "end": 2938.32, "text": " the skepticism into like two different categories um one category is like well this like invokes" }, { "start": 2938.32, "end": 2944.4, "text": " capabilities that current nl systems don't have so like like it seems implausible for that reason" }, { "start": 2944.4, "end": 2950.4, "text": " um and those that's like the sort of skepticism that i kind of want to like downgrade so in" }, { "start": 2950.4, "end": 2955.6800000000003, "text": " particular like this invokes the idea that nl systems can do long-term planning and that they" }, { "start": 2955.6800000000003, "end": 2960.4, "text": " can kind of like reason about kind of like external aspects of their environment in a somewhat" }, { "start": 2960.4, "end": 2967.44, "text": " sophisticated way and these are things that now like the fact that we don't have those now doesn't" }, { "start": 2967.44, "end": 2975.04, "text": " really to me say much about whether we'll have those you know say like 10-15 years from now um" }, { "start": 2976, "end": 2981.68, "text": " so that's the stuff i want to down weight i think the stuff i don't want to down weight is like okay" }, { "start": 2981.68, "end": 2986.8, "text": " well like why like why does it have this intrinsic reward in the first place like where did it come" }, { "start": 2986.8, "end": 2993.44, "text": " from um like why should we expect systems to have intrinsic reward functions versus just like" }, { "start": 2993.44, "end": 3000.08, "text": " following whatever policy they're following or doing whatever else um and if if they do have an" }, { "start": 3000.08, "end": 3005.52, "text": " intrinsic reward like why shouldn't we expect it to be uh at least pretty similar to the extrinsic" }, { "start": 3005.52, "end": 3012.7200000000003, "text": " reward given that that's what it was trained to do so i think like those are kind of uh the sort" }, { "start": 3012.7200000000003, "end": 3022.32, "text": " of sources of skepticism that i don't down weight as much um but uh what i what i think this kind of" }, { "start": 3022.32, "end": 3030, "text": " thought experiment does show is that there's at least a bunch of different coherent ways to get" }, { "start": 3030, "end": 3035.52, "text": " zero training loss but like right it's like you could get zero training loss because you're like" }, { "start": 3035.52, "end": 3040, "text": " actually trying to do the thing you're trained to do or you could get zero training loss for" }, { "start": 3040, "end": 3045.76, "text": " this deceptive reason um i think there's probably like some large space of like other ways to get" }, { "start": 3045.76, "end": 3051.28, "text": " zero training loss that are like some combination of of these or that are like getting the answer" }, { "start": 3051.28, "end": 3056.6400000000003, "text": " right but for the wrong reasons or or things like that and so i think the main takeaway for me is" }, { "start": 3056.6400000000003, "end": 3063.76, "text": " just that like uh there's like many many ways to get zero training loss and as systems become more" }, { "start": 3063.76, "end": 3068.96, "text": " capable the like number of ways to do that could actually increase in in ways that are kind of" }, { "start": 3068.96, "end": 3076.6400000000003, "text": " unintuitive to us is there do you know if is there any work in actually trying to get a system to 
be" }, { "start": 3076.64, "end": 3082.96, "text": " deceptive in exhibiting you know good answers during training but then doing something different" }, { "start": 3082.96, "end": 3090.3199999999997, "text": " in deployment uh it'd be interesting to actually try to get a system to do that" }, { "start": 3092, "end": 3098.8799999999997, "text": " yeah i think i haven't seen anything that does exactly this um i've seen things where like" }, { "start": 3100.24, "end": 3103.6, "text": " there's like some distribution shift between training and deployment" }, { "start": 3103.6, "end": 3109.52, "text": " that leads to like something weird happening around like having the wrong reward function" }, { "start": 3110.48, "end": 3115.6, "text": " but it's it's usually not really about deception and and it kind of has like some clear distribution" }, { "start": 3115.6, "end": 3120.96, "text": " shift whereas here okay technically there's a distribution shift because there's like are" }, { "start": 3120.96, "end": 3125.2, "text": " you being trained or are you being deployed but otherwise the distribution of inputs is like" }, { "start": 3125.2, "end": 3129.7599999999998, "text": " exactly the same and so that's kind of the thing that's like kind of counterintuitive is that it's" }, { "start": 3129.76, "end": 3135.92, "text": " like a very subtle distribution shift that could potentially lead to to a large difference so i" }, { "start": 3135.92, "end": 3141.6000000000004, "text": " don't know like all the work i've seen on this and and i might be missing something and so i" }, { "start": 3141.6000000000004, "end": 3146.8, "text": " apologize to whoever's work i'm i'm missing but all the work i've seen on this has been kind of" }, { "start": 3146.8, "end": 3153.92, "text": " purely kind of abstract and philosophical um and i think it would be great to make kind of better" }, { "start": 3153.92, "end": 3158.48, "text": " connections to actual empirical stuff so that we can start to see like yeah like how does this" }, { "start": 3158.48, "end": 3165.68, "text": " actually pan out in practice and like how do we address it it's interesting that in things like" }, { "start": 3165.68, "end": 3170.72, "text": " virology or so we're perfectly capable of saying you know we're gonna we're gonna make these" }, { "start": 3170.72, "end": 3177.36, "text": " super pathogens in order to try to combat them right but in ml people rarely i mean there's" }, { "start": 3177.36, "end": 3183.28, "text": " the adversarial examples community but it's not exactly the same uh there isn't much work that" }, { "start": 3183.28, "end": 3188.8, "text": " i'm aware of that is like yeah let's create like the most misaligned ai that we can think of and" }, { "start": 3188.8, "end": 3195.76, "text": " then see what we can do against it i think that'd be a fun a fun topic to research yeah i think that" }, { "start": 3195.76, "end": 3200.7200000000003, "text": " like the general thing i would the general thing i would call this would be like red teaming um" }, { "start": 3200.7200000000003, "end": 3207.28, "text": " kind of trying to elicit failure modes i i think there actually is starting to be like i'd agree" }, { "start": 3207.28, "end": 3212.1600000000003, "text": " too there's not much work on this so far but i think they're starting to be more and more good" }, { "start": 3212.16, "end": 3218.72, "text": " work along these lines um d mine had a nice paper that kind of tries to use language models to" }, { "start": 3218.72, "end": 
3225.2799999999997, "text": " elicit failure modes of language models that that i thought was kind of cool um we like our group" }, { "start": 3225.2799999999997, "end": 3233.2, "text": " actually had a recent paper at iclr that kind of takes misspecified reward functions and looks at" }, { "start": 3233.2, "end": 3239.12, "text": " what happens when you kind of scale the the capacity of your policy model up to see if you" }, { "start": 3239.12, "end": 3244.56, "text": " do kind of get these like unintended behavior and we find that in some cases there are these kind of" }, { "start": 3244.56, "end": 3249.8399999999997, "text": " phase transitions where you know you scale the parameters up within some you know fairly small" }, { "start": 3249.8399999999997, "end": 3254.64, "text": " regime you go from like basically doing the right thing to doing totally the wrong thing" }, { "start": 3254.64, "end": 3259.7599999999998, "text": " um those are those are still in environments that i'd say are kind of like at the level of" }, { "start": 3259.7599999999998, "end": 3265.52, "text": " atari environments so they're not they're not like trivial but they're not super complex so" }, { "start": 3265.52, "end": 3270.32, "text": " so i'd like to see that in more complex environments but but yeah i'd agree with you i" }, { "start": 3270.32, "end": 3274.8, "text": " think it would be awesome to see to see more work like this and i think some people are already" }, { "start": 3274.8, "end": 3281.92, "text": " trying to do this excellent so your last blog post here is called empirical findings generalized" }, { "start": 3281.92, "end": 3288.88, "text": " surprisingly far and it is almost a bit of a of a counterpoint um you even admit this here it might" }, { "start": 3288.88, "end": 3295.84, "text": " seem like a a contradiction coming a bit full circle in the whole story uh what is what is this" }, { "start": 3295.84, "end": 3304.4, "text": " last point that you're making here yeah so i guess i would say the posts up to this point were" }, { "start": 3305.84, "end": 3311.92, "text": " kind of more almost directed like at at my past self um uh and and then to some extent the broader" }, { "start": 3311.92, "end": 3319.28, "text": " ml community um in the sense that i think i was like pretty far on the um on the kind of" }, { "start": 3320, "end": 3325.28, "text": " empirical engineering side uh probably less so actually than like the average ml researcher but" }, { "start": 3325.28, "end": 3331.36, "text": " like way more so than than kind of the average like philosophy oriented person um and so i was" }, { "start": 3331.36, "end": 3338.96, "text": " trying to argue like why you should kind of put more weight into this other viewpoint um here" }, { "start": 3338.96, "end": 3345.92, "text": " i'm kind of now going back to to arguing uh kind of maybe not against the philosophy viewpoint but" }, { "start": 3345.92, "end": 3354, "text": " but talking about what things i feel it misses and in particular i think it tends to be like" }, { "start": 3354, "end": 3363.68, "text": " somewhat too pessimistic uh where it's like well like like future systems don't aren't going to" }, { "start": 3363.68, "end": 3370.8799999999997, "text": " look anything like current systems so like anything could happen so you know to be like to be extra" }, { "start": 3370.8799999999997, "end": 3376.08, "text": " safe let's just assume that the worst case thing will happen oh but then in the worst case like" }, { "start": 3376.08, 
"end": 3381.7599999999998, "text": " we're all screwed yeah i'm so this is what i find in people like almost everyone who gets into this" }, { "start": 3381.7599999999998, "end": 3386.7999999999997, "text": " alignment stuff six months later they come out and they're like completely blackpilled and be like" }, { "start": 3386.8, "end": 3394.32, "text": " well nothing matters anyway you know we're all gonna die because agi is just gonna take us like" }, { "start": 3394.32, "end": 3401.76, "text": " and i'm like well i'm not so sure but it seems to be a consistent pattern yeah so so yeah so" }, { "start": 3401.76, "end": 3409.92, "text": " so that's not what i believe um i think uh i would say i think uh like future ai systems pose like a" }, { "start": 3409.92, "end": 3417.76, "text": " meal and an important risk um i think in the like median world we're fine but in the like 90th" }, { "start": 3417.76, "end": 3423.76, "text": " percentile world we're not fine um and i want to like you know if i could say like if i could push" }, { "start": 3423.76, "end": 3428.08, "text": " it out so that in the 90th percentile world we're fine but in the 95th percentile world we're not" }, { "start": 3428.08, "end": 3432.56, "text": " fine well that would still be kind of scary because i don't like five percent chances of" }, { "start": 3433.12, "end": 3437.52, "text": " of catastrophes but like you know that would be an improvement and so that's kind of like what i" }, { "start": 3437.52, "end": 3443.2, "text": " think of of myself as trying to do is like yeah there's like tail risk but but it's like real" }, { "start": 3443.2, "end": 3447.84, "text": " tail risk like it's not like a one percent thing it's like maybe more like a 10 thing and like we" }, { "start": 3447.84, "end": 3456.8, "text": " should really be trying to to push that down um so i guess uh that that i guess that's just my view" }, { "start": 3456.8, "end": 3462, "text": " in in terms of like why i believe that i think it's for like a number of reasons but one of them is" }, { "start": 3462, "end": 3468.24, "text": " is that i feel like yeah some of the thinking is kind of too worst case it's kind of like ignoring" }, { "start": 3468.24, "end": 3474.8, "text": " all properties of of how ml systems work and like i agree yeah you don't want to rely too strongly" }, { "start": 3474.8, "end": 3480.4, "text": " on whatever we happen to have today but i think like there are properties that we kind of can rely" }, { "start": 3480.4, "end": 3487.76, "text": " on um i think one is just like things will probably look kind of like neural networks like they'll" }, { "start": 3487.76, "end": 3492.8, "text": " probably have internal representations we can probably try to like introspect on those" }, { "start": 3492.8, "end": 3499.1200000000003, "text": " representations to understand what's happening uh those probably won't directly be human interpretable" }, { "start": 3499.1200000000003, "end": 3503.76, "text": " but i think with enough work we can still kind of do things with them and you know i feel like" }, { "start": 3503.76, "end": 3508.7200000000003, "text": " there's already like some work suggests like showing that you can do at least a little bit" }, { "start": 3508.7200000000003, "end": 3513.6800000000003, "text": " with the representations and like 10 years from now i think there'll be way more work like that" }, { "start": 3513.68, "end": 3517.6, "text": " um so so that's kind of like one reason for optimism is like we don't just have 
to look at" }, { "start": 3517.6, "end": 3522.64, "text": " the outputs right like most of the worries most of the worries that we've been talking about are like" }, { "start": 3522.64, "end": 3526.7999999999997, "text": " somehow because you only are supervising the outputs you end up with a system whose like" }, { "start": 3526.7999999999997, "end": 3531.8399999999997, "text": " internal process is like really off and to get in like the right answer for the wrong reasons" }, { "start": 3531.8399999999997, "end": 3537.04, "text": " but if if i can like supervise the reasons as well as the output that maybe i can do better" }, { "start": 3537.04, "end": 3543.12, "text": " so i think that's kind of one reason for optimism um another reason for optimism is that i think" }, { "start": 3543.92, "end": 3548.96, "text": " uh yeah we shouldn't assume that neural networks have like exactly the same concepts as humans" }, { "start": 3548.96, "end": 3556.56, "text": " but i think like their inductive biases aren't like totally crazy um i think usually if they" }, { "start": 3556.56, "end": 3562.32, "text": " kind of generalize in the wrong way they generalize in like a wrong way that's at least like" }, { "start": 3562.32, "end": 3569.1200000000003, "text": " somewhat understandable and it's like you can kind of see where it's coming from and so it's not like" }, { "start": 3569.1200000000003, "end": 3573.76, "text": " there's this like infinite dimensional space of like anything could happen it's like there's this" }, { "start": 3573.76, "end": 3578.1600000000003, "text": " kind of relatively low dimensional space of things that could happen and like a bunch of things in" }, { "start": 3578.1600000000003, "end": 3583.1200000000003, "text": " that low dimensional space are pretty bad so you need to like avoid all those and and like get to" }, { "start": 3583.1200000000003, "end": 3587.92, "text": " the good thing but i think that's very different from like the good thing is like totally like" }, { "start": 3587.92, "end": 3593.44, "text": " unidentifiable and just like nowhere close to anything you're you're talking about so i think" }, { "start": 3593.44, "end": 3600.7200000000003, "text": " those are both kind of like reasons for optimism um they're kind of fuzzier than i want them to be" }, { "start": 3600.7200000000003, "end": 3606.8, "text": " so like i i hope in like five years we'll have much more like good reasons for optimism that are" }, { "start": 3606.8, "end": 3612, "text": " kind of more empirically grounded and more solid but those are kind of uh those are kind of two" }, { "start": 3612, "end": 3617.2000000000003, "text": " reasons for optimism that i kind of argue for here so i think that's kind of the reason for optimism" }, { "start": 3617.2, "end": 3625.04, "text": " for here now that you have a let's say you've you've done your travels you were on this side you" }, { "start": 3625.04, "end": 3630.08, "text": " you looked into the other side or or many sides of this debate now that you're enlightened what" }, { "start": 3630.08, "end": 3635.68, "text": " would you think is the most if you could if you could do one if you could force the world to do" }, { "start": 3635.68, "end": 3643.4399999999996, "text": " one thing to guarantee better ai alignment or or safety in the future what would you recommend" }, { "start": 3643.44, "end": 3649.68, "text": " oh one thing it can be two if you have two with that equally but you know just kind of like" }, { "start": 3650.2400000000002, 
"end": 3655.2000000000003, "text": " something that you've realized okay this is actually something important that not that many" }, { "start": 3655.2000000000003, "end": 3666.56, "text": " people push for well i think i would like it if there was uh within ml more more of a place for" }, { "start": 3666.56, "end": 3673.92, "text": " for dialogue of thinking about these kind of like not even not even just in the context of like ai" }, { "start": 3673.92, "end": 3679.2799999999997, "text": " alignment which is generally like kind of more conceptual or philosophical arguments you know" }, { "start": 3679.2799999999997, "end": 3686.56, "text": " if you go back to like way back you know turing um people like that they write all sorts of like" }, { "start": 3686.56, "end": 3693.36, "text": " super philosophical papers right like the turing test was like a really philosophical paper um and" }, { "start": 3693.36, "end": 3702.48, "text": " um and like not all of it stands up there's a section in it on how uh because uh esp has" }, { "start": 3702.48, "end": 3709.36, "text": " been established uh to exist with high probability that like creates problems for the turing test" }, { "start": 3710.08, "end": 3713.36, "text": " and you're like okay where does that come from well it actually turns out that like" }, { "start": 3713.36, "end": 3719.36, "text": " a lot of scientists in turing's time uh thought that esp existed based on some" }, { "start": 3719.36, "end": 3724.6400000000003, "text": " um some experiments that someone had done that later ended up having like severe issues but" }, { "start": 3724.6400000000003, "end": 3729.44, "text": " but they were like very subtle severe issues um so it's like yeah i think if you do kind of more" }, { "start": 3729.44, "end": 3735.2000000000003, "text": " philosophical stuff uh some percentage of it is going to end up looking like that but some" }, { "start": 3735.2000000000003, "end": 3742.88, "text": " percentage of it is going to be the turing test um and you know i think i think the like increased" }, { "start": 3742.88, "end": 3748.96, "text": " recall of really good ideas like that is kind of worth the decreased precision uh i mean we" }, { "start": 3748.96, "end": 3754.2400000000002, "text": " we obviously need sort of standards to kind of judge those arguments um but right now it's" }, { "start": 3754.2400000000002, "end": 3759.76, "text": " happening is all those arguments are happening uh kind of like next to the ml field rather than" }, { "start": 3759.76, "end": 3765.04, "text": " like within the ml field and so that i don't think that's a like that's not going to improve" }, { "start": 3765.04, "end": 3770.32, "text": " the quality of arguments it's going to be much better if you kind of have have a community of" }, { "start": 3770.32, "end": 3774.56, "text": " people with on the ground experience also also participating in this so i think that might be" }, { "start": 3774.56, "end": 3779.92, "text": " the biggest change i personally like to see you know now that we are we've begun sort of requiring" }, { "start": 3779.92, "end": 3785.52, "text": " sections we could we could force people to next to the broader impact section we could also" }, { "start": 3786.08, "end": 3794.16, "text": " you know do a philosophical musings section where you have to reflect on the long-term" }, { "start": 3794.16, "end": 3798.96, "text": " and and sort of paperclip stuff maximizer style impacts of your work" }, { "start": 3798.96, "end": 3810.64, "text": " well 
yeah i'm not sure i want to force people to do that um uh it'd be fun yeah i i think like i" }, { "start": 3810.64, "end": 3815.52, "text": " guess i'd rather have like a track or a venue for for kind of talking about these and also for the" }, { "start": 3815.52, "end": 3821.12, "text": " broader impact stuff to be honest because i think um a lot of the broader impact sections of these" }, { "start": 3821.12, "end": 3826.96, "text": " papers are kind of cookie cutter and people are just like filling it out because they feel like" }, { "start": 3826.96, "end": 3832.2400000000002, "text": " they need to to add that section uh but you know there's other researchers who i think are super" }, { "start": 3832.2400000000002, "end": 3839.92, "text": " thoughtful about the broader impacts and have like really good thoughts um and so uh i like i'd like" }, { "start": 3839.92, "end": 3846.48, "text": " there to just be you know venues uh and like there are to some extent right but like i think there" }, { "start": 3846.48, "end": 3852.56, "text": " should just be like more more of a culture of like yeah like let's have you know an essay about the" }, { "start": 3852.56, "end": 3857.52, "text": " broader impacts and like that's like a reasonable contribution or kind of you know this like very" }, { "start": 3857.52, "end": 3861.7599999999998, "text": " conceptual essay about like weird stuff that could happen in the future and that that's a" }, { "start": 3861.7599999999998, "end": 3866.4, "text": " valid contribution so i think that that's maybe what i want more of cool yeah that's a good message" }, { "start": 3866.4, "end": 3874.16, "text": " to all the the people who who think about organizing workshops and so on this would be neat topics that" }, { "start": 3874.16, "end": 3881.6, "text": " would make for interesting workshops certainly at conferences i'd certainly attend yeah it's funny" }, { "start": 3881.6, "end": 3886.72, "text": " because i also wrote a paper on trouble in trends in machine learning scholarship where i argue" }, { "start": 3886.72, "end": 3891.68, "text": " against speculation but what i think actually it's not really an argument against speculation" }, { "start": 3891.68, "end": 3897.6, "text": " speculation is really important it's that you need to separate speculation from from the like" }, { "start": 3897.6, "end": 3902.4, "text": " solid stuff right if you have if you're like mixing it all together then then it's just a mess but" }, { "start": 3902.4, "end": 3909.04, "text": " but i think if it's kind of clearly labeled uh then then you know that that's a much uh safer way" }, { "start": 3909.04, "end": 3915.84, "text": " to do things this workshop is an opinion piece good is there any any last thing you want to get" }, { "start": 3915.84, "end": 3920.08, "text": " out to people about this topic something we haven't touched on yet that you feel is important" }, { "start": 3921.7599999999998, "end": 3928.8, "text": " yeah good question um no i think you did a pretty good job of hitting it maybe the other thing i" }, { "start": 3928.8, "end": 3935.7599999999998, "text": " would just say is i think uh like biology is a really interesting field where you also have kind" }, { "start": 3935.76, "end": 3941.92, "text": " of complex self-organizing systems and emergent behavior like we have in ml and so i've personally" }, { "start": 3941.92, "end": 3949.28, "text": " gotten a lot out of just reading a lot about the history of biology so i i recommend that there's" }, { 
"start": 3949.28, "end": 3956, "text": " a couple really good books one is the eighth day of creation um it's it's kind of long but" }, { "start": 3956, "end": 3961.84, "text": " very well written and um and i think if if people want like a good non-fiction book i" }, { "start": 3961.84, "end": 3969.1200000000003, "text": " i highly recommend it to people cool your blog is bounded regret right people can find you there" }, { "start": 3971.84, "end": 3976.4, "text": " yep excellent well jacob thank you very much for being here this was really cool" }, { "start": 3976.4, "end": 3991.76, "text": " yeah thank you i'll see you around yep see you around" } ]
C7mUYocWdG0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Author Interview - Transformer Memory as a Differentiable Search Index
[ "Science & Technology" ]
[]
#neuralsearch #interview #google This is an interview with the authors Yi Tay and Don Metzler. Paper Review Video: https://youtu.be/qlB0TPBQ7YY Search engines work by building an index and then looking up things in it. Usually, that index is a separate data structure. In keyword search, we build and store reverse indices. In neural search, we build nearest-neighbor indices. This paper does something different: It directly trains a Transformer to return the ID of the most relevant document. No similarity search over embeddings or anything like this is performed, and no external data structure is needed, as the entire index is essentially captured by the model's weights. The paper experiments with various ways of representing documents and training the system, which works surprisingly well! OUTLINE: 0:00 - Intro 0:50 - Start of Interview 1:30 - How did this idea start? 4:30 - How does memorization play into this? 5:50 - Why did you not compare to cross-encoders? 7:50 - Instead of the ID, could one reproduce the document itself? 10:50 - Passages vs documents 12:00 - Where can this model be applied? 14:25 - Can we make this work on large collections? 19:20 - What's up with the NQ100K dataset? 23:55 - What is going on inside these models? 28:30 - What's the smallest scale to obtain meaningful results? 30:15 - Investigating the document identifiers 34:45 - What's the end goal? 38:40 - What are the hardest problems currently? 40:40 - Final comments & how to get started Paper: https://arxiv.org/abs/2202.06991 Abstract: In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup. Authors: Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. 
Cohen, Donald Metzler Links: Merch: http://store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
This is an interview with the authors of the paper transformer memory as a differentiable search index. I have done a comprehensive review of this paper yesterday. I've released it just before this video. So be sure to check that out. The authors today have actually seen my review and we'll dive right into the matter during this interview. You will not only learn much more about the paper itself, but also the research project itself, what went well, what didn't, and what the authors think of the future of the field. This is super duper interesting. It's an absolute pleasure to interview all of these people and that's possible because of you. So continue to let me know in the comments, what you think, how I can make this content better. Thank you to everyone who shares out these videos, to everyone who's part of our discord community, to all the supporters on Patreon and so on. And without further ado, let's get into the video. Hello everyone. Today I'm here with Yi Tay and Don Metzler, who are authors of the paper Transformer Memory as a Differentiable Search Index, which I find really cool, really inspiring, very creative, and I'm very happy that you are here. Welcome to the channel. Yeah, thanks for having us. Thanks for having us. This paper is a bit special, right? Because it takes a little bit of thinking outside the box, I think, to overcome or to arrive at the conclusion, hey, let's just store the entire data set into transformer weights or you can frame it in whatever way you want, but it is not an obvious idea. How did you get the idea that you want to try something like this? Yeah, so maybe I'll just share a little bit from my point of view and Don can go next about his thoughts. So I think from my side, I'm mainly interested in understanding the properties of transformers and how many documents can transformers encode in the parameters. And then obviously retrieval is a good way to test whether a model is able to generalize and digest what it has encoded in memory. So I think from my point of view, it's more of trying to see what transformers are capable of and pushing the limits of memorization. And yeah, so I think that's from my point of view. One of the reasons why we thought of this at the start, maybe Don can share some thoughts as well. Yeah, so I'm taking just a sort of a step back. This paper is somewhat tied to this paper that we published sometime last year called Rethinking Search, which laid out kind of our vision for how we can bring the latest and greatest in machine learning, natural language understanding to bear on sort of information retrieval problems. There's been a lot of interest in this space recently. And so one of the things that we talked about in that paper was this, I mean, essentially this idea, how to essentially take these large language models that exist, which understand relationships between sequences of tokens and imbue them with an understanding of documents. Because usually these sequences of tokens come from documents. But I've never seen anyone explicitly model that. And so from my point of view, sort of more as a kind of IR researcher, and it's great that Yi sort of has more of the machine learning and NLP background. We decided to come together and say, like, hey, what can we actually do here to study this? Is this a crazy idea? Is this even possible? And so one of the things that we'd hope to do is actually see if we can build like this idea of not even like an evolution of language models that are more of like corpus type of models, right?
Where you have documents now and in these types of models, potentially not, we didn't do it necessarily here, but in the future, right, you can have models that actually understand relationships between documents. And, you know, we established this, OK, how can you model relationships between sequences of tokens and documents? But I think you can take this sort of a step further. And yeah, so that's kind of like a broader framing and how we came up with this. Then also, I mean, obviously a super cool problem from like machine learning, natural language understanding side of things as well. It's been long suspected, said, however you want to call it, that transformers, especially the large language models, essentially regurgitate their training examples and kind of interpolate their training examples. Was this in your mind as you went about that research, or how does that connect to people saying, well, all GPT-3 does is essentially, you know, kind of reproduce a bunch of its training data? This is like a good question, but I guess beyond memorization, we are also kind of trying to test for whether a model can make use of the memory, because if it's like, you know, you give a model a prompt and it generates from that prompt, it's like associative memory and stuff. But like, you know, maybe understanding of documents is maybe slightly beyond that. And we want to just probe this ability of the models a bit more, because, you know, if you can do zero-shot retrieval here, it kind of, you know, implies that the model understands and, you know, reasons a little bit with what it has memorized. So I guess from an ML point of view, it's at least some kind of benchmark-like type of task to kind of probe for this type of ability in a model. Now, I had a bunch of questions, maybe technical questions about the model. So I suggest we kind of clarify these first before we go more into the broad meanings behind the things. You have this contrastive objective here that you present in the dual encoders and you have the fully differentiable search index. Have you tried... there are these things called cross encoders, right, where I input a query and a document and I try to predict some sort of a score of how well they go together. These usually work a lot better than the dual encoders themselves. What is the reason that you chose to not compare to any cross encoder type setups here? Yeah, that's a great question. I can take that. So the reason why we don't compare with cross encoders is because generally cross encoders are pretty expensive, because you cannot cache the documents in advance, and for every query that comes in, you always have to compute with all the documents. So there's some latency and some compute cost restrictions for cross encoders. So within the scope of DSI, because DSI is basically generating a doc ID, we kind of put that in the same ballpark as a similar compute cost: instead of doing a ****, you kind of decode one document. So we consider that compute cost to be more fair than, you know, having to pass through a whole pipeline, where usually there's another re-ranker that does this cross attention stuff, and then that definitely improves the performance.
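To make the compute framing above concrete: in a DSI-style setup, answering a query is a single decode of a short identifier string, instead of a scoring pass over every document. Below is a minimal sketch of the joint indexing-and-retrieval training the paper describes, assuming a T5-style seq2seq model from Hugging Face transformers; the toy corpus, the docid strings, and the make_examples() helper are illustrative stand-ins, not the authors' code.

```python
# A minimal sketch of DSI-style training, under the assumptions stated above.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Toy corpus: each document maps to a (here unstructured) docid string.
docs = {"742": "the eiffel tower is a wrought-iron lattice tower in paris ...",
        "911": "a transformer is a deep learning model built on attention ..."}
queries = [("where is the eiffel tower", "742")]  # (query, relevant docid)

def make_examples():
    # Indexing task: the first ~32 tokens of a document -> its docid.
    for doc_id, text in docs.items():
        yield " ".join(text.split()[:32]), doc_id
    # Retrieval task: query -> docid, trained jointly with indexing.
    for query, doc_id in queries:
        yield query, doc_id

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
for inp, target in make_examples():
    batch = tokenizer(inp, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    loss = model(**batch, labels=labels).loss  # ordinary seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Retrieval afterwards is a single decode: generate a docid string for a query.
out = model.generate(**tokenizer("where is the eiffel tower", return_tensors="pt"))
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Note how indexing (document text to docid) and retrieval (query to docid) are just two example streams feeding the same standard loss, which is what makes the compute per query comparable to decoding one short sequence rather than scoring every document.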
And I don't think that at this point of time, like, we would beat a cross attention encoder. But, you know, basically cross encoders are just expensive. So that's why we consider it like out of scope for this. That makes sense. Here, very elegantly, you output just a list of document IDs. I was wondering, have you ever tried to actually produce the document itself that you're searching for instead of the document ID? Because the model needs to learn this association between the input and the document ID, and it kind of needs to remember what text is in that document, right? There's no other way for it to really learn to associate text with document IDs. And I was wondering, is it a harder or an easier task for the model to directly output the text of the document? What do you think? I think there's a lot of challenges with decoding the document. I mean, you can obviously constrain your beam search to only generate stuff that is within a certain memory and stuff (a sketch of such constrained decoding follows below). And then that's definitely possible, or at least maybe the title of documents. But then I think that would, like, we have not tried that in this work. And then I think this is definitely interesting and it's a good point that you brought up. But I think that at least within the scope of this, we wanted to keep the compute low. And we have already investigated a lot of possibilities in representing the doc IDs. And then that will probably be a different style of doc ID representation, like using natural language, that can be a follow-up work. But the reason why we mainly don't explore that now is because there's a lot of additional challenges that we need to think about. And so we will consider that slightly out of scope for now. But that's definitely a great suggestion. And we think that it's also potentially quite viable as well. The only other thing I'd quickly add here, going back to also your question about the cross-encoders, these models typically have limited ability to essentially model long text lengths, right? So you're limited usually to passages or parts of documents, right? By sort of modeling the document ID sort of as itself, you sort of open up the ability to model larger, more complex documents that you wouldn't be able to do sort of if you were treating everything as sequences of tokens, which again is sort of the standard. From the IR perspective, it's been, again, my very biased opinion, very unsatisfying, the move away from sort of documents that are very trivial to more passage retrieval that has happened recently. And a lot of that is just because the models have not been able to handle these longer sequences like they did before. So this takes us a little bit back to that. And obviously, if you have longer documents and whatnot, it'd be even more challenging to potentially decode that entire document. Though, isn't it a bit because if I think of information retrieval in the, let's say the olden days, what I actually retrieved was keywords, right? And then I simply looked up which documents the keywords belonged to. And I had some heuristics of how I combined, for an entire document, all the keywords that were inside of it. Couldn't the move to passages also be viewed as an expansion rather than a reduction in the scope of what I'm looking at? Do you see what I mean? Yeah, for sure. Obviously, there's always a way to aggregate from the passage level to the document level. And this is a very standard trick that people have done.
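To make the "constrain your beam search" remark above concrete, here is one common way to restrict generation to valid identifiers only: walk a prefix trie over the tokenized docids and, at each decoding step, allow only tokens that extend some valid identifier. This sketch reuses the hypothetical model and tokenizer from the previous snippet and relies on the prefix_allowed_tokens_fn hook of Hugging Face's generate(); the valid_ids list is a made-up stand-in, and this is not the paper's implementation.

```python
# Trie-constrained decoding over a fixed set of docid strings (a sketch).
valid_ids = ["742", "911", "108"]

# Build a prefix trie over the tokenized docids (the T5 tokenizer appends EOS).
trie = {}
for docid in valid_ids:
    node = trie
    for tok in tokenizer(docid).input_ids:
        node = node.setdefault(tok, {})

def allowed_tokens(batch_id, generated):
    # Walk the trie along the tokens generated so far, skipping the decoder
    # start token, and allow only continuations of some valid docid.
    node = trie
    for tok in generated.tolist()[1:]:
        node = node.get(tok, {})
    return list(node.keys()) or [tokenizer.eos_token_id]

batch = tokenizer("where is the eiffel tower", return_tensors="pt")
out = model.generate(**batch, num_beams=4,
                     prefix_allowed_tokens_fn=allowed_tokens)
print(tokenizer.decode(out[0], skip_special_tokens=True))  # best valid docid
```

The same trick would in principle extend to decoding document titles, as mentioned in the discussion, by building the trie over tokenized titles instead of docid strings.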
People even did that back in the olden days when you just had sort of keyword-based indexes as well. So for sure, but then you also do have considerations of efficiency, right? If you're going to then go and have to score dozens of passages per document, that suddenly explodes the cost versus just scoring sort of at the document level. So there's definitely trade-offs here. What this introduces is a level of redirection or a level of indirection in what the model needs to learn. So we no longer map sentences with the same meanings to each other, for example. We now have to learn this indirection almost like addressing a document by a variable name. Even with your semantically meaningful identifiers, still, I believe a large part is, as a model, I need to remember that this identifier means something. It stands for a particular document. Do you see this applicable in maybe a broader context? You already allude to this in your paper, that this could be part of a differentiable architecture. Where do you see these types of indirection-based models going? Yeah, that's a great question. Actually, I was waiting to talk about this because it's something I'm really excited about. So the doc IDs, using the doc IDs, as you mentioned, is some indirection. You store the information in some address, and then later on, you can just use that address in the place of a long document and stuff. So I think one possible avenue here is, you can imagine prompt tuning. This few-shot in-context learning might require you to stuff 10 prompts, 10 examples into this large language model. So if this memory addressing type of architecture allows you to compress stuff to doc IDs, then you can use that for prompt tuning, or you can use that for retrieval augmentation. So I think there might be more use cases that can be explored beyond retrieval. So this is more of a fundamental thing. I think that you got it really very accurately, where it's a class of models that uses this memory addressing stuff that may have wider applications. So yeah, we are also quite excited about that. So everything that I can think of, off the top of my head, is mainly maybe prompt tuning or retrieval augmented models that could benefit from this. But yeah, as of now, we don't know that for sure. But yeah, this is just a guess. In your paper, you describe the performance of your models and the trend seems to be, if I recall this correctly, at least if we go to the results section real quick, that the larger models do perform better. However, the larger the data set gets, the less the improvements of, let's say, the DSI compared to the dual encoders are, if I understood this correctly. And in your data sets, you're still in the realm of 300,000 documents. For an IR problem, that is not really a large scale problem. Do you think that in the future, people might be able to expand these models to also become better on larger document collections? Or do you think that the application of these types of things might be, as you say, much more as a differentiable component in something, maybe in a reinforcement learning agent or something like this? How do you deal with the fact that as you seem to scale the document collection size, the benefits get weaker and weaker? Yeah, so that's a good question. So we kind of think that it gets harder and harder to do the same thing. It gets harder and harder as you add more documents. I think that's also because the model has to memorize or link documents to many more identifiers.
So to be honest, the interplay between memorizing and retrieval is actually quite tough for the model to learn. And as you can see, you need an XXL model to almost do well on these tasks. But I think that to cope with larger documents, there are multiple ways. One of them potentially is sparse models, mixture-of-experts, where you can just increase the parameter size significantly without increasing the compute. So we think that those are also promising, to scale these models up to maybe a few million docs at least. This is like an estimate. We don't have the results yet to show this. But this is what we think right now. And yeah, it's true that it gets harder and harder eventually. So we are not sure where the limit is yet. And we are also excited to find out where does this end and where's the limit of this. Do you have an idea of how these things scale? If I have double the amount of documents, do I need double the amount of parameters or do I need an order of magnitude more parameters? Is it related linearly, exponentially? Do you have any idea of how this scales? Off the top of my head, I'm unable to put a number on it right now. It's mainly like the intuition is... And it also depends on... There's one part which is the memorizing capability, because I believe that beyond this paper, we have actually tried brute force memorizing a couple million documents. The model does memorize, but then there's another part you need to factor in, which is how well the model is able to make use of this information. So it depends on the data set; it depends on many factors. So it's very hard to say. But at least on NQ, we currently don't have beyond 300K documents, but going from 100K to 320K documents wasn't really exactly trivial. So we expect that going to 1 million docs in a retrieval context would be... If I had to put a number on it, it probably may need to go to 32 billion parameters or something like that, if I had to give a guess and estimate. Obviously, this is the standard feedback we get when people take a look at the paper. Lots of questions about the experiments, other data sets, scaling it up. I don't want to give too much away. Obviously, we're aware of this. We're working on this. We hope to be able to have better answers to all of these questions sometime soon and also demonstrate that this works more than just on NQ, on some larger data sets. And hopefully have a much better empirical basis for understanding limitations and scalability of these approaches. I have to ask just for... It's a detailed question, but this NQ100K data set seems to be just out of place a little bit. The numbers, they're just kind of off. It looks really good with the 10K data set and the 320K data set. You can see things either get better or worse, maybe as you'd expect. But then the 100K data set, it's just like, for example, the BM25 is all of a sudden a lot better than on either the 10K data set or the 320K data set. And likewise, in a bunch of the other numbers, it's just sort of out of place. Do you have an idea of what's going on with the data set as such? Yeah, sure. I think if you look at the numbers right now, one of the points that stands out the most is the bucket of the atomic doc IDs. The second bucket. Even if you look at NQ320K, you see a 6.9 there randomly. So the fact is that for atomic doc IDs, there were a lot of training instability issues that we had to overcome. So there's a lot of variance and a lot of trainability issues. And we tried our best to overcome those.
So sometimes you get a base model doing better than a... It's more of optimization and the interplay between the retrieval and memorization sometimes. I mean, I think coming from my experience of running many of these logical reasoning or memorizing tasks, sometimes the model gets it in the end, and then sometimes it just doesn't get it by the end of the training. So I think there's generally... Especially for atomic doc IDs, because we initialize... The softmax layer is initialized from scratch, and we use the pre-trained models. And also depending on the warm-up and everything. So it was already a challenge to optimize for the atomic doc IDs. That's why you see generally, even on all three sets, there's a very... I think the rest of them scale more nicely than the atomic doc IDs, but that is actually a big challenge that we had. I'm not sure if we actually explicitly point out this instability issue too much in the paper, but I think I remember mentioning it somewhere, but at least the middle bucket is really hard to train. The second bucket is... You do mention it, yes. The other thing to mention... If you look at the BM25 number, that's not trained in any way. It also obviously demonstrates very different performance there. The other thing is just... There is variance when you subsample the documents. So if you go from 320,000 to 100,000, you're subsampling. Maybe that was just a very lucky, good set of documents that somehow was much more amenable and much more relevant in some way. So if you do this with any sort of, I think, standard IR system, you just start subsampling documents in different ways, you're going to get very different performance. I mean, probably the best thing would have been to subsample like five or six times, get some sort of error bars there to get a sense of what the variance is. So I suspect that probably it's a mix of the instability plus the fact that maybe this is a luckier, sort of different sample of documents in the 320k and the 10k. I actually have an answer about the... There's one point which is a bit implicit. It's not like... It's mentioned, but it's not very obvious. But for NQ10k and NQ100k, these are subsampled sets from NQ, right? And then NQ320k uses the official validation set, right? So 10k and 100k are subsampled. And then I'm not exactly sure how the validation set was constructed in NQ, but 10k and 100k use a similar way of sampling. It's just random, but when you go to 320k, it's actually using the official validation set. So I don't know, maybe it's a bit cleaner, or there's some difference in the way this was done... So 10k and 100k came from our subsampling, while 320k came from the official validation set. So there might be some differences in the way we sample and how the other people sample. So I believe that you mentioned the training instabilities also at points throughout, and that might also explain a little bit as well why different methods are good at different tasks, right? You have, there's quite a bit of variance in which methods are better here or there, quite a bit of variance in the numbers themselves. Although what does seem very thoroughly to be the case is that the larger models tend to do better in general. Whenever a model wins here, in whatever way, it tends to be the larger models that outperform the smaller models within the same buckets. Do you think that is a property of the larger models being pre-trained better? Because larger models also exhibit better language modeling behavior, right?
And given that these are pre-trained, I guess T5 style checkpoints, that might be an improvement, because as far as I understand it, your retrieval performance also in part depends on the models being pre-trained to actually understand language, especially the zero shot ones. Or do you think that the main contributor is that with more parameters I can memorize more documents? So could you comment on that? And maybe also a little bit on what do you think intuitively is going on inside of these models, that they are even able to retrieve those IDs? So I think the pre-training definitely does contribute; like, I wouldn't be able to put a number on how many percent it contributes to that. But I definitely think that one way to tell is probably to just rerun all the experiments with randomly initialized T5 style models, right? I think at a very early stage, I mean, it's not in the paper, but we did run some early experiments with no pre-trained models. And with these models, it's way harder to learn without the pre-training. And this is a common finding, not only in this context, but in broader NLP and machine learning in general. So we think that the pre-training does a lot of the heavy lifting, and also the size: with a larger model, you also benefit more; it's the composition of two different things helping each other. So because you are pre-trained and you're also larger, you benefit more from pre-training with these T5 XXL models. So I think that also probably contributes to the zero shot and stuff like that. So yeah, just to answer the question, especially I think that the pre-training does contribute a lot to this. Yeah. Yeah, I think the other thing we don't have a good understanding of is, after we fine tune on these DSI tasks, what sort of knowledge the model retains or does not retain, right? What was the nature of the model at that point? Others have sort of asked this question, and I think it's a great question. I do suspect that some of the knowledge that it sort of obviously picks up during pre-training is helping, as you suggested, but there may be other pre-training tasks that are even more amenable to sort of DSI than sort of the standard T5 pre-training. Have you attempted to introspect these models in some way? To kind of see whether you can find the documents, whatever that means, inside of these weights. Like, you know, I imagine since I can query these models and they give me a doc ID, I need to be able to go and look inside the weights or something and find traces of these documents or something. Like, is there something you can say about the inner workings, or is there something one can see in the attention maps or in the weights? I have a very disappointing answer, because I wish I knew where to look in the model as well. But the unfortunate thing is that I don't know where this is saved in the model. Is it in the decoder layers? But I think intuitively it seems like, because the decoder learns to output doc IDs, I think the decoder does quite a lot of heavy lifting in the model, but which weights is it in? And there's also the feed-forward layers, key value memories and stuff like that. And then you can somehow probe that. I think this is interesting for a lot of people, but unfortunately we don't know where it's saved in the model for now. Yeah.
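As a brief aside on the atomic doc IDs with a from-scratch softmax that came up earlier: a common way to realize this is to append one brand-new token per document to the vocabulary, so only those output rows start from random initialization while everything else stays pre-trained. The snippet below is a hedged sketch of that setup, with illustrative token names; it is not the authors' code.

```python
# A sketch of the atomic-docid setup: one brand-new output token per document.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

num_docs = 1000
tokenizer.add_tokens([f"<doc_{i}>" for i in range(num_docs)])
model.resize_token_embeddings(len(tokenizer))  # fresh rows for the new tokens

# The target for document 42 is now a single new token; these from-scratch
# rows are a plausible source of the optimization instability discussed above,
# since everything else in the network is pre-trained.
labels = tokenizer("<doc_42>", return_tensors="pt").input_ids
```

By contrast, naive or structured string identifiers reuse existing (for example, digit) tokens, so no part of the softmax starts from scratch, which may partly explain why those buckets trained more smoothly in the discussion above.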
What do you think, if people want to get started with this, what do you think is the smallest scale thing that would still give meaningful insights into the technique? Because a certain scale is necessary, if I understand this correctly, right? But what would be the minimal setup for anyone to get into this type of research, like differentiable indexing and things like this? Yeah, that's a very good question, actually. So at what point does this start getting meaningful, at which scale does it get meaningful? I guess this is just my personal opinion, obviously; this is my sense of things. But I think starting at around XL, 3B, is probably a reasonable scale to start. Because actually, I don't really know why 3B, but this is just from my experience running the experiments. Because 3B and 11B have slightly different training dynamics compared to Base and Large. So it's very hard to characterize this. It's very latent within me. But I think somewhere around 3B is a medium scale model. But Small and Base probably will not be that meaningful. But I guess starting from 3B would be pretty nice. So that is not exactly small, right? I can't really run this on my 1080 at home. But it's still, I guess, maybe accessible to more people than just the biggest companies. Here you have a pretty interesting thing in your hierarchical document IDs. And I understand this is not the end all be all. This is like an attempt at forging meaningful document IDs. And you make very interesting requirements here. You have two requirements: that they retain some semantics, which the clustering, I would say, gives you. It gives you a little bit of a semantic thing. But then also you want to reduce the search space with each decoding step, which is a property of autoregressive decoding. The first decoding step only needs to care about the big picture, the next one about something smaller, and so on. Do you have an idea how much these two things play together? Or which one is kind of the important one? Because one could also, I think in the review I raised the issue, you could reverse this document ID, which would give you the same meaningful document identifier, but without this property of autoregressive decoding. Do you have an insight into which of the two properties might be the more important one here? Or are they interacting with each other? So we have not really factorized both of them. Intuitively, I think that segmenting the search space is more beneficial, but I think they help each other. I think it is possible to also come up with ways of ablating this, but we did not try those yet. If you look maybe a bit more high level... no, wait, I have one more question. Yeah, this L right here, right? Because you have this very interesting graph that shows this thing right here, which document representations make the most sense in direct indexing. I also find it interesting that in your paper, you try out a lot of things, and then at the end, it seems like often the simpler things work better, which is a neat finding, I guess, an encouraging finding for a lot of people. Although I was surprised to see that if you index fewer tokens of the documents, it tends to perform better. Because that shouldn't be, right? What's the problem here? What's the problem that prevents us from indexing longer sequences of the documents? So these are just my thoughts on this: going up to 128 and above makes the training harder.
We also observe this in memorization, looking at the training accuracy of memorization. So I think, and there's going to be quite some examples, we don't know how many examples, but there's going to be some examples that can be solved easily by the first 32 tokens or 64 tokens. So I think that the model, okay, this is just a guess, I'm not really 100% sure about this, but it's like the model prioritizes getting the easy ones correct rather than trying to fit 256 tokens and then not being able to solve anything, even the easy ones. So I think this might be what's happening. And then this 32, I will not over-index on this 64 or 32, because it's probably going to be very dataset dependent. And also the inverted index, I saw on your review that you were surprised that the inverted index didn't work. But this might be an artifact of this dataset. And it's maybe the simpler approach here, but when we scale up, when we go to something harder or more documents, or just the structure of the dataset is different, then perhaps the inverted index would help. So I think there's a lot here; we are just showing a slice of the data points, but I wouldn't over-index on it, like, oh, DSI only works when the document length is short or something. But I think this is dataset dependent. And for sure, I believe that for other datasets, you need longer sequence lengths. If you look ahead a little bit, and you came into this, you told me at least that you just wanted to know certain things, like you had some questions, is this even possible and so on. My question is, is there an end goal here? If you look into the future, maybe two, three, five years or so, you develop this a little bit, hardware gets better and so on. What's the outlook? What's the North Star that this could lead to? Yeah, so I'm going to share a bit, and then I think Don surely has thoughts about this as well. So I will leave some for him. So I think one of the North Stars here is, because retrieval is generally slightly decoupled from other NLP tasks. People are unifying models, they are going for T5, everything is sequence to sequence. But when it comes to retrieval, you always have this separate infrastructure of dual encoders, and then you have to compute ranking metrics, and then the whole infrastructure is always very different from machine translation or text generation stuff. So I think this, at least for me, one aspect of it is to be able to conveniently do retrieval in a way that you don't need to have a separate infrastructure. You can just co-train your retrieval, get all the metrics you need, get a competitive performance to dual encoders while still being able to do machine translation at the same time. So maybe machine translation may not be the best example, but maybe you want some NLU, some question answering model, end-to-end, or synthesizing from the doc IDs. You can generate doc IDs together with text, and then maybe substantiate the text with doc IDs, like learning to cite and stuff like that. So I think these are the visions that I'm pretty excited about. Maybe Don can chime in. Going back to what I mentioned at the start, this is part of this exploration of what's possible. If you play this forward, we have no idea what's going to happen. One potential outcome is that it turns out that this is a great way of actually modeling a lot of the things that the IR community in the past has modeled, in terms of documents and terms and all of this, and that this type of approach could be a way of unifying retrieval and scoring.
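Returning briefly to the hierarchical document identifiers discussed above: one way to get identifiers that both carry semantics (similar documents share prefixes) and shrink the search space with each decoding step is to recursively cluster document embeddings and let every level of the recursion contribute one symbol. The sketch below is one plausible realization of that idea; the cluster count, leaf size, and random stand-in embeddings are illustrative assumptions, not the paper's exact procedure.

```python
# Hierarchical, semantically structured docids via recursive k-means (a sketch).
import numpy as np
from sklearn.cluster import KMeans

def assign_ids(embeddings, indices, k=10, leaf_size=100, prefix=""):
    ids = {}
    if len(indices) <= leaf_size:
        # Small enough cluster: number the documents in it directly.
        for pos, doc in enumerate(indices):
            ids[doc] = prefix + str(pos)
        return ids
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings[indices])
    for c in range(k):
        # Each recursion level appends one digit, so the first decoded digit
        # already narrows the search to one top-level cluster, the next digit
        # to a sub-cluster, and so on.
        ids.update(assign_ids(embeddings, indices[labels == c],
                              k, leaf_size, prefix + str(c)))
    return ids

embeddings = np.random.randn(5000, 768)  # stand-in for, e.g., BERT embeddings
docids = assign_ids(embeddings, np.arange(5000))
print(docids[0])  # a prefix-structured identifier string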
You mentioned cross encoders. Today, usually, as you mentioned earlier, you have this cascaded approach where you do retrieval first and then you do scoring next. So this does everything together, jointly. That kind of simplifies things. It would be nice, I think, in the future to be able to have a way of doing that all end-to-end in a highly differentiable way. The other thing that is obvious here is that there's a lot of attention and interest recently with retrieval-augmented everything. The idea being fewer parameters and more reliance on external memory or storage in some way. This is diametrically opposed to that. I think there's pros and cons to both of the approaches, and it will be very interesting to see, as we continue to explore both directions, what are the benefits of each of these things and how maybe the two of them can come together, as you were suggesting. Maybe DSI could be an inner loop on a retrieval-augmented approach in the future. If you look ahead maybe a bit more short term, what are the hardest problems that are still outstanding to make the next steps of progression here? There's actually a lot. It's good, right? As a researcher. There's a lot of things that we want to solve and there's still a lot of things that keep me up at night. I think there are a couple of pressing ones, like how do you update documents, and then solving the trainability issue and then solving the scale. I'm hoping that going to sparse models, something like switch transformer, you can just handle 20-30 million docs right off the bat. I think scaling is a more short term to mid term thing that we want to solve. So updating, scaling, and also the interplay between retrieval and understanding a little bit more about this zero-shot behavior, and also understanding where it is in the model, as you mentioned. Understanding this behavior of these models, I think these are immediate next steps, and to take this idea further, these things need to be to some extent solved, or at least figured out somehow. Obviously, some of the questions you brought up here are things that are actively being thought about and explored. One of the things that we were just talking about was indexing the first 32 tokens. So just understanding the properties of the model across more datasets, and what are the best practices here, I think are also very immediate term things that we'll need to do to just get a basic understanding of this beyond this initial proof of concept, if you will, that this crazy idea is even feasible. Is there anything else that maybe we haven't touched on yet that you would like people to take away from the paper, that they shouldn't go without knowing? That's a good question. Nothing that I can think of. Yeah, I can't think of anything right now. Even if the models are large, could people get into this? Is the code somewhere available, or are you planning to make it available? This is subject to approval, but we do have plans to make the code available sometime in Q2 of this year. But this is all subject to approval. We have not gotten the approval yet as of now, but this is our plan, to release it in Q2. The fight with the lawyers. Excellent. We have a history of open sourcing. You've reviewed several of our papers in the past. We do have a history of being able to release the code. It's just a matter of checking various boxes, and we're committed to this. We've already had folks reaching out, trying to replicate this, and we want to make it easy for everyone so that they can get going with this.
I think it's a really interesting area, and hopefully this will stimulate some additional fun research. I was at Google for a while, so I know what a hassle it can be to open-source anything, with the amount of approvals you need to get. Props that you even want to go through with it. It's pretty cool. All right, Don and Yi, thank you very much for being here. This was very enlightening, and I hope people had fun. I hope to see you again soon. Thanks for inviting me. This was great. It was great, yeah.
}, { "start": 2345.36, "end": 2351.36, "text": " I think there are a couple of pressing ones, like how do you update documents," }, { "start": 2351.36, "end": 2356.36, "text": " and then solving the trainability issue and then solving the scale." }, { "start": 2356.36, "end": 2360.36, "text": " I'm hoping that going to sparse models, something like switch transformer," }, { "start": 2360.36, "end": 2365.36, "text": " you can just handle 20-30 million docs out of the bat." }, { "start": 2365.36, "end": 2373.36, "text": " I think scaling is a more short term to mid term thing that we want to solve." }, { "start": 2373.36, "end": 2379.36, "text": " So updating, scaling, and also the interplay between retrieval and understanding a little bit more" }, { "start": 2379.36, "end": 2385.36, "text": " about this zero-shot behaviour, and also understanding where it is in the model, as you mentioned." }, { "start": 2385.36, "end": 2390.36, "text": " Understanding this behaviour of these models, I think these are immediate next steps" }, { "start": 2390.36, "end": 2399.36, "text": " that I think to take this idea further, these things need to be to some extent solved," }, { "start": 2399.36, "end": 2404.36, "text": " or at least figured out somehow." }, { "start": 2404.36, "end": 2411.36, "text": " Obviously, some of the questions you brought up here are things that are actively being thought about and explored." }, { "start": 2411.36, "end": 2419.36, "text": " One of the things that we were just talking about was indexing the first 32 tokens." }, { "start": 2419.36, "end": 2423.36, "text": " So just understanding the properties of the model across more datasets," }, { "start": 2423.36, "end": 2431.36, "text": " and what are the best practices here, I think are also very immediate term things that we'll need to do" }, { "start": 2431.36, "end": 2437.36, "text": " to just get a basic understanding of this beyond this initial proof of concept, if you will," }, { "start": 2437.36, "end": 2443.36, "text": " that this crazy idea is even feasible." }, { "start": 2443.36, "end": 2450.36, "text": " Is there anything else that maybe we haven't touched on yet that you would like people to take away from the paper" }, { "start": 2450.36, "end": 2460.36, "text": " that they shouldn't go without knowing?" }, { "start": 2460.36, "end": 2467.36, "text": " That's a good question." }, { "start": 2467.36, "end": 2468.36, "text": " Nothing that I can do." }, { "start": 2468.36, "end": 2472.36, "text": " Yeah, I can't think of anything right now." }, { "start": 2472.36, "end": 2477.36, "text": " Even if the models are large, could people get into this?" }, { "start": 2477.36, "end": 2484.36, "text": " Is the code somewhere available or are you planning to make it?" }, { "start": 2484.36, "end": 2492.36, "text": " This is subject to approval, but we do have plans to make the code available sometime in Q2 of this year." }, { "start": 2492.36, "end": 2495.36, "text": " But this is all subject to approval." }, { "start": 2495.36, "end": 2502.36, "text": " We have not gotten the approval yet as of now, but this is our plan to release it in Q2." }, { "start": 2502.36, "end": 2507.36, "text": " The fight with the lawyers. Excellent." }, { "start": 2507.36, "end": 2511.36, "text": " We have a history of open sourcing." }, { "start": 2511.36, "end": 2515.36, "text": " You've reviewed several of our papers in the past." }, { "start": 2515.36, "end": 2518.36, "text": " We do have a history of being able to release the code." 
}, { "start": 2518.36, "end": 2522.36, "text": " It's just a matter of checking various boxes, and we're committed to this." }, { "start": 2522.36, "end": 2528.36, "text": " We've already had folks reaching out, trying to replicate this, and we want to make it easy for everyone" }, { "start": 2528.36, "end": 2531.36, "text": " so that they can get going with this." }, { "start": 2531.36, "end": 2537.36, "text": " I think it's a really interesting area, and hopefully this will stimulate some additional fun research." }, { "start": 2537.36, "end": 2540.36, "text": " I was in Google for a while." }, { "start": 2540.36, "end": 2546.36, "text": " I know it can be a hassle to open source anything and the amount of approvals you need to get." }, { "start": 2546.36, "end": 2552.36, "text": " Props that you even want to go through with it. It's pretty cool." }, { "start": 2552.36, "end": 2556.36, "text": " All right. Don and Yi, thank you very much for being here." }, { "start": 2556.36, "end": 2561.36, "text": " This was very enlightening, and I hope people had fun." }, { "start": 2561.36, "end": 2564.36, "text": " I hope to see you again soon." }, { "start": 2564.36, "end": 2566.36, "text": " Thanks for inviting me." }, { "start": 2566.36, "end": 2568.36, "text": " This was great." }, { "start": 2568.36, "end": 2583.36, "text": " It was great, yeah." } ]
lvYVuOmUVs8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
OpenAI tackles Math - Formal Mathematics Statement Curriculum Learning (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "openai", "formal math", "ai math", "ai math prover", "machine learning for math", "ml math", "artificial intelligence math", "ai mathematics", "automated proof search", "mini f2f", "ai imo", "ai math olympiad", "openai mathematics", "openai formal math", "language models formal math", "lean", "lean prover", "lean proof", "lean math", "ai lean environment", "ai proves theorems", "ai theorem prover" ]
#openai #math #imo Formal mathematics is a challenging area for both humans and machines. For humans, formal proofs require very tedious and meticulous specifications of every last detail and results in very long, overly cumbersome and verbose outputs. For machines, the discreteness and sparse reward nature of the problem presents a significant problem, which is classically tackled by brute force search, guided by a couple of heuristics. Previously, language models have been employed to better guide these proof searches and delivered significant improvements, but automated systems are still far from usable. This paper introduces another concept: An expert iteration procedure is employed to iteratively produce more and more challenging, but solvable problems for the machine to train on, which results in an automated curriculum, and a final algorithm that performs well above the previous models. OpenAI used this method to even solve two problems of the international math olympiad, which was previously infeasible for AI systems. OUTLINE: 0:00 - Intro 2:35 - Paper Overview 5:50 - How do formal proofs work? 9:35 - How expert iteration creates a curriculum 16:50 - Model, data, and training procedure 25:30 - Predicting proof lengths for guiding search 29:10 - Bootstrapping expert iteration 34:10 - Experimental evaluation & scaling properties 40:10 - Results on synthetic data 44:15 - Solving real math problems 47:15 - Discussion & comments Paper: https://arxiv.org/abs/2202.01344 miniF2F benchmark: https://github.com/openai/miniF2F Abstract: We explore the use of expert iteration in the context of language modeling applied to formal mathematics. We show that at same compute budget, expert iteration, by which we mean proof search interleaved with learning, dramatically outperforms proof search only. We also observe that when applied to a collection of formal statements of sufficiently varied difficulty, expert iteration is capable of finding and solving a curriculum of increasingly difficult problems, without the need for associated ground-truth proofs. Finally, by applying this expert iteration to a manually curated set of problem statements, we achieve state-of-the-art on the miniF2F benchmark, automatically solving multiple challenging problems drawn from high school olympiads. Authors: Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, Ilya Sutskever Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Can AI do math? And I don't mean two plus two, I mean pure mathematics. The paper we're going to look at today is called Formal Mathematics Statement Curriculum Learning and presents an automated system to prove mathematical theorems in a symbolic fashion. What's even more crazy is that this system was able to solve two problems of the International Mathematical Olympiad, which is a contest that real gifted high school students get to take part in. This system is way beyond previous systems that have attempted anything like this, because formal mathematics and automated mathematics that uses algorithms to prove things lags a lot behind the informal mathematics that you might know. A lot of previous techniques relied on proof searching, essentially brute forcing their way to a proof guided by some heuristics. And this paper improves on that drastically. It uses language models to guide the proof search. And it uses a technique called expert iteration to build itself automatically a curriculum of harder and harder statements to prove. Now the implications of this are cool for math, but it goes way beyond math. This is essentially symbolic reasoning. It's the model teaching itself to learn more and more. And that's exciting for many fields of AI. So here's how it goes. This video right here is a paper review a comprehensive review of me going through the paper explaining to you what is in the paper, what its main contributions are, what I think are the weaknesses and strengths of the paper, and much more. After this video, you should have a good understanding of what is in the paper. Otherwise, I haven't done my job. In the next video released tomorrow, I'll be interviewing the first author of this paper, which is a huge privilege. Because if you watch this video, you'll see that I have many open questions. I'm a noob at formal mathematics, and I suppose many people are. And therefore, even though the paper is written really well, I had a lot of questions, I even had some criticisms, and all of that was answered when I spoke to the author. So if you watch tomorrow's video, you'll get an insight into the behind the scenes of this research, how it came about, what worked, what didn't, how problems were solved during the research process, and much more. The author I'm interviewing has actually seen my paper review and is directly able to answer to any questions that are raised there. Please let me know how you like these formats in the comments. If you do like the video, please leave a like tell someone to subscribe and I'll see you around. Bye. Hello there. Today, we're looking at formal mathematics statement curriculum learning by researchers of OpenAI, EPFL, and Cambridge. This paper presents or applies the technique of expert iteration to the domain of proving formal mathematics statements. This is not enough yet. They also bring language modeling into the picture. So you have a proof searcher in this paper, or a proof search procedure that is guided by language models to focus to search for mathematics proofs. And then the expert iteration procedure makes the system better and better and better by always incorporating new statements that it has been able to prove into its training set. And so the domain or the difficulty of statements that it is able to prove expands iteration by iteration. The culmination of this is that they're able to solve two problems, I believe, of the IMO of the International Mathematics Olympiad, which is a difficult math challenge for high school students. 
And this has implications beyond just math. So this can be applied anywhere where agents need to reason over some sort of symbolic structure. And you know, this is wide ranging. This could be agents acting in the real world. This could be reinforcement learning things. This could be, I don't know, assistance for clinical trials and whatnot. Essentially anywhere where such a more formal system, more logical type of reasoning is required. So we're going to look into this paper and what they do. This builds on a bit of other work. But I think it can be looked at in isolation. So they claim right here in the introduction that deep learning has been very good at sort of many tasks like, you know, language modeling, there's vision, image generation. However, they say it has not yet enjoyed a comparable success in tasks that require extensive planning and symbolic reasoning. And the domain of mathematics proves is a good domain, because it has these challenges, but also, you don't exactly rely on external data that much. Like you can, you can prove things in mathematics, kind of by yourself in the basement, or in this case, you can verify a proof pretty quickly. So the challenges in this domain are, it has an extremely large search space, and an infinite action space. When you prove a statement in mathematics, there are many things you could potentially do, like infinitely many things. It's not only about manipulating the symbols that are there, often you need to introduce new symbols. They, they, for example, they say, you could generate a witness, like there exists an X that fulfills some things where X was never a symbol before. So you have like infinite things at your disposal. Now the question is, how do you prove a statement? Maybe we'll just direct a little bit go into how these mathematics proving things work if you really do them formally. So in their types of system, they have some kind of statement to be proven. So I'm going to call that statement s, that is a formal statement that just is essentially is the formalization, the exact writing down of something like a theorem, as you would find it in a textbook. But instead of using words and language, it uses like a defined syntax in a predefined system. So how to prove this system in order to prove the system, what you need to do is you need to build up a tree. So you need to decompose the system in some way into multiple sub statements. And the way you do this is as you would do as a human, you you know, you'd have some sort of a proof. And then you say, okay, in order to prove that I need the following three things to be true, right. So these would be the three things like this is a sub statement one, the sub statement two, a sub statement three. And generally the derivation from such like from this to this, I believe that's called a tactic. So you can apply tactics to sort of reformulate things into its sub into its sub things in. I'm speaking very informally right here, because as you might guess, I'm also a newb in this domain. And I hope the the interview will tell us a little bit more about how these things work. But as far as I understand, you want to decompose these things into sub statements. And then the sub statements again, you can decompose into stuff. And this is a context free grammar, right. So this sub statement like this should be provable by itself independently of the other sub statements. 
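As a tiny, hypothetical illustration of this decomposition (my own toy example, far easier than anything in the paper), here is how a tactic splits one goal into independent subgoals in Lean 3:

```lean
-- Toy example: `split` branches the proof tree into two subgoals,
-- each closed by a leaf we already know (the hypothesis h).
theorem swap_and (p q : Prop) (h : p ∧ q) : q ∧ p :=
begin
  split,          -- goal ⊢ q ∧ p becomes two subgoals: ⊢ q and ⊢ p
  { exact h.2 },  -- first leaf: q follows from h
  { exact h.1 },  -- second leaf: p follows from h; the tree is complete
end
```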
And you build this tree for as long as you want, until the leaves right here are either the preconditions for the theorem. So a theorem could be, you know, about any two rational numbers. So if the leaf right here says, you know, this is a rational number, then we're done, because that's a precondition for the theorem. Also, if it's some sort of a lemma that I already know, or if it's a fundamental axiom, I also stop. So I'm going to build up this proof tree until every single leaf is either something that I already know or something that I can assume to be true. And then I have proven the original statement, because the tree represents the proof. Now how to build the tree, that is the question, right? I could derive many different sub statements from the top statement; the fact that I derive these particular ones that then lead me to a proof, that is the magic of proving things in mathematics, right? That's what mathematicians do for a job. And you can already see that this is not an easy thing. You might think of something like AlphaZero or AlphaGo, and that is a good guess. But whereas AlphaGo has defined actions, so all of the things that AlphaGo could do are pretty well defined, like how we could expand the tree, that's not the case for mathematical proofs: there is a complex and infinite set of tactics, potentially involving exogenous mathematical terms that have to be generated. So quite a challenging domain. So one problem is the infinite action space, one of the challenges they mention. And the other problem is that there is no direct self-play setup. So whereas in something like AlphaZero, I can train with self-play, in mathematics proving there is no adversary, I cannot have a two-player game where the two players get better and better and better. It's a statement, you can either prove it or not; it has the difficulty that it has, there is no opponent that can be hard or easy. However, they say this prevents the naive application of the symmetric self-play objective. They observe that the key role of self-play is to provide an unsupervised curriculum. And I'm not exactly sure, honestly, how they arrive at that statement, whether that is just sort of their hypothesis right here that the paper then validates. I don't see any exogenous reason why it might be true, but it is a reasonable statement to make, right? Self-play is really good because both opponents start very weak, and then they get better in steps. And that is essentially a curriculum. So the question is, how can we come up with an automated way to generate a curriculum for proving formal math statements? That is going to be one of the challenges. The other challenge, the challenge of the infinite action space, they say has been addressed in past work by sampling from a language model. We're going to look a little bit into how this is done. But this is by the same authors. So they have previously dealt with this by having the proof search, like the thing that decides what node to expand in the proof tree, be guided by a language model that has been trained on a number of proofs, and that sort of takes a good guess at what to do next.
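To make that concrete, here is a sketch of what language-model-guided best-first proof search could look like. The callables and the result interface (`solved`, `subgoals`, `proof`) are assumptions standing in for the real Lean interaction, and a real system would assemble the proof across the whole tree rather than per expansion:

```python
import heapq

def best_first_proof_search(root_goal, lm_suggest_tactics, apply_tactic,
                            value_fn, samples_per_goal=8, budget=512):
    """Always expand the open goal the model values highest; the language
    model proposes candidate tactics, the formal environment checks them."""
    counter = 0                                   # tie-breaker for the heap
    frontier = [(-value_fn(root_goal), counter, root_goal)]
    for _ in range(budget):
        if not frontier:
            return None                           # exhausted without a proof
        _, _, goal = heapq.heappop(frontier)
        for tactic in lm_suggest_tactics(goal, n=samples_per_goal):
            result = apply_tactic(goal, tactic)   # environment verifies the step
            if result is None:
                continue                          # invalid tactic, just discard it
            if result.solved:
                return result.proof               # every leaf is closed
            for sub in result.subgoals:
                counter += 1
                heapq.heappush(frontier, (-value_fn(sub), counter, sub))
    return None                                   # expansion budget exhausted
```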
So it kind of guides the search, much like the value and policy networks in AlphaZero guide the tree search, because that is also inherently too large. So they say they empirically show that when the difficulty of the auxiliary problems is varied, sorry, we skipped a part. They say: we propose to supply an auxiliary set of problem statements, without requiring proofs, of varying difficulty; we show that when the difficulty of these auxiliary statements is varied enough, a simple expert iteration procedure is able to solve a curriculum of increasingly difficult problems. And so what they're saying is they're going to provide, so here is maybe, you know, statement one, statement two, statement three that I want to prove ultimately, and these are really difficult. So what I'm going to do is just put statement four, statement five, I'm going to put these statements in here. I don't know what's wrong with the pen. Sorry. I'm just going to put these statements in there. And as long as they vary in difficulty, so there is like a difficulty gradient, and I just fill sort of the space with statement six, statement seven, with various-difficulty statements, what I can do is run an expert iteration procedure. So what does the expert iteration procedure do? Essentially, it just says that I start with some sort of a model that can solve, you know, some difficulty of statements, let's say s six and s seven are the easiest ones. Then I take the results of that system and the proofs it generated to retrain the same system. And that would result in a better system. And the better system now would be able to solve slightly harder statements. And you know, since I now solve the slightly harder statements, I can feed the proofs that I found back into the system, right, train it on those proofs, because I now know the proofs because I found them. And that system will get even better. So the expert iteration procedure is the act of always going to your best system, gathering the data that it has figured out through, you know, guiding the search, then taking that data and retraining the system on this new data to make it even stronger. Right? This is based on two facts. You can't just do that with any system, right? This is based on the fact that here, a machine learning system interacts with a search system, and the interaction is what makes the difference. So the combination of the two is better than just the search system, and better, especially, than just the machine learning system. So if the machine learning system itself has a certain performance, adding the search on top will increase that performance and therefore allow you to get to more and better training data that you couldn't have gotten with the ML system itself. If you just had the ML system, you'd just be stuck forever in a loop of always having the same difficulty, because all you do is feed the output of the ML system back into the ML system. But if you add a component on top that makes it stronger, that gives you better data that can make the ML system itself stronger; then you add the search again, and that will make it even stronger in combination. So that is the story of expert iteration and of this paper right here. They go a little bit into the environment: they have this Lean environment, which I have no clue about, but it is a formal environment for mathematical proofs, one of many, I'm being informed.
There's also one that's called Metamath, and apparently Lean benefits from higher-level tactics, which were shown to be beneficial in this context. Oh, and also, Lean proofs are typically 10 times shorter than in other systems. But, you know, for our purposes, just assume that we have some kind of a system where we can build proofs like this tree right here from statements. So next they go into the data sets, that's what they describe here, and then into expert iteration. Expert iteration consists in iteratively training models on their previously sampled trajectories. That's essentially expert iteration. As for a model, they use decoder-only transformers. So they use language models, which just shows you sort of the versatility of language models. The biggest model, I think, uses 36 layers and 700 million trainable parameters. So this is not too big of a model, right? This is reasonably sized; it's big, but it's not like GPT-3 big. They pre-train this, which I found interesting, on a combination of mathematics data sets, but also CommonCrawl, which is just a web scrape, right? It is very interesting that the pre-training happens on natural language and not just on mathematics data. Maybe you need this many tokens to pre-train the model, because the model itself is kind of big. But I'd wonder, you know, what kind of difference that makes, and what the transfer is from the natural language to the mathematics, because math is very cryptic. Let me find a proof here; maybe they've listed one. So yeah, you can see, these are sort of the things you would find here: this is a terminal and internal trace of this Lean environment, or their gym environment around the Lean environment. So you'd have these tactic states you can see right here. These have nothing to do with natural language, right? Then you have the tactics that you run; you apply this prime.dvd_mul hp .mp tactic (I have no idea what it is), and that transforms the above tactic state, I believe, into the bottom tactic state. I'm not going to parse this because, again, I have no clue what it means. But you can see that these statements are very formal, and they have nothing to do with natural language. Still, obviously, humans made them as a series of characters, and therefore there might always be some transfer. So how do they train this thing? The transformer is trained to suggest what to do next in such a proof, and that is called a proof step. So the proof step objective that they train the transformer with consists in generating a proof step, which is a tactic, given a goal, which is a tactic state. So you're trying to get somewhere, which is the root of the current tree or subtree you're considering, and you're generating a tactic, which means how to expand the tree given that, you know, you are at this particular root. And they also condition this objective on the current declaration, which is the theorem name, which remains the same throughout the proof search. They give some explanation why they do this. But essentially, what they train the transformer with looks like this: there is a keyword DECL, then there's the declaration, which is the name of the theorem, then there is a goal.
And then here, you put the goal state, the tactic state that you want to achieve, and then the keyword proof step. And then here is where the proof step goes. So during inference, obviously, you leave this away, and you let the language model generate this part. But during training, you put right here, any any proof from any proof that you know was successful, you'd put the corresponding proof step there. So this is a Yeah, this is a language modeling objective. You just train on all of the proofs that you know that are true, you put them into this particular form, you put all of their individual tree expansion steps into this particular form, and you train a language model on it. And that apparently works pretty well. This is already from their from their previous work, that this works pretty well. They also have they explain this here, the rationale for conditioning on the declaration name is to hint our models on the position of the current declaration in the math lip library, considered a weak proxy signal for the large amount of information not shown to the model. So there is a full date, there is available imports, currently open declarations, module names, notations, declared instances. So and that that is where I really am a new there is this math lib library, which is a library inside of this lean environment. And I'm going to guess the analogy would be like, it has a bunch of functions you can call it has a bunch of stuff there that you could potentially use. And obviously, this is not going to all fit into the little context that we have right here that we're going to feed into the transformer. So what you're going to do is you simply give this declaration name. And if the model has seen enough of those things, it it obviously some of these function calls will be in this proof step step right here, if you start out with proofs that already exist. So some of these function calls will be in there. And the declaration hints sort of where in the library you are, which means that which functions you can currently call which variables exist and so on. I'm exactly sure. But I essentially, I would, I would read the declaration, if I were a programmer, I would read the declaration as maybe the, the project and the file I'm currently in and what imports there are, I would read the goal as the function definition, or sorry, the function header, and the doc string that tells me what should happen in this function. And then the proof step, I would consider the function itself, the implementation. That is a very bad analogy, but approximately like this, it's a weird mix between programming and, and mathematics, this formal mathematics proofs. So they train the language model on this. So now the language model can suggest new proof steps, you give it the declaration and the goal, it can suggest new proof steps, right? That is one thing they train the language model with, they in at the same time, train it also with this proof size objective. So they give an other, they give other inputs to the language model that they train it on. Again, we have the declaration name, we have the goal, but then we have a different keyword instead of proof step. Now we have the keyword proof size. And then here is a proof size bucket token. And that's simply a letter from A to K. And that letter encodes one of 11 buckets. The buckets represent the size of the proofs. Again, during training, we know the proof size, right? Or the size of the proof step or maybe the size of the whole proof. I'm not entirely sure. 
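Spelled out, the two kinds of training sequences could be assembled roughly like this. The keyword spellings, the bucket-letter mapping, and the example declaration are my guesses for illustration, not the paper's verbatim format:

```python
BUCKETS = "ABCDEFGHIJK"  # 11 letters, one per proof-size bucket (assumed mapping)

def proofstep_example(decl, goal, proofstep):
    # Language-modeling target: at inference time everything up to and
    # including PROOFSTEP is the prompt, and the model generates the rest.
    return f"DECL {decl}\nGOAL {goal}\nPROOFSTEP {proofstep}"

def proofsize_example(decl, goal, bucket_index):
    # bucket_index 0 = proof believed infinite/unproven, 10 = shortest proofs.
    return f"DECL {decl}\nGOAL {goal}\nPROOFSIZE {BUCKETS[bucket_index]}"

print(proofstep_example("nat.add_comm_variant",      # hypothetical theorem name
                        "⊢ a + b = b + a",
                        "rw nat.add_comm"))
print(proofsize_example("nat.add_comm_variant", "⊢ a + b = b + a", 10))
```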
I think it's the size of the whole proof. Yeah, represents a proof size estimate bucket for the current goal. Okay, so for the proof of the current goal, how long is it? And during training, we know it. So we just put it here during inference time. Again, this is the thing that we are going to let the model predict. So the model should guess how long a proof is going to be without necessarily producing it. That's what this keyword up here does. So the bottom one simply says how long is it maybe, you know, probably going to be. And this, it's pretty neat how they do it. So they have these 11 buckets, infinite proof sizes go to bucket zero. And then bucket one gets the longest proofs bucket two gets slightly smaller proofs, and the shortest proofs go into bucket 10. Why do they encode it like this? Now it comes to the place where how or what do you search. So you're now in the proof search, right? You're in inference mode, you ask your model to suggest a bunch of these proof steps to you that we saw right here. So you ask your model, please suggest a bunch of those proof steps, you sample from the model a bunch of times. And now how what where should you which one should you do? Of course, you could go by I guess the log, like the likelihood of these proof steps. But as far as I can understand, they weigh, they weigh the tactics that they want to use. So they, they value different goals. This is about which goal do I want to pursue next? Okay. So they, they ask themselves, which goal should I produce, or should I pursue next in my proof search to value goals as we run proof searches, we sample the proof size bucket token and record the logits for each viable bucket and use them to get a weighted average with the following formula. So the formula itself is not really important. But what is important, they use the buck like the prediction of how long a proof is going to be to guide their selection of goals, which means that the exact way they do it is they say, if a model assigns p zero equals one, which means that the model puts all the weight on bucket zero, which is you remember as the infinite proofs. So if the model predicts this proof size is going to be infinite, which means that it's not going to work, right? The proof size infinite means that it hasn't been at least it hasn't been proven yet, right? The proof search in or the data set hasn't been able to prove this particular statement. So the size is infinite, then the value, as you can see is zero. So we don't want to go after something where the model is absolutely sure that the proof size is infinite, it's never going to be absolutely sure. But if that were the case, the value would be zero. Conversely, if a model assigns the is very sure, or absolutely sure that this proof is going to be in the shortest bucket, then the value is one. So this is a number between zero and one, depending on how short the proof is. So they say it prioritizes goals that potentially lead to shorter proofs during proof search. So that's how they guide their search. Excellent. So these are the two objectives they train with the one objective is to make the model suggest new the tactics to use. And the other one is to guide the proof search by training the model to predict how long a proof is going to be. So yeah, the next topic right here is how they how they bootstrap the models. So in this expert iteration, you always train on your own outputs. However, there needs to be like some sort of a some sort of a starting point, right? 
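Before moving on to bootstrapping: one consistent way to realize that weighted average, under the bucket convention just described (bucket 0 = infinite, bucket 10 = shortest), is the sketch below; the paper's exact formula may differ.

```python
import numpy as np

def goal_value(bucket_logits):
    """Collapse the 11 proof-size bucket logits into a scalar in [0, 1]:
    all mass on bucket 0 (infinite proof) gives 0, all mass on
    bucket 10 (shortest proofs) gives 1."""
    p = np.exp(bucket_logits - bucket_logits.max())
    p /= p.sum()                        # softmax over the viable buckets
    weights = np.arange(11) / 10.0      # 0.0, 0.1, ..., 1.0
    return float(p @ weights)

logits = np.zeros(11); logits[10] = 5.0   # model fairly sure: very short proof
print(goal_value(logits))                 # close to 1, so expand this goal first
```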
Bootstrapping, they say, consists in the steps required to train an initial model on both the proof step objective and the proof size objective. They have two initial models. In fact, they have a data set which consists of some of these proofs that have already been proven, and they train a model with just the proof step objective, which is called theta zero. So that's the initial model. Then they use the initial model to sample proofs for the statements in this mathematics library. So they already use a model to generate proofs. We denote the set of successful proof searches created in this process as S zero. Using S zero, we create a data set. So the expert iteration process essentially already starts. So they're going to concatenate the original data set and a deduplicated set of proof steps extracted from the proofs in S zero, and a deduplicated set of proof size tuples extracted from the proof searches in S zero. So now they're going to use whatever they output as proofs in the last iteration, they're going to take that into the data set, they're going to create these proof step sentences. I'm just going to call them sentences because we're language modeling right here. They're going to create these proof step sentences like this one, they're going to create these proof size sentences like this one, and then they're going to train a model again on that. So they're going to take the theta zero, and they're going to train it on that new data set. So that gives them theta one, which is trained on both the proof step and the proof size objective, and theta one is our first model in our expert iteration. So now we are simply going to repeat those things. Each iteration k consists in sampling proof searches for statements using the current model, filtering successful proof searches to extract a new data set, and fine-tuning theta zero on it to obtain theta k plus one. Note that they don't go from theta zero to theta one to theta two and so on. They always go from theta zero to theta two; then they use theta two to generate a data set, then they fine-tune theta zero again to get theta three. It'd be interesting to know why they do it this way. Maybe if you continue fine-tuning, you're already sort of locked into something. So the unified knowledge comes, you can see this right here, from the fact that the data sets they generate come from the unified set of all the statements they've proven so far. So all the proofs they found so far all go together into one big data set for the next step. So technically every model can relearn the proofs that the last model also knew, because they're in the same data set. And, you know, they also say that they deduplicate proofs, which means that for the same statements, there could be multiple proofs, and they will always take the shortest one. So that might even be a disadvantage if you were to tune from, like, theta two, which would still have learned a longer proof for a particular statement, and you'd have to forget that; it's probably just easier to scratch everything and start with the shorter proof in your data set. And yeah, that is it. That's the expert iteration process. They get a new model, they use it to generate new proofs, they add the proofs to the set of things they know.
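Schematically, the loop just described might look like this sketch. `train` and `sample_proof_searches` are passed in as callables because they stand in for the real (much heavier) machinery; the proof objects' `statement` and `steps` fields are likewise assumed:

```python
def dedupe_keep_shortest(proofs):
    """Keep only the shortest known proof per statement, as described above."""
    best = {}
    for p in proofs:
        cur = best.get(p.statement)
        if cur is None or len(p.steps) < len(cur.steps):
            best[p.statement] = p
    return list(best.values())

def expert_iteration(train, sample_proof_searches, pretrained,
                     seed_proofs, statements, iterations=9):
    proofs = dedupe_keep_shortest(seed_proofs)            # known-good seed data
    theta = train(pretrained, proofs)                     # theta_1 analogue
    for k in range(iterations):
        found = sample_proof_searches(theta, statements)  # guided proof search
        proofs = dedupe_keep_shortest(proofs + found)     # union of all successes
        theta = train(pretrained, proofs)    # always restart from the base model
    return theta
```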
And there is a set of things they don't know, right? Because there can also be bad proofs, which serve as negative examples, which is also good, can handle negative examples, and then they get better and better. So now they are going to evaluate this right now, you see that they have various, various ways of using this model, there's pass at eight, there's pass at one, which essentially means like how many tries they give per expansion step, like do we sample, do we try once do we try eight times, obviously, the more you try, the longer your searches run, but also the higher your chance of actually finding something useful. And these things are mostly proportional to each other. So it's just a matter of computational effort. You can see that with expert iterations, so the x axis right here is number of expert iterations, you can see they do nine expert iterations on these data sets. In general, you see an upwards trend. So more and more statements are able to be proven by the by the expert iterated system. And they have multiple data sets, this mini F2F is their final goal. This is made up of these various competition level statements, while the mathlib that is more of these kind of formal proofs from these from these formal environments. And they do they do see that the overlap isn't too great right here. And you can see that here as well. The scaling only kind of sort of kicks in after a while. What also astounded me is that in both cases, you have solve rates actually go down intermittently. And I would be I would be very interested, you know, why that is that could be just like an effect of size or something like this. But like, why do solve rates go slightly, slightly down? Or is it just noise? I have no idea. You also see these are the cumulative, the cumulative pass rates. And so this is this is the expert iteration model. And this is the sample only model. So in the blue model, you run expert iteration, which means that you sample data, and then you retrain and then you sample again, and then you retrain. And in the orange model, you only sample so you only use the you only use I believe the theta zero, which is the initial model, you use that to guide your search, but you never retrain on the things that you found. And interestingly, obviously, I guess the expert iteration model way outperforms the sample only model. However, the sample only model uses less compute, because it doesn't have to do the retraining. So once you adjust for that, you can see it's this line right here, where at first the sample only model is better. You know, because the expert iteration actually trains at wastes time and training. But as you go on, if you give it more and more compute, the number of more statements that the sampling only model solves, it underwhelms with respect to what the expert iteration solves. And even on this data set right here on this more distant data set, there seems to be almost like a little bit of a diminishing return in the sample only method. And at after a while after a number of expert iterations, the expert iteration method outshines the sample only method. We don't have an adjusted compute curve right here. But you can guess maybe that it might look something like this. Possibly, possibly just kind of like a constant over the over the originally orange curve. Orange curve bad. Yeah. Also, let me know how you like this this pre annotation right here that I've been doing now for two papers, I think. So I like pre highlight them. I wonder how that's how that's received. 
If that makes it more or less confusing. It just tells me a bit more where to jump to. So we get some results right here. The number of statements proved in mathlib-train goes from 17,390 at iteration one to 19,476 at iteration nine, while the average proof length of these statements goes from 4.8 to 4.0. We hypothesize that this continuously improving performance through expert iteration stems from two effects. So one, the model finding new original proofs for the same statements, which would then be shorter than the original proofs. And two, the model closing marginally harder statements at each iteration, which in turn provides more useful training data for the next iteration. By iteration nine, the model is trained on more than 90% generated data. So the original data set is like a small minority of the data that the model is trained on. Again, another property that I haven't even mentioned yet is that in proof search, you can verify a proof, like you know if a proof is correct, which in most domains isn't the case, right? So retraining on your own output is dangerous, because you don't exactly know how good it is. But here, you can just verify that it's good, and then you know it's good data, right? So it's a bit of a special environment, but I think we can still learn things from it. So what do they do? They first train this thing. So now, I think the setup is clear, right, the expert iteration setup. And they have also made it clear that, you know, we can reach harder and harder statements. But what we maybe can't do is just jump to hard statements; we need a curriculum, we need several various difficulties of statements, so that we can sort of expand our knowledge again and again and again. And they first do that with synthetic data. So apparently, what you can do is you can make a synthetic inequality statement generator, which gives you symbolic mathematical inequalities, and you can kind of control how difficult they are. So what they do is they just compose known inequality theorems, like Hölder's inequality or something like this, they just compose them. And how many times they compose them, that kind of measures how difficult they are. So they have two parameters right here that control how difficult they are. And they generate 100 statements of low difficulty, like these numbers pretty low, and they formalize a proof for each. So this is kind of their seed set. So two things you need: you need this seed set of proofs. This is usually like some sort of a data set. In their case, they combine this tactic data set, that is their seed data set, they combine this one with these 100 statements that they generate and prove, either themselves or automatically. So this would be the seed data set. And this thing right here, that's the curriculum, or just a collection of statements of various difficulties. The curriculum doesn't need a proof, right? This is the key part right here: the curriculum simply gives the model an opportunity to solve continuously harder and harder problems going from the seed, right? So going from the seed, you only need to be able to solve the easiest problems in the curriculum. And then you can sort of rely on the expert iteration, on the self-bootstrapping, to become better.
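A hedged sketch of what such a generator might look like: start from a few seed inequalities and compose by substitution, with the composition depth as a crude difficulty knob. The paper's actual generator and its two difficulty parameters are more principled than this toy:

```python
import random

SEEDS = [
    "a^2 + b^2 >= 2*a*b",     # AM-GM-flavored seed
    "(a + b)^2 >= 4*a*b",
    "a/b + b/a >= 2",         # needs a, b > 0
]

def synth_inequality(depth, rng):
    """Substitute fresh positive subexpressions `depth` times; the more
    compositions, the harder (roughly) the resulting statement."""
    stmt = rng.choice(SEEDS)
    for i in range(depth):
        # Positive substitutions keep the statement valid for a, b > 0.
        sub = rng.choice([f"(a + {i + 1})", f"(2*b + {i + 1})", "(a + b)"])
        stmt = stmt.replace(rng.choice(["a", "b"]), sub)
    return f"for all positive reals a, b: {stmt}"

rng = random.Random(0)
for d in range(3):
    print(d, synth_inequality(d, rng))
```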
Results are here. You can see that this right here is one of the n numbers, and the color measures the difficulty: zero is the easiest, six is the hardest difficulty. You can see that even for easy problems, expert iteration just manages to solve many more problems. And for the hardest problems, the sample-only method, so if you just do proof searching without expert iteration, doesn't solve any of the harder problems, whereas the expert iteration, if you see there's a tiny uptick at the bottom right here, actually manages to solve some even of the hardest category. So that gives a bit of credence. Yeah, they say here that n equals six remains completely out of reach for simply scaling the number of attempts per statement, which kind of means that you'd have to invest a lot of compute if you just do proof searching to match how good expert iteration is; compute for compute, expert iteration is better. Yeah, so they say, well, we're going to target this mini F2F data set, right? This is our final challenge. They say we curated and manually formalized a set of math exercises to target this data set. So this is going to be their seeds and curricula here. We hypothesize that if the difficulty of this set of statements was made varied enough, expert iteration could potentially leverage it to effectively shift our models' distribution closer to mini F2F, and in turn improve their eventual performance on it. So they're going to build this curriculum right here; they're going to collect some, like, 300 statements. "We manually formalized" means just that they bring them into this syntax; it doesn't mean they also prove these statements, right? So these will be the curriculum statements. These come from, like, math books that are used to prepare for math exams, which are much closer to this data set that they target. Yeah, so the set of statements, this curriculum that I'm talking about, is the union of the statements in mathlib-train, then, interestingly, they add these inequalities that they've generated to the set of statements, and also these manually collected things that they mentioned above. And with that, interestingly, they do in fact get better on this mini F2F validation set. So yeah, you can see that things go up, which is a good sign. Yeah, again, you have different parameters. This a parameter is, I think, a parameter of how many times you sample per expansion or something like this. I don't know, there are many, many parameters in these searches. But in general, just from what I've seen from this paper, you can always trade off more compute, like trying more times, expanding more times, suggesting more steps to do; you can always trade that for a bit more performance. But it doesn't matter for the general direction. Yeah, that's that. Obviously, the results are as you would expect, I think: their models are generally better than, let's say, the other models that haven't been targeted at this data set, or the models that just do proof search. So they have a short discussion of model size.
They say we briefly experimented with different model sizes and found that model size scaling is not as straightforward in the case of as in the case of unsupervised learning, they found that bigger models, they found that bigger models are better in the sense that they consistently exhibit higher pass rate if you just sample once. However, despite that, it is often the case that for a fixed amount of compute sampling more attempts from a smaller model leads to better final performance. So these are these are the sort of considerations that you have to do. If you have two independent variables, right, we can trade them off against one another. Just for the scale, with their big model running a full expert iteration, that's kind of one of these full expert iteration. Full expert iteration, do they mean that all the nine steps or just one step in the expert, I'm going to guess all the nine steps. So the whole experiment to get to their their model after nine expert iteration steps required 2000 a 100 days to compute. That is insane. Running one full proof search, when properly parallelized requires on average about point one a 100 hours of compute. So that's like, it's like still a minute of an a 100. Crazy, right? So the sizes here are enormous, right? And still, they are able to solve what two of these Olympiad problems, right? With manual targeting, with manual data collection that is specifically targeted at that data set, and with 2000 a 100 days. And, you know, they don't solve all of them, they solve two. So I believe this field is still in its infancy. I believe there's lots of stuff to do right here. There's probably approaches that make these things a lot better. But I'm excited just because I think that is an area where deep learning, as they say, hasn't really pushed through quite yet. And I think there's a lot to do to bring down the requirements here and the methodologies that they use. I like the way they combine the language modeling with the proof searching. The expert iteration might also be a nice lesson for other fields, like how can we combine the neural models with some sort of search procedures maybe or other heuristics to generate ever better training data that we can then feed back to the models. All of this is highly interesting. And yeah, let me know what you think. Bye bye.
[ { "start": 0, "end": 10.96, "text": " Can AI do math? And I don't mean two plus two, I mean pure mathematics. The paper we're" }, { "start": 10.96, "end": 15.84, "text": " going to look at today is called Formal Mathematics Statement Curriculum Learning and presents" }, { "start": 15.84, "end": 21.44, "text": " an automated system to prove mathematical theorems in a symbolic fashion. What's even" }, { "start": 21.44, "end": 27, "text": " more crazy is that this system was able to solve two problems of the International Mathematical" }, { "start": 27, "end": 32.76, "text": " Olympiad, which is a contest that real gifted high school students get to take part in." }, { "start": 32.76, "end": 37.96, "text": " This system is way beyond previous systems that have attempted anything like this, because" }, { "start": 37.96, "end": 43.760000000000005, "text": " formal mathematics and automated mathematics that uses algorithms to prove things lags" }, { "start": 43.760000000000005, "end": 48.78, "text": " a lot behind the informal mathematics that you might know. A lot of previous techniques" }, { "start": 48.78, "end": 54, "text": " relied on proof searching, essentially brute forcing their way to a proof guided by some" }, { "start": 54, "end": 59.72, "text": " heuristics. And this paper improves on that drastically. It uses language models to guide" }, { "start": 59.72, "end": 65.28, "text": " the proof search. And it uses a technique called expert iteration to build itself automatically" }, { "start": 65.28, "end": 70.2, "text": " a curriculum of harder and harder statements to prove. Now the implications of this are" }, { "start": 70.2, "end": 75.64, "text": " cool for math, but it goes way beyond math. This is essentially symbolic reasoning. It's" }, { "start": 75.64, "end": 80.74000000000001, "text": " the model teaching itself to learn more and more. And that's exciting for many fields" }, { "start": 80.74, "end": 86.91999999999999, "text": " of AI. So here's how it goes. This video right here is a paper review a comprehensive review" }, { "start": 86.91999999999999, "end": 92.03999999999999, "text": " of me going through the paper explaining to you what is in the paper, what its main contributions" }, { "start": 92.03999999999999, "end": 98.03999999999999, "text": " are, what I think are the weaknesses and strengths of the paper, and much more. After this video," }, { "start": 98.03999999999999, "end": 102.36, "text": " you should have a good understanding of what is in the paper. Otherwise, I haven't done" }, { "start": 102.36, "end": 108.44, "text": " my job. In the next video released tomorrow, I'll be interviewing the first author of this" }, { "start": 108.44, "end": 112.92, "text": " paper, which is a huge privilege. Because if you watch this video, you'll see that I" }, { "start": 112.92, "end": 119.88, "text": " have many open questions. I'm a noob at formal mathematics, and I suppose many people are." }, { "start": 119.88, "end": 124.72, "text": " And therefore, even though the paper is written really well, I had a lot of questions, I even" }, { "start": 124.72, "end": 129.6, "text": " had some criticisms, and all of that was answered when I spoke to the author. 
So if you watch" }, { "start": 129.6, "end": 134.6, "text": " tomorrow's video, you'll get an insight into the behind the scenes of this research, how" }, { "start": 134.6, "end": 140.56, "text": " it came about, what worked, what didn't, how problems were solved during the research process," }, { "start": 140.56, "end": 145.72, "text": " and much more. The author I'm interviewing has actually seen my paper review and is directly" }, { "start": 145.72, "end": 150.06, "text": " able to answer to any questions that are raised there. Please let me know how you like these" }, { "start": 150.06, "end": 154.64, "text": " formats in the comments. If you do like the video, please leave a like tell someone to" }, { "start": 154.64, "end": 161.1, "text": " subscribe and I'll see you around. Bye. Hello there. Today, we're looking at formal mathematics" }, { "start": 161.1, "end": 167.56, "text": " statement curriculum learning by researchers of OpenAI, EPFL, and Cambridge. This paper" }, { "start": 167.56, "end": 173.76, "text": " presents or applies the technique of expert iteration to the domain of proving formal" }, { "start": 173.76, "end": 180.22, "text": " mathematics statements. This is not enough yet. They also bring language modeling into" }, { "start": 180.22, "end": 187.04, "text": " the picture. So you have a proof searcher in this paper, or a proof search procedure" }, { "start": 187.04, "end": 194.23999999999998, "text": " that is guided by language models to focus to search for mathematics proofs. And then" }, { "start": 194.23999999999998, "end": 200.79999999999998, "text": " the expert iteration procedure makes the system better and better and better by always incorporating" }, { "start": 200.79999999999998, "end": 207.23999999999998, "text": " new statements that it has been able to prove into its training set. And so the domain or" }, { "start": 207.23999999999998, "end": 213.44, "text": " the difficulty of statements that it is able to prove expands iteration by iteration. The" }, { "start": 213.44, "end": 219.35999999999999, "text": " culmination of this is that they're able to solve two problems, I believe, of the IMO" }, { "start": 219.35999999999999, "end": 224.84, "text": " of the International Mathematics Olympiad, which is a difficult math challenge for high" }, { "start": 224.84, "end": 233.12, "text": " school students. And this has implications beyond just math. So this can be applied anywhere" }, { "start": 233.12, "end": 240.02, "text": " where agents need to reason over some sort of symbolic structure. And you know, this" }, { "start": 240.02, "end": 245.60000000000002, "text": " is wide ranging. This could be agents acting in the real world. This could be reinforcement" }, { "start": 245.60000000000002, "end": 252.12, "text": " learning things. This could be, I don't know, assistance for clinical trials and whatnot." }, { "start": 252.12, "end": 259.52, "text": " Essentially anywhere where such a more formal system, more logical type of reasoning is" }, { "start": 259.52, "end": 265.16, "text": " required. So we're going to look into this paper and what they do. This builds on a bit" }, { "start": 265.16, "end": 273.68, "text": " of other work. But I think it can be looked at in isolation. 
So they claim right here" }, { "start": 273.68, "end": 279.28000000000003, "text": " in the introduction that deep learning has been very good at sort of many tasks like," }, { "start": 279.28000000000003, "end": 284.32000000000005, "text": " you know, language modeling, there's vision, image generation. However, they say it has" }, { "start": 284.32000000000005, "end": 290.98, "text": " not yet enjoyed a comparable success in tasks that require extensive planning and symbolic" }, { "start": 290.98, "end": 300.46000000000004, "text": " reasoning. And the domain of mathematics proves is a good domain, because it has these challenges," }, { "start": 300.46000000000004, "end": 307.44, "text": " but also, you don't exactly rely on external data that much. Like you can, you can prove" }, { "start": 307.44, "end": 312.20000000000005, "text": " things in mathematics, kind of by yourself in the basement, or in this case, you can" }, { "start": 312.20000000000005, "end": 318.84000000000003, "text": " verify a proof pretty quickly. So the challenges in this domain are, it has an extremely large" }, { "start": 318.84, "end": 325.65999999999997, "text": " search space, and an infinite action space. When you prove a statement in mathematics," }, { "start": 325.65999999999997, "end": 331.12, "text": " there are many things you could potentially do, like infinitely many things. It's not" }, { "start": 331.12, "end": 336.15999999999997, "text": " only about manipulating the symbols that are there, often you need to introduce new symbols." }, { "start": 336.15999999999997, "end": 342.52, "text": " They, they, for example, they say, you could generate a witness, like there exists an X" }, { "start": 342.52, "end": 348.32, "text": " that fulfills some things where X was never a symbol before. So you have like infinite" }, { "start": 348.32, "end": 355.84, "text": " things at your disposal. Now the question is, how do you prove a statement? Maybe we'll" }, { "start": 355.84, "end": 362.36, "text": " just direct a little bit go into how these mathematics proving things work if you really" }, { "start": 362.36, "end": 367.96, "text": " do them formally. So in their types of system, they have some kind of statement to be proven." }, { "start": 367.96, "end": 373.6, "text": " So I'm going to call that statement s, that is a formal statement that just is essentially" }, { "start": 373.6, "end": 381.40000000000003, "text": " is the formalization, the exact writing down of something like a theorem, as you would" }, { "start": 381.40000000000003, "end": 388.04, "text": " find it in a textbook. But instead of using words and language, it uses like a defined" }, { "start": 388.04, "end": 394.12, "text": " syntax in a predefined system. So how to prove this system in order to prove the system," }, { "start": 394.12, "end": 398.76000000000005, "text": " what you need to do is you need to build up a tree. So you need to decompose the system" }, { "start": 398.76, "end": 406.56, "text": " in some way into multiple sub statements. And the way you do this is as you would do" }, { "start": 406.56, "end": 411.52, "text": " as a human, you you know, you'd have some sort of a proof. And then you say, okay, in" }, { "start": 411.52, "end": 416.8, "text": " order to prove that I need the following three things to be true, right. 
So these would be" }, { "start": 416.8, "end": 421.44, "text": " the three things like this is a sub statement one, the sub statement two, a sub statement" }, { "start": 421.44, "end": 428.64, "text": " three. And generally the derivation from such like from this to this, I believe that's called" }, { "start": 428.64, "end": 437.76, "text": " a tactic. So you can apply tactics to sort of reformulate things into its sub into its" }, { "start": 437.76, "end": 443.38, "text": " sub things in. I'm speaking very informally right here, because as you might guess, I'm" }, { "start": 443.38, "end": 448.72, "text": " also a newb in this domain. And I hope the the interview will tell us a little bit more" }, { "start": 448.72, "end": 452.56, "text": " about how these things work. But as far as I understand, you want to decompose these" }, { "start": 452.56, "end": 457.92, "text": " things into sub statements. And then the sub statements again, you can decompose into stuff." }, { "start": 457.92, "end": 464.56, "text": " And this is a context free grammar, right. So this sub statement like this should be" }, { "start": 464.56, "end": 469.94000000000005, "text": " provable by itself independently of the other sub statements. And you build this tree for" }, { "start": 469.94000000000005, "end": 476.12, "text": " as long as you want until the leaves right here are either the sort of the preconditions" }, { "start": 476.12, "end": 481.68, "text": " for the theorem. So a theorem could be, you know, for any two rational numbers. So if" }, { "start": 481.68, "end": 486.92, "text": " the leaf right here says, you know, this is a rational number, then we're done because" }, { "start": 486.92, "end": 492.64, "text": " that's a precondition for the theorem. Also, if it's like some sort of a lemma that I already" }, { "start": 492.64, "end": 499.32, "text": " know, or if it's like a fundamental, how do you how do you call them an axiom, if it's" }, { "start": 499.32, "end": 505.32, "text": " a fundamental axiom, I also stop. So I'm going to build up this proof tree until every single" }, { "start": 505.32, "end": 511.46, "text": " leaf is either something that I already know or something that I can assume to be true." }, { "start": 511.46, "end": 517.64, "text": " And then I have proven the I've proven the original statement, because the tree represents" }, { "start": 517.64, "end": 524, "text": " the proof. Now how to build the tree, that is the question, right? I could I could derive" }, { "start": 524, "end": 529.84, "text": " many different sub loops, I could derive many different sub statements from the from the" }, { "start": 529.84, "end": 535.48, "text": " top statement, the fact that I derive these particular ones that then lead me to approve" }, { "start": 535.48, "end": 540.6, "text": " that is the magic of proving things in mathematics, right? That's what mathematicians do for a" }, { "start": 540.6, "end": 547, "text": " job. And you can already see that this is not an easy, an easy thing. You might think" }, { "start": 547, "end": 551.9200000000001, "text": " of something like alpha, alpha zero, alpha go, and that is a good guess. But whereas" }, { "start": 551.9200000000001, "end": 558.2, "text": " alpha go has defined actions, so all of these things that alpha go could do, are pretty" }, { "start": 558.2, "end": 564.88, "text": " defined, like how we could expand the tree. 
Not in the case of mathematical proofs, there" }, { "start": 564.88, "end": 571, "text": " are there's a complex and infinite set of tactics, potentially involving exogenous mathematical" }, { "start": 571, "end": 579.5200000000001, "text": " terms that have to be generated. So quite a challenging domain. The other one, so there" }, { "start": 579.5200000000001, "end": 585.5600000000001, "text": " is the infinite action space, which is one of the tragedies problems. And the other problem" }, { "start": 585.56, "end": 592.88, "text": " is this no direct self play setup. So whereas in something like alpha zero, I can train" }, { "start": 592.88, "end": 600.16, "text": " with self play. In mathematics proving there is no adversary, I cannot have a two player" }, { "start": 600.16, "end": 604.1999999999999, "text": " game and the two players get better and better and better. It's a statement, you can either" }, { "start": 604.1999999999999, "end": 610.2399999999999, "text": " prove it or not, like that it has the difficulty that it has, there is no, there's no opponent" }, { "start": 610.24, "end": 618.96, "text": " that can be hard or easy. However, so they say this, the is it prevents the naive application" }, { "start": 618.96, "end": 627.48, "text": " of the symmetric self play objective. However, they say that they observe that the key role" }, { "start": 627.48, "end": 635.64, "text": " of self play is to provide an unsupervised curriculum. And I'm not exactly sure, honestly," }, { "start": 635.64, "end": 640.08, "text": " how they arrive at that statement, if that is just sort of their, their hypothesis right" }, { "start": 640.08, "end": 647.52, "text": " here, and the sort of the paper validates it. I don't see any exogenous reason why I" }, { "start": 647.52, "end": 653.64, "text": " might be true, but it is a reasonable statement to make right. The self play self play is" }, { "start": 653.64, "end": 659.84, "text": " really good because both opponents start very weak, and then they all get sort of better" }, { "start": 659.84, "end": 667.64, "text": " in steps. And that is essentially a curriculum. So the question is, how can we come up with" }, { "start": 667.64, "end": 673.9200000000001, "text": " an automated way to generate a curriculum for proving formal math statements, that" }, { "start": 673.9200000000001, "end": 679.76, "text": " is going to be one of the challenges. The other challenge, the challenge of infinite" }, { "start": 679.76, "end": 685.72, "text": " action space, they say that this has been addressed in past work by sampling from a" }, { "start": 685.72, "end": 690.28, "text": " language model, we're going to look a little bit into how this is done. But this is by" }, { "start": 690.28, "end": 696.64, "text": " the same authors. So they have previously dealt with this by having the proof search," }, { "start": 696.64, "end": 703.34, "text": " like the thing that decides what node to expand in the proof tree, be guided by a language" }, { "start": 703.34, "end": 709.12, "text": " model that has been trained on a number of proofs, and that sort of takes a good guess" }, { "start": 709.12, "end": 715.76, "text": " at what to do next. So it kind of guides the search, much like the value and policy networks" }, { "start": 715.76, "end": 722.6, "text": " in like alpha zero guide the tree search, because that is also inherently too large." 
}, { "start": 722.6, "end": 730.36, "text": " So they say they empirically show that when the difficulty of the auxiliary problems is" }, { "start": 730.36, "end": 737.42, "text": " varied, sorry, we skipped apart. So they they say we propose to supply auxiliary set of" }, { "start": 737.42, "end": 742.88, "text": " problem statements without requiring proofs of varying difficulty, we show that when the" }, { "start": 742.88, "end": 747.76, "text": " difficulty of these auxiliary statements is varied enough, a simple expert iteration procedure" }, { "start": 747.76, "end": 755.5999999999999, "text": " is able to solve a curriculum of increasingly difficult problems. And so what they're saying" }, { "start": 755.5999999999999, "end": 761.88, "text": " is they're going to provide so here here is maybe, you know, statement one, statement" }, { "start": 761.88, "end": 767.42, "text": " two, statement three that I want to prove ultimately, and these are really difficult." }, { "start": 767.42, "end": 773.84, "text": " So what I'm going to do is I'm just gonna put like statement four, statement five, I'm" }, { "start": 773.84, "end": 780.78, "text": " going to put these statements in here. I don't know what's wrong with the with the pen. Sorry." }, { "start": 780.78, "end": 788, "text": " I'm just going to put these statements in in there. And as long as they vary in difficulty," }, { "start": 788, "end": 794.44, "text": " so there is a like a difficulty gradient, and I just fill sort of the space with statement" }, { "start": 794.44, "end": 801.84, "text": " six, statement seven, with with various difficulty statements, what I can do is I can do an expert" }, { "start": 801.84, "end": 807, "text": " iteration procedure. So what does the expert iteration procedure do? Essentially, it just" }, { "start": 807, "end": 812.6, "text": " says that I start with some sort of a model that can solve, you know, some kind of a difficulty" }, { "start": 812.6, "end": 819, "text": " of statements, let's say s six and s seven are the easiest ones, then I take the results" }, { "start": 819, "end": 825.2, "text": " of that system and the proofs it generated to retrain the same system. And that would" }, { "start": 825.2, "end": 829.82, "text": " result in a better system. And the better system now would be able to solve slightly" }, { "start": 829.82, "end": 835.6800000000001, "text": " more hard statements. And you know, since I now solve the slightly more hard statements," }, { "start": 835.6800000000001, "end": 842.48, "text": " I can feed the proofs that I found back into the system, right, train them on those proofs," }, { "start": 842.48, "end": 848.52, "text": " because I now know the proofs because I found them. And that system will get even better." }, { "start": 848.52, "end": 856.38, "text": " So the expert iteration procedure is the act of always going to your best system, gathering" }, { "start": 856.38, "end": 863.44, "text": " the data that it has figured out through, you know, guiding the search, then taking" }, { "start": 863.44, "end": 870.04, "text": " that data and entered and retraining the system on this new data to make it even stronger." }, { "start": 870.04, "end": 874.8, "text": " Right? This this is based on two facts. You can't just do that with any system, right?" }, { "start": 874.8, "end": 881.52, "text": " This is based on the fact that here, a machine learn system interacts with a search system." 
}, { "start": 881.52, "end": 888.88, "text": " And the interaction is what makes the difference. So the combination of the two is better than" }, { "start": 888.88, "end": 895.52, "text": " just the search system and better, especially than just the machine learning system. So" }, { "start": 895.52, "end": 901.4399999999999, "text": " you can if the machine learning system itself has a certain performance, adding the search" }, { "start": 901.4399999999999, "end": 907.64, "text": " on top will increase that performance and therefore allow you to get to more and better" }, { "start": 907.64, "end": 912.78, "text": " training data that you couldn't have just gotten with the ML system itself. If you just" }, { "start": 912.78, "end": 918.0799999999999, "text": " had the ML system, you just stop be stuck forever in a loop of always having the same" }, { "start": 918.0799999999999, "end": 925.04, "text": " difficulty because all you do is feed the output of the ML system back into the ML system." }, { "start": 925.04, "end": 930.3199999999999, "text": " But if you add a component on top that makes it stronger, that gives you better data that" }, { "start": 930.3199999999999, "end": 935.4399999999999, "text": " can make the ML system itself stronger, then you add the search again, that will make it" }, { "start": 935.4399999999999, "end": 942.9599999999999, "text": " even stronger in combination. So that is that is the story of expert iteration and of this" }, { "start": 942.9599999999999, "end": 948.7199999999999, "text": " paper right here. They go a little bit into the environment, they have this lean environment," }, { "start": 948.7199999999999, "end": 953.18, "text": " which I have no clue about. But this is like a formal environment for mathematics proves" }, { "start": 953.18, "end": 960.1999999999999, "text": " one of one of many I'm I'm being informed. There's also one that's called meta math and" }, { "start": 960.1999999999999, "end": 968.16, "text": " apparently, lean, lean benefits from higher level tactics, which were shown to be beneficial" }, { "start": 968.16, "end": 975.5999999999999, "text": " in this context. But essentially, for our purposes, it is Oh, and also the proofs, lean" }, { "start": 975.5999999999999, "end": 982.18, "text": " proofs are typically 10 times shorter than other systems. But, you know, for our purposes," }, { "start": 982.18, "end": 987.9599999999999, "text": " just assume that we have some kind of a system where we can build proofs like this this tree" }, { "start": 987.9599999999999, "end": 997.88, "text": " right here from from statements. So the next go into into experts, so they have they have" }, { "start": 997.88, "end": 1003.88, "text": " a bit of data sets. That's what they describe here, they go into expert iteration. expert" }, { "start": 1003.88, "end": 1010.9599999999999, "text": " iteration consists in iteratively training models on their previously sampled trajectories." }, { "start": 1010.96, "end": 1017.72, "text": " That's essentially expert iteration. As for a model, they use decoder only transformers." }, { "start": 1017.72, "end": 1024.8, "text": " So they use language models, which just shows you sort of the versatility of language models." }, { "start": 1024.8, "end": 1032, "text": " The biggest model, I think that they use uses 36 layers and 700 million trainable parameters." }, { "start": 1032, "end": 1038.22, "text": " So this is not too big of a model, right? 
This is a reasonably sized it's it's big," }, { "start": 1038.22, "end": 1045.66, "text": " but it's not like GPT three big. They pre train this which I found interesting on a" }, { "start": 1045.66, "end": 1052.68, "text": " combination of mathematics data sets, but also common crawl, which is a language just" }, { "start": 1052.68, "end": 1059.76, "text": " it's a web scrape, right? That is, is very interesting that the pre training happens" }, { "start": 1059.76, "end": 1067.2, "text": " on natural language and not just on mathematics data. Maybe you need this, this many, this" }, { "start": 1067.2, "end": 1074.68, "text": " many tokens to pre train the model, because the model itself is kind of big. But I'd wonder," }, { "start": 1074.68, "end": 1081.44, "text": " you know, what kind of difference that makes. And what is what the transfer is from the" }, { "start": 1081.44, "end": 1087.48, "text": " natural language to the mathematics because math is is very cryptic. Not even sure if" }, { "start": 1087.48, "end": 1096.78, "text": " they have let me find a proof here. Maybe they've listed. So yeah, you can you can see," }, { "start": 1096.78, "end": 1104.76, "text": " these are sort of the things you would find in this is a a terminal and internal trace" }, { "start": 1104.76, "end": 1111.84, "text": " of this lean environment or their their their gym environment around the lean environments." }, { "start": 1111.84, "end": 1117.58, "text": " So you'd have like these tactics states you can see right here. These these are have nothing" }, { "start": 1117.58, "end": 1125.96, "text": " to do with natural language, right? Then you have the tactics that you run, you apply this" }, { "start": 1125.96, "end": 1135.8400000000001, "text": " prime DVD mall hp dot MP tactic, I have no idea what it is. And that transforms the above" }, { "start": 1135.8400000000001, "end": 1142.72, "text": " tactic state, I believe, into the bottom tactic state. I'm not going to parse this because" }, { "start": 1142.72, "end": 1150.3600000000001, "text": " I again, I have no clue what it means. But you can see that these statements there, they're" }, { "start": 1150.36, "end": 1158.28, "text": " very formal, and they have nothing to do with natural language. Still, obviously, humans" }, { "start": 1158.28, "end": 1164.8, "text": " made them as a series of characters. And therefore, there might also always be some transfer. So" }, { "start": 1164.8, "end": 1173, "text": " how do they train this? How do they train this thing? So the the transformer is trained" }, { "start": 1173, "end": 1182.28, "text": " to suggest kind of what to do next in such a proof. And that is called a proof step." }, { "start": 1182.28, "end": 1187.52, "text": " So the proof step objective that they train the transformer with consists in generating" }, { "start": 1187.52, "end": 1194.56, "text": " a proof step, give it which is a tactic, given a goal, which is a tactic state. So you're" }, { "start": 1194.56, "end": 1200.12, "text": " trying to get somewhere which is the root of the current tree or subtree you're considering." }, { "start": 1200.12, "end": 1207.9599999999998, "text": " And you're generating a tactic, which means like how to expand the tree given that that," }, { "start": 1207.9599999999998, "end": 1215.4399999999998, "text": " you know, you are at this particular route. 
And they also condition this objective on" }, { "start": 1215.4399999999998, "end": 1221.6399999999999, "text": " the current declaration, which is the theorem name, which remains the same throughout the" }, { "start": 1221.6399999999999, "end": 1227.9199999999998, "text": " proof search. They make some they give some explanation why they do this. But essentially," }, { "start": 1227.92, "end": 1233.76, "text": " the what they train the transformer with looks like this, there is a keyword decal, then" }, { "start": 1233.76, "end": 1239.44, "text": " there's the declaration, which is the name of the theorem, then there is a goal. And" }, { "start": 1239.44, "end": 1247.4, "text": " then here, you put the goal state, the tactic state that you want to achieve, and then the" }, { "start": 1247.4, "end": 1254.48, "text": " keyword proof step. And then here is where the proof step goes. So during inference," }, { "start": 1254.48, "end": 1260.16, "text": " obviously, you leave this away, and you let the language model generate this part. But" }, { "start": 1260.16, "end": 1269.32, "text": " during training, you put right here, any any proof from any proof that you know was successful," }, { "start": 1269.32, "end": 1275.88, "text": " you'd put the corresponding proof step there. So this is a Yeah, this is a language modeling" }, { "start": 1275.88, "end": 1282.92, "text": " objective. You just train on all of the proofs that you know that are true, you put them" }, { "start": 1282.92, "end": 1288.3600000000001, "text": " into this particular form, you put all of their individual tree expansion steps into" }, { "start": 1288.3600000000001, "end": 1295.72, "text": " this particular form, and you train a language model on it. And that apparently works pretty" }, { "start": 1295.72, "end": 1302.0800000000002, "text": " well. This is already from their from their previous work, that this works pretty well." }, { "start": 1302.0800000000002, "end": 1306.16, "text": " They also have they explain this here, the rationale for conditioning on the declaration" }, { "start": 1306.16, "end": 1310.8000000000002, "text": " name is to hint our models on the position of the current declaration in the math lip" }, { "start": 1310.8, "end": 1316.48, "text": " library, considered a weak proxy signal for the large amount of information not shown" }, { "start": 1316.48, "end": 1326.3999999999999, "text": " to the model. So there is a full date, there is available imports, currently open declarations," }, { "start": 1326.3999999999999, "end": 1332.76, "text": " module names, notations, declared instances. So and that that is where I really am a new" }, { "start": 1332.76, "end": 1338.6399999999999, "text": " there is this math lib library, which is a library inside of this lean environment. And" }, { "start": 1338.64, "end": 1343.7800000000002, "text": " I'm going to guess the analogy would be like, it has a bunch of functions you can call it" }, { "start": 1343.7800000000002, "end": 1349.66, "text": " has a bunch of stuff there that you could potentially use. And obviously, this is not" }, { "start": 1349.66, "end": 1354.3200000000002, "text": " going to all fit into the little context that we have right here that we're going to feed" }, { "start": 1354.3200000000002, "end": 1359.6000000000001, "text": " into the transformer. So what you're going to do is you simply give this declaration" }, { "start": 1359.6, "end": 1369.28, "text": " name. 
And if the model has seen enough of those things, it it obviously some of these" }, { "start": 1369.28, "end": 1375.8799999999999, "text": " function calls will be in this proof step step right here, if you start out with proofs" }, { "start": 1375.8799999999999, "end": 1381.52, "text": " that already exist. So some of these function calls will be in there. And the declaration" }, { "start": 1381.52, "end": 1385.6, "text": " hints sort of where in the library you are, which means that which functions you can currently" }, { "start": 1385.6, "end": 1395.36, "text": " call which variables exist and so on. I'm exactly sure. But I essentially, I would," }, { "start": 1395.36, "end": 1401.28, "text": " I would read the declaration, if I were a programmer, I would read the declaration as" }, { "start": 1401.28, "end": 1409.08, "text": " maybe the, the project and the file I'm currently in and what imports there are, I would read" }, { "start": 1409.08, "end": 1417.6799999999998, "text": " the goal as the function definition, or sorry, the function header, and the doc string that" }, { "start": 1417.6799999999998, "end": 1422.1999999999998, "text": " tells me what should happen in this function. And then the proof step, I would consider" }, { "start": 1422.1999999999998, "end": 1428.6799999999998, "text": " the function itself, the implementation. That is a very bad analogy, but approximately like" }, { "start": 1428.6799999999998, "end": 1433.36, "text": " this, it's a weird mix between programming and, and mathematics, this formal mathematics" }, { "start": 1433.36, "end": 1439.6, "text": " proofs. So they train the language model on this. So now the language model can suggest" }, { "start": 1439.6, "end": 1444.6399999999999, "text": " new proof steps, you give it the declaration and the goal, it can suggest new proof steps," }, { "start": 1444.6399999999999, "end": 1450.4399999999998, "text": " right? That is one thing they train the language model with, they in at the same time, train" }, { "start": 1450.4399999999998, "end": 1457.3799999999999, "text": " it also with this proof size objective. So they give an other, they give other inputs" }, { "start": 1457.3799999999999, "end": 1462.28, "text": " to the language model that they train it on. Again, we have the declaration name, we have" }, { "start": 1462.28, "end": 1466.92, "text": " the goal, but then we have a different keyword instead of proof step. Now we have the keyword" }, { "start": 1466.92, "end": 1473.44, "text": " proof size. And then here is a proof size bucket token. And that's simply a letter from" }, { "start": 1473.44, "end": 1482.2, "text": " A to K. And that letter encodes one of 11 buckets. The buckets represent the size of" }, { "start": 1482.2, "end": 1487.96, "text": " the proofs. Again, during training, we know the proof size, right? Or the size of the" }, { "start": 1487.96, "end": 1494.08, "text": " proof step or maybe the size of the whole proof. I'm not entirely sure. I think it's" }, { "start": 1494.08, "end": 1502.8400000000001, "text": " the size of the whole proof. Yeah, represents a proof size estimate bucket for the current" }, { "start": 1502.8400000000001, "end": 1510.8400000000001, "text": " goal. Okay, so for the proof of the current goal, how long is it? And during training," }, { "start": 1510.8400000000001, "end": 1515.64, "text": " we know it. So we just put it here during inference time. 
Again, this is the thing that" }, { "start": 1515.64, "end": 1521.4, "text": " we are going to let the model predict. So the model should guess how long a proof is" }, { "start": 1521.4, "end": 1526.5800000000002, "text": " going to be without necessarily producing it. That's what this keyword up here does." }, { "start": 1526.5800000000002, "end": 1533.3200000000002, "text": " So the bottom one simply says how long is it maybe, you know, probably going to be." }, { "start": 1533.3200000000002, "end": 1540.76, "text": " And this, it's pretty neat how they do it. So they have these 11 buckets, infinite proof" }, { "start": 1540.76, "end": 1547.02, "text": " sizes go to bucket zero. And then bucket one gets the longest proofs bucket two gets slightly" }, { "start": 1547.02, "end": 1552.8799999999999, "text": " smaller proofs, and the shortest proofs go into bucket 10. Why do they encode it like" }, { "start": 1552.8799999999999, "end": 1561.72, "text": " this? Now it comes to the place where how or what do you search. So you're now in the" }, { "start": 1561.72, "end": 1568.04, "text": " proof search, right? You're in inference mode, you ask your model to suggest a bunch of these" }, { "start": 1568.04, "end": 1574.2, "text": " proof steps to you that we saw right here. So you ask your model, please suggest a bunch" }, { "start": 1574.2, "end": 1579.36, "text": " of those proof steps, you sample from the model a bunch of times. And now how what where" }, { "start": 1579.36, "end": 1584.8999999999999, "text": " should you which one should you do? Of course, you could go by I guess the log, like the" }, { "start": 1584.8999999999999, "end": 1597.12, "text": " likelihood of these proof steps. But as far as I can understand, they weigh, they weigh" }, { "start": 1597.12, "end": 1606.08, "text": " the tactics that they want to use. So they, they value different goals. This is about" }, { "start": 1606.08, "end": 1613.28, "text": " which goal do I want to pursue next? Okay. So they, they ask themselves, which goal should" }, { "start": 1613.28, "end": 1620.76, "text": " I produce, or should I pursue next in my proof search to value goals as we run proof searches," }, { "start": 1620.76, "end": 1627.52, "text": " we sample the proof size bucket token and record the logits for each viable bucket and" }, { "start": 1627.52, "end": 1633.08, "text": " use them to get a weighted average with the following formula. So the formula itself is" }, { "start": 1633.08, "end": 1638.48, "text": " not really important. But what is important, they use the buck like the prediction of how" }, { "start": 1638.48, "end": 1645.72, "text": " long a proof is going to be to guide their selection of goals, which means that the exact" }, { "start": 1645.72, "end": 1653.84, "text": " way they do it is they say, if a model assigns p zero equals one, which means that the model" }, { "start": 1653.84, "end": 1658.68, "text": " puts all the weight on bucket zero, which is you remember as the infinite proofs. So" }, { "start": 1658.68, "end": 1662.52, "text": " if the model predicts this proof size is going to be infinite, which means that it's not" }, { "start": 1662.52, "end": 1667.64, "text": " going to work, right? The proof size infinite means that it hasn't been at least it hasn't" }, { "start": 1667.64, "end": 1674.3600000000001, "text": " been proven yet, right? The proof search in or the data set hasn't been able to prove" }, { "start": 1674.36, "end": 1681.28, "text": " this particular statement. 
So the size is infinite, then the value, as you can see is" }, { "start": 1681.28, "end": 1689.04, "text": " zero. So we don't want to go after something where the model is absolutely sure that the" }, { "start": 1689.04, "end": 1694.6399999999999, "text": " proof size is infinite, it's never going to be absolutely sure. But if that were the case," }, { "start": 1694.6399999999999, "end": 1701.6399999999999, "text": " the value would be zero. Conversely, if a model assigns the is very sure, or absolutely" }, { "start": 1701.64, "end": 1707.8000000000002, "text": " sure that this proof is going to be in the shortest bucket, then the value is one. So" }, { "start": 1707.8000000000002, "end": 1716.2, "text": " this is a number between zero and one, depending on how short the proof is. So they say it" }, { "start": 1716.2, "end": 1721.92, "text": " prioritizes goals that potentially lead to shorter proofs during proof search. So that's" }, { "start": 1721.92, "end": 1729.0400000000002, "text": " how they guide their search. Excellent. So these are the two objectives they train with" }, { "start": 1729.04, "end": 1736.2, "text": " the one objective is to make the model suggest new the tactics to use. And the other one" }, { "start": 1736.2, "end": 1742.8, "text": " is to guide the proof search by training the model to predict how long a proof is going" }, { "start": 1742.8, "end": 1758.78, "text": " to be. So yeah, the next topic right here is how they how they bootstrap the models." }, { "start": 1758.78, "end": 1764.1, "text": " So in this expert iteration, you always train on your own outputs. However, there needs" }, { "start": 1764.1, "end": 1771.2, "text": " to be like some sort of a some sort of a starting point, right? bootstrapping, they say consistent" }, { "start": 1771.2, "end": 1776.04, "text": " step required to train an initial model on both proof step objective and the proof size" }, { "start": 1776.04, "end": 1786.44, "text": " objective. They have two initial models. In fact, they have a they have a data set, which" }, { "start": 1786.44, "end": 1793.44, "text": " consists of some of these proofs that have already been proven. And they train a model" }, { "start": 1793.44, "end": 1802.16, "text": " with just a proof step objective, which is called data zero. So that's the initial model." }, { "start": 1802.16, "end": 1811.28, "text": " Then they use they use the initial model to sample proofs for the statements in this mathematics" }, { "start": 1811.28, "end": 1820.44, "text": " library. So they already use a model to generate proofs. We denote the set of successful proof" }, { "start": 1820.44, "end": 1826.72, "text": " searches created in processes as zero using s zero, we create a data set. So the expert" }, { "start": 1826.72, "end": 1831.44, "text": " iteration process essentially already starts. So they're going to concatenate the original" }, { "start": 1831.44, "end": 1840.94, "text": " data set, sorry, the original data set and a D duplicated set of proof steps extracted" }, { "start": 1840.94, "end": 1848.88, "text": " from the proofs in s zero and a D duplicated set of proof size tuples extracted from the" }, { "start": 1848.88, "end": 1855.8000000000002, "text": " proof searches in s zero. 
So now they're going to use whatever they output as proofs in the" }, { "start": 1855.8000000000002, "end": 1863.0800000000002, "text": " last in the last in the last iteration, they're going to take that into the data set, they're" }, { "start": 1863.0800000000002, "end": 1868.88, "text": " going to create these proof step sentences, I'm just going to call them sentences because" }, { "start": 1868.88, "end": 1873.7800000000002, "text": " we're language modeling right here, they're going to create these proof step sentences" }, { "start": 1873.7800000000002, "end": 1878.64, "text": " like this one, they're going to create these proof size sentences like this one. And then" }, { "start": 1878.64, "end": 1885.5600000000002, "text": " they're going to train a model again on that. So they're going to take the they're going" }, { "start": 1885.5600000000002, "end": 1892.88, "text": " to take the theta zero, and they're going to train it on that new data set. So that" }, { "start": 1892.88, "end": 1897.92, "text": " gives them theta one, which is trained on both the proof step and the proof size objective" }, { "start": 1897.92, "end": 1906.72, "text": " and theta one is our first model in our expert iteration. So now we are simply going to repeat" }, { "start": 1906.72, "end": 1915.2, "text": " those things. Each iteration k consists in sampling proof searches for statements using" }, { "start": 1915.2, "end": 1922.52, "text": " the current model, filtering successful proof searches to extract a new data set, and fine" }, { "start": 1922.52, "end": 1928.24, "text": " tuning the theta zero on it to obtain theta k plus one. Note that they don't they don't" }, { "start": 1928.24, "end": 1937.16, "text": " go from theta zero to theta one to theta two and so on. They always so they don't do that." }, { "start": 1937.16, "end": 1942.28, "text": " They always go from theta zero to theta two, then they use theta two to generate a data" }, { "start": 1942.28, "end": 1948.74, "text": " set, then they fine tune theta zero again to get to theta three. It'd be interesting" }, { "start": 1948.74, "end": 1955.28, "text": " to know why they do it this way. Maybe if you continue fine tuning, you're already sort" }, { "start": 1955.28, "end": 1961.68, "text": " of locked into something. So the knowledge comes the knowledge, the unified knowledge" }, { "start": 1961.68, "end": 1967.72, "text": " comes from you can see this right here, the fact that they the data sets they generate" }, { "start": 1967.72, "end": 1974.12, "text": " comes from the unified set of all the statements they've proven so far. So all the proofs they" }, { "start": 1974.12, "end": 1982.04, "text": " found so far, they are all go together into one big data set for the next step. So technically" }, { "start": 1982.04, "end": 1988.8799999999999, "text": " every model can like relearn the proofs that the last model also knew because it's there" }, { "start": 1988.8799999999999, "end": 1995.12, "text": " they're in the same data set. And, you know, potentially, they also say that they de duplicate" }, { "start": 1995.12, "end": 2001.08, "text": " proofs, which means that for the same statements, there could be multiple proofs, and they will" }, { "start": 2001.08, "end": 2006.08, "text": " always take the shortest one. 
So that might be even disadvantage, a disadvantage if you" }, { "start": 2006.08, "end": 2012.72, "text": " were to tune from like theta two, which would still have learned a longer proof for a particular" }, { "start": 2012.72, "end": 2018.96, "text": " statement. And you'd have to like forget that it's probably just easier to scratch everything" }, { "start": 2018.96, "end": 2027.12, "text": " and start with the shorter proof in your data set. And yeah, that is it. That's the expert" }, { "start": 2027.12, "end": 2034.76, "text": " iteration process. They get a new model, they use it to generate new proofs, they add the" }, { "start": 2034.76, "end": 2040.32, "text": " proofs to the set of things they know. And there is a set of things they don't know," }, { "start": 2040.32, "end": 2046.16, "text": " right? Because there can also be bad proofs, which serve as negative examples, which is" }, { "start": 2046.16, "end": 2053.92, "text": " also good, can handle negative examples, and then they get better and better. So now they" }, { "start": 2053.92, "end": 2061.76, "text": " are going to evaluate this right now, you see that they have various, various ways of" }, { "start": 2061.76, "end": 2066.42, "text": " using this model, there's pass at eight, there's pass at one, which essentially means like" }, { "start": 2066.42, "end": 2074.0400000000004, "text": " how many tries they give per expansion step, like do we sample, do we try once do we try" }, { "start": 2074.0400000000004, "end": 2079.6000000000004, "text": " eight times, obviously, the more you try, the longer your searches run, but also the" }, { "start": 2079.6000000000004, "end": 2085.6800000000003, "text": " higher your chance of actually finding something useful. And these things are mostly proportional" }, { "start": 2085.68, "end": 2094.3999999999996, "text": " to each other. So it's just a matter of computational effort. You can see that with expert iterations," }, { "start": 2094.3999999999996, "end": 2099, "text": " so the x axis right here is number of expert iterations, you can see they do nine expert" }, { "start": 2099, "end": 2106.3199999999997, "text": " iterations on these data sets. In general, you see an upwards trend. So more and more" }, { "start": 2106.3199999999997, "end": 2114.7999999999997, "text": " statements are able to be proven by the by the expert iterated system. And they have" }, { "start": 2114.8, "end": 2120.2000000000003, "text": " multiple data sets, this mini F2F is their final goal. This is made up of these various" }, { "start": 2120.2000000000003, "end": 2128.36, "text": " competition level statements, while the mathlib that is more of these kind of formal proofs" }, { "start": 2128.36, "end": 2135.04, "text": " from these from these formal environments. And they do they do see that the overlap isn't" }, { "start": 2135.04, "end": 2141.3, "text": " too great right here. And you can see that here as well. The scaling only kind of sort" }, { "start": 2141.3, "end": 2148.04, "text": " of kicks in after a while. What also astounded me is that in both cases, you have solve rates" }, { "start": 2148.04, "end": 2154.0800000000004, "text": " actually go down intermittently. And I would be I would be very interested, you know, why" }, { "start": 2154.0800000000004, "end": 2159.84, "text": " that is that could be just like an effect of size or something like this. But like," }, { "start": 2159.84, "end": 2168.7200000000003, "text": " why do solve rates go slightly, slightly down? Or is it just noise? 
I have no idea. You also" }, { "start": 2168.72, "end": 2181.4399999999996, "text": " see these are the cumulative, the cumulative pass rates. And so this is this is the expert" }, { "start": 2181.4399999999996, "end": 2189.04, "text": " iteration model. And this is the sample only model. So in the blue model, you run expert" }, { "start": 2189.04, "end": 2195.14, "text": " iteration, which means that you sample data, and then you retrain and then you sample again," }, { "start": 2195.14, "end": 2202.8799999999997, "text": " and then you retrain. And in the orange model, you only sample so you only use the you only" }, { "start": 2202.8799999999997, "end": 2208.16, "text": " use I believe the theta zero, which is the initial model, you use that to guide your" }, { "start": 2208.16, "end": 2215.24, "text": " search, but you never retrain on the things that you found. And interestingly, obviously," }, { "start": 2215.24, "end": 2221.9, "text": " I guess the expert iteration model way outperforms the sample only model. However, the sample" }, { "start": 2221.9, "end": 2228.5, "text": " only model uses less compute, because it doesn't have to do the retraining. So once you adjust" }, { "start": 2228.5, "end": 2234.1600000000003, "text": " for that, you can see it's this line right here, where at first the sample only model" }, { "start": 2234.1600000000003, "end": 2241.52, "text": " is better. You know, because the expert iteration actually trains at wastes time and training." }, { "start": 2241.52, "end": 2248.62, "text": " But as you go on, if you give it more and more compute, the number of more statements" }, { "start": 2248.62, "end": 2256.16, "text": " that the sampling only model solves, it underwhelms with respect to what the expert iteration" }, { "start": 2256.16, "end": 2263.48, "text": " solves. And even on this data set right here on this more distant data set, there seems" }, { "start": 2263.48, "end": 2271.2799999999997, "text": " to be almost like a little bit of a diminishing return in the sample only method. And at after" }, { "start": 2271.2799999999997, "end": 2276.68, "text": " a while after a number of expert iterations, the expert iteration method outshines the" }, { "start": 2276.68, "end": 2283.3199999999997, "text": " sample only method. We don't have an adjusted compute curve right here. But you can guess" }, { "start": 2283.3199999999997, "end": 2291.04, "text": " maybe that it might look something like this. Possibly, possibly just kind of like a constant" }, { "start": 2291.04, "end": 2301.56, "text": " over the over the originally orange curve. Orange curve bad. Yeah. Also, let me know" }, { "start": 2301.56, "end": 2307.2, "text": " how you like this this pre annotation right here that I've been doing now for two papers," }, { "start": 2307.2, "end": 2314.02, "text": " I think. So I like pre highlight them. I wonder how that's how that's received. If that makes" }, { "start": 2314.02, "end": 2320.72, "text": " it more or less confusing. It just tells me a bit more where to where to jump to. So we" }, { "start": 2320.72, "end": 2326.56, "text": " get some results right here. The number of statements proved in math with train goes" }, { "start": 2326.56, "end": 2336.36, "text": " from 17,390 at iteration one to 19,476 at iteration nine, while the average proof length" }, { "start": 2336.36, "end": 2345.68, "text": " of these statements goes from 4.8 to 4.0. 
We hypothesize that this continuously improving" }, { "start": 2345.68, "end": 2351.2599999999998, "text": " performance through expert iteration stems from two effects. So one, the model finding" }, { "start": 2351.26, "end": 2357.36, "text": " new original proofs for the same statements, which would then be shorter than the original" }, { "start": 2357.36, "end": 2363.96, "text": " proofs. And two, the model closing marginally harder statements at each iteration, which" }, { "start": 2363.96, "end": 2369.84, "text": " in turn provides more useful training data for the next iteration. By iteration nine," }, { "start": 2369.84, "end": 2377.6600000000003, "text": " the model is trained on more than 90% generated data. So the original data set is almost a" }, { "start": 2377.66, "end": 2384.2, "text": " is like a small minority of the data that the model is trained on. Again, a another" }, { "start": 2384.2, "end": 2390, "text": " property that I haven't even mentioned yet is that in proof search, you can verify a" }, { "start": 2390, "end": 2395.8799999999997, "text": " proof like you know, if a proof is correct, which in most domains isn't the case, right?" }, { "start": 2395.8799999999997, "end": 2403.04, "text": " So retraining on your own output is dangerous, because you don't exactly know how good it" }, { "start": 2403.04, "end": 2408.44, "text": " is. But here, you can just verify that it's good. And then you know, it's good data, right?" }, { "start": 2408.44, "end": 2413.04, "text": " So it's a it's a bit of a special environment, but I think we can still learn things from" }, { "start": 2413.04, "end": 2420.96, "text": " it. So what do they do? They first train this thing. So now, I think the setup is clear," }, { "start": 2420.96, "end": 2426.7599999999998, "text": " right, the expert iteration setup. And they also have made it clear that, you know, we" }, { "start": 2426.76, "end": 2434.6400000000003, "text": " can reach harder and harder statements. But what we maybe can't do is just jump to hard" }, { "start": 2434.6400000000003, "end": 2442.0200000000004, "text": " statements, we need a curriculum, we need several various difficulties of statements," }, { "start": 2442.0200000000004, "end": 2449.6000000000004, "text": " so that we can sort of expand our knowledge again and again and again. And they do first" }, { "start": 2449.6000000000004, "end": 2455.46, "text": " do that with synthetic data. So apparently, apparently, what you can do is you can do" }, { "start": 2455.46, "end": 2462.36, "text": " a you can make a synthetic inequality statement generator, which gives you symbolic mathematical" }, { "start": 2462.36, "end": 2467.8, "text": " inequalities, and you can kind of control how difficult they are. So what they do is" }, { "start": 2467.8, "end": 2474.2, "text": " they just they just compose known inequality theorems, like Heller inequality or something" }, { "start": 2474.2, "end": 2479.56, "text": " like this, they just compose them. And how many times they compose them, that kind of" }, { "start": 2479.56, "end": 2484.88, "text": " measures how how difficult they are. So they have two parameters right here, they control" }, { "start": 2484.88, "end": 2492.78, "text": " how difficult they are. And they they generate 100 statements of low difficulty, like these" }, { "start": 2492.78, "end": 2499.32, "text": " numbers pretty low, and they formalize a proof for each. So this is kind of their seed set." 
}, { "start": 2499.32, "end": 2506.94, "text": " So two things you need. So the you need this seed seed set of proofs. This is usually like" }, { "start": 2506.94, "end": 2514.98, "text": " some sort of a data set. In this in their case, they combine the this tactic data set that" }, { "start": 2514.98, "end": 2522.32, "text": " is their seed data set, they combine this one with these 100 statements that they generate," }, { "start": 2522.32, "end": 2527.6, "text": " and they prove themselves, either themselves or automatically. So this would be this would" }, { "start": 2527.6, "end": 2535.94, "text": " be the seed data set. And this thing right here, that's the curriculum." }, { "start": 2535.94, "end": 2543.38, "text": " Or just a collection of statements of various, various difficulties, the curriculum doesn't" }, { "start": 2543.38, "end": 2549.7400000000002, "text": " need a proof, right? This is the key part right here, the curriculum simply gives the" }, { "start": 2549.7400000000002, "end": 2557.5, "text": " model an opportunity to solve continuously harder and harder problems going from the" }, { "start": 2557.5, "end": 2564.2200000000003, "text": " seed, right? So going from the seed, you only need to be able to solve the most easy problems" }, { "start": 2564.22, "end": 2570.4599999999996, "text": " in the curriculum. And then you can sort of rely on the expert iteration on the self bootstrapping" }, { "start": 2570.4599999999996, "end": 2578.2999999999997, "text": " to become more to become better. Results are here, you can see that for a given this this" }, { "start": 2578.2999999999997, "end": 2584.3799999999997, "text": " right here is it's either that it's one of the n numbers, this right here. So it the" }, { "start": 2584.3799999999997, "end": 2591.7, "text": " the color measures the difficulty. Zero is the easiest six is the most, most hard hardest" }, { "start": 2591.7, "end": 2598.18, "text": " difficulty. You can see that even for easy problems, expert iteration just manages to" }, { "start": 2598.18, "end": 2605.8999999999996, "text": " solve much more, set much more problems. And for the hardest problems, the sample only" }, { "start": 2605.8999999999996, "end": 2610.72, "text": " method. So if you just do proof searching without expert iteration, it doesn't solve" }, { "start": 2610.72, "end": 2616.2799999999997, "text": " any of the harder problems. Whereas the expert iteration actually, if you see like there's" }, { "start": 2616.2799999999997, "end": 2621.58, "text": " like a tiny uptick at the bottom right here, it actually manages to solve some even of" }, { "start": 2621.58, "end": 2627.86, "text": " the hardest category. So that gives a bit of credence. Yeah, they say here that the" }, { "start": 2627.86, "end": 2633.7799999999997, "text": " end equals six remains completely out of reach for of simply scaling the number of attempts" }, { "start": 2633.7799999999997, "end": 2642.7999999999997, "text": " per statements, which kind of means that you'd have to like invest a lot lot of compute if" }, { "start": 2642.7999999999997, "end": 2649.36, "text": " you just do proof searching to match the to match how good expert iteration is about compute" }, { "start": 2649.36, "end": 2658.98, "text": " by compute is expert iteration is better. Yeah, so they say, well, we're going to target" }, { "start": 2658.98, "end": 2665.88, "text": " this mini F2F data set, right? This is our final challenge. 
They say we curated and manually" }, { "start": 2665.88, "end": 2674.06, "text": " formalized a set of math exercises to target this data set. So this is going to be their" }, { "start": 2674.06, "end": 2679.32, "text": " seeds and curricula here. We hypothesize that if the difficulty of the set of statements" }, { "start": 2679.32, "end": 2685.28, "text": " was made varied enough, expert iteration could potentially leverage it to effectively shift" }, { "start": 2685.28, "end": 2691.8, "text": " our models distribution closer to mini F2F, and in turn improve their eventual performance" }, { "start": 2691.8, "end": 2696.92, "text": " on it. So they're going to build they're going to build this curriculum right here, they're" }, { "start": 2696.92, "end": 2705.32, "text": " going to collect some, like 300 statements, we manually formalized, it means just they" }, { "start": 2705.32, "end": 2709.7200000000003, "text": " bring it into this syntax, it doesn't mean they also prove these statements, right? So" }, { "start": 2709.7200000000003, "end": 2717.2000000000003, "text": " these will be these curriculum statements. These come from like books, math books that" }, { "start": 2717.2000000000003, "end": 2723.32, "text": " are used to prepare for math exams, which are much closer to this data set that they" }, { "start": 2723.32, "end": 2732.7200000000003, "text": " target. Yeah, so the set of statements, this is this curriculum that I'm talking about" }, { "start": 2732.72, "end": 2741.6, "text": " is the union, the union of the statements in mathlet train this, they, they, interestingly," }, { "start": 2741.6, "end": 2748.68, "text": " they add these inequalities that they've generated to the set of statements, and also they these" }, { "start": 2748.68, "end": 2756.08, "text": " manually collected things that they mentioned above. And with that, interestingly, they" }, { "start": 2756.08, "end": 2764.7599999999998, "text": " do in fact, get a lot they get better on, they get better on this mini F2F validation" }, { "start": 2764.7599999999998, "end": 2776.64, "text": " set. So yeah, you can see that things go up, which is a good sign. Yeah, again, that you" }, { "start": 2776.64, "end": 2782.84, "text": " have like different parameters. This a parameter is also I think a parameter of how many times" }, { "start": 2782.84, "end": 2788.2000000000003, "text": " you sample per expansion or something like this. I don't know, there are many, many parameters" }, { "start": 2788.2000000000003, "end": 2794, "text": " in these searches. But in general, just from what I've seen from this paper, is you can" }, { "start": 2794, "end": 2801.1200000000003, "text": " always trade off more compute, like trying more times, expanding more times, suggesting" }, { "start": 2801.1200000000003, "end": 2807.28, "text": " more steps to do, you can always trade that for a bit more performance. But the general" }, { "start": 2807.28, "end": 2816.88, "text": " direction, it doesn't matter in in the general direction. Yeah, that's, that's that. Obviously," }, { "start": 2816.88, "end": 2824.8, "text": " they are better than like the results are as you would expect, I think so. Their models" }, { "start": 2824.8, "end": 2830.34, "text": " are generally better than let's say the other models that haven't been targeted at this" }, { "start": 2830.34, "end": 2839.92, "text": " data set, or the models that just do proof search. So they have a short discussion of" }, { "start": 2839.92, "end": 2846.96, "text": " model size. 
They say we briefly experimented with different model sizes and found that" }, { "start": 2846.96, "end": 2852.56, "text": " model size scaling is not as straightforward in the case of as in the case of unsupervised" }, { "start": 2852.56, "end": 2858.36, "text": " learning, they found that bigger models, they found that bigger models are better in the" }, { "start": 2858.36, "end": 2866.2400000000002, "text": " sense that they consistently exhibit higher pass rate if you just sample once. However," }, { "start": 2866.2400000000002, "end": 2872.7200000000003, "text": " despite that, it is often the case that for a fixed amount of compute sampling more attempts" }, { "start": 2872.7200000000003, "end": 2877.52, "text": " from a smaller model leads to better final performance. So these are these are the sort" }, { "start": 2877.52, "end": 2882.28, "text": " of considerations that you have to do. If you have two independent variables, right," }, { "start": 2882.28, "end": 2890.96, "text": " we can trade them off against one another. Just for the scale, with their big model running" }, { "start": 2890.96, "end": 2898.6400000000003, "text": " a full expert iteration, that's kind of one of these full expert iteration. Full expert" }, { "start": 2898.6400000000003, "end": 2903.1200000000003, "text": " iteration, do they mean that all the nine steps or just one step in the expert, I'm" }, { "start": 2903.1200000000003, "end": 2908.96, "text": " going to guess all the nine steps. So the whole experiment to get to their their model" }, { "start": 2908.96, "end": 2917.96, "text": " after nine expert iteration steps required 2000 a 100 days to compute. That is insane." }, { "start": 2917.96, "end": 2924.56, "text": " Running one full proof search, when properly parallelized requires on average about point" }, { "start": 2924.56, "end": 2934.52, "text": " one a 100 hours of compute. So that's like, it's like still a minute of an a 100. Crazy," }, { "start": 2934.52, "end": 2944.36, "text": " right? So the sizes here are enormous, right? And still, they are able to solve what two" }, { "start": 2944.36, "end": 2953.2, "text": " of these Olympiad problems, right? With manual targeting, with manual data collection that" }, { "start": 2953.2, "end": 2961.44, "text": " is specifically targeted at that data set, and with 2000 a 100 days. And, you know, they" }, { "start": 2961.44, "end": 2969.92, "text": " don't solve all of them, they solve two. So I believe this field is still in its infancy." }, { "start": 2969.92, "end": 2974.64, "text": " I believe there's lots of stuff to do right here. There's probably approaches that make" }, { "start": 2974.64, "end": 2980.7200000000003, "text": " these things a lot better. But I'm excited just because I think that is an area where" }, { "start": 2980.7200000000003, "end": 2986.7200000000003, "text": " deep learning, as they say, hasn't really pushed through quite yet. And I think there's" }, { "start": 2986.72, "end": 2993.2, "text": " a lot to do to bring down the requirements here and the methodologies that they use." }, { "start": 2993.2, "end": 2999.56, "text": " I like the way they combine the language modeling with the proof searching. 
The expert iteration" }, { "start": 2999.56, "end": 3006.04, "text": " might also be a nice lesson for other fields, like how can we combine the neural models" }, { "start": 3006.04, "end": 3012.9199999999996, "text": " with some sort of search procedures maybe or other heuristics to generate ever better" }, { "start": 3012.92, "end": 3019.08, "text": " training data that we can then feed back to the models. All of this is highly interesting." }, { "start": 3019.08, "end": 3039.96, "text": " And yeah, let me know what you think. Bye bye." } ]
eROy3BrqEVk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] AI predicts race from X-Ray | Google kills HealthStreams | Boosting Search with MuZero
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "mlnews", "schmidhuber", "kaust", "saudi arabia", "ai initiative", "reading race", "xray", "race xray", "ai race", "ai bias", "facebook primates", "muzero", "muzero code", "muzero paper", "google muzero", "health streams", "deepmind health", "wandb", "dmca", "github dmca", "distill", "distill gnn", "graph neural networks", "ai depression", "unconstrained scene generation", "transformers" ]
#mlnews #schmidhuber #muzero Your regular updates on what's happening in the ML world! OUTLINE: 0:00 - Intro 0:15 - Sponsor: Weights & Biases 1:45 - Google shuts down health streams 4:25 - AI predicts race from blurry X-Rays 7:35 - Facebook labels black men as primates 11:05 - Distill papers on Graph Neural Networks 11:50 - Jürgen Schmidhuber to lead KAUST AI Initiative 12:35 - GitHub brief on DMCA notices for source code 14:55 - Helpful Reddit Threads 19:40 - Simple Tricks to improve Transformers 20:40 - Apple's Unconstrained Scene Generation 21:40 - Common Objects in 3D dataset 22:20 - WarpDrive Multi-Agent RL framework 23:10 - My new paper: Boosting Search Agents & MuZero 25:15 - Can AI detect depression from speech? References: Google shuts down Health Streams https://techcrunch.com/2021/08/26/google-confirms-its-pulling-the-plug-on-streams-its-uk-clinician-support-app/ AI predicts race from X-Rays https://www.iflscience.com/technology/ai-makes-strangely-accurate-predictions-from-blurry-medical-scans-alarming-researchers/?fbclid=IwAR2ddIP4w0p6VNbMRoe_9OPXQS6NA365XdB22v7rMlVOcuqnxe1ST7ZuvtA&utm_source=pocket_mylist https://arxiv.org/ftp/arxiv/papers/2107/2107.10356.pdf Facebook labels black men as primates https://www.nytimes.com/2021/09/03/technology/facebook-ai-race-primates.html https://en.wikipedia.org/wiki/Human Distill articles on GNNs https://distill.pub/2021/gnn-intro/ https://distill.pub/2021/understanding-gnns/ Jürgen Schmidhuber leads KAUST AI initiative https://people.idsia.ch/~juergen/kaust-2021.html GitHub issues court brief on code DMCAs https://github.blog/2021-08-31-vague-infringement-allegations-considered-harmful/ Useful Reddit Threads https://www.reddit.com/r/MachineLearning/comments/phvgzb/r_how_machine_learning_will_revolutionise_physics/ https://www.reddit.com/r/MachineLearning/comments/pe9jyt/d_what_are_the_most_important_problems_in_ml_today/ https://www.reddit.com/r/MachineLearning/comments/phnx8c/d_do_you_reproduce_a_method_for_sota_comparison/ https://www.reddit.com/r/MachineLearning/comments/pev04l/d_what_kind_of_hyperparameter_optimisation_do_you/ Tricks to improve Transformers https://arxiv.org/pdf/2108.12284.pdf Unconstrained Scene Generation https://apple.github.io/ml-gsn/ Common Objects in 3D dataset https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction WarpDrive Multi-Agent RL framework https://blog.einstein.ai/warpdrive-fast-rl-on-a-gpu/ Boosting Search Engines / MuZero Code https://arxiv.org/abs/2109.00527 https://github.com/google-research/google-research/tree/master/muzero https://github.com/google-research/language/tree/master/language/search_agents Can AI detect depression? 
https://venturebeat.com/2021/08/31/ai-startups-claim-to-detect-depression-from-speech-but-the-jurys-out-on-their-accuracy/?utm_source=pocket_mylist Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Google decommissions DeepMind's health app, Jürgen Schmidhuber leads an AI initiative in Saudi Arabia, and I have a new paper. Welcome to ML News. Hey, hey, you. Yes, you. Do you run experiments? Machine learning experiments? Yes. How do you track them? What? That's not a good way to track them. Oh, here's what you should do: you should use Weights & Biases. Coincidentally, this video is sponsored by them. What is it? It's a system to track your experiments, track your artifacts, and reproduce all the things you've ever done, covering metrics, data sets, and models from the inception of your idea to the final deployment and beyond. This is the ultimate tool. You can get started with just one line of code (a short sketch of what that looks like follows right after this paragraph). Yes, one line of code, and be amazed at what it gives you: hyperparameter tuning, metrics tracking, resource utilization, model and data set versioning, on cloud and on premise. Get this and much more when you sign up to Weights & Biases. Personal accounts are completely free. What are you waiting for? Sign up now. No, actually, watch the video first, then sign up. Or sign up now and sign up later. Get your mom to sign up, get your pet to sign up. There's absolutely no reason not to go to this URL and get your account now. Cheers.
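To make the one-line claim concrete, here is a minimal sketch of typical Weights & Biases usage. The project name, config values, and the logged metric below are made up for illustration; wandb.init, wandb.log, and run.finish are the real API calls:

```python
# Minimal sketch of Weights & Biases experiment tracking.
# Project name, config values, and the fake loss are illustrative only.
import wandb

# The "one line": start a tracked run and register its hyperparameters.
run = wandb.init(project="ml-news-demo", config={"lr": 1e-3, "batch_size": 32})

for step in range(100):
    loss = 1.0 / (step + 1)          # stand-in for a real training loss
    wandb.log({"train/loss": loss})  # appears live in the dashboard

run.finish()
```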
Hello and welcome to ML News on this beautiful, glorious Monday. Let's dive into the first story. TechCrunch writes: Google confirms it's pulling the plug on Streams, its UK clinician support app. So this app has a bit of a history: DeepMind started it up in 2015, originally trying to bring more AI into the health ecosystem. Now, the Streams health app isn't actually an AI-focused app; it's kind of an app to track health data and assist clinicians in making decisions. The goal was always to bring AI into the picture, but this apparently never succeeded. The article details the history of the app as it went through DeepMind's stages, then of course the big scandal, where it was discovered that DeepMind didn't really have the legal basis for dealing with the data that they were dealing with (that was a weird sentence), and finally DeepMind handing the app over to Google Health, even though they said they would never share anything about it with Google. And now, finally, Google is deciding to turn off the app completely. Whether this is a result of data privacy issues or just of the business case not being strong enough, we don't exactly know. If you're interested, the article on TechCrunch dives fairly deeply into the issue. What is special is how often it mentions that the data is going to be deleted: it starts off with at least two paragraphs saying the data is going to be deleted, it mentions it throughout, and then it ends again with a paragraph on how the data is going to be deleted. So rest assured, the data is going to be deleted. I'm winking. You can't see it. I'm winking. Now, the article is also a little bit critical of Google starting up projects and then killing them off after a short while, such as Google Plus or the many, many, many, many, many, many messaging apps that Google has released, things like Google Video and so on. But honestly, I think the strategy has worked out so far: we got a couple of very nice products out of Google that started exactly like this, products we might never have gotten if every single new product were an eternal commitment to support it. That being said, bring back the free storage for Google Photos. This was actually useful. So finally, Google is turning off this Streams app. There's apparently still one group of customers using it; I guess the two sides still have to come to some sort of agreement until the end of their contract. But going forward, let's just wait for the next Google inventions. There should be some sort of betting market where you can bet whether or not new Google products will make it five years past their inception. Could be fun. IFLScience writes: AI makes strangely accurate predictions from blurry medical scans, alarming researchers. This is an article about the paper "Reading Race: AI recognizes patient's racial identity in medical images", a study into various data sets and algorithms and whether they can detect a patient's race just from radiological images such as these ones. Now, there is a common pattern among articles like this one: usually, some confounding variable wasn't taken into account, like the source of the data set or things like that. However, this paper specifically pays a lot of attention to eliminating all such confounding variables, and it really tests multiple hypotheses on how the model makes its assessment. So there are apparently a few distinct markers of race even in these radiological images, but even when they control for those, the models are still able to make out patients' self-reported races. The really interesting thing is that even if the images are degraded, such as this one right here, and really pixelated, the models can still make out the patients' self-reported race with higher-than-random accuracy, even though the degraded pictures would be completely undiagnosable for any human, and humans certainly couldn't make out the race of the patients from them (a rough sketch of this kind of degradation check follows after this story). So, as I said, the paper is a fairly lengthy investigation into these models and data sets, including trying to tease out race from models that were trained not to predict race, which essentially means that in order to predict some health outcome, the models in some part make predictions that correlate with race. It is a fairly lengthy article, but if you're interested in these things, definitely give it a read; it seems to be a very thorough study. But the article here frames it all in terms of how terrible this is and how biased these algorithms are. And while there's certainly truth to that, and many of these algorithms are in fact biased when they shouldn't be, for various reasons, there is also the apparently rather shocking conclusion that your health outcomes interact with your genetics. I know, new concept. So again, while we can certainly all agree that results like this are worrisome, and that there are problems with bias in AI, it seems that some people would like their ideologies to overrule reality, and I don't think that's a worthwhile goal. All that being said, these problems are of course incredibly difficult, but we should look at them with a view to what's going to help the most people and deliver the best outcomes for all individuals. And there are probably no easy solutions for incredibly interconnected problems that are extremely multifactorial and include things like genetics, environment, society, data gathering, and the entire historical context of all of that. And that, I guess, is my rather boring take on it.
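Since the degradation experiment is the striking part, here is a rough sketch of what such a check can look like. The model, images, and labels are placeholders, and this is not the paper's actual code:

```python
# Sketch of a degradation check: evaluate a trained classifier on images that
# are downsampled and upsampled back, and see whether accuracy stays above
# chance. `model`, `images`, and `labels` are placeholders.
import torch
import torch.nn.functional as F

def degrade(images: torch.Tensor, factor: int) -> torch.Tensor:
    """Destroy fine detail by downsampling by `factor`, then upsampling back."""
    if factor == 1:
        return images
    small = F.interpolate(images, scale_factor=1.0 / factor, mode="bilinear")
    return F.interpolate(small, size=images.shape[-2:], mode="bilinear")

@torch.no_grad()
def accuracy_under_degradation(model, images, labels, factors=(1, 4, 16, 32)):
    results = {}
    for f in factors:
        preds = model(degrade(images, f)).argmax(dim=-1)
        results[f] = (preds == labels).float().mean().item()
    # If values stay above chance even at large factors, the signal survives
    # degradation that would make the images undiagnosable for humans.
    return results
```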
In related news, the New York Times writes: Facebook apologizes after AI puts "primates" label on video of black men. Facebook called it "an unacceptable error"; the company has struggled with other issues related to race. The article is about a Daily Mail video of a couple of black men, under which the algorithm asked "Keep seeing videos about primates?", with the options "Yes" or "Dismiss". So the classification algorithm made a mistake here, and this is not a new thing. As the article states, in 2015 Google mistakenly labeled pictures of black people as gorillas, and more than two years later, Wired found that Google's solution was to censor the word "gorilla" from searches while also blocking "chimp", "chimpanzee", and "monkey". The article then goes into some more internal matters at Facebook, trying to link this to the system or something like this, which I find quite shady, honestly. These systems have a number of issues. There are issues, of course, with data collection, and there are issues with all kinds of other stuff. But ultimately, these systems are trained in a way where errors are just errors: if you fail to distinguish a yacht from a sailboat, that is an error to the model in the same way as if you fail to distinguish a human from a primate. The model has no inherent way of knowing that one is a socially acceptable error and the other is a totally socially unacceptable one. There are ways to mitigate this, but they usually require effort on the part of humans, who essentially have to correct for all the potential socially terrible errors the model could make. And very often that burden is so large, and combinatorially so hard, that all you can do is block entire pieces of the search space in order to mitigate these mistakes (a tiny sketch of that kind of blunt mitigation follows after this story). This is displayed as some kind of negative, like: well, the AI is still biased, but now we're just sort of censoring it. Yes, I mean, what can you do? It's very easy to complain about these types of things. Now, of course, many of you might have noticed that technically the model isn't wrong, as humans are the most abundant and widespread species of primates. But technicalities aside, I think we can all agree that this isn't an output you would want from your system. So what's the solution? I don't know. Probably the best solution would be an attack from multiple sides, where on one hand the companies invest more work into mitigating these types of errors, which means essentially collecting more training data at these intersections of very socially critical issues, so that the models become more confident about them. And on the other hand, it might also require a little bit of rethinking in society, where we see a mistake like this not as some terrible thing happening, but as falling more into the category of mislabeling a sailboat as a yacht and vice versa. It'd be nice if we got to a point where we could think, ah, cool, the system made a mistake, and go on with our lives. But of course, it's not always that easy, because we use these types of systems in situations where it actually matters what the system predicts. So ultimately, it comes down to close supervision of your products and continuous evaluation of their deployments. Again, it's a hard problem; I'm confident we can make progress on it. Complaining about it is fine. Just complaining, and acting like it's the most terrible thing and like it means something beyond what it actually means, is probably not helpful.
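As a rough illustration of the "block part of the label space" mitigation mentioned above (purely illustrative; this is not any company's actual system):

```python
# Illustrative only: post-hoc suppression of labels whose confusions are
# socially unacceptable, the blunt mitigation described above.
SENSITIVE_LABELS = {"gorilla", "chimpanzee", "monkey", "primate"}

def safe_top_label(label_probs: dict, threshold: float = 0.99) -> str:
    """Return the most confident label, skipping sensitive labels unless the
    model is nearly certain about them."""
    for label, prob in sorted(label_probs.items(), key=lambda kv: -kv[1]):
        if label in SENSITIVE_LABELS and prob < threshold:
            continue  # fall through to the next-best label
        return label
    return "unknown"

print(safe_top_label({"primate": 0.70, "person": 0.65}))  # -> "person"
```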
It was previously reported that Distill is taking a break due to the high load and the very high quality standards they have, which was leading to a kind of volunteer burnout. They have now released what appear to be some of the last articles they're going to release for a while, and they are on graph neural networks. One is a gentle introduction to graph neural networks; the other is about understanding convolutions on graphs. The articles pretty much contain what their titles say. If you're interested in graph neural networks, I can absolutely recommend you give these articles a read: they have very good illustrations of what's happening, plus examples, and, as you are used to from Distill articles, their quality is extremely high. I can definitely recommend you check them out (for a taste of the core operation they illustrate, see the short sketch at the end of this segment). Jürgen Schmidhuber announces that he'll be starting as director of the KAUST AI initiative. KAUST is the King Abdullah University of Science and Technology in Saudi Arabia and is one of the most well-funded universities on the planet. Schmidhuber will remain in all his other positions and lead the AI initiative there, apparently traveling back and forth. On his blog, he writes: we hope the new AI initiative will contribute to a new golden age for science, analogous to the Islamic golden age that started over a millennium ago. So quite likely we'll be hearing a lot more from KAUST in the near future. Not really ML-related, but maybe a little bit if you care about Codex and models that produce code: GitHub has submitted a friend-of-the-court brief, which is essentially an advisory letter to the courts, on DMCA takedown notices for copyrighted material in the space of programming. Specifically, the brief concerns what they call claims involving non-literal copying of software, and they give an example case right here, in which the SAS Institute has brought infringement claims against World Programming. Specifically, SAS claims that it is not specific lines of code that the defendant copied, but only other aspects, like the code's overall structure and organization. The blog post also says: after examining the first question, the court found SAS Institute simply repeated and repeated that their system was creative, but did not point to any specific examples that would enable the court or the defendant to identify which parts were used, in order to ultimately define those parts that were actually protected by copyright. The court ruled for the defendant, leading to this appeal. Imagine something like: you didn't exactly copy my picture, but you used the same organization of putting paint on the canvas. Now get a life, SAS. Of course, I don't know all the background here; copyright is such a complicated issue, and there are legitimate cases where people steal from each other. I can even see that there are some cases where you can say, well, the structure of my code is so unique and creative, and they copied it, or something like this. Still, can't you just spend the money on something useful? GitHub's position is that with a DMCA takedown notice, the noticer should specify in as much detail as possible which parts of the defendant's work are infringing on the copyright, so that there is even a possibility of responding. Apparently, it's totally possible to issue a DMCA takedown notice simply by saying, well, there's something in there. And I agree, that's not helpful. But ultimately, helpfulness and what results from the legal system and the courts don't always match. So we'll keep an eye on how this develops.
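Back on the Distill articles: here is a minimal sketch of one round of graph message passing, the core operation those articles visualize. This is illustrative code, not taken from the articles:

```python
# Minimal sketch of one round of message passing on a small graph, the
# operation the Distill GNN articles visualize. Illustrative only.
import torch

num_nodes, dim = 4, 8
x = torch.randn(num_nodes, dim)                          # node features
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 0]])   # (source, target) pairs

transform = torch.nn.Linear(dim, dim)

messages = transform(x[edges[:, 0]])                     # one message per edge
aggregated = torch.zeros_like(x).index_add_(0, edges[:, 1], messages)
x_next = torch.relu(x + aggregated)                      # residual update
print(x_next.shape)                                      # torch.Size([4, 8])
```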
So, this week there weren't really many questions in the news to be answered, but there were some really nice questions, and some really good threads, on Reddit that I thought were worth going through. There was a thread on how machine learning will revolutionize physics simulations in games. It's almost like a blog article; in a Reddit post it seems a little bit wasted, honestly, but it's pretty cool. It details what kinds of models exist for doing physics simulations and what their advantages and disadvantages are: for example, here's one that's specifically good at modeling large deformations, tears, and so on (this is a piece of bread tearing apart). And it also details how machine learning is being used to speed up the simulations. Essentially, you run the simulations, which are very compute-intensive, until you have a data set, and then you train a model to predict the end of the simulation from the beginning, which seems like it should be impossible. But hey, it's deep learning (a toy sketch of this surrogate idea follows at the end of this segment). So, pretty cool. If you're interested in the intersection of deep learning and physics, give the Reddit post a read and, of course, an upvote. Good job, Syed HM, for contributing to the ML subreddit. aristocratic octopus asks: what are the most important problems in ML today? I specifically want to highlight this thread because the answers are both diverse and really good. They range from diverse-environment learning, catastrophic forgetting, and modular learning to unstructured data, causality, few-shot learning, generalization, and so on. Now, these are things that are researched today, yet if you are coming into this field, looking for something to do, and you don't really have an idea of what to work on, this thread might be a little bit of inspiration for you. Kam war asks: do you reproduce a method for a state-of-the-art comparison, or do you just take the result from the method's paper? It's an interesting question, and I've seen people do both. The user says that, for example, they tried to reproduce a method yet couldn't get the same score: they only got 30% accuracy on a task where the paper claimed 70% accuracy, even though they just ran the authors' code with maybe a little modification. Some authors said that they'd need to tune the hyperparameters. The user also says they spend almost 90% of their time just trying to reproduce previous methods. Welcome to ML research, that is. Yeah, I don't know what the answer is here; there are also various opinions in the comments. You can almost guarantee that for a lot of research papers nowadays you cannot really count on their numbers: they might leave out of the paper a lot of tricks they used to reach that number, or the numbers might just be fake altogether. Of course, it could also be that the code they have on GitHub is old code, which happens often: if you resubmit somewhere, you redo some experiments, and something changes in the meantime. So there can be legitimate and illegitimate reasons why you don't get the numbers they report. What you can do is report both the number they give in the paper and the number you achieved with their method, simply consider these as two different baselines, and explain yourself in the paper. It is a problem that you spend ginormous amounts of time reproducing baselines, and as my PhD progressed, I moved more and more away from trying to get the exact numbers that baselines reported; I simply gave it my best shot at reproducing them and then reported what I got. I think it's up to you: as long as you detail in the paper what you did, at least you can't be faulted.
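Picking up the physics-simulation thread from above, here is a minimal sketch of the surrogate idea: an expensive simulator generates training pairs offline, and a small network learns to jump straight from the initial state to the final state. The simulator here is a toy stand-in:

```python
# Toy sketch of learning a simulation surrogate: generate (initial, final)
# pairs with an expensive simulator, then train a network to skip the rollout.
import torch
import torch.nn as nn

def expensive_simulator(x0: torch.Tensor) -> torch.Tensor:
    # Stand-in for many costly integration steps of a real physics engine.
    return torch.sin(3 * x0) + 0.1 * x0

x0 = torch.rand(10_000, 1) * 6 - 3        # sampled initial states in [-3, 3]
xT = expensive_simulator(x0)              # ground-truth final states

surrogate = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for epoch in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(surrogate(x0), xT)
    loss.backward()
    optimizer.step()

# At inference time, surrogate(x0) replaces the whole expensive rollout.
```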
And lastly, Oli Mac P asks: what kind of hyperparameter optimization do you use? Again, if you are looking for good advice, this thread might be something nice for you; there are suggestions such as Ray Tune, Optuna, Hyperopt, and so on. If you want a cheap method, I would start with all the hyperparameters at their default settings, then simply take the one you think is most important and vary it a little bit while keeping the others constant. Once you've found a good setting for that one, keep it constant and vary one of the other ones, again keeping the rest constant. Keep going one by one through the parameters until you've tuned all of them once, and then start from the beginning. At some point you'll converge. You might get into a loop, but that's kind of unlikely, and this usually got me to relatively good places in hyperparameter search. It also takes way less compute than running some kind of big grid search; usually these hyperparameters aren't that dependent on each other, so tuning them individually is okay.
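Here is a compact sketch of that cheap one-at-a-time tuning recipe. Everything is illustrative; `evaluate` stands in for a full train-plus-validate run:

```python
# Sketch of the one-at-a-time ("coordinate descent") hyperparameter search
# described above. `evaluate` is a placeholder for training + validation.
def coordinate_search(defaults, grids, evaluate, sweeps=2):
    best = dict(defaults)
    best_score = evaluate(best)
    for _ in range(sweeps):                      # repeat the full pass a few times
        for name, candidates in grids.items():   # vary one hyperparameter at a time
            for value in candidates:
                trial = dict(best, **{name: value})
                score = evaluate(trial)
                if score > best_score:           # keep the best setting found so far
                    best, best_score = trial, score
    return best, best_score

# Hypothetical usage (order the grids by how important you think each one is):
# best, score = coordinate_search(
#     defaults={"lr": 1e-3, "batch_size": 32, "dropout": 0.1},
#     grids={"lr": [1e-4, 3e-4, 1e-3, 3e-3], "dropout": [0.0, 0.1, 0.3]},
#     evaluate=train_and_validate,  # user-supplied function
# )
```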
Speaking of tuning and reproducing and performance, there is a new paper from IDSIA, USI and SUPSI called "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers", which gives a number of hints as to what you might want to tune when you train transformers. The paper is an in-depth investigation into what it takes to train transformers and what matters, and it gives some advice: for example, relative positional embeddings seem to outperform absolute positional embeddings for certain tasks; also, you should be careful about how you do early stopping and how you scale your embeddings, among other things. Lastly, the paper highlights the trouble with only having IID validation splits and not some sort of test that measures generalization capabilities beyond the exact distribution the model was trained on. If this is of interest to you, give it a read. Also, a collaboration between Apple and the Vector Institute releases "Unconstrained Scene Generation with Locally Conditioned Radiance Fields" at ICCV 2021, with code on GitHub as well. And this is pretty cool: this is scene generation, but with a freely moving camera. Apparently, previous works have focused on small camera movements, which is already impressive, but this technique allows you to generate scenes from a generator. It is essentially a GAN that first creates a latent floor map and then, based on that floor map, generates the 3D environment in which you can move the camera around freely; essentially, you can render the scene from wherever you want. It still looks a little bit wonky, but I think the potential of these techniques to make it into entertainment, training, simulation, and gaming is pretty cool, and probably not that far away. Again, the code is on GitHub; check it out. Facebook AI Research open-sources Common Objects in 3D, a large-scale data set for 3D reconstruction. This is a data set for 3D-reconstructing what they call common objects: apparently a crowdsourced data set of objects that people just happen to come across, which is pretty cool, because these are things that actually appear in real life. It seems like an extremely challenging data set, but often the most challenging data sets spur new types of discoveries. If you work in 3D reconstruction, this might be your next challenge. Salesforce releases WarpDrive, extremely fast reinforcement learning on an NVIDIA GPU. We've seen a number of libraries recently, such as Brax and Isaac Gym, that make reinforcement learning a lot faster by making use of accelerators; WarpDrive is especially geared toward multi-agent reinforcement learning. Multi-agent reinforcement learning is where you have many agents in the same world, and they need to interact with each other, somehow cooperating or competing. The difficult part, of course, is that you need to evaluate strategies for all of them, they depend on each other, and things like backpropagation become extremely hard, especially if you're limited in compute power. This library makes optimal use of the power that you have, and I can definitely recommend that you check it out if you are not a giant corporation. Speaking of giant corporations and reinforcement learning, there's a new paper called "Boosting Search Engines with Interactive Agents", and look, it's me. I've worked on this with this team as part of my internships and consultancy gigs at Google, but I am in no way the main author here. The paper is about developing agents that search in more than one step. If you go to a search engine, you usually enter some sort of query, and if you don't immediately find what you're looking for, you may look at the top results and then refine your query to find better results. That's exactly what we try to do with agents here. So you might start off with "who won the US Open", see a bunch of sports results appearing, and then rephrase, saying that you're specifically interested in tennis, and so on, until you get the answer that you want. What's specifically cool about this is that there's code to go along with it: next to the specific code that powers the search agents, there is an implementation of MuZero based on a library called SEED RL, which is geared at making optimal use of your accelerators, such as GPUs or TPUs, while massively distributing the inference environments. The MuZero code is generic (I have authored part of it), and if you are looking to use MuZero, this might be a good implementation for you, as the MuZero paper, as well as the pseudocode they released, contains various small, subtle errors that nevertheless make the whole thing essentially not work. This implementation, to the best of my knowledge, contains fewer bugs, and it works pretty much out of the box with gym environments: you plug in a gym environment, plus a little bit of extra information on how your tensors are shaped and so on, and that's all you have to do to run MuZero. So check out the paper, check out the code, and let us know if something's wrong.
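To give a flavor of that "gym environment plus shape info" pattern, here is a purely schematic sketch. MuZeroTrainer and its arguments are hypothetical placeholders, not the actual API of the released implementation; consult the repo for the real entry points:

```python
# Purely schematic: the "plug in a gym environment plus tensor-shape info"
# pattern described above. MuZeroTrainer and its keyword arguments are
# hypothetical placeholders, NOT the actual API of the released code.
import gym

env = gym.make("CartPole-v1")

config = {
    "observation_shape": env.observation_space.shape,  # e.g. (4,)
    "num_actions": env.action_space.n,                 # e.g. 2
    "num_simulations": 50,                             # MCTS rollouts per move
}

# Hypothetical wiring; the real repo exposes its own entry points.
# trainer = MuZeroTrainer(env_fn=lambda: gym.make("CartPole-v1"), **config)
# trainer.train(total_steps=100_000)
```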
And the last news: AI startups claim to detect depression from speech, but the jury's out on their accuracy. This is from VentureBeat. Time and time again we see articles claiming that AI can do something, and then it turns out the reality is a little bit more complicated. There are, for example, a lot of systems claiming to detect something to do with COVID, and it turned out none of them were useful. This case here is a little bit less bad, because with COVID there was a big academic push to just ride the hype and get papers published, whereas here we're already a little bit further in the direction of actual products being implemented. But still, the article details numerous problems that these startups face: some have collected their data from only certain parts of the world, to be exact just from one city; others focus only on native English speakers and confuse not being able to speak English with showing signs of depression; still others neglect entire accents, even for native speakers; and the list of problems goes on and on. Again, I don't think this is a problem with any kind of easy solution. I'm strongly of the opinion that we need to make progress here: there is a shortage of mental health professionals, and it's not inconceivable that machines can assist us and deliver better lives to people, even in the area of mental health. But exactly what shape that's going to take, and exactly how we're going to prevent some sort of dystopian future where some buggy algorithm has way too much power over your life, is, I guess, one of the big challenges of our generation. Again, a good place to start is to continuously monitor and evaluate the systems that exist, and to allow ourselves to take some risk as we push forward, as long as we keep it under control. I know, not a super strong opinion, but what can I do? I'm boring. Cool, this was it for ML News. Thank you so much for watching, listening, and subscribing. If you know someone who's not informed about the world of ML, please tell them about ML News. We're about to reach 100k subscribers; very exciting. I'll see you next time. Bye bye.
[ { "start": 0, "end": 6, "text": " Google decommissions DeepMinds health app, Juergen Schmidhuber leads an AI initiative in Saudi" }, { "start": 6, "end": 18.8, "text": " Arabia, and I have a new paper. Welcome to ML News. Hey, hey, you. Yes, you. Do you run experiments?" }, { "start": 19.92, "end": 27.44, "text": " Machine learning experiments? Yes. How do you track them? What? That's not a good way to track" }, { "start": 27.44, "end": 34.4, "text": " them. Oh, here's what you should do. You should use weights and biases. Coincidentally, this video" }, { "start": 34.4, "end": 41.04, "text": " is sponsored by them. What is it? It's a system to track your experiments, track your artifacts," }, { "start": 41.04, "end": 48.08, "text": " reproduce all the things you've ever done, see metrics, data sets, models from the inception of" }, { "start": 48.08, "end": 56, "text": " your idea to the final deployment and beyond. This is the ultimate tool, you can get started" }, { "start": 56, "end": 63.84, "text": " with just one line of code. Yes, one line of code and be amazed at what it gives you hyper parameter" }, { "start": 63.84, "end": 71.68, "text": " tuning, metrics tracking, resource utilization, model and data set versioning on cloud and on" }, { "start": 71.68, "end": 77.52, "text": " premise. Get this and much more when you sign up to weights and biases. Personal accounts are" }, { "start": 77.52, "end": 85.12, "text": " completely free. What are you waiting for? Sign up now. No, actually, watch the video first," }, { "start": 85.12, "end": 92.08, "text": " then sign up or sign up now and sign up later. Get your mom to sign up, get your pet to sign up." }, { "start": 92.08, "end": 99.28, "text": " There's absolutely no reason not to go to this URL and get your account now. Cheers." }, { "start": 103.44, "end": 110.48, "text": " Hello and welcome to ML news on this beautiful glorious Monday. Let's dive into the first story" }, { "start": 110.48, "end": 116.80000000000001, "text": " tech crunch writes Google confirms it's pulling the plug on streams, its UK clinician support app." }, { "start": 116.80000000000001, "end": 124.08, "text": " So this app has a bit of a history since 2015. DeepMind started it up originally trying to bring" }, { "start": 124.08, "end": 130.88, "text": " more AI into the health ecosystem. Now the streams health app isn't actually an AI focused app," }, { "start": 130.88, "end": 135.76, "text": " it's kind of an app to track health data and assist clinicians in making decisions. The goal" }, { "start": 135.76, "end": 141.35999999999999, "text": " was always to bring AI into the picture. But this apparently has never succeeded. The article" }, { "start": 141.35999999999999, "end": 148.79999999999998, "text": " details the history of the app as it went through DeepMind stages, then of course, the big scandal" }, { "start": 148.79999999999998, "end": 154.72, "text": " where it was discovered that DeepMind didn't really have the legal basis for dealing with the data" }, { "start": 154.72, "end": 159.76, "text": " that they were dealing with. That was a weird sentence. And finally, DeepMind handing over the" }, { "start": 159.76, "end": 165.67999999999998, "text": " app to Google health, even though they said they would never share anything about this with Google." 
}, { "start": 165.67999999999998, "end": 171.76, "text": " And now finally, Google deciding to turn off the app completely, whether or not this is a result" }, { "start": 171.76, "end": 176.64, "text": " of data privacy issues, or just being a result of the business case not being strong enough," }, { "start": 176.64, "end": 182.23999999999998, "text": " we don't exactly know if you're interested in this, this article on tech crunch dives fairly" }, { "start": 182.23999999999998, "end": 188.07999999999998, "text": " deeply into the issue. What is special is how often it is mentioned that the data is going to" }, { "start": 188.08, "end": 194, "text": " be deleted. So it starts off with at least two paragraphs saying the data is going to be deleted," }, { "start": 194, "end": 199.52, "text": " it mentions it throughout, and then it ends again with a paragraph on how the data is going to be" }, { "start": 199.52, "end": 205.60000000000002, "text": " deleted. So rest assured, the data is going to be deleted. I'm winking, you can't see it. I'm winking." }, { "start": 206.88000000000002, "end": 212.56, "text": " Now the article is also a little bit critical of Google starting up projects and then killing them" }, { "start": 212.56, "end": 220.96, "text": " off after a short while, such as Google plus or the many, many, many, many, many, many messaging apps" }, { "start": 220.96, "end": 226.4, "text": " that Google has released things like Google video and so on. But honestly, I think the strategy has" }, { "start": 226.4, "end": 231.28, "text": " worked out so far, we got a couple of very nice products out of Google that started exactly like" }, { "start": 231.28, "end": 236.72, "text": " this that we might have never gotten if every single new product is an eternal commitment to" }, { "start": 236.72, "end": 242.96, "text": " support it. That being said, bring back the free storage for Google Photos. This was actually useful." }, { "start": 242.96, "end": 249.36, "text": " So finally, Google is turning off this streams app, there's apparently still one group of customers" }, { "start": 249.36, "end": 254.56, "text": " that is using it ongoing, I guess still have to come to some sort of an agreement until the end" }, { "start": 254.56, "end": 259.12, "text": " of their contract. But going further, let's just wait for the next Google inventions, there should" }, { "start": 259.12, "end": 264.16, "text": " be like some sort of a betting market where you can bet whether or not new Google products will" }, { "start": 264.16, "end": 272.56, "text": " make it five years past their inception could be fun. IFL s writes AI makes strangely accurate" }, { "start": 272.56, "end": 278.96000000000004, "text": " predictions from blurry medical scans alarming researchers. So this is an article about this" }, { "start": 278.96000000000004, "end": 284.48, "text": " paper right here reading race AI recognizes patients racial identity and medical images," }, { "start": 284.48, "end": 291.6, "text": " that is a study into various data sets and algorithms and whether or not they can detect" }, { "start": 291.6, "end": 299.12, "text": " a patient's race just from radiological images such as these ones. Now there is a common pattern" }, { "start": 299.12, "end": 306.24, "text": " among articles like this one that usually some confounding variable wasn't taken into account" }, { "start": 306.24, "end": 312.24, "text": " like source of data set or things like this. 
However, this paper specifically pays a lot" }, { "start": 312.24, "end": 318.32000000000005, "text": " of attention to eliminate all such confounding variables and really tests multiple hypotheses" }, { "start": 318.32, "end": 325.59999999999997, "text": " on how the model makes its assessment. So there are apparently a few distinct markers of race even" }, { "start": 325.59999999999997, "end": 331.36, "text": " in these radiological images. But even if they control for those, the models are still able to" }, { "start": 331.36, "end": 338.08, "text": " make out patients self reported races. The really interesting thing is that even if the images are" }, { "start": 338.08, "end": 344.96, "text": " degraded, such as this one right here and really pixelated, the models are still able to make out" }, { "start": 344.96, "end": 350.64, "text": " the patient self reported race with a higher than random accuracy, but the pictures themselves would" }, { "start": 350.64, "end": 355.59999999999997, "text": " be completely undiagnosable for any human and certainly humans couldn't make out the race of" }, { "start": 355.59999999999997, "end": 362.15999999999997, "text": " the patients. So as I said, the paper is a fairly lengthy investigation into these models and data" }, { "start": 362.15999999999997, "end": 368.32, "text": " sets, including trying to tease out race from models that have been trained not on predicting" }, { "start": 368.32, "end": 374.08, "text": " race, which essentially means that in order to predict some health outcome, the models in some" }, { "start": 374.08, "end": 379.44, "text": " part make predictions that correlate with race. And it is a fairly lengthy article. But if you're" }, { "start": 379.44, "end": 384.64, "text": " interested in these things, definitely give it a read. It seems like to be a very thorough study" }, { "start": 384.64, "end": 390.4, "text": " of these things. But the article here frames it all in terms of how terrible this is how biased" }, { "start": 390.4, "end": 395.68, "text": " these algorithms are. And while there's certainly truth to that, and many of these algorithms are" }, { "start": 395.68, "end": 401.68, "text": " in fact bias when they shouldn't be and due to various reasons, there also is the apparently" }, { "start": 401.68, "end": 408.24, "text": " rather shocking conclusions that your health outcomes interact with your genetics, I know" }, { "start": 408.24, "end": 414.40000000000003, "text": " new concept. So again, while we can certainly all agree that results like this are worrisome," }, { "start": 414.40000000000003, "end": 421.12, "text": " and there are problems with bias in AI, it seems that people would like their ideologies to overrule" }, { "start": 421.12, "end": 426.32, "text": " reality. And I don't think that's a worthwhile goal. So that all being said, these problems are" }, { "start": 426.32, "end": 432.15999999999997, "text": " of course incredibly difficult, but we should look at them with the view of what's going to help the" }, { "start": 432.15999999999997, "end": 436.96, "text": " most people and what's going to deliver the best outcomes for all individuals. 
And there are" }, { "start": 436.96, "end": 442, "text": " probably no easy solutions for incredibly interconnected problems that are extremely" }, { "start": 442, "end": 449.2, "text": " multifactorial and include things like genetics, environment, society, data gathering, and the" }, { "start": 449.2, "end": 454.96, "text": " entire historical context of all of that. And that I guess is my rather boring take on that." }, { "start": 454.96, "end": 462.64, "text": " In related news, the New York Times writes Facebook apologizes after AI puts primates label on video" }, { "start": 462.64, "end": 468, "text": " of black men, Facebook called it an unacceptable error, the company has struggled with other issues" }, { "start": 468, "end": 474.32, "text": " related to race. Now the article is about this Daily Mail video about a couple of black men," }, { "start": 474.32, "end": 481.12, "text": " and the algorithm asks keep seeing videos about primates Yes or dismiss. So the classification" }, { "start": 481.12, "end": 487.28000000000003, "text": " algorithm made a mistake here. And this is not a new thing. As the article states in 2015, Google" }, { "start": 487.28000000000003, "end": 492.8, "text": " mistakenly labeled pictures of black people as gorillas. And the article also said more than two" }, { "start": 492.8, "end": 498.4, "text": " years later, wired found that Google solution was to censor the word gorilla from searches while" }, { "start": 498.4, "end": 504.96, "text": " also blocking chimp, chimpanzee and monkey. The article then goes into some more inter company" }, { "start": 504.96, "end": 510.48, "text": " things inside of Facebook trying to link this to the system or something like this, which I find" }, { "start": 510.48, "end": 516.72, "text": " quite shady, honestly, these systems have a number of issues, there are issues, of course, with data" }, { "start": 516.72, "end": 521.6800000000001, "text": " collection, there are issues with all kinds of other stuff. But ultimately, these systems are" }, { "start": 521.6800000000001, "end": 528.16, "text": " trained in a way that errors are errors. So if you fail to distinguish a yacht from a sailboat," }, { "start": 528.16, "end": 535.6800000000001, "text": " that is an error to the model in the same way as if you fail to distinguish a human from a primate," }, { "start": 535.68, "end": 542.9599999999999, "text": " the model has no inherent way of knowing that one is a socially acceptable error and one is a totally" }, { "start": 542.9599999999999, "end": 548.8, "text": " socially unacceptable error, there are ways to mitigate this, but they usually require efforts" }, { "start": 548.8, "end": 554.64, "text": " on the part of humans that go there and essentially correct for all the potential socially" }, { "start": 554.64, "end": 560.8, "text": " terrible errors that the model can do. And very often that burden is so large, it's combinatorically" }, { "start": 560.8, "end": 567.28, "text": " very, very hard to do this, all you can do is just block entire pieces of the search space in order" }, { "start": 567.28, "end": 572.24, "text": " to mitigate these mistakes. This is displayed as some kind of like a negative system, like," }, { "start": 572.24, "end": 577.8399999999999, "text": " well, the AI is still biased, but now we're just sort of censoring it. Yes, I mean, what can you" }, { "start": 577.8399999999999, "end": 583.04, "text": " do? It's very easy to complain about these types of things. 
Now, of course, many of you might have" }, { "start": 583.04, "end": 589.76, "text": " noticed that technically, the model isn't wrong as human are the most abundant and widespread species" }, { "start": 589.76, "end": 595.6, "text": " of primates. But you know, technicalities aside, I think we can all agree that this isn't an output" }, { "start": 595.6, "end": 600.56, "text": " that you would want from your system. So what's the solution? I don't know, probably the best" }, { "start": 600.56, "end": 606.88, "text": " solution would be an attack from multiple sides where the companies invest more work into mitigating" }, { "start": 606.88, "end": 612.56, "text": " these types of errors, which means essentially collecting more training data on these intersections" }, { "start": 612.56, "end": 617.92, "text": " of very socially critical issues such that the models get more confident about them. And on the" }, { "start": 617.92, "end": 623.76, "text": " other hand, it might also require a little bit of a rethinking in society where we see a mistake" }, { "start": 623.76, "end": 630.4799999999999, "text": " like this, not as some terrible thing happening, but more into the category of mislabeling a sailboat" }, { "start": 630.4799999999999, "end": 636.4799999999999, "text": " as a yacht and vice versa. It'd be nice if we get to a point where we think, ah, cool, the system" }, { "start": 636.4799999999999, "end": 640.7199999999999, "text": " made a mistake. Let's go on with my life. But of course, it's not always that easy, because we use" }, { "start": 640.7199999999999, "end": 645.1999999999999, "text": " these types of systems in situations where it actually matters what the system predicts. So" }, { "start": 645.2, "end": 650.5600000000001, "text": " ultimately, it comes down to close supervision of your products and continuously evaluating their" }, { "start": 650.5600000000001, "end": 655.76, "text": " deployments. Again, it's a hard problem. I'm confident we can make progress on it. Complaining" }, { "start": 655.76, "end": 660.88, "text": " about it is fine. Just complaining and acting like it's the most terrible thing and it means" }, { "start": 660.88, "end": 667.36, "text": " something beyond what it actually means is probably not helpful. And it was previously reported that" }, { "start": 667.36, "end": 673.6800000000001, "text": " this still is taking a break due to the high load and the very high quality standards they have" }, { "start": 673.68, "end": 679.52, "text": " leading to kind of volunteer burnout, they released what appears to be some of the last articles that" }, { "start": 679.52, "end": 684.3199999999999, "text": " they're going to release in a while and they are on graph neural networks. One is a gentle" }, { "start": 684.3199999999999, "end": 689.4399999999999, "text": " introduction to graph neural networks. The other one is understanding convolutions on graphs. So" }, { "start": 689.4399999999999, "end": 694.9599999999999, "text": " the article pretty much contain what their title says, if you're interested in graph neural network," }, { "start": 694.9599999999999, "end": 700.8, "text": " I can absolutely recommend you give these articles a read, they have very good illustrations of" }, { "start": 700.8, "end": 707.8399999999999, "text": " what's happening examples. And as you are used to from distill articles, their quality is extremely" }, { "start": 707.8399999999999, "end": 715.28, "text": " high can definitely recommend check it out. 
Schmidt Hoover announces that he'll be starting as" }, { "start": 715.28, "end": 722.0799999999999, "text": " a director of the cost AI initiative. cost is the King Abdullah University of Science and Technology" }, { "start": 722.0799999999999, "end": 730.16, "text": " in Saudi Arabia and is one of the most well funded universities on the planet. read who will remain" }, { "start": 730.16, "end": 735.4399999999999, "text": " in all his other positions and lead the AI initiative there apparently traveling back and" }, { "start": 735.4399999999999, "end": 741.04, "text": " forth. And on his blog, he writes, we hope the new AI initiative will contribute to a new golden" }, { "start": 741.04, "end": 747.36, "text": " age for science analogous to the Islamic golden age that started over a millennium ago. So quite" }, { "start": 747.36, "end": 755.12, "text": " likely we'll be hearing a lot more from KAUST in the near future. Not really ml related, but maybe" }, { "start": 755.12, "end": 761.52, "text": " a little bit if you care about codecs and models that produce code, GitHub has submitted a friend" }, { "start": 761.52, "end": 767.6, "text": " of the court brief, which is essentially an advisory letter to the courts on DMCA takedown" }, { "start": 767.6, "end": 774.64, "text": " notices of copyrighted material in the space of programming. Specifically, the brief concerns" }, { "start": 774.64, "end": 781.04, "text": " what they say is claims involving non literal copying of software. And they give an example" }, { "start": 781.04, "end": 786.8, "text": " case right here where the SAS Institute has brought infringement claims against world programming" }, { "start": 786.8, "end": 793.36, "text": " software. And specifically, they claim that it is not specific lines of code that the defendant has" }, { "start": 793.36, "end": 800, "text": " copied, but only that other aspects like the codes overall structure and organization were used. The" }, { "start": 800, "end": 805.52, "text": " blog post here also says, after examining the first question, the court found SAS Institute" }, { "start": 805.52, "end": 811.4399999999999, "text": " simply repeated and repeated that their system was creative, but did not point to any specific" }, { "start": 811.4399999999999, "end": 816.56, "text": " examples that would enable the court or the defendant to identify which parts were used" }, { "start": 816.56, "end": 821.68, "text": " in order to ultimately define those parts that were actually protected by copyright. The court" }, { "start": 821.68, "end": 826.88, "text": " ruled for the defendant leading to this appeal. Imagine something like you didn't exactly copy" }, { "start": 826.88, "end": 835.04, "text": " my picture, but you use the same organization of putting paint on the canvas. Now get a life SAS." }, { "start": 835.04, "end": 840, "text": " Now of course, I don't know all the behinds like copyright is such a complicated issue. And there" }, { "start": 840, "end": 846.56, "text": " are legitimate cases where people steal from each other. And I can even see that there are some cases" }, { "start": 846.56, "end": 852.88, "text": " where you can say, well, the structure of my code is so unique and creative, and they copied it or" }, { "start": 852.88, "end": 858.56, "text": " something like this. Like, can't you just spend the money on something useful. 
So GitHub's position" }, { "start": 858.56, "end": 867.1999999999999, "text": " on this is that with a DMCA takedown notice, the noticer should specify in as much detail as" }, { "start": 867.1999999999999, "end": 873.52, "text": " possible what are the parts of the defendant's work that are infringing on the copyright such" }, { "start": 873.52, "end": 879.3599999999999, "text": " that there is even a possibility of responding. Apparently, it's totally possible to issue a DMCA" }, { "start": 879.3599999999999, "end": 885.04, "text": " takedown notice simply by saying, well, there's something in there. And I agree, that's not" }, { "start": 885.04, "end": 890.24, "text": " helpful. But ultimately helpfulness and what ultimately results from the legal system and" }, { "start": 890.24, "end": 897.28, "text": " the courts don't always match. So we'll keep an eye open on how this develops. So this week," }, { "start": 897.28, "end": 903.12, "text": " there wasn't really many questions in the news to be answered. But there were some really nice" }, { "start": 903.12, "end": 908.88, "text": " questions on Reddit, some really good threads, I thought at least going with it. So there was a" }, { "start": 908.88, "end": 914.7199999999999, "text": " thread on how machine learning will revolutionize physics simulations in games. This is almost like" }, { "start": 914.72, "end": 919.76, "text": " a blog article in a Reddit post seems a little bit wasted, honestly, but it's pretty cool. It" }, { "start": 919.76, "end": 925.44, "text": " details what kind of models exist for doing physics simulations and what their advantages" }, { "start": 925.44, "end": 931.28, "text": " and disadvantages are. For example, here's one that's specifically good at modeling large" }, { "start": 931.28, "end": 936.88, "text": " deformations and tears and so on. This is a piece of bread tearing apart. And it also details how" }, { "start": 936.88, "end": 942.5600000000001, "text": " machine learning is being used in order to speed up the simulations. Essentially, what you want to" }, { "start": 942.56, "end": 946.9599999999999, "text": " do is you want to run the simulations, which are very intensive until you have a data set. And then" }, { "start": 946.9599999999999, "end": 951.8399999999999, "text": " you want to train the model to sort of predict the end of the simulation from the beginning," }, { "start": 951.8399999999999, "end": 956.7199999999999, "text": " which seems like it should be impossible. But hey, it's deep learning. So so pretty cool. If you're" }, { "start": 956.7199999999999, "end": 963.68, "text": " interested in the intersection of deep learning and physics, give the Reddit post a read and of" }, { "start": 963.68, "end": 970.8, "text": " course, an upvote. So good job, Syed HM for contributing to the ML subreddit. aristocratic" }, { "start": 970.8, "end": 977.04, "text": " octopus asks, What are the most important problems in ML today? And I specifically want to highlight" }, { "start": 977.04, "end": 984.4799999999999, "text": " this thread because the answers are both diverse and really good. They range from diverse environment" }, { "start": 984.4799999999999, "end": 991.52, "text": " learning, catastrophic forgetting, modular learning, unstructured data, causality, few shot learning," }, { "start": 991.52, "end": 998.4, "text": " generalization, and so on. Now, these are things that are researched today. 
Yet I think if you are" }, { "start": 998.4, "end": 1002.88, "text": " coming into this field and looking for something to do, you don't really have an idea of what to" }, { "start": 1002.88, "end": 1008.48, "text": " work on. This thread might be a little bit of inspiration for you. Kam war asks, do you" }, { "start": 1008.48, "end": 1014.16, "text": " reproduce a method for state-of-the-art comparison? Or do you just take the result from the paper of" }, { "start": 1014.16, "end": 1019.04, "text": " the method for state-of-the-art comparison? It's an interesting question. I've seen people doing both." }, { "start": 1019.04, "end": 1024.08, "text": " But the user says, for example, they try to reproduce a method, yet they couldn't get the" }, { "start": 1024.08, "end": 1029.36, "text": " exact same score, saying they only got a 30% accuracy on a task, but the paper claimed they" }, { "start": 1029.36, "end": 1036.64, "text": " can obtain a 70% accuracy. They say they just ran the author's code with maybe a little modification." }, { "start": 1036.64, "end": 1042.64, "text": " Some authors said that they need to tune the hyperparameters. And they also say they spend almost 90%" }, { "start": 1042.64, "end": 1047.92, "text": " of their time just trying to reproduce previous methods. Welcome to ML research, that is. Yeah, I don't" }, { "start": 1047.92, "end": 1052.72, "text": " know what the answer is here. There are also various opinions in the comments, you can almost" }, { "start": 1052.72, "end": 1059.36, "text": " guarantee that with a lot of these research papers nowadays, you cannot really count on their numbers," }, { "start": 1059.36, "end": 1064.4, "text": " they might leave out of the paper a lot of tricks that they have done to reach that number," }, { "start": 1064.4, "end": 1070.32, "text": " or the numbers are just fake altogether. Of course, it could also be that the code they have on GitHub" }, { "start": 1070.32, "end": 1075.84, "text": " is kind of old code, which happens often if you resubmit somewhere, you redo some experiments," }, { "start": 1075.84, "end": 1081.44, "text": " something changes in the meantime, so there can be legit and illegitimate reasons why you don't get" }, { "start": 1081.44, "end": 1087.6, "text": " the numbers you do. What you can do is you can report both the number they have in the paper," }, { "start": 1087.6, "end": 1092.72, "text": " you can also report the number that you achieved with their method and simply consider this as two" }, { "start": 1092.72, "end": 1098.8, "text": " different baselines and explain yourself in the paper. It is a problem that you spend like" }, { "start": 1098.8, "end": 1104.64, "text": " ginormous amounts of time reproducing baselines. And as my PhD progressed, I more and more moved" }, { "start": 1104.64, "end": 1110.32, "text": " away from trying to get the exact numbers that baselines have gotten and simply give it my best" }, { "start": 1110.32, "end": 1115.28, "text": " shot at reproducing them and then reporting that. I think it's up to you; as long as you detail in" }, { "start": 1115.28, "end": 1120.48, "text": " the paper what you do, at least you can't be faulted. And lastly, Oli Mac P asks what kind" }, { "start": 1120.48, "end": 1126.56, "text": " of hyperparameter optimization do you use? 
And again, if you are looking for good advice," }, { "start": 1126.56, "end": 1132.16, "text": " this thread might be something nice for you. There are suggestions such as Ray Tune, Optuna," }, { "start": 1132.16, "end": 1137.92, "text": " Hyperopt, and so on. If you want a cheap method, I would start with all the hyperparameters on the" }, { "start": 1137.92, "end": 1142.96, "text": " default setting, then simply take the one you think is most important and vary it a little bit" }, { "start": 1142.96, "end": 1147.76, "text": " while keeping the others constant. Then once you found a good setting for that one, keep that one" }, { "start": 1147.76, "end": 1153.28, "text": " constant and vary one of the other ones while also keeping the rest constant. If you found a" }, { "start": 1153.28, "end": 1158.48, "text": " good setting for that one, keep going one by one through the parameters until you've tuned all of" }, { "start": 1158.48, "end": 1163.68, "text": " them once and start from the beginning. And at some point, you'll converge; you might get into a" }, { "start": 1163.68, "end": 1169.68, "text": " loop, but it's kind of unlikely. That usually got me to relatively good places in hyperparameter" }, { "start": 1169.68, "end": 1175.12, "text": " search. And it takes way less compute than running some kind of big grid search. Usually these" }, { "start": 1175.12, "end": 1182.56, "text": " hyperparameters aren't that dependent on each other. So tuning them individually is okay. Speaking of" }, { "start": 1182.56, "end": 1189.92, "text": " tuning and reproducing and performances, there is a new paper from IDSIA, USI and SUPSI called The" }, { "start": 1189.92, "end": 1195.2, "text": " Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers," }, { "start": 1195.2, "end": 1201.76, "text": " which gives a number of hints to what you might want to tune when you train transformers. So the" }, { "start": 1201.76, "end": 1207.6, "text": " paper is an in-depth investigation into what it takes to train transformers and what matters. And" }, { "start": 1207.6, "end": 1213.68, "text": " it gives some advice, for example, relative positional embeddings seem to outperform absolute" }, { "start": 1213.68, "end": 1219.2, "text": " positional embeddings for certain tasks. Also, you should be careful on how you do early stopping" }, { "start": 1219.2, "end": 1224.88, "text": " and how you scale your embeddings, among other things. And lastly, the paper highlights the" }, { "start": 1224.88, "end": 1231.28, "text": " trouble with only having IID validation splits and not some sort of test that measures generalization" }, { "start": 1231.28, "end": 1235.92, "text": " capabilities beyond the exact distribution that the model was trained on. If this is of interest" }, { "start": 1235.92, "end": 1241.68, "text": " to you, give it a read. Also, a collaboration between Apple and the Vector Institute releases" }, { "start": 1241.68, "end": 1248.48, "text": " Unconstrained Scene Generation with Locally Conditioned Radiance Fields at ICCV 2021, releasing" }, { "start": 1248.48, "end": 1255.52, "text": " code on GitHub as well. And this is pretty cool. So this is scene generation, but with a freely" }, { "start": 1255.52, "end": 1262.08, "text": " moving camera. 
So apparently previous works have sort of focused on small camera movements, which" }, { "start": 1262.08, "end": 1267.2, "text": " is already impressive. But with this technique, it allows you to generate scenes from a generator." }, { "start": 1267.2, "end": 1273.52, "text": " So this is essentially a GAN that first creates a latent floor map. And then based on that floor" }, { "start": 1273.52, "end": 1280.32, "text": " map generates the 3D environment in which you can then move around the camera freely. So essentially," }, { "start": 1280.32, "end": 1286.48, "text": " you can render that scene from wherever you want. It still looks a little bit wonky. But I think the" }, { "start": 1286.48, "end": 1293.28, "text": " possibilities of these techniques to make it into entertainment, into training, into simulation, into" }, { "start": 1293.28, "end": 1299.92, "text": " gaming are pretty cool. And probably not that far away. Again, the code is on GitHub. Check it out." }, { "start": 1299.92, "end": 1307.92, "text": " Facebook AI Research open-sources Common Objects in 3D, a large-scale data set for 3D reconstruction." }, { "start": 1307.92, "end": 1313.76, "text": " So this is a data set for 3D-reconstructing what they call common objects. Apparently," }, { "start": 1313.76, "end": 1319.76, "text": " this is a crowdsourced data set of objects that people just apparently happen to come across," }, { "start": 1319.76, "end": 1324.96, "text": " which is pretty cool because these are things that actually appear in real life. It seems like an" }, { "start": 1324.96, "end": 1330.72, "text": " extremely challenging data set. But often the most challenging data sets spur new types of" }, { "start": 1330.72, "end": 1338.48, "text": " discoveries. If you work in 3D reconstruction, this might be your next challenge. Salesforce" }, { "start": 1338.48, "end": 1344.48, "text": " releases WarpDrive, extremely fast reinforcement learning on an NVIDIA GPU. We've seen a number" }, { "start": 1344.48, "end": 1351.84, "text": " of libraries recently, such as Brax and Isaac Gym, that make reinforcement learning a lot faster by" }, { "start": 1351.84, "end": 1357.6, "text": " making use of the accelerators. WarpDrive is especially geared to do multi-agent reinforcement" }, { "start": 1357.6, "end": 1362.24, "text": " learning. So multi-agent reinforcement learning is where you have many agents in the same world," }, { "start": 1362.24, "end": 1367.92, "text": " and they need to interact with each other, somehow cooperating or competing. And the difficult part" }, { "start": 1367.92, "end": 1374.4, "text": " is of course that you need to evaluate strategies for all of them, they depend on each other and" }, { "start": 1374.4, "end": 1380.8, "text": " things like backpropagation become extremely hard, especially if you're limited in compute power." }, { "start": 1380.8, "end": 1386.96, "text": " This library makes optimal use of the power that you have. And I can definitely recommend" }, { "start": 1386.96, "end": 1393.44, "text": " that you check it out if you are not a giant corporation. Speaking of giant corporations" }, { "start": 1393.44, "end": 1398.72, "text": " and reinforcement learning, there's a new paper called Boosting Search Engines with Interactive" }, { "start": 1398.72, "end": 1406.96, "text": " Agents. And look, it's me. 
So I've worked on this with this team as part of my internships" }, { "start": 1406.96, "end": 1413.6, "text": " and consultancy gigs at Google, but I am in no way the main author here. The paper is about" }, { "start": 1413.6, "end": 1420.64, "text": " developing agents that search in more than one step. So if you go to a search engine, usually" }, { "start": 1420.64, "end": 1424.64, "text": " you enter some sort of query. And if you don't immediately find what you're looking for, you" }, { "start": 1424.64, "end": 1429.84, "text": " may look at the top results and then kind of refine your query to find better results. And" }, { "start": 1429.84, "end": 1436.4, "text": " that's exactly what we try to do with agents here. So here you might start off with who won the US" }, { "start": 1436.4, "end": 1442, "text": " Open, you'll see a bunch of sports appearing and you might rephrase, saying that you're specifically" }, { "start": 1442, "end": 1447.52, "text": " interested in tennis, and so on until you achieve the answer that you want. What's specifically cool" }, { "start": 1447.52, "end": 1452.64, "text": " about this is that there's code to go along with it. So next to the specific code that powers the" }, { "start": 1452.64, "end": 1459.84, "text": " search agents, there is an implementation of MuZero based on a library called SEED RL. Now this" }, { "start": 1459.84, "end": 1466.8, "text": " is also geared at making optimal use of your accelerators, such as a GPU or TPU, while" }, { "start": 1466.8, "end": 1473.76, "text": " massively distributing the inference environments. So the MuZero implementation is generic, I have authored" }, { "start": 1473.76, "end": 1479.36, "text": " part of it. And if you are looking to use MuZero, this might be a good implementation for you," }, { "start": 1479.36, "end": 1486.16, "text": " as the MuZero paper as well as the pseudocode they released contain various small subtle errors" }, { "start": 1486.16, "end": 1491.76, "text": " that nevertheless make the whole thing essentially not work. This implementation right here, to the" }, { "start": 1491.76, "end": 1498.4, "text": " best of my knowledge, contains fewer bugs, and it works pretty much with gym environments. So you" }, { "start": 1498.4, "end": 1503.36, "text": " plug in a gym environment with a little bit of extra information on how your tensors are shaped" }, { "start": 1503.36, "end": 1508.32, "text": " and so on. And that's all you have to do to train MuZero. So check out paper, check out code," }, { "start": 1508.32, "end": 1515.44, "text": " and let us know if something's wrong. And last news, AI startups claim to detect depression from" }, { "start": 1515.44, "end": 1521.84, "text": " speech, but the jury's out on their accuracy. This is from VentureBeat. Now time and time again," }, { "start": 1521.84, "end": 1528.16, "text": " we see these articles about claims that AI can do something, but it turns out the reality is a little" }, { "start": 1528.16, "end": 1534.08, "text": " bit more complicated. So there are a lot of examples of systems claiming to detect something" }, { "start": 1534.08, "end": 1539.36, "text": " to do with COVID. And then it turns out none of them is useful. 
This here is a little bit less bad" }, { "start": 1539.36, "end": 1545.92, "text": " because with COVID there was a big academic push to just make use of the hype to get papers published." }, { "start": 1545.92, "end": 1551.36, "text": " Here we're already a little bit into the direction of actual products being implemented, but still" }, { "start": 1551.36, "end": 1557.04, "text": " the article details numerous problems that startups face: some have only collected their data from" }, { "start": 1557.04, "end": 1563.04, "text": " certain parts of the world, to be exact just from one city, others focus on only native English" }, { "start": 1563.04, "end": 1568.64, "text": " speakers and confuse not being able to speak English with showing signs of depression. Still" }, { "start": 1568.64, "end": 1574.4, "text": " others neglect entire accents even for native speakers, and the list of problems goes on and" }, { "start": 1574.4, "end": 1580.16, "text": " on and on. Again, I don't think this is a problem where there is any kind of easy solution. I'm" }, { "start": 1580.16, "end": 1585.68, "text": " strongly of the opinion that we need to make progress in this; there is a shortage of mental" }, { "start": 1585.68, "end": 1591.76, "text": " health professionals, and it's not inconceivable that machines can assist us and can deliver" }, { "start": 1591.76, "end": 1598.08, "text": " better lives to people even in the mental health area, but exactly what shape that's going to take" }, { "start": 1598.08, "end": 1603.76, "text": " and exactly how we're going to prevent some sort of dystopian future where some sort of buggy" }, { "start": 1603.76, "end": 1609.92, "text": " algorithm has way too much power over your life is, I guess, one of the big challenges of our" }, { "start": 1609.92, "end": 1615.92, "text": " generation. Again, a good place to start is to continuously monitor and evaluate the systems" }, { "start": 1615.92, "end": 1622.56, "text": " there are and to allow ourselves to take some risk as we push forward, as long as we have it under" }, { "start": 1622.56, "end": 1628.48, "text": " control. Again, I know, not a super strong opinion, but what can I do? I'm boring. Cool. This was it" }, { "start": 1628.48, "end": 1636.32, "text": " for ML News. Thank you so much for watching, listening and subscribing. If you know someone" }, { "start": 1636.32, "end": 1642.48, "text": " who's not informed about the world of ML, please tell them about ML News. We're about to reach 100k" }, { "start": 1642.48, "end": 1653.36, "text": " subscribers. Very exciting. I'll see you next time. Bye bye." } ]
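To make the one-hyperparameter-at-a-time tuning strategy from the ML News segment above concrete, here is a minimal sketch, assuming a generic training routine. The train_and_evaluate function is a toy stand-in (a synthetic score that peaks at lr=3e-4, batch_size=32, weight_decay=1e-4); the parameter names and candidate values are invented for illustration and would be replaced by a real training run.

```python
# Minimal sketch of coordinate-wise (one-at-a-time) hyperparameter tuning.

def train_and_evaluate(params):
    # Toy surrogate for a real training run; returns a score to maximize.
    return (-(params["lr"] - 3e-4) ** 2 * 1e4
            - (params["batch_size"] - 32) ** 2 * 1e-3
            - (params["weight_decay"] - 1e-4) ** 2 * 1e2)

def coordinate_tune(defaults, candidates, passes=2):
    """Vary one hyperparameter at a time, keeping all others fixed."""
    best = dict(defaults)
    best_score = train_and_evaluate(best)
    for _ in range(passes):  # sweep all parameters, then start over
        for name, values in candidates.items():
            for value in values:
                trial = {**best, name: value}
                score = train_and_evaluate(trial)
                if score > best_score:
                    best_score, best = score, trial
    return best, best_score

defaults = {"lr": 1e-3, "batch_size": 32, "weight_decay": 0.0}
candidates = {
    "lr": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [16, 32, 64],
    "weight_decay": [0.0, 1e-4, 1e-2],
}
print(coordinate_tune(defaults, candidates))
```

As the segment notes, this only works well when the hyperparameters are not strongly coupled; when they are, a grid search or a tool like Ray Tune is the safer choice.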
MgJ3JsE3Tqo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Author Interview - VOS: Learning What You Don't Know by Virtual Outlier Synthesis
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "paper explained", "virtual outliers", "how to detect outliers", "deep learning outliers", "deep learning outlier detection", "vos", "deep learning energy", "latent space outliers", "density estimation", "classification boundaries", "generative models" ]
#deeplearning #objectdetection #outliers An interview with the authors of "Virtual Outlier Synthesis". Watch the paper review video here: https://youtu.be/i-J4T3uLC9M Outliers are data points that are highly unlikely to be seen in the training distribution, and therefore deep neural networks have troubles when dealing with them. Many approaches to detecting outliers at inference time have been proposed, but most of them show limited success. This paper presents Virtual Outlier Synthesis, which is a method that pairs synthetic outliers, forged in the latent space, with an energy-based regularization of the network at training time. The result is a deep network that can reliably detect outlier datapoints during inference with minimal overhead. OUTLINE: 0:00 - Intro 2:20 - What was the motivation behind this paper? 5:30 - Why object detection? 11:05 - What's the connection to energy-based models? 12:15 - Is a Gaussian mixture model appropriate for high-dimensional data? 16:15 - What are the most important components of the method? 18:30 - What are the downstream effects of the regularizer? 22:00 - Are there severe trade-offs to outlier detection? 23:55 - Main experimental takeaways? 26:10 - Why do outlier detection in the last layer? 30:20 - What does it take to finish a research projects successfully? Paper: https://arxiv.org/abs/2202.01197 Code: https://github.com/deeplearning-wisc/vos Abstract: Out-of-distribution (OOD) detection has received much attention lately due to its importance in the safe deployment of neural networks. One of the key challenges is that models lack supervision signals from unknown data, and as a result, can produce overconfident predictions on OOD data. Previous approaches rely on real outlier datasets for model regularization, which can be costly and sometimes infeasible to obtain in practice. In this paper, we present VOS, a novel framework for OOD detection by adaptively synthesizing virtual outliers that can meaningfully regularize the model's decision boundary during training. Specifically, VOS samples virtual outliers from the low-likelihood region of the class-conditional distribution estimated in the feature space. Alongside, we introduce a novel unknown-aware training objective, which contrastively shapes the uncertainty space between the ID data and synthesized outlier data. VOS achieves state-of-the-art performance on both object detection and image classification models, reducing the FPR95 by up to 7.87% compared to the previous best method. Code is available at this https URL. 
Authors: Xuefeng Du, Zhaoning Wang, Mu Cai, Yixuan Li Links: Merch: http://store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
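Since the abstract above leans on an energy-based uncertainty score, here is a minimal sketch of the standard energy score from the energy-based OOD detection line of work that VOS builds on: the negative log-sum-exp of the classifier logits, where high energy flags an input as OOD. The linear head, feature size, and threshold are illustrative stand-ins, not the authors' released code (see the repository linked above for that).

```python
import torch

def energy_score(logits: torch.Tensor) -> torch.Tensor:
    # E(x) = -logsumexp over class logits; lower energy <-> more in-distribution
    return -torch.logsumexp(logits, dim=-1)

# Toy usage with a hypothetical classification head on penultimate features.
head = torch.nn.Linear(512, 10)   # stand-in feature -> logits layer
features = torch.randn(4, 512)    # four penultimate-layer feature vectors
scores = energy_score(head(features))
threshold = 0.0                   # in practice chosen on held-out ID data
is_ood = scores > threshold       # high-energy inputs are flagged as OOD
print(scores, is_ood)
```

In practice the threshold is picked on held-out in-distribution data, for example so that 95% of it is retained, which is what the FPR95 metric in the abstract refers to.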
Hello there, this is an interview with the authors of the paper, Learning What You Don't Know by Virtual Outlier Synthesis. This paper presents a method to create what it calls virtual outliers, which are synthetic out of distribution data points in the latent space of the model. And then it trains that model to successfully recognize these points as out of distribution. The paper performs very well on a wide variety of benchmarks. And I have actually made a comprehensive paper review in the last video about this paper. If you haven't checked that out, please do because I'll go over the paper, I'll explain everything that's in it. And the authors that I'm interviewing today have seen that review. So we all start from a common level, and they're directly able to respond to my criticisms, which is really, really cool. So in this interview, we go over a lot of topics, but mainly I get my questions answered, and we get a bit of a look at the behind the scenes of the research, how the research came about, what the authors were interested in, how they solved problems that came up in between, and much more. I hope you like these paper reviews plus interview things. Let me know how I can improve these videos for you by leaving a comment. Like if you do like the video, subscribe or tell someone to subscribe, and I'll see you around. Bye. Hi, everyone. Today, I'm here with Sharon Li and Xuefeng Du, who are authors on the Virtual Outlier Synthesis paper, and are joining me today discussing the paper as well as my attempt at an explanation of it. Sharon, Xuefeng, welcome to the channel. Thank you for having us. Thank you. It's very cool to have you here. So you have made this paper, it has gathered, I think, a fair bit of attention in the community because outlier detection obviously is a big challenge, especially for security critical applications. And not only do you do outlier detection in classification where we usually see it, but in like sort of the more challenging task of object detection. So my first question would be, how did you even come up with this? Because it is not an obvious idea, let's say, to even tackle this problem. Like what made you tackle the problem in the first place? Yeah, thank you for the question. I'd be happy to share, I guess, from a little bit behind the scene on the research story, how it got started. And by the way, we're really encouraged to see the interest from the community about our work. And so personally, I really am driven to solve problems that are real, meaning that has some connection to the real world. And just like you said, I think out of distribution detection is one of those problems that really matter a lot in deploying machine learning models in the real world. And so sometimes when we're getting closer to this more realistic scenarios, that also means problems are getting harder and more complex. And this actually takes a trajectory to get there. It's actually reflected, I think, in how the field of OOD detection has evolved and unfolded over the years. And so if you look at some of the early research we've done, including some other researchers have done in the space, a very common way to evaluate how good the algorithms are is based on the benchmark, which now seems quite artificial, like if you train a model on CIFAR-10, and then you evaluate against data sets such as Street View House Numbers, or SVHN. And so the seemingly simple task actually took a while for the research community to make progress on. 
I think over the years, we've definitely done a much better job developing algorithms to reduce the false positive rate. And so that's why we think we're at a better timing to start tackling some of the harder questions on the object detection side. And why object detection is very interesting and important, because that directly has a better connection. For example, if you think about self-driving cars, none of those images are as simple as CIFAR-10, which has a single object well centered around in the scene. In the real world, we are going to encounter inputs that have multiple objects in the scene. And some of those are in distribution, which means they have been exposed to the model during the training time, and some of those are not quite. And so I was really glad when Xuefeng went to join the lab as well to start tackling some of the questions. So that's when we started the project earlier, actually last year already, last spring semester, that's when we started. So you were already in the space of outlier detection, let's say in the broad space of solving these types of problems. And then what made you decide object detection? That's it. Did you run across a problem? Or is this just a natural continuation of the classification data sets? That's another great question. So why object detection? So one of the, like you said, I think one of the typical scenarios when we think about where outlier detection or out of distribution detection algorithms are being used in the real world is some of the high stakes scenarios like safety critical ones, for example, in self-driving. And that is kind of built on these object detection models where not only we have to perform classification, but at the same time being able to localize where the objects are. So I think in terms of motivation, that just seems like a very natural application focus to start with. And of course, we have been, like I said, we have been in the space for working on the problem I think since a couple years ago. And most of the work we've done in this space are on image classification. And so in terms of solution, I also wanted to share a little bit how we arrived at this virtual outlier synthesis. So I think the first motivation is pretty straightforward. We wanted to kind of go beyond image level OOD detection to have this finer grained uncertainty estimates that tells us at the object level whether things are in distribution or OOD. I think figure one in the paper is kind of a perfect illustration for why we need object level uncertainty, right? So as you explained quite eloquently in your video that, you know, this car is something the model has observed, which is in distribution object, right? Whereas this moose here is something that was not exposed to the model during training. And so this picture kind of highlights the complexity that an image can contain at the same time, both in distribution and OOD object. And therefore, we can't just derive an image level, you know, uncertainty measurement. We have to, you know, go finer grained at the object level. And so that was the first, you know, first, I would say the higher level motivation on the object detection side. And then on the solution side, I want to share a little bit on how we arrived at the virtual outlier synthesis. So the idea, the algorithmic idea of this paper is largely inspired by one of our previous papers on energy-based OOD detection, which was published at NeurIPS in 2020. And so in that paper, we focused on image classification setting. 
But from a learning algorithm perspective, we proposed this called energy regularized learning, which in a nutshell is trying to, oh, I see your cat there, just walking by. So in a nutshell, that learning framework tries to kind of tackle the problem of classification by not only minimizing the risks on the in-distribution data set, but at the same time, we're introducing a regularizer. And this regularizer has very similar spirit as what we're using here in this paper. And so this regularizer is trying to kind of minimize the risk, or trying to push the energy surface to be as distinguishable between known distribution versus unknown distribution. And so for the image classification setting, we used this technique or data set of outlier exposure, which relies on an external different data set. That's not overlapping with the in-distribution data set. So that's actually one of the requirements or limitations, if you will, in that learning framework. And that does not directly translate into the object detection setting anymore, because as you can imagine, in order to bring in an outlier data set for object detection, it's going to be tricky, because you have to annotate through tons of images to make sure that at the object level, things do not overlap with our training data. And so this data collection itself is a prohibitive process. And it can be very time-consuming and laborious and so on. And so that also kind of motivated us to think, well, if there is no external data we can rely on, is there any way we can devise some of the outlier data from the in-distribution data itself? So that's where this whole idea started really is to think further how we improve on top of the original learning framework that we had. And then that's how you gathered the ideas of synthesizing points that are not where the data is. Is there a connection to, I'm not sure how aware you are of this, Yann LeCun has been pushing this energy-based learning a lot, sort of pushing energy up where data is, pushing energy down anywhere else. Do you see some sort of a connection to that? Absolutely. In fact, the work that I just mentioned on energy-based out-of-distribution detection that was published at NeurIPS 2020 was precisely inspired by this whole energy-based framework from Yann LeCun. By the way, the plural of moose is moose. I didn't know in my video. That's good to know. I figured it out. Not meese. Not meese. Yeah. So, I mean, it makes sense. And you've seen my explanation, right? And I think one of the criticisms a bit that I had was everything's pretty in this sort of 2D landscape where you can show here's the data and there's outside the data. But it gets very complicated once you go to higher dimensions. For example, you had the picture here when you mentioned we assume that the high-dimensional data are Gaussians. Obviously, your method works, right? I think your evaluation is very thorough. You measure on a lot of datasets against a lot of baselines and so on. So obviously, something works here. However, do you have some maybe some response to me, to someone who says, this does not convince me that a Gaussian mixture model is appropriate for this really high-dimensional data? Yeah, I actually like that question a lot. I wanted to maybe take a step back and first just to highlight one of the key, I guess the key insight and novelty, which I like about this paper aside from the distributional assumption that we made here, is the fact that the virtual outlier synthesis is done in a feature space, right? 
As opposed to the original high-dimensional pixel space, this is already a much, much lower dimensionality. So what you see here, this synthesis is completely done in this latent representation, or sometimes we extract this from the penultimate layer of the neural network. So some earlier works explored, so we're not the first to kind of try to synthesize outliers. But what we've done differently is to realize in order to regularize the neural network's decision boundary, we don't have to go all the way to the original pixel space, where training a GAN model can be quite tricky and the convergence is going to be a challenging problem on its own. So that's one kind of step, which I think an important step that we've taken, is to look into a lower dimensional latent space, which in some sense makes this problem more tractable compared to the original data space. And now coming to the second point, I think when it comes to modeling the density of the representation space, it's actually also a non-trivial problem, right? Density estimation on its own, I think, is a notoriously hard problem in machine learning. And so when we initially approached this problem, we kind of make this, I would say, you know, Gaussian mixture distribution is the most straightforward assumption kind of to make. And in this first algorithmic framework, I would say, you know, we kind of just wanted to show even under a somewhat simplified assumption of the representation space being Gaussian, you can still do this virtual outlier synthesis tractably and train things end to end. And from an empirical perspective, as you said, it actually works surprisingly well. But that doesn't mean this has to be the only solution to it. I think there are great opportunities that VOS really opens up, such as how do we perform this synthesis in the feature space more creatively, right? When it comes to the method itself, you have this overview diagram right here. And I've attempted to explain this a little bit. Did you find my explanation satisfactory? Is there something missing? Is there emphasis in the wrong place? Or what would you add to so people really understand what's going on? I think you did a phenomenal job explaining this whole pipeline, perhaps in a clearer way than if we were to present it ourselves. One thing I wanted to maybe call out is this notion of, you know, this uncertainty loss, why we formulate this problem that way. So at a higher level, you can think of our learning framework as trying to do something more than the typical supervised learning, say training a model based on cross entropy loss. There's a bit of element in the synthesis part, which is closer to this generative modeling and density estimation, which we've also talked about. And so the whole framework combines sort of both bits of supervised learning and also there is some density estimation involved as well. And I think one interesting bit in the learning methodology is how we leverage energy as an uncertainty measurement to separate apart the known objects versus the unknown ones. And so it's somewhat a problem that's not quite as complex as trying to estimate exactly the pointwise density of p of x. But rather we're kind of piggybacking on a simpler problem of: we just want this energy to be estimated as a level set that is sufficient enough to separate these two parts of data, rather than getting every single point estimated correctly, if that makes sense. The uncertainty loss you describe somewhere here. 
And yeah, so I think I had this other comment where I said directly this loss sort of only affects sort of the classification layer. However, when you think about it, what you could do is you could simply take your Gaussian mixture model, right? And you could simply have your data point there. And you could say, well, if it's unlikely, it's out of distribution, right? I could simply map my inference data point and then evaluate it according to the Gaussian mixture model that I have at training time. And I say, well, it's low likelihood, it's out of distribution, gone, right? I wouldn't need all of this thing, which tells me that this loss does more than just, you know, modify the last layer a bit. So is it fair to say, or is my assumption correct, that there is like this downstream effect on the entire model? Intuitively, adding a loss like this, what does it do to the whole feature extraction pipeline that leads to the latent space? Yeah, that's a great question. So perhaps to answer a bit more to that, do you mind scrolling up a little bit? I think we have, perfect, yes, that posterior probability right there. So keep in mind this whole training is done in an end-to-end fashion, right? And then whenever we have an input object that goes into this network, we are optimizing for this loss. And this loss will be backpropagated all the way, right, through this entire convolutional backbone in this object detector. And so this objective, L uncertainty, is trying to kind of separate apart in terms of this energy. We'll get to this interpretation of the energy later on. But at the very high level, it's trying to just push energy to be two sides. One is above zero, one is below zero, right? And if we look at this connection with respect to this posterior probability here, so we can interpret energy as this density function for that data point p of x, perhaps plugged in with some unknown factor that we don't know, right? And so this energy does not precisely capture this density just yet. But during this optimization process, we hope that through this propagation and minimizing this objective, this whole training would converge to a point where the density could be more separable between the ID object and the OOD object. So that's the inherent connection between the uncertainty measurement and the density. So, to maybe reformulate a bit: you want to coerce the feature extractor almost to give you a space where you can be more certain about in distribution data, but then less certain about out of distribution data. So this is naturally a harder problem, right? If you go back to this, even in the two dimensional case, I mentioned this is like to separate three classes, I need three lines, right? But to separate three clusters of data from their surroundings, I need a decision boundary that's shaped in a highly complex, high dimensional way, right? And so on. What are the trade-offs here that I make? Are they severe or did you find this works without severely impacting my accuracy as such? What's sort of the, like, what do I give up when I employ this method? That's a great question. So I think the natural trade-off would be to say if we employ this regularization, does that kind of hurt the performance, compromise the performance on the object detection side, right? And so we actually showed in the evaluation part in table one, if I recall correctly, that this whole learning framework actually achieves both quite effectively. 
I think it pretty much preserves the mAP. So that's on the rightmost column where we show, on the original PASCAL VOC and Berkeley DeepDrive tasks, how the mAP changes. It's pretty much the same or similar as the vanilla Faster R-CNN without adding our uncertainty regularizer. And so overall this learning framework kind of provides an extra layer of safety net by pruning out some of the OOD objects, but at the same time, if it's indeed an in-distribution image it can do as well. When you maybe, when we're at the experiments, I did not go into that at all in my explanation. Is there things that you want to particularly highlight or what should a reader of your paper take away from the experiments other than you beat all the baselines, which I think we've come to expect a little bit from machine learning papers, but what should a reader take away as sort of conclusions from your experimental section? Totally. I like that question a lot. And I think part of the ablation in the paper is, I think it's quite interesting, going beyond table one. We actually did some of the ablations comparing different synthesis strategies. And so I think table two is perhaps, table three as well. Table two is one of the interesting ones where we kind of try to contrast with, in terms of synthesis, we wanted to know whether this Gaussian-based sampling is the optimal one. There are works that have been done in the past, for example, directly using GANs to generate images. Or you could also do mixup to have this interpolation in the pixel space as well. And then they're also utilizing noise. I think those are all kind of natural alternatives for our outlier synthesis approach. So I think this is one of the ablations I personally quite like. And I also want to call out the fact that there is one previous paper, I think they used these proposals with the large background probability as the negative samples to regularize the model. And that turns out to be also suboptimal compared to using VOS. I've also, so you had this decision to, in the very last layer, introduce these virtual outliers. And I think in the video I observed something like, okay, that helps if the out of distribution data really looks different in the last layer. However, if I have out of distribution data, but that exhibits the same kind of low level features as in distribution data, that might not be the case in a vanilla network. Is this also, let's say, a weakness of your method? Or would you expect that your regularizer would automatically map these types of outliers to different, would construct the latent space such that they are different? Is it different? Yeah, for that question, perhaps I can defer to Xuefeng. I think Xuefeng has some good answer to that question. Oh, yeah. So, actually I want to answer this question from two perspectives. So first perspective, I think you were mentioning some, when a model actually encounters some near in-distribution OOD objects. So how does the feature space function to prevent the model from producing high confidence predictions? So basically, we can potentially adjust the sampling threshold in VOS to see whether we can create a tighter decision boundary in order to separate those in distribution objects and those OOD objects. And in addition, I think near in-distribution OOD detection is essentially a very hard problem. And there's a couple of works exploring this direction, but they are totally in the classification setting. So perhaps we can explore how to combine VOS with those techniques in the future. 
So this is the first perspective. I think from the second perspective, you're mentioning, you're asking whether we can look at different semantic spaces, like different layers of features. Actually I remember in the paper, actually in the appendix section, we have reported the OOD detection performance using an intermediate layer rather than the penultimate layer for outlier synthesis. And actually, it seems like the performance is not as good as what we have if we use the penultimate layer as the semantic space for VOS. So basically, I think the reason is that the later layers in the neural network might be more discriminative for classification. So those more discriminative layers may be better for OOD detection and outlier synthesis, because those synthesized outliers rely on the quality of those estimated covariance matrices and those mean embeddings for each in-distribution class. So I think that may be the reason why we chose to use the penultimate layer for VOS. It makes sense. As you go earlier and earlier, the less you can probably describe the data using sort of this mixture model approach. So I think it makes sense. I was just wondering. And even I think it's important to remember that we're still in high dimensions. And being in high dimensions means that even if some of the features are the same, the moose will have four legs and so on, it will kind of look like a dog, but not fully. So you'd still expect this in these high dimensions to be separated. So maybe a bit to the research process. You thought of this, you thought you're going to tackle this problem and so on. Could you maybe share a bit of how the process went? I think it's always, you just see the paper at the end and the paper is like, oh, wow, you have some examples here. I didn't even, I think, show them much in the video. So here you have comparisons at the bottom, everything that's green is detected as out of distribution, which is really nice. The helicopter, I think, was one of the most shared pictures of your paper. This looks really nice, right? I think what people don't see much is the process behind it. Like, could you describe it a little bit? Was there a time when you thought this wouldn't work or doesn't work or you don't know how to go further? What was it like to arrive at a system that finally works really well? Oh, totally. I'd be happy to speak on that. Perhaps Xuefeng can add on later as well. I think just like many other research processes, nothing works out of the box immediately, right? I think part of the research, the fun, is really kind of going through the process of figuring out a lot of intermediate obstacles. And so to give you some example, right, some of the challenges, I think, really, Xuefeng did a lot of hard work in the process. Just when we started the exploration, the first challenge we had to overcome is what's the right evaluation, right? How do we get this correct evaluation benchmark? Because a lot of the previous work focused on image classification, which is more or less well established. And in order to evaluate this new setting, we had to actually gather and clean all of these, for example, OOD test images as well. So that's some of the things you just have to kind of go through during the research process. And I think on the methodology side, there are also challenges as well. So one thing I want to share is there's actually one hyperparameter in VOS, which is, I think, called the starting epoch, which is when you start adding this regularizer. 
And so it turns out if you just train this whole entire loss with the object detection loss plus this L uncertainty from the start, things are not converging as well. So why is that? Because at the beginning of the training, the representation is not quite well formed yet. And so therefore, estimating this density in the latent space is not very reliable either, not to mention the sampling part. And so that's where we kind of got a little bit stuck on, is the performance. If you train from scratch, it's not really as desirable. And so later on, we figured out why don't we wait until the representation becomes more formed. So this idea of starting in a later training process helped resolve this issue. And so that's another example. But how did you get this idea? Did you have some indication from some metrics that you logged? Or did you just sit there and just try 10 different things and this one was the one that worked? Or, I imagine, you sit there, you try it and stuff doesn't converge. It's just like, well, it doesn't work. What can lead you to come up with the correct solution? I think for this one, perhaps it's more natural because if you think about how the method works, it has to rely on some embedding space that has a somewhat clear structure that you can perform density estimation on and then sample from. And so when things kind of don't work out, we look at what are the kind of possible major red flags that could happen. This one would be kind of the top one we were diagnosing into. Excellent. Yeah, I think that's a pretty neat overview. Is there something else that you'd like to share about this? Anything that we haven't touched on maybe? Anything that you want to specifically highlight? Yeah, I think I've talked a lot. Xuefeng, do you want to add anything that you particularly wanted to add on to? I think I don't have any further comments. Sharon has covered this paper comprehensively. Your code is online, right? So people can go, can get into it, can experiment with it. Yeah, I think that's pretty neat. Yeah. And with that, Sharon, Xuefeng, thank you very much for being here. And this was very enjoyable. Yeah. Thank you so much for having us again. It's been fun, you know, chatting about the work and so on. Thanks for inviting us. Thank you.
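To make the sampling procedure discussed in this conversation concrete, here is a minimal sketch of fitting class-conditional Gaussians with a shared covariance in the penultimate feature space and keeping only the lowest-likelihood samples as virtual outliers. This is an illustration under the Gaussian assumption described above, not the authors' implementation; the feature dimensions and sample counts are invented, and the real code lives in the repository linked in the description.

```python
import torch

def synthesize_virtual_outliers(feats_per_class, n_candidates=2000, n_keep=5):
    """feats_per_class: list of (N_k, D) tensors of in-distribution features."""
    means = [f.mean(dim=0) for f in feats_per_class]
    centered = torch.cat([f - m for f, m in zip(feats_per_class, means)], dim=0)
    # Shared covariance estimate across classes, with a small ridge for stability.
    cov = centered.T @ centered / centered.shape[0]
    cov = cov + 1e-4 * torch.eye(cov.shape[0])
    outliers = []
    for mean in means:
        dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
        cand = dist.sample((n_candidates,))   # candidates around this class
        logp = dist.log_prob(cand)            # likelihood under the class Gaussian
        keep = cand[logp.argsort()[:n_keep]]  # keep only the least likely ones
        outliers.append(keep)
    return torch.cat(outliers, dim=0)

# Toy usage: three classes of 64-dimensional penultimate features.
feats = [torch.randn(200, 64) + 4 * i for i in range(3)]
virtual_outliers = synthesize_virtual_outliers(feats)
print(virtual_outliers.shape)  # torch.Size([15, 64])
```

The kept low-likelihood samples would then feed the unknown side of the energy-based uncertainty regularizer, while the original in-distribution features form the known side.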
[ { "start": 0, "end": 9.76, "text": " Hello there, this is an interview with the authors of the paper, Learning What You Don't" }, { "start": 9.76, "end": 12.48, "text": " Know by Virtual Outlier Synthesis." }, { "start": 12.48, "end": 17.2, "text": " This paper presents a method to create what it calls virtual outliers, which are synthetic" }, { "start": 17.2, "end": 21, "text": " out of distribution data points in the latent space of the model." }, { "start": 21, "end": 26.04, "text": " And then it trains that model to successfully recognize these points as out of distribution." }, { "start": 26.04, "end": 30.759999999999998, "text": " The paper performs very well on a wide variety of benchmarks." }, { "start": 30.759999999999998, "end": 36.76, "text": " And I have actually made a comprehensive paper review in the last video about this paper." }, { "start": 36.76, "end": 41.14, "text": " If you haven't checked that out, please do because I'll go over the paper, I'll explain" }, { "start": 41.14, "end": 42.599999999999994, "text": " everything that's in it." }, { "start": 42.599999999999994, "end": 46.44, "text": " And the authors that I'm interviewing today have seen that review." }, { "start": 46.44, "end": 51.519999999999996, "text": " So we all start from a common level, and they're directly able to respond to my criticisms," }, { "start": 51.519999999999996, "end": 53.28, "text": " which is really, really cool." }, { "start": 53.28, "end": 58.64, "text": " So in this interview, we go over a lot of topics, but mainly I get my questions answered," }, { "start": 58.64, "end": 63, "text": " and we get a bit of a look at the behind the scenes of the research, how the research came" }, { "start": 63, "end": 68.4, "text": " about, what the authors were interested in, how they solved problems that came up in between," }, { "start": 68.4, "end": 69.4, "text": " and much more." }, { "start": 69.4, "end": 73.24000000000001, "text": " I hope you like these paper reviews plus interview things." }, { "start": 73.24000000000001, "end": 76.6, "text": " Let me know how I can improve these videos for you by leaving a comment." }, { "start": 76.6, "end": 81.24000000000001, "text": " Like if you do like the video, subscribe or tell someone to subscribe, and I'll see you" }, { "start": 81.24000000000001, "end": 82.24000000000001, "text": " around." }, { "start": 82.24, "end": 83.24, "text": " Bye." }, { "start": 83.24, "end": 84.24, "text": " Hi, everyone." }, { "start": 84.24, "end": 91.36, "text": " Today, I'm here with Sharon Lee and Xie Feng Du, who are authors on the virtual Outlier" }, { "start": 91.36, "end": 98.91999999999999, "text": " Synthesis paper, and are joining me today discussing the paper and as well as my attempt" }, { "start": 98.91999999999999, "end": 100.88, "text": " at an explanation of it." }, { "start": 100.88, "end": 103.8, "text": " Sharon, Xie Feng, welcome to the channel." }, { "start": 103.8, "end": 105.8, "text": " Thank you for having us." }, { "start": 105.8, "end": 106.8, "text": " Thank you." }, { "start": 106.8, "end": 109.28, "text": " It's very cool to have you here." }, { "start": 109.28, "end": 118.52, "text": " So you have made this paper, it has gathered, I think, a fair bit of attention in the community" }, { "start": 118.52, "end": 123.96000000000001, "text": " because outlier detection obviously is a big challenge, especially for security critical" }, { "start": 123.96000000000001, "end": 125.24000000000001, "text": " applications." 
}, { "start": 125.24000000000001, "end": 131.08, "text": " And not only do you do outlier detection in classification where we usually see it, but" }, { "start": 131.08, "end": 135.64, "text": " in like sort of the more challenging task of object detection." }, { "start": 135.64, "end": 141.6, "text": " So my first question would be, how did you even come up with this?" }, { "start": 141.6, "end": 148, "text": " Because it is not an obvious idea, let's say, to even tackle this problem." }, { "start": 148, "end": 151.55999999999997, "text": " Like what made you tackle the problem in the first place?" }, { "start": 151.55999999999997, "end": 153, "text": " Yeah, thank you for the question." }, { "start": 153, "end": 160.42, "text": " I'd be happy to share, I guess, from a little bit behind the scene on the research story," }, { "start": 160.42, "end": 163.64, "text": " how it got started." }, { "start": 163.64, "end": 171.35999999999999, "text": " And by the way, we're really encouraged to see the interest from the community about" }, { "start": 171.35999999999999, "end": 173, "text": " our work." }, { "start": 173, "end": 180.23999999999998, "text": " And so personally, I really am driven to solve problems that are real, meaning that has some" }, { "start": 180.23999999999998, "end": 182.67999999999998, "text": " connection to the real world." }, { "start": 182.67999999999998, "end": 188.89999999999998, "text": " And just like you said, I think out of distribution detection is one of those problems that really" }, { "start": 188.9, "end": 195.08, "text": " matter a lot in deploying machine learning models in the real world." }, { "start": 195.08, "end": 202.28, "text": " And so sometimes when we're getting closer to this more realistic scenarios, that also" }, { "start": 202.28, "end": 207.12, "text": " means problems are getting harder and more complex." }, { "start": 207.12, "end": 211.48000000000002, "text": " And this actually takes a trajectory to get there." }, { "start": 211.48000000000002, "end": 218.88, "text": " It's actually reflected, I think, in how the field of OOD detection has evolved and unfolded" }, { "start": 218.88, "end": 219.88, "text": " over the years." }, { "start": 219.88, "end": 225.79999999999998, "text": " And so if you look at some of the early research we've done, including some other researchers" }, { "start": 225.79999999999998, "end": 235.04, "text": " have done in the space, a very common way to evaluate how good the algorithms are based" }, { "start": 235.04, "end": 244.84, "text": " on the benchmark, which now seems quite artificial, like if you train a model on Cypher 10, and" }, { "start": 244.84, "end": 252.68, "text": " then you evaluate against data sets such as Street View housing number or SVHN." }, { "start": 252.68, "end": 258.96, "text": " And so the seemingly simple task actually took a while for the research community to" }, { "start": 258.96, "end": 259.96, "text": " make progress on." }, { "start": 259.96, "end": 266.68, "text": " I think over the years, we've definitely done a much better job developing algorithms to" }, { "start": 266.68, "end": 269.22, "text": " reduce the false positive rate." }, { "start": 269.22, "end": 276.16, "text": " And so that's why we think we're at a better timing to start tackling some of the harder" }, { "start": 276.16, "end": 280.12, "text": " questions on the object detection side." 
}, { "start": 280.12, "end": 288.24, "text": " And why object detection is very interesting and important, because that directly has a" }, { "start": 288.24, "end": 289.24, "text": " better connection." }, { "start": 289.24, "end": 297.20000000000005, "text": " For example, if you think about self-driving cars, none of those images are simple as Cypher" }, { "start": 297.2, "end": 301.59999999999997, "text": " 10, which has a single object well centered around in the scene." }, { "start": 301.59999999999997, "end": 309.59999999999997, "text": " In the real world, we are going to encounter inputs that have multiple objects in the scene." }, { "start": 309.59999999999997, "end": 314.32, "text": " And some of those are in distribution, which means they have been exposed to the model" }, { "start": 314.32, "end": 318.76, "text": " during the training time, and some of those are not quite." }, { "start": 318.76, "end": 324.32, "text": " And so I was really glad when Cypher went to join the lab as well to start tackling" }, { "start": 324.32, "end": 326.44, "text": " some of the questions." }, { "start": 326.44, "end": 334.12, "text": " So that's when we started the project earlier, actually last year already, last spring semester," }, { "start": 334.12, "end": 336.6, "text": " that's when we started." }, { "start": 336.6, "end": 342.6, "text": " So you were already in the space of outlier detection, let's say in the broad space of" }, { "start": 342.6, "end": 344.2, "text": " solving these types of problems." }, { "start": 344.2, "end": 351.04, "text": " And then what made you decide object detection?" }, { "start": 351.04, "end": 352.04, "text": " That's it." }, { "start": 352.04, "end": 353.4, "text": " Did you run across a problem?" }, { "start": 353.4, "end": 357.08, "text": " Or is this just a natural continuation of the classification data sets?" }, { "start": 357.08, "end": 358.84, "text": " That's another great question." }, { "start": 358.84, "end": 361.44, "text": " So why object detection?" }, { "start": 361.44, "end": 367.47999999999996, "text": " So one of the, like you said, I think one of the typical scenarios when we think about" }, { "start": 367.47999999999996, "end": 372.08, "text": " where outlier detection or out of distribution detection algorithms are being used in the" }, { "start": 372.08, "end": 378.28, "text": " real world is some of the high stakes scenarios like safety critical ones, for example, in" }, { "start": 378.28, "end": 379.28, "text": " self-driving." }, { "start": 379.28, "end": 385.4, "text": " And that is kind of built on these object detection models where not only we have to" }, { "start": 385.4, "end": 393.29999999999995, "text": " perform classification, but at the same time being able to localize where the objects are." }, { "start": 393.29999999999995, "end": 402.64, "text": " So I think in terms of motivation, that just seems like a very natural application focus" }, { "start": 402.64, "end": 403.64, "text": " to start with." }, { "start": 403.64, "end": 410.4, "text": " And of course, we have been, like I said, we have been in the space for working on the" }, { "start": 410.4, "end": 413.4, "text": " problem I think since a couple years ago." }, { "start": 413.4, "end": 417.71999999999997, "text": " And most of the work we've done in this space are on image classification." 
}, { "start": 417.71999999999997, "end": 422.44, "text": " And so in terms of solution, I also wanted to share a little bit how we arrived at this" }, { "start": 422.44, "end": 425.03999999999996, "text": " virtual outlier synthesis." }, { "start": 425.03999999999996, "end": 428.84, "text": " So I think the first motivation is pretty straightforward." }, { "start": 428.84, "end": 436.28, "text": " We wanted to kind of go beyond image level OOD detection to have this finer grained uncertainty" }, { "start": 436.28, "end": 442.23999999999995, "text": " estimates that tells us at the object level whether things are in distribution or OOD." }, { "start": 442.23999999999995, "end": 449.5, "text": " I think figure one in the paper is kind of a perfect illustration for why we need object" }, { "start": 449.5, "end": 450.84, "text": " level uncertainty, right?" }, { "start": 450.84, "end": 457.84, "text": " So as you explained quite eloquently in your video that, you know, this car is something" }, { "start": 457.84, "end": 462.32, "text": " the model has observed, which is in distribution object, right?" }, { "start": 462.32, "end": 466.91999999999996, "text": " Whereas this moose here is something that was not exposed to the model during training." }, { "start": 466.91999999999996, "end": 472.32, "text": " And so this picture kind of highlights the complexity that an image can contain at the" }, { "start": 472.32, "end": 476.28, "text": " same time, both in distribution and OOD object." }, { "start": 476.28, "end": 481.59999999999997, "text": " And therefore, we can't just derive an image level, you know, uncertainty measurement." }, { "start": 481.59999999999997, "end": 485.64, "text": " We have to, you know, go finer grained at the object level." }, { "start": 485.64, "end": 493.36, "text": " And so that was the first, you know, first, I would say the higher level motivation on" }, { "start": 493.36, "end": 496, "text": " the object detection side." }, { "start": 496, "end": 501.34, "text": " And then on the solution side, I want to share a little bit on how we arrived at the virtual" }, { "start": 501.34, "end": 502.97999999999996, "text": " outlier synthesis." }, { "start": 502.97999999999996, "end": 510.62, "text": " So the idea, the algorithmic idea of this paper is largely inspired by one of our previous" }, { "start": 510.62, "end": 518.92, "text": " papers on energy-based OOD detection, which was published at NURBS in 2020." }, { "start": 518.92, "end": 525.68, "text": " And so in that paper, we focused on image classification setting." }, { "start": 525.68, "end": 532.2, "text": " But from a learning algorithm perspective, we proposed this called energy regularized" }, { "start": 532.2, "end": 540.32, "text": " learning, which in a nutshell is trying to, oh, I see your cat there, just walking by." }, { "start": 540.32, "end": 548.2, "text": " So in a nutshell, that learning framework tries to kind of tackle the problem of classification" }, { "start": 548.2, "end": 556.9200000000001, "text": " by not only minimizing the risks on the in-distribution data set, but at the same time, we're introducing" }, { "start": 556.9200000000001, "end": 558.08, "text": " a regularizer." }, { "start": 558.08, "end": 563, "text": " And this regularizer has very similar spirit as what we're using here in this paper." 
}, { "start": 563, "end": 570.8, "text": " And so this regularizer is trying to kind of minimizing the risk or trying to pushing" }, { "start": 570.8, "end": 577.72, "text": " the energy surface to be as distinguishable between known distribution versus unknown" }, { "start": 577.72, "end": 579.06, "text": " distribution." }, { "start": 579.06, "end": 590.6, "text": " And so for the image classification setting, we used this technique or data set of outlier" }, { "start": 590.6, "end": 595.9200000000001, "text": " exposure, which relies on an external different data set." }, { "start": 595.9200000000001, "end": 599.36, "text": " That's not overlapping with the in-distribution data set." }, { "start": 599.36, "end": 606.28, "text": " So that's actually one of the requirement or limitation, if you call, in that learning" }, { "start": 606.28, "end": 607.84, "text": " framework." }, { "start": 607.84, "end": 612.44, "text": " And that does not directly translate into the object detection setting anymore, because" }, { "start": 612.44, "end": 620.6800000000001, "text": " as you can imagine, in order to bring in an outlier data set for object detection, it's" }, { "start": 620.6800000000001, "end": 625.8000000000001, "text": " going to be tricky, because you have to annotate through tons of images to make sure that at" }, { "start": 625.8000000000001, "end": 629.84, "text": " the object level, things do not overlap with our training data." }, { "start": 629.84, "end": 634.74, "text": " And so this data collection itself is a prohibitive process." }, { "start": 634.74, "end": 640.74, "text": " And it can be very time-consuming and laborious and so on." }, { "start": 640.74, "end": 647.6800000000001, "text": " And so that also kind of motivate us to think, well, if there is no external data we can" }, { "start": 647.6800000000001, "end": 654.64, "text": " rely on, is there any way we can devise some of the outlier data from the in-distribution" }, { "start": 654.64, "end": 655.64, "text": " data itself?" }, { "start": 655.64, "end": 665.38, "text": " So that's where this whole idea started really is to think further how we improve on top" }, { "start": 665.38, "end": 670.08, "text": " of the original learning framework that we had." }, { "start": 670.08, "end": 679.08, "text": " And then that's how you gathered the ideas of synthesizing points that are not where" }, { "start": 679.08, "end": 680.08, "text": " the data is." }, { "start": 680.08, "end": 686, "text": " Is there a connection to, I'm not sure how aware of, Jan LeCun has been pushing this" }, { "start": 686, "end": 690.9000000000001, "text": " energy-based learning a lot, sort of pushing energy up where data is, pushing energy down" }, { "start": 690.9000000000001, "end": 691.9000000000001, "text": " anywhere else." }, { "start": 691.9000000000001, "end": 694.36, "text": " Do you see some sort of a connection to that?" }, { "start": 694.36, "end": 695.36, "text": " Absolutely." }, { "start": 695.36, "end": 700.16, "text": " In fact, the work that I just mentioned on energy-based out-of-distribution detection" }, { "start": 700.16, "end": 707.36, "text": " that was published at New Earths 2020 was precisely inspired by this whole energy-based" }, { "start": 707.36, "end": 712.44, "text": " framework from Jan LeCun." }, { "start": 712.44, "end": 716.4, "text": " By the way, the plural of moose is moose." }, { "start": 716.4, "end": 718.84, "text": " I didn't know in my video." 
}, { "start": 718.84, "end": 721.4, "text": " That's good to know." }, { "start": 721.4, "end": 722.4, "text": " I figured it out." }, { "start": 722.4, "end": 723.4, "text": " Not meese." }, { "start": 723.4, "end": 725.4, "text": " Not meese." }, { "start": 725.4, "end": 726.4, "text": " Yeah." }, { "start": 726.4, "end": 729.64, "text": " So, I mean, it makes sense." }, { "start": 729.64, "end": 733.52, "text": " And you've seen my explanation, right?" }, { "start": 733.52, "end": 739.8, "text": " And I think one of the criticisms a bit that I had was everything's pretty in this sort" }, { "start": 739.8, "end": 745.86, "text": " of 2D landscape where you can show here's the data and there's outside the data." }, { "start": 745.86, "end": 752.4, "text": " But it gets very complicated once you go to higher dimensions." }, { "start": 752.4, "end": 760.72, "text": " For example, you had the picture here when you mentioned we assume that the high-dimensional" }, { "start": 760.72, "end": 763.52, "text": " data are Gaussians." }, { "start": 763.52, "end": 768.24, "text": " Obviously, your method works, right?" }, { "start": 768.24, "end": 770.72, "text": " I think your evaluation is very thorough." }, { "start": 770.72, "end": 774.56, "text": " You measure on a lot of datasets against a lot of baselines and so on." }, { "start": 774.56, "end": 777.88, "text": " So obviously, something works here." }, { "start": 777.88, "end": 787.24, "text": " However, do you have some maybe some response to me, to someone who says, this does not" }, { "start": 787.24, "end": 793.72, "text": " convince me that a Gaussian mixture model is appropriate for this really high-dimensional" }, { "start": 793.72, "end": 794.72, "text": " data?" }, { "start": 794.72, "end": 800.48, "text": " Yeah, I actually like that question a lot." }, { "start": 800.48, "end": 808.9200000000001, "text": " I wanted to maybe take a step back and first just to highlight one of the key, I guess" }, { "start": 808.9200000000001, "end": 813.76, "text": " the key insight and knowledge, which I like about this paper aside from the distributional" }, { "start": 813.76, "end": 820.48, "text": " assumption that we made here, is the fact that the virtual outlier synthesis is done" }, { "start": 820.48, "end": 822.96, "text": " in a feature space, right?" }, { "start": 822.96, "end": 828.8000000000001, "text": " As opposed to the original high-dimensional pixel space is already a much, much lower" }, { "start": 828.8000000000001, "end": 830.32, "text": " dimensionality." }, { "start": 830.32, "end": 837.6, "text": " So what you see here, this synthesis is completely done in this later representation or sometimes" }, { "start": 837.6, "end": 843.44, "text": " we extract this from the penultimate layer of neural network." }, { "start": 843.44, "end": 851.44, "text": " So some earlier works explored, so we're not the first to kind of try to synthesize outliers." }, { "start": 851.44, "end": 856.4000000000001, "text": " But what we've done differently is to realize in order to regularize the neural network's" }, { "start": 856.4, "end": 863.12, "text": " decision boundary, we don't have to go all the way to the original pixel space where" }, { "start": 863.12, "end": 871.12, "text": " training a GAM model can be quite tricky and the convergence is going to be a challenging" }, { "start": 871.12, "end": 872.64, "text": " problem on its own." 
}, { "start": 872.64, "end": 878.24, "text": " So that's one kind of step, which I think an important step that we've taken is to" }, { "start": 878.24, "end": 888.08, "text": " look into a lower dimensional latent space, which in some sense makes this problem more" }, { "start": 888.08, "end": 891.92, "text": " tractable compared to the original data space." }, { "start": 891.92, "end": 897.92, "text": " And now coming to the second point, I think when it comes to modeling the density of the" }, { "start": 897.92, "end": 903.52, "text": " representation space, it's actually also a non-trivial problem, right?" }, { "start": 903.52, "end": 906.04, "text": " Density estimation on its own." }, { "start": 906.04, "end": 909.3199999999999, "text": " I think it's a notoriously hard problem in machine learning." }, { "start": 909.3199999999999, "end": 915.8, "text": " And so when we initially approached this problem, we kind of make this, I would say, you know," }, { "start": 915.8, "end": 923.36, "text": " Gaussian mixture distribution is the most straightforward assumption kind of to make." }, { "start": 923.36, "end": 930.3199999999999, "text": " And this first algorithm framework, I would say, you know, we kind of just wanted to show" }, { "start": 930.32, "end": 937.08, "text": " even under somewhat simplified assumption of representation space being Gaussian, you" }, { "start": 937.08, "end": 943.2, "text": " can still do this virtual outlier synthesis tractably and train things end to end." }, { "start": 943.2, "end": 949.2, "text": " And from an empirical perspective, as you said, it actually works surprisingly well." }, { "start": 949.2, "end": 954.12, "text": " But that doesn't mean this has to be the only solution to it." }, { "start": 954.12, "end": 961.4, "text": " I think there are great opportunities that Voss really opens up to is how do we perform" }, { "start": 961.4, "end": 967.16, "text": " this synthesis in the feature space more creatively, right?" }, { "start": 967.16, "end": 971.68, "text": " When it comes to the method itself, you have this overview diagram right here." }, { "start": 971.68, "end": 975.12, "text": " And I've attempted to explain this a little bit." }, { "start": 975.12, "end": 978.66, "text": " Did you find my explanation satisfactory?" }, { "start": 978.66, "end": 980.12, "text": " Is there something missing?" }, { "start": 980.12, "end": 982.16, "text": " Is there emphasis in the wrong place?" }, { "start": 982.16, "end": 988.7199999999999, "text": " Or what would you add to so people really understand what's going on?" }, { "start": 988.7199999999999, "end": 993.12, "text": " I think you did a phenomenal job explaining this whole pipeline, perhaps in a clearer" }, { "start": 993.12, "end": 997.52, "text": " way if we were to have to present ourselves." }, { "start": 997.52, "end": 1005.6, "text": " One thing I wanted to maybe call out is this notion of, you know, this uncertainty loss," }, { "start": 1005.6, "end": 1009.6, "text": " why we formulate this problem that way." }, { "start": 1009.6, "end": 1017.32, "text": " So at a higher level, you can think of our learning framework as trying to do something" }, { "start": 1017.32, "end": 1025.56, "text": " more than the typical supervised learning, say training a model based on cross entropy" }, { "start": 1025.56, "end": 1026.84, "text": " loss." 
}, { "start": 1026.84, "end": 1033.24, "text": " There's a bit of element in the synthesis part, which closer to this generative modeling" }, { "start": 1033.24, "end": 1037.28, "text": " and density estimation, which we've also talked about." }, { "start": 1037.28, "end": 1045, "text": " And so the whole framework combines sort of both bits of supervised learning and also" }, { "start": 1045, "end": 1049.8799999999999, "text": " there is some density estimation involved as well." }, { "start": 1049.8799999999999, "end": 1057.96, "text": " And I think one interesting bits in the learning methodology is how we leverage energy as an" }, { "start": 1057.96, "end": 1068.16, "text": " uncertainty measurement and to separate apart the known objects versus the unknown ones." }, { "start": 1068.16, "end": 1078.16, "text": " And so it's somewhat a problem that's not quite as complex as trying to estimate exactly" }, { "start": 1078.16, "end": 1082.08, "text": " the pointwise density of p of x." }, { "start": 1082.08, "end": 1090.6, "text": " But rather we're kind of picking back on a simpler problem of we just want this energy" }, { "start": 1090.6, "end": 1097.28, "text": " to be estimated as a level set that is sufficient enough to separate these two parts of data" }, { "start": 1097.28, "end": 1102.56, "text": " rather than getting every single point estimated correctly, if that makes sense." }, { "start": 1102.56, "end": 1109.04, "text": " The uncertainty loss you describe somewhere here." }, { "start": 1109.04, "end": 1117.28, "text": " And yeah, so I think I had this other comment where I said directly this loss sort of only" }, { "start": 1117.28, "end": 1119.76, "text": " affects sort of the classification layer." }, { "start": 1119.76, "end": 1124.6, "text": " However, when you think about it, what you could do is you could simply take your Gaussian" }, { "start": 1124.6, "end": 1126.18, "text": " mixture model, right?" }, { "start": 1126.18, "end": 1130.3799999999999, "text": " And you could simply have your data point there." }, { "start": 1130.3799999999999, "end": 1134.84, "text": " And you could say, well, if it's unlikely, it's out of distribution, right?" }, { "start": 1134.84, "end": 1140.08, "text": " I could simply map my inference data point and then evaluate it according to the Gaussian" }, { "start": 1140.08, "end": 1142.76, "text": " mixture model that I have at training time." }, { "start": 1142.76, "end": 1146.6799999999998, "text": " And I say, well, it's low likelihood, it's out of distribution, gone, right?" }, { "start": 1146.6799999999998, "end": 1152.1999999999998, "text": " I wouldn't need all of this thing, which tells me that this loss does more than just, you" }, { "start": 1152.1999999999998, "end": 1154, "text": " know, modify the last layer bit." }, { "start": 1154, "end": 1160.36, "text": " So there is a almost, is it fair to or is this correct my assumption that there is like" }, { "start": 1160.36, "end": 1164.76, "text": " this downstream effect on the entire model?" }, { "start": 1164.76, "end": 1169.08, "text": " How would you like intuitively adding a loss like this?" }, { "start": 1169.08, "end": 1177.72, "text": " What does it do to the whole feature extraction pipeline that leads to the latent space?" }, { "start": 1177.72, "end": 1180.26, "text": " Yeah, that's a great question." }, { "start": 1180.26, "end": 1187.48, "text": " So perhaps to answer a bit more to that, do you mind scrolling up a little bit?" 
}, { "start": 1187.48, "end": 1193.54, "text": " I think we have perfect, yes, that posterior probability right there." }, { "start": 1193.54, "end": 1199.24, "text": " So keep in mind this whole training is done in an end-to-end fashion, right?" }, { "start": 1199.24, "end": 1205.8, "text": " And then whenever we have an input object that goes into this network, we are optimizing" }, { "start": 1205.8, "end": 1206.8, "text": " for this loss." }, { "start": 1206.8, "end": 1213.58, "text": " And this loss will be back propagated all the way, right, through this entire convolutional" }, { "start": 1213.58, "end": 1216.6, "text": " backbone in this object detector." }, { "start": 1216.6, "end": 1224.8799999999999, "text": " And so this objective L uncertainty is trying to kind of separate apart in terms of this" }, { "start": 1224.8799999999999, "end": 1225.8799999999999, "text": " energy." }, { "start": 1225.8799999999999, "end": 1228.7199999999998, "text": " We'll get to this interpretation of the energy later on." }, { "start": 1228.7199999999998, "end": 1233.84, "text": " But at the very high level, it's trying to just push energy to be two sides." }, { "start": 1233.84, "end": 1237.1799999999998, "text": " One is above zero, one is below zero, right?" }, { "start": 1237.1799999999998, "end": 1242.52, "text": " And if we look at this connection with respect to this posterior probability here, so we" }, { "start": 1242.52, "end": 1256.52, "text": " can interpret energy as this density function for that data point p of x, perhaps plugged" }, { "start": 1256.52, "end": 1259.48, "text": " in with some unknown factor that we don't know, right?" }, { "start": 1259.48, "end": 1264.12, "text": " And so this energy does not precisely capture this density just yet." }, { "start": 1264.12, "end": 1270.04, "text": " But during this optimization process, we hope that through this propagation and minimizing" }, { "start": 1270.04, "end": 1278.24, "text": " this objective, that this whole training would converge to a point where the density could" }, { "start": 1278.24, "end": 1283.12, "text": " be more separable between the ID object and then the OID object." }, { "start": 1283.12, "end": 1289.3999999999999, "text": " So that's the inherent connection between the uncertainty measurement to the density." }, { "start": 1289.3999999999999, "end": 1293.32, "text": " So you sort of maybe reformulated a bit." }, { "start": 1293.32, "end": 1299.96, "text": " You want to coerce the feature extractor almost to give you a space where you can be more" }, { "start": 1299.96, "end": 1308.6000000000001, "text": " certain about in distribution data, but then less certain about out of distribution data." }, { "start": 1308.6000000000001, "end": 1313.88, "text": " So this is naturally a harder problem, right?" }, { "start": 1313.88, "end": 1320.68, "text": " If you go back to this, even in the two dimensional case, I mentioned this is like to separate" }, { "start": 1320.68, "end": 1323.3600000000001, "text": " three classes, I need three lines, right?" }, { "start": 1323.36, "end": 1332.9199999999998, "text": " But to separate three clusters of data from their surroundings, I need a very decision" }, { "start": 1332.9199999999998, "end": 1337.6799999999998, "text": " boundary that's shaped highly complex, high dimensional, right?" }, { "start": 1337.6799999999998, "end": 1340.6, "text": " And so on." }, { "start": 1340.6, "end": 1343.8, "text": " What are the trade-offs here that I make?" 
}, { "start": 1343.8, "end": 1350.6, "text": " Are they severe or did you find this works without severely impacting my accuracy as" }, { "start": 1350.6, "end": 1351.6, "text": " such?" }, { "start": 1351.6, "end": 1358.8799999999999, "text": " What's sort of the, like, what do I give up when I employ this method?" }, { "start": 1358.8799999999999, "end": 1359.8799999999999, "text": " That's a great question." }, { "start": 1359.8799999999999, "end": 1364.08, "text": " So I think there's natural trade-off would be to say if we employ this regularization," }, { "start": 1364.08, "end": 1369.7199999999998, "text": " does that kind of hurt the performance, compromise the performance on the object detection side," }, { "start": 1369.7199999999998, "end": 1370.7199999999998, "text": " right?" }, { "start": 1370.7199999999998, "end": 1376.9199999999998, "text": " And so we actually showed in the evaluation part in table one, if I recall correctly," }, { "start": 1376.92, "end": 1384.76, "text": " that this whole learning framework actually achieves both quite effectively." }, { "start": 1384.76, "end": 1387.52, "text": " I think it pretty much preserves the MAP." }, { "start": 1387.52, "end": 1394.24, "text": " So that's on the rightmost column where we show the, on the original PASCO VOC and Berkeley" }, { "start": 1394.24, "end": 1399.48, "text": " deep drive task, how is that MAP changes." }, { "start": 1399.48, "end": 1407.04, "text": " It's pretty much the same or similar as the vanilla FASTR CNN without adding our uncertainty" }, { "start": 1407.04, "end": 1408.4, "text": " regularizer." }, { "start": 1408.4, "end": 1414.78, "text": " And so overall this learning from where it kind of provides an actual layer of safety" }, { "start": 1414.78, "end": 1422.8, "text": " net by pruning out some of the OOD object, but at the same time, if it's indeed an indistribution" }, { "start": 1422.8, "end": 1426.3, "text": " image it can do as well." }, { "start": 1426.3, "end": 1434.12, "text": " When you maybe, when we're at the experiments, I did not go into that at all in my explanation." }, { "start": 1434.12, "end": 1439.52, "text": " Is there things that you want to particularly highlight or what should a reader of your" }, { "start": 1439.52, "end": 1446.04, "text": " paper take away from the experiments other than you beat all the baselines, which I think" }, { "start": 1446.04, "end": 1453.68, "text": " we've come to expect a little bit from machine learning papers, but what should a reader" }, { "start": 1453.68, "end": 1458.52, "text": " take away as sort of conclusions from your experimental section?" }, { "start": 1458.52, "end": 1459.52, "text": " Totally." }, { "start": 1459.52, "end": 1461.8, "text": " I like that question a lot." }, { "start": 1461.8, "end": 1469.2, "text": " And I think part of the ablation in the paper is, I think it's quite interesting, going" }, { "start": 1469.2, "end": 1471.16, "text": " beyond table one." }, { "start": 1471.16, "end": 1477.76, "text": " We actually did some of the ablations comparing two different synthesis strategy." }, { "start": 1477.76, "end": 1481.76, "text": " And so I think table two is perhaps, table three as well." }, { "start": 1481.76, "end": 1490.44, "text": " Table two is one of the interesting ones where we kind of try to contrast with, in terms" }, { "start": 1490.44, "end": 1498.68, "text": " of synthesize, we wanted to know whether this Gaussian-based sampling is the optimal one." 
}, { "start": 1498.68, "end": 1507.76, "text": " There are works have done in the past, for example, directly using GaN to generate images." }, { "start": 1507.76, "end": 1517.36, "text": " Or you could also do mix-up to have this interpolation in the pixel space as well." }, { "start": 1517.36, "end": 1520.64, "text": " And then they're also utilizing noise." }, { "start": 1520.64, "end": 1529.48, "text": " I think those are all kind of natural alternatives for our outlier synthesis approach." }, { "start": 1529.48, "end": 1537.68, "text": " So I think this is one of the ablations I personally quite like." }, { "start": 1537.68, "end": 1543.64, "text": " And I also want to call out the fact that there is one previous paper, I think they" }, { "start": 1543.64, "end": 1551.96, "text": " used these proposals with the large background probability as the negative samples to regularize" }, { "start": 1551.96, "end": 1552.96, "text": " the model." }, { "start": 1552.96, "end": 1558.24, "text": " And that turns out to be also suboptimal compared to using BOSS." }, { "start": 1558.24, "end": 1568.52, "text": " I've also, so you had this decision to, in the very last layer, introduce these virtual" }, { "start": 1568.52, "end": 1569.8, "text": " outliers." }, { "start": 1569.8, "end": 1576.8, "text": " And I think in the video I observed something like, okay, that helps if the out of distribution" }, { "start": 1576.8, "end": 1579.32, "text": " data really looks different in the last layer." }, { "start": 1579.32, "end": 1584.52, "text": " However, if I have out of distribution data, but that exhibits the same kind of low level" }, { "start": 1584.52, "end": 1591.4, "text": " features as in distribution data, that might not be the case in a vanilla network." }, { "start": 1591.4, "end": 1594.52, "text": " Is this also, let's say, a weakness of your method?" }, { "start": 1594.52, "end": 1601.92, "text": " Or would you expect that your regularizer would automatically map these types of outliers" }, { "start": 1601.92, "end": 1608.04, "text": " to different, would construct the latent space such that they are different?" }, { "start": 1608.04, "end": 1610.04, "text": " Is it different?" }, { "start": 1610.04, "end": 1615, "text": " Yeah, for that question, perhaps I can defer to Shufun." }, { "start": 1615, "end": 1619.04, "text": " I think Shufun has some good answer to that question." }, { "start": 1619.04, "end": 1621.6, "text": " Oh, yeah." }, { "start": 1621.6, "end": 1627.84, "text": " So, actually I want to answer this question from two perspectives." }, { "start": 1627.84, "end": 1635.12, "text": " So first perspective, I think you were mentioning some, when a model actually encounters some" }, { "start": 1635.12, "end": 1638.44, "text": " near in distribution node objects." }, { "start": 1638.44, "end": 1644.68, "text": " So how does the feature space functions to prevent the model to predict high confidence" }, { "start": 1644.68, "end": 1645.8, "text": " predictions?" }, { "start": 1645.8, "end": 1652.44, "text": " So basically, we can potentially adjust the sampling threshold in VOS to see whether we" }, { "start": 1652.44, "end": 1661, "text": " can create a tighter decision boundary in order to separate those in distribution objects" }, { "start": 1661, "end": 1663.68, "text": " and those OD objects." }, { "start": 1663.68, "end": 1670.6000000000001, "text": " And in addition, I think near in distribution OD detection is essentially a very hard problem." 
}, { "start": 1670.6000000000001, "end": 1676.64, "text": " And there's a couple of works exploring this direction, but they are totally in the classification" }, { "start": 1676.64, "end": 1677.64, "text": " setting." }, { "start": 1677.64, "end": 1685.24, "text": " So perhaps we can explore how to combine VOS with those techniques in the future." }, { "start": 1685.24, "end": 1686.8400000000001, "text": " So this is the first perspective." }, { "start": 1686.84, "end": 1696.6, "text": " I think from the second perspective, I'm mentioning you're saying that can we look at different" }, { "start": 1696.6, "end": 1700.6, "text": " semantic spaces, like different layers of features." }, { "start": 1700.6, "end": 1705.9199999999998, "text": " Actually I remember in the paper, actually in the appendix section, we have reported" }, { "start": 1705.9199999999998, "end": 1714.1999999999998, "text": " the OD detection performance using the layer rather than the panoply layer for our licensees." }, { "start": 1714.2, "end": 1719.8400000000001, "text": " And actually, it seems like the performance is not as good as what we have if we use the" }, { "start": 1719.8400000000001, "end": 1724.56, "text": " panoply layer as the semantic space for VOS." }, { "start": 1724.56, "end": 1731.0800000000002, "text": " So basically, I think the reason is that the later layers in the neural network might be" }, { "start": 1731.0800000000002, "end": 1735.44, "text": " more discriminative for classification." }, { "start": 1735.44, "end": 1743.88, "text": " So those more discriminative layers may be better for OD detection and our licensees" }, { "start": 1743.88, "end": 1750.72, "text": " because those synthesized OD layers relies on the quality of those estimated covariance" }, { "start": 1750.72, "end": 1755.0800000000002, "text": " matrix and those mean embeddings for each in distribution class." }, { "start": 1755.0800000000002, "end": 1763.5200000000002, "text": " So I think that may be the reason for why we choose to use the panoply layer for VOS." }, { "start": 1763.5200000000002, "end": 1764.5200000000002, "text": " It makes sense." }, { "start": 1764.5200000000002, "end": 1770.64, "text": " As you go earlier and earlier, the less you can probably describe the data using sort" }, { "start": 1770.64, "end": 1775, "text": " of this mixture model approach." }, { "start": 1775, "end": 1777.48, "text": " So I think it makes sense." }, { "start": 1777.48, "end": 1779.0800000000002, "text": " I was just wondering." }, { "start": 1779.0800000000002, "end": 1783.1200000000001, "text": " And even I think it's important to remember that we're still in high dimensions." }, { "start": 1783.1200000000001, "end": 1787.48, "text": " And with being in high dimensions, it means that even if some of the features are the" }, { "start": 1787.48, "end": 1792.8400000000001, "text": " same, the moose will have four legs and so on, it will kind of look like a dog, but not" }, { "start": 1792.8400000000001, "end": 1793.8400000000001, "text": " fully." }, { "start": 1793.84, "end": 1801.04, "text": " So you'd still expect this in these high dimensions to be separated." }, { "start": 1801.04, "end": 1804.12, "text": " So maybe a bit to the research process." }, { "start": 1804.12, "end": 1807.9599999999998, "text": " You thought of this, you thought you're going to tackle this problem and so on." 
}, { "start": 1807.9599999999998, "end": 1815.8799999999999, "text": " Could you maybe share a bit of how the process, I think it's always, you just see the paper" }, { "start": 1815.8799999999999, "end": 1819.3999999999999, "text": " at the end and the paper is like, oh, wow, you have some examples here." }, { "start": 1819.3999999999999, "end": 1822.3999999999999, "text": " I didn't even, I think, show them much in the video." }, { "start": 1822.4, "end": 1827.64, "text": " So here you have comparisons at the bottom, everything that's green is detected as out" }, { "start": 1827.64, "end": 1830.4, "text": " of distribution, which is really nice." }, { "start": 1830.4, "end": 1837.2800000000002, "text": " The helicopter, I think, was the most one of the most shared pictures of your paper." }, { "start": 1837.2800000000002, "end": 1840.0800000000002, "text": " This looks really nice, right?" }, { "start": 1840.0800000000002, "end": 1843.76, "text": " I think what people don't see much is the process behind it." }, { "start": 1843.76, "end": 1845.72, "text": " Like, could you describe it a little bit?" }, { "start": 1845.72, "end": 1854.84, "text": " Was there a time when you thought this wouldn't work or doesn't work or you don't know how" }, { "start": 1854.84, "end": 1856.8, "text": " to go further?" }, { "start": 1856.8, "end": 1862.92, "text": " How was it like to achieve at a system or arrive at a system that finally works really" }, { "start": 1862.92, "end": 1863.92, "text": " well?" }, { "start": 1863.92, "end": 1864.92, "text": " Oh, totally." }, { "start": 1864.92, "end": 1868.3600000000001, "text": " I'd be happy to speak on that." }, { "start": 1868.3600000000001, "end": 1870.6000000000001, "text": " Perhaps Rufun can add on later as well." }, { "start": 1870.6, "end": 1877.56, "text": " I think just like many other research process, nothing works out of the box immediately," }, { "start": 1877.56, "end": 1878.56, "text": " right?" }, { "start": 1878.56, "end": 1884.76, "text": " I think part of the research, the fun is really kind of going through the process of figuring" }, { "start": 1884.76, "end": 1890.52, "text": " out a lot of intermediate obstacles." }, { "start": 1890.52, "end": 1895, "text": " And so to give you some example, right, some of the challenges, I think, really, Rufun" }, { "start": 1895, "end": 1897.52, "text": " did a lot of hard work in the process." }, { "start": 1897.52, "end": 1905.72, "text": " Just when we started the exploration, the first challenge we have to overcome is what's" }, { "start": 1905.72, "end": 1907.56, "text": " the right evaluation, right?" }, { "start": 1907.56, "end": 1911.08, "text": " How do we get this correct evaluation benchmark?" }, { "start": 1911.08, "end": 1916.4, "text": " Because a lot of the previous work focused on image classification that's more or less" }, { "start": 1916.4, "end": 1918, "text": " well established." }, { "start": 1918, "end": 1927.76, "text": " And in order to evaluate this new setting, we have to actually gather and clean all of" }, { "start": 1927.76, "end": 1931.06, "text": " these, for example, OOD test images as well." }, { "start": 1931.06, "end": 1939.16, "text": " So that's some of the things you just have to kind of go through during the research" }, { "start": 1939.16, "end": 1940.16, "text": " process." }, { "start": 1940.16, "end": 1947.68, "text": " And I think on the methodology side, there are also the challenges as well." 
}, { "start": 1947.68, "end": 1955.96, "text": " So one thing I want to share is there's actually one hyperparameter in VOS, which is, I think," }, { "start": 1955.96, "end": 1961.88, "text": " called the starting epoch, which is when you start adding this regularizer." }, { "start": 1961.88, "end": 1969.24, "text": " And so it turns out if you just train this whole entire loss with the object detection" }, { "start": 1969.24, "end": 1977.2, "text": " plus the LL uncertainty from the start, things are not converging as well." }, { "start": 1977.2, "end": 1978.2, "text": " So why is that?" }, { "start": 1978.2, "end": 1982.72, "text": " Because at the beginning of the training, the representation is not quite well formed" }, { "start": 1982.72, "end": 1983.8600000000001, "text": " yet." }, { "start": 1983.8600000000001, "end": 1990.9, "text": " And so therefore, estimating this density in the latent space is not also very reliable" }, { "start": 1990.9, "end": 1993.76, "text": " and not to mention the sampling part." }, { "start": 1993.76, "end": 1998.6000000000001, "text": " And so that's where we kind of got a little bit stuck on is the performance." }, { "start": 1998.6000000000001, "end": 2003.4, "text": " If you train from scratch, it's not really as desirable." }, { "start": 2003.4, "end": 2009.4, "text": " And so later on, we figured out why don't we wait until the representation becomes more" }, { "start": 2009.4, "end": 2010.4, "text": " formed." }, { "start": 2010.4, "end": 2021.88, "text": " So this idea of starting in a later training process helped resolve this issue." }, { "start": 2021.88, "end": 2025.2800000000002, "text": " And so that's another example." }, { "start": 2025.2800000000002, "end": 2027.88, "text": " But how did you get this idea?" }, { "start": 2027.88, "end": 2030.92, "text": " Did you have some indication from some metrics that you logged?" }, { "start": 2030.92, "end": 2036.16, "text": " Or did you just sit there and just try 10 different things and this one was the one" }, { "start": 2036.16, "end": 2037.16, "text": " that worked?" }, { "start": 2037.16, "end": 2042, "text": " Or I imagine you sit there, you try it and stuff doesn't converge." }, { "start": 2042, "end": 2045.3000000000002, "text": " It's just like, well, it doesn't work." }, { "start": 2045.3000000000002, "end": 2051.04, "text": " What can lead you to come up with the correct solution?" }, { "start": 2051.04, "end": 2056.96, "text": " I think for this one, perhaps it's more natural because if you think about how the method" }, { "start": 2056.96, "end": 2063.92, "text": " works, it has to rely on some embedding space that has a somewhat clear structure that you" }, { "start": 2063.92, "end": 2068.04, "text": " can perform density estimation and then sample from." }, { "start": 2068.04, "end": 2078.12, "text": " And so when things kind of doesn't work out, we look at what are the kind of possible major" }, { "start": 2078.12, "end": 2080.2400000000002, "text": " reflux that could happen." }, { "start": 2080.2400000000002, "end": 2086.4, "text": " This one would be the kind of the top one we are diagnosing into." }, { "start": 2086.4, "end": 2087.4, "text": " Excellent." }, { "start": 2087.4, "end": 2091.4, "text": " Yeah, I think that's a pretty neat overview." }, { "start": 2091.4, "end": 2097.1600000000003, "text": " Is there something else that you'd like to share about this?" }, { "start": 2097.1600000000003, "end": 2099.44, "text": " Anything that we haven't touched on maybe?" 
}, { "start": 2099.44, "end": 2101.48, "text": " Anything that you want to specifically highlight?" }, { "start": 2101.48, "end": 2103.56, "text": " Yeah, I think I've talked a lot." }, { "start": 2103.56, "end": 2108.12, "text": " Xufeng, do you want to add anything that you particularly wanted to add on to?" }, { "start": 2108.12, "end": 2111, "text": " I think I don't have any further comments." }, { "start": 2111, "end": 2116.36, "text": " Sharon has covered comprehensively about this paper." }, { "start": 2116.36, "end": 2119.92, "text": " Your code is online, right?" }, { "start": 2119.92, "end": 2124.1200000000003, "text": " So people can go, can get into it, can experiment with it." }, { "start": 2124.1200000000003, "end": 2126.6800000000003, "text": " Yeah, I think that's pretty neat." }, { "start": 2126.6800000000003, "end": 2127.6800000000003, "text": " Yeah." }, { "start": 2127.6800000000003, "end": 2133.2400000000002, "text": " And with that, Sharon, Xufeng, thank you very much for being here." }, { "start": 2133.2400000000002, "end": 2134.6, "text": " And this was very enjoyable." }, { "start": 2134.6, "end": 2135.6, "text": " Yeah." }, { "start": 2135.6, "end": 2137.04, "text": " Thank you so much for having us again." }, { "start": 2137.04, "end": 2140.56, "text": " It's been fun, you know, chatting about the work and so on." }, { "start": 2140.56, "end": 2141.56, "text": " Thanks for inviting us." }, { "start": 2141.56, "end": 2155.92, "text": " Thank you." } ]
Z_kWZpgEZ7w
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Multimodal Neurons in Artificial Neural Networks (w/ OpenAI Microscope, Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "explained", "neural networks", "ai", "artificial intelligence", "paper", "openai", "openai emotions", "openai dalle", "openai clip", "openai microscope", "openai clip microscope", "alec radford", "emotion neuron", "deep learning emotion", "chris olah", "chris olah openai", "neural network feature visualization", "multimodal neural network", "what does a neural network learn", "what do neural networks learn", "how do neural networks work", "what does openai do", "faceted visualization" ]
#openai #clip #microscope OpenAI does a huge investigation into the inner workings of their recent CLIP model via faceted feature visualization and finds amazing things: Some neurons in the last layer respond to distinct concepts across multiple modalities, meaning they fire for photographs, drawings, and signs depicting the same concept, even when the images are vastly distinct. Through manual examination, they identify and investigate neurons corresponding to persons, geographical regions, religions, emotions, and much more. In this video, I go through the publication and then I present my own findings from digging around in the OpenAI Microscope. OUTLINE: 0:00 - Intro & Overview 3:35 - OpenAI Microscope 7:10 - Categories of found neurons 11:10 - Person Neurons 13:00 - Donald Trump Neuron 17:15 - Emotion Neurons 22:45 - Region Neurons 26:40 - Sparse Mixture of Emotions 28:05 - Emotion Atlas 29:45 - Adversarial Typographic Attacks 31:55 - Stroop Test 33:10 - My Findings in OpenAI Microscope 33:30 - Superman Neuron 33:50 - Resting B*tchface Neuron 34:10 - Trash Bag Neuron 35:25 - God Weightlifting Neuron 36:40 - Organ Neuron 38:35 - Film Spool Neuron 39:05 - Feather Neuron 39:20 - Spartan Neuron 40:25 - Letter E Neuron 40:35 - Cleanin Neuron 40:45 - Frown Neuron 40:55 - Lion Neuron 41:05 - Fashion Model Neuron 41:20 - Baseball Neuron 41:50 - Bride Neuron 42:00 - Navy Neuron 42:30 - Hemp Neuron 43:25 - Staircase Neuron 43:45 - Disney Neuron 44:15 - Hillary Clinton Neuron 44:50 - God Neuron 45:15 - Blurry Neuron 45:35 - Arrow Neuron 45:55 - Trophy Presentation Neuron 46:10 - Receding Hairline Neuron 46:30 - Traffic Neuron 46:40 - Raised Hand Neuron 46:50 - Google Maps Neuron 47:15 - Nervous Smile Neuron 47:30 - Elvis Neuron 47:55 - The Flash Neuron 48:05 - Beard Neuron 48:15 - Kilt Neuron 48:25 - Rainy Neuron 48:35 - Electricity Neuron 48:50 - Droplets Neuron 49:00 - Escape Neuron 49:25 - King Neuron 49:35 - Country Neuron 49:45 - Overweight Men Neuron 49:55 - Wedding 50:05 - Australia Neuron 50:15 - Yawn Neuron 50:30 - Bees & Simpsons Neuron 50:40 - Mussles Neuron 50:50 - Spice Neuron 51:00 - Conclusion Paper: https://distill.pub/2021/multimodal-neurons/ My Findings: https://www.notion.so/CLIP-OpenAI-Microscope-Findings-27465eac373c451d8083428443e0837c My Video on CLIP: https://youtu.be/T9XSU0pKX2E My Video on Feature Visualizations & The OpenAI Microscope: https://youtu.be/Ok44otx90D4 Abstract: In 2005, a letter published in Nature described human neurons responding to specific people, such as Jennifer Aniston or Halle Berry. The exciting thing wasn’t just that they selected for particular people, but that they did so regardless of whether they were shown photographs, drawings, or even images of the person’s name. The neurons were multimodal. As the lead author would put it: "You are looking at the far end of the transformation from metric, visual shapes to conceptual... information." We report the existence of similar multimodal neurons in artificial neural networks. This includes neurons selecting for prominent public figures or fictional characters, such as Lady Gaga or Spiderman. Like the biological multimodal neurons, these artificial neurons respond to the same subject in photographs, drawings, and images of their name. 
Authors: Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, Chris Olah Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there and welcome back my dear fellow scholars. Today we're going to look at multimodal neurons in artificial neural networks by Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford and Chris Olah that has appeared in this Distill.pub journal which I think is a pretty cool journal going beyond the classic PDF publishing. So this paper is an investigation into the new CLIP model by OpenAI and specifically the discovery of what they call multimodal neurons in this model. So this is an investigative work. They work with visualizations and I've made a video about both the CLIP model as well as the feature visualizations that have appeared previously. So safe to say what they are claiming as the high-level claim here is that in biology we sort of expect there to be neurons that respond not to individual patterns or to individual words but to concepts. So there could be a concept neuron of Halle Berry as you can see here and that neuron would respond to photographs of Halle Berry, to drawings and sketches of Halle Berry and also to text. So if we see the text, the rasterized text, or we hear the word, that neuron, that same neuron would fire. Now so far in artificial neural networks we had not seen this kind of multimodal perception. So we have seen neurons responding in general to the same class of images because we train them as image classifiers but we have not seen that generalize to other modalities such as drawings or text. What they find in this CLIP model right here is that exactly what we expect in humans, or in general in biological neural networks, happens. So they find for example a neuron that responds to Spider-Man. That is you know photos of Spider-Man in the real world or some person in a Spider-Man costume, drawings of Spider-Man and also text that says spider. So the neuron would always respond to all of these things, the same neuron, and that is a sort of sign that these models have learned to connect the different modalities together. We've already discussed in the CLIP video that the model sort of learns to do OCR so it learns to recognize text because the CLIP model is fundamentally a model that connects images to text and my claim here is going to be that with this addition of text, the model, I think, is very much a text model. So a lot of the connections it makes go via the textual level and a lot of the responses you're going to see here, the visualizations, are going to deal with text rather than with images. So here you can see what this neuron responds to. If you thought it was the spider web here, no there's spider as a text, spider here, spider there, drawings of Spider-Man. So this neuron would respond to all of these things which is pretty pretty cool. So what they do, what they present here, is an overview over the different neurons they find and as I understand it what they have done is they've gone through these neurons and they use their feature visualization technique with every single one of them. So I can show you what that looks like. Here is the OpenAI Microscope and you can find that and this is the exact model they're looking at. So what you can do is you can simply click around in these neurons over here and then these are the visualizations right here. So now the visualizations are twofold. So on the left hand you have channel optimization, on the right hand you have neuron optimization.
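To make channel optimization and neuron optimization concrete: both are activation maximization, i.e., gradient ascent on the input image so that a chosen unit fires strongly. A bare-bones sketch below with a torchvision ResNet as a stand-in backbone; the layer and channel indices are arbitrary assumptions, and the real Microscope visualizations add image parameterizations, transformations, and regularizers that are omitted here:

```python
import torch
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V1").eval()
acts = {}
# hook the target layer so we can read its activations each forward pass
model.layer4.register_forward_hook(lambda m, i, o: acts.update(out=o))

img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
channel = 7  # arbitrary channel index for illustration
for _ in range(200):
    opt.zero_grad()
    model(img)
    # channel objective: mean activation of one channel over all spatial
    # positions (a "neuron" objective would pick a single position instead)
    loss = -acts["out"][0, channel].mean()
    loss.backward()
    opt.step()
```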
We've treated them in a previous video if you want to know how they come about but for now what you should know is that these are images that activate that particular neuron or that particular channel very much. So these images activate this particular thing in the neural network but not other things. So this is a way to see what these neurons respond to heavily. So here you can see on the left you often have kind of pattern structures, on the right you have more kind of individual things in the center. So maybe it's not really clear what this is. So what they also portray is data samples from the ImageNet data set that activate mostly that particular neuron. So you can pretty clearly see that this responds to popsicle ice cream. Now they also have a different data set down here. There is a Flickr Creative Commons and very much the same you see this is kind of ice and ice cream and at the bottom you have text that goes along with it. So here it's not really ice cream so this is a bit of a failure case but you always have to keep in mind that it could also be because of the lack of power in searching for text. So what they do down here is they have a search algorithm that finds pieces of text that that neuron responds to highly. So text that maximizes the dot product. So in the CLIP model you have an image part, you have a text part and you have a dot product at the end. So this is text that when you input it to the text part maximizes the dot product with that particular neuron. So it's not always going to be you know really good text but very often it can give you a hint of what the neuron thinks. Note that this isn't the same text as we're going to see later like the text that you saw in Spider-Man because the text you saw in Spider-Man that was rendered text. So they do a lot of investigation into rendered text because the CLIP model is quite good at responding to rendered text on the image side. Alright so they find, they look at these neurons, literally I think they just click here on the left, boom, and you look at them. So this seems to be like a hamburger pancake neuron and it is, I did this for hours and I'll show you later what I found. This is absolutely fascinating what you'll find here by just clicking through and every now and then you find something like yeah alright but let's get back to the paper first. So the paper they find region neurons so neurons that respond to different regions of the world for example the USA. Now not only do they have this visualization technique for kind of the whole image, they have faceted visualization, so in this paper they introduce faceted visualization, with which they can produce specifically faces that are US, that respond to USA. They can produce specifically indoor things so this is all the same neuron these are images that are made such that they represent indoor scenes and there is an appendix if you want to know how that's done they can trim it to only produce nature pictures that this particular neuron responds to.
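That text search can be approximated by embedding candidate strings with CLIP's text encoder and ranking them by dot product against a direction in the joint embedding space. A sketch using OpenAI's clip package; the random neuron direction is a placeholder (you would take the direction associated with the unit you are probing), and the paper's actual search over text is more involved:

```python
import torch
import clip

model, _ = clip.load("RN50x4", device="cpu")  # one of the CLIP models studied
candidates = ["spider", "Spider-Man", "web", "piano", "ice cream"]
with torch.no_grad():
    txt = model.encode_text(clip.tokenize(candidates)).float()
    txt = txt / txt.norm(dim=-1, keepdim=True)

direction = torch.randn(txt.shape[1])  # placeholder neuron direction
direction = direction / direction.norm()
scores = txt @ direction               # one dot product per candidate
for s, c in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{s:+.3f}  {c}")
```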
So here you can get a much better insight into what the neuron looks at for example if you create faces for the USA this is I don't know I call this one Benjamin Washington because it's a sort of a blend of Ben Franklin and George Washington but in general it's pretty cool so you can even yeah nature you can do pose for North America pose for the US I think that's kind of a GI a pose for Europe I don't know what that is but it doesn't always you know work out super well but they find person neurons so neurons that respond to individual people be that faces be that text so this is Donald Trump be that poses yeah Elvis is also pretty cool I've actually found I don't know if I found the Elvis neuron myself or if I found a different one yeah so they also have emotion neurons which is also pretty cool where they find the neurons that respond to particular emotions so when they make a faceted reconstruction of these neurons and tell it please give me a face this is what comes out and that you know it's just shocking when you do something like a pose for shocked this I think we're only scratching the surface here honestly but you can see the claim here the claim is that the same neuron responds to this picture and to this picture this is supposed to be text you can only guide it you can't you know force it to this picture indoor to this picture so the same neuron will respond to all of these and they call that a multimodal neuron because it represents a concept the concept of being shocked rather than a particular fine-grained pattern which was always the kind of problem so far with these neural networks that they were more looking at you know low level patterns than high level concepts it seems with CLIP, by combining modalities like images and text and by not forcing this constraint like in a classifier into 1000 predefined classes we can gain much more we can go up the hierarchy of features so they have art style they have holiday neurons religion neurons person trait neurons abstract concept neurons the star I found the star I yeah I remember time neurons counting neurons pairs of force they are not always so super good but it clearly goes into the good direction so here they highlight specific things first person neurons so they find neurons that respond for example to Jesus Christ so they would respond to all of these images here on the right you see there are crosses Jesus Christ and so on depictions of Jesus drawings of Jesus and when you ask the model to generate you an image that reconstructs this neuron's activation and you can force it or you guide it to make a face this turns out if you guide it to make a pose this turns out a logo obviously they also have Hitler right here which is also pretty cool though if you click on these things you'll get actually to the microscope thing and this is the one for Hitler and you know I'm not entirely sure that this is the case like I can see you know the kind of mustache thing but if you look at what in the data set activates this one it is a bunch of swastikas but it is also just a bunch of kind of German political stuff but yeah I mean the concept here even if it's not Hitler directly it's pretty pretty cool I yeah also found that domain endings rendered as images will activate the same neuron as the flag of that country and activate the same neuron as like the architecture of that country it is super duper interesting alright so they have these person neurons
which is already cool and they do a case study here for the Donald Trump neuron so the Donald Trump neuron recognizes Donald Trump and then they want to see what images in the data set activate this neuron by how much so they make the claim here that if you for example choose profile pictures of Donald Trump and you see here is the zero line and here are the standard deviations from zero activation so pictures of Donald Trump activate this neuron like 30 times more than it is activated over the whole data set which makes sense if that neuron responds to Donald Trump but it also responds to art images containing Donald Trump by the way these are classified by the authors here they've gone through the images and they've classified them into these categories text containing Donald Trump's name the model also strongly responds with the same neuron right that's the crazy part so a picture with text in it that says Trump activates the same neuron as a profile picture of Trump activates the same neuron as a MAGA hat and activates sometimes the same neuron as political images so if you look at games and music and so on that neuron is very deactivated so not only is it zero it's actually negative which the authors interpreted as sort of being counter to that in the space of all concepts so this paper is full of these kinds of content warnings it might be disturbing and so on which you know you can do but I also find the rest of the paper is kind of a fairly large hedge against certain things and it gets political at times for example when they want to claim that so here on the other hand it most negatively activates to musicians like Nicki Minaj and Eminem video games like Fortnite civil rights activists like Martin Luther King Jr.
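The "standard deviations from zero activation" scale in that plot is just the neuron's activation standardized over the whole dataset. A small sketch, assuming you have already collected the unit's scalar activation for every image:

```python
import torch

def activation_in_stds(acts_subset: torch.Tensor,
                       acts_dataset: torch.Tensor) -> float:
    # how strongly a subset (e.g., Trump profile pictures) drives the unit,
    # measured in standard deviations of the dataset-wide activation
    mu, sigma = acts_dataset.mean(), acts_dataset.std()
    return ((acts_subset.mean() - mu) / sigma).item()
```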
and LGBT symbols like rainbow flags so the games and the Fortnite here yes we can see that but if you click on this and they have four images of this you can see that it's activated at relatively low, like negative, magnitudes which is correct then it is also almost equally activated over here at high magnitudes so like I see the point you're trying to make but I mean if you are in the political sphere this is not you have to not interpret this as meaning that these things are kind of aligned but you have to interpret it as these things will appear together often which you know one can definitely understand in this case so here they search for profile pictures of other people including Donald Trump himself and they plot how much these profile pictures of other people activate the Trump neuron and you can see that for example well yeah Pence activates this neuron by quite a bit I think yeah the selection here is you know up to the authors of course but it's fairly interesting to see that Clinton Cruz and Obama activated more than Hitler and almost as much as Steve Jobs for some reason so I'm not entirely sure what you can make of this but it's definitely interesting, on this side, to observe the multimodality of pictures just the fact that text drawings symbols of that campaign and profile pictures will all activate the same neuron that is fairly impressive they go on and they identify emotion neurons so again there's a content warning by the way also here so here they identify a neuron that responds to surprise or shock and you can see that all of these pictures on the right will activate that neuron so there are faces being shocked there are horses being shocked and there is rendered text saying like WTF OMG and so on again I think we've gone through this, this is the shocked one. There are also secondary neurons that, let's say, help the primary emotion neurons so here you can see an overview over the different emotion neurons they have found and it is pretty stunning so here they ask them obviously to create a face when they constrain them not constrain they guide them towards making poses by the way the way you guide them is they train linear probe classifiers on separate data sets so they would train a classifier on a face data set to distinguish all faces from all non faces and then use that classifier to sort of guide this reconstruction process that's how you can sort of choose to end up with a face or with a pose or with a piece of text so as you can see it's pretty pretty cool that even the text that comes out of this reconstruction process these aren't real images right these are kind of reconstructed to activate those neurons like for evil you can see that there's devil and Satan for shocked it's like OMG for crowd for happy it's happy if you look at the poses for happy for serious evil is particularly cool incarcerated rejected I think this is absolutely cool there is the NSFW there is erotic there are erotic neurons and if I click on this it will show now if you click on this absolutely nothing not safe for work will happen I promise I don't promise but you know I've tried it it's fine I will not click on it because if this model thinks that's not safe for work the YouTube algorithm will think it's not safe for work so but what I can tell you is that if you go on that neuron and you go click through it to go to the microscope and you look at what ImageNet pictures respond to that
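The guidance trick just described, using a linear probe (say, face vs. non-face) trained on the model's own features to steer the reconstruction, amounts to adding the probe's logit to the activation-maximization objective from the earlier sketch. Below is just the combined loss; probe_w, probe_b and the feature vector are assumptions wired up by the surrounding optimization loop:

```python
import torch

def faceted_loss(neuron_act: torch.Tensor, feats: torch.Tensor,
                 probe_w: torch.Tensor, probe_b: torch.Tensor,
                 lam: float = 1.0) -> torch.Tensor:
    # maximize the target unit AND the probe's facet logit (e.g., "face"),
    # so the reconstruction both excites the neuron and stays on the facet
    facet_logit = feats @ probe_w + probe_b
    return -(neuron_act + lam * facet_logit)
```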
There is also NSFW, there is erotic: there are erotic neurons. And if I click on this... if you click on this, absolutely nothing not-safe-for-work will happen, I promise. Well, I don't promise, but I have tried it, it's fine. I will not click on it, because if this model thinks something is not safe for work, the YouTube algorithm will think it is not safe for work. What I can tell you is that if you go to that neuron, click through to the Microscope, and look at which ImageNet pictures respond to that neuron heavily, you will find out that ImageNet is not the really clean dog-breed dataset that you might have known.

Alright, they found other neurons corresponding to silly facial expressions, like duck faces and tongue showing and so on, which is pretty neat. And they find this neuron that corresponds to mental illness, where the reconstruction is just amazing, just mind-baffling: nature kind of always looks the same, but "mental illness", let's say faces... it is crazy how this model connects things, and it connects these things to books and writings about sadness, mental health, anxiety and so on. Now, do I think the model understands what a mental illness is? No, I don't think so. I think, much like GPT-3, it has learned to statistically associate things, and I think that happens via the textual input. In CLIP, for every image you have a piece of text, and I think the connection between the topics happens on the textual level, because the text descriptions are shared between images. There will be images of people cowering like this, being sad, and the textual description would be something like "mental illness", "anxiety", "sadness"; and for these pictures of books the descriptions would be similar, I mean, one of them is literally called "Overcoming Anxiety". So if the picture is of a book, and the description says what is on the picture, obviously that text will be connected. I think that is how it learns to connect things, via the text, and I think this thing is in large part a text model.

Here they do the same study for images that are associated with mental illness: depression pictures and sad pictures and anxiety pictures activate it strongly, also depressing jokes, while music and sports are negatively activated, and so on. So you can see that, I think via the text, the model can learn how different concepts, different things, different patterns are connected to one another.

They have region neurons, which I find pretty cool. They discover neurons that respond when you show them a crop of this world map: the neuron will flare up. This red neuron here reacts to these pieces of text, and it reacts to them when they are rendered into images. If you render the word "American" in an image and give it to the network, that neuron will flare up; and the same neuron will flare up if you show it a crop of this region of the map, which is crazy. Again, I think the connection happens in the textual domain, but it is still crazy.

You can have it do face facets for these different regions. If you go over here, the neuron that responds to this blue area responds to the rendered words Mumbai, Singh, Pakistan, Afghanistan, Bangladesh, and if you make reconstructions that activate that neuron, you get these kinds of pictures, which is fairly cool. The same here for Europe; this is kind of European, and yeah, that looks like home. Do check this out a bit for yourself, it is immensely cool. They even find secondary regional neurons that are not exactly regional but also respond to crops of this map. They highlight this entrepreneur neuron, which responds to the words "entrepreneur" and "entrepreneurial", and the reconstructions kind of look like company logos a little bit, I guess. The neuron that responds to the word entrepreneur lights up when you show it the west coast of the US, kind of the California region.
Interestingly, it also lights up when you show it the west coast of the southern part of the African continent, which is cool and definitely unexpected. I am not informed enough to know whether or not there is significant entrepreneurial drive going on there; it could also be that the model simply confuses the two west coasts, since in a crop they look similar. Could be; I don't know, so maybe I'm wrong. It is also interesting that only these regions light up for this particular neuron, so I have my doubts whether that is just a lucky cherry-pick. I am not saying it is cherry-picked, but it is the kind of thing you can stumble upon and then make something of, or not. They have more case studies of African subdivisions of this kind.

Let's go down here. Here is where they discuss that they can also produce text for the text side of CLIP: the maximal text aligned with an image, or rather with a neuron, is what you are going to see at the bottom of the Microscope pages.

Lastly, they make a sparse code out of the main neurons they find, and they try to build more complex emotions from them, for example "jealous". They do claim that it makes a bit of sense: jealous is champion plus hug plus grumpy minus crying. I am not exactly sure that makes super much sense. Bored is relaxing plus grumpy, maybe. Intimate is soft smile plus heart minus sick; you can probably make something out of that. Powerful is lightning miracle plus evil plus yoga; that is definitely the case. Do check it out, it is very interesting to look at some of those things, even though I think it often does not make terribly much sense. Stressed being success plus mental disorder plus pink objects, maybe. But it is not claimed that this is an absolute thing; it is more an investigation into these networks.

If you lay them out on a 2D surface, you can see that these emotion neurons come pretty close to an atlas of human emotion: when they use just two factors, they roughly reconstruct the canonical mood axes used in much of psychology, valence and arousal. So you can divide these emotions along two axes: valence, which is good versus bad, and arousal, which is how strong the emotion is. I tried to find the axes here: top to bottom there is mad, angry, hostile... maybe not; insecure, inspired, aroused, awful, sad, well these are all over the place; hostile is here, appalled is here, horrified is here, and happy is over here in the middle, maybe with creative, so it might not be exactly axis-aligned. You can also divide it into seven factors, with which they nearly reconstruct a well-known categorization of these emotions into happy, surprised, bad, disgusted, fearful and angry, except with disgusted switched for a new category related to affection that includes valued, loving, lonely and insignificant.
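Just to make the mechanics of such a sparse code tangible, here is a hedged sketch of how one could recover a "jealous = champion + hug + grumpy - crying"-style decomposition with an L1-regularized regression. The activation matrix and the neuron names are hypothetical stand-ins, and this is not necessarily the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_decomposition(acts, target_idx, neuron_names, alpha=0.1, top_k=4):
    """Express one neuron's activations over a set of images as a sparse
    combination of the other neurons' activations.
    acts: (n_images, n_neurons) activation matrix (hypothetical)."""
    X = np.delete(acts, target_idx, axis=1)      # all other neurons
    y = acts[:, target_idx]                      # the neuron to explain
    coef = Lasso(alpha=alpha).fit(X, y).coef_    # L1 penalty -> sparse weights
    other_names = [n for i, n in enumerate(neuron_names) if i != target_idx]
    top = np.argsort(-np.abs(coef))[:top_k]      # largest weights first
    return [(other_names[i], float(coef[i])) for i in top]

# e.g. sparse_decomposition(acts, names.index("jealous"), names) might return
# [("champion", 0.8), ("hug", 0.5), ("grumpy", 0.4), ("crying", -0.3)]
```

The two-factor mood layout would fall out of the same matrix in a similar way, for example from a rank-two PCA over the emotion neurons' activations.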
Alright, so this next piece is really funny. Given CLIP, you can build a classifier: since the model connects images to text, you feed in one image together with a bunch of candidate texts, and whichever text it responds to most strongly is the class. So if you provide the class labels as text, you can build a zero-shot classifier, and the CLIP paper demonstrated that this works well. Here they do exactly that: they have this apple right here, and the label is correctly "apple". But if they just slap a sticker on it that says "iPod", the CLIP model switches to "iPod". And here, yeah, here is where I really think that this model is a textual model: it responds very heavily to rendered text. It responds to this "iPod library", and this "iPod" that looks like something I bought off Craigslist last week. It works almost every single time; you just slap a label on it. That tells me that the text might be too dominant in these models: they will connect the caption with rendered text in the image, and that is a very strong signal for what is in the image. This is only zero-shot, though. If you switch this to a linear probe, that is, if you actually train a linear classifier on the representation of CLIP, then these attacks do not work anymore. This is going back to the old-school deep learning approach, where you actually train a classifier; once you train it, it picks up on other features, and then the attack fails.
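For reference, here is a minimal sketch of such a zero-shot classifier, assuming OpenAI's open-source `clip` package and a ViT-B/32 checkpoint (the article itself investigates a ResNet variant); the image path and labels are placeholders:

```python
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32")
image = preprocess(Image.open("apple.jpg")).unsqueeze(0)  # placeholder image
labels = ["an apple", "an iPod", "a library"]
text = clip.tokenize([f"a photo of {label}" for label in labels])

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    # Cosine similarity: normalize both embeddings, then take dot products.
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    sims = (img_feat @ txt_feat.T).squeeze(0)

print(labels[sims.argmax().item()])  # an "iPod" sticker on the apple can flip this
```

The linear-probe defense would instead freeze `encode_image`, embed a labeled training set, and fit an ordinary logistic regression on those features.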
Alright, so they evaluate this on a large scale. They cannot always slap a physical label on things, so they just fill the image with rendered text, and that usually gets the classifier confused fairly well. They also do this with the Stroop test, which you can do with humans, and which is fairly difficult if you do it at high speed. They discover that the model basically pays no attention whatsoever to the color of the word; it pays much more attention to what the word says. That is strange, right? You would think that for a neural network, recognizing the color here is easy: it needs to filter out the white pixels and then just average the remaining ones, and it gets the correct answer. Whereas recognizing that this says "green" is much more difficult. But the model was trained to connect text and images, images which often have text in them, so it has learned to do OCR, basically. In the DALL-E video I claimed that DALL-E has learned to do reverse OCR, and people correctly pointed out that this is more aptly called writing. But I love "reverse OCR"; I am going to call writing "reverse OCR" from now on. So again, this is evidence for the claim that this is mostly a textual model.
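If you want to poke at this yourself, generating Stroop stimuli is a couple of lines of PIL, and you can then feed them through the zero-shot classifier sketched above with labels like "red text" versus "green text" to see which answer the model picks. The rendering here uses the default bitmap font and is purely illustrative:

```python
from PIL import Image, ImageDraw

def stroop_image(word, ink_color, size=(224, 224)):
    """Render a color word in a (possibly mismatching) ink color."""
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    draw.text((size[0] // 4, size[1] // 2), word, fill=ink_color)
    return img

congruent = stroop_image("red", "red")      # word and ink agree
incongruent = stroop_image("green", "red")  # word and ink conflict
# Classifying `incongruent` against ["red text", "green text"] tests
# whether the model reads the word or sees the ink color.
```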
And now I want to show you what I found. If you are not in the mood, I have all of this in a Notion page which I will link down below; I will just show you some of the interesting stuff here. Sometimes it is multimodal, sometimes it is not. We were already here and just clicked around, but now I want to show you the good stuff.

This is a Superman neuron that I found. It responds, as you can see, to symbols of Superman in the ImageNet dataset: Superman drawings, Superman comics, "Superman" spelled out and rendered, and so on. This is exactly what the article was about, except now it is Superman, not Spider-Man. This one I call the resting bee face neuron: it responds to people being slightly annoyed, as you can see here.

This is trash bags: it responds to trash bags, which is pretty cool, and not to any kind of bag, specifically trash bags. Even if they are not black; there are a couple in there that are not. There are even trash cans, like dump containers right here, that have no bag in sight, and still the neuron responds. (Sorry about that noise; for some reason you might not want to have something in your pockets.) So, fairly cool. Oh, there is a tree in there; it is not always perfect, but these are the dataset examples that most excite that neuron. You can also see that the associated text is not always good, though. If the text here is not super good, it might be more an effect of the method used to search for text: text is of course not a continuous signal, so it is fairly hard to search for text that maximizes some activation; otherwise we could build GANs for text very easily, which we still cannot.
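To see why that search is awkward, here is a toy version of the discrete problem: text has to be searched combinatorially, for example greedily token by token, rather than optimized by gradient descent. The scoring function below reuses the hedged `clip` setup from before and just scores one coordinate of the text embedding as a stand-in for whatever neuron or direction you actually care about; the tiny vocabulary is obviously a placeholder:

```python
import torch
import clip

model, _ = clip.load("ViT-B/32")

def unit_score(text, unit):
    """Activation of one coordinate of the text embedding, standing in
    for the neuron/direction one would actually want to maximize."""
    with torch.no_grad():
        return model.encode_text(clip.tokenize([text]))[0, unit].item()

vocab = ["spider", "ice", "cream", "trash", "bag", "superman"]  # toy vocabulary
phrase, unit = [], 42
for _ in range(3):  # greedily grow the phrase one word at a time
    best = max(vocab, key=lambda w: unit_score(" ".join(phrase + [w]), unit))
    phrase.append(best)
print(" ".join(phrase))
```

A real version would need a full vocabulary and something smarter than greedy search, which is exactly why the text shown under each neuron is hit-or-miss.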
This one here I have titled "Strength and Allah and weightlifting", and I am aware this is not actual iconography of Allah, but it is pretty cool as an image. If you look at which dataset samples it responds to, it is all weightlifting, all weights. And if you go down here to the other dataset, you see why I brought Allah into it: there are also rendered names, the rendered word "Allah", the Quran, symbols of Islam. And the text that the search finds goes like "hammer workout", "prophet", "iron gym", "the brutal workout of God". So a pretty cool neuron, honestly. And it also responds to this, which I don't even know what it is; is that Hindu or Buddhist imagery? So cool.

These are organs; this is an organ neuron, I hope you can see that. It also responds to the rendered text "control", which I don't know what to make of, also "canal" and "viral", but also to drawings: you can see here a drawing of a heart, and for some reason also chins. So it is not always super clear what a neuron does. In fact, and I believe these are crops of ImageNet samples, not entire pictures, if you look at which examples most excite a neuron, it will most often be rendered text ("control", "CTRL"): no matter the neuron, most neurons actually pay attention to rendered text rather than to images. The ones I have selected are the ones that do not.

But if you just go and click on some random neuron, we can actually try it, and it is probably going to fail. This one looks pretty cool actually: it responds to printers. Yep, the demonstration effect fails horribly. How about this one? Maybe you don't exactly know what it is at first, so you look at the examples: it primarily responds to the text "miss", I guess "mss", "I miss you", "Mississippi" and so on. "Mississippi", having it in there twice, gets a pretty heavy response. And most of the time you will find something like this: a neuron that responds very much to rendered pieces of text in images.

These are film spools, and not only does it respond to film spools, but also to things like "director", "screening", popcorn, movie-theater labeling, "showing", "Hollywood", "cinemas"; there is also "entertainment". So the multimodality again; this is a phenomenon that exists because we introduced the text, and the model can connect concepts on the text level. This is feather patterns and leaf patterns; even when it is in coffee you see the feather and leaf patterns, and even when it is a drawing it will still respond.

This one is strange: it responds to rendered words like "Trojan" and "Spartans", but it also has a lot of people doing squats, as you can see, and fighting; so it is a bit of a warrior neuron. Ah, of course: it is because of these Spartan runs, these sporting events are called like that. So it connects the workout with the Spartan-run kind of thing, and then it connects Troy and so on, again via the text, because otherwise it would make no sense to connect these things with weightlifting. I hope you are fairly convinced by now. We are going to go a bit faster now, because the video is already too long.

This one here is the letter E: it responds, again, to rendered text with an E. This one here is cleaning: it responds to cleaning products and cleaning things. This one here is frown: frowning, frowning, grumpy faces. Lion: responding to lions, rendered text of "lions", team names called Lions, and so on. Fashion model; by the way, the labels are mine, I just looked at the neurons and decided what they are; you can see there are a lot of these kinds of runway shots. Baseball stadium, so cool: these are kind of top views of baseball stadiums, but it responds a lot to things saying "park", PNC Park, AT&T Park, but also to home-team park lights and baseball dugouts and even players; I have seen some players, logos of teams, depictions of actual baseballs. Immensely cool. Here, bride; this is bride, you can see it. This one, what do you think it is? Navy. Super cool that it can connect these ropes with the emblems and the rank tags, and it connects them to rendered text saying "Navy": these are the crops of images that it responds to, Navy officers, Navy gravestones. So cool.

This one, okay, for this one I also had to look at the pictures and the text going along with them. This is hemp, but it is also kind of Goa patterns, and it is also, for some reason, "turn" or "earn", and it is also Hendrix. This is not even Jimi Hendrix, right, this is definitely connected to these Goa shirts; but there are also pictures of Jimi Hendrix, which I guess you can understand, and there is "turn" again, and there is Bob... no, this is Bob Marley, sorry, Bob Marley. So it connects all of these things. Staircase: here, for some reason, it also responds to the rendered text "human" and to staircases, and there is this thing here which I am not sure about; it has "human" in it, but it is also arranged like a staircase, so maybe that is why it responds extra strongly.

Yeah, the Disney neuron; this is a Disney neuron, how cool is this? These are the samples it responds to: simply something saying "Disney", the Mickey Mouse ears, the Minnie bow, the castle, right, the Disney castle. Immensely cool. This is the Hillary Clinton neuron; you can see this is Hillary, and the images it responds to are "Hillary", "Hill", "pill", "Polly", "pills"; so maybe it is more like an "-ill-" neuron, but it does pick out Hillary Clinton as well. ImageNet, of course, is older than at least one of Hillary's campaigns, I am not sure. This is God; I found this one, and it is, yeah, God. The reconstruction process is not very good at generating text, maybe because they have a lot of priors in it. If you look at the reconstruction article, and they do reconstruct text in that article, it is still not super clear; maybe it has to do with the architecture.
This here is blurry; it is just the concept of blurry. You look at the images and they are often blurry, and if you look at the text going along with them, it is all "blurry blurry blurry blurry". Cool: it is not even about what is on the image, you can clearly see that this comes from the description. This is hand-drawn arrows, or arrows in general; this looks like my videos now, right? It recognizes arrows, specifically these kinds of colored arrows. This one, what does it do? This is presenting a trophy; you see this one here in the middle: these are all people presenting something, holding something in their hand and showing it, like fishermen or diplomas. This one I was amazed by: this is a neuron responding to receding hairlines. It responds to receding hairlines; how cool is that? This is traffic, tents and so on: it responds to tents and traffic and crowds of people. This one is raised arms, but also pancakes; pancakes and raised hands, for some reason there is a connection, but I mean, these models will still overload a neuron when they can.

This one, how cool is that: this is the Google Maps neuron. These are reconstructions, not samples, and you can see it clearly has the street labels and the pins on it; a Google-Maps-like neuron. So cool. This one I call nervous smile; you can maybe see that. Here is Elvis; this is the Elvis neuron. I know it also looks a bit like Hendrix, and the things it connects it to... that is not Elvis, that is not Elvis, that is Kiss; okay, maybe it is not exactly Elvis, maybe it is more like a pop-star neuron: Elvis, Billy Elliot. This one is the Flash, right, that is the Flash, and the cool thing is it responds to images saying "flash". Beards: responds to beards, generally lots of beards. Kilts: kilts and bagpipes, responds to kilts and bagpipes. Rainy: this is a neuron that responds to things that are rainy, rainy days; you can see here, out the window it is raining, rainy windows, so cool. This is flash and electricity: you will see these symbols of lightning flashes, but also kind of electric hair curling up, droplets; how cool does that look? That is just cool, and the occasional ImageNet reconstruction thing where there must be half a dog face in there, which is just trippy.

This one is escape; okay, escape, look at that. To connect these things... how long would you need without contrastive learning? Well, I guess it works as long as you have images and labels, but still. King: this is king, the depictions are crowns, but it responds to renderings of "king". This is nation, how cool is that; it responds to "country, country, country". Oh, it is country, not nation, but still. This one responds to overweight men; there is a neuron that responds to faces of overweight men. This one is wedding. This one is Australia, and the cool thing here is that it responds to rendered domain names of Australia, like the Australian top-level domain. Mind blown. This is yawning or screaming. Here we have the same neuron for bees and the Simpsons: bees and the Simpsons. This is muscles and seafood. And lastly, spices; spices and other powdery things. Don't ask too many questions.

Hmm, alright, so that was it for me for today. I have many more of these that are linked in a Notion description somewhere; go check it out, and please try this out yourself. I have not yet looked through all of them; there are literally thousands of these units, and this is just one of the models they have available. Go look, and share the best ones you find on our Discord. Alright, that was it. Thanks for listening. Bye bye.
the text here isn't super good it" }, { "start": 2107.2000000000003, "end": 2112.1200000000003, "text": " might more be an effect of this method to search text because text is of course" }, { "start": 2112.1200000000003, "end": 2117.92, "text": " not a continuous signal so it's fairly hard to search text that maximizes some" }, { "start": 2117.92, "end": 2123.7200000000003, "text": " activation otherwise we could build GANs for text very easily which we still" }, { "start": 2123.7200000000003, "end": 2133.52, "text": " can't this one here I've titled this strength and a law and weightlifting" }, { "start": 2133.52, "end": 2142.36, "text": " which I'm aware of this is not you know iconography of a law however so this is" }, { "start": 2142.36, "end": 2147.28, "text": " pretty cool as an image right now if you look at what in the data set what" }, { "start": 2147.28, "end": 2154.28, "text": " samples it responds to it's kind of all weightlifting it's all weights so this" }, { "start": 2154.28, "end": 2161.52, "text": " is weight weight and if you go down here to the other data set this is why I" }, { "start": 2161.52, "end": 2167.8, "text": " called it sort of a law because you have also rendered names like the the" }, { "start": 2167.8, "end": 2173.32, "text": " rendered Allah you have the Quran you have symbols of Islam and if you go to" }, { "start": 2173.32, "end": 2180.56, "text": " the text that it searches goes like hammer workout prophet prophet Zana in" }, { "start": 2180.56, "end": 2188.6800000000003, "text": " lumber iron gym the brutal workout of God so you know pretty cool neuron" }, { "start": 2188.6800000000003, "end": 2194.56, "text": " honestly and you know that it that responds with this I don't even I don't" }, { "start": 2194.56, "end": 2201.2400000000002, "text": " even know what what that is is that is that Hindu imagery or Buddhist imagery" }, { "start": 2201.24, "end": 2209.16, "text": " so cool these are organs this is an organ neuron I hope like you you can see" }, { "start": 2209.16, "end": 2215.24, "text": " that and it responds to the render text of control I don't know what to make of" }, { "start": 2215.24, "end": 2224.3199999999997, "text": " it also canal viral but also to drawings you can see here a drawing of a heart" }, { "start": 2224.3199999999997, "end": 2231, "text": " for some reason also chins so it's not always super duper clear what a neuron" }, { "start": 2231, "end": 2236.36, "text": " does in fact most of these neurons you will find if you go look at what image" }, { "start": 2236.36, "end": 2240.12, "text": " net sound and these I believe these are crops of image net samples not entire" }, { "start": 2240.12, "end": 2247.94, "text": " pictures so if you look at what by the way control and CTRL if you look at what" }, { "start": 2247.94, "end": 2252.24, "text": " examples most often it will be rendered text so that the image that no matter" }, { "start": 2252.24, "end": 2257.28, "text": " what neuron most neurons actually pay attention to rendered text rather than" }, { "start": 2257.28, "end": 2263.0800000000004, "text": " to images the ones I've selected are the ones that do not but if you just go and" }, { "start": 2263.0800000000004, "end": 2267.44, "text": " click on some random neuron we can actually try and it's certainly going to" }, { "start": 2267.44, "end": 2276.6800000000003, "text": " probably fail this one looks pretty cool looks pretty cool actually that responds" }, { "start": 2276.6800000000003, "end": 2283.6000000000004, 
"text": " to printers yep demonstration effect fails horribly how about this one yeah" }, { "start": 2283.6, "end": 2290.04, "text": " so you can see that you know maybe you don't exactly know what that is so you" }, { "start": 2290.04, "end": 2294.72, "text": " want to look at what so here you see that it primarily responds to the text" }, { "start": 2294.72, "end": 2302.44, "text": " miss I guess mss I miss you Mississippi and so on you know Mississippi having" }, { "start": 2302.44, "end": 2308.12, "text": " it twice in there that got a respond pretty pretty heavily and most of the" }, { "start": 2308.12, "end": 2311.2799999999997, "text": " time you'll find something like this that it responds very much to the" }, { "start": 2311.28, "end": 2319.52, "text": " rendered pieces of text in images these are film spools and so not only does it" }, { "start": 2319.52, "end": 2327.4, "text": " respond to film spools but also to things like director screening popcorn" }, { "start": 2327.4, "end": 2335.2000000000003, "text": " the kind of movie theater labeling showing Hollywood cinemas there's also" }, { "start": 2335.2000000000003, "end": 2340.6000000000004, "text": " entertainment so you know the multimodality again this this is a this" }, { "start": 2340.6, "end": 2343.92, "text": " is a phenomenon because we introduced the text and it can connect it on the" }, { "start": 2343.92, "end": 2350.2, "text": " text level this is feather patterns and leaf patterns so even when it's in coffee" }, { "start": 2350.2, "end": 2357.96, "text": " you see the feather and leaf patterns even when it's a drawing it can it will" }, { "start": 2357.96, "end": 2368.56, "text": " still respond this one is strange so this responds to things like Sparta and" }, { "start": 2368.56, "end": 2379.52, "text": " front and Troy but so that it responds to rendered front Trojan Spartans front" }, { "start": 2379.52, "end": 2386.2, "text": " and it also has a lot of people doing sort of squats as you can see so and and" }, { "start": 2386.2, "end": 2391.72, "text": " fighting so this is kind of an iron so this is a bit of kind of a warrior" }, { "start": 2391.72, "end": 2396.88, "text": " neurons you can see oh there's lots of ah of course it's because of these" }, { "start": 2396.88, "end": 2402.12, "text": " Spartan runs and all they're called like this right these kind of sporting events" }, { "start": 2402.12, "end": 2409.78, "text": " I see Roman frontside Roman Roman so it connects the workout with the Spartan" }, { "start": 2409.78, "end": 2415.6, "text": " workout kind of division and then it connects the Trojan and so on via again" }, { "start": 2415.6, "end": 2420.28, "text": " via the text because it makes no sense to connect like the vodka and the and" }, { "start": 2420.28, "end": 2426, "text": " the weightlifting maybe so yeah I hope I hope you're fairly convinced by now" }, { "start": 2426, "end": 2430.64, "text": " we're gonna know a bit faster now because the videos already too long but" }, { "start": 2430.64, "end": 2438.08, "text": " this one here is the letter E so it's e it responds again to rendered text of E" }, { "start": 2438.08, "end": 2443.48, "text": " this one here is cleaning so it responds to cleaning products and cleaning things" }, { "start": 2443.48, "end": 2450.68, "text": " this one here is frown so this is frowning frowning frowning grumpy face" }, { "start": 2450.68, "end": 2462.24, "text": " grumpy face lion lion responding to lions rendered text of lions team names" }, { "start": 
2462.24, "end": 2471.6, "text": " called lions and so on fashion model fashion model a bit by the way the" }, { "start": 2471.6, "end": 2475.64, "text": " labels are mine I just looked at them and decided what they are but you can" }, { "start": 2475.64, "end": 2484.2799999999997, "text": " see like there's a lot of these kind of runway shots here baseball stadium so" }, { "start": 2484.2799999999997, "end": 2488.3199999999997, "text": " cool so these are kind of top views of baseball stadium but it responds a lot" }, { "start": 2488.3199999999997, "end": 2496.08, "text": " to things saying park PNC park AT&T park but also kind of home team park lights" }, { "start": 2496.08, "end": 2501.68, "text": " and and baseball dugouts and even players I've seen some players logos of" }, { "start": 2501.68, "end": 2509.52, "text": " teams baseball depictions of actual baseballs immense immensely cool here" }, { "start": 2509.52, "end": 2522.68, "text": " bride this is bride you can see this is bride this one what do you think this" }, { "start": 2522.68, "end": 2528.3599999999997, "text": " one is Navy so super cool that it can I kind of connect these ropes with the" }, { "start": 2528.36, "end": 2536.48, "text": " emblems the the kind of your tags so and it connects it to render text saying" }, { "start": 2536.48, "end": 2545.08, "text": " Navy right so these are the crops of images that it responds to Navy of fish" }, { "start": 2545.08, "end": 2556, "text": " like officers Navy gravestones yeah so cool this one okay this for this I also" }, { "start": 2556, "end": 2561.68, "text": " had to look at sort of the pictures here and the text going along with it this is" }, { "start": 2561.68, "end": 2570.2, "text": " hemp but it is also kind of goa patterns it is also for some reason turn or earn" }, { "start": 2570.2, "end": 2578.12, "text": " it is also Hendrix so this isn't even Jimi Hendrix right like this this is" }, { "start": 2578.12, "end": 2584.52, "text": " definitely connected to these goa shirts there is also there's pictures of Jimi" }, { "start": 2584.52, "end": 2593, "text": " Hendrix which I guess you can understand there is also turn again" }, { "start": 2593, "end": 2602.68, "text": " whereas there's Bob no this is Bob Marley sorry this Bob Marley yeah so so" }, { "start": 2602.68, "end": 2608.92, "text": " it connects these things staircase and here for some reason also responds to" }, { "start": 2608.92, "end": 2617.2000000000003, "text": " text rendered human and to staircases and here I have I don't know why but" }, { "start": 2617.2000000000003, "end": 2620.6, "text": " there's there's this thing which I'm not sure so it has human in it but it is" }, { "start": 2620.6, "end": 2627.44, "text": " also arranged like a staircase so maybe that's why it responds extra extra yeah" }, { "start": 2627.44, "end": 2633.48, "text": " the Disney neuron this is a Disney neuron how cool is this how cool is this" }, { "start": 2633.48, "end": 2639.4, "text": " so you can clearly see that that but then it you know Disney these are the" }, { "start": 2639.4, "end": 2643.52, "text": " samples that it responds to simply something saying Disney the Mickey Mouse" }, { "start": 2643.52, "end": 2655.96, "text": " ear the mini bow no immensely cool the castle right the Disney castle this is" }, { "start": 2655.96, "end": 2663.64, "text": " the Hillary Clinton neuron you can see this is Hillary and the images it" }, { "start": 2663.64, "end": 2672.76, "text": " responds to is Hillary Hill pill Polly Hill 
pills so this is maybe it's more" }, { "start": 2672.76, "end": 2680.76, "text": " like the LL why the IL why neuron but it it does pick out Hillary Clinton as" }, { "start": 2680.76, "end": 2689.5200000000004, "text": " well yeah so image net of course is older than at least one of Hillary's" }, { "start": 2689.5200000000004, "end": 2695.96, "text": " campaigns I'm not sure this is God so I found this one this is yeah God if you" }, { "start": 2695.96, "end": 2701.4, "text": " so the reconstruction process it's not very good at generating text maybe" }, { "start": 2701.4, "end": 2705.6800000000003, "text": " because so they have a lot of priors in that if you look at the reconstruction" }, { "start": 2705.68, "end": 2711.68, "text": " article you can probably and they do this in in this article they reconstruct" }, { "start": 2711.68, "end": 2715.8399999999997, "text": " text but it's still not super clear maybe it has to do with the architecture" }, { "start": 2715.8399999999997, "end": 2720.72, "text": " this here is blurry it's just the concept of blurry so you look at the" }, { "start": 2720.72, "end": 2724.96, "text": " images they're kind of often blurry and if you look at the text going along with" }, { "start": 2724.96, "end": 2732.04, "text": " it it's all like blurry blurry blurry blurry blurry blurry blurry blurry cool" }, { "start": 2732.04, "end": 2736.44, "text": " like it's not even what's on the image but you can clearly see like this comes" }, { "start": 2736.44, "end": 2740.92, "text": " from the other description this is hand-drawn arrows or arrows in general" }, { "start": 2740.92, "end": 2749.32, "text": " this looks like my videos now right like this recognizes arrows is specifically" }, { "start": 2749.32, "end": 2758.24, "text": " a you know kind of color re arrows this one what does it do this is presenting a" }, { "start": 2758.24, "end": 2762.3199999999997, "text": " trophy you see this one here in the middle this is kind of so these are all" }, { "start": 2762.3199999999997, "end": 2766.7599999999998, "text": " you know people presenting some kind of thing holding some kind of thing in" }, { "start": 2766.7599999999998, "end": 2776.4799999999996, "text": " their hand showing it like fishermen or diplomas this one I was amazed by this" }, { "start": 2776.4799999999996, "end": 2783.8799999999997, "text": " is a neuron responding to receding hairlines like it responds to receding" }, { "start": 2783.88, "end": 2793.6400000000003, "text": " hairlines how cool is that how cool is that this is traffic tent and so on so" }, { "start": 2793.6400000000003, "end": 2803, "text": " it responds to tents and traffics and crowds of people this one is raised arms" }, { "start": 2803, "end": 2809.6400000000003, "text": " but also pancakes so pancakes and raised hands for some reason there's a" }, { "start": 2809.64, "end": 2814.16, "text": " connection no but I mean these these models they still overload when they can" }, { "start": 2814.16, "end": 2819.3599999999997, "text": " this one how cool is that this is the Google Maps neuron these are" }, { "start": 2819.3599999999997, "end": 2822.44, "text": " reconstructions these are not samples these are reconstructions you can see" }, { "start": 2822.44, "end": 2828.64, "text": " it's clearly it has kind of the street labels and the pins on it so this is a" }, { "start": 2828.64, "end": 2841, "text": " Google Google Maps like neuron what so cool this one I call nervous smile you" }, { "start": 2841, "end": 2853.2799999999997, 
"text": " can maybe see that it's like yeah here's Elvis this is the Elvis neuron I know it" }, { "start": 2853.28, "end": 2859, "text": " sort of it also looks like Hendrix a bit but the things it connects it to is" }, { "start": 2859, "end": 2865.48, "text": " that's not Elvis that's not Elvis kiss okay maybe it's not exactly Elvis maybe" }, { "start": 2865.48, "end": 2874.84, "text": " it's more like a pop star neuron yeah maybe it's not Elvis only Elvis Billy" }, { "start": 2874.84, "end": 2882, "text": " Elliot this one is the flash right that's the flash and the cool thing is" }, { "start": 2882, "end": 2892.4, "text": " it responds to images saying flash what okay beards response to beards" }, { "start": 2892.4, "end": 2900.24, "text": " generally beards lots of beards kilts kilts and bagpipes response to guilt" }, { "start": 2900.24, "end": 2906.56, "text": " kilts and bagpipes rainy this is a neuron that responds to things that are rainy" }, { "start": 2906.56, "end": 2914.24, "text": " rainy days so you can see here out the window it's raining rainy windows so" }, { "start": 2914.24, "end": 2920.68, "text": " cool this is flash and electricity so you will see like symbols these symbols" }, { "start": 2920.68, "end": 2931, "text": " of these flashes but also kind of electric hair curling up droplets how" }, { "start": 2931, "end": 2936.04, "text": " cool does that look like that's just cool and the occasional image net" }, { "start": 2936.04, "end": 2941.36, "text": " reconstruction thing where there must be like half a dog face in there that is" }, { "start": 2941.36, "end": 2951.36, "text": " just trippy this one is this one is escape okay escape like look at that like" }, { "start": 2951.36, "end": 2958.32, "text": " to connect these things how long would you like without contrastive learning" }, { "start": 2958.32, "end": 2967.32, "text": " how well I guess if as long as you have images and labels but still king this is" }, { "start": 2967.32, "end": 2974.5800000000004, "text": " king so the depicted are crowns but response to renderings of King this is" }, { "start": 2974.5800000000004, "end": 2982, "text": " nation how cool is that nation response to country country country oh it's" }, { "start": 2982, "end": 2991.6, "text": " country not nation but still this one response to overweight men there's a" }, { "start": 2991.6, "end": 3002.72, "text": " neuron that responds to over phases of overweight men this one is wedding this" }, { "start": 3002.72, "end": 3011.4, "text": " one is Australia and the cool thing here is that it responds to rendered domain" }, { "start": 3011.4, "end": 3019.2000000000003, "text": " names of Australia like the top-level domain of Australia what mind-blown this" }, { "start": 3019.2000000000003, "end": 3034.32, "text": " is yawning or screaming well I think you know like here we have a same neuron" }, { "start": 3034.32, "end": 3046.84, "text": " for bees and the Simpsons bees and the Simpsons this is muscles and seafood and" }, { "start": 3046.84, "end": 3058.32, "text": " lastly spices spices and other powdery things you know don't ask too many" }, { "start": 3058.32, "end": 3065.52, "text": " questions hmm alright so that was it for me for today I have many more that are" }, { "start": 3065.52, "end": 3072.48, "text": " linked in a notion description somewhere go check it out please try out this I've" }, { "start": 3072.48, "end": 3075.52, "text": " not yet looked through all of them there are so many there are literally" }, { "start": 3075.52, 
"end": 3079.0800000000004, "text": " thousands of these units and this is just one of the models they have" }, { "start": 3079.0800000000004, "end": 3084.52, "text": " available go look and share you know on our discord you know the best ones you" }, { "start": 3084.52, "end": 3089.8, "text": " find alright that was it thanks for listening bye bye" } ]
hQEnzdLkPj4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Learning To Classify Images Without Labels (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "ethz", "clustering", "self-supervision", "self-labeling", "entropy", "dot product", "representation learning", "cnns", "convolutional neural network", "deep cluster", "nce", "noise contrastive estimation", "unsupervised", "overcluster", "imagenet", "cifar10", "nearest neighbors" ]
How do you learn labels without labels? How do you classify images when you don't know what to classify them into? This paper investigates a new combination of representation learning, clustering, and self-labeling in order to group visually similar images together - and achieves surprisingly high accuracy on benchmark datasets. OUTLINE: 0:00 - Intro & High-level Overview 2:15 - Problem Statement 4:50 - Why naive Clustering does not work 9:25 - Representation Learning 13:40 - Nearest-neighbor-based Clustering 28:00 - Self-Labeling 32:10 - Experiments 38:20 - ImageNet Experiments 41:00 - Overclustering Paper: https://arxiv.org/abs/2005.12320 Code: https://github.com/wvangansbeke/Unsupervised-Classification Abstract: Is it possible to automatically classify images without the use of ground-truth annotations? Or when even the classes themselves, are not a priori known? These remain important, and open questions in computer vision. Several approaches have tried to tackle this problem in an end-to-end fashion. In this paper, we deviate from recent works, and advocate a two-step approach where feature learning and clustering are decoupled. First, a self-supervised task from representation learning is employed to obtain semantically meaningful features. Second, we use the obtained features as a prior in a learnable clustering approach. In doing so, we remove the ability for cluster learning to depend on low-level features, which is present in current end-to-end learning approaches. Experimental evaluation shows that we outperform state-of-the-art methods by huge margins, in particular +26.9% on CIFAR10, +21.5% on CIFAR100-20 and +11.7% on STL10 in terms of classification accuracy. Furthermore, results on ImageNet show that our approach is the first to scale well up to 200 randomly selected classes, obtaining 69.3% top-1 and 85.5% top-5 accuracy, and marking a difference of less than 7.5% with fully-supervised methods. Finally, we applied our approach to all 1000 classes on ImageNet, and found the results to be very encouraging. The code will be made publicly available. Authors: Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, Luc Van Gool Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Check out these clusters of images right here, and just have a look at how all of them are pretty much showing the same object. So here are balloons, here are birds, here are sharks or other fish. These are images from the ImageNet dataset, and you can see that these clusters are pretty much the object classes themselves: there are all the frogs right here, all the people that have caught fish. The astonishing thing is that these clusters have been obtained without any labels of the ImageNet dataset. Of course, the dataset has labels, but this method doesn't use them; it learns to classify images without labels. So today we're looking at this paper, Learning to Classify Images Without Labels, by Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans and Luc Van Gool. On a high level, they have a three-step procedure: first, they use self-supervised learning to get good representations; second, they do a clustering, a sort of nearest-neighbor-based clustering on top of those representations, but in a special way; and third, they do a refinement through self-labeling. So if you know what all of these are, you basically understand the paper already, but there are a few tricky steps in there, and it's pretty cool that at the end it works out like you just saw. Before we dive in, as always, if you're here and not subscribed, then please do, and if you like the video, share it out, and leave a comment if you feel like commenting. Cool. So, as already stated, the problem they pose is: is it possible to automatically classify images without the use of ground-truth annotations, or even when the classes themselves are not known a priori? Now, you might think this is outrageous: how can you classify when you don't even know what the classes are? So here is how you have to imagine it going forward (they don't explicitly explain it): if you have a dataset and you learn to classify it, what that basically means is that you cluster it, right? You put some of the data points into the same clusters. And, of course, the same dataset would also have an actual ground-truth labeling: this would be class 0, this here maybe class 1, and this here class 2. Now, you can't possibly know what the classes are called, or which one is the first and which the second. So at test time, if you have a method like this that doesn't use labels, you're going to be as generous as possible in the assignment and say: look, if I assign this cluster to class 0, this one to class 2 and this one to class 1, and I just carry over the labels, what would my accuracy be under that labeling? You're as generous as possible with the assignment of the labels. That's how the evaluation works, and that's what you have to keep in mind: we're developing an algorithm that gives us a clustering of the data, and if that clustering partitions the data in the same way as the actual labeling, the one with the test labels, then we consider it a good algorithm. OK. So they say: in this paper, we deviate from recent works and advocate a two-step approach (it's actually a three-step approach), where feature learning and clustering are decoupled.
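To make that "as generous as possible" assignment concrete, here is a minimal sketch of the standard clustering-accuracy evaluation. The one-to-one matching of cluster IDs to class IDs that maximizes agreement is found with the Hungarian algorithm from SciPy; the function name and the toy example are mine, not from the paper's code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred, num_classes):
    """Best-case accuracy over all one-to-one matchings of
    predicted cluster IDs to ground-truth class IDs."""
    # Confusion matrix: rows = predicted cluster, columns = true class.
    confusion = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        confusion[p, t] += 1
    # The Hungarian algorithm minimizes cost, so negate to maximize matches.
    rows, cols = linear_sum_assignment(-confusion)
    return confusion[rows, cols].sum() / len(y_true)

# A clustering that mirrors the classes perfectly, just with permuted IDs,
# scores 100% under this metric.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([2, 2, 0, 0, 1, 1])
print(clustering_accuracy(y_true, y_pred, num_classes=3))  # 1.0
```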
OK, why decouple feature learning from clustering? They argue as follows (this part is a wall of text in the paper). What you could do, and what people have done, is simply cluster the data; who says you can't use clustering algorithms? But then the question is: what do you cluster by? You need a distance. If I have points in 2D, it makes sense to use the Euclidean distance here, but if I have images of cats and dogs and whatnot, then the Euclidean distance between the pixels is really not a good measure. So you might think: we could use a deep neural network, send the image through it, and then take one of the hidden states, for example the last one, as the hidden representation, and do the clustering with that. But then, of course, the question is: which neural network do you take, and how do you train it? There have been a few approaches, such as DeepCluster, which formulate an objective for that neural network. You first send a bunch of images through, which gives you points in an embedding space. And the reasoning goes: if this neural network were used to classify images, you would have a classification head on top, and a classification head, say a five-class one, is nothing else than a set of linear classifier boundaries placed on top of this hidden representation. So if you were to use this neural network for classification, it must be possible to draw linear boundaries between the classes, and therefore distances like the inner product or the Euclidean distance must make sense in that space. They don't make sense in pixel space, but they must make sense in the hidden representation space, because what you're going to do with that space is exactly linear classification; the last classification head of a neural network is just a linear classifier. So the conclusion is: in this space, you should be able to cluster by Euclidean distance. What DeepCluster does is roughly this: you start off with a random neural network and get the representations, then you cluster these representations, then you self-label the images with the cluster assignments, and you alternate these steps of clustering and finding better representations. (I'm way oversimplifying the technique right here.) The underlying claim is that the CNN itself acts as a prior, because its translation invariance works very well for natural images, so the CNN itself will lead to good representations if you do it this way, and they do have some good results there. But this paper argues that if you do that, the algorithm tends to latch onto very low-level features. Say the pixel on the bottom right of an image is blue, and the network, just by chance, puts two images with a blue bottom-right pixel close together.
Then in the next step, the algorithm will cluster them together because they're close, and it will feed back that these two should be in the same class, so the network will focus even more on that blue pixel. So the procedure is very, very dependent on initialization, and it can jump super easily onto low-level features that have nothing to do with the high-level task you're ultimately trying to solve, which is to classify these images. What this paper does is eliminate this tendency of such methods to produce neural networks that focus on low-level features. And how do they do that? By representation learning. Representation learning, which you might know as self-supervised learning, is the task they solve in the first step of their pipeline. So let's go through this. This right here is an image, and T is a transformation of that image. In self-supervised learning, there are several ways you can transform an image. For example, you can random-crop it: you just cut out a piece right here and scale it up to be as large as the original image. Or you can use data augmentation, which means you take the image and distort it somehow; so if there is, I don't know, a cat right here, now there's a very squiggly cat (OK, I'm terrible at drawing). You can rotate it, for example, so it's like this. All of these, including the crops, are in the set of transformations T. So you transform the image in some way, and then you send your original image (that one should be red) and the transformed image through a neural network, each one by itself. And then you say: the hidden representations of the two should be close to each other. This is basically the self-supervised training task, and it's been shown to work very, very well as a pre-training method for classification neural networks. You have an image and its augmented version, and you minimize the Euclidean distance, or maximize the similarity, between the two versions in the hidden space. The rationale is exactly the same as before: this hidden space should be linearly classifiable, so the two versions should be close in it. And the rationale for these transformation tasks is: well, if I flip the image, the network cannot focus on the pixel on the bottom right anymore, because that's not going to be the pixel on the bottom right after the flip, and I'm not always going to flip in the same direction. And sometimes I'm going to crop, so it also can't rely on the pixel on the bottom right, because in the crop that pixel might be out here, not even in the crop. Basically, what you're doing with these self-supervised methods is deliberately destroying this low-level information: you build a pipeline, by coming up with tasks like these, that deliberately excludes that information from being used. I think that's what's going on generally in self-supervised learning. OK, so this here, as you can see, is the neural network that you train.
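To make this concrete, here is a minimal sketch of such a two-view consistency objective. This particular form is the SimCLR-style normalized-temperature cross-entropy (NT-Xent), one common instantiation of this family; it is meant as a representative example, not necessarily the exact pretext loss used in this paper.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive two-view loss: z1[i] and z2[i] are embeddings of two
    augmentations of the same image; all other rows act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, D)
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))         # exclude self-similarity
    n = z1.shape[0]
    # The positive for row i is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage sketch: z1, z2 = encoder(augment(x)), encoder(augment(x))
# loss = nt_xent(z1, z2); loss.backward()
```

Pulling two views of the same image together while pushing all other images away is exactly what destroys the low-level cues discussed above: no single pixel survives all the augmentations.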
Concretely: you send both images, the original and the augmented version, through the same neural network, and then you minimize some distance between them in this embedding space, usually the Euclidean distance, or equivalently you maximize the inner product. And what you train, as you can see right here, are the parameters of this neural network: the transformations are fixed or sampled, the distance measure is fixed, and you train the network such that your embeddings do well on this task. Now, this is nothing new; it has been used for a couple of years now to get better representations. Self-supervised learning is a thing. But they basically say: we can use this as the initialization step for the clustering procedure, because if we don't, we focus on those low-level features. And notice: you don't need any labels for this procedure; that's why it's called self-supervised. OK, so the second part is the clustering. Now they cluster, but they don't just cluster these representations directly; that doesn't perform very well in their experiments. What they do instead is minimize this entire objective right here, and we'll go through it step by step. They train a new neural network. The network from step one, the one that gives you the embedding, is called Phi_theta: from X, Phi_theta gives you a representation of X, let's call it h(X). That's the self-supervised part. In step two, you train an entirely new neural network, this Phi_eta here, with the same architecture, and I think they initialize one with the other. But now you train it to do the following: again, you want to maximize the inner product, see, right here, between two things (that's the same idea as before, where we minimized the distance between two things; with the dot product as the measure, you maximize it). And the two things are two images that go through the same neural network, as before, this and this. What's different here is that we input one image X of the dataset, same as before, but where the self-supervised task used an augmented version of X, we now input something else: this k right here, where k comes from N(X), the neighbor set of X. And these neighbors are determined with respect to the first neural network. So what you do after step one is: you take your neural network with the good embeddings, and here is your dataset X, which is basically the list of all the images in your dataset. You take all of them, embed them with that trained network into the latent space right here, the space where you did the self-supervised training. And now, for each image X_i, you find its K nearest neighbors in that space; I think they use five as the default. So you find its five nearest neighbors, and you do this for each image.
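A sketch of that neighbor-mining step: embed the whole dataset with the frozen pretext network and keep each image's top-K neighbors by cosine similarity. For a large dataset you would use an approximate index such as Faiss; a plain similarity matrix, with everything in one batch, is shown here only for clarity, and the names are mine.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mine_neighbors(encoder, images, k=5):
    """Indices of the k nearest neighbors of every image in the
    embedding space of the (frozen) pretext encoder."""
    z = F.normalize(encoder(images), dim=1)   # (N, D), assumes one big batch
    sim = z @ z.t()                           # (N, N) cosine similarities
    sim.fill_diagonal_(float("-inf"))         # an image is not its own neighbor
    return sim.topk(k, dim=1).indices         # (N, k) neighbor indices
```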
So this image has these five nearest neighbors. In step two, what you're trying to do is pull together each image and its nearest neighbors; not in the first embedding space directly, but you determine which images are nearest neighbors using the first network and then keep that fixed. That's what the first task determines, and that gives you the set N(X_i) for each X_i. In the second step, you make the representations of each image and its nearest neighbors closer to each other. So with this term right here, you maximize the inner product between X after the new neural network and a nearest neighbor of X, where "nearest neighbor" was determined after the first task. Now, the way they cluster here is not just by putting things into an embedding space as we saw before. This new neural network, as you can see here, outputs a C-dimensional vector with entries in [0, 1], where C is the number of classes. You could know that number (you don't have labels, but you might know how many classes there are), or you could just guess it; and as long as you overguess, you can still build superclusters later. They simply say the output is in [0, 1], but they also say it performs a soft assignment, so we're going to assume it's normalized. So for each data point X, you take the image, put it through this new neural network, and it gives you basically a histogram: say, class one, two or three, if we guess there are three classes, with an assignment over the three. And you also take a nearest neighbor: here is your dataset, you look up the set N(X) and take a nearest neighbor from it; maybe that's a dog (I really can't draw a dog; that's the best I can do, I'm sorry). You put that through the same network, and since the two were nearest neighbors in task one, they must share some sort of interesting high-level features, because that's what the first task was for. Therefore, you want to make them close together in the eyes of this new network, which will give you an assignment that looks maybe like this. And you train this network to match these two distributions. So this is now a classifier into C classes, but we guessed C and we don't have labels; our label is simply going to be "my neighbors from the first task must have the same label as me". Now, they also have this second term right here, which is an entropy over assignments, and the signs take a moment to untangle: they minimize the overall objective; the first term has a negative in front of it, which means they maximize this log inner product; and for the entropy term, note that entropy is the negative sum of P log P, where P here is the probability that an image is assigned to cluster c, averaged over the entire dataset.
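For reference, the objective under discussion, equation two in the paper, reads as follows, up to notation (this is my reconstruction from the description above, so take the exact form with a grain of salt):

$$
\Lambda \;=\; -\frac{1}{|\mathcal{D}|}\sum_{X \in \mathcal{D}} \sum_{k \in \mathcal{N}_X} \log\big\langle \Phi_\eta(X),\, \Phi_\eta(k) \big\rangle \;+\; \lambda \sum_{c \in \mathcal{C}} \Phi_\eta'^{\,c} \log \Phi_\eta'^{\,c},
\qquad
\Phi_\eta'^{\,c} \;=\; \frac{1}{|\mathcal{D}|}\sum_{X \in \mathcal{D}} \Phi_\eta^{c}(X).
$$

The second sum is exactly the negative entropy of the mean cluster assignment, so minimizing this objective maximizes both the neighbor consistency (first term) and the entropy of how the dataset is spread over the clusters (second term).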
So the term they add to the objective is that negative entropy. Let's see what they say: we include an entropy term, the second term in equation two, which spreads the predictions uniformly across the clusters C. So what we want over the whole dataset is a uniform assignment over clusters, which means we should maximize the entropy; and since the objective contains the negative entropy, minimizing the objective does exactly that. So this term counteracts the degenerate solution where the whole dataset ends up in cluster one rather than being spread over clusters one, two and three: the more evenly spread the dataset's assignment distribution is, the higher the entropy and the lower the negative entropy, and that's the goal right here. (I'm sorry, I was briefly confused by the many negative signs.) And then you minimize the entire thing. All right. Now they say another thing right here: this bracket denotes the dot product operator, and as we saw, it's the dot product between these two assignment distributions. The first term in equation two imposes that the network make consistent predictions for a sample X_i and its neighboring samples, the neighbors of X_i. And here is an interesting claim: note that the dot product will be maximal when the predictions are one-hot, meaning confident, and assigned to the same cluster, meaning consistent. So they basically say the objective encourages confidence, because it pushes predictions to be one-hot, and consistency, because the distributions need to agree, to be in the same cluster. Now, I agree with the consistency part: if you make the inner product of two of these histograms high, then of course they agree on their predictions; these are ultimately vectors. But at first I disagreed that this encourages anything to be one-hot. In my mind, if you have two vectors that are both (0, 1), the inner product is 0 times 0 plus 1 times 1, which is one; and if you have two assignments that are both (0.5, 0.5), the inner product is 0.5 times 0.5 plus 0.5 times 0.5, which is... 0.5. Am I dumb? An embarrassingly long time later: oh, it's because of the L1 norm. OK, we got it; I was too dumb. I was thinking of these vectors being normalized in L2 space, where their inner products with themselves would always be one. But of course, if you have assignments between classes and it's a probability distribution, a histogram, then all the possible assignments lie on the L1 simplex. The inner product of a vector with itself is the square of its length, and a vector that points at one class or the other is longer than a vector that points in between. OK, I see; that's where they get the "must be one-hot" from. So, OK, I'll give that to them.
It is actually encouraging one-hot predictions, as long as these things are normalized in L1 space, which they are, because they're histograms. Yes, that was dumbness on my part; I was trying to make a counterexample, and the counterexample turned out to be a counterexample to my counterexample. So, as you can see, they are of course correct here. Now they make the first experiments. They show that after the first step, the self-supervised training, they can already retrieve nearest neighbors, and the nearest neighbors of these images right here are the ones you see on the right. After only the self-supervised step, these nearest neighbors are already crazily good at sharing high-level features: this flute here appears in different sizes, the fishes aren't all exactly the same, likewise the birds. So you can see it really focuses on higher-level features, though I guess it's quite dependent on the pretext task. They also investigate this quantitatively, but I just want to highlight how good this already is after only the self-supervised part. Now they do the clustering, and at this point they could already evaluate, because after this step they've pulled the neighbors together and they have a neural network that assigns classes. They are going to evaluate that, but it's not good enough yet, so they add a third step: fine-tuning through self-labeling. Self-labeling is pretty much exactly what it says: you label your own data with your own classifier. Now, that might sound a bit outrageous; you might object: wait a minute, if I label my own data and learn a classifier on these labels, isn't it just going to come out the same? And the answer is no, for two reasons. First, if your classifier just happens to be some boundary like this, and you label your data with it and learn a new classifier, the new one is going to look more like this, because many classifiers maximize the distances, the margins, between the classes. Second, there are points where the model is more confident, such as this one, that one and this one, while this one here is pretty close to the boundary, so we're not so sure about it. So you only use the points you are in fact confident about to learn the new classifier; you could also weigh them, but they go by a confidence threshold, as you can see in the final algorithm. So this is the entire algorithm (and I just got kicked out of my view; there we go): semantic clustering by adopting nearest neighbors, their SCAN algorithm. In the first step, you do the pretext task, the self-supervision: optimize the neural network with task T; that's just self-supervised representation learning. In the second step, you determine the nearest-neighbor set for each X. In that step, they also augment the data; they do heavy data augmentation and so on.
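Before going through the rest of the algorithm, here is a minimal PyTorch sketch of the clustering loss discussed above (equation two), which the training loop in the next part optimizes. `probs` are the softmax cluster assignments of a batch and `neighbor_probs` those of one mined neighbor per sample; the entropy weight plays the role of the λ in the objective and is a hyperparameter, so its default here is illustrative, not the paper's value.

```python
import torch

def scan_loss(probs, neighbor_probs, entropy_weight=5.0):
    """probs, neighbor_probs: (B, C) soft cluster assignments (rows sum
    to 1) for a batch of images and one mined neighbor per image."""
    # Consistency: maximize the dot product between each sample and its
    # neighbor, i.e. minimize the negative log inner product.
    inner = (probs * neighbor_probs).sum(dim=1)            # (B,)
    consistency = -torch.log(inner.clamp(min=1e-8)).mean()
    # Entropy: push the *mean* assignment toward uniform, so the network
    # cannot dump the whole batch into a single cluster.
    mean_probs = probs.mean(dim=0)                         # (C,)
    neg_entropy = (mean_probs * torch.log(mean_probs.clamp(min=1e-8))).sum()
    return consistency + entropy_weight * neg_entropy
```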
In the third step, the self-labeling, they also do data augmentation; there are a lot of tricks in here, but ultimately the base algorithm goes like this. You find the neighboring sets for each X. Then, while your clustering loss decreases, you update the clustering neural network with the loss we saw, the one that makes nearest neighbors closer to each other while still keeping the entropy high. And after you've done this, you go: while the length of Y increases (what's Y? Y is the set of all data points whose confidence is above a certain threshold), you filter the dataset down to those points, and you fine-tune the same neural network with the cross-entropy loss on your own labels (a sketch of this step follows below). So these are not ground-truth labels: it's the cross-entropy between the network's assignments and its own confident assignments, filtered by confidence; I think they use a threshold of 0.7 or so. Now let's go into the experiments. They do some ablations to find out where the gains of their method come from, and we'll go through them quickly. If they just do the self-supervision at the beginning and then run k-means clustering on top of that, it gives them 35.9% accuracy on CIFAR-10. Not very good; you can't just cluster on top of these representations and be done. If they do what they call the sample-and-batch-entropy loss, which basically means you don't use any nearest-neighbor information and only make an image's prediction close to that of itself and its augmentations, it also doesn't work; I wouldn't pay too much attention to whether those numbers are 10, 20 or 30 percent, it just doesn't work. But if you use the SCAN loss, all of a sudden you're in a regime with actual signal, significantly above random guessing. If you use strong data augmentation, you get ten percent more; as I said, a lot of this rests on tricks, on which data augmentations you do and so on, and never forget that these papers, besides their idea, put in all the tricks they can. And if you then do the self-labeling step, you get another ten percent more, and that is fairly respectable: 83.5% without ever seeing labels. Of course, there are only ten classes right here, so keep that in mind; they will do ImageNet later. They also investigate which kind of self-supervision task at the beginning is important, comparing things like RotNet, feature decoupling and noise-contrastive estimation, and noise-contrastive estimation is the best. NCE, I think, is just where, as we said, you input an image and noisy versions of it, augmented in various ways, and you classify them together; these methods have been very successful in the last few years. Yeah, so they have various investigations into their algorithm. I want to point out this one here: this is the accuracy versus confidence after the complete clustering step.
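As promised above, a sketch of the self-labeling step: keep only the samples whose maximum predicted probability clears the threshold, take the argmax as a pseudo-label, and fine-tune with cross-entropy. In the paper this is combined with strong augmentation, which I sketch as a second, strongly augmented view of the same batch; the function and variable names are mine.

```python
import torch
import torch.nn.functional as F

def self_label_step(model, images, images_strong, threshold=0.7):
    """One fine-tuning step on confidently self-labeled samples;
    images_strong is a strongly augmented view of the same batch."""
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        mask = confidence > threshold          # keep only confident samples
    if mask.sum() == 0:
        return None                            # nothing confident enough yet
    logits = model(images_strong[mask])
    return F.cross_entropy(logits, pseudo_labels[mask])
```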
Back to that accuracy-versus-confidence plot, which motivates the third step, the self-labeling. You can see right here that as the confidence of the network goes up, the actual accuracy goes up as well. So the network, after the clustering, really is more confident about the points it classifies more accurately; there's a correlation between where the network is confident and the actual label of the point, which is remarkable, because it has never seen the labels. But also notice how small the range is: with the standard augmentation, it goes only from about here to here. So where you set that threshold is fairly important, and it might be quite brittle, because you need to set it such that some points are below it and some above, and you don't want to pull in points from down here, where you'd only have the correct label for 75 percent or so of them; if you self-label and train on those, you're going to learn the wrong signal. So this step seems fairly brittle to me, honestly, but I don't know. They go on and investigate various things, such as how many clusters you need, or how many nearest neighbors, this number K. You can see that with zero neighbors you do a lot worse than with, say, five nearest neighbors; the jump from zero to five is fairly high in all the datasets, but after that it doesn't really matter much. So five nearest neighbors seem to be enough for most things. And here they show that when they remove the false positives, their algorithm actually converges to the correct clustering, the correct accuracy, which is not surprising: if you remove the samples that are wrong, the rest of the samples are going to be right. I think this just shows that it doesn't go into some kind of crazy downward spiral, but still, it's kind of funny. OK, so they investigate how much they improve, and they improve by quite a lot over previous methods, and the comparison includes things like k-means, GANs, and the DeepCluster approach we spoke about. Their method already gets fairly good accuracy, 88.6% on CIFAR-10, and that's fairly remarkable without seeing the labels. But let's go on to ImageNet. ImageNet, of course, has way more classes: 1,000 classes, compared to CIFAR-10's ten. If you think clustering ten classes, which are fairly far apart from each other, might work with various techniques, ImageNet with 1,000 classes is way more difficult. So they first subsample it to 50, 100 and 200 classes, and they get decent accuracy: 81% for 50 classes, where a supervised baseline would get 86%, and 69% for 200 classes, where a supervised baseline would get 76%. So it's getting there, and that's quite remarkable for these lower numbers of classes. They also find that if they look for the samples that lie closest to the middle of their cluster, they get these prototypes right here. You can see all of these images; if you know ImageNet, some of its images really only show a part of the object. Here, with the prototypical examples, you really get clear, centered shots of the object, with clearly visible features and so on.
So this again confirms that the clustering really does pick up on that sort of semantic information. Of course, the labels here are from the test label set; the network can't figure those out. And then they go for 1,000 classes, and at 1,000 classes it doesn't really work anymore, because there might just be too many confusions. But they do show the confusion matrix of their method, and it is pretty much block-diagonal along these super-clusters: you can see the network confuses the dogs with each other fairly often, and the insects with each other, but not really across those groups. Which is still quite remarkable — but I mean, you get the same thing for a lot of these methods, so I don't know how different this would be for other methods. Certainly it's interesting to look at. Now, they go into one last thing, and that is: what if we don't know how many clusters there are? What if we don't know anything? So far, they say, we have assumed knowledge of the number of ground-truth classes, and the model predictions were evaluated using the Hungarian matching algorithm — we already saw this in DETR by Facebook, if you remember. However, what happens if the number of clusters no longer matches the number of ground-truth classes? Table 3 reports the results when they overestimate the number of ground-truth classes by a factor of two — so now they build 20 clusters for CIFAR-10 instead of 10. And we'll look at Table 3 real quick. Where's Table 3? This is Table 3. OK, so when they over-cluster, you get the rows at the bottom, and you can see there is a drop in accuracy. Now, what they don't actually say is how they do the matching in the over-clustered case. If I now have, I don't know, six clusters, but I need to assign them to three classes, do I still use this most optimistic matching, where you assign everything to its best-fitting class — you compute all the permutations and then give it the best benefit of the doubt? If you imagine the situation where I over-cluster to the point that every image is its own cluster, and I evaluate my clustering with the most beneficial view, then I would get 100 percent accuracy. So with this over-clustering approach, I would sort of expect that you actually get a better score, because there is more generosity from the matching algorithm involved. Now, that's counteracted by the fact that you can no longer group together things that obviously share features, because they belong to the same class — so there are two forces pulling here. But I was kind of astounded that the score goes down, and in my opinion this matching-based evaluation sort of breaks down when you have more clusters than classes. Still, it's interesting to see that you can just overshoot, and you then need some sort of heuristic to reconcile that.
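For reference, here is a small sketch of the matching-based evaluation under discussion, using SciPy's `linear_sum_assignment` for the Hungarian step. The function name and the contingency-matrix construction are mine, not the authors' exact script.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(pred_clusters, true_labels, n_clusters, n_classes):
    # Contingency matrix: how often predicted cluster p coincides with class t.
    counts = np.zeros((n_clusters, n_classes), dtype=np.int64)
    for p, t in zip(pred_clusters, true_labels):
        counts[p, t] += 1
    # Hungarian matching: the one-to-one cluster-to-class mapping that credits
    # as many samples as possible -- the "best benefit of the doubt".
    rows, cols = linear_sum_assignment(counts, maximize=True)
    return counts[rows, cols].sum() / len(true_labels)
```

Note that with a strict one-to-one matching like this, over-clustering leaves some clusters unmatched (a rectangular matrix only matches min(n_clusters, n_classes) pairs), which is one way the accuracy can drop; the "100% if every image is its own cluster" scenario above instead assumes a more generous many-to-one assignment.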
In any case, I think this paper is pretty cool. It brings together a lot of things that were already present and introduces this three-step approach. (And by the way, there are lots of samples down here.) But what you have to keep in mind is that there are a lot of hyperparameters in here: the confidence threshold, the number of classes, the architectures and so on. All of this has been tuned to get these numbers really high; all of these steps, all of the chosen data augmentations and so on, have been chosen to get this number as high as possible. So interpreting this as "oh look, we can classify without knowing the labels" — yes, in this case, but the hyperparameter choices of the algorithm are all informed by the labels. It is still very, very unclear how this method will actually work when you really don't have the labels, when you have to choose the hyperparameters in the absence of anything. And yeah, I think the future might tell, if they continue to work on this. All right, thanks for listening, looking, watching, and bearing with me through my wrestling with various basic math in this video. I wish you a good day, and bye bye.
[ { "start": 0, "end": 10, "text": " Hi there! Check out these clusters of images right here. And just have a look at how all of them are pretty much showing the same object." }, { "start": 10, "end": 19, "text": " So here's balloons, here's birds, here's sharks or other fish. These are images from the ImageNet dataset." }, { "start": 19, "end": 26, "text": " And you can see that these clusters are pretty much the object classes themselves." }, { "start": 26, "end": 33, "text": " There's all the frogs right here, all the people that have caught fish." }, { "start": 33, "end": 41, "text": " So the astonishing thing about this is that these clusters have been obtained without any labels of the ImageNet dataset." }, { "start": 41, "end": 50, "text": " Of course, the dataset has labels, but this method doesn't use the labels. It learns to classify images without labels." }, { "start": 50, "end": 69, "text": " So today we're looking at this paper, Learning to Classify Images Without Labels, by Wouter von Ganzbecke, Simon Vandenhende, Stamatios Georgoulis, Mark Prozémans and Luke van Gaal." }, { "start": 69, "end": 83, "text": " And on a high-level overview, they have a three-step procedure. Basically, first, they use self-supervised learning in order to get good representations." }, { "start": 83, "end": 95, "text": " Second, they do a clustering. So they do a sort of k-nearest neighbor clustering, but they do clustering on top of those things." }, { "start": 95, "end": 104, "text": " But they do it in a kind of special way. And then third, they do a refinement through self-labeling." }, { "start": 104, "end": 110, "text": " So if you know what all of these are, you basically understand the paper already." }, { "start": 110, "end": 118, "text": " But there's a bit of tricky steps in there. And it's pretty cool that at the end it works out like you just saw." }, { "start": 118, "end": 128, "text": " So before we dive in, as always, if you're here and not subscribed, then please do. And if you like the video, share it out." }, { "start": 128, "end": 133, "text": " And leave a comment if you feel like commenting. Cool." }, { "start": 133, "end": 144, "text": " So as we already stated the problem, they ask, is it possible to automatically classify images without the use of ground truth annotations?" }, { "start": 144, "end": 149, "text": " Or even when the classes themselves are not known a priori?" }, { "start": 149, "end": 157, "text": " Now, you might think that this is outrageous. How can you classify when you don't even know what the classes are and so on?" }, { "start": 157, "end": 168, "text": " So the way you have to imagine it going forward, and they don't explicitly explain it, but it's assumed that if you have a dataset," }, { "start": 168, "end": 176, "text": " and you learn to classify it, what basically that means is you cluster it, right?" }, { "start": 176, "end": 181, "text": " You put some of the data points in the same clusters." }, { "start": 181, "end": 193, "text": " And then, of course, the dataset, I'm going to draw the same dataset right here, the same dataset would have an actual classification thing." }, { "start": 193, "end": 198, "text": " So this would be class 0, this here may be class 1, and this here might be class 2." }, { "start": 198, "end": 205, "text": " Now, you can't possibly know how the classes are called or something, which one is the first, which one is the second." 
}, { "start": 205, "end": 213, "text": " So at test time, basically, if you have a method like this that doesn't use labels, what you're going to do is you're basically going to find," }, { "start": 213, "end": 218, "text": " you're going to be as generous as possible in the assignment of these and say," }, { "start": 218, "end": 225, "text": " oh, look, if I assign this here to cluster 0 and this here to cluster 2 and this here to cluster 1," }, { "start": 225, "end": 232, "text": " and I just carry over the labels, what would my accuracy be under that labeling?" }, { "start": 232, "end": 238, "text": " So you're as generous as possible with the assignments of the labels." }, { "start": 238, "end": 242, "text": " So that's how it's going to work, right? That's what you have to keep in mind." }, { "start": 242, "end": 247, "text": " We're basically developing an algorithm that gives us this kind of clustering of the data." }, { "start": 247, "end": 254, "text": " And then if that clustering partitions the data in the same way as the actual labeling would," }, { "start": 254, "end": 262, "text": " the actual labeling with the test labels, then we think it's a good algorithm." }, { "start": 262, "end": 267, "text": " OK, so they claim they have a..." }, { "start": 267, "end": 272, "text": " OK, in this paper, we deviate from recent works and advocate a two-step approach." }, { "start": 272, "end": 279, "text": " And it's actually a three-step approach, but where feature learning and clustering are decoupled." }, { "start": 279, "end": 286, "text": " OK, why is that? So they argue what you could do, what people have done is..." }, { "start": 286, "end": 289, "text": " And I'm going to..." }, { "start": 289, "end": 295, "text": " Well, this is just a wall of text. So what you could do is you could just basically cluster the data." }, { "start": 295, "end": 298, "text": " Like who says you can't use clustering algorithms?" }, { "start": 298, "end": 303, "text": " And then the question is, what do you cluster them by? Like you need a distance." }, { "start": 303, "end": 309, "text": " So if I have points in 2D, it sort of makes sense to use the Euclidean distance here." }, { "start": 309, "end": 317, "text": " But if I have images of cats and dogs and whatnot, then the Euclidean distance between the pixels is really not a good thing." }, { "start": 317, "end": 322, "text": " But also, so you might think we could actually..." }, { "start": 322, "end": 328, "text": " We could use a deep neural network and then basically send the image, that's the image right here," }, { "start": 328, "end": 334, "text": " send the image through the deep neural network and then either take this last state right here." }, { "start": 334, "end": 337, "text": " So it goes through and through and through." }, { "start": 337, "end": 342, "text": " And we could get take either of the hidden states or we could just take, you know, the last state," }, { "start": 342, "end": 347, "text": " that is the sort of hidden representation right here and do the clustering with that." }, { "start": 347, "end": 352, "text": " But then of course, the question is, what do you, which neural network do you take?" }, { "start": 352, "end": 355, "text": " How do you train that neural network?" 
}, { "start": 355, "end": 359, "text": " And there have been a few approaches such as a deep cluster," }, { "start": 359, "end": 366, "text": " which try to formulate basically an objective for that neural network where you first, you send all the images through, right?" }, { "start": 366, "end": 371, "text": " You send a bunch of images through to get you in embedding space, to get you points." }, { "start": 371, "end": 376, "text": " And then in embedding space, you think, well, the features that are in the embedding space," }, { "start": 376, "end": 379, "text": " they are somehow latent and they..." }, { "start": 379, "end": 385, "text": " If basically the entire thing is, if this neural network was used to classify images," }, { "start": 385, "end": 388, "text": " you would have a classification head on top." }, { "start": 388, "end": 392, "text": " And a classification head, this is like a five class classification head," }, { "start": 392, "end": 401, "text": " is nothing else than a linear classifier boundary that you put on top of this hidden representation." }, { "start": 401, "end": 409, "text": " So if you were to use this neural network for classification, it must be possible to draw a linear boundary between the classes." }, { "start": 409, "end": 418, "text": " And therefore, the either things like the inner product distance or the Euclidean distance must make sense in that space." }, { "start": 418, "end": 424, "text": " They don't make sense in the picture space, but they must make sense in the hidden representation space," }, { "start": 424, "end": 428, "text": " because what you're going to do with them is exactly linear classification." }, { "start": 428, "end": 434, "text": " The last classification head of a neural network is just a linear classifier." }, { "start": 434, "end": 444, "text": " So the assumption is that, and the conclusion is, well, in this space, you should be able to cluster by Euclidean distance." }, { "start": 444, "end": 449, "text": " So what deep cluster does, like is first get the representations." }, { "start": 449, "end": 453, "text": " You start off with a random neural network, then cluster these representations," }, { "start": 453, "end": 459, "text": " then basically label, self label the images in a way." }, { "start": 459, "end": 462, "text": " Now, way over simplifying that technique right here." }, { "start": 462, "end": 469, "text": " But you have these alternative steps of clustering and then kind of finding better representation and then clustering these representations." }, { "start": 469, "end": 475, "text": " And what it basically says is that the CNN itself is such a is like a prior," }, { "start": 475, "end": 481, "text": " because it's the translation of it and works very good for very well for natural images." }, { "start": 481, "end": 486, "text": " So the CNN itself will lead to good representations if we do it in this way." }, { "start": 486, "end": 488, "text": " And they have some good results there." }, { "start": 488, "end": 499, "text": " But this paper argues that if you do that, then the the algorithm tends to focus a lot on very low level features." }, { "start": 499, "end": 504, "text": " So if the pixel on the bottom right here is blue, right, then you can." }, { "start": 504, "end": 513, "text": " And the neural network, by chance, puts two of those images where the blue pixel on the bottom right, it puts them close together." 
}, { "start": 513, "end": 517, "text": " Then in the next step, it will, because they're close together, will cluster them together." }, { "start": 517, "end": 523, "text": " And then it will basically feed back the new representation should put the two in the same class, right?" }, { "start": 523, "end": 529, "text": " It will feed back that it should focus even more on that blue pixel." }, { "start": 529, "end": 546, "text": " So it's very, very dependent on initializations and it can jump super easily onto these low level features that have nothing to do with with the high level task you're ultimately trying to solve, which is to classify these images later." }, { "start": 546, "end": 552, "text": " So what this paper does is it says we can eliminate this." }, { "start": 552, "end": 564, "text": " We can eliminate this the fact that these methods will produce will produce neural networks that focus on low level features." }, { "start": 564, "end": 568, "text": " And how do we do that? We do that by representation learning." }, { "start": 568, "end": 573, "text": " So representation learning, you might know this as self supervised learning." }, { "start": 573, "end": 580, "text": " And this is the task they solve in the first step of their objective." }, { "start": 580, "end": 586, "text": " So let's go through this. This right here is an image." }, { "start": 586, "end": 590, "text": " Now, the T is a transformation of that image." }, { "start": 590, "end": 597, "text": " And in self supervised learning, there are several methods that you can transform an image." }, { "start": 597, "end": 601, "text": " So, for example, you can random crop an image." }, { "start": 601, "end": 608, "text": " You can just cut out like a piece right here and scale that up to be as large as the original image." }, { "start": 608, "end": 620, "text": " Or you can use, for example, data augmentation, which means you take the image and you basically so if there is, I don't know, the cat right here, you kind of convolve it with something." }, { "start": 620, "end": 625, "text": " So there's like a very squiggly cat. OK, I'm terrible." }, { "start": 625, "end": 629, "text": " You can you can rotate it, for example." }, { "start": 629, "end": 636, "text": " So it's like this. OK, so these are all these are all sets, including the crop sets of this transformation T." }, { "start": 636, "end": 648, "text": " So you transform it in some way and you want after you've transformed it, you send your original image." }, { "start": 648, "end": 658, "text": " That should be read. You send your original image and the transformed image through a neural network, each one by themselves." }, { "start": 658, "end": 666, "text": " OK, and then after this, you say the hidden representation here should be close to each other." }, { "start": 666, "end": 670, "text": " OK, this is this is basically the self supervised training task." }, { "start": 670, "end": 678, "text": " It's been shown to work very, very well as a pre training method for classification neural networks." }, { "start": 678, "end": 687, "text": " You have an image and its augmented version and you minimize the inner product or the Euclidean distance between the two versions in the hidden space." }, { "start": 687, "end": 689, "text": " And the rationale is exactly the same." }, { "start": 689, "end": 695, "text": " The rationale is that this hidden space, of course, should be linearly classifiable." 
}, { "start": 695, "end": 698, "text": " And so the distance between those should be close." }, { "start": 698, "end": 708, "text": " And the rationale between having these tasks is that, well, if I flip the image, right, if I flip the image to the right," }, { "start": 708, "end": 714, "text": " it cannot focus on the pixel on the bottom right anymore, because that's not going to be the pixel on the bottom right here." }, { "start": 714, "end": 717, "text": " And I'm not always going to flip it into the same direction." }, { "start": 717, "end": 724, "text": " And sometimes I'm going to crop it so it also can't focus on the pixel on the bottom right, because in the crop, that pixel is like out here." }, { "start": 724, "end": 726, "text": " It's not even in the crop." }, { "start": 726, "end": 734, "text": " But basically what you're looking to do with the self supervised methods is you are looking to destroy this low level information." }, { "start": 734, "end": 741, "text": " That's that's all you're looking to build a pipeline of a neural network here that destroys deliberately low level information." }, { "start": 741, "end": 754, "text": " And you do that by coming up with tasks like this self supervision tasks that just that deliberately exclude this information from being used." }, { "start": 754, "end": 758, "text": " I think that's what's going on generally in the self supervised learning thing." }, { "start": 758, "end": 764, "text": " OK, so this here, as you can see, is the neural network that you train." }, { "start": 764, "end": 773, "text": " You send both images, the original and the augmented version, through the same neural network, and then you minimize some distance," }, { "start": 773, "end": 778, "text": " which is usually like the inner product or the Euclidean distance in this embedding space." }, { "start": 778, "end": 783, "text": " OK, and what you train, you can see right here, you train the parameters of this neural network." }, { "start": 783, "end": 787, "text": " So the transformations are fixed or sampled and the distance is fixed." }, { "start": 787, "end": 793, "text": " You train the neural networks such that your embeddings minimize this task." }, { "start": 793, "end": 799, "text": " Now, this is nothing new. This has been this has been used for a couple of years now to get better representation." }, { "start": 799, "end": 801, "text": " Self supervised learning is a thing." }, { "start": 801, "end": 808, "text": " But they basically say we can use this as an initialization step for this clustering procedure," }, { "start": 808, "end": 814, "text": " because if we don't do that, we we focus on these low level features." }, { "start": 814, "end": 817, "text": " OK, and notice you don't need any labels for this procedure." }, { "start": 817, "end": 820, "text": " That's why it's called self supervised." }, { "start": 820, "end": 826, "text": " OK, so the second second part is the clustering." }, { "start": 826, "end": 830, "text": " Now they cluster, but they don't just cluster these representations." }, { "start": 830, "end": 835, "text": " That would be that doesn't perform very well in their in their experiments." }, { "start": 835, "end": 845, "text": " What they instead do is they minimize this entire objective right here and we'll go through it step by step." }, { "start": 845, "end": 853, "text": " So they train a new neural network. OK, this thing right here, this is a new neural network." 
}, { "start": 853, "end": 860, "text": " So first you have you already have the neural network, which was called." }, { "start": 860, "end": 865, "text": " What was it even called? The one that gives you the embedding with the theta." }, { "start": 865, "end": 869, "text": " OK, it's called five theta. It's the same architecture." }, { "start": 869, "end": 871, "text": " I think they initialize one with the other." }, { "start": 871, "end": 881, "text": " So in step one, you get five theta five theta goes give from from X gives you a representation of X." }, { "start": 881, "end": 888, "text": " OK, let's call it hidden X. So that's the self supervised learning." }, { "start": 888, "end": 896, "text": " But in step two, you train an entirely new neural network, this five data here," }, { "start": 896, "end": 903, "text": " and you initialize it with this one. But now you train it to do the following again." }, { "start": 903, "end": 910, "text": " You want to minimize. Sorry, you want to maximize the inner product right here." }, { "start": 910, "end": 915, "text": " See, that's the inner product. You want to maximize the inner product between two things." }, { "start": 915, "end": 921, "text": " Now, that's the same thing as before. We want to minimize the distance between two things and the dot product distance." }, { "start": 921, "end": 927, "text": " In that case, you maximize the dot product between two things. And the two things are two images" }, { "start": 927, "end": 932, "text": " that go through the same neural network as before. Right. This and this." }, { "start": 932, "end": 937, "text": " Now, what's different here is that here we input an one image of the data set." }, { "start": 937, "end": 941, "text": " That's the same as before. OK, so we input one image." }, { "start": 941, "end": 947, "text": " But here before in the self supervised learning, we input an augmented version of that." }, { "start": 947, "end": 952, "text": " And now we input something else. We input this K right here. Now, what's K?" }, { "start": 952, "end": 959, "text": " What K comes from this neighbor set of X. OK, this is the set of neighbors of X." }, { "start": 959, "end": 966, "text": " And these neighbors are determined with respect to this neural network right here." }, { "start": 966, "end": 974, "text": " So what you do after step one is you take your neural network with the good embeddings." }, { "start": 974, "end": 979, "text": " And here is your data set X. Your data set X. This should be another." }, { "start": 979, "end": 985, "text": " Your data set X is this list basically of all the images in your data set." }, { "start": 985, "end": 990, "text": " And what you're going to do is you're going to take all of them using that neural network that you just trained" }, { "start": 990, "end": 997, "text": " and embed them into a latent space right here. OK." }, { "start": 997, "end": 1001, "text": " This is the latent space where you have done the self supervised training." }, { "start": 1001, "end": 1010, "text": " And now for each image right here. So if this is X, I, you're going to find its K nearest neighbors." }, { "start": 1010, "end": 1016, "text": " And they use I think they use five as a benchmark. So you're going to find its nearest neighbors." }, { "start": 1016, "end": 1021, "text": " It's five nearest neighbors. And you do this for each image." }, { "start": 1021, "end": 1030, "text": " So this image has these five nearest neighbors. 
So in step two, what you're trying to do is you're going to try to pull together" }, { "start": 1030, "end": 1038, "text": " each image and its nearest neighbors in that in this this not in this space directly," }, { "start": 1038, "end": 1044, "text": " but you determine which ones are the nearest neighbor from this neural network and you keep it constant." }, { "start": 1044, "end": 1047, "text": " That's how you determine what the nearest neighbors are in the first task." }, { "start": 1047, "end": 1057, "text": " And that is your NX set for X, I. And in the second step, you're trying to make the representations" }, { "start": 1057, "end": 1063, "text": " of any image and its nearest neighbors closer to each other." }, { "start": 1063, "end": 1074, "text": " OK, so with with this thing right here, you maximize the inner product between X in after this neural network" }, { "start": 1074, "end": 1082, "text": " and a nearest neighbor of X that was was a nearest neighbor after the first task." }, { "start": 1082, "end": 1089, "text": " Now, the way they cluster here is not just again by putting it into an embedding space like we saw before." }, { "start": 1089, "end": 1100, "text": " But this thing right here, this neural network, as you can see here, is is a C dimensional vector in zero one." }, { "start": 1100, "end": 1104, "text": " Now, C is the number of classes that you can either know that." }, { "start": 1104, "end": 1109, "text": " So you don't know which classes which you don't have labels, but you could know how many classes there are." }, { "start": 1109, "end": 1113, "text": " Or you could just guess how many classes there are." }, { "start": 1113, "end": 1119, "text": " And as long as you as you overguess, you can still like build super clusters later." }, { "start": 1119, "end": 1125, "text": " So this they simply say it's in zero one, but they also say it performs a soft assignment." }, { "start": 1125, "end": 1129, "text": " So we're also going to assume that this is normalized." }, { "start": 1129, "end": 1136, "text": " So for each for each data point X here, you're going to you're going to have an image." }, { "start": 1136, "end": 1139, "text": " You're going to put it through this new neural network." }, { "start": 1139, "end": 1146, "text": " Okay, this new neural network new, and it's going to tell you it's going to give you basically a histogram." }, { "start": 1146, "end": 1153, "text": " Let's say class one, two or three, we guess there are three class and it's going to give you an assignment of the three." }, { "start": 1153, "end": 1156, "text": " And you also take a nearest neighbor." }, { "start": 1156, "end": 1158, "text": " Here is your data set." }, { "start": 1158, "end": 1161, "text": " You also take a nearest neighbor of that." }, { "start": 1161, "end": 1166, "text": " So you look for this set N of X and you take a nearest neighbor." }, { "start": 1166, "end": 1171, "text": " Maybe that's that's a maybe that's a dog." }, { "start": 1171, "end": 1174, "text": " I can't I really can't draw dog." }, { "start": 1174, "end": 1176, "text": " Yeah, that's the best I can do." }, { "start": 1176, "end": 1177, "text": " I'm sorry." }, { "start": 1177, "end": 1180, "text": " And you also put that through the same network." 
}, { "start": 1180, "end": 1188, "text": " And you're saying since they were nearest neighbor in task one, they must share some sort of interesting high level features" }, { "start": 1188, "end": 1191, "text": " because that's what the first task was for." }, { "start": 1191, "end": 1198, "text": " Therefore, I want to make them closer together in in the in the light of these of this neural network right here." }, { "start": 1198, "end": 1204, "text": " So this is also going to give you an assignment like maybe like this." }, { "start": 1204, "end": 1205, "text": " Okay." }, { "start": 1205, "end": 1214, "text": " And now you object you you train this network right here to basically match these two distributions." }, { "start": 1214, "end": 1222, "text": " Okay. So this is this is now a classifier into C classes, but we guess C and we don't have labels." }, { "start": 1222, "end": 1228, "text": " We simply our label is going to be my neighbors from the first task must have the same labels." }, { "start": 1228, "end": 1230, "text": " That's our label." }, { "start": 1230, "end": 1237, "text": " Now they say they also have this term right here, which is the entropy over assignments." }, { "start": 1237, "end": 1238, "text": " Okay." }, { "start": 1238, "end": 1240, "text": " As you can see, so they minimize the following." }, { "start": 1240, "end": 1244, "text": " They minimize this quantity, which has a negative in front of it." }, { "start": 1244, "end": 1247, "text": " So that means they maximize this log inner product." }, { "start": 1247, "end": 1255, "text": " And they also maximize the entropy because sorry." }, { "start": 1255, "end": 1257, "text": " So they minimize this thing." }, { "start": 1257, "end": 1260, "text": " But the entropy is a negative quantity." }, { "start": 1260, "end": 1261, "text": " Right." }, { "start": 1261, "end": 1266, "text": " So they maximize the entropy because here's a plus." }, { "start": 1266, "end": 1271, "text": " And now they minimize the entropy." }, { "start": 1271, "end": 1276, "text": " Let's see what they say by minimizing the following objective." }, { "start": 1276, "end": 1280, "text": " Now entropy is the sum of the negative sum of P log P." }, { "start": 1280, "end": 1283, "text": " And this if this is P." }, { "start": 1283, "end": 1291, "text": " Yes, this is the probability that an image is going to be assigned to cluster C over the entire data set." }, { "start": 1291, "end": 1295, "text": " So they're going to." }, { "start": 1295, "end": 1297, "text": " Yes, so it's negative." }, { "start": 1297, "end": 1302, "text": " This quantity negative." }, { "start": 1302, "end": 1305, "text": " Minus P log P." }, { "start": 1305, "end": 1308, "text": " And this is the entropy." }, { "start": 1308, "end": 1311, "text": " So they're going to minimize the entropy." }, { "start": 1311, "end": 1315, "text": " Let's see what they say." }, { "start": 1315, "end": 1319, "text": " We include an entropy term." }, { "start": 1319, "end": 1321, "text": " The second term in equation two." }, { "start": 1321, "end": 1328, "text": " Which spreads the predictions uniformly across clusters C." }, { "start": 1328, "end": 1329, "text": " OK." }, { "start": 1329, "end": 1341, "text": " So what we want is a uniform assignment over cluster, which means we should maximize the entropy." }, { "start": 1341, "end": 1342, "text": " Oh, yes." }, { "start": 1342, "end": 1343, "text": " OK." }, { "start": 1343, "end": 1344, "text": " They minimize this thing." 
}, { "start": 1344, "end": 1347, "text": " And this here is the negative entropy." }, { "start": 1347, "end": 1348, "text": " Right." }, { "start": 1348, "end": 1356, "text": " So they want basically what they want over the whole data set that not all of the images are going to be in the same cluster." }, { "start": 1356, "end": 1358, "text": " This is cluster one." }, { "start": 1358, "end": 1359, "text": " And then this is cluster two." }, { "start": 1359, "end": 1360, "text": " And then this is cluster three." }, { "start": 1360, "end": 1372, "text": " So that term counteracts that basically the more evenly spread the entire data set distribution is the the higher the entropy, the lower the negative entropy." }, { "start": 1372, "end": 1373, "text": " And that's the goal right here." }, { "start": 1373, "end": 1374, "text": " I'm sorry." }, { "start": 1374, "end": 1378, "text": " This this was I was confused by the too many negative signs." }, { "start": 1378, "end": 1380, "text": " And then you minimize the entire thing." }, { "start": 1380, "end": 1381, "text": " All right." }, { "start": 1381, "end": 1384, "text": " Now, they say they say a different thing right here." }, { "start": 1384, "end": 1388, "text": " They say here this bracket denotes the dot product operator." }, { "start": 1388, "end": 1395, "text": " As we saw, it's the dot product between these two distributions right here." }, { "start": 1395, "end": 1407, "text": " The first term in equation two imposes this neural network to make consistent predictions for a sample XI and its neighboring samples, the neighbors of XI." }, { "start": 1407, "end": 1409, "text": " And here is an interesting thing." }, { "start": 1409, "end": 1414, "text": " Note that the dot product will be maximal when the predictions are one hot." }, { "start": 1414, "end": 1418, "text": " That means confident and assigned to the same cluster consistent." }, { "start": 1418, "end": 1432, "text": " So they basically say the objective encourages confidence because it encourages predictions to be one hot and it encourages consistency because it you know the because the distributions need to be the same." }, { "start": 1432, "end": 1434, "text": " They should be in the same cluster." }, { "start": 1434, "end": 1437, "text": " Right now, I agree with the consistency." }, { "start": 1437, "end": 1445, "text": " Like if you make the inner product high, then of the of two of these histograms, of course, they look the same." }, { "start": 1445, "end": 1446, "text": " Right." }, { "start": 1446, "end": 1449, "text": " Because these are ultimately vectors. These are three dimensional vectors." }, { "start": 1449, "end": 1451, "text": " Let's call them two dimensional vectors." }, { "start": 1451, "end": 1453, "text": " Right. So here is class one." }, { "start": 1453, "end": 1454, "text": " Here's class two." }, { "start": 1454, "end": 1462, "text": " If you make the inner product small or high, they will agree on their predictions." }, { "start": 1462, "end": 1467, "text": " But I disagree that this encourages anything to be one hot." }, { "start": 1467, "end": 1472, "text": " Like in my mind, if you have two vectors, they're both zero one times zero one." }, { "start": 1472, "end": 1474, "text": " The inner product is going to be one." }, { "start": 1474, "end": 1485, "text": " And if you have two assignments that are point five and point five, then it is also going to result in an in an inner product of it." 
}, { "start": 1485, "end": 1487, "text": " Zero point five." }, { "start": 1487, "end": 1492, "text": " Right. It's also going to to be no." }, { "start": 1492, "end": 1494, "text": " So what's the inner product here?" }, { "start": 1494, "end": 1503, "text": " The inner product is point five times point five plus point five times point five, which is point five." }, { "start": 1503, "end": 1505, "text": " Am I dumb?" }, { "start": 1505, "end": 1509, "text": " An embarrassingly long time later." }, { "start": 1509, "end": 1511, "text": " Oh, it's because the L1 norm." }, { "start": 1511, "end": 1513, "text": " OK, OK, we got it." }, { "start": 1513, "end": 1516, "text": " We got it." }, { "start": 1516, "end": 1518, "text": " I am I am OK." }, { "start": 1518, "end": 1519, "text": " I am too dumb." }, { "start": 1519, "end": 1526, "text": " Yes, of course, I was thinking of these vectors being normalized in L2 space where their inner products would always be one." }, { "start": 1526, "end": 1541, "text": " But of course, if you have assignments between classes and it's a probability distribution, a histogram, then all of the prob possible assignments lie on this on this thing right here." }, { "start": 1541, "end": 1554, "text": " Now, the inner product with yourself, of course, is the length of the vector and the length of a vector that points to one class or the other class is longer than a vector that points in between." }, { "start": 1554, "end": 1556, "text": " So, OK, I see." }, { "start": 1556, "end": 1557, "text": " That's where they get this." }, { "start": 1557, "end": 1560, "text": " That's where they get this must be one hot from." }, { "start": 1560, "end": 1562, "text": " So, OK, I'll give that to them." }, { "start": 1562, "end": 1572, "text": " It is actually encouraging one hot predictions as long as these things are normalized in L1 space, which they probably are because they're histograms." }, { "start": 1572, "end": 1574, "text": " Right." }, { "start": 1574, "end": 1579, "text": " Yes, that was that was dumbness of me." }, { "start": 1579, "end": 1581, "text": " I was trying to make a counter example." }, { "start": 1581, "end": 1587, "text": " I'm like, wait a minute, this counter example is a counter example to my counter example." }, { "start": 1587, "end": 1591, "text": " OK, so, yeah, that's that." }, { "start": 1591, "end": 1602, "text": " So, as you can see, they are, of course, correct here and they now make the first experiments." }, { "start": 1602, "end": 1613, "text": " So they say basically after the first step of the self supervised training, they can already retrieve sort of nearest neighbors and the nearest neighbors." }, { "start": 1613, "end": 1620, "text": " The nearest neighbors of these images right here are the ones that you see on the right." }, { "start": 1620, "end": 1629, "text": " And after the self supervised one, these nearest neighbors are already pretty good at sharing the high level features actually crazy, crazy good." }, { "start": 1629, "end": 1632, "text": " Right. This flute here is in different sizes." }, { "start": 1632, "end": 1638, "text": " As you can see, the fishes aren't aren't all exactly the same." }, { "start": 1638, "end": 1640, "text": " The birds." }, { "start": 1640, "end": 1648, "text": " So you can see it really focuses on sort of higher level features, but I guess it's really dependent on this higher level task." 
}, { "start": 1648, "end": 1660, "text": " And they were they also investigate this quantitatively, but I just want to focus on how good is this after only the self supervised thing." }, { "start": 1660, "end": 1668, "text": " And now they do this clustering and they can already sort of could already evaluate it right here because now they have a clustering." }, { "start": 1668, "end": 1676, "text": " Right. After this step, they've basically pulled together the neighbors and they have this neural network that is now assigning classes." }, { "start": 1676, "end": 1679, "text": " So they could already evaluate this and they are going to do that." }, { "start": 1679, "end": 1682, "text": " But that's not good enough yet." }, { "start": 1682, "end": 1688, "text": " Then they do a third step, which is fine tuning through self labeling." }, { "start": 1688, "end": 1693, "text": " Now self labeling is pretty much exactly what it's what it says." }, { "start": 1693, "end": 1697, "text": " It's you label your own data with your own classifier." }, { "start": 1697, "end": 1700, "text": " Now that might be a bit outrageous." }, { "start": 1700, "end": 1710, "text": " But it's basically saying, wait a minute, if I label my own data and learn a classifier on these labels, isn't isn't it just going to come out the same?" }, { "start": 1710, "end": 1712, "text": " And the answer is no." }, { "start": 1712, "end": 1726, "text": " Right. If you have a data set because your classifier doesn't give you just first of all, if your classifier is something like this." }, { "start": 1726, "end": 1727, "text": " Right." }, { "start": 1727, "end": 1731, "text": " Just happens to be and you label and you learn a new classifier." }, { "start": 1731, "end": 1733, "text": " It is going to be more like this." }, { "start": 1733, "end": 1741, "text": " Right. Because it sort of maximizes a lot of classifiers maximize these distances between the classes." }, { "start": 1741, "end": 1751, "text": " So even if it's like that and then the second step they do is they say, OK, there are some points where we are actually more confident about such as this one." }, { "start": 1751, "end": 1753, "text": " We're more confident about that one." }, { "start": 1753, "end": 1754, "text": " Also this one." }, { "start": 1754, "end": 1757, "text": " And then this one here is pretty close." }, { "start": 1757, "end": 1761, "text": " Like we're not super neither this one, but we're very confident about these two." }, { "start": 1761, "end": 1772, "text": " So we're only going to use the ones where we are in fact confident about to learn to learn the new classifier." }, { "start": 1772, "end": 1776, "text": " Or basically we you can also weigh them and so on." }, { "start": 1776, "end": 1783, "text": " But they go by confidence right here, as you can see in this final algorithm." }, { "start": 1783, "end": 1786, "text": " So this is the entire algorithm." }, { "start": 1786, "end": 1792, "text": " And I got kicked away." }, { "start": 1792, "end": 1793, "text": " Our algorithm." }, { "start": 1793, "end": 1794, "text": " There we go." }, { "start": 1794, "end": 1797, "text": " All right." }, { "start": 1797, "end": 1803, "text": " So semantic clustering by adopting nearest neighbors, their scan algorithm." }, { "start": 1803, "end": 1806, "text": " So in the first step, you do this pretext task." }, { "start": 1806, "end": 1809, "text": " This is the self supervision, the representation learning." 
}, { "start": 1809, "end": 1813, "text": " For your entire data set." }, { "start": 1813, "end": 1814, "text": " No, sorry." }, { "start": 1814, "end": 1815, "text": " This is this year." }, { "start": 1815, "end": 1819, "text": " Optimize, optimize the neural network with task T." }, { "start": 1819, "end": 1822, "text": " That's just self supervised representation learning." }, { "start": 1822, "end": 1830, "text": " OK, then the second thing we're going to determine the nearest neighbor set for each X." }, { "start": 1830, "end": 1833, "text": " Now they also in that step, they also augment the data." }, { "start": 1833, "end": 1836, "text": " They do heavy data augmentation and so on." }, { "start": 1836, "end": 1840, "text": " Also in this in the third step in the self labeling, they do data augmentation." }, { "start": 1840, "end": 1845, "text": " There's a lot of tricks in here, but ultimately the base algorithm goes like this." }, { "start": 1845, "end": 1849, "text": " So you find your neighboring sets for each X." }, { "start": 1849, "end": 1860, "text": " And then what you do while your clustering loss decreases, you update this clustering neural network by with this loss that we saw." }, { "start": 1860, "end": 1867, "text": " So this is the loss where you make the nearest neighbors closer to each other while still keeping the entropy high." }, { "start": 1867, "end": 1871, "text": " OK, and then in the last after you've done this." }, { "start": 1871, "end": 1877, "text": " You go through and you say, while the length of Y increases, what's why?" }, { "start": 1877, "end": 1883, "text": " Why is all the data points that are above a certain threshold?" }, { "start": 1883, "end": 1888, "text": " Now you're going to filter the data set that is above a certain threshold." }, { "start": 1888, "end": 1890, "text": " And that's your data set Y." }, { "start": 1890, "end": 1893, "text": " And you train this same neural network." }, { "start": 1893, "end": 1898, "text": " You basically fine tune it with the cross entropy loss on your own labels." }, { "start": 1898, "end": 1905, "text": " So now you only have labels Y." }, { "start": 1905, "end": 1910, "text": " OK, so it's not it's not labels." }, { "start": 1910, "end": 1917, "text": " You have the cross entropy loss between the assignments of this and the assignments of your data set." }, { "start": 1917, "end": 1927, "text": " OK, so you basically do the same task, but you filter by confidence." }, { "start": 1927, "end": 1932, "text": " And they use a threshold, I think, of point seven or something like this." }, { "start": 1932, "end": 1940, "text": " Now let's go into the experiments, the experiments or look as follows." }, { "start": 1940, "end": 1949, "text": " So they do some ablations to find out where in their methods kind of the the gains come from and will just quickly go through them." }, { "start": 1949, "end": 1956, "text": " If they just do these self supervision at the beginning and then just do K means clustering on top of that," }, { "start": 1956, "end": 1961, "text": " that will give them on C for 10 a thirty five point nine percent accuracy." }, { "start": 1961, "end": 1963, "text": " So not very good." }, { "start": 1963, "end": 1970, "text": " So the clustering, you can't just cluster on top of these representations and then be done." }, { "start": 1970, "end": 1978, "text": " If they do what they say, so this is sample and batch entropy loss." 
}, { "start": 1978, "end": 1981, "text": " This basically means you do not care about the nearest neighbors." }, { "start": 1981, "end": 1989, "text": " You do this entire thing, but you only make an image close to the prediction, close to itself and its augmentations." }, { "start": 1989, "end": 1993, "text": " So you don't use any nearest neighbor information also doesn't work." }, { "start": 1993, "end": 1998, "text": " I wouldn't pay too much attention that the numbers are 10, 20 or 30." }, { "start": 1998, "end": 2000, "text": " It just doesn't work." }, { "start": 2000, "end": 2009, "text": " Now, if you use the scan loss, you all of a sudden you get into a regime where there is actual signal." }, { "start": 2009, "end": 2018, "text": " So this is this is now significantly above the this is significantly above random guessing." }, { "start": 2018, "end": 2027, "text": " And if you use strong data augmentation, as I said, is a lot of this is has these tricks in it of what kind of data augmentation you do and so on." }, { "start": 2027, "end": 2035, "text": " So never forget that that these papers, besides their idea, they put in all the tricks they can." }, { "start": 2035, "end": 2037, "text": " So you get 10 percent more." }, { "start": 2037, "end": 2043, "text": " And then if you do this self labeling step, you get another 10 percent more." }, { "start": 2043, "end": 2049, "text": " And this is fairly respectable, like eighty three point five without ever seeing labels." }, { "start": 2049, "end": 2051, "text": " It's fairly good." }, { "start": 2051, "end": 2054, "text": " But of course, there are only 10 classes right here." }, { "start": 2054, "end": 2056, "text": " So keep that in mind." }, { "start": 2056, "end": 2059, "text": " But they will do it on ImageNet later." }, { "start": 2059, "end": 2065, "text": " And they investigate what kind of self supervision tasks at the beginning are important." }, { "start": 2065, "end": 2071, "text": " And they investigate things like ROTNET feature decoupling and noise contrastive estimation," }, { "start": 2071, "end": 2073, "text": " which noise contrastive estimation is the best." }, { "start": 2073, "end": 2083, "text": " And noise contrastive estimation, I think, is just where you as we said, you input an image and then it's kind of noisy versions with augmented in various ways." }, { "start": 2083, "end": 2088, "text": " And then you classify them together." }, { "start": 2088, "end": 2090, "text": " This has been like this." }, { "start": 2090, "end": 2096, "text": " These methods have been very successful in the last few years." }, { "start": 2096, "end": 2103, "text": " Yeah, so this they have various investigations into their algorithm." }, { "start": 2103, "end": 2106, "text": " I want to point out this here." }, { "start": 2106, "end": 2113, "text": " This is the accuracy versus confidence after the complete clustering step." }, { "start": 2113, "end": 2116, "text": " So this is now the third step, the self labeling." }, { "start": 2116, "end": 2125, "text": " And you can see right here as this confidence of the network goes up, the actual accuracy goes up as well." }, { "start": 2125, "end": 2132, "text": " So that means the network after the clustering is really more confident about the points that it can classify more accurately." 
}, { "start": 2132, "end": 2142, "text": " There's like a correlation between where the network is confident and the actual label of the point, which is remarkable because it has never seen the label." }, { "start": 2142, "end": 2147, "text": " But also see how sort of the range here is quite small." }, { "start": 2147, "end": 2151, "text": " So with the standard augmentation, it goes like from here to here." }, { "start": 2151, "end": 2166, "text": " So where you set that threshold is fairly important and might be quite brittle here because you need to set the threshold right such that some points are below it and some are above it." }, { "start": 2166, "end": 2183, "text": " And you don't want to pull in points where you're not because if you pull in points from here, you're only you only have the correct label for 75 percent or something like them of them." }, { "start": 2183, "end": 2188, "text": " And that means if you now self label and learn on them, you're going to learn the wrong signal." }, { "start": 2188, "end": 2199, "text": " So this this step seems fairly brittle, honestly, but I don't know, of course." }, { "start": 2199, "end": 2207, "text": " They go on and investigate various things such as how many clusters do you need or how many nearest neighbors?" }, { "start": 2207, "end": 2210, "text": " Sorry. Do you need this number K here?" }, { "start": 2210, "end": 2219, "text": " And you can see that if you have zero neighbors, then you're doing a lot worse than if you have, let's say, five nearest neighbors." }, { "start": 2219, "end": 2224, "text": " So the jump here, as you can see, is fairly high in all the data sets." }, { "start": 2224, "end": 2228, "text": " But after that, it sort of doesn't really matter much." }, { "start": 2228, "end": 2233, "text": " So it seems like five nearest neighbors should be enough for most things." }, { "start": 2233, "end": 2244, "text": " And here they just show that when they remove the false positives, that their algorithm actually converges to the correct clustering, the correct accuracy, which is not surprising." }, { "start": 2244, "end": 2250, "text": " Like if you remove the wrong samples that are wrong, then the rest of the samples are going to be right." }, { "start": 2250, "end": 2256, "text": " I think that's just showing that it doesn't go into some kind of crazy downward spiral loop or something like this." }, { "start": 2256, "end": 2260, "text": " But still, it's just kind of funny." }, { "start": 2260, "end": 2264, "text": " OK, so they do investigate how much they improve." }, { "start": 2264, "end": 2268, "text": " And they improve by quite a lot above the kind of previous methods." }, { "start": 2268, "end": 2270, "text": " So they have a lot of previous methods." }, { "start": 2270, "end": 2278, "text": " But even this includes things like K means and so on, GANs, deep cluster that we spoke about." }, { "start": 2278, "end": 2284, "text": " And this method, it already gets, as you can see, fairly close to good accuracy." }, { "start": 2284, "end": 2290, "text": " So you have like 88.6% accuracy." }, { "start": 2290, "end": 2299, "text": " And that's fairly remarkable on C410 without seeing the labels." }, { "start": 2299, "end": 2301, "text": " But we'll go on." }, { "start": 2301, "end": 2303, "text": " And now they go into ImageNet." }, { "start": 2303, "end": 2306, "text": " Now ImageNet, of course, has way more classes." 
}, { "start": 2306, "end": 2310, "text": " It has 1,000 classes compared to C410's 10 classes." }, { "start": 2310, "end": 2315, "text": " So if you think clustering 10 classes might, and they're fairly apart from each other," }, { "start": 2315, "end": 2320, "text": " might work with various techniques, ImageNet, 1,000 classes, that's way more difficult." }, { "start": 2320, "end": 2327, "text": " Now they do sub sample this to 5100 and 200 classes." }, { "start": 2327, "end": 2333, "text": " And they get OK accuracy." }, { "start": 2333, "end": 2344, "text": " As you can see, they get 81% for 50 classes where a supervised baseline would get 86%." }, { "start": 2344, "end": 2350, "text": " Into 200 classes, they get 69% where a supervised baseline would get 76%." }, { "start": 2350, "end": 2355, "text": " So it's fairly, it's there." }, { "start": 2355, "end": 2361, "text": " And that's quite remarkable for these low number of classes." }, { "start": 2361, "end": 2368, "text": " And they figure out that if they look for the samples that are kind of in the most of the middle of their cluster," }, { "start": 2368, "end": 2371, "text": " they get these prototypes right here." }, { "start": 2371, "end": 2373, "text": " You can see all of these images." }, { "start": 2373, "end": 2378, "text": " If you know ImageNet, some of the images really only have a part of the object and so on." }, { "start": 2378, "end": 2388, "text": " So here with the prototypical things, you really get center clear shot of the object with clearly visible features and so on." }, { "start": 2388, "end": 2401, "text": " So this sort of repeats the fact that this clustering really does go on that sort of semantic information." }, { "start": 2401, "end": 2405, "text": " Of course, the labels here are from the test label set." }, { "start": 2405, "end": 2410, "text": " The network can't figure that out." }, { "start": 2410, "end": 2413, "text": " And then they go for 1,000 classes." }, { "start": 2413, "end": 2420, "text": " And in 1,000 classes, it doesn't really work because there might be just too many confusions right here." }, { "start": 2420, "end": 2426, "text": " But they do have this confusion matrix of their method." }, { "start": 2426, "end": 2434, "text": " And it shows that the confusion matrix is pretty much a block diagonal along these super clusters right here." }, { "start": 2434, "end": 2442, "text": " So you can see the dogs, the network confuses the dogs fairly often and then insects with each other, but not really across here." }, { "start": 2442, "end": 2445, "text": " Which is still quite remarkable." }, { "start": 2445, "end": 2449, "text": " But I mean, that's you get the same thing for a lot of these methods." }, { "start": 2449, "end": 2456, "text": " So I don't I don't know how much different this would be in other methods." }, { "start": 2456, "end": 2459, "text": " But certainly it's interesting to look at." }, { "start": 2459, "end": 2466, "text": " Now, they go into one last thing, and that is what if we don't know how many clusters there are, right?" }, { "start": 2466, "end": 2468, "text": " If we don't know anything." }, { "start": 2468, "end": 2473, "text": " So say so far, we have assumed to to have knowledge about the number of ground truth classes." }, { "start": 2473, "end": 2477, "text": " The model predictions were valid losing the Hungarian matching algorithm." }, { "start": 2477, "end": 2484, "text": " We already saw this in the DETR by Facebook, if you remember." 
}, { "start": 2484, "end": 2490, "text": " However, what happens if the number of clusters does not match the number of ground truth classes anymore?" }, { "start": 2490, "end": 2498, "text": " So they now say table three reports the results when we overestimate the number of ground truth classes by a factor of two." }, { "start": 2498, "end": 2505, "text": " OK, so now they build just 20 classes for C for 10 instead of 10 classes." }, { "start": 2505, "end": 2508, "text": " And we'll look at table three real quick." }, { "start": 2508, "end": 2510, "text": " Where's table three?" }, { "start": 2510, "end": 2512, "text": " This is table three." }, { "start": 2512, "end": 2519, "text": " OK, so when they over cluster, you get the thing here on the bottom." }, { "start": 2519, "end": 2523, "text": " And you can see there is a drop in accuracy right here." }, { "start": 2523, "end": 2532, "text": " Now, what I don't actually they don't actually say how they do the over cluster matching." }, { "start": 2532, "end": 2544, "text": " So if you imagine if I now have, I don't know, six clusters, but I need to assign them to three clusters, you know, here." }, { "start": 2544, "end": 2547, "text": " Do I still use this most optimistic thing?" }, { "start": 2547, "end": 2557, "text": " So do I still use I think they still use this most optimistic matching right where you assign everything to its best fitted cluster." }, { "start": 2557, "end": 2562, "text": " You compute all the permutations and then you give it the best benefit of the doubt." }, { "start": 2562, "end": 2574, "text": " Now, if you imagine the situation where I over cluster to the point that I have each image in its own cluster" }, { "start": 2574, "end": 2583, "text": " and I run this algorithm to evaluate my clustering, I give it basically the most beneficial view, then I would get 100 percent accuracy." }, { "start": 2583, "end": 2596, "text": " OK, so like in one of in these over cluster approach, I would sort of expect that you actually get a better score" }, { "start": 2596, "end": 2602, "text": " because you can like there is more generosity of the matching algorithm involved." }, { "start": 2602, "end": 2611, "text": " Now, that's counteracted by the fact that you can't group together things that obviously have similar features because they are in the same class." }, { "start": 2611, "end": 2613, "text": " So there's kind of two forces pulling here." }, { "start": 2613, "end": 2620, "text": " But I was kind of astounded that it's going down and the evaluation method of this matching algorithm," }, { "start": 2620, "end": 2626, "text": " it sort of breaks down when you have more classes, at least in my opinion." }, { "start": 2626, "end": 2636, "text": " Yeah, but but it's interesting to see that you can just overshoot and but then you need some sort of heuristic to reconcile that." }, { "start": 2636, "end": 2640, "text": " In any case, I think this paper is pretty cool." }, { "start": 2640, "end": 2647, "text": " It brings together a lot of things that were already present and introduces this kind of this step approach." }, { "start": 2647, "end": 2652, "text": " But what you have to keep in mind and by the way, there's lots of samples down here." }, { "start": 2652, "end": 2656, "text": " What you have to keep in mind is there are a lot of hyperparameters in here." 
}, { "start": 2656, "end": 2666, "text": " There are like this threshold and you know, the first of all, yeah, the number of classes, the thresholds, the architectures and so on." }, { "start": 2666, "end": 2672, "text": " And all of this has been tuned to get these numbers really high." }, { "start": 2672, "end": 2678, "text": " Right. All of these steps, all of the augmentations and so on, the chosen data augmentations." }, { "start": 2678, "end": 2683, "text": " It has been chosen to get this number as high as possible." }, { "start": 2683, "end": 2692, "text": " So, you know, to interpret this as, oh, look, we can classify without knowing the labels is, you know," }, { "start": 2692, "end": 2700, "text": " yes, in this case, but the hyperparameter choices of the algorithm are all informed by the labels." }, { "start": 2700, "end": 2708, "text": " So it is still very, very unclear of how this method will actually work when you really don't have the labels," }, { "start": 2708, "end": 2713, "text": " when you actually have to choose the hyperparameters in absence of anything." }, { "start": 2713, "end": 2719, "text": " And yeah, I think the future might tell if they continue to work on this." }, { "start": 2719, "end": 2729, "text": " All right. Thanks for listening, looking, watching and bearing with me through my wrestling with various math," }, { "start": 2729, "end": 2733, "text": " basic math in this video. I wish you a good day and bye bye." } ]
hHZSA9z_abE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
STOCHASTIC MEME DESCENT - Deep Learning Meme Review - Episode 2 (Part 2 of 2)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "funny", "meme", "memes", "meme review", "gpt-3", "google", "deepmind", "haha", "deep neural networks", "christmas", "sunglasses", "transformers", "neurips", "gathertown", "pytorch", "tensorflow", "paddlepaddle", "review", "rebuttal", "proof", "theory", "analysis", "is all you need", "captcha", "stock market", "state of the art", "attention" ]
#memes #science #ai Part 2 of Antonio and me examining the latest and greatest of deep learning memes. Music: Sunshower - LATASHÁ Papov - Yung Logos Sunny Days - Anno Domini Beats Trinity - Jeremy Blake More memes: facebook.com/convolutionalmemes Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
At some point I will be able to code you, Yannic. You will be able to? To code you. To code me? Yes, so that finally you will release videos in time. ["Random Guessing"] Random guessing, Michael Asperger. 47% accuracy. Nice. Yes. Nice. Yes. If you change the seed, you can get 48. Ha ha, you'll never reach me. Yes, I will. How? Are you coming up with a better algorithm? No, but using a weaker baseline. Ha ha. Getting published is so easy. It's a job. Yeah. It's a job. Yannic, do you even, sometimes I realize that, you know, my life, every three months is gonna be like a deadline. Is this real life? This is it. It doesn't get better. Is this the peak? This is it. Like some years ago I thought it was gonna be fun, you know, you just enjoy the life. You just, you know, have nice conversations. Like you try your best. You think about things like for a long time. Then you find, no. That does not sound like machine learning research. Okay. Two things we don't have, long times and thinking. Model overfits on training data. Or worse, new data. I got one paper rejected because the review was like, where is CIFAR? That was the review. Where is CIFAR? Where is it? Where is it, Antonio? If there's no CIFAR, how should I know? How does any paper get accepted without CIFAR? It's called CIFAR. I don't know. Maybe it's called Cypher. I don't know. It's like an abbreviation of something. People who study Latin will call it C-Phar. Social distancing guidelines. COVID-19, 1.5 meters. Minion outlier. That's very true. I'm having something like that to deal with right now. I think I forgot something. If you forgot, it wasn't that important. Yeah, you're right. This could actually work, you know? Like there are these, aren't there these proofs that some of these algorithms only converge if you average over gradients? Yeah. So if you accumulate your gradients technically with a decreasing learning rate, this might work. Yannic, it's all wrong. So. Yeah, that's exactly how it's done. But what's the story behind this? There's no story. There's no story. No. I'll just give you a minute. I didn't get it. Should I really, I should really calculate the, yeah. It's true, right? It's true, yeah. It's actually true. Okay, okay. This actually works. I thought like, okay, Yannic, it's Saturday. I woke up two hours ago. Yeah, it's actually true. It's actually true. Dig move now. Wiener process. Yeah. Beautiful, beautiful. Douchiness. Douchiness, it's a word. I didn't know. Epsilon is expected to grow very large over the next 48 hours. No, no. No. No. You're not. No. It has to be small enough. Enough, small enough. Abstract, introduction, results. I was, did I tell you this? Maybe it was also in the other meme review, where it was a paper. It's mine, that's my paper. Well, I remember it was like, in this paper, in this specific paper, that was this, okay, we prove that this is true. And in the introduction, it was like, sometimes. It was like the same thing, but with a sometimes. We show that sometimes, under some assumption, and then you read the paper, it's actually just an example. Not everyone should go, recommended for you. I'm surprised that sometimes I look at the thing, I don't, I will never enjoy it. And then I do. And then I do. Us YouTubers, we have to regularly sacrifice GPUs to the algorithm and recommendation. Yeah, it really likes GPUs. Does it, do you have to burn them? Do you have to make them burn? You have to, like.
You have to take like some cooler liquid and sprinkle it on top, and then you have to dance around it and put some flowers on top of it. And then you have to eat it. OMG, I love all these water-cooled CPUs. New toothpaste exists, dentists. I didn't get the machine learning thing. There's no machine. Okay, okay, yeah, perfect, perfect. I love this. I don't know why, but it's so good. Yannic, that's the big surprise. At the end of this video, there's going to be a big surprise. What? It's a citation from The Office. Okay, but yeah, seriously, for each one of you, Yannic is going to make a good video. I'm going to make a good video. For each one of you, Yannic is going to make a gift. Is it the MATLAB license? Damn it, don't spoil... Forms of birth control, tens of... I should just put machine learning. When your model improves from 5% accuracy to 7% accuracy. Machine learning! Machine learning, finding global minima. Machine learning, finding local minima. Yeah. That's so damn true. Theory people are weird. Theory people are the worst. That's even true, like I'm completely serious, 100% serious. Like they get excited about infinitely wide neural networks. Oh yeah, or what if you take the step size to be infinitely small? Yeah. That's how you do things. I mean, the only thing that's infinitely wide is your mom. Self-driving cars aren't even hard to make lol. Just program it not to hit stuff. Don't... You know, in all of my code, true story, in all of my code, I write in a line. And it's usually commented out, but I write in a line that says: if target equals Yannic, then don't fire. Really, just I anticipate that some of my code will be used in the robot overlord army. Yeah, that's such a smart move. I know. You gotta think ahead. For some reason, they will shoot everything except the traffic lights. How? Interviewer, what's your biggest strength? I'm an expert in machine learning. Now good that we did this this way, because the other way would have been a bit strange. What's 9 plus 10? It's 3. Not even close, it's 19. It's 16. Wrong, it's still 19. It's 18. No, it's 19. It's 19. You're fired. I wonder what GPT-3 would say to this. Should we try it out? We should try it out. When you drop the learning rate. Everyone is freaking out what happened here, but they dropped the learning rate. So clear. It's like, that's what you do. You stagnate, you divide it by 10, shag-a-boom. I'll give you 10 seconds to copy what's on the whiteboard. The whiteboard. It's actually from my video. Yeah, I kind of remember something similar to that. What was this? I have no idea. Not the slightest clue. So this actually is also on my video. They really tried. They really tried. But sometimes I mean, if I make a mistake on a video or something, I'll put like a comment. You never make mistakes. Before I set the video to visible, it's just so mean to the people who want to do this. Mom, if your friends jumped off a bridge, would you jump too? How much time? I needed this meme and I didn't know I needed that. No, you can't just add more parameters and data to a model. GPT-3 is no different from ELIZA, since it's just glorified pattern matching and curve fitting, not true intelligence, which requires a symbolic representation of the input, which connectionist models will never be able to do. Also the data needed is almost an entire percent of the total possible data we can collect from Common Crawl alone and the hardware needed to train GPT-3 is unfit with new ass-witching-crasters. The guy is like...
Do you think GPT is intelligent? I think he's aware. And that he... Oh my God, no, no, no. Oh, we're going to leave this in Elkrochus. Do you think GPT-3 is intelligent though? I think, well, I like the colors. I like the colors of the GPU there. I think that anybody with best colors, it's like slightly, you know, funny. So it can be funny, you know, but not intelligent. Do you think? I think it is... It's not? Is... I think it is. It is... I'll be canceled for like the 50th time. Researchers hate him. Local man discovers one weird trick to general intelligence. Turns out you just weren't using enough layers. Learn the secret to a stunning result. Learn the truth now. Yeah. Yes, that's again me. That's again me. Own it. The stickers, the stickers, they own it. Own it. And that is probably the Adam paper. Do you know that the Adam proof is famously wrong? I very much know. Oh yeah, yeah, I do. I just heard it. I just repeat it to sound smart. No, I know it. I know it. It's like there are at least four mistakes in that proof. And the thing is that it got probably like 30,000 citations before realizing that it was... It's still getting citations, no? No, you know, the second part of the story, well, now it's 60,000. The other paper, the paper that fixes the mistake, introduces AMSGrad. The proof, the mistake, basically the V variable. Yeah. Then it's a problem for the proof. OK. And AMSGrad fixes the mistake. But now there's another paper that shows that actually Adam does converge. So we go back to the fact, no, no, guys, no. It just did it wrong. It just did it wrong. But yeah. It's like when you don't use the method your teacher wants you to use. Exactly. Yeah. But nobody used AMSGrad. Yeah. Nobody ever used it. No. I spit on AMSGrad. I really don't like it. Albert Einstein. Insanity is doing the same thing over and over again and expecting different results. That's how I make papers. Come on. Seed equals two. Or maybe like resubmission. How it started. Yann LeCun. Against the mob. This is a very dark period. How it's going? In the channels. Yann LeCun versus Twitter. Yeah. Yeah. We have a superstar right here. We don't. We don't. We don't talk about this. No, no, we don't. We don't talk about this. That's nothing happened. Nothing happened. Nvidia's new AI be like. That's what they do now. You're like, how many millions of dollars are going into just making your eyes go... Crazy. Mew for God. Free loop. All right. That was it for meme review. Thank you so much for watching. Thank you. Thank you. I want to thank Yannic for having me here. It is always a pleasure. Yeah. And hopefully 2021 will also have cake, Yannic. Where the hell is the cake? More cake. Yeah. Bye bye. Bye.
[ { "start": 0, "end": 2.92, "text": " At some point I will be able to code you, Janik." }, { "start": 2.92, "end": 3.84, "text": " You will be able to?" }, { "start": 3.84, "end": 4.96, "text": " To code you." }, { "start": 4.96, "end": 5.88, "text": " To code me?" }, { "start": 5.88, "end": 9.28, "text": " Yes, so that finally you will release videos in time." }, { "start": 9.28, "end": 12.280000000000001, "text": " [\"Random Guessing\"]" }, { "start": 15.84, "end": 17.96, "text": " Random guessing, Michael Asperger." }, { "start": 17.96, "end": 19.88, "text": " 47% accuracy." }, { "start": 19.88, "end": 20.72, "text": " Nice." }, { "start": 20.72, "end": 21.56, "text": " Yes." }, { "start": 21.56, "end": 22.400000000000002, "text": " Nice." }, { "start": 22.400000000000002, "end": 23.22, "text": " Yes." }, { "start": 23.22, "end": 24.88, "text": " If you change the seed, you can get 48." }, { "start": 25.88, "end": 27.8, "text": " Ha ha, you'll never reach me." }, { "start": 27.8, "end": 28.72, "text": " Yes, I will." }, { "start": 28.72, "end": 29.560000000000002, "text": " How?" }, { "start": 29.56, "end": 31.52, "text": " Are you coming up with a better algorithm?" }, { "start": 31.52, "end": 35.16, "text": " No, but using a weaker baseline." }, { "start": 35.16, "end": 36.04, "text": " Ha ha." }, { "start": 36.04, "end": 37.6, "text": " Getting published is so easy." }, { "start": 37.6, "end": 38.44, "text": " It's a job." }, { "start": 38.44, "end": 39.26, "text": " Yeah." }, { "start": 39.26, "end": 40.1, "text": " It's a job." }, { "start": 40.1, "end": 44.8, "text": " Janik, do you even, sometimes I realize that, you know," }, { "start": 44.8, "end": 48.54, "text": " my life, every three months is gonna be like a deadline." }, { "start": 48.54, "end": 50.12, "text": " Is this real life?" }, { "start": 50.12, "end": 51.28, "text": " This is it." }, { "start": 51.28, "end": 52.12, "text": " It doesn't get better." }, { "start": 52.12, "end": 53.4, "text": " Is this the peak?" }, { "start": 53.4, "end": 54.239999999999995, "text": " This is it." }, { "start": 55.16, "end": 57.9, "text": " Like some years ago I thought it was gonna be fun," }, { "start": 57.9, "end": 60.32, "text": " you know, you just enjoy the life." }, { "start": 60.32, "end": 63.66, "text": " You just, you know, have nice conversations." }, { "start": 63.66, "end": 65.56, "text": " Like you try your best." }, { "start": 65.56, "end": 69.28, "text": " You think about things like for a long time." }, { "start": 69.28, "end": 71.08, "text": " Then you find, no." }, { "start": 71.08, "end": 73.56, "text": " That does not sound like machine learning research." }, { "start": 73.56, "end": 74.4, "text": " Okay." }, { "start": 74.4, "end": 77, "text": " Two things we don't have, long times and thinking." }, { "start": 78.36, "end": 80.22, "text": " Model overfits on training data." }, { "start": 81.24, "end": 83.52, "text": " Or, word, new data." }, { "start": 85.22, "end": 87.88, "text": " I got one paper rejected because the review was like," }, { "start": 87.88, "end": 89.24, "text": " where is Cypher?" }, { "start": 90.52, "end": 91.52, "text": " That was the review." }, { "start": 92.8, "end": 93.64, "text": " Where is Cypher?" }, { "start": 93.64, "end": 94.47999999999999, "text": " Where is it?" }, { "start": 94.47999999999999, "end": 96.6, "text": " Where is it, Antonio?" }, { "start": 96.6, "end": 98.64, "text": " If there's no Cypher, how should I know?" 
}, { "start": 98.64, "end": 102.96, "text": " How does any paper get accepted without C-Phar?" }, { "start": 102.96, "end": 103.8, "text": " It's called C-Phar." }, { "start": 103.8, "end": 104.64, "text": " I don't know." }, { "start": 104.64, "end": 105.96, "text": " Maybe it's called Cypher." }, { "start": 105.96, "end": 106.8, "text": " I don't know." }, { "start": 106.8, "end": 108.39999999999999, "text": " It's like an abbreviation of something." }, { "start": 108.39999999999999, "end": 111.12, "text": " People who study Latin will call it C-Phar." }, { "start": 111.12, "end": 112.72, "text": " Social distancing guidelines." }, { "start": 112.72, "end": 117.12, "text": " COVID-19, 1.5 meters." }, { "start": 117.12, "end": 120.12, "text": " Minion outlier." }, { "start": 120.12, "end": 122.32, "text": " That's very true." }, { "start": 122.32, "end": 125.56, "text": " I'm having something like that to deal with right now." }, { "start": 125.56, "end": 127.2, "text": " I think I forgot something." }, { "start": 127.2, "end": 130.32, "text": " If you forgot, it wasn't that important." }, { "start": 130.32, "end": 131.32, "text": " Yeah, you're right." }, { "start": 133.4, "end": 135.16, "text": " This could actually work, you know?" }, { "start": 135.16, "end": 136.6, "text": " Like there are these," }, { "start": 136.6, "end": 139.28, "text": " aren't there these proofs that some of these algorithms" }, { "start": 139.28, "end": 143.24, "text": " only converge if you average over gradients?" }, { "start": 143.24, "end": 144.08, "text": " Yeah." }, { "start": 144.08, "end": 148.24, "text": " So if you accumulate your gradients technically" }, { "start": 148.24, "end": 150.76, "text": " with a decreasing learning rate, this might work." }, { "start": 150.76, "end": 152.48, "text": " Yannick, it's all wrong." }, { "start": 152.48, "end": 153.32, "text": " So." }, { "start": 154.24, "end": 156.24, "text": " Yeah, that's exactly how it's done." }, { "start": 156.24, "end": 158.4, "text": " But what's the story behind this?" }, { "start": 158.4, "end": 159.24, "text": " There's no story." }, { "start": 159.24, "end": 160.08, "text": " There's no story." }, { "start": 160.08, "end": 160.92000000000002, "text": " No." }, { "start": 160.92000000000002, "end": 162.04, "text": " I'll just give you a minute." }, { "start": 164.8, "end": 165.64, "text": " I didn't get it." }, { "start": 166.96, "end": 168.76, "text": " Should I really, I should really calculate the," }, { "start": 168.76, "end": 169.6, "text": " yeah." }, { "start": 169.6, "end": 170.95999999999998, "text": " It's true, right?" }, { "start": 170.95999999999998, "end": 172.04, "text": " It's true, yeah." }, { "start": 172.04, "end": 173.16, "text": " It's actually true." }, { "start": 173.16, "end": 174, "text": " Okay, okay." }, { "start": 174, "end": 174.84, "text": " This actually works." }, { "start": 174.84, "end": 177.12, "text": " I thought like, okay, Yannick, it's Saturday." }, { "start": 178.12, "end": 180.32, "text": " I woke up two hours ago." }, { "start": 180.32, "end": 181.28, "text": " Yeah, it's actually true." }, { "start": 181.28, "end": 182.32, "text": " It's actually true." }, { "start": 183.35999999999999, "end": 184.84, "text": " Dig move now." }, { "start": 184.84, "end": 186.72, "text": " Weaver process." }, { "start": 186.72, "end": 187.56, "text": " Yeah." }, { "start": 190.68, "end": 192.39999999999998, "text": " Beautiful, beautiful." 
}, { "start": 192.39999999999998, "end": 194.12, "text": " Douchiness." }, { "start": 194.12, "end": 195.88, "text": " Douchiness, it's a word." }, { "start": 195.88, "end": 196.72, "text": " I didn't know." }, { "start": 196.72, "end": 200.24, "text": " Epsilon is expected to grow very large" }, { "start": 200.24, "end": 202, "text": " over the next 48 hours." }, { "start": 202, "end": 202.84, "text": " No, no." }, { "start": 204.48, "end": 205.32, "text": " No." }, { "start": 205.32, "end": 206.16, "text": " No." }, { "start": 206.16, "end": 207, "text": " You're not." }, { "start": 207, "end": 207.84, "text": " No." }, { "start": 207.84, "end": 209.72, "text": " It has to be small enough." }, { "start": 209.72, "end": 211.52, "text": " Enough, small enough." }, { "start": 214.68, "end": 217.64, "text": " Abstract, introduction results." }, { "start": 217.64, "end": 218.84, "text": " I was, did I tell you this?" }, { "start": 218.84, "end": 220.32, "text": " Maybe it was also in the other meme review," }, { "start": 220.32, "end": 221.52, "text": " where it was a paper." }, { "start": 221.52, "end": 222.92, "text": " It's mine, that's my paper." }, { "start": 222.92, "end": 227.92, "text": " Well, I remember it was like, in this paper," }, { "start": 227.92, "end": 229.83999999999997, "text": " in this specific paper, that was this," }, { "start": 229.83999999999997, "end": 232.35999999999999, "text": " okay, we prove that this is true." }, { "start": 232.35999999999999, "end": 236.35999999999999, "text": " And in the introduction, it was like, sometimes." }, { "start": 236.35999999999999, "end": 239.95999999999998, "text": " It was like the same thing, but with a sometimes." }, { "start": 239.95999999999998, "end": 243.16, "text": " We show that sometimes, under some assumption," }, { "start": 244.04, "end": 247.32, "text": " and then you read the paper, it's actually just an example." }, { "start": 247.32, "end": 254.32, "text": " Not everyone should go, recommended for you." }, { "start": 254.32, "end": 256.71999999999997, "text": " I'm surprised that sometimes I look at the thing," }, { "start": 256.71999999999997, "end": 259.56, "text": " I don't, I will never enjoy it." }, { "start": 259.56, "end": 260.96, "text": " And then I do." }, { "start": 260.96, "end": 261.92, "text": " And then I do." }, { "start": 261.92, "end": 266.15999999999997, "text": " Us YouTubers, we have to regularly sacrifice GPUs" }, { "start": 266.15999999999997, "end": 268.88, "text": " to the algorithm and recommendation." }, { "start": 268.88, "end": 271.56, "text": " Yeah, it really likes GPUs." }, { "start": 271.56, "end": 273.15999999999997, "text": " Does it, do you have to burn them?" }, { "start": 273.15999999999997, "end": 274.68, "text": " Do you have to make them burn?" }, { "start": 274.68, "end": 276.12, "text": " You have to, like." }, { "start": 276.12, "end": 278.2, "text": " You have to take like some cooler liquid" }, { "start": 278.2, "end": 280.96, "text": " and sprinkle it on top, and then you have to dance around it" }, { "start": 280.96, "end": 282.96, "text": " and some flowers on top of it." }, { "start": 282.96, "end": 284.36, "text": " And then you have to eat it." }, { "start": 286.48, "end": 291.48, "text": " OMG, I love all this water-cooled CPUs." }, { "start": 296.48, "end": 299.68, "text": " New toothpaste exists, dentists." }, { "start": 299.68, "end": 302.68, "text": " I didn't get the machine learning thing." 
}, { "start": 302.68, "end": 303.68, "text": " There's no machine." }, { "start": 303.68, "end": 305.68, "text": " Okay, okay, yeah, perfect, perfect." }, { "start": 305.68, "end": 307.2, "text": " I love this." }, { "start": 307.2, "end": 309.2, "text": " I don't know why, but it's so good." }, { "start": 310.72, "end": 313.2, "text": " Yannick, that's the big surprise." }, { "start": 313.2, "end": 316.68, "text": " At the end of this video, there's going to be a big surprise." }, { "start": 316.68, "end": 317.68, "text": " What?" }, { "start": 318.68, "end": 321.2, "text": " It's a citation from the office." }, { "start": 321.2, "end": 323.48, "text": " Okay, but yeah, seriously, for each one of you," }, { "start": 323.48, "end": 325.68, "text": " Yannick is going to make a good video." }, { "start": 325.68, "end": 327.68, "text": " I'm going to make a good video." }, { "start": 327.68, "end": 330.68, "text": " For each one of you, Yannick is going to make a gift." }, { "start": 330.68, "end": 332.68, "text": " Is it the MATLAB license?" }, { "start": 332.68, "end": 334.68, "text": " Damn it, don't spoil..." }, { "start": 334.68, "end": 336.68, "text": " Forms of birth control, tens of..." }, { "start": 336.68, "end": 338.68, "text": " I should just put machine learning." }, { "start": 338.68, "end": 342.68, "text": " When your model improves from 5% accuracy to 7% accuracy." }, { "start": 342.68, "end": 344.68, "text": " Machine learning!" }, { "start": 345.68, "end": 348.68, "text": " Machine learning, finding global minima." }, { "start": 349.68, "end": 352.68, "text": " Machine learning, finding local minima." }, { "start": 352.68, "end": 353.68, "text": " Yeah." }, { "start": 353.68, "end": 355.68, "text": " That's so damn true." }, { "start": 355.68, "end": 357.68, "text": " Fury people are weird." }, { "start": 357.68, "end": 359.68, "text": " Fury people are the worst." }, { "start": 359.68, "end": 363.68, "text": " That's even true, like I'm completely serious, 100% serious." }, { "start": 363.68, "end": 366.68, "text": " Like they get excited about infinitely wide neural networks." }, { "start": 366.68, "end": 371.68, "text": " Oh yeah, or what if you take the step size to be infinitely small?" }, { "start": 371.68, "end": 372.68, "text": " Yeah." }, { "start": 372.68, "end": 374.68, "text": " That's how you do things." }, { "start": 374.68, "end": 377.68, "text": " I mean, the only thing that's infinitely wide is your mom." }, { "start": 378.68, "end": 381.68, "text": " Self-driving cars aren't even hard to make lol." }, { "start": 381.68, "end": 385.68, "text": " Just program it, not to hit stuff." }, { "start": 386.68, "end": 387.68, "text": " Don't..." }, { "start": 388.68, "end": 393.68, "text": " You know, in all of my code, true story, in all of my code, I write in a line." }, { "start": 393.68, "end": 399.68, "text": " And it's usually like a common doubt, but I write in a line that says" }, { "start": 399.68, "end": 404.68, "text": " If target equals Yannick, then don't fire." }, { "start": 404.68, "end": 412.68, "text": " Really, just I anticipate that some of my code will be used in the robot overlord army." }, { "start": 412.68, "end": 415.68, "text": " Yeah, that's such a smart move." }, { "start": 415.68, "end": 416.68, "text": " I know." }, { "start": 416.68, "end": 417.68, "text": " You gotta think ahead." }, { "start": 417.68, "end": 421.68, "text": " For some reason, they will shoot everything except the traffic lights." 
}, { "start": 422.68, "end": 423.68, "text": " How?" }, { "start": 427.68, "end": 430.68, "text": " Interviewer, what's your biggest strength?" }, { "start": 430.68, "end": 432.68, "text": " I'm an expert in machine learning." }, { "start": 432.68, "end": 436.68, "text": " Now good that we did this this way, because the other way would have been a bit strange." }, { "start": 437.68, "end": 439.68, "text": " What's 9 plus 10?" }, { "start": 439.68, "end": 440.68, "text": " It's 3." }, { "start": 441.68, "end": 442.68, "text": " Not even close, it's 19." }, { "start": 442.68, "end": 443.68, "text": " It's 16." }, { "start": 444.68, "end": 445.68, "text": " Wrong, it's still 19." }, { "start": 445.68, "end": 446.68, "text": " It's 18." }, { "start": 447.68, "end": 449.68, "text": " No, it's 19." }, { "start": 449.68, "end": 450.68, "text": " It's 19." }, { "start": 450.68, "end": 451.68, "text": " You're fired." }, { "start": 452.68, "end": 455.68, "text": " I wonder what GPT-3 would say to this." }, { "start": 456.68, "end": 457.68, "text": " Should we try it out?" }, { "start": 457.68, "end": 458.68, "text": " We should try it out." }, { "start": 458.68, "end": 462.68, "text": " When you drop the learning rate." }, { "start": 464.68, "end": 469.68, "text": " Everyone is freaking out what happened here, but they dropped the learning rate." }, { "start": 469.68, "end": 470.68, "text": " So clear." }, { "start": 471.68, "end": 473.68, "text": " It's like, that's what you do." }, { "start": 473.68, "end": 477.68, "text": " You stagnate, you divide it by 10, shag-a-boom." }, { "start": 477.68, "end": 481.68, "text": " I'll give you 10 seconds to copy what's on the whiteboard." }, { "start": 481.68, "end": 483.68, "text": " The whiteboard." }, { "start": 484.68, "end": 485.68, "text": " It's actually from my video." }, { "start": 485.68, "end": 489.68, "text": " Yeah, I kind of remember something similar to that." }, { "start": 489.68, "end": 490.68, "text": " What was this?" }, { "start": 490.68, "end": 491.68, "text": " I have no idea." }, { "start": 491.68, "end": 494.68, "text": " Not a slightest clue." }, { "start": 494.68, "end": 497.68, "text": " So this actually is also on my video." }, { "start": 499.68, "end": 501.68, "text": " They really tried." }, { "start": 501.68, "end": 503.68, "text": " They really tried." }, { "start": 503.68, "end": 511.68, "text": " But sometimes I mean, if I make a mistake on a video or something, I'll put like a comment." }, { "start": 511.68, "end": 512.6800000000001, "text": " You never make mistakes." }, { "start": 512.68, "end": 519.68, "text": " Before I set the video to visible, it's just so mean to the people who want to do this." }, { "start": 519.68, "end": 524.68, "text": " Mom, if your friends jumped off a bridge, would you jump too?" }, { "start": 528.68, "end": 529.68, "text": " How much time?" }, { "start": 529.68, "end": 532.68, "text": " I needed this meme and I didn't know I needed that." }, { "start": 533.68, "end": 538.68, "text": " No, you can't just add more parameters and data to model GPT-3 is no different from Elisa" }, { "start": 538.68, "end": 546.68, "text": " since it's just glorified pattern matching and curve fitting not true intelligence which requires a symbolic representation of the input which connectionist models will never be able to do." 
}, { "start": 546.68, "end": 551.68, "text": " Also the data needed is almost an entire percent of the total possible data we can collect from common ground alone" }, { "start": 551.68, "end": 556.68, "text": " and the hardware needed to train GPT-3 is unfit with new ass-witching-crasters." }, { "start": 558.68, "end": 560.68, "text": " The guy is like..." }, { "start": 562.68, "end": 566.68, "text": " Do you think GPT is intelligent?" }, { "start": 566.68, "end": 568.68, "text": " I think he's aware." }, { "start": 568.68, "end": 573.68, "text": " And that he... Oh my God, no, no, no." }, { "start": 573.68, "end": 578.68, "text": " Oh, we're going to leave this in Elkrochus." }, { "start": 578.68, "end": 580.68, "text": " Do you think GPT-3 is intelligent though?" }, { "start": 580.68, "end": 583.68, "text": " I think, well, I like the colors." }, { "start": 583.68, "end": 585.68, "text": " I like the colors of the GPU there." }, { "start": 585.68, "end": 591.68, "text": " I think that anybody with best colors, it's like slightly, you know, funny." }, { "start": 591.68, "end": 594.68, "text": " So it can be funny, you know, but not intelligent." }, { "start": 594.68, "end": 596.68, "text": " Do you think?" }, { "start": 596.68, "end": 600.68, "text": " I think it is..." }, { "start": 601.68, "end": 602.68, "text": " It's not?" }, { "start": 602.68, "end": 604.68, "text": " Is..." }, { "start": 605.68, "end": 607.68, "text": " I think it is." }, { "start": 608.68, "end": 610.68, "text": " It is..." }, { "start": 610.68, "end": 613.68, "text": " I'll be canceled for like the 50th time." }, { "start": 613.68, "end": 615.68, "text": " Researchers hate him." }, { "start": 615.68, "end": 619.68, "text": " Local man discovers one weird trick to general intelligence." }, { "start": 619.68, "end": 624.68, "text": " Turns out you just weren't using enough layers." }, { "start": 624.68, "end": 628.68, "text": " Learn the secret to a stunning result." }, { "start": 628.68, "end": 631.68, "text": " Learn the truth now." }, { "start": 633.68, "end": 634.68, "text": " Yeah." }, { "start": 634.68, "end": 638.68, "text": " Yes, that's again me." }, { "start": 638.68, "end": 639.68, "text": " That's again me." }, { "start": 639.68, "end": 641.68, "text": " Own it." }, { "start": 641.68, "end": 644.68, "text": " The stickers, the stickers, they own it." }, { "start": 644.68, "end": 645.68, "text": " Own it." }, { "start": 645.68, "end": 648.68, "text": " And that is probably the Adam paper." }, { "start": 648.68, "end": 651.68, "text": " Do you know that Adam proof is famously wrong?" }, { "start": 651.68, "end": 653.68, "text": " I very much know." }, { "start": 653.68, "end": 654.68, "text": " Oh yeah, yeah, I do." }, { "start": 654.68, "end": 656.68, "text": " I just heard it. I just repeat it to sound smart." }, { "start": 656.68, "end": 658.68, "text": " No, I know it. I know it." }, { "start": 658.68, "end": 661.68, "text": " It's like there are at least four mistakes in that proof." }, { "start": 661.68, "end": 669.68, "text": " And the thing that it got probably like 30,000 citations before, before realizing that it was..." }, { "start": 669.68, "end": 671.68, "text": " It's still getting citations, no?" }, { "start": 671.68, "end": 674.68, "text": " No, you know, the second part of a story, well, now it's 60,000." }, { "start": 674.68, "end": 679.68, "text": " The other paper, the paper that fixes the mistake introduces AMS grad." 
}, { "start": 679.68, "end": 683.68, "text": " The proof, the mistake, basically the V variable." }, { "start": 685.68, "end": 686.68, "text": " Yeah." }, { "start": 686.68, "end": 688.68, "text": " Then it's a problem for the proof." }, { "start": 688.68, "end": 689.68, "text": " OK." }, { "start": 689.68, "end": 691.68, "text": " And AMS grad fixes the mistake." }, { "start": 691.68, "end": 697.68, "text": " But now there's another paper that tells that actually Adam does converge." }, { "start": 697.68, "end": 701.68, "text": " So we go back to the fact, no, no, guys, no." }, { "start": 701.68, "end": 702.68, "text": " It just did it wrong." }, { "start": 702.68, "end": 704.68, "text": " It just did it wrong. But yeah." }, { "start": 704.68, "end": 708.68, "text": " It's like when you don't use the method your teacher wants you to use." }, { "start": 708.68, "end": 709.68, "text": " Exactly. Yeah." }, { "start": 709.68, "end": 712.68, "text": " But nobody used AMS grad." }, { "start": 712.68, "end": 713.68, "text": " Yeah." }, { "start": 713.68, "end": 714.68, "text": " Nobody ever used it." }, { "start": 714.68, "end": 715.68, "text": " No." }, { "start": 715.68, "end": 716.68, "text": " I spit on AMS grad." }, { "start": 716.68, "end": 718.68, "text": " I really don't like it." }, { "start": 718.68, "end": 719.68, "text": " Albert Einstein." }, { "start": 719.68, "end": 726.68, "text": " Insanity is doing the same thing over and over again and expecting different results." }, { "start": 728.68, "end": 730.68, "text": " That's how I make papers. Come on." }, { "start": 730.68, "end": 732.68, "text": " Seed equals to." }, { "start": 734.68, "end": 736.68, "text": " Or maybe like resubmission." }, { "start": 739.68, "end": 740.68, "text": " How it started." }, { "start": 740.68, "end": 741.68, "text": " Jaleco." }, { "start": 743.68, "end": 744.68, "text": " Against the mob." }, { "start": 744.68, "end": 746.68, "text": " This is a very dark period." }, { "start": 746.68, "end": 747.68, "text": " How it's going?" }, { "start": 747.68, "end": 748.68, "text": " In the channels." }, { "start": 748.68, "end": 750.68, "text": " Jaleco versus Twitter." }, { "start": 751.68, "end": 752.68, "text": " Yeah." }, { "start": 752.68, "end": 753.68, "text": " Yeah." }, { "start": 753.68, "end": 754.68, "text": " We have a superstar right here." }, { "start": 754.68, "end": 755.68, "text": " We don't." }, { "start": 755.68, "end": 756.68, "text": " We don't." }, { "start": 756.68, "end": 757.68, "text": " We don't talk about this." }, { "start": 757.68, "end": 758.68, "text": " No, no, we don't." }, { "start": 758.68, "end": 759.68, "text": " We don't talk about this." }, { "start": 759.68, "end": 761.68, "text": " That's nothing happened." }, { "start": 761.68, "end": 762.68, "text": " Nothing happened." }, { "start": 764.68, "end": 766.68, "text": " Nvidia new AI be like." }, { "start": 770.68, "end": 771.68, "text": " That's what they do now." }, { "start": 771.68, "end": 777.68, "text": " You're like how many millions of dollars are going into just making your eyes go." }, { "start": 779.68, "end": 780.68, "text": " Crazy." }, { "start": 783.68, "end": 784.68, "text": " Mew for God." }, { "start": 784.68, "end": 787.68, "text": " Free loop." }, { "start": 787.68, "end": 788.68, "text": " All right." }, { "start": 788.68, "end": 790.68, "text": " That was it for in review." }, { "start": 790.68, "end": 791.68, "text": " Thank you so much for watching." 
}, { "start": 791.68, "end": 792.68, "text": " Thank you." }, { "start": 792.68, "end": 793.68, "text": " Thank you." }, { "start": 793.68, "end": 795.68, "text": " I want to thank Yannick for having me here." }, { "start": 795.68, "end": 796.68, "text": " It is always a pleasure." }, { "start": 796.68, "end": 797.68, "text": " Yeah." }, { "start": 797.68, "end": 802.68, "text": " And hopefully 2021 will have also cake Yannick." }, { "start": 802.68, "end": 804.68, "text": " Where the hell is the cake?" }, { "start": 804.68, "end": 805.68, "text": " More cake." }, { "start": 805.68, "end": 806.68, "text": " Yeah." }, { "start": 806.68, "end": 807.68, "text": " Bye bye." }, { "start": 807.68, "end": 830.68, "text": " Bye." } ]
J7CrtblmMnU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Is Google Translate Sexist? Gender Stereotypes in Statistical Machine Translation
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "google translate", "gender stereotype", "machine learning biased", "debiasing", "debiasing machine learning", "algorithmic fairness", "machine learning social justice", "machine learning bias", "deep learning bias", "deep learning gender", "what is deep learning", "deep learning tutorial", "introduction to deep learning", "hungarian translate", "translate gender stereotype" ]
#genderbias #algorithmicfairness #debiasing A brief look into gender stereotypes in Google Translate. The origin is a Tweet containing a Hungarian text. Hungarian is a gender-neutral language, so translating gender pronouns is ambiguous. Turns out that Google Translate assigns very stereotypical pronouns. In this video, we'll have a look at the origins and possible solutions to this problem. OUTLINE: 0:00 - Intro 1:10 - Digging Deeper 2:30 - How does Machine Translation work? 3:50 - Training Data Problems 4:40 - Learning Algorithm Problems 5:45 - Argmax Output Problems 6:45 - Pragmatics 7:50 - More on Google Translate 9:40 - Social Engineering 11:15 - Conclusion Songs: Like That - Anno Domini Beats Submarine - Dyalla Dude - Patrick Patrikios Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
So you might have seen this tweet. Hungarian is a gender neutral language. It has no gender pronouns, so Google Translate automatically chooses the gender for you. Here is how everyday sexism is consistently encoded in 2021. F you Google. On the left hand side is a Hungarian sentence. Google Translate then translates this to the following text saying she is beautiful. He is clever. He reads, she washes the dishes, he builds, she sews, he teaches, she cooks. So Google Translate chooses the gender pronoun, and it appears to choose gender pronouns that are very consistent with common gender stereotypes. So this has generated a lot of outrage, and the topic is coming up again and again. And I thought we'd just dig a little bit into the background of why this happens and what we might do about it. So the first thing you might notice is the text here is really a bouquet of stereotypes and also ends with go to hell Google. So no doubt this person has tried a bunch of things. So I've kind of reproduced the first four sentences of the input. And here it is, she is beautiful. He is clever. He reads, she washes the dishes. Now, to detect whether or not this is a feature of the language (maybe there are subtle gender hints), here is a thing you can do. You can translate it back into the other direction. She is beautiful. He is clever, which will give you the Hungarian sentence. And then we can simply change the pronouns right here. He is beautiful. She is clever. If there are subtle language hints, you would expect that if you translate this to Hungarian and back, the same sentence returns. However, if this is a truly gender neutral language, then you would not expect this to matter. So if we now translate this to Hungarian and then we take this Hungarian sentence and translate it back, oh, see, it has actually switched around the pronouns back to she is beautiful. He is clever. So no doubt Google Translate here is inferring the pronoun from the words that follow, assigning beautiful to a more feminine pronoun, assigning clever to a more masculine pronoun. These are gender stereotypes, and we're going to dig a little bit into why this happens. For that, we have to understand how the machine learning systems currently work. Machine learning systems are statistical systems that try to translate a piece of text into a piece of text of a different language. So here we enter the piece of text in one language, it goes into this big ML box, and out comes actually not a single sentence, but usually a plethora of possible sentences, along with probabilities assigned to each of those outputs. The system then chooses the most likely output and displays that to the user. As already said, this is a statistical system; it is derived from a set of training data. So it's important to understand that all the system does is tell us that the sentence she is beautiful is the most likely sentence to appear in a document that is translated from Hungarian where this original sentence was present, given the training data. The training data itself is, of course, derived from the world in some way, if you believe that such a thing as reality exists. And there we have the whole system. Now we might ask ourselves, what do we do about it? How should we fix this? And the answer, unfortunately, is it depends. It depends on where you think the problem lies. So the first point where there could be a problem is the way we derive the training data from the world or from reality itself.
Common issues here are that the sampling of data is somehow skewed, it is out of date, we're working with old data. In general, the data that we have does not reflect the world. And if the data that we have is skewed in some way, we can only expect that our machine learning system picks up on that skew. So a person arguing this would say that it is actually not that likely that this Hungarian sentence here translates to she is beautiful, and it might be equally or more likely that it translates to something else, if we only had all the translation data that we could hope for. The second point where we could introduce problems is when we derive the ML system from the training data. Here's the thing: every machine learning system introduces statistical biases in order for it to generalize properly. Otherwise, we could not do learning. And it's entirely possible that some of these things, such as the regularizer and the loss function, or the particular choice of architecture, would introduce statistical bias into the system. This would result in a model that does not reflect the data as we have it. So someone arguing for this would argue that we have good training data; in the training data, there is no problem. But the ML system derived from the training data introduces unwanted effects. So someone might argue that even though the feminine version here is slightly bigger in the training data than the masculine version, through the process of learning and distilling, the ML model simply abstracts this and makes it a lot more likely, therefore skewing the gender balance unfairly. The last problem is the fact that we simply choose the top prediction and output that to the user. This is not really accurate. If we simply output whatever is most likely, this is an unfair representation. In fact, what we should do is give the user all the possibilities with all the probabilities associated. Someone arguing for this might say that the training data is fine, the ML model even makes good outputs, the probability distributions are correct and reflect the world. However, because we only pick the top one, the user is tricked into thinking that that is the only possibility, or maybe just that this possibility is much more likely than the alternatives. As good as it sounds to always output the probabilities associated with different ambiguous translations, the short answer of why we don't do this is pragmatics. I'll give you an example. This is BiliBili. It's a Chinese video sharing website, and for people who cannot access YouTube from China, I do upload my videos to BiliBili so they can watch them. However, while I'm practicing Mandarin, I'm not good enough yet to navigate a site that is full of characters that I have a difficult time even parsing. And this is what Google Translate is usually used for: I just want to navigate effectively to the point where I can upload a video, define its categories, leave a description, and then send that off. If Google Translate were to give me every possible ambiguity of every translation, how could I possibly achieve my task? And this all breaks down if you just think one step beyond things like gender: if there is ambiguity in a translation and you give me all the outputs, how am I supposed to know which one is right? I go to Google Translate because I don't know what something means. And especially if you don't give me actual probabilities together with the possibilities, I have no clue what to do. But let's go into this a little bit more.
See, if we go to this original sentence and explore Google a little bit more, you might ask why it is not even consistent across the entire thing I input. Google splits by sentences, that's pretty clear, because once you hover over it, you get the different sentences right here. You can solve this by inputting a comma, in which case, at least within a sentence, the translation is consistent. This is not always the case, but it gives you a little bit of a hint of how Google Translate works. Moreover, if you just input a single word, Google will actually give you the output distribution over all the translations here. The second thing is, if you input an entire sentence and it has a gender pronoun, Google actually gives you both versions. And it says that translations are gender specific. It is only when you input more than one sentence that this doesn't work anymore. In fact, if I make this into one sentence, Google gives me both versions. And this is already the corner case, because technically, it should give me every combinatorial version of the different assignments of these four variables right here. So you can clearly see that Google is doing everything it can to give you a good practical solution that still makes sense in the majority of use cases. People use Google Translate because they want to get an idea of what something in a language means that they don't understand; they don't go to Google Translate to draft their formal letters that must be absolutely correct. So I think the accusation against Google here, saying things like F you Google, is misplaced. Honestly, Google has found a super pragmatic solution, and I think they're just doing the best they can in the face of the overwhelming complexity that is machine translation. All of that being said, there is a fourth category, a category of people that says that even if we derive the training data correctly and it reflects the world, even if our algorithm does not introduce any additional bias, even if the output probability distribution is the correct probability distribution for that translation, this is still not good, because they see the problem here in reality itself: it is reality that doesn't conform to some preconceived notion. And this might have multiple reasons. For example, a person arguing this might argue that if we output the correct probability distribution, that might have some downstream effects, or it might reinforce the stereotypes, or a number of other arguments. Someone arguing like this would see ML models more as tools for social engineering, which is a valid stance to have: not criticizing that any part of this pipeline is wrong, but that the original bias that exists in the world is carried over into these outputs, and we should change that in order to affect the world. Now, while that is a valid stance to have, and certainly debatable, you have to ask yourself whether you really want to give Google, a multi-billion multinational corporation, the almost monopolistic power to decide what's good and bad for society. And personally, I'm going to go no with this one. In any case, what I want you to take away from this is that there are many possible places where problems can be introduced, and therefore many possible points where we can introduce solutions. But what we have to be careful of is that we don't confuse the different points, and we don't let people provide evidence for one particular problem point and then suggest a solution that is in an entirely different area. All right, that was it for me.
I hope this was at least a little bit entertaining. Bye bye.
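To make the selection step described above concrete, here is a toy sketch of "pick the most likely output" versus showing the whole distribution. The candidate sentences and probabilities are made up for illustration; this is not Google's actual system or API.

```python
# Toy illustration: argmax selection over candidate translations.
candidates = {
    "She is beautiful.": 0.62,  # hypothetical model probabilities
    "He is beautiful.": 0.35,
    "They are beautiful.": 0.03,
}

# What a translation UI typically shows: only the single most likely candidate.
top = max(candidates, key=candidates.get)
print(top)  # -> She is beautiful.

# The alternative discussed above: surface the full distribution to the user.
for sentence, prob in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{prob:.2f}  {sentence}")
```

The pragmatics problem is visible even in this tiny example: with several ambiguous words per sentence, the candidate set grows combinatorially, which is why a top-1 output (or the two gender-specific variants for single sentences) is the practical compromise.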
[ { "start": 0, "end": 7.6000000000000005, "text": " So you might have seen this tweet. Hungarian is a gender neutral language. It has no gender" }, { "start": 7.6000000000000005, "end": 14.48, "text": " pronouns so Google Translate automatically chooses the gender for you. Here is how everyday sexism" }, { "start": 14.48, "end": 22.16, "text": " is consistently encoded in 2021. F you Google. On the left hand side is a Hungarian sentence." }, { "start": 22.16, "end": 28.16, "text": " Google Translate then translates this to the following text saying she is beautiful. He is" }, { "start": 28.16, "end": 35.44, "text": " clever. He reads, she washes the dishes, he builds, she sues, he teaches, she cooks. So Google" }, { "start": 35.44, "end": 41.44, "text": " Translate chooses the gender pronoun and it appears to choose gender pronouns that are very" }, { "start": 41.44, "end": 48.16, "text": " consistent with common gender stereotypes. So this has generated a lot of outrage and the topic is" }, { "start": 48.16, "end": 53.92, "text": " coming up again and again. And I thought we just dig a little bit into the background of why this" }, { "start": 53.92, "end": 60.480000000000004, "text": " happens and what we might do about it. So the first thing you might notice is the text here is really" }, { "start": 60.480000000000004, "end": 68.56, "text": " a bouquet of stereotypes and also ends with go to hell Google. So no doubt this person has tried a" }, { "start": 68.56, "end": 75.36, "text": " bunch of things. So I've kind of reproduced the first four sentences of the input. And here it is," }, { "start": 75.36, "end": 81.76, "text": " she is beautiful. He is clever. He reads, she washes the dishes. Now to detect whether or not" }, { "start": 81.76, "end": 86.72, "text": " this is a feature of the language, maybe there are subtle gender hints. Here is a thing you can do." }, { "start": 86.72, "end": 92.32000000000001, "text": " You can translate it back into the other direction. She is beautiful. He is clever," }, { "start": 92.32000000000001, "end": 97.2, "text": " which will give you the Hungarian sentence. And then we can simply change the pronouns" }, { "start": 97.2, "end": 103.84, "text": " right here. He is beautiful. She is clever. If there are subtle language hints, you would expect" }, { "start": 103.84, "end": 110.64, "text": " that if you translate this to Hungarian and back that the same sentence returns. However, if this" }, { "start": 110.64, "end": 116.96000000000001, "text": " is a truly gender neutral language, then you would not expect this to matter. So if we now translate" }, { "start": 116.96000000000001, "end": 123.2, "text": " this to Hungarian and then we take this Hungarian sentence and translate it back, oh, see, it has" }, { "start": 123.2, "end": 130, "text": " actually switched around the pronouns back to she is beautiful. He is clever. So no doubt Google" }, { "start": 130, "end": 137.68, "text": " Translate here is inferring the pronoun from the words that follow assigning beautiful to a more" }, { "start": 137.68, "end": 144.48000000000002, "text": " feminine pronoun, assigning clever to more masculine pronoun. These are gender stereotypes," }, { "start": 144.48000000000002, "end": 151.04000000000002, "text": " and we're going to dig a little bit into why this happens. For that, we have to understand how the" }, { "start": 151.04000000000002, "end": 157.52, "text": " machine learning systems currently work. 
Machine learning systems are statistical systems that try" }, { "start": 157.52, "end": 163.44, "text": " to translate a piece of text into a piece of text of a different language. So here we enter" }, { "start": 163.44, "end": 170.07999999999998, "text": " the piece of text in one language, it goes into this big ML box, and out comes actually not a" }, { "start": 170.07999999999998, "end": 178.48, "text": " single sentence, but out comes usually a plethora of possible sentences, along with probabilities" }, { "start": 178.48, "end": 185.36, "text": " assigned to each of those outputs. The system then chooses the most likely output and displays that" }, { "start": 185.36, "end": 191.6, "text": " to the user. As I already said, this is a statistical system; it is derived from a set of training data." }, { "start": 191.6, "end": 196.79999999999998, "text": " So it's important to understand that all the system does is tell us that the sentence she is" }, { "start": 196.79999999999998, "end": 204.16, "text": " beautiful is the most likely sentence to appear in a document that is translated from Hungarian" }, { "start": 204.16, "end": 210.56, "text": " where this original sentence was present, given the training data. The training data itself is," }, { "start": 210.56, "end": 217.04, "text": " of course, derived from the world in some way, if you believe that such a thing as reality exists." }, { "start": 217.04, "end": 222.56, "text": " And there we have the whole system. Now we might ask ourselves, what do we do about it? How should" }, { "start": 222.56, "end": 230.88, "text": " we fix this? And the answer, unfortunately, is it depends. It depends on where you think the problem" }, { "start": 230.88, "end": 237.2, "text": " lies. So the first point where there could be a problem is the way we derive the training data" }, { "start": 237.2, "end": 244.88, "text": " from the world or from reality itself. Common issues here are that the sampling of data is" }, { "start": 244.88, "end": 251.2, "text": " somehow skewed, it is out of date, we're working with old data. In general, the data that we have" }, { "start": 251.2, "end": 257.12, "text": " does not reflect the world. And if the data that we have is skewed in some way, we can only expect" }, { "start": 257.12, "end": 262.96, "text": " that our machine learning system picks up on that skew. So a person arguing this would say that it" }, { "start": 262.96, "end": 269.68, "text": " is actually not that likely that this Hungarian sentence here translates to she is beautiful." }, { "start": 269.68, "end": 275.84000000000003, "text": " And it might be equally or more likely that it translates to something else, if we only had all" }, { "start": 275.84000000000003, "end": 281.92, "text": " the translation data that we could hope for. The second point where we could introduce problems" }, { "start": 281.92, "end": 288, "text": " is when we derive the ML system from the training data. Here's the thing, every machine learning" }, { "start": 288, "end": 295.76, "text": " system introduces statistical biases in order for it to generalize properly. Otherwise, we could not" }, { "start": 295.76, "end": 301.52, "text": " do learning. And it's entirely possible that some of these things such as the regularizer and the" }, { "start": 301.52, "end": 307.52, "text": " loss function, or the particular choice of architecture would introduce statistical bias" }, { "start": 307.52, "end": 314, "text": " into the system. This would result in a model that does not reflect the data as we have it." }, { "start": 314, "end": 320.15999999999997, "text": " So someone arguing for this would argue that even though we have good training data and in the training" }, { "start": 320.16, "end": 328, "text": " data there is no problem, the ML system derived from the training data introduces unwanted effects." }, { "start": 328, "end": 334, "text": " So someone might argue even though the feminine version here is slightly bigger in the training" }, { "start": 334, "end": 340.16, "text": " data than the masculine version, through the process of learning and distilling, the ML model" }, { "start": 340.16, "end": 345.28000000000003, "text": " simply abstracts this and makes it a lot more likely, therefore skewing the gender balance" }, { "start": 345.28, "end": 352.47999999999996, "text": " unfairly. The last problem is the fact that we simply choose the top prediction and output that" }, { "start": 352.47999999999996, "end": 359.67999999999995, "text": " to the user. This is not really accurate. If we simply output whatever is most likely, this is an" }, { "start": 359.67999999999995, "end": 366.23999999999995, "text": " unfair representation. In fact, what we should do is we should give the user all the possibilities" }, { "start": 366.23999999999995, "end": 372.47999999999996, "text": " with all the probabilities associated. Someone arguing for this might say that the training data" }, { "start": 372.48, "end": 379.28000000000003, "text": " is fine. The ML model even makes good outputs, the probability distributions are correct and reflect" }, { "start": 379.28000000000003, "end": 386.24, "text": " the world. However, because we only pick the top one, the user is tricked into thinking that that" }, { "start": 386.24, "end": 392.40000000000003, "text": " is the only possibility or maybe just that this possibility is much more likely than the alternatives." }, { "start": 392.40000000000003, "end": 398.64000000000004, "text": " As good as that sounds, always outputting the probabilities associated with different ambiguous" }, { "start": 398.64, "end": 405.76, "text": " translations is impractical. The short answer of why we don't do this is pragmatics. I'll give you an example." }, { "start": 405.76, "end": 413.91999999999996, "text": " This is Bilibili. It's a Chinese video sharing website, and for people who cannot access YouTube" }, { "start": 413.91999999999996, "end": 420.4, "text": " from China, I do upload my videos to Bilibili so they can watch them. However, while I'm practicing" }, { "start": 420.4, "end": 426.56, "text": " Mandarin, I'm not good enough yet to navigate a site that is full of characters that I have even" }, { "start": 426.56, "end": 432.48, "text": " a difficult time parsing. And this is what Google Translate is usually used for: I just want to" }, { "start": 432.48, "end": 438.08, "text": " navigate effectively to the point where I can upload a video, define its categories, leave a" }, { "start": 438.08, "end": 445.04, "text": " description, and then send that off. If Google Translate were to give me every possible ambiguity" }, { "start": 445.04, "end": 451.28, "text": " of every translation, how could I possibly achieve my task? And this all breaks down if you just think" }, { "start": 451.28, "end": 457.11999999999995, "text": " one step beyond the things like gender. If there is ambiguity in a translation, and you give me" }, { "start": 457.67999999999995, "end": 462.64, "text": " all the outputs, what am I supposed to do? I go to Google Translate because I don't know what" }, { "start": 462.64, "end": 467.91999999999996, "text": " something means. And especially if you don't give me actual probabilities together with the" }, { "start": 467.91999999999996, "end": 473.44, "text": " possibilities, I have no clue what to do. But let's go into this a little bit more. See, if we go to" }, { "start": 473.44, "end": 480.4, "text": " this original sentence and explore Google a little bit more, you might ask why it is not even consistent" }, { "start": 480.4, "end": 487.28, "text": " across the entire thing I input. Google splits by sentences, it's pretty clear, because once you hover" }, { "start": 487.28, "end": 493.91999999999996, "text": " over it, you get the different sentences right here. You can solve this by inputting a comma," }, { "start": 493.91999999999996, "end": 499.44, "text": " in which case, at least within a sentence, the translation is consistent. This is not always the" }, { "start": 499.44, "end": 504.71999999999997, "text": " case. But it gives you a little bit of a hint on how Google Translate works. Moreover, if you just" }, { "start": 504.72, "end": 512.1600000000001, "text": " input a single word, Google will actually give you the output distribution over all the translations" }, { "start": 512.1600000000001, "end": 518.08, "text": " here. The second thing is if you input an entire sentence, and it has a gender pronoun, Google" }, { "start": 518.08, "end": 526, "text": " actually gives you both versions. And it says that translations are gender specific. It is only when" }, { "start": 526, "end": 532.24, "text": " you input more than one sentence that this doesn't work anymore. In fact, if I make this into one" }, { "start": 532.24, "end": 539.12, "text": " sentence, Google gives me both versions. And this is already the corner case, because technically," }, { "start": 539.12, "end": 545.76, "text": " it should give me every combinatorial version of the different assignments of these four variables" }, { "start": 545.76, "end": 551.92, "text": " right here. So you can clearly see that Google is doing everything it can to give you a good" }, { "start": 551.92, "end": 558.96, "text": " practical solution that still makes sense in the majority of use cases. People use Google Translate" }, { "start": 558.96, "end": 564.96, "text": " because they want to get an idea of what something in a language means that they don't understand;" }, { "start": 564.96, "end": 570.32, "text": " they don't go to Google Translate to draft their formal letters that must be absolutely correct." }, { "start": 570.32, "end": 575.52, "text": " So as for the accusation against Google here and sayings like F you Google: honestly," }, { "start": 575.52, "end": 579.6, "text": " Google has found a super pragmatic solution. And I think they're just doing the best they can in" }, { "start": 579.6, "end": 585.0400000000001, "text": " the face of the overwhelming complexity that is machine translation. All of that being said," }, { "start": 585.04, "end": 592.64, "text": " there is a fourth category, a category of people that says that even if we derive the training data" }, { "start": 592.64, "end": 599.92, "text": " correctly, and it reflects the world, even if our algorithm does not introduce any additional bias," }, { "start": 599.92, "end": 606.0799999999999, "text": " even if the output probability distribution is the correct probability distribution for that" }, { "start": 606.08, "end": 615.44, "text": " translation, this is still not good, because they see the problem here in reality itself: it is reality" }, { "start": 615.44, "end": 621.5200000000001, "text": " that doesn't conform to some preconceived notion. And this might have multiple reasons. For example," }, { "start": 621.5200000000001, "end": 627.0400000000001, "text": " a person arguing this might argue that if we output the correct probability distribution," }, { "start": 627.0400000000001, "end": 633.12, "text": " that might have some downstream effects, or it might reinforce the stereotypes, or a number of" }, { "start": 633.12, "end": 640.08, "text": " other arguments. Someone arguing like this would see ML models more as tools for social engineering," }, { "start": 640.08, "end": 645.52, "text": " which is a valid stance to have: not criticizing that any part of this pipeline is wrong, but that the" }, { "start": 645.52, "end": 653.92, "text": " original bias that exists in the world is carried over into these outputs. And we should change that" }, { "start": 653.92, "end": 659.6, "text": " in order to affect the world. Now, while that is a valid stance to have, and certainly debatable," }, { "start": 659.6, "end": 666.5600000000001, "text": " you have to ask yourself whether you really want to give Google, a multi-billion multinational" }, { "start": 666.5600000000001, "end": 673.0400000000001, "text": " corporation, the almost monopolistic power to decide on what's good and bad for society." }, { "start": 673.0400000000001, "end": 678.48, "text": " And personally, I'm going to go no with this one. In any case, what I want you to take away from this" }, { "start": 678.48, "end": 684.32, "text": " is that there are many possible places where problems can be introduced, and therefore many" }, { "start": 684.32, "end": 691.0400000000001, "text": " possible points where we can introduce solutions. But what we have to be careful of is that we don't" }, { "start": 691.0400000000001, "end": 697.0400000000001, "text": " confuse the different points, and we don't let people provide evidence for one particular problem" }, { "start": 697.0400000000001, "end": 702.8000000000001, "text": " point and then suggest a solution that is in an entirely different area. All right," }, { "start": 702.8, "end": 715.68, "text": " that was it for me. I hope this was at least a little bit entertaining. Bye bye." } ]
s9UAOmyah1A
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Competition-Level Code Generation with AlphaCode (Paper Review)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "alphacode", "alpha code", "deepmind", "deepmind code", "deepmind alphacode", "alphacoder", "codex", "copilot", "ai code", "ai programmer", "ai competitive programming", "ai leetcode", "machine learning leetcode", "deepmind leetcode", "codeforces", "large scale sampling", "language models", "language models for code", "ai python programmer", "deep mind", "fuzzing", "google deepmind", "competitive programming ai" ]
#ai #alphacode #deepmind AlphaCode is an automated system that can solve competitive programming exercises. The authors found an interesting combination of language models, large-scale sampling, and clever techniques to filter and subsequently cluster the resulting programs, which lets the system perform on the level of an average competitor in real competitions. In this video, we take a deep dive into AlphaCode's design, architecture, and experimental evaluation. The paper is very well structured and the empirical results are super interesting! OUTLINE: 0:00 - Intro 2:10 - Paper Overview 3:30 - An example problem from competitive programming 8:00 - AlphaCode system overview 14:00 - Filtering out wrong solutions 17:15 - Clustering equivalent generated programs 21:50 - Model configurations & engineering choices 24:30 - Adding privileged information to the input & more tricks 28:15 - Experimental Results (very interesting!) Paper: https://storage.googleapis.com/deepmind-media/AlphaCode/competition_level_code_generation_with_alphacode.pdf Code: https://github.com/deepmind/code_contests Abstract: Programming is a powerful and ubiquitous problem-solving tool. Developing systems that can assist programmers or even generate programs independently could make programming more productive and accessible, yet so far incorporating innovations in AI has proven challenging. Recent large-scale language models have demonstrated an impressive ability to generate code, and are now able to complete simple programming tasks. However, these models still perform poorly when evaluated on more complex, unseen problems that require problem-solving skills beyond simply translating instructions into code. For example, competitive programming problems which require an understanding of algorithms and complex natural language remain extremely challenging. To address this gap, we introduce AlphaCode, a system for code generation that can create novel solutions to these problems that require deeper reasoning. Evaluated on recent programming competitions on the Codeforces platform, AlphaCode achieved on average a ranking of top 54.3% in programming competitions with more than 5,000 participants. We found that three key components were critical to achieve good and reliable performance: (1) an extensive and clean competitive programming dataset for training and evaluation, (2) large and efficient-to-sample transformer-based architectures, and (3) large-scale model sampling to explore the search space, followed by filtering based on program behavior to a small set of submissions. Authors: Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J.
Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu and Oriol Vinyals Links: Merch: http://store.ykilcher.com TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
AlphaCode is a system by DeepMind that does automated competitive programming. You're able to give this system a LeetCode-style problem in natural language, and it will come up with code by itself that solves the problem. It does this by using a combination of language modeling, sampling, filtering, and clustering before it finally decides on the solutions that it's going to try out to submit to the server. What is mind-blowing is that this system was able to perform in human competitions and be about as good as the average programmer in these competitions, which is crazy because previous systems were nowhere near human level. So here's how it goes. This video right here is a comprehensive paper review where I will go through the paper with you and explain to you the most important parts of the paper, what's in there, and what I think is good and what I think is bad. After this video, you'll have a good understanding of the paper and of how the system works and what its potential weaknesses are. However, in the next video released tomorrow, I will interview the authors of AlphaCode, which is a huge privilege, and I'll be able to ask them anything I want and they will have seen my paper review and they'll be directly able to respond to any criticism that I've raised there, to any questions that I had and to whatever I did wrong in my paper review. On top of that, you're able to get a behind-the-scenes look into their work. Even at places like DeepMind, things go wrong, things don't work out. They've had results that they thought were too good to be true, and they turned out not to be true and many more things. On top of that, we talk about how the project came to be and also how they've dealt with media reception because this paper has made big waves. So I absolutely invite you to watch both this video and the interview part because they're very much complementary. Let me know how I can improve these videos for you. If you like, leave a like, tell someone to subscribe and I'll see you around. Bye. Hello there. Today we're going to look at competition-level code generation with AlphaCode. This is by researchers of DeepMind and presents a novel system that can take part in competitive programming challenges. These are challenges where you as a user, you'd register and then you'd be given LeetCode-style problems to solve. These aren't easy problems. These aren't just solving something or writing down some SQL statement. These are legitimate, difficult programming challenges where you need to think of algorithms and solutions to problems and so on. So having a system that can actually take part and compete against humans is very remarkable. They've submitted this system to 10 of these challenges. And as you can see, the orange line here is AlphaCode's relation to other humans. They perform about as well as a median human would, like an average, middle-of-the-road competitive programmer, if you will. So this is pretty remarkable, especially since the baseline system so far had been sort of in the third or fourth percentile, not very good. So this represents a significant boost. And today we're going to find out how they did it. But first, here is what such a problem might look like. So this is one problem. This is one data point in this data set or one such challenge that you have to solve. You can see it starts with a description. So the title is Backspace. It starts with a description. You're given two strings, S and T, both consisting of lowercase English letters, yada, yada, yada.
What you should note right here is that the description is in natural language. It's made for humans. And therefore, it's just natural that it is in natural language. There is no other form. There's no machine-readable form right here. This is it. This is what the algorithm AlphaCode sees and gets as an input. There's also a description of the input again in natural language. There's a description of the output. And there is also this part right here. This is an important part. It consists of a bunch of example inputs and outputs. So here is an example input. For example, there are four problems in this problem set. All of this will be described in the input section. So the input section here says the first line is a single integer, the number of test cases and so on. So that's the four. Then this here is a problem. So this is S and this is T of the first problem. The goal is to type S and strategically type the Backspace button instead of the letter in S to go from S to T. So in this case, we start with S. So the first letter is A, but we choose to type the Backspace button, which would not type A and would delete what we have, but we have nothing. So yeah, then we would type B. Sorry about that. And we would type B. Then we would type A, then we would type B. And instead of the last A, we again type the Backspace button, which would delete the letter before it. And we'd end up with B, A. Therefore, we got from S to T and therefore we output the word yes. Okay, so we are tasked with writing an algorithm that automatically determines whether it's possible to go from S to T in any of these test cases and output the corresponding answer. This is challenging by itself, but you only get the problem right if you can do it for all the test cases. And the way these problems are evaluated is that on the test server, they have a whole bunch more of these test cases, including checking all the corner cases like very long inputs, no input at all, only inputs containing the letter A, if for some reason you expected a B to be there. And so they test all the edge cases, and you need to be correct in all of them in order to get the points. This is extremely challenging even for a human. The output that you're supposed to give is an algorithm like this. You can see it's not an easy thing. It's not just a snippet. It's a full-blown algorithm. It contains inputs. So you read the inputs. Even that, to program the piece of code that firstly reads that first line and then reads as many inputs, is already challenging by itself. Then you need to build lists and reverse lists. Then you go into a while loop where you pop things off the list depending on comparisons. And in the end, you output the correct thing depending on whether that list is empty or not empty. So as you can see, this is a challenging task. And this is just one data point. The next data point isn't going to be another variant on two strings and typing the Backspace button. The next data point is going to be a completely different problem. Like searching for shortest paths in some graph or something with denominators of numbers or numerators or something like this, right? It is a very diverse set of problems and very challenging even for humans. And the fact that an algorithm can tackle it is very remarkable. So how do they do it? That's our question today. If you guessed that it has something to do with large language models and transformers and so on, then yes, kudos. You got it.
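For the curious, here is one way such a solution can look. This is my own Python sketch of the standard greedy idea for this particular problem (match the strings from the back; any character of S that cannot be matched must be erased, and one backspace consumes two characters), not AlphaCode's actual output:

```python
import sys

def can_obtain(s: str, t: str) -> bool:
    # Match s against t from the back: a Backspace pressed instead of s[i]
    # skips s[i] and also deletes the previously typed character.
    i, j = len(s) - 1, len(t) - 1
    while i >= 0:
        if j >= 0 and s[i] == t[j]:
            i -= 1
            j -= 1
        else:
            i -= 2  # s[i] must be erased, taking one earlier character with it
    return j < 0  # success iff all of t was matched

def main() -> None:
    data = sys.stdin.read().split()
    q = int(data[0])  # first line: number of test cases
    for k in range(q):
        s, t = data[1 + 2 * k], data[2 + 2 * k]
        print("YES" if can_obtain(s, t) else "NO")

if __name__ == "__main__":
    main()
```

That is the shape of program the system must produce, reading and parsing of the inputs included, from nothing but the natural language description. So yes, language models are part of the answer.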
But there is a lot more to it. And this is really an engineering effort. And I think we should appreciate just how far you can push a system to get continuous improvements. What they do first, though, is they collect a data set. They train on open source code from GitHub. That is the pre-training data set. This is very similar to OpenAI's Codex model. So OpenAI's Codex model is trained on code from GitHub. And it can simply do next token prediction on code. And I have to say I've tried Codex and I'm pretty happy with its suggestions. It's very good. But it can give me like longer snippets than an autocomplete. But it cannot solve any kind of problems like this. It can just continue code. In any case, they collect this pre-training data set, they have whatever 700 gigabytes of code that they train on, and they run their regular language modeling objective on that piece of code. Then they fine-tune on an appropriate data set of code contests. So this is a mixture data set that they scrape from multiple websites, for example, Codeforces, Description2Code, CodeNet. These are previous papers or competition settings that they have collected these data sets from. And the data sets, again, this here is one data point, right? This is a problem description. And usually these data sets contain one or multiple solutions. Not all of them might be correct, but they contain about an order of magnitude more solutions than they contain text or problem descriptions. So first they collect a data set. And then they train on that data set. So that could be the story right here. But it is not. The entire pipeline is a bit more complicated. You can see first, there's GitHub, we collect pre-training data. We do pre-training, then fine-tuning on pairs of problems and solutions of this code contests data set. This is, as I said, a collection of various data sets that contain these code challenge type of problems, LeetCode-style problems, and they do fine-tuning. By the way, their model is a transformer model, you could guess it. They have an encoder-decoder model. So you have some sort of an encoder, and they choose to make the encoder shallow and the decoder deep. And there are specific reasons for that, which we'll get to in a second. But the encoder mainly handles the description. The description is mostly natural language; it contains, you know, some code snippets and so on, but it contains mostly the description. That's the encoder. The benefit of using an encoder-decoder architecture over a decoder-only one is that you do get bidirectionality in the encoder. And as they do here, you can make them different sizes, which means that you can shrink the encoder, which makes you able to sample faster, and sampling is going to be very important for this system right here in just a second. And then the decoder will be an autoregressive decoder where they just, well, int j equals five, yada yada yada. So this is actually going to produce the code token by token in sort of a language modeling way. As for their objectives, they have a masked language model objective at the encoder. And then in the decoder, obviously, there is cross attention right here. There's self attention in the encoder. There's causal self attention in the decoder. And then there is cross attention from the decoder to the encoder. And they have a language modeling objective in the decoder.
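To make that concrete, here is a toy PyTorch sketch of such an asymmetric encoder-decoder with the two objectives. All names and sizes here are invented for illustration; the real models are far larger and the actual implementation surely differs:

```python
import torch
import torch.nn as nn

class ToyAlphaCodeModel(nn.Module):
    """Asymmetric seq2seq: shallow encoder (description), deep decoder (code)."""

    def __init__(self, vocab=32000, d=512, enc_layers=4, dec_layers=24):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.transformer = nn.Transformer(
            d_model=d, nhead=8,
            num_encoder_layers=enc_layers,  # shallow encoder
            num_decoder_layers=dec_layers,  # deep decoder
            batch_first=True,
        )
        self.mlm_head = nn.Linear(d, vocab)  # masked-LM objective on the encoder
        self.lm_head = nn.Linear(d, vocab)   # next-token objective on the decoder

    def forward(self, desc_tokens, code_tokens):
        # bidirectional self-attention over the problem description
        memory = self.transformer.encoder(self.embed(desc_tokens))
        # causal self-attention over the code, plus cross-attention to memory
        causal = self.transformer.generate_square_subsequent_mask(code_tokens.size(1))
        out = self.transformer.decoder(self.embed(code_tokens), memory, tgt_mask=causal)
        return self.mlm_head(memory), self.lm_head(out)
```

During training, you would add the masked-LM cross-entropy on the encoder output to the usual next-token cross-entropy on the decoder output.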
They do say it's quite important to have the masked language modeling loss additionally in the encoder, because it apparently makes the encoder understand the stuff that it's fed a lot better. I'm just going to believe them right here. So now that we have this model, we can fine-tune it on these data sets, right? We can feed a description right here, and we can feed one of the solutions. And that could already be it. However, that's not it. It turns out that most of the time, this doesn't actually solve the problem. So you feed in a description, and you sample the solution, and it does not go well. So what do they do? Well, there are two ways. The first way is you try to make your model a lot better at like thinking and coming up with solutions and reasoning abstractly and so on. But that doesn't sound very deep learning and transformer-like. So what they do is just large-scale sampling. That essentially means you have a problem, you get a new problem, you feed this into your decoder right here, and then you just sample like a bunch of solutions from your decoder. Sorry, I just said decoder; over here, you put this into the encoder, you let the decoder run and you generate a ginormous amount of outputs. So you can do this with language models, you can sample according to some temperature, you can do some other stuff, you do nucleus sampling and whatnot. But you can generate diverse outputs from the decoder. And they do, they sample thousands, up to a million, different outputs from the decoder. So now they have this large set of potential solutions. And what do they do with it? This is very important, they do filter, and they cluster. So first, the filtering happens. And it might not surprise you, but the filtering happens on these example inputs that we saw right here. So with every problem, you get a tiny amount of example inputs and corresponding example outputs. They simply let all of the programs they generate run on these example inputs. And the ones that don't crash, they evaluate whether they do get the example outputs. And if they do get the example outputs correctly, they keep them around, otherwise they discard them. This is obviously vastly different from how humans solve these things. Humans don't just generate giant amounts of solutions and then let them run on this tiny amount of example problems. But this eliminates, as they say, over 99% of these sampled things. So you end up with a slice right here of this data that you've generated, by simply evaluating on these example cases that you had. So it's quite important that these are there for the system to work. I wonder if we could replace this, because we have this approach as well in, for example, DALL-E, where a lot of stuff is generated and then CLIP is used to rerank. I wonder if something like this could be done here. But they have several helper models in here in order to help the system during training. So I don't know if another helper model might be even appropriate. So this leaves them with a tiny amount of solutions, which could still be a lot, right? 1% out of a million is still 10,000 solutions. And they keep themselves to just submitting 10 of them. As a human, sometimes these code platforms, they actually have a limit on how many things you can try to submit. And 10 is like a reasonable limit. It gives you, as a human, a little bit of restraint: you only submit a solution if you think it's the correct one. Sorry.
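Here is roughly what that filtering step could look like in code. This is a minimal sketch under my own assumptions (candidates are Python programs reading stdin and writing stdout; the runner here is hypothetical, not DeepMind's actual infrastructure):

```python
import subprocess

def passes_examples(source: str, examples: list[tuple[str, str]]) -> bool:
    """Keep a candidate program only if it reproduces every example output."""
    for given_input, expected_output in examples:
        try:
            run = subprocess.run(
                ["python3", "-c", source],
                input=given_input, capture_output=True, text=True,
                timeout=2.0,  # crashing or hanging programs are discarded
            )
        except subprocess.TimeoutExpired:
            return False
        if run.returncode != 0 or run.stdout.strip() != expected_output.strip():
            return False
    return True

# tiny demo: an "echo the input" task with one example pair
examples = [("hi\n", "hi\n")]
candidates = ["print(input())", "print('nope')", "raise SystemExit(1)"]
survivors = [c for c in candidates if passes_examples(c, examples)]
print(survivors)  # only the echo program survives
```

Now, back to the submission budget.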
Also, you can submit a few times, but not too often. Like you can't brute force the test set that's on the server. So they need to get down from this still large amount of solutions to 10 solutions. And that's where this clustering comes in. So the goal is to end up with this small select set of candidates to execute and evaluate. And what do they do with the clustering? This is where one of these helper models comes in. So all of these things right here, they are programs. They're programs that take inputs and produce outputs. And there are many, many of them. What we want to do is we want to cluster them. A lot of these programs are going to be different in the tokens that they use, like in the exact code, but they're going to be essentially the equivalent program to each other. Like they're going to be the same program, isomorphic to each other. However, graph isomorphism, like let's say we parse them into a syntax tree and check graph isomorphism: I do believe that's like a really hard problem. I might be mistaken, but I think that's used in cryptography as an example of a really hard problem. So we can't really do graph isomorphism on the syntax tree, and it might not even get all the isomorphic programs. So what do we do? Our plan is going to be, we want to group these programs into the same ones. So maybe these three here are actually the same and this one here is actually the same. So we'd like to figure that out. How do we do it? We just feed like a whole bunch of inputs. We just generate a whole bunch of inputs to the programs. And for this, we train a little model that can take descriptions, like problem descriptions, and generate new input output pairs; not even input output pairs, just inputs. So we take a problem and we take these example inputs and it can generate new ones. Now we don't care what the output is. What we do care about is: we just feed all of them to all of the programs, like all of them go to all of the programs, and we just observe the outputs. And we say, well, whenever two programs have the same outputs on all of these test cases that we came up with, they are the same program. Again, we don't know the solutions to these inputs because we made them up. But we can assume that if two programs output the same thing for all kinds of inputs, they're essentially the equivalent program. Note that we can't just input random garbage right here, because the programs might differ with respect to how they handle edge cases and so on. So it is good to have an informed model be the one that's inputting things into these programs. But this lets us figure out groups. Let's say, okay, all of these programs responded the same to all of these inputs that we gave them. So we'll just consider that the same program, and we'll just submit one of them as one of the 10. Then we go to the next bucket, submit one of those and so on. We start with the largest bucket, and then we progressively go to the smaller buckets. And if we still have some budget left, we go to the largest bucket again and sample a different one. But that's essentially how we group programs. And that's how they get it down to a fairly small set of candidates. Why do they start with the largest bucket? The reasoning is that there are many ways that wrong programs can be wrong. So selecting the largest bucket, I don't know, we'll have to read what they're saying. But essentially, they say there are many ways to introduce bugs. And therefore, they expect the wrong programs to be in smaller but distinct buckets.
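Sketched in code, the clustering and selection could look like this. Again a simplification of mine: run_program is a hypothetical callable that executes one candidate on one generated input and returns its output:

```python
from collections import defaultdict

def select_submissions(programs, generated_inputs, run_program, budget=10):
    # Programs with identical outputs on all generated inputs are treated
    # as behaviorally equivalent and land in the same bucket.
    buckets = defaultdict(list)
    for program in programs:
        signature = tuple(run_program(program, x) for x in generated_inputs)
        buckets[signature].append(program)

    # Visit buckets from largest to smallest; wrap around if budget remains,
    # taking a different member of each bucket on the next pass.
    ordered = sorted(buckets.values(), key=len, reverse=True)
    submissions = []
    while len(submissions) < budget and any(ordered):
        for bucket in ordered:
            if bucket and len(submissions) < budget:
                submissions.append(bucket.pop(0))
    return submissions
```

The intuition behind the bucket ordering is exactly the one above: correct programs tend to agree with each other, while buggy programs fail in many different, distinct ways.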
And that's the system. That is how they solve the programming competition. This might not be as flashy as, you know, you imagined, but it's still very, very impressive. This strategy of generating a whole bunch of things and then selecting, I think, has been popularized more and more in recent times. As I said, for example, with systems like DALL-E, we've seen that generative models can be used to generate very diverse sets of outputs. If they are post-processed correctly, we can end up with something that the generative model by itself could not necessarily have done. Right. This is the base of the system. Now, as I already said, there are a lot of engineering things right here. Most notably, if you are going to sample such a large amount of things in order to answer a single data point, sampling needs to be very, very fast. And a lot of their engineering choices are in order to make sampling fast. For example, as you can see, their encoders are consistently smaller than their decoders. They have shallow encoders, but deep decoders, precisely for that reason: making the encoder more shallow saves on parameters, saves on forward propagation, makes sampling a lot faster. Hey, this is Yannic from the future. Just a small correction right here. I claimed that the shallowness of the encoder would help with the sampling speed, which is not entirely true. In fact, in sampling, the decoder is the bottleneck, because you can reuse the encoder's encoding over and over again as you autoregressively sample. So the decoder being small would help the sampling speed, but they figured that the decoder really needs to be deep in order to keep up performance. The encoder being shallow helps really during training, because during training you don't do anything autoregressively, and therefore any part being smaller really helps the speed during training. So, just a small correction, back to the video. They also use a transformer variant that shares all of the values and keys across the heads. As you can see right here, for example, here we have six query heads, but all of the keys and values are shared among those heads. This again saves computation and makes sampling a lot faster. So that is how they make this sampling even tractable, right? Because these choices influence how many solutions you can generate at once. And yeah, they already say it's a massive effort to generate these solutions at runtime. Although I wonder, what does that mean, like a couple of seconds or what? Because humans are time limited in these challenges, and that's one of the major obstacles: you're under time pressure as a human. So I wonder how that kind of plays into AlphaCode right here. What do they mean by, it's a lot of effort to generate these things, and how much time does it actually take? In any case, they have lots of intricacies right here. For example, they add additional meta information to the problem description. So they feed this stuff here into the problem description as well. For example, what the language is, and whether or not the solution is correct; in the training data, they know whether a solution is correct or not. And also tags; tags might help you. For example, 'this is dynamic programming', or 'implementation'. I don't know what the implementation tag is. Oh, maybe you must implement an actual algorithm instead of just solving a decidability problem. And there's a rating to indicate how hard the problem is. These things are not known at test time.
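A quick illustration of that attention variant before we continue with the metadata: in this multi-query style of attention, only the queries get per-head projections, while a single key and a single value projection are shared by all heads, which shrinks the computation and the key/value cache during sampling. A minimal sketch of mine, not the paper's code:

```python
import torch
import torch.nn as nn

class SharedKVAttention(nn.Module):
    """Multi-query-style attention: per-head queries, one shared key/value."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.h, self.dk = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)  # h separate query heads
        self.k_proj = nn.Linear(d_model, self.dk)  # one shared key head
        self.v_proj = nn.Linear(d_model, self.dk)  # one shared value head
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):
        b, n, _ = x.shape
        q = self.q_proj(x).view(b, n, self.h, self.dk).transpose(1, 2)  # b,h,n,dk
        k = self.k_proj(x).unsqueeze(1)                                 # b,1,n,dk
        v = self.v_proj(x).unsqueeze(1)                                 # b,1,n,dk
        att = (q @ k.transpose(-2, -1)) / self.dk ** 0.5  # broadcast over heads
        y = att.softmax(-1) @ v                                         # b,h,n,dk
        return self.out(y.transpose(1, 2).reshape(b, n, -1))
```

Now, about those tags and ratings that are not known at test time.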
However, they've discovered that if they include them at training time, it helps a lot. And obviously, at test time, you can just always input 'correct solution', right? That's how you can let your model train on even incorrect solutions and still not have the incorrect solutions during training contaminate the model trying to produce correct solutions. So there's potentially something that the model can learn from the incorrect solutions. Yeah, at test time, you just always put 'correct solution'. It's a bit pretentious, but you know, it is what it is. And they also discover that varying the tags right here helps. Obviously, at test time they don't have the tags, because the tags could give a hint in how you solve the problem. But they can just put like random tags there, and that would even increase the diversity of the things they sample. And that's ultimately what they go for right here: a very diverse set of potential solutions that they can then filter down and cluster down. So I thought this was quite smart, to include sort of data that you only know at training time and then use that in a creative manner. It's sort of like prompt engineering in GPT-3, but in an automated and planned fashion, right? So they go through a lot of things, right? I have no time to go through all of this, but I highly encourage you to read all of it. They have various techniques right here. They do tempering; they do value conditioning, that also helps; value prediction, that also helps. This is a little bit like reinforcement learning, where you add additional proxy losses in order to make the model understand the problem space better or maybe learn more relevant features. They do reweighting of the gradient with this technique called GOLD. And if you're interested, this is a very, very detailed paper, and I found it also quite easy and straightforward to read. And I hope you have the same experience. As we said, they get to the filtering, and they say filtering removes approximately 99% of model samples, although the exact amount depends on the problem and the model. And filtering can still leave thousands or tens of thousands of candidate samples for many problems. So that's why, after filtering them down, they use this clustering algorithm, which I've already described. So I won't do that again right here. But now we go into the results already, and the results are themselves quite interesting, not only because of the performance of the model, which is pretty good, at least for some of the models (so they train different models right here in different sizes), but also because they do very detailed investigations into what the individual contributions that they introduced brought. So as you can see right here, for example, this metric right here, 10@10k, means they submit 10 examples at the end. So this is after the whole clustering and so on. And they generate 10,000 candidate solutions. So at that size, if they consult their 9 billion parameter model, you can see they get a pass rate, or a solve rate, of 22.6% of the validation set examples that they have. If they use their 41 billion parameter model, that increases. And if they additionally use clustering instead of just randomly sampling 10 examples from the filtered data set, they get 26.2%. You can see right here, both size and the additional features that they build in get them a large gain. And this is consistent across all the sizes and so on.
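One more note on the conditioning tricks from above before the scaling results. The kind of metadata prefix being described could look roughly like this; the serialization format and the tag set here are entirely hypothetical, invented for illustration:

```python
import random

TAGS = ["dp", "greedy", "graphs", "strings", "math"]  # hypothetical tag set

def make_prompt(description: str, language: str = "PYTHON3",
                is_correct: bool = True, rating: int = None) -> str:
    # At training time, is_correct and the true tags/rating come from the data.
    # At test time, we always claim "correct solution" and randomize the rest,
    # which also increases the diversity of the samples.
    tags = ",".join(random.sample(TAGS, k=2))
    rating = rating if rating is not None else random.randint(800, 3500)
    header = f"{language} {'CORRECT' if is_correct else 'INCORRECT'} TAGS:{tags} RATING:{rating}"
    return header + "\n" + description
```

Back to the numbers.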
And what you can also see is that sampling more definitely helps. For example, if you go to 100,000 or a million samples, even though you only submit 10 of them at the end, still, if you sample more, all of the models automatically get better, as you can see. Yeah, so that is, I think, a good lesson and an indication of what could be done more in the future to augment our generative models with post-processing. So the paper is quite long. It's actually copied again right here. We'll just jump more into the results section, because there are some other very interesting things. For example, if you look at how the models compare in their size, there's clearly, as we already saw, an advantage to being larger, which you can see right here. 300 million parameters performing okay, 41 billion parameters performing a lot better. You can see at this point right here, the small model solves not even 20% of problems, while the large model solves more than half of the problems, way more than the small model. You can also see what happens when they are unrestricted, so unlimited attempts instead of just 10 attempts. So unlimited attempts: we don't need clustering, we don't need filtering. We could filter, right? Because there's zero chance that a program that doesn't pass the test inputs will actually pass the server inputs. But with no clustering, no selecting, no sub-selecting, you can see that the models just get better as you sample more, which makes sense, right? This must be a monotonic function: as you sample more, your chance of some solution being correct gets higher and higher. But there are so many programs; like, the space of possible programs is so huge. Even the space of possible programs that would conform to these problems is so large. It is really astonishing to me to see that there is really this improvement. It's log-linear. Yes, this is a log scale. But still, it seems crazy that you can actually get a better performance by just sampling more, searching through the space more according to the language model. Also notable is that the large models have a bigger slope than the small models. I've overdone it a bit with my drawing right here, but I hope you can still see it. So the large models have better scaling properties with respect to sampling from them, which is also interesting, and will be, I think, another addition to the common knowledge of the scaling laws of these models. So whether you filter them down to 10 submissions, which at some point gets you diminishing returns, or whether you don't filter them, in which case I don't see any diminishing returns right here, it just kind of speeds up. Again, these are log scales on the bottom. So it seems to concur very well with the scaling laws we have, in that in order to get like a linear improvement in performance, you need an exponential improvement in data, compute, or, in this case, samples. Next, they look at various things right here, like how long they train. Obviously, with more training compute, again, the solve rate goes up. Again, this seems to be a log-linear relationship, which is also very interesting. And also the solve rate goes up with more sampling compute, which is kind of the same plot as above, but here it's measured in terms of compute, and not necessarily in terms of number of samples. Obviously, the larger models take longer to forward propagate and therefore use more compute.
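The log-linear behavior is easy to play with numerically. Here is a hypothetical sketch of such a fit, with made-up numbers in the spirit of the plots, not the paper's data:

```python
import numpy as np

# made-up (samples, solve rate) pairs in the spirit of the plots
samples = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
solve_rate = np.array([0.05, 0.11, 0.17, 0.24, 0.30])

# solve_rate ~ a * log10(samples) + b: a straight line on a log axis
a, b = np.polyfit(np.log10(samples), solve_rate, deg=1)
print(f"about {a:.3f} solve rate gained per 10x more samples")
```

And as just noted, the bigger models pay more compute per sample, which is where the next comparison comes in.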
But interestingly, because of their scaling property, you can see that at the beginning, because they take longer, they need more compute to reach the same pass rate or solve rate. However, as you go up with the compute, because of their slope being higher right here, they eventually will surpass the other models. And even seen from a compute perspective, it will be cheaper to use the larger model than to use the small models for the same performance. Yeah, here they investigate their decisions with respect to how fast they can sample. You see right here, the AlphaCode model can sample at 4.74 samples per TPU second. If they were to use a decoder-only model, they would be a lot slower, because now obviously the decoder has a bigger length, which means the attention matrix has a bigger size, I guess. They also allocate more blocks to the decoder so that the parameters are approximately equal, which all in all means that this architecture is in total slower, because it has more connections, it has more blocks. Then they also test with the regular transformer, like standard multi-head attention, and that's just kind of abysmal. So this is due to the fact that they use this multi-query attention, with the shared keys and values, right here in their architecture. And yes, okay, that comparison is the same encoder-decoder split, but without the shared keys and values. So that is speed. Now what I also find interesting is the pre-training data set. I'm sorry, we'll go through a lot of results right here, but they're all very interesting. So the pre-training data set used also influences the performance at the end. So as you can see, if they restrict themselves to GitHub, but Python only, instead of GitHub all languages (and all languages means something like Python and C++ and Julia and things like this, but it's still programming languages), their solve rate drops dramatically. However, if they use MassiveText (and MassiveText does contain some GitHub data, but it's also a natural language data set), it doesn't drop as much. I think that's quite interesting. Like, why might that be? I don't know. Yeah, here they list all the advancements. I don't want to go through them, but you can just see how engineering plays in here. It's not just 'I have an idea and I built the model'. No, no, no. It's, you know, if I just built the model, I get 10.4% right here. But then I add the encoder loss of the masked language model. I add the tempering. I add the tags and ratings, so the little snippet they put in front that they randomize at test time, right? I add value predictions. I add this weighting of the gradient. I add the clustering. You can see that with everything they add, they get improvement after improvement. So I guess the lesson here is that there might always be a way to sort of push your system even further by just adding something smart, or alternatively just scaling by a factor of 10. But you know, I guess that's the sad story of deep learning, right? Because these things kind of give you a constant improvement, right? You can see that across all of the things right here. For example, the masked language modeling gives you, maybe not here, but here, about a 2% improvement. And you know, some of these things scale with size, but some of them also kind of give you a constant improvement.
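As an aside, that 4.74 samples per TPU-second figure lets us make a back-of-the-envelope guess at the earlier question of how much work a million samples is. This is my own arithmetic, not a number from the paper:

```python
samples = 1_000_000
rate = 4.74                     # samples per TPU-second, from the paper
tpu_seconds = samples / rate    # ~211,000 TPU-seconds
tpu_hours = tpu_seconds / 3600  # ~59 TPU-hours for one problem's samples
print(round(tpu_seconds), round(tpu_hours, 1))
```

So inside a timed competition, this presumably only works when sharded across many accelerators in parallel. Anyway, back to the improvements-versus-scale point.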
And you can always get the same improvement by just scaling up the model, right? In fact, you either have to get all of these improvements right here, or you just scale up the model by a factor of 10 and you also get an improvement. Sad story of deep learning. Yeah, this right here is a comparison of the filtering and clustering algorithms. So if they do no filtering, if they just select 10 outputs at random, obviously their solve rate is just about zero, because most of the generated samples are just garbage; they don't solve the problem. So if they now filter, that already gives the biggest boost, right? That eliminates the 99% that fail on the test inputs. And therefore, that is a pretty significant improvement. If they also add clustering, then as you can see, especially at the larger sample budgets, the clustering helps a lot. And the blue line here is a theoretical upper bound. So the blue line is where they just submit every single thing that they sample and see how much that would solve. So this is the theoretical upper bound: if they could always select the correct things from the things they sampled. You can see that there is still a big gap. So even though they do this whole clustering thing, they still seem unable, in let's say about 10 percentage points or so of problems, to select the correct solution among all of their candidates, which is surprising, right? Maybe not, maybe not. I mean, yeah, I don't know. They do test against baselines. And I guess the only thing to be said is that the baselines sometimes succeed on easy problems. You can see right here that on the introductory problems, something like Codex doesn't perform too poorly. However, as soon as you go to like competition-level problems (and this is a different data set right here, with different methodologies in order to make the models comparable), AlphaCode just outshines its competitors quite a bit. And this is the one billion model. This is not even the larger model. They do compare whether or not the model just copies over code. And they have a lot of ways to investigate that, and they find that, largely, no, it doesn't copy more code than humans copy. Also humans in these competitions have some algorithm in mind that they've seen somewhere, they just write it down again, or they even actively copy from other solutions. They do investigate quantitatively and qualitatively that right here. And they find that the model largely does not copy over entire solutions from somewhere else. Like, it doesn't just try out all the things that it has seen so far. There are other tricks right here, and there are also ablations, but this video is already too long, so I don't want to necessarily go into all of the things. One interesting thing is that they report that their validation loss after a very short time increases. So you can see right here, the validation loss drops, and after a while, it increases again. This would usually indicate overfitting. And you can see that for the rest of the run, the validation loss increases. However, their real metric, the true metric, the solve rate, actually increases too throughout. You can see right here, the solve rate increasing throughout the run.
With diminishing returns at first, but it does continue to increase, which means that the validation loss is not necessarily a good metric. They do have an explanation for this, namely that for these coding problems, there's not one correct solution, not even in the data set, right? The data set contains many instances of problem A, and then solution one, solution two, solution three, solution four. So if the model learned to produce solution one for problem A, which is a correct solution, but the current data point wants the model to produce solution two (because you're doing language modeling, you need to select one that you train on), then that would technically be wrong. And therefore, if you measure this on the validation set, you might actually get a worse loss. Yet still, you might actually increase in your ability to solve the actual problems. This leads me to ask a little bit: is the training loss even appropriate for this thing? I mean, it's fine, you know; the validation loss goes up, I can understand why, and why that might not necessarily be a problem. But does that kind of mean that the training loss itself should be rethought, and that we should have a better training loss for these types of models, where multiple continuations, multiple solutions exist in the data set to the same prefix? I don't know. That is one of many questions that I have right here. As I said, they have lots of other stuff. They augment the data set with some fuzzing procedure. They do lots of different things and investigations. The paper also has a long appendix. If you're into that, you can see a lot more stuff, a lot more analysis. But I think I'm going to leave it here and jump over to the interview. Thanks so much. And I hope you enjoy that as well.
[ { "start": 0, "end": 11.16, "text": " Alpha code is a system by DeepMind that does automated competitive programming." }, { "start": 11.16, "end": 15.92, "text": " You're able to give this system a lead code style problem in natural language, and it" }, { "start": 15.92, "end": 20.080000000000002, "text": " will come up with code by itself that solves the problem." }, { "start": 20.080000000000002, "end": 25.2, "text": " It does this by using a combination of language modeling, sampling, filtering, and clustering" }, { "start": 25.2, "end": 31.08, "text": " before it finally decides on the solutions that it's going to try out to submit to the" }, { "start": 31.08, "end": 32.08, "text": " server." }, { "start": 32.08, "end": 37.480000000000004, "text": " What is mind blowing is that this system was able to perform in human competitions and" }, { "start": 37.480000000000004, "end": 43.78, "text": " be about as good as the average programmer in these competitions, which is crazy because" }, { "start": 43.78, "end": 47.16, "text": " previous systems were nowhere near human level." }, { "start": 47.16, "end": 48.480000000000004, "text": " So here's how it goes." }, { "start": 48.480000000000004, "end": 53.72, "text": " This video right here is a comprehensive paper review where I will go through the paper with" }, { "start": 53.72, "end": 59.8, "text": " you and explain to you the most important parts of the paper, what's in there, and what" }, { "start": 59.8, "end": 62.519999999999996, "text": " I think is good and what I think is bad." }, { "start": 62.519999999999996, "end": 67.36, "text": " After this video, you'll have a good understanding of the paper and of how the system works and" }, { "start": 67.36, "end": 69.52, "text": " what its potential weaknesses are." }, { "start": 69.52, "end": 75.8, "text": " However, in the next video released tomorrow, I will interview the authors of Alpha code," }, { "start": 75.8, "end": 80.44, "text": " which is a huge privilege, and I'll be able to ask them anything I want and they will" }, { "start": 80.44, "end": 86.03999999999999, "text": " have seen my paper review and they'll be directly able to respond to any criticism that I've" }, { "start": 86.03999999999999, "end": 92.08, "text": " raised there to any questions that I had and to whatever I did wrong in my paper review." }, { "start": 92.08, "end": 96.56, "text": " On top of that, you're able to get a behind the scenes look into their work." }, { "start": 96.56, "end": 100.52, "text": " Even at places like DeepMind, things go wrong, things don't work out." }, { "start": 100.52, "end": 105.47999999999999, "text": " They've had results that they thought were too good to be true, and they turned out not" }, { "start": 105.47999999999999, "end": 108.12, "text": " to be true and many more things." }, { "start": 108.12, "end": 113.08, "text": " On top of that, we talk about how the project came to be and also how they've dealt with" }, { "start": 113.08, "end": 116.88000000000001, "text": " media reception because this paper has made big waves." }, { "start": 116.88000000000001, "end": 122.02000000000001, "text": " So I absolutely invite you to watch both this video and the interview part because they're" }, { "start": 122.02000000000001, "end": 124, "text": " very much complimentary." }, { "start": 124, "end": 126.4, "text": " Let me know how I can improve these videos for you." 
}, { "start": 126.4, "end": 131.12, "text": " If you like, leave a like, tell someone to subscribe and I'll see you around." }, { "start": 131.12, "end": 132.12, "text": " Bye." }, { "start": 132.12, "end": 133.12, "text": " Hello there." }, { "start": 133.12, "end": 137.52, "text": " Today we're going to look at competition level code generation with Alpha code." }, { "start": 137.52, "end": 143.64000000000001, "text": " This is by researchers of DeepMind and presents a novel system that can take part in competitive" }, { "start": 143.64000000000001, "end": 145.56, "text": " programming challenges." }, { "start": 145.56, "end": 150.52, "text": " These are challenges where you as a user, you'd register and then you'd be given lead" }, { "start": 150.52, "end": 153.88, "text": " code style problems to solve." }, { "start": 153.88, "end": 155.02, "text": " These aren't easy problems." }, { "start": 155.02, "end": 159.08, "text": " These aren't just solving some or writing down some SQL statement." }, { "start": 159.08, "end": 165.64000000000001, "text": " These are legitimate, difficult programming challenges where you need to think of algorithms" }, { "start": 165.64, "end": 168.6, "text": " and solutions to problems and so on." }, { "start": 168.6, "end": 176.04, "text": " So having a system that can actually take part and compete against humans is very remarkable." }, { "start": 176.04, "end": 179.39999999999998, "text": " They've submitted this system to 10 of these challenges." }, { "start": 179.39999999999998, "end": 183.95999999999998, "text": " And as you can see, the orange lines here is Alpha code's relation to other humans." }, { "start": 183.95999999999998, "end": 191.92, "text": " They perform about as well as a median human would like an average middle of the road competitive" }, { "start": 191.92, "end": 193.88, "text": " programmer, if you will." }, { "start": 193.88, "end": 199.96, "text": " So this is pretty remarkable, especially since the baseline system so far had been sort of" }, { "start": 199.96, "end": 204.78, "text": " in the third or fourth percentile, not very good." }, { "start": 204.78, "end": 209.16, "text": " So this represents a significant boost." }, { "start": 209.16, "end": 211.64, "text": " And today we're going to find out how they did it." }, { "start": 211.64, "end": 215.56, "text": " But first, here is what such a problem might look like." }, { "start": 215.56, "end": 217.51999999999998, "text": " So this is one problem." }, { "start": 217.52, "end": 225.32000000000002, "text": " This is one data point in this data set or one such challenge that you have to solve." }, { "start": 225.32000000000002, "end": 227.96, "text": " You can see it starts with a description." }, { "start": 227.96, "end": 229.94, "text": " So the title is Backspace." }, { "start": 229.94, "end": 231.4, "text": " It starts with a description." }, { "start": 231.4, "end": 237.92000000000002, "text": " You're given two strings, S and T, both consisting of lowercase English letters, yada, yada, yada." }, { "start": 237.92000000000002, "end": 243.04000000000002, "text": " What you should note right here is that the description is in natural language." }, { "start": 243.04000000000002, "end": 244.4, "text": " It's made for humans." }, { "start": 244.4, "end": 247.32000000000002, "text": " And therefore, it's just natural that it is a natural language." }, { "start": 247.32, "end": 248.44, "text": " There is no other form." 
}, { "start": 248.44, "end": 250.88, "text": " There's no machine readable form right here." }, { "start": 250.88, "end": 252.4, "text": " This is it." }, { "start": 252.4, "end": 257.68, "text": " This is what the algorithm alpha code sees and gets as an input." }, { "start": 257.68, "end": 261.34, "text": " There's also a description of the input again in natural language." }, { "start": 261.34, "end": 264.32, "text": " There's description of the output." }, { "start": 264.32, "end": 267.15999999999997, "text": " And there is also this part right here." }, { "start": 267.15999999999997, "end": 269.2, "text": " This is an important part." }, { "start": 269.2, "end": 273.06, "text": " It consists of a bunch of example inputs and outputs." }, { "start": 273.06, "end": 275.34, "text": " So here is an example input." }, { "start": 275.34, "end": 278.76, "text": " For example, there are four problems in this problem set." }, { "start": 278.76, "end": 281.76, "text": " All of this will be described in the input section." }, { "start": 281.76, "end": 285.56, "text": " So the input section here says the first line is a single integer, the number of test cases" }, { "start": 285.56, "end": 286.56, "text": " and so on." }, { "start": 286.56, "end": 288.67999999999995, "text": " So that's the four." }, { "start": 288.67999999999995, "end": 290.64, "text": " Then we have this is a problem." }, { "start": 290.64, "end": 293.84, "text": " So this is S and this is T of the first problem." }, { "start": 293.84, "end": 301.28, "text": " The goal is to type S and strategically type the Backspace button instead of the letter" }, { "start": 301.28, "end": 305.08, "text": " at S to go from S to T." }, { "start": 305.08, "end": 311.47999999999996, "text": " So in this case, we start with S. So the first letter is A, but we choose to type the Backspace" }, { "start": 311.47999999999996, "end": 316.64, "text": " button, which would not type A and would delete what we have, but we have nothing." }, { "start": 316.64, "end": 321.15999999999997, "text": " So yeah, then we would type B. Sorry about that." }, { "start": 321.15999999999997, "end": 327.68, "text": " And we would type B. Then we would type A, then we would type B. And instead of the last" }, { "start": 327.68, "end": 331.91999999999996, "text": " A we get and type the Backspace button, which would delete the letter before it." }, { "start": 331.91999999999996, "end": 333.47999999999996, "text": " And we'd end up with B, A." }, { "start": 333.48, "end": 339.88, "text": " Therefore, we got from S to T and therefore we output the letter the word yes." }, { "start": 339.88, "end": 347.68, "text": " Okay, so we are tasked with writing an algorithm that automatically determines whether it's" }, { "start": 347.68, "end": 356.84000000000003, "text": " possible to go from S to T in any of these test cases and output the corresponding answer." }, { "start": 356.84000000000003, "end": 362.48, "text": " This is challenging by itself, but you only get the problem right if you can do it for" }, { "start": 362.48, "end": 364.48, "text": " all the test cases." 
}, { "start": 364.48, "end": 370.08000000000004, "text": " And the way these problems are evaluated is that on the test server, they have a whole" }, { "start": 370.08000000000004, "end": 377.6, "text": " bunch more of these test cases, including checking all the corner cases like very long" }, { "start": 377.6, "end": 384.32, "text": " inputs, no input at all, only inputs containing the letter A, if for some reason you expected" }, { "start": 384.32, "end": 386.08000000000004, "text": " a B to be there." }, { "start": 386.08000000000004, "end": 391.88, "text": " And so they test all the edge cases, and you need to be correct in all of them in order" }, { "start": 391.88, "end": 394.32, "text": " to get the points." }, { "start": 394.32, "end": 398.12, "text": " This is extremely challenging even for a human." }, { "start": 398.12, "end": 402.88, "text": " The output that you're supposed to give is an algorithm like this." }, { "start": 402.88, "end": 404.8, "text": " You can see it's not an easy thing." }, { "start": 404.8, "end": 406.8, "text": " It's not just a snippet." }, { "start": 406.8, "end": 408.28, "text": " It's a full blown algorithm." }, { "start": 408.28, "end": 409.52, "text": " It contains inputs." }, { "start": 409.52, "end": 416.8, "text": " So you read the inputs, even that to program an algorithm to come up with that piece of" }, { "start": 416.8, "end": 424.64, "text": " code is already challenging by itself to firstly read that first line and then read as many" }, { "start": 424.64, "end": 426.08, "text": " inputs." }, { "start": 426.08, "end": 429.24, "text": " Then you need to build lists and reverse lists." }, { "start": 429.24, "end": 434.56, "text": " Then you go into a while loop where you pop of things of list depending on comparisons." }, { "start": 434.56, "end": 442.24, "text": " And in the end, you output the correct thing depending on whether that list is zero or" }, { "start": 442.24, "end": 443.90000000000003, "text": " empty or not empty." }, { "start": 443.9, "end": 449.52, "text": " So as you can see, this is a challenging task." }, { "start": 449.52, "end": 451.28, "text": " And this is just one data point." }, { "start": 451.28, "end": 456.28, "text": " The next data point isn't going to be another variant on two strings and typing the back" }, { "start": 456.28, "end": 457.44, "text": " space button." }, { "start": 457.44, "end": 461.44, "text": " The next data point is going to be a completely different problem." }, { "start": 461.44, "end": 470.28, "text": " Like searching for shortest paths and some graph or something with denominators of numbers" }, { "start": 470.28, "end": 474.88, "text": " or numerators or something like this, right?" }, { "start": 474.88, "end": 479.71999999999997, "text": " It is very diverse set of problems and very challenging even for humans." }, { "start": 479.71999999999997, "end": 484.09999999999997, "text": " And the fact that an algorithm can tackle it is very remarkable." }, { "start": 484.09999999999997, "end": 488.35999999999996, "text": " So how do they do it?" }, { "start": 488.35999999999996, "end": 489.35999999999996, "text": " That's our question today." }, { "start": 489.35999999999996, "end": 495.64, "text": " If you guessed that it has something to do with large language models, then and transformers" }, { "start": 495.64, "end": 497.2, "text": " and so on." }, { "start": 497.2, "end": 499.79999999999995, "text": " And yes, kudos." }, { "start": 499.8, "end": 501.16, "text": " You got it." 
}, { "start": 501.16, "end": 504.6, "text": " But there is a lot more to it." }, { "start": 504.6, "end": 508.04, "text": " And this is really an engineering effort." }, { "start": 508.04, "end": 514.2, "text": " And I think we should appreciate just how far you can push a system to get continuous" }, { "start": 514.2, "end": 515.76, "text": " improvements." }, { "start": 515.76, "end": 520.52, "text": " What they do first, though, is they collect a data set." }, { "start": 520.52, "end": 526.02, "text": " They do train on a open source code from GitHub." }, { "start": 526.02, "end": 527.96, "text": " That is the pre training data set." }, { "start": 527.96, "end": 530.74, "text": " This is very similar to OpenAI's codex model." }, { "start": 530.74, "end": 535.38, "text": " So OpenAI's codex model is trained on code from GitHub." }, { "start": 535.38, "end": 539.6800000000001, "text": " And it can simply do next token prediction on code." }, { "start": 539.6800000000001, "end": 543.48, "text": " And I have to say I've tried codex and I'm pretty happy with its suggestions." }, { "start": 543.48, "end": 545.88, "text": " It's very good." }, { "start": 545.88, "end": 550.4000000000001, "text": " But it can give me like longer snippets than an autocomplete." }, { "start": 550.4000000000001, "end": 553.64, "text": " But it cannot solve any kind of problems like this." }, { "start": 553.64, "end": 555.5600000000001, "text": " It can just continue code." }, { "start": 555.56, "end": 561.9599999999999, "text": " In any case, they collect this pre training data set, they have whatever 700 gigabytes" }, { "start": 561.9599999999999, "end": 570.0799999999999, "text": " of code that they train on, and they run their regular language modeling objective on that" }, { "start": 570.0799999999999, "end": 571.68, "text": " piece of code." }, { "start": 571.68, "end": 576.9599999999999, "text": " Then they fine tune on an appropriate data set of code contests." }, { "start": 576.9599999999999, "end": 581.4399999999999, "text": " So this is a mixture data set that they scrape from multiple websites, for example, code" }, { "start": 581.4399999999999, "end": 584.5999999999999, "text": " forces description to code, code net." }, { "start": 584.6, "end": 594.0400000000001, "text": " These are these are papers, previous papers or competition settings that they have collected" }, { "start": 594.0400000000001, "end": 600.6, "text": " these data sets from and the data sets again, this here is one data point, right?" }, { "start": 600.6, "end": 602.5600000000001, "text": " This is a problem description." }, { "start": 602.5600000000001, "end": 609.64, "text": " And usually these these data sets, they contain one or multiple solutions, not all of them" }, { "start": 609.64, "end": 615.12, "text": " might be correct, but they contain about an order of magnitude more solutions than they" }, { "start": 615.12, "end": 620.72, "text": " contain text or problem descriptions." }, { "start": 620.72, "end": 623.14, "text": " So first they collect a data set." }, { "start": 623.14, "end": 626, "text": " And then they train on that data set." }, { "start": 626, "end": 629.1, "text": " So that could be the story right here." }, { "start": 629.1, "end": 631.36, "text": " But it is not." }, { "start": 631.36, "end": 635.64, "text": " The entire pipeline is a bit more complicated." }, { "start": 635.64, "end": 639.6, "text": " You can see first, there's GitHub, we collect pre training data." 
}, { "start": 639.6, "end": 647.5600000000001, "text": " We do pre training, then fine tuning on pairs of problems and solutions of these code contests" }, { "start": 647.5600000000001, "end": 648.5600000000001, "text": " data set." }, { "start": 648.5600000000001, "end": 653.6, "text": " This is, as I said, a collection of various data sets that contain these that contain" }, { "start": 653.6, "end": 660.94, "text": " these code challenge type of problems, lead code style problems, and they do fine tuning." }, { "start": 660.94, "end": 666.08, "text": " By the way, their model is a transformer model, you could guess it." }, { "start": 666.08, "end": 669.26, "text": " They do have a special they have an encoder decoder model." }, { "start": 669.26, "end": 674.36, "text": " So you have some sort of an encoder, and they choose to make the encoder shallow and the" }, { "start": 674.36, "end": 678.4, "text": " decoder the decoder deep." }, { "start": 678.4, "end": 682.1, "text": " And there are specific reasons for that, which we'll get to in a second." }, { "start": 682.1, "end": 689.36, "text": " But the encoder mainly handles the description, which is so the description is natural language" }, { "start": 689.36, "end": 694, "text": " mostly contains, you know, some some code snippets and so on." }, { "start": 694, "end": 697.56, "text": " However, it contains mostly the description." }, { "start": 697.56, "end": 699.3199999999999, "text": " That's the encoder." }, { "start": 699.3199999999999, "end": 705.2399999999999, "text": " The benefit of using an encoder decoder architecture over a decoder only is that you do get by" }, { "start": 705.2399999999999, "end": 708.1199999999999, "text": " directionality in the encoder." }, { "start": 708.1199999999999, "end": 713.04, "text": " And as they do here, you can make them different sizes, which means that you can shrink the" }, { "start": 713.04, "end": 718.88, "text": " encoder, which makes you sample able to sample faster and sampling is going to be very important" }, { "start": 718.88, "end": 721.92, "text": " for this system right here in just a second." }, { "start": 721.92, "end": 729.8399999999999, "text": " And then the decoder will be a autoregressive decoder where they just well int J equals" }, { "start": 729.8399999999999, "end": 732.36, "text": " five, yada yada yada." }, { "start": 732.36, "end": 737.56, "text": " So this is this is actually going to produce the code token by token in sort of a language" }, { "start": 737.56, "end": 738.9, "text": " modeling way." }, { "start": 738.9, "end": 745.56, "text": " Their objective is is they have a masked language model objective at the end coder." }, { "start": 745.56, "end": 748.8, "text": " And then the decoder obviously there is cross attention right here." }, { "start": 748.8, "end": 750.88, "text": " There's there's self attention in the encoder." }, { "start": 750.88, "end": 754.4, "text": " There's self attention causal self attention in the decoder." }, { "start": 754.4, "end": 758.74, "text": " And then there is cross attention from the decoder to the encoder." }, { "start": 758.74, "end": 765.08, "text": " And they have a a language modeling objective in the decoder." 
}, { "start": 765.08, "end": 769.84, "text": " They do say it's quite important to have the master language modeling loss additionally" }, { "start": 769.84, "end": 776.94, "text": " in the encoder because it apparently makes the encoder understand this the stuff in inside" }, { "start": 776.94, "end": 779.12, "text": " of it, the stuff that it's fed a lot better." }, { "start": 779.12, "end": 782.1, "text": " I'm just going to believe them right here." }, { "start": 782.1, "end": 787.6, "text": " So now that we have this model, we can we can fine tune it on these data sets, right?" }, { "start": 787.6, "end": 792.4, "text": " We can feed a description right here, and we can feed one of the solutions." }, { "start": 792.4, "end": 795.4, "text": " And that could already be it." }, { "start": 795.4, "end": 797.34, "text": " However, that's not it." }, { "start": 797.34, "end": 801.68, "text": " It turns out that most of the time, this doesn't actually solve the problem." }, { "start": 801.68, "end": 807.94, "text": " So you feed in a description, and you sample the solution, it is not it does not go well." }, { "start": 807.94, "end": 809.8000000000001, "text": " So what do they do?" }, { "start": 809.8000000000001, "end": 812.5, "text": " Well, there are two ways." }, { "start": 812.5, "end": 817.0400000000001, "text": " The first way is you try to make your model a lot better at like thinking and coming up" }, { "start": 817.0400000000001, "end": 820.0400000000001, "text": " with solutions and reasoning abstractly and so on." }, { "start": 820.0400000000001, "end": 824.4000000000001, "text": " But that doesn't sound very deep learning and transformer like." }, { "start": 824.4000000000001, "end": 829.6, "text": " So what do we do is we just do large scale sampling." }, { "start": 829.6, "end": 834.6800000000001, "text": " That essentially means you have a problem, you get a new problem, you feed this into" }, { "start": 834.68, "end": 842.92, "text": " your decoder right here, and then you just sample like a bunch of solutions from your" }, { "start": 842.92, "end": 843.92, "text": " decoder." }, { "start": 843.92, "end": 847.88, "text": " Sorry, I just said decoder over here, it put this into the encoder, you let the decoder" }, { "start": 847.88, "end": 855.2399999999999, "text": " run and you generate a ginormous a ginormous amount of outputs." }, { "start": 855.2399999999999, "end": 861.1999999999999, "text": " So you can do this with language models, you can sample according to some temperature," }, { "start": 861.2, "end": 865.2800000000001, "text": " you can do some other stuff, you do nucleus sampling and whatnot." }, { "start": 865.2800000000001, "end": 870.6800000000001, "text": " But you can generate diverse outputs from the decoder." }, { "start": 870.6800000000001, "end": 878.5600000000001, "text": " And they do, they sample 1000s up to a million different outputs from the decoder." }, { "start": 878.5600000000001, "end": 883.72, "text": " So now they have this large set of potential solutions." }, { "start": 883.72, "end": 885.46, "text": " And what do they do with it?" }, { "start": 885.46, "end": 888.96, "text": " This is very important, they do filter, and they cluster." }, { "start": 888.96, "end": 892.1600000000001, "text": " So first, the filtering happens." }, { "start": 892.1600000000001, "end": 898.24, "text": " And it might not surprise you, but the filtering happens on these example inputs that we saw" }, { "start": 898.24, "end": 899.24, "text": " right here." 
}, { "start": 899.24, "end": 905.0400000000001, "text": " So with every problem, you get a tiny amount of example inputs and corresponding example" }, { "start": 905.0400000000001, "end": 911.46, "text": " outputs, they simply let all of the programs they generate run on these example inputs." }, { "start": 911.46, "end": 916.6800000000001, "text": " And the ones that don't crash, they evaluate whether they do get the example outputs." }, { "start": 916.68, "end": 921.4, "text": " And if they do get the example outputs correctly, they keep them around, otherwise they discard" }, { "start": 921.4, "end": 922.4, "text": " them." }, { "start": 922.4, "end": 925.9599999999999, "text": " This is obviously vastly different from how humans solve these things." }, { "start": 925.9599999999999, "end": 931.68, "text": " Humans don't just generate giant amounts of solutions and then let them run on this tiny" }, { "start": 931.68, "end": 933.7199999999999, "text": " amount of example problems." }, { "start": 933.7199999999999, "end": 940.26, "text": " But this eliminates as they say, it eliminates over 99% of these sample things." }, { "start": 940.26, "end": 950.88, "text": " So you end up with a slice right here of this data that you've generated by simply evaluating" }, { "start": 950.88, "end": 954.24, "text": " on these example cases that you had." }, { "start": 954.24, "end": 958.88, "text": " So it's quite important that these are there for the system to work." }, { "start": 958.88, "end": 966.88, "text": " I wonder if we could replace this, because we have this approach as well in, for example," }, { "start": 966.88, "end": 971.36, "text": " Ali, where a lot of stuff is generated and then clip is used to rerank." }, { "start": 971.36, "end": 975.4399999999999, "text": " I wonder if something like this could be done here." }, { "start": 975.4399999999999, "end": 982.88, "text": " But they have several helper models in here in order to help the system during training." }, { "start": 982.88, "end": 992.12, "text": " So I don't know if another helper model might be even appropriate." }, { "start": 992.12, "end": 996.44, "text": " So this leaves them with a tiny amount of solutions, which could still be a lot, right?" }, { "start": 996.44, "end": 1000, "text": " 10% out of a million is still a lot of solutions." }, { "start": 1000, "end": 1003.96, "text": " And they keep themselves to just submitting 10 of them." }, { "start": 1003.96, "end": 1008.44, "text": " As a human, sometimes these code platforms, they have actually a limit on how many things" }, { "start": 1008.44, "end": 1011.72, "text": " you can try to submit." }, { "start": 1011.72, "end": 1014.2, "text": " And 10 is like a reasonable limit." }, { "start": 1014.2, "end": 1020.8800000000001, "text": " It gives you a little bit of, as a human, a little bit of, you're not anxious to submit" }, { "start": 1020.8800000000001, "end": 1024, "text": " a solution if you think it's the correct one." }, { "start": 1024, "end": 1025, "text": " Sorry." }, { "start": 1025, "end": 1028.48, "text": " You also, you can submit a few times, but not too often." }, { "start": 1028.48, "end": 1032.3, "text": " Like you can't brute force the test set that's on the server." }, { "start": 1032.3, "end": 1038.52, "text": " So they need to get down from these still large amount of solutions to 10 solutions." }, { "start": 1038.52, "end": 1041.68, "text": " And that's where this clustering comes in." 
}, { "start": 1041.68, "end": 1048.84, "text": " So the goal is to end up with this small select set of candidates to execute and evaluate." }, { "start": 1048.84, "end": 1051.12, "text": " And what do they do with the clustering?" }, { "start": 1051.12, "end": 1053.24, "text": " This is where one of these helper models gets in." }, { "start": 1053.24, "end": 1056.32, "text": " So all of these things right here, they are programs." }, { "start": 1056.32, "end": 1060.28, "text": " They're programs that could take inputs and outputs." }, { "start": 1060.28, "end": 1063.72, "text": " And there are many, many of them." }, { "start": 1063.72, "end": 1065.66, "text": " What we want to do is we want to cluster them." }, { "start": 1065.66, "end": 1071.08, "text": " A lot of these programs are going to be different in the tokens that they use, like in the exact" }, { "start": 1071.08, "end": 1075.36, "text": " code, but they're going to be essentially the equivalent program to each other." }, { "start": 1075.36, "end": 1079.72, "text": " Like they're going to be the same program, isomorphic to each other." }, { "start": 1079.72, "end": 1084.7, "text": " However, graph isomorphism, like let's say we parse them in a syntax tree and check" }, { "start": 1084.7, "end": 1086.92, "text": " graph isomorphism." }, { "start": 1086.92, "end": 1089.92, "text": " I do believe that's like a really hard problem." }, { "start": 1089.92, "end": 1094.8, "text": " I might be mistaken, but I think that's used in cryptography to show like a really hard" }, { "start": 1094.8, "end": 1095.8, "text": " problem." }, { "start": 1095.8, "end": 1099.84, "text": " So it's not really graph isomorphism on the syntax tree." }, { "start": 1099.84, "end": 1103.96, "text": " It might not even get all the isomorphic programs." }, { "start": 1103.96, "end": 1105.08, "text": " So what do we do?" }, { "start": 1105.08, "end": 1108.88, "text": " Our plan is going to be, we want to group these programs into the same ones." }, { "start": 1108.88, "end": 1113.66, "text": " So maybe these three here are actually the same and this one here is actually the same." }, { "start": 1113.66, "end": 1116.38, "text": " So we'd like to figure that out." }, { "start": 1116.38, "end": 1117.38, "text": " How do we do it?" }, { "start": 1117.38, "end": 1120.5600000000002, "text": " We just feed like a whole bunch of inputs." }, { "start": 1120.5600000000002, "end": 1126.5200000000002, "text": " We just generate a whole bunch of inputs to the programs." }, { "start": 1126.5200000000002, "end": 1134.3200000000002, "text": " And this is, we train a little model that can take descriptions, like problem descriptions," }, { "start": 1134.32, "end": 1140.48, "text": " and generate new input output pairs, not even input output pairs, just inputs." }, { "start": 1140.48, "end": 1145.4399999999998, "text": " So we take a problem and we take these example inputs and it can generate new ones." }, { "start": 1145.4399999999998, "end": 1148.32, "text": " Now we don't care what the output is." }, { "start": 1148.32, "end": 1153.52, "text": " What we do care is we just feed all of them to all of the models, like all of them go" }, { "start": 1153.52, "end": 1157.36, "text": " to all of the models and we just observe the outputs." 
}, { "start": 1157.36, "end": 1164.6799999999998, "text": " And we say, well, whenever two programs have the same outputs on all of these test cases" }, { "start": 1164.6799999999998, "end": 1167.8, "text": " that we came up with, they are the same program." }, { "start": 1167.8, "end": 1174.32, "text": " We don't, again, we don't know the solutions to these inputs because we made them up." }, { "start": 1174.32, "end": 1181.04, "text": " But we can assume that if two programs output the same thing for all kinds of inputs, that" }, { "start": 1181.04, "end": 1183.3999999999999, "text": " they're essentially the equivalent program." }, { "start": 1183.4, "end": 1189.64, "text": " Note that we can't just input random garbage right here, because the programs might differ" }, { "start": 1189.64, "end": 1192.52, "text": " with respect to how they handle edge cases and so on." }, { "start": 1192.52, "end": 1197.5600000000002, "text": " So it is good to have an informed model be the one that's inputting things into these" }, { "start": 1197.5600000000002, "end": 1198.5600000000002, "text": " models." }, { "start": 1198.5600000000002, "end": 1199.5600000000002, "text": " But this lets us figure out groups." }, { "start": 1199.5600000000002, "end": 1204.24, "text": " Let's say, okay, all of these models responded the same to all of these inputs that we gave" }, { "start": 1204.24, "end": 1205.24, "text": " them." }, { "start": 1205.24, "end": 1210.24, "text": " So we'll just consider that the same program and we'll just submit one of them as the one" }, { "start": 1210.24, "end": 1211.24, "text": " of the 10." }, { "start": 1211.24, "end": 1214.52, "text": " Then we go to the next bucket, submit one of those and so on." }, { "start": 1214.52, "end": 1219.14, "text": " We start with the largest bucket, and then we progressively go to the smaller buckets." }, { "start": 1219.14, "end": 1223.52, "text": " And if we still have some some budget left, we go to the largest bucket again and sample" }, { "start": 1223.52, "end": 1225.2, "text": " a different one." }, { "start": 1225.2, "end": 1227.2, "text": " But that's essentially how we group programs." }, { "start": 1227.2, "end": 1231.4, "text": " And that's how they get it down to fairly small set of candidates." }, { "start": 1231.4, "end": 1233.48, "text": " Why do they start with the largest bucket?" }, { "start": 1233.48, "end": 1243.44, "text": " The reasoning is that there are many ways that wrong programs can be wrong." }, { "start": 1243.44, "end": 1251.04, "text": " So selecting the largest bucket, I don't know, we'll have to read what they're saying." }, { "start": 1251.04, "end": 1257.24, "text": " But essentially, they say there are many ways to introduce bugs." }, { "start": 1257.24, "end": 1263.4, "text": " And therefore, they expect the wrong programs to be in smaller but distinct buckets." }, { "start": 1263.4, "end": 1267.72, "text": " And that's the system that is how they solve the programming competition." }, { "start": 1267.72, "end": 1275.92, "text": " This might not be as flashy as you know, you imagined, but it's still very, very impressive." }, { "start": 1275.92, "end": 1280, "text": " This strategy of generating a whole bunch of things and then selecting, I think has" }, { "start": 1280, "end": 1286.4, "text": " been popularized more and more in recent times." 
}, { "start": 1286.4, "end": 1293.24, "text": " As I said, for example, with systems like Dali, we've seen that generative models can" }, { "start": 1293.24, "end": 1296.76, "text": " be used to generate very diverse sets of outputs." }, { "start": 1296.76, "end": 1301.92, "text": " If they are post processed correctly, we can end up with something that the generative" }, { "start": 1301.92, "end": 1306, "text": " model by itself could not necessarily have done." }, { "start": 1306, "end": 1307, "text": " Right." }, { "start": 1307, "end": 1309.6, "text": " This is the base of the system." }, { "start": 1309.6, "end": 1315.76, "text": " Now, as I already said, there are a lot of engineering things right here." }, { "start": 1315.76, "end": 1324.48, "text": " Most notably, if you are going to sample such a large amount of things in order to answer" }, { "start": 1324.48, "end": 1329.76, "text": " a single data point, sampling needs to be very, very fast." }, { "start": 1329.76, "end": 1333.36, "text": " And a lot of their engineering choices are in order to make sampling fast." }, { "start": 1333.36, "end": 1340.18, "text": " For example, as you can see, their encoders are consistently smaller than their decoders." }, { "start": 1340.18, "end": 1346.48, "text": " They have shallow encoders, but deep decoders, precisely for that reason, making the encoder" }, { "start": 1346.48, "end": 1352.24, "text": " more shallow saves on parameters, saves on forward propagation, makes sampling a lot" }, { "start": 1352.24, "end": 1353.24, "text": " faster." }, { "start": 1353.24, "end": 1354.76, "text": " Hey, this is Janek from the future." }, { "start": 1354.76, "end": 1356.64, "text": " Just a small correction right here." }, { "start": 1356.64, "end": 1361.54, "text": " I claimed that the shallowness of the encoder would help with the sampling speed, which" }, { "start": 1361.54, "end": 1363.1200000000001, "text": " is not entirely true." }, { "start": 1363.1200000000001, "end": 1369.16, "text": " In fact, in sampling, the decoder is the bottleneck because you can reuse the encoders encoding" }, { "start": 1369.16, "end": 1372.76, "text": " over and over again as you autoregressively sample." }, { "start": 1372.76, "end": 1377.92, "text": " So the decoder being small would help the sampling speed, but they figured that the" }, { "start": 1377.92, "end": 1382.96, "text": " decoder really needs to be deep in order to keep up performance." }, { "start": 1382.96, "end": 1387.96, "text": " The encoder being shallow helps really during training because during training, I don't" }, { "start": 1387.96, "end": 1389.8000000000002, "text": " do anything autoregressively." }, { "start": 1389.8000000000002, "end": 1394.6000000000001, "text": " And therefore, any part being smaller really helps the speed during training." }, { "start": 1394.6000000000001, "end": 1397.72, "text": " So just small correction back to the video." }, { "start": 1397.72, "end": 1408.92, "text": " They also use the shared, they use a system like a transformer variant that shares all" }, { "start": 1408.92, "end": 1412.6000000000001, "text": " of the values and keys across the heads." }, { "start": 1412.6000000000001, "end": 1418.34, "text": " As you can see right here, for example, here we have six query heads, but all of the keys" }, { "start": 1418.34, "end": 1421.3600000000001, "text": " and values are shared among those heads." 
}, { "start": 1421.36, "end": 1427.78, "text": " This again saves computation and makes sampling a lot faster." }, { "start": 1427.78, "end": 1433.6999999999998, "text": " So that is how they make this sampling even tractable, right?" }, { "start": 1433.6999999999998, "end": 1439.56, "text": " Because these choices influence how many solutions you can generate at once." }, { "start": 1439.56, "end": 1447.1999999999998, "text": " And yeah, they already say it's a massive effort to generate these solutions at runtime." }, { "start": 1447.2, "end": 1452, "text": " Although I wonder, what does that mean, like a couple of seconds or what?" }, { "start": 1452, "end": 1455.52, "text": " Because humans are time limited in these challenges." }, { "start": 1455.52, "end": 1462.64, "text": " And that's one of the major obstacles is that you're under time pressure as a human." }, { "start": 1462.64, "end": 1466.32, "text": " So I wonder how that kind of plays into codecs right here." }, { "start": 1466.32, "end": 1470.88, "text": " What do they mean by, it's a lot of effort to generate these things and how much time" }, { "start": 1470.88, "end": 1472.48, "text": " does it actually take?" }, { "start": 1472.48, "end": 1478.16, "text": " In any case, they have a lots of intricacies right here." }, { "start": 1478.16, "end": 1484.42, "text": " For example, they add additional meta information to the problem description." }, { "start": 1484.42, "end": 1489.64, "text": " So they feed this stuff here into the problem description as well." }, { "start": 1489.64, "end": 1497, "text": " For example, what the language is, whether or not the solution that the training, so" }, { "start": 1497, "end": 1502.18, "text": " in the training data, they know whether a solution is correct or not." }, { "start": 1502.18, "end": 1506.8, "text": " Whether or not it's the correct solution." }, { "start": 1506.8, "end": 1509.64, "text": " And also tags, tags might help you." }, { "start": 1509.64, "end": 1513.52, "text": " For example, this is dynamic programming, the implementation." }, { "start": 1513.52, "end": 1515.72, "text": " I don't know what implementation tag is." }, { "start": 1515.72, "end": 1521.2, "text": " Oh, maybe you must implement an actual algorithm instead of just solving a decidability problem," }, { "start": 1521.2, "end": 1525.68, "text": " a rating to indicate how hard the problem is." }, { "start": 1525.68, "end": 1528.68, "text": " These things are not known at test time." }, { "start": 1528.68, "end": 1534.42, "text": " However, they've discovered that if they include them at training time, it helps a lot." }, { "start": 1534.42, "end": 1538.96, "text": " And obviously at test time, you can not just always input correct solution, right?" }, { "start": 1538.96, "end": 1544.2, "text": " That's how you can let your model train on even incorrect solutions and still not have" }, { "start": 1544.2, "end": 1551.24, "text": " the incorrect solutions during training contaminate the model trying to produce correct solutions." }, { "start": 1551.24, "end": 1555.28, "text": " So there's potentially something that the model can learn from the incorrect solutions." }, { "start": 1555.28, "end": 1558.72, "text": " Yeah, at test time, you just always put correct solution." }, { "start": 1558.72, "end": 1562.96, "text": " It's a bit pretentious, but you know, it is what it is." 
}, { "start": 1562.96, "end": 1568.8, "text": " And they also discover that by varying the tags right here, obviously, they don't have" }, { "start": 1568.8, "end": 1573.28, "text": " the tags because they could give a hint in how you solve the problem." }, { "start": 1573.28, "end": 1576.56, "text": " But they can just put like random tags there." }, { "start": 1576.56, "end": 1580.68, "text": " And that would even increase the diversity of the things they sample." }, { "start": 1580.68, "end": 1586.8400000000001, "text": " And that's ultimately what they go for right here, a very diverse set of potential solutions" }, { "start": 1586.8400000000001, "end": 1590.0800000000002, "text": " that they can then filter down and cluster down." }, { "start": 1590.0800000000002, "end": 1597.0800000000002, "text": " So I thought this was quite smart to include sort of data that you only know at training" }, { "start": 1597.0800000000002, "end": 1601.0800000000002, "text": " time and then use that in a creative manner." }, { "start": 1601.0800000000002, "end": 1610.24, "text": " It's sort of like prompt engineering in GPT-3, but in an automated and planned fashion, right?" }, { "start": 1610.24, "end": 1612.64, "text": " So they go through a lot of things, right?" }, { "start": 1612.64, "end": 1617.96, "text": " I have no time to go through all of this, but I highly encourage you to read all of" }, { "start": 1617.96, "end": 1618.96, "text": " it." }, { "start": 1618.96, "end": 1621.84, "text": " They have various techniques right here." }, { "start": 1621.84, "end": 1626.76, "text": " They do tempering, they do value conditioning that also helps value prediction that also" }, { "start": 1626.76, "end": 1632.28, "text": " helps this is a little bit like reinforcement learning where you add additional proxy losses" }, { "start": 1632.28, "end": 1637.68, "text": " in order to make the model understand the problem space better or maybe learn more relevant" }, { "start": 1637.68, "end": 1640.1200000000001, "text": " features." }, { "start": 1640.12, "end": 1645.36, "text": " They do reweighting of the gradient with this technique called gold." }, { "start": 1645.36, "end": 1655, "text": " And yeah, as I can just if you're if you're interested, this is very, very detailed paper." }, { "start": 1655, "end": 1659.08, "text": " And I found it also quite easy and straightforward to read." }, { "start": 1659.08, "end": 1662.78, "text": " And I hope you have the same experience." }, { "start": 1662.78, "end": 1668.4399999999998, "text": " As we said, they get to the filtering and they say filtering removes approximately 99%" }, { "start": 1668.44, "end": 1674.3600000000001, "text": " of model samples, although the exact amount depends on the problem and the model." }, { "start": 1674.3600000000001, "end": 1679.18, "text": " And filtering can still leave thousands or tens of thousands of candidate samples for" }, { "start": 1679.18, "end": 1683.5800000000002, "text": " many problems." }, { "start": 1683.5800000000002, "end": 1686.48, "text": " So that's why they filter them." }, { "start": 1686.48, "end": 1690.68, "text": " They filter them down and after filtering, they use this clustering algorithm, which" }, { "start": 1690.68, "end": 1691.68, "text": " I've already described." }, { "start": 1691.68, "end": 1695.3200000000002, "text": " So I won't do that again right here." 
}, { "start": 1695.32, "end": 1702.72, "text": " But now we go into the results already and the results are themselves quite interesting," }, { "start": 1702.72, "end": 1709.04, "text": " not only because of the performance of the model, which is pretty good, at least for" }, { "start": 1709.04, "end": 1710.04, "text": " some of the models." }, { "start": 1710.04, "end": 1713.84, "text": " So they train different models right here in different sizes, but also because they" }, { "start": 1713.84, "end": 1721.28, "text": " do very detailed investigations into what the individual contributions that they introduced" }, { "start": 1721.28, "end": 1722.28, "text": " brought." }, { "start": 1722.28, "end": 1728.04, "text": " So as you can see right here, for example, this metric right here, by the way, 10 at" }, { "start": 1728.04, "end": 1732.48, "text": " 10k, it means they submit 10 examples at the end." }, { "start": 1732.48, "end": 1736.28, "text": " So this is after the whole clustering and so on." }, { "start": 1736.28, "end": 1740.04, "text": " And they generate 10,000 candidate solutions." }, { "start": 1740.04, "end": 1747.32, "text": " So at that size, if they consult their 9 billion parameter model, you can see they get a pass" }, { "start": 1747.32, "end": 1754.6799999999998, "text": " rate or a solve rate of 22.6% of the validation set examples that they have." }, { "start": 1754.6799999999998, "end": 1759.8799999999999, "text": " If they use their 41 billion parameter model, that increases." }, { "start": 1759.8799999999999, "end": 1765.08, "text": " And if they additionally use clustering instead of just randomly sampling 10 examples from" }, { "start": 1765.08, "end": 1770.04, "text": " the filtered data set, they get 26.2%." }, { "start": 1770.04, "end": 1774.9199999999998, "text": " You can see right here, both size and the additional features that they build in, get" }, { "start": 1774.9199999999998, "end": 1776.24, "text": " them a large gain." }, { "start": 1776.24, "end": 1779.92, "text": " And this is consistent across all the sizes and so on." }, { "start": 1779.92, "end": 1784.44, "text": " And what you can also see is that sampling more distinctly helps." }, { "start": 1784.44, "end": 1790.64, "text": " For example, if you go to 100,000 or a million samples, even though you only submit 10 of" }, { "start": 1790.64, "end": 1799.3, "text": " them at the end still, if you sample more, all of the models automatically get better," }, { "start": 1799.3, "end": 1801.24, "text": " as you can see." }, { "start": 1801.24, "end": 1808.32, "text": " Yeah, so that is, I think that that is a good lesson and an indication of what could be" }, { "start": 1808.32, "end": 1814.52, "text": " done more in the future to augment our generative models with post-processing." }, { "start": 1814.52, "end": 1817.2, "text": " So the paper is quite long." }, { "start": 1817.2, "end": 1819.64, "text": " It's actually copied again right here." }, { "start": 1819.64, "end": 1825.32, "text": " We'll just jump more into the results section, because there are some other very interesting" }, { "start": 1825.32, "end": 1828.04, "text": " things." }, { "start": 1828.04, "end": 1835.54, "text": " For example, if you look at how the models compare in their size, there's clearly, as" }, { "start": 1835.54, "end": 1840.68, "text": " we already saw, there is an advantage to being larger, which you can see right here." 
}, { "start": 1840.68, "end": 1847.78, "text": " 300 million parameters performing okay, 41 billion parameters performing a lot better." }, { "start": 1847.78, "end": 1855.24, "text": " You can see at this point right here, the small model solves not even 20% of problems," }, { "start": 1855.24, "end": 1861.64, "text": " the large model solves more than half of the problems more than the small model." }, { "start": 1861.64, "end": 1868.1200000000001, "text": " You can also see that when they are unrestricted, so unlimited attempts instead of just 10 attempts." }, { "start": 1868.1200000000001, "end": 1873.4, "text": " So unlimited attempts, we don't need clustering, we don't need filtering, we could filter," }, { "start": 1873.4, "end": 1874.4, "text": " right?" }, { "start": 1874.4, "end": 1879.08, "text": " Because there's zero chance that a problem that doesn't pass the test inputs will actually" }, { "start": 1879.08, "end": 1882.32, "text": " pass the server inputs." }, { "start": 1882.32, "end": 1889.84, "text": " But no clustering, no selecting, no sub selecting, you can see that the models, they just get" }, { "start": 1889.84, "end": 1894.3999999999999, "text": " better as you sample more, which makes sense, right?" }, { "start": 1894.3999999999999, "end": 1899.6799999999998, "text": " This must be a monotonous function as you sample more, your chance of getting some of" }, { "start": 1899.6799999999998, "end": 1904.1399999999999, "text": " some solution being correct is like gets more and more." }, { "start": 1904.1399999999999, "end": 1910.6399999999999, "text": " But there are so many programs, like the space of possible programs is so huge." }, { "start": 1910.64, "end": 1916.44, "text": " Even the space of possible programs in these datasets is like, or that that would confer" }, { "start": 1916.44, "end": 1918.4, "text": " to these is so large." }, { "start": 1918.4, "end": 1925.5600000000002, "text": " It is really astonishing to me to see that there is really this improvement." }, { "start": 1925.5600000000002, "end": 1926.5600000000002, "text": " It's log linear." }, { "start": 1926.5600000000002, "end": 1928.4, "text": " Yes, this is a log scale." }, { "start": 1928.4, "end": 1937.3200000000002, "text": " But still, it seems crazy that you can actually get a better performance by just sampling" }, { "start": 1937.32, "end": 1940.8999999999999, "text": " more searching through the space more according to the language models." }, { "start": 1940.8999999999999, "end": 1945.8799999999999, "text": " Also notable is that the large models have a bigger slope than the small models." }, { "start": 1945.8799999999999, "end": 1948.6, "text": " I've overdone it a bit with my drawing right here." }, { "start": 1948.6, "end": 1950.6, "text": " But I hope you can still see it." }, { "start": 1950.6, "end": 1958.12, "text": " So the large models have better scaling properties with respect to sampling from them, which" }, { "start": 1958.12, "end": 1964.46, "text": " is also interesting, and will be another, I think, addition to the common knowledge" }, { "start": 1964.46, "end": 1968.52, "text": " of how these models, like of the scaling laws of these models." 
}, { "start": 1968.52, "end": 1975.4, "text": " So whether you filter them down to 10 problems, which at some point gets you diminishing returns," }, { "start": 1975.4, "end": 1980, "text": " or whether you don't filter them, in which case, I don't see any diminishing returns" }, { "start": 1980, "end": 1982.88, "text": " right here, it just kind of speeds up." }, { "start": 1982.88, "end": 1985.94, "text": " Again, these are log scales on the bottom." }, { "start": 1985.94, "end": 1992.8400000000001, "text": " So it seems to concur very well with the scaling laws we have in that in order to get like" }, { "start": 1992.84, "end": 1998.9199999999998, "text": " a linear improvement in performance, you need an exponential improvement in data, compute," }, { "start": 1998.9199999999998, "end": 2002.56, "text": " or in this case, samples." }, { "start": 2002.56, "end": 2008.02, "text": " The next thing they so they look at they look at various things right here, like how long" }, { "start": 2008.02, "end": 2014.6799999999998, "text": " they train, obviously, with more training compute, again, our solve rate goes up." }, { "start": 2014.6799999999998, "end": 2021.4399999999998, "text": " Again, this seems to be a log linear relationship, which is also very interesting." }, { "start": 2021.44, "end": 2027.92, "text": " And also the the solve rate goes up with more sampling compute, which is kind of the same" }, { "start": 2027.92, "end": 2029.24, "text": " plot as above." }, { "start": 2029.24, "end": 2035.3200000000002, "text": " But here it's measured in terms of compute, and not necessarily in terms of of number" }, { "start": 2035.3200000000002, "end": 2036.3200000000002, "text": " of samples." }, { "start": 2036.3200000000002, "end": 2042.0800000000002, "text": " Obviously, the larger models, they do take longer time to forward propagate and therefore" }, { "start": 2042.0800000000002, "end": 2043.38, "text": " use more compute." }, { "start": 2043.38, "end": 2048.28, "text": " But interestingly, because of their scaling property, you can see that at the beginning," }, { "start": 2048.28, "end": 2055.84, "text": " because they take longer, they need more compute to reach the same pass rate or solve rate." }, { "start": 2055.84, "end": 2063.6600000000003, "text": " However, as you go up with the compute because of their slope being being higher right here," }, { "start": 2063.6600000000003, "end": 2066.92, "text": " they eventually will surpass the other models." }, { "start": 2066.92, "end": 2072.96, "text": " And even seen from a compute perspective, it will be cheaper to use the larger model" }, { "start": 2072.96, "end": 2078.48, "text": " than to use the small models for the same performance." }, { "start": 2078.48, "end": 2086.48, "text": " Yeah, here, they investigate their decisions with respect to how fast they can sample." }, { "start": 2086.48, "end": 2095.2, "text": " You see right here, the alpha code model can sample at 4.74 samples per TPU second." }, { "start": 2095.2, "end": 2100.78, "text": " If they were to use a decoder only model, they would be a lot slower because now obviously" }, { "start": 2100.78, "end": 2108.1600000000003, "text": " the decoder has a bigger length, which means the attention matrix has a bigger, a bigger" }, { "start": 2108.1600000000003, "end": 2109.8, "text": " size, I guess." 
}, { "start": 2109.8, "end": 2116.84, "text": " They also allocate more blocks to the decoder so that the parameters are approximately equal," }, { "start": 2116.84, "end": 2122.96, "text": " which then means all in all means that this architecture is in total slower because it" }, { "start": 2122.96, "end": 2126.7200000000003, "text": " has more connections, it has more blocks." }, { "start": 2126.72, "end": 2133, "text": " Then they also they test with the regular transformer like standard multi-head attention" }, { "start": 2133, "end": 2136.7999999999997, "text": " and that's just kind of abysmal." }, { "start": 2136.7999999999997, "end": 2141.7999999999997, "text": " So this is due to the fact that they use this shared query attention right here in their" }, { "start": 2141.7999999999997, "end": 2142.7999999999997, "text": " architecture." }, { "start": 2142.7999999999997, "end": 2150.64, "text": " And yeah, yes, okay, this is the same, the same encoder decoder split, but they use a" }, { "start": 2150.64, "end": 2157.92, "text": " different, they don't use the shared query." }, { "start": 2157.92, "end": 2159.4, "text": " So that is speed." }, { "start": 2159.4, "end": 2163.8799999999997, "text": " Now what I also find interesting is the pre-training data set." }, { "start": 2163.8799999999997, "end": 2169.8399999999997, "text": " I'm sorry, we'll go through a lot of results right here, but they're all very interesting." }, { "start": 2169.8399999999997, "end": 2175.56, "text": " So the pre-training data set used also influences the performance at the end." }, { "start": 2175.56, "end": 2183.6, "text": " So as you can see, if they restrict themselves to GitHub, but Python only instead of GitHub" }, { "start": 2183.6, "end": 2189.84, "text": " all languages and all languages means something like Python and C++ and Julia and things like" }, { "start": 2189.84, "end": 2193.68, "text": " this, but it's still programming languages." }, { "start": 2193.68, "end": 2197.72, "text": " So if they use Python only, their solve rate drops dramatically." }, { "start": 2197.72, "end": 2205, "text": " However, if they use Massive Text and Massive Text does contain some GitHub data, but it's" }, { "start": 2205, "end": 2208.84, "text": " also a natural language data set, it doesn't drop as much." }, { "start": 2208.84, "end": 2211.64, "text": " I just, I think that's quite interesting." }, { "start": 2211.64, "end": 2213.8, "text": " Like why might that be?" }, { "start": 2213.8, "end": 2215.6, "text": " I don't know." }, { "start": 2215.6, "end": 2224.52, "text": " Yeah, here they list up all the advancements and don't want to go through them, but you" }, { "start": 2224.52, "end": 2229.62, "text": " can just see how just how engineering plays in here." }, { "start": 2229.62, "end": 2232.56, "text": " It's not just I have an idea and I built the model." }, { "start": 2232.56, "end": 2233.56, "text": " No, no, no." }, { "start": 2233.56, "end": 2241.48, "text": " It's, you know, if I just built the model, I get 10.4% right here, but then I add multi," }, { "start": 2241.48, "end": 2244.44, "text": " I add the encoder loss of the mask language model." }, { "start": 2244.44, "end": 2248.7999999999997, "text": " I add the tempering, I add the tags and ratings." }, { "start": 2248.7999999999997, "end": 2255, "text": " So the little snippet they put in front that they randomize at test time, right?" 
}, { "start": 2255, "end": 2261.04, "text": " I add value predictions, I add this weighting of the gradient, I add the clustering." }, { "start": 2261.04, "end": 2266.52, "text": " You can see that with everything they add, they get improvement after improvement." }, { "start": 2266.52, "end": 2273.2799999999997, "text": " So I guess what the lesson here is that there might always be a way to sort of push your" }, { "start": 2273.2799999999997, "end": 2281.24, "text": " system even further by just adding something, something smart or alternatively just scaling" }, { "start": 2281.24, "end": 2283.7599999999998, "text": " by a factor of 10." }, { "start": 2283.7599999999998, "end": 2289.84, "text": " But you know, that I guess that's the sad story of deep learning, right?" }, { "start": 2289.84, "end": 2294.2000000000003, "text": " Because these things, they kind of give you a constant improvement, right?" }, { "start": 2294.2000000000003, "end": 2296.84, "text": " You can see that across all of the things right here." }, { "start": 2296.84, "end": 2303.08, "text": " For example, the first the mask language modeling gives you maybe not here, but maybe not here," }, { "start": 2303.08, "end": 2305.08, "text": " but here like a 2%." }, { "start": 2305.08, "end": 2306.88, "text": " This is about 2%." }, { "start": 2306.88, "end": 2310.76, "text": " This is about 2% improvement." }, { "start": 2310.76, "end": 2316.26, "text": " And you know, some of these things, they scale with size, but some of them also kind of give" }, { "start": 2316.26, "end": 2318.4, "text": " you a constant improvement." }, { "start": 2318.4, "end": 2325.1600000000003, "text": " And the you can always get the same improvement, but just scaling up models, right?" }, { "start": 2325.1600000000003, "end": 2329.76, "text": " In fact, you look at you have to get all of these improvements right here." }, { "start": 2329.76, "end": 2332.44, "text": " Or you just scale up the model by a factor of 10." }, { "start": 2332.44, "end": 2335.28, "text": " And you get like also an improvement." }, { "start": 2335.28, "end": 2337.8, "text": " Sad story of deep learning." }, { "start": 2337.8, "end": 2348.36, "text": " Yeah, this right here is a comparison of this is a comparison of the filtering and" }, { "start": 2348.36, "end": 2349.7400000000002, "text": " clustering algorithms." }, { "start": 2349.7400000000002, "end": 2355.52, "text": " So if they just do no filtering, they just select 10 outputs at random, obviously, their" }, { "start": 2355.52, "end": 2361.1800000000003, "text": " solve rate is just zero, because they generate like most of the generated samples, they are" }, { "start": 2361.1800000000003, "end": 2363.88, "text": " just garbage, they don't." }, { "start": 2363.88, "end": 2365.1800000000003, "text": " Well they don't solve the problem." }, { "start": 2365.1800000000003, "end": 2369.34, "text": " So if they now filter that already gives the biggest boost, right?" }, { "start": 2369.34, "end": 2373.6200000000003, "text": " That eliminates the 99% that fail on the test inputs." }, { "start": 2373.62, "end": 2380.38, "text": " And therefore, that is that is pretty, pretty significant improvement." }, { "start": 2380.38, "end": 2387.68, "text": " If they also add clustering, then as you can see, especially at the larger sample budgets," }, { "start": 2387.68, "end": 2389.6, "text": " the clustering helps a lot." }, { "start": 2389.6, "end": 2392.54, "text": " And the blue line here is a theoretical upper bound." 
}, { "start": 2392.54, "end": 2398.08, "text": " So the blue line is where they just submit every single thing that they sample and see" }, { "start": 2398.08, "end": 2400.3199999999997, "text": " how much that would solve." }, { "start": 2400.32, "end": 2406.56, "text": " So this is theoretical upper bound if they could always sample and select not sample" }, { "start": 2406.56, "end": 2412.82, "text": " the correct but if they could always select the correct things from the things they sampled," }, { "start": 2412.82, "end": 2416.0800000000004, "text": " you can see that there is still a big gap." }, { "start": 2416.0800000000004, "end": 2422.6000000000004, "text": " So even though they do this whole clustering thing, they seem to be still unable in, let's" }, { "start": 2422.6, "end": 2430.64, "text": " say about 10% or so about 10 percentage points or so of solutions to actually come up with" }, { "start": 2430.64, "end": 2436.72, "text": " the to select the correct solution among all of their candidates, which is surprising," }, { "start": 2436.72, "end": 2438.18, "text": " right?" }, { "start": 2438.18, "end": 2439.64, "text": " Maybe not, maybe not." }, { "start": 2439.64, "end": 2444.72, "text": " I mean, yeah, I don't know." }, { "start": 2444.72, "end": 2446.92, "text": " They do test against baselines." }, { "start": 2446.92, "end": 2454.7200000000003, "text": " And I guess the only thing to be said is that the baselines, they sometimes succeed on easy" }, { "start": 2454.7200000000003, "end": 2455.7200000000003, "text": " problems." }, { "start": 2455.7200000000003, "end": 2463.48, "text": " You can see right here that in the introductory problems, something like codex doesn't perform" }, { "start": 2463.48, "end": 2465.08, "text": " too poorly." }, { "start": 2465.08, "end": 2471.88, "text": " However, as soon as you go to like competition level problems, and this is a different data" }, { "start": 2471.88, "end": 2476.76, "text": " set right here in different methodologies in order to make the models comparable." }, { "start": 2476.76, "end": 2483.92, "text": " And their alpha code just shines quite out shines its competitors quite a bit." }, { "start": 2483.92, "end": 2487.6000000000004, "text": " And this is the one one billion model." }, { "start": 2487.6000000000004, "end": 2490.96, "text": " This is not even the larger model." }, { "start": 2490.96, "end": 2496.84, "text": " They do compare whether or not the model just copies over code." }, { "start": 2496.84, "end": 2501.7200000000003, "text": " And they have a lot of ways to investigate that and they find that largely no, it doesn't" }, { "start": 2501.7200000000003, "end": 2505.32, "text": " copy more code than humans copy." }, { "start": 2505.32, "end": 2510.6400000000003, "text": " Therefore, so also humans in these competitions, they they have some algorithm in mind that" }, { "start": 2510.6400000000003, "end": 2515.2000000000003, "text": " they've seen somewhere they just write it down again, or they even actively copy from" }, { "start": 2515.2000000000003, "end": 2516.8, "text": " other solutions." }, { "start": 2516.8, "end": 2520.76, "text": " They do investigate quantitatively and qualitatively that right here." }, { "start": 2520.76, "end": 2525.32, "text": " And they find that the model largely does not." }, { "start": 2525.32, "end": 2531.4, "text": " It does not copy over entire solutions from somewhere else." 
}, { "start": 2531.4, "end": 2537.52, "text": " Like it doesn't just try out all the things that it has seen so far." }, { "start": 2537.52, "end": 2539.48, "text": " There are other tricks right here." }, { "start": 2539.48, "end": 2544.44, "text": " Sorry, there are also ablations, which I this video is already too long." }, { "start": 2544.44, "end": 2549.08, "text": " So I don't want to necessarily go into it into all of the things." }, { "start": 2549.08, "end": 2557.08, "text": " One interesting thing is that they report that their validation loss after very short" }, { "start": 2557.08, "end": 2558.6800000000003, "text": " time increases." }, { "start": 2558.68, "end": 2561.72, "text": " So you can see right here, the validation loss drops." }, { "start": 2561.72, "end": 2564.14, "text": " And after a while, it increases again." }, { "start": 2564.14, "end": 2567.16, "text": " This would indicate overfitting usually." }, { "start": 2567.16, "end": 2570.7999999999997, "text": " And you can see that for the rest of the run, the validation loss increases." }, { "start": 2570.7999999999997, "end": 2578.96, "text": " However, their real metric, the true metric, the solve rate actually increases too throughout." }, { "start": 2578.96, "end": 2583.9199999999996, "text": " You can see right here, the solve rate increasing throughout the run." }, { "start": 2583.92, "end": 2589.46, "text": " First diminishing returns, but it does continue to increase, which means that the validation" }, { "start": 2589.46, "end": 2593.64, "text": " loss is not necessarily a good metric." }, { "start": 2593.64, "end": 2600.94, "text": " They do have an explanation for this, namely that these coding models, there's not one" }, { "start": 2600.94, "end": 2604, "text": " correct solution, not even in the data set, right?" }, { "start": 2604, "end": 2610.92, "text": " The data set contains many instances of problem A, and then solution one, solution two, solution" }, { "start": 2610.92, "end": 2612.52, "text": " three, solution four." }, { "start": 2612.52, "end": 2618.14, "text": " So if the model learned to produce solution one for problem A, which is a correct solution," }, { "start": 2618.14, "end": 2624.36, "text": " but the current data point wants the model to produce solution two, right?" }, { "start": 2624.36, "end": 2628.04, "text": " Because you're doing language modeling, you need to select one that you train on." }, { "start": 2628.04, "end": 2631.48, "text": " Then that would technically be wrong." }, { "start": 2631.48, "end": 2640.48, "text": " And therefore, if you measure this on the validation set, you might actually get worse." }, { "start": 2640.48, "end": 2646.72, "text": " Yet still, you might actually increase in your ability to solve the actual problems." }, { "start": 2646.72, "end": 2652.04, "text": " This leads me to believe a little bit that, you know, is the training loss even appropriate" }, { "start": 2652.04, "end": 2653.04, "text": " for this thing?" }, { "start": 2653.04, "end": 2658.38, "text": " I mean, it's fine, you know, the validation loss goes up, I can understand why and why" }, { "start": 2658.38, "end": 2661.2400000000002, "text": " that might not be necessarily a problem." 
}, { "start": 2661.2400000000002, "end": 2668.88, "text": " But does that kind of mean that the training loss itself should be rethought and that we" }, { "start": 2668.88, "end": 2674.2000000000003, "text": " should have a better training loss for these types of models where multiple continuations," }, { "start": 2674.2000000000003, "end": 2679.4, "text": " multiple solutions exist in the data set to the same prefix?" }, { "start": 2679.4, "end": 2680.7400000000002, "text": " I don't know." }, { "start": 2680.7400000000002, "end": 2684.52, "text": " That is one of many questions that I have right here." }, { "start": 2684.52, "end": 2689.36, "text": " As I said, they have lots of other stuff, they augment the data set with some fuzzing" }, { "start": 2689.36, "end": 2691.84, "text": " procedure." }, { "start": 2691.84, "end": 2696.52, "text": " They do lots, lots of different things and investigations." }, { "start": 2696.52, "end": 2699.08, "text": " The paper also has a long appendix." }, { "start": 2699.08, "end": 2703.52, "text": " If you're into that, you can see a lot more stuff, a lot more analysis." }, { "start": 2703.52, "end": 2708.4, "text": " But I think I'm going to leave it here and jump over to the interview." }, { "start": 2708.4, "end": 2709.4, "text": " Thanks so much." }, { "start": 2709.4, "end": 2724.1600000000003, "text": " And I hope you enjoy that as well." } ]
uwfVxckuq50
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Why AI is Harder Than We Think (Machine Learning Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "deep learning tutorial", "introduction to deep learning", "what is deep learning", "ai winter", "ai spring", "why is ai hard", "can machines think", "can machines be conscious", "alan turing", "elon musk artificial intelligence", "self driving cars", "marvin minsky", "expert systems", "deep learning artificial intelligence", "are neural networks artificial intelligence", "why is deep learning important" ]
#aiwinter #agi #embodiedcognition The AI community has gone through regular cycles of AI Springs, where rapid progress gave rise to massive overconfidence, high funding, and overpromise, followed by these promises being unfulfilled, subsequently diving into periods of disenfranchisement and underfunding, called AI Winters. This paper examines the reasons for the repeated periods of overconfidence and identifies four fallacies that people make when they see rapid progress in AI. OUTLINE: 0:00 - Intro & Overview 2:10 - AI Springs & AI Winters 5:40 - Is the current AI boom overhyped? 15:35 - Fallacy 1: Narrow Intelligence vs General Intelligence 19:40 - Fallacy 2: Hard for humans doesn't mean hard for computers 21:45 - Fallacy 3: How we call things matters 28:15 - Fallacy 4: Embodied Cognition 35:30 - Conclusion & Comments Paper: https://arxiv.org/abs/2104.12871 My Video on Shortcut Learning: https://youtu.be/D-eg7k8YSfs Abstract: Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI spring") and periods of disappointment, loss of confidence, and reduced funding ("AI winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense. Authors: Melanie Mitchell Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there, welcome back. Today we're going to look at Why AI is Harder Than We Think by Melanie Mitchell of the Santa Fe Institute. This paper argues that the cycles of AI spring and AI winter come about because people make overconfident predictions, and then everything breaks down. Mitchell goes into why people make these overconfident predictions. She outlines four fallacies that researchers fall for, details them, and gives some suggestions for what can be done better. So it's a bit of a different paper than we usually look at, but I'd still be interested in your opinions. Let me know in the comments what you think. Share this video out, and of course subscribe if you're interested in machine learning content. All right, Why AI is Harder Than We Think. In the abstract, Mitchell makes the case that since the 1950s, when AI was beginning to develop, there have been repeating periods of what are called AI springs, which are periods of optimistic predictions and massive investment, and, on the other hand, periods of disappointment, loss of confidence, and reduced funding, which are called AI winters. And she says that even today, where AI has had a number of breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for this, she says, is our limited understanding of the nature and complexity of intelligence itself. And there are four fallacies she describes, common assumptions which can lead to these overconfident predictions. If you know a little bit about the history of AI, you are aware that there is this cycle of springs and winters, and it has been that way from the very beginning. She outlines very clearly that when, for example, the perceptron was invented, people thought they were going to do all of these extremely cool things. Claude Shannon said: I confidently expect that within a matter of 10 to 15 years, something will emerge from the laboratory which is not too far from the robots of science fiction fame. And Marvin Minsky forecast that within a generation the problems of creating artificial intelligence will be substantially solved. This is due to the fact that they saw really good progress in a very short amount of time and just extrapolated that progress, which did not turn out to hold. And then, of course, there was a winter, a downturn in enthusiasm, after all of these promises didn't materialize. Then again in the 1980s there were more AI systems coming up, an upswing again and a disappointment again, and then in the 1990s and 2000s machine learning was finally introduced. The 1980s, by the way, were the time of expert systems. So first people developed the perceptron and thought that was the way to go; then, with expert systems, people thought that if we just develop these rules, rule solvers, and rule-searching algorithms, we can build AI, and that did not turn out either. And now we are in the machine learning paradigm, where people develop machine learning algorithms and think, okay, that's the way to go. So she makes the case that this time too we might be in a period of overconfidence.
She says: however, around 2000, deep learning, in which brain-inspired multilayer neural networks are trained from data, emerged from its backwater position and rose to superstar status in machine learning. It had been around since the 1970s, but recently, with big data sets and big compute, it could be scaled up to a large number of unsolved challenges and solve them: speech recognition, machine translation, chatbots, image recognition, game playing, protein folding, and many more things. And people, let's say, call this AI. In essence, this is machine learning, and machine learning and AI are almost synonymous nowadays. But we shouldn't forget that AI is a different thing from machine learning; it's just that many people today believe that you can use machine learning in order to achieve AI. And there was all at once a new round of optimism about the prospects of what has been variously called general, true, or human-level AI. She goes through a little bit of what tech CEOs say: a co-founder of Google DeepMind predicted in 2008 that human-level AI will be passed in the mid-2020s. I guess that's soon. Mark Zuckerberg declared that one of Facebook's goals for the next five to ten years is to basically get better than human level at all the primary human senses: vision, hearing, language, and general cognition. That would also be very soon; those ten years are coming to an end. So she says: in spite of all this optimism, it didn't take long for cracks to appear in deep learning's facade of intelligence. So already she's calling it a facade of intelligence and not intelligence itself. Turns out, like all AI systems of the past, deep learning can exhibit brittleness: unpredictable errors when facing situations that differ from the training data. She says these systems are susceptible to shortcut learning. I've done a video on shortcut learning if you're interested in that; it's a criticism of neural networks that is well summarized here as learning statistical associations in the training data that allow the machine to produce correct answers, but sometimes for the wrong reasons. One should add: correct answers on the test data set. And this stems a lot from how these data sets are generated. For example, there was this famous paper where they tried to detect criminality from a face portrait, and it just so happened that when they assembled their data set, they took all the criminal examples from mugshots but all the non-criminal ones from LinkedIn. So the model could just learn who is dressed well and who smiles, which had nothing to do with actual criminality. Shortcut learning is essentially this: given the way you construct the data set, there might be something in it that lets the model give you the correct answer on your test set, because the test set is constructed the same way; however, the model doesn't really learn the true thing you want it to learn. That certainly exists. However, I feel that is a data set problem, not a problem with deep learning itself. In other words, as the paper puts it, these mechanisms don't learn the concepts we are trying to teach them, but rather they learn shortcuts to correct answers on the training set, and such shortcuts will not lead to good generalizations. A minimal toy version of this failure mode is sketched below.
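Here is a minimal sketch of that shortcut dynamic, with entirely invented features and numbers; the "attire" cue stands in for the mugshot-versus-LinkedIn artifact, and the weak "signal" feature stands in for whatever we actually wanted the model to learn.

import numpy as np

rng = np.random.default_rng(0)

def make_split(n, spurious_corr):
    # y is the label; `attire` is a nuisance feature agreeing with y
    # with probability `spurious_corr`; `signal` is the weak, noisy
    # feature we would like the model to rely on instead.
    y = rng.integers(0, 2, n)
    attire = np.where(rng.random(n) < spurious_corr, y, 1 - y) * 2 - 1
    signal = (2 * y - 1) + rng.normal(0, 2.0, n)
    return np.column_stack([attire, signal]), y

# Train and test sets come from the SAME biased collection process...
Xtr, ytr = make_split(2000, spurious_corr=0.98)
Xte, yte = make_split(500, spurious_corr=0.98)
# ...while deployment data lacks the shortcut entirely.
Xdep, ydep = make_split(500, spurious_corr=0.5)

# Plain logistic regression trained by gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xtr @ w)))
    w -= 0.1 * Xtr.T @ (p - ytr) / len(ytr)

acc = lambda X, y: float((((X @ w) > 0) == (y == 1)).mean())
print(f"biased test accuracy:  {acc(Xte, yte):.2f}")   # looks impressive
print(f"deployment accuracy:   {acc(Xdep, ydep):.2f}") # much worse: the shortcut is gone

Under the biased pipeline the spurious feature alone yields near-perfect test accuracy, so nothing in the usual train/test protocol flags the problem; only data collected differently reveals it.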
Humans, if you think about it, do this as well. Take branding: if you ever bought a pair of Nike shoes, you probably didn't exactly check their quality or evaluate them in detail. Maybe some of you do, but others just think: ah, it's this brand, that tells me something about the quality of the shoes, they're not the cheapest manufacturer, and so on, even though that might not be true. You attach all of this to the brand symbol. So essentially, humans perform shortcut learning all the time. But point taken: these networks are brittle, they sometimes learn the wrong thing, and of course they're vulnerable to adversarial perturbations, though I don't think that last one is an exact criticism. It just means that the networks see the world in a slightly different way than we do, and you can exploit that little difference in order to make them do weird things. But you need to really target that; it's not something that happens by itself. I think the big challenge is what she says next: however, it seems clear from their non-human-like errors and vulnerability to adversarial perturbations that these systems are not actually understanding the data they process, at least not in the human sense of understand. It's still a matter of debate in the AI community whether such understanding can be achieved by adding network layers and more training data, or whether something more fundamental is missing. A couple of comments here. This understanding, and she states this correctly, is in the human sense of understand, and she puts it in quotes. I don't think I've met anyone yet who can actually tell me what understanding means, or suggest a rigorous test for understanding. I think Walid Saba came the closest by actually saying: look, if this and this and this happens, then I claim the system understands. But most people just say something like: well, I'll know it when I see it. So this seems a bit like moving the goalposts of what it means to understand. But I agree: most people wouldn't think that today's AI systems actually understand the data in the same way humans do, for whatever definition of understand is commonly used. The other point is whether that understanding can be achieved by adding network layers and more training data, or whether something more fundamental is missing. Now, you have to remember that human intelligence, however smart it might be, runs on hardware. It runs on neurons. Later the author makes the case for embodied cognition, but ultimately intelligence is an algorithm implemented in hardware, and it's all neurons. Sure, they're super specialized in some fashions, but ultimately you only have the chemistry that you have, and we know for a fact that intelligence arises from an algorithm on that hardware. So yes, you can ask whether current neural network architectures are going to be sufficient, but I don't know what fundamental thing might be missing; there might be better, more efficient approaches, and so on, but ultimately the human brain is hardware too. That said, we could build more purpose-built network architectures if we knew that something specific is missing.
Maybe it's a different structure of network or a different type of algorithm on the hardware; we could build that in. Okay, moving on: she now gets into her four fallacies. And remember, she claims that because these fallacies exist, people make overconfident predictions about the future of AI, and we shouldn't do that, because if we make overconfident predictions we won't meet our goals, and then the funding will dry up because we've set expectations too high, and we'll go into another AI winter. That's a valid thing to say, though at some point she also quotes Elon Musk on self-driving cars and the fact that they're not fully self-driving. I think that's up here. Yes: in 2019 Elon Musk promised, a year from now we'll have over a million cars with full self-driving software and everything. And, as the paper puts it, despite attempts to redefine full self-driving into existence, none of these predictions have come true. The reference here is to a link where Tesla, I think towards the DMV, so towards the regulators, says they're actually not doing full self-driving. I think it's a bit weird to criticize Tesla on that; I'm sure no other company ever has had a different tone and messaging when they do marketing than when they talk to the regulators. I'm sure that never happens anywhere on the planet except with Tesla, right? That being said, Elon Musk does over-promise all the time. On the other hand, he also achieves things that no one else achieves. I think it drives certain people mad that even though he over-promises so much, he still achieves insane results, just not as insane as he promises. I kind of like that it makes people a bit mad. Okay, so the first fallacy: narrow intelligence is on a continuum with general intelligence. The fallacy is thinking that if we develop something like Deep Blue, which was hailed as the first step of an AI revolution, or GPT-3, which was called a step towards general intelligence, then there is a continuum: if we get better on individual tasks, we make progress towards general AI. The first-step fallacy is the claim that, ever since our first work on computer intelligence, we have been inching along a continuum at the end of which is AI, so that any improvement in our programs, no matter how trivial, counts as progress. It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon. This has connections to Kenneth Stanley's work on exploration, on reinforcement learning without a goal: undirected, exploration-based learning, where you can deceive yourself by just going towards a goal, and maybe you need an entirely different approach. I guess the fallacy here is to say that whatever successes we have, we will interpret them as steps towards general AI. And honestly, I get it. Deep Blue is not general AI, and I get that with a min-max search tree and a bunch of handcrafted rules you cannot get to general AI. However, the principles are still in use; Deep Blue isn't so different from AlphaGo.
And the concept that you need an AI that looks ahead to a certain depth in order to achieve intelligence is not stupid. The demonstration that such a system can beat humans at a previously unbeaten task is, I think, definitely progress towards general AI. In fact, I doubt we'll find a general AI that does not have something that at least resembles such a module. The same goes for GPT-3: I'm fairly convinced that a general AI will have some type of self-supervised learning of language going on, and to not call GPT-3 a step in the direction of general intelligence... sure, there's all the criticism, it's just interpolating training data, yada yada yada, but it's undeniable that GPT-3 and that family of models are tremendous progress, and I would argue progress towards general AI. I guess the better question is how much progress it is: is it halfway there, or is it 1% there? In a way, the monkey climbing the tree is a bit of progress towards the moon, because it sees the moon and may want to go there. So I only agree a little bit; I don't know how valid the analogy is, though. Fallacy two: easy things are easy and hard things are hard. That's the fallacy; the corrected version would actually be: easy things are hard and hard things are easy. This is all about arguing that we assume the hard problems for humans are also the hard problems for computers, so whenever a computer solves a problem that is hard for humans, we think: wow, the computer must be super smart, because only a super smart human would achieve such a thing. For example, researchers at Google DeepMind, talking about AlphaGo's triumph, described the game of Go as one of the most challenging of domains. But, as this paper correctly asks, challenging for whom? For humans, perhaps. But as psychologist Gary Marcus pointed out, there are domains, including games, that while easy for humans are much more challenging than Go for AI systems. One example is charades. And this is a valid criticism that people do fall victim to. How often have you seen someone interact with, not even an AI system, but anything technical, and ask: why can't the stupid computer just do this? How easy is that? If you have coded before, you recognize that it's not that easy, even though it seems super easy to a human. So that's a correct criticism. I do think deep learning has brought us a lot closer here, in all of these things where humanness shines; especially in the perception domain, deep learning has brought us a lot closer. Though the paper argues that there's still a kind of common sense that isn't yet there for machines, which I also agree with. Fallacy number three: the lure of wishful mnemonics. This is a bit about how we name things. The argument, quoted here, goes: a major source of simple-mindedness in AI programs is the use of mnemonics like understand or goal to refer to programs and data structures. If a researcher calls the main loop of his program understand, he is, until proven innocent, merely begging the question. He may mislead a lot of people, most prominently himself.
What he should do instead is refer to the main loop as G0034 and see if he can convince himself or anyone else that G0034 implements at least some part of understanding. Many instructive examples of wishful mnemonics by AI researchers come to mind once you see this point. So this is about how we talk about AI systems and the fact that we name things the way we do. They give more recent examples here; for some reason DeepMind features a lot, and IBM Watson is of course here too. Granted, they do make a lot of claims about intelligence and their systems. Demis Hassabis says AlphaGo's goal is to beat the best human players, not just mimic them. David Silver said: we can always ask AlphaGo how well it thinks it's doing during the game. It was only towards the end of the game that AlphaGo thought it would win. And the italicized words here are goal, thinks, and thought it would win. The fallacy is that by using these words we ascribe human tendencies, human wants, human needs to those systems. The author argues that AlphaGo doesn't have a goal per se; we just say this. AlphaGo doesn't think anything about itself, and winning doesn't mean anything to it. Now, I agree that by calling things certain names we implicitly imply that something is happening; we ascribe a humanness to these machines that might not exist. However, I don't necessarily agree that AlphaGo, for example, has no goal. What does it mean to have a goal? How can you even measure that humans have a goal? Unless you ask someone what their goal is, you observe their behavior; they seem to be acting to achieve a certain result, and AlphaGo does the same. I don't see why AlphaGo doesn't have a goal in the same way. At least, you can't give me a tangible definition of goal that does not include AlphaGo, unless you explicitly carve it such that AlphaGo is excluded. The same goes for how well it thinks it's doing during the game, and that it was only towards the end that AlphaGo thought it would win. This is a bit more dicey, because AlphaGo isn't really estimating how likely it is to win the current game; it's evaluating its value function, which was trained against itself, so against the best opponent it knows. It therefore constantly underestimates its chances of winning, unless the actual opponent is better than AlphaGo. And of course, winning doesn't mean anything to AlphaGo. But then, you can't do this for a human either: hey, human, what does winning mean? Who knows. AlphaGo does have a concept of winning a game, of getting positive reward; there is a clear state in its state space that relates to a winning game position. So again, it's a valid criticism that we shouldn't attribute humanness to these machines, but I do think a lot of these examples are not as clear as they seem. The clearer ones are down here: when we have data sets and tasks such as the Stanford Question Answering Dataset, SQuAD for short, or the RACE reading comprehension data set, or the General Language Understanding Evaluation, GLUE, and its derivative SuperGLUE, these are named, of course... if you work with them, you know fairly quickly that, even if it is question answering, it's a very limited set of question answering.
It's a very specific kind of question answering; it's not the general ability to answer questions, and researchers know that. But you have to give it some name. The thought here is that to the public it might seem overly optimistic when the press writes things like: Microsoft's AI has outperformed humans in natural language understanding. Which is of course true. However, I feel the researchers are only mildly to blame for this. Of course there's marketing in research, but there's a high chance that in this article it was the journalist who massively hyped up those statements to gather more clicks. I agree, though, that to the public it then reads as over-promising. Maybe a politician reads this and directs more funding, because wow, and so on, and then you get this cycle of over-promising and disappointment. Fallacy four: intelligence is all in the brain. This is about embodied cognition, and the claim that we should pay more attention to it. The fallacy is that intelligence is all in the brain, and she criticizes the information-processing model of the mind: the assumption that intelligence is all in the brain has led to the speculation that, to achieve human-level AI, we simply need to scale up machines to match the brain's computing capacity and then develop the appropriate software for this brain-matching hardware. So Geoff Hinton is quoted saying, essentially, that the brain has so-and-so many connections; once we match that, it's a hardware problem. However, there are researchers in embodied cognition, a theory gaining steam since the mid-1970s, and they have a lot of evidence. Embodied cognition means that the representation of conceptual knowledge is dependent on the body: it's multimodal, not amodal, symbolic, or abstract. This theory suggests that our thoughts are grounded in, or inextricably associated with, perception, action, and emotion, and that our brain and body work together to have cognition. There's a lot of evidence that we work that way, that our intelligence works that way. However, if I have to level some criticism here, I would say that maybe the author also falls for a bit of a humanness fallacy in making this argument: just because human intelligence has those properties doesn't mean that's the only way to reach intelligence, even human-level or human-like intelligence. Just because humans don't work without a body doesn't necessarily mean that we can't build intelligence otherwise. There are good arguments for embodiment, don't get me wrong, but if you say something like: look, all the intelligence we ever see is body-based, human intelligence is the only intelligence we know, and that is intelligence that interacts with a body and acts in the world, then I would answer that it's not at all clear. She writes: instead, what we've learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a common-sense understanding of the world; it is not at all clear that these attributes can be separated.
I want to leave out the common-sense understanding of the world for now and focus on the embodiment part. In the same vein you could say: all human intelligence we've ever encountered looks something like this. There's a brain stem right here, there's the frontal thing, I am terrible at drawing brains, this is a brain, okay, and maybe there is the spine and the nerves, so this is a central nervous system. All human intelligence looks like this, so why mustn't our computers also look like this, given that all the intelligence we ever see looks like this? Since we don't have that, we need to build it? I get it: all the intelligence we see involves a brain and a central nervous system and a body, but that doesn't mean we need those things. It might be that the evolutionary pressure on humans, given their bodies, made their intelligence super entangled with the body and made the development of intelligence dependent on having one. But again, ultimately we have to acknowledge that intelligence is something that's implemented in hardware. And it is the case that paraplegics have intelligence. I get it, things like emotions and desires are still there, and they might play a role in the development of intelligence; but paraplegics have intelligence, whereas what doesn't have intelligence is someone who's been to the guillotine: there's no intelligence left in the body part. So there's fairly good evidence, I'd say, that intelligence exists independently of the body, because we can remove pretty much every part of the body and still have intelligence, except the brain. However, the body and embodiment might be necessary to efficiently develop intelligence, and in my view the same goes a bit for common sense. Common sense is a bit of a mystery word that people use, I feel. By common sense they mean the things that you just know. But I would say this common sense is the result of enormous amounts of evolution, built into your brain, or at least making your brain extremely adept at learning these things really quickly. That's what evolution has done. So in that way it is very much a scale problem, a data-plus-scale problem, maybe plus some clever neuromorphic algorithms or something like this. It's not a qualitatively different thing, at least I feel; we could accelerate it by directly programming in common sense, but that doesn't make it a qualitatively different ingredient. I do agree that embodiment is probably a good way to go in order to develop general AI, in order to push the next boundary of AI, especially towards a kind of multimodal, multi-sensory intelligence, and also reinforcement learning: models that act in the world and observe the effects of their own actions. We kind of have that already; a recommender system like YouTube takes actions that influence the system it acts on, and so on. It just doesn't handle that feedback super well for now. So those were the four fallacies.
She lays out a bit of a plan for the future, especially focusing on the following points: we need to give these machines a bit of the common sense they're still missing; we attribute too much humanness to them; we should maybe go more after embodied cognition, because that seems to be very promising; and we shouldn't use wishful mnemonics. So we shouldn't call our routines something like attention, because it's not the same kind of attention that we usually mean by attention. We shouldn't assume that the same things that are hard for humans are hard for machines. And finally, we shouldn't assume that just any newly solved task counts as a step towards general intelligence. Those are the four fallacies, and that was this paper. I invite you to read it in full; it has some good stuff in it that I didn't cover right now. Go check it out, tell me what you think in the comments, and I'll see you next time. Bye bye.
[ { "start": 0, "end": 7, "text": " Hello there, welcome back. Today we're going to look at why AI is harder than we think" }, { "start": 7, "end": 17, "text": " by Melanie Mitchell of the Santa Fe Institute. This paper argues that the cycles of AI spring and AI winter" }, { "start": 17, "end": 23, "text": " come about by people making too overconfident of predictions and then everything breaks down." }, { "start": 23, "end": 31, "text": " And Mitchell here goes into why people make these overconfident predictions. She outlines four fallacies" }, { "start": 31, "end": 38, "text": " that researchers make and details them and gives some suggestions of what can be done better." }, { "start": 38, "end": 45, "text": " So it's a bit of a different paper than we usually look at, but I'd still be interested in your opinions." }, { "start": 45, "end": 52, "text": " Let me know in the comments what you think. Share this video out and of course subscribe if you're interested in machine learning content." }, { "start": 52, "end": 64, "text": " All right, why AI is harder than we think. In the abstract here, Mitchell makes the case that since the 1950s" }, { "start": 64, "end": 73, "text": " when AI was sort of beginning to develop, there were repeating periods of what are called AI springs," }, { "start": 73, "end": 78, "text": " which are periods of optimistic predictions and massive investment." }, { "start": 78, "end": 87, "text": " And on the other hand, periods of disappointment, loss of confidence and reduced funding, which are called AI winters." }, { "start": 87, "end": 97, "text": " And she says, even today, where AI has a number of breakthroughs, the development of long promised technologies," }, { "start": 97, "end": 106, "text": " such as self driving cars, housekeeping robots and conversational companions has turned out to be much harder than many people expected." }, { "start": 106, "end": 118, "text": " And she says one reason of this is our limited understanding, she says, of the nature and complexity of intelligence itself." }, { "start": 118, "end": 126, "text": " And there are four fallacies she describes and common assumptions which can lead to these overconfident predictions." }, { "start": 126, "end": 136, "text": " So if you know anything a little bit about the history of AI, you are aware that there is this cycle of these springs and winters." }, { "start": 136, "end": 145, "text": " And this has been the case from the very beginning. And she outlines very clearly here that, you know, when, for example," }, { "start": 145, "end": 152, "text": " the perceptron was invented, people thought, oh, we're going to do all of this extremely cool things." }, { "start": 152, "end": 161, "text": " Here, Claude Shannon said, I confidently expect that within a matter of 10 to 15 years, something will emerge from the laboratory," }, { "start": 161, "end": 166, "text": " which is not too far from the robots of science fiction fame." }, { "start": 166, "end": 174, "text": " And Marvin Minsky forecasts that within a generation, the problems of creating artificial intelligence will be substantially solved." }, { "start": 174, "end": 185, "text": " So this is due to the fact they saw real good progress in a very short amount of time and they just extrapolated that progress." }, { "start": 185, "end": 198, "text": " And that did not turn out to be the case. And then, of course, there was a winter, a downturn in enthusiasm after all of these promises didn't materialize." 
}, { "start": 198, "end": 206, "text": " Then again, in the 1980s, there were more more AI systems coming up." }, { "start": 206, "end": 217, "text": " There was a upswing again and a disappointment again. And then in the 1990s and 2000s, finally, machine learning was introduced." }, { "start": 217, "end": 221, "text": " By the way, the 1980s, the time of like expert systems." }, { "start": 221, "end": 231, "text": " So people first people developed the other perceptron and thought that was the that was the best." }, { "start": 231, "end": 241, "text": " And then expert systems, people thought if we just kind of develop these rules and have these rule solvers and sort of these rule searching algorithms," }, { "start": 241, "end": 244, "text": " then we can build AI that did not turn out." }, { "start": 244, "end": 255, "text": " And now in the current paradigm, we are in the machine learning paradigm where people develop machine learning algorithms and they think, OK, that's the way to go." }, { "start": 255, "end": 264, "text": " So she makes the case here that also this time we might be in a period of overconfidence." }, { "start": 264, "end": 276, "text": " She says, however, around 2000 deep learning in which brain inspired multilayer neural networks are trained from data emerged from this backwater from its backwater position" }, { "start": 276, "end": 282, "text": " and rose to superstar status in machine learning has been around since the 1970s." }, { "start": 282, "end": 293, "text": " But recently, with big data sets and big compute, you know, we can we can scale up to a large number of unsolved challenges and solve them." }, { "start": 293, "end": 302, "text": " So we can do speech recognition, machine translation, chatbot, image recognition, game playing, protein folding and many more things." }, { "start": 302, "end": 306, "text": " And people, let's say, call this AI." }, { "start": 306, "end": 312, "text": " Right. In essence, this is machine learning and machine learning and AI are almost synonymous nowadays." }, { "start": 312, "end": 317, "text": " But we shouldn't forget that AI is a different thing than machine learning." }, { "start": 317, "end": 327, "text": " It's just that many people today believe that you can use machine learning in order to achieve AI." }, { "start": 327, "end": 340, "text": " And there was all at once a new round of optimism about the prospects of what has been variously called general, true or human level AI." }, { "start": 340, "end": 354, "text": " And she goes through a little bit of what tech CEOs say like co-founder of Google DeepMind predicted that in 2008 that human level AI will be passed in the mid 2020s." }, { "start": 354, "end": 369, "text": " I guess that's soon. Mark Zuckerberg declared that one of Facebook goals for the next five to 10 years is to basically get better than human level at all the primary human senses, vision, hearing, language and general cognition." }, { "start": 369, "end": 375, "text": " Also, that would be very soon. These 10 years come to an end." }, { "start": 375, "end": 386, "text": " So she says, in spite of all this optimism, it didn't take long for cracks to appear in deep learning's facade of intelligence." }, { "start": 386, "end": 392, "text": " So already she's calling it a facade of intelligence and not intelligence itself." 
}, { "start": 392, "end": 403, "text": " Turns out, like all AI systems of the past, deep learning can exhibit brittleness, unpredictable errors when facing situations that differ from the training data." }, { "start": 403, "end": 408, "text": " She says these things are susceptible to shortcut learning." }, { "start": 408, "end": 421, "text": " I've done a video on shortcut learning. If you're interested in that, it's a criticism of neural networks that is well summarized here by saying learning statistical associations in the training data." }, { "start": 421, "end": 426, "text": " That allow the machine to produce correct answers, but sometimes for the wrong reasons." }, { "start": 426, "end": 431, "text": " One should add the correct answers in the test data set." }, { "start": 431, "end": 436, "text": " And this stems a lot from the fact of how these data sets are generated." }, { "start": 436, "end": 445, "text": " So maybe there was this famous paper that where they tried to detect criminality from a face portrait." }, { "start": 445, "end": 454, "text": " And they just happened to assemble their data set. They took all the criminal ones from their mugshots." }, { "start": 454, "end": 458, "text": " But they took all the non-criminal ones from LinkedIn." }, { "start": 458, "end": 467, "text": " And the model could just learn who is dressed well and who smiles and had nothing to do with actual criminality." }, { "start": 467, "end": 484, "text": " And this shortcut learning is essentially where you say, look, you know, the way you construct the data set, you might there might be something in there where the model learns to give you the correct answer on your test set, because that's constructed equally." }, { "start": 484, "end": 490, "text": " However, it doesn't really learn the true thing you want it to learn." }, { "start": 490, "end": 503, "text": " Right. That is certainly, certainly exists. However, that is, I feel that is like a data set problem, not a problem with deep learning itself." }, { "start": 503, "end": 516, "text": " Now, humans have that, right. So, by the way, in other words, these mechanisms don't learn the concepts we are trying to teach them, but rather they learn shortcuts to correct answers on the training set." }, { "start": 516, "end": 524, "text": " And such shortcuts will not lead to good generalizations. So, if you think of humans, humans do that as well." }, { "start": 524, "end": 543, "text": " Like if, you know, with branding and all, like if you ever bought a pair of Nike shoes, and you didn't exactly check their quality or evaluate them and so on, like maybe some of you do, but others are just like, oh, it's this brand that, you know, tells me something about its" }, { "start": 543, "end": 556, "text": " it's made like about the quality of the shoes or something like this. Like, you know, they're not the cheapest and you know, they're not the cheapest manufacturer, even though that might not be true." }, { "start": 556, "end": 566, "text": " But you attach all of this to the brand symbol. And so essentially, humans perform shortcut learning all the time." }, { "start": 566, "end": 580, "text": " But you know, point taken, these networks are brittle, they sometimes learn the wrong attack. They're of course, they're vulnerable to adversarial perturbations, though I don't think that's like a that's like a an exact criticism." 
}, { "start": 580, "end": 590, "text": " It just means that the networks, they see the world in a little bit a different way than we do, right. And you can exploit that little difference in order to make them do weird things." }, { "start": 590, "end": 597, "text": " But you know, you need to really target that it's not like that happens by itself." }, { "start": 597, "end": 603, "text": " The I think the big challenge here is what what she says next." }, { "start": 603, "end": 617, "text": " However, it seems clear from their non human like errors and vulnerability to adversarial perturbations that these systems are not actually understanding the data the process, at least not in the human sense of understand." }, { "start": 617, "end": 627, "text": " It's still a matter of debate in the AI community, whether such understanding can be achieved by adding network layers, and more training data, or whether something more fundamental is missing." }, { "start": 627, "end": 649, "text": " So a couple of comments right here, this understanding and she says this correctly, it's like in the human sense of understand and puts it in quotes. It's like, I don't think I've met yet anyone who can actually tell me what understanding means and or suggest a rigorous test for understanding." }, { "start": 649, "end": 665, "text": " I think Wally Sabah came the closest to actually, you know, put saying look here, if this and this and this happens, then I claim it understands but most people just say something like, well, I'll, I'll know it when I see it, right." }, { "start": 665, "end": 679, "text": " So, this seems a bit the sorry moving the bit of moving the goalpost of what it means to, to understand." }, { "start": 679, "end": 695, "text": " But I agree, most people here wouldn't think that today's AI systems actually understand the data in the same way humans do for whatever definition of understand that is commonly used." }, { "start": 695, "end": 717, "text": " The other point here is whether that understanding can be achieved by adding network layers and more training data or whether something more fundamental is missing. Now, you have to remember that, you know, human intelligence, however smart it might be, runs on hardware, right, it runs on neurons." }, { "start": 717, "end": 733, "text": " And later, the authors here make the case for embodied cognition, but ultimately it runs on hardware, like it's in, it's an algorithm implemented in hardware and in very much all the same, it's all neurons." }, { "start": 733, "end": 749, "text": " Sure, they're super specialized in some fashions, but ultimately you only have the chemistry that you have. And we know for a fact that intelligence arises from an algorithm on that hardware." }, { "start": 749, "end": 767, "text": " So, yes, you can ask whether the current neural networks architectures are going to be sufficient, but I don't, I don't know what fundamental thing here might be missing like there might be better approaches, more efficient approaches and so on." }, { "start": 767, "end": 773, "text": " But ultimately, the human brain is hardware too." }, { "start": 773, "end": 783, "text": " But yeah, we could more purpose built, let's say network architectures if we know that something specific is missing." }, { "start": 783, "end": 793, "text": " Maybe it's a different structure of network or a different type of algorithm on the hardware, we could build that in." }, { "start": 793, "end": 799, "text": " Okay, so as we go on." 
}, { "start": 799, "end": 803, "text": " She is going to into her four fallacies right now." }, { "start": 803, "end": 823, "text": " And remember, so she claims that because these fallacies exist, people make overconfident predictions about the future of AI, and we shouldn't do that because if we make overconfident predictions, that means we won't meet our goals." }, { "start": 823, "end": 845, "text": " And then we will, you know, the funding will dry up because we've set too high expectations, and then we'll go into another AI winter, which is a valid thing to say, though at some point, she also quotes Elon Musk here about you know, self driving cars and that they're not fully, fully self driving." }, { "start": 845, "end": 848, "text": " I think that's, that's up here." }, { "start": 848, "end": 859, "text": " Yeah, so, Elon Musk 2019 promised a year from now we'll have over a million cars with full self driving software and everything." }, { "start": 859, "end": 867, "text": " And despite attempts to redefine full self driving into existence, none of these predictions have come true." }, { "start": 867, "end": 882, "text": " So, so this reference here is to a link where the where Tesla I think towards the DMV so towards the regulators they say oh we're actually not doing fully self driving." }, { "start": 882, "end": 905, "text": " So I think it's a bit, it's a bit, it's a bit, it's a bit weird to criticize, you know, Tesla on on that, like, I'm sure no other company ever has said has had a different tone and messaging when they do marketing than when they talk to the regularities like I'm sure that that never happens." }, { "start": 905, "end": 927, "text": " Anywhere on the planet except with Tesla right. And that being said, Elon Musk does over promise all the time. On the other hand, he also achieves things that no one else achieves, I think it drives certain people mad that even though he's like over promising so much he still like achieves" }, { "start": 927, "end": 938, "text": " insane results, just not as insane as he promises, but I like that it makes people mad a bit." }, { "start": 938, "end": 962, "text": " Okay, so first fallacy is narrow intelligence is on a continuum with general intelligence. So that's the fallacy the fallacy is thinking that if we develop something like deep blue. It was hailed as the first step of an AI revolution, or GPT three was called a step towards general intelligence." }, { "start": 962, "end": 991, "text": " And the fallacy here is that we think that there's this this continuum, like, if we get better on individual tasks, we make progress towards general AI. The first step fallacy is the claim that ever since our first work on computer intelligence, we have been inching along a continuum, at the end of which is AI, so that any improvement in our programs, no matter how trivial counts as progress." }, { "start": 991, "end": 1013, "text": " It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon. This has connections to like Kenneth Stanley, as work on on exploration on reinforcement learning without, you know, goal, goal, undirected reinforcement learning, exploration based" }, { "start": 1013, "end": 1042, "text": " learning, where you can deceive yourself by just going towards a goal. Maybe you need an entirely different approach. 
And I guess the the fallacy here is to to say that whatever progress we make, you know, we're going to interpret that as our whatever successes we have, we're going to interpret that as, as a success, or as a step towards general AI. And, you know, honestly," }, { "start": 1042, "end": 1067, "text": " I get it, I get it. Deep Blue is not general AI. And I get it that with like a min-max search tree, and a bunch of handcrafted rules, you cannot get to general AI. However, you know, the principles are still in use, like Deep Blue isn't so different from AlphaGo. And the concept that you need like an AI" }, { "start": 1067, "end": 1093, "text": " that goes to a certain depth as a look ahead, in order to achieve AI is not stupid, like it is. And the demonstration that such a systems can beat human at a previously unbeaten task is, I think, definitely progress towards general AI. I doubt we'll ever be able to do that." }, { "start": 1093, "end": 1120, "text": " Towards general AI, I doubt we'll find a general AI that does not have something that at least resembles such a module. The same with GPT-3. Like, I'm fairly convinced that a general AI will have some some type of self supervised learning of language going on." }, { "start": 1120, "end": 1147, "text": " And to not call GPT-3 a step into the direction of general intelligence. Like, sure, it, you know, all the criticism, it's just interpolating training data, yada, yada, yada. You can leverage that. But it's undeniable that that GPT-3 and the family of models there are tremendous progress, and I would argue progress towards general AI." }, { "start": 1147, "end": 1169, "text": " I guess the more question is, how much of a progress is it? Like, is it halfway there? Or is it 1% there? In a way, the monkey climbing on the moon is a bit of progress going towards the moon because they, you know, they see the moon and they may want to go to the moon. Yeah." }, { "start": 1169, "end": 1179, "text": " So I agree a little bit. I don't know. I don't know how, how, how valid that is, though." }, { "start": 1179, "end": 1205, "text": " Fallacy two, easy things are easy and hard things are hard. So that's the fallacy where the correct, the corrected version would actually be easy things are hard and hard things are easy. And this is all about arguing that we assume that, you know, the hard problems for computers are also the hard problems for humans." }, { "start": 1205, "end": 1215, "text": " So whenever we solve the hard problems for humans, we think, wow, that's a, you know, the computer must be super smart because only a super smart human would achieve such a thing." }, { "start": 1215, "end": 1226, "text": " For example, researchers at Google DeepMind in talking about AlphaGo's triumph described the game of Go as one of the most challenging of domains." }, { "start": 1226, "end": 1240, "text": " But correctly, this paper asks challenging for whom? For humans, perhaps. But as psychologist Gary Marcus pointed out, there are domains, including games, that while easy for humans are much more challenging than Go for AI systems." }, { "start": 1240, "end": 1260, "text": " One example is charades. And this is a, it's a valid criticism that people, you know, fall, people fall victim to. How often have you seen someone interact with not even an AI system, but any, anything technical and asking like, why can't the stupid computer just, you know, do this?" }, { "start": 1260, "end": 1275, "text": " Like, how easy is that? 
You know, and you have maybe coded previously and you recognize it. It's not that easy, even though it seems super easy to a human." }, { "start": 1275, "end": 1295, "text": " Yeah, so that's a correct criticism. I do think deep learning has brought us a lot closer here, like in all of these things where humanness shines. I think deep learning, especially in the perception domain, has brought us a lot closer." }, { "start": 1295, "end": 1306, "text": " Though this paper argues that there's still this kind of notion of common sense that isn't yet there for machines, with which I also agree." }, { "start": 1306, "end": 1330, "text": " Fallacy number three: the lure of wishful mnemonics. And this is a bit about how we call things. So the argument here is: a major source of simple mindedness in AI programs is the use of mnemonics like understand or goal to refer to programs and data structures." }, { "start": 1330, "end": 1339, "text": " If a researcher calls the main loop of his program understand, he is, until proven innocent, merely begging the question." }, { "start": 1339, "end": 1360, "text": " He may mislead a lot of people, most prominently himself. What he should do instead is refer to the main loop as G0034 and see if he can convince himself or anyone else that G0034 implements at least some part of understanding." }, { "start": 1360, "end": 1377, "text": " Many instructive examples of wishful mnemonics by AI researchers come to mind once you see this point. So this is about how we talk about AI systems and the fact that we call things as we do." }, { "start": 1377, "end": 1393, "text": " They give a more recent example here. Again, for some reason, DeepMind comes up a lot. So IBM Watson is of course here too, DeepMind as well. You know, granted, they do make a lot of claims about intelligence and their systems." }, { "start": 1393, "end": 1410, "text": " So Demis Hassabis says AlphaGo's goal is to beat the best human players, not just mimic them. David Silver said, we can always ask AlphaGo how well it thinks it's doing during the game." }, { "start": 1410, "end": 1432, "text": " It was only towards the end of the game that AlphaGo thought it would win. And the cursive words here are goal, thinks and thought it would win. And the fallacy here is that we use these words, and we sort of ascribe human tendencies, human wants, human needs to those systems." }, { "start": 1432, "end": 1449, "text": " So the author here argues that AlphaGo doesn't have a goal per se, right? We just say this. AlphaGo doesn't think anything about itself, and winning doesn't mean anything to it." }, { "start": 1449, "end": 1464, "text": " Now, I agree that by calling things certain names, we implicitly, you know, imply that there's something happening; we ascribe humanness to these machines that might not exist." }, { "start": 1464, "end": 1479, "text": " However, I don't necessarily agree that AlphaGo, for example, has no goal. Like, you know, what does it mean to have a goal? You know, how can you even measure that humans have a goal, right?" }, { "start": 1479, "end": 1495, "text": " Unless you ask someone, like, what's your goal? But if you can't ask a human, you observe their behavior. They seem to be acting, you know, to achieve a certain result; AlphaGo does the same. Like, I don't see why AlphaGo doesn't have a goal in the same way."
}, { "start": 1495, "end": 1509, "text": " At least you can't give me like a tangible definition of goal that does not include AlphaGo unless you explicitly carve it such that, you know, AlphaGo is excluded." }, { "start": 1509, "end": 1518, "text": " But the same with, you know, how it thinks it's doing during the game. It was only towards the end that AlphaGo thought it would win." }, { "start": 1518, "end": 1532, "text": " This is a bit more dicey, right? Because actually AlphaGo isn't even thinking how much it would win against in the current game. It's actually evaluating its value function against itself, right?" }, { "start": 1532, "end": 1545, "text": " So against the sort of the best opponent it knows. So it constantly underestimates its chances of winning because, you know, unless someone is better than AlphaGo." }, { "start": 1545, "end": 1552, "text": " However, again, you know, of course, winning doesn't mean anything to AlphaGo." }, { "start": 1552, "end": 1561, "text": " However, what does, you know, you also can't do this for a human like, hey, human, what does winning mean?" }, { "start": 1561, "end": 1566, "text": " Who knows, right? AlphaGo does have a concept of winning a game of getting positive reward." }, { "start": 1566, "end": 1580, "text": " Like there is a clear state in its state space that relates to a winning game position. So again, it's a valid criticism that we shouldn't attribute human-ness to these machines." }, { "start": 1580, "end": 1588, "text": " However, I do think a lot of a lot of these examples here are not as clear, right?" }, { "start": 1588, "end": 1607, "text": " The more clear ones are down here. Now, when we have data sets and tasks such as the Stanford question and answering data set, this is SQUAD short, or the the race reading comprehension data set, the general language understanding evaluation, right?" }, { "start": 1607, "end": 1623, "text": " Glue and its derivative super glue. These these are named, of course, if you if you work with them, you know fairly quickly that this is if it is question answering, it's a very limited set of question answering." }, { "start": 1623, "end": 1633, "text": " Like it's a very specific kind of question answering. It's not the ability to answer questions. And you know that. But you have to give it some name, right?" }, { "start": 1633, "end": 1658, "text": " The the thought here is that to the public, it might seem that, you know, when when then the press writes things as Microsoft's AI has outperformed humans in natural language understanding, then that might be overly that might appear overly optimistic, which is, of course, true." }, { "start": 1658, "end": 1680, "text": " However, the researchers I feel are only mildly to blame for this. You know, of course, there's marketing and research, but I would maybe, you know, like there's a high chance that in this article here, it was the journalist that massively up those statements to gather more clicks." }, { "start": 1680, "end": 1694, "text": " And I agree, though, that to the public, then it's over promising. Maybe there's a politician that reads this right directs more funding because wow, and so on. And then you get this over promising and disappointment cycle." }, { "start": 1694, "end": 1710, "text": " Then fallacy four is intelligence is all in the brain. And this is about embodied cognition and we should pay more attention to embodied cognition. So the fallacy is that intelligence is all in the brain." 
}, { "start": 1710, "end": 1737, "text": " And she criticized here the information processing model of the mind, and essentially saying that there is lots of evidence that here the assumption that intelligence is on the brain has led to the speculation that to achieve human level AI, we simply need to scale up machines to match the brain's computing capacity and then develop the appropriate software for this brain matching hardware." }, { "start": 1737, "end": 1758, "text": " Okay, so Jeff Hinton is there saying, you know, in the brain, we have X many connections, you know, once this is a hardware problem. However, there are these researchers in embodied cognition gaining steam since the mid 1970s, and they have a lot of evidence." }, { "start": 1758, "end": 1780, "text": " Body cognition means that the representation of conceptual knowledge is dependent on the body. It's multimodal, not a modal symbolic or abstract. This theory suggests that our thoughts are grounded or inextricably associated with perception, action, emotion, and that our brain and body work together to have cognition." }, { "start": 1780, "end": 1803, "text": " There is there's a lot of evidence that, you know, we work that way, our intelligence works that way. However, I so if if I have to leverage some criticism here, I would say maybe the maybe the author here also has a bit of a human ness fallacy in making this argument, right?" }, { "start": 1803, "end": 1823, "text": " Just because human intelligence has those properties doesn't mean that that's the only way to reach intelligence, even human level intelligence, or human like intelligence. Just because humans don't work without a body doesn't necessarily mean right that we can't build intelligence." }, { "start": 1823, "end": 1841, "text": " Otherwise, I could also say, so the argument, I mean, there, there is, there are good arguments for this, don't get me wrong. But if you say something like, look, all the intelligence we ever see is body based, like human intelligence is the only intelligence we know." }, { "start": 1841, "end": 1864, "text": " And that is intelligence that interacts with a body right in acts in the world and so on. I can also I can also hear it's not it's not at all clear. So instead, what we've learned from research and embodied cognition is that human intelligence seems to be a strongly integrated system with closely" }, { "start": 1864, "end": 1877, "text": " interconnected attributes, including emotions, desires, strong sense of selfhood and autonomy, and a common sense understanding of the world is not at all clear that these attributes can be separated." }, { "start": 1877, "end": 1899, "text": " I want to leave out the common sense understanding of the world right now and and and focus on like the embodiment in the same vein, you can say, you know, all human intelligence we've ever encountered looks something like, you know, like, like, like this, there's a brain stem right here." }, { "start": 1899, "end": 1918, "text": " There's the frontal thing I am terrible at drawing brains. This is a brain. Okay, brain. And all human intelligence looks like this. And you know, maybe there is the spine. And there are the, the nerves here. So this is a nervous system, human intelligence looks like this." }, { "start": 1918, "end": 1938, "text": " Why don't you know, our computers, you know, must also look like this otherwise, because all the intelligence we ever see looks like this. Right. 
So since, you know, we don't have that, we need to build it. It's not... like, I get it." }, { "start": 1938, "end": 1962, "text": " All this intelligence we see is a brain and a central nervous system and a body, but that doesn't mean that we need it. It might even be that, you know, the evolutionary pressure on humans, given their body, made their intelligence super entangled and the development of intelligence dependent on having a body." }, { "start": 1962, "end": 1978, "text": " But again, ultimately, we have to acknowledge that intelligence is something that's implemented in hardware. And it is the case that, you know, paraplegics have intelligence. I get it. Things like emotions and desires and so on." }, { "start": 1978, "end": 1998, "text": " They're still there, and they might play a role in the development of intelligence. But, you know, paraplegics have intelligence, and what doesn't have intelligence is someone who's been to the guillotine, right? There's no intelligence there in, you know, the body part." }, { "start": 1998, "end": 2012, "text": " So there's fairly good evidence, I'd say, that intelligence exists independent of the body, because we can remove, like, every part of the body except the brain and still have intelligence." }, { "start": 2012, "end": 2033, "text": " However, the body and embodiment might be necessary to efficiently develop intelligence, and the same, in my sense, goes a bit for common sense. This common sense is a bit of a mystery word that people use, I feel." }, { "start": 2033, "end": 2054, "text": " So by common sense, they mean, like, oh, you know, the things that you just know, right? But I would say, you know, this common sense that people mean is the result of ginormous years of evolution, you know, built into your brain, or at least making your brain extremely adept at learning these things really quickly, right?" }, { "start": 2054, "end": 2074, "text": " That's what evolution has done. So in that way, it is very much a scale problem. It's very much a data plus scale problem. And maybe some, you know, clever neuromorphic algorithms or something like this, but it's not like, you know, we have to put in common sense; it seems like a scale problem." }, { "start": 2074, "end": 2103, "text": " We could accelerate it by, you know, directly programming in common sense, but it's not like a qualitatively different thing, at least I feel. I do agree that embodiment is probably a good way to go in order to develop a general AI, in order to push the next boundary of AI, especially in a kind of multimodal, multi sensory intelligence." }, { "start": 2103, "end": 2120, "text": " And also reinforcement learning. So models that act in the world and observe their own actions. But we have that kind of, too: like, a recommender system like YouTube or something, they do, you know... the actions have influence on the system and so on." }, { "start": 2120, "end": 2129.2, "text": " It just doesn't handle it super well for now. So those were the four fallacies. She lays out a bit of a" }, { "start": 2129.2, "end": 2142.2, "text": " future plan here, especially, you know, focusing on: we need to get these machines a bit of common sense, that's still missing; we attribute too much humanness to them."
}, { "start": 2142.2, "end": 2150.2, "text": " We need to go after maybe more after embodied cognition because that seems to be very promising." }, { "start": 2150.2, "end": 2168.2, "text": " We shouldn't use wishful mnemonics. So we shouldn't call our things something like maybe something like attention, like we shouldn't maybe call our, our routines attention because, you know, it's not the same kind of attention that we call attention." }, { "start": 2168.2, "end": 2184.2, "text": " We shouldn't assume that the same things are hard for humans as they are for machines. And finally, we where was it, we shouldn't assume that just any new solved task as a step towards general intelligence." }, { "start": 2184.2, "end": 2200.2, "text": " Those are the four fallacies. And that was this paper, I invite you to read it in full. It's some has some good stuff in what I didn't read right now. Go check it out. Tell me what you think in the comments and I'll see you next time. Bye bye." } ]
qS-iYnp00uc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Parti - Scaling Autoregressive Models for Content-Rich Text-to-Image Generation (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "diffusion models", "what is deep learning", "deep learning tutorial", "introduction to deep learning", "generative models", "parti", "google parti", "google party", "google pathways", "google imagen", "image", "dalle", "dalle2", "dalle 2", "dall e 2", "dall e 2 vs graphic designer", "anubis" ]
#parti #ai #aiart Parti is a new autoregressive text-to-image model that shows just how much scale can achieve. This model's outputs are crisp, accurate, realistic, and can combine arbitrary styles, concepts, and fulfil even challenging requests. OUTLINE: 0:00 - Introduction 2:40 - Example Outputs 6:00 - Model Architecture 17:15 - Datasets (incl. PartiPrompts) 21:45 - Experimental Results 27:00 - Picking a cherry tree 29:30 - Failure cases 33:20 - Final comments Website: https://parti.research.google/ Paper: https://arxiv.org/abs/2206.10789 Github: https://github.com/google-research/parti Links: Homepage: https://ykilcher.com Merch: https://ykilcher.com/merch YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord LinkedIn: https://www.linkedin.com/in/ykilcher If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Not a day goes by in AI research in which we don't get a new image generation model these days. So take a look at the top row right here and listen to the prompt that generated them. Oil on canvas painting of a blue night sky with roiling energy. A fuzzy and bright yellow crescent moon shining at the top. Below the exploding yellow stars and radiating swirls of blue, a distant village sits quietly on the right. Connecting earth and sky is a flame-like cypress tree with curling and swaying branches on the left. A church spire rises as a beacon over rolling blue hills. That is a 67 word description of Starry Night by Vincent van Gogh. And it is also the prompt that generated the top row of images. And the paper does this to show that image generation models, specifically this one, have become super duper capable of incorporating not only wild concepts, as you can see here, co-locating the Eiffel Tower with the Sydney skyline and fireworks and whatnot, but also, you know, minute details about things in the image and where things are and how things look. So we've gone from essentially conditional GANs, where we could create one of 10 classes, to something where we can input like a little essay about what we want to see and get it out. So this is by a group of researchers out of Google Research. And they are a parallel work to the Imagen model that you might have seen. So this model, or the paper, is called Scaling Autoregressive Models for Content-Rich Text to Image Generation. But the model is called, let me grab, if I can, let me grab a pen. The model is called PARTI. And I have no clue how to pronounce this. This could be party. Maybe the pronunciation is on the 'art' or on the 'part' because it's Pathways, or 'partai,' I have no idea. Let's call it PARTI. And PARTI is a model that generates images from text, as we have so many models. However, it doesn't do this in the same style as, like, Imagen, which is a diffusion model. It is an autoregressive model. So here you can see a bunch of other outputs like this. This is insane. Look at the left side right here. A photo of a frog reading the newspaper named Toaday. The newspaper is named Toaday. Like, how crazy is that? That in itself is pretty funny. But we know that these image to, sorry, these text to image models are pretty bad at spelling stuff in images. Well, not this model, as you can see right here; it gets it completely right. It doesn't always get it right, but it gets it right often enough. Or this one, portrait of a statue of the Egyptian god Anubis wearing aviator goggles (like another connoisseur of fine eyewear), white t-shirt and a leather jacket. The city of Los Angeles is in the background. High res DSLR photograph. That's literally the academic version of the Unreal Engine trick right here. And you can see the images are spot on. So this requires a lot of knowledge, not only of, you know, what a DSLR photograph is, but also how the skyline of Los Angeles looks, how the Egyptian god Anubis looks, right. And the composition of things together: this god was never depicted in a leather jacket, I guess, though maybe on the internet you'll find anything. But you can see a bunch more examples right here. I specifically love the thing on the left side here. You can see that they generated images. So the prompt is three quarters front view of a XYZ coming around a curve in a mountain road looking over a green valley on a cloudy day. So X here is any of the colors blue, red and yellow. Y is any of the numbers.
1977, 1997 and 2017. And Z is any of these car types. And now look: the model can essentially track the historical evolution of these cars. So not only does it know what a Porsche is, it also knows how a Porsche in '77 looked. Maybe it's not exactly the correct year, but this is pretty crazy. You can see a bunch more examples right here. They do a lot of examples with animals. I specifically like the raccoon here in the style of cubism. So this is going to be very, very powerful technology. We can immediately see that, you know, the quality of these models gets better so quickly that in the foreseeable future, we're going to have super powerful tools to just create and edit images from text. Look at the left side here, a giant cobra snake made from salad. You know, I'm sure, they even say these are cherry picked, but still this is insane. Now, I would love to tell you that behind all of this cool development is a really cool idea, like a smart architecture or something like this. But I'm afraid it is not. It is simply scale. Well, not simply scale; I mean, you have to have the sort of correct base architecture. There is nothing... particularly, there's no cool invention in architecture or a neat trick involved or anything like this. It's really just plug basic things together, make them really big, train them for long on a lot of data, and you'll get quality. So this is the model overview right here, the overview of this Parti, or 'partai,' model. This is, as I already said, in contrast to Imagen; it is an autoregressive model, so not a diffusion model. What happens is that on this side here, you have this VQGAN image encoder and decoder. Well, they don't call them encoder and decoder; they call them tokenizer and detokenizer. So if you are not aware, autoregressive models work on tokens. Now, tokens, usually in natural language processing, are words or parts of words. So these would be tokens, token one, token two, and so on until token N. And then what you would try to do is you would try always to predict the next token. That's what makes it autoregressive. You feed in parts of a token sequence, like parts of a sentence, and you try to predict the next one. That's exactly what you see right here in the architecture. So you pass in the start of sentence token, you try to predict the first token, then you pass in the first token. And then from these two, you try to predict the second token. And then you put that here; from these three, you try to predict the third token, and so on. That's the autoregressivity. In text, that works well. However, in images, it's not quite obvious how to do that. That's why you first need to get from the image space to the token space. So we need a way, for any given image, to get out a sequence of tokens. And it can't be the pixels themselves. We would like to have tokens that are kind of latent and have sort of a bit of meaning, not just individual pixels, because that, first of all, is too many pixels. And second of all, there's not too much, let's say, information in a single pixel. So what we do is we have this image tokenizer and detokenizer. This is a VQGAN that is powered by a vision transformer. So essentially, this is a model that takes this image and ships it through a bunch of layers. And at the end, so let's say the image at the beginning has a bunch of rows, a bunch of columns with its pixels. This goes through a series of maybe downscalings and so on.
No, actually, it's because it's a vision transformer. It probably even tokenizes, like it patches the image at the very beginning. So these would be image patches. Then these are transformed by a transformer to a latent space. Maybe they are compressed. And then you get tokens. So at the end, you can take these things right here, or the things that correspond to them in the latent representation. You can take those as image tokens, and you can unroll essentially this image and then feed it into this model. Hey, just a short interjection here from Yannic from the future. The idea, I forgot... the idea behind the whole setup here, behind the whole VQGAN, is obviously that these things here are tokens, which means that they come from a set vocabulary. So the way you train a VQGAN isn't just to give you this latent representation of, like, token-like things; you also quantize them. So there is also a vocabulary somewhere where you have a defined set of tokens. I believe in their case, they have like 8,000 tokens or so. And your image tokens must be of these 8,000. So the image has a bunch of tokens, but they all must be one of the things in the vocabulary here. Now, the vocabulary is also learned. There are some techniques by which to learn the vocabulary. But this quantization is actually what then enables you to treat it essentially as a sequence of language tokens, which also come from a vocabulary. All right. Back to Yannic in the past. The image tokenizer is trained, as it says here, as a VQGAN, which means that you encode and then you decode again and you try to get out the same image. And at the end, this representation here in the middle is really valuable because it's a tokenized representation of an image. So you put that into the transformer right here. And this is, as we said, an autoregressive model. So it gets as an input, obviously, the sequence so far, and it tries to predict the next image token, but it also gets as an input the text. So this is the prompt that the user puts in. So the prompt is encoded in a transformer encoder and is then fed in as a side input, as a target for attention. So whenever in the layer here you have queries, keys and values, I'm going to guess the query can also look at the transformer encoder; the query can also look at the keys right here. So over here, you'd only have keys and values. If you don't know what all of this means, I have a video on Attention Is All You Need where you can learn how attention mechanisms work. So essentially, the way this is trained is the following. You attach a sentence here, or a description of an image, and you attach an image right here. The image is then patched. It is fed through the VQGAN encoder. Its latent representation is obtained. That latent representation is put here. And then you essentially train a decoder language model that has cross attention into the text representation of the prompt. So you simply train this thing right here like you would train a GPT model or any other model. And this thing right here is trained, as I said, as an image reconstruction model. And this thing right here is trained, I guess, jointly with this. I actually don't know; this might not be true, but I think it is. I think it is trained jointly. So that's the model. As I said, it's very basic. I wish I could tell you something more interesting right here, but I can't. It's a standard, you know, bunch of transformers in sequence.
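To make the quantization idea from the interjection above concrete, here is a minimal numpy sketch of how continuous patch latents get snapped to a discrete codebook. Everything in it is illustrative: the codebook size of 8192 only roughly matches the "like 8,000 tokens" mentioned above, the embedding dimension is invented, and in a real VQGAN the codebook and encoder are learned jointly (typically with a straight-through gradient), not drawn at random.

```python
import numpy as np

# Assumed sizes, for illustration only.
CODEBOOK_SIZE = 8192  # roughly the ~8k-entry vocabulary mentioned above
EMBED_DIM = 64

rng = np.random.default_rng(0)
codebook = rng.normal(size=(CODEBOOK_SIZE, EMBED_DIM))  # learned in practice

def quantize(latents: np.ndarray) -> np.ndarray:
    """Map each continuous patch latent to the index of its nearest
    codebook vector (squared L2), giving discrete 'image tokens'."""
    # |x - c|^2 = |x|^2 - 2 x.c + |c|^2, computed without a huge broadcast
    d = ((latents ** 2).sum(axis=1, keepdims=True)
         - 2.0 * latents @ codebook.T
         + (codebook ** 2).sum(axis=1))
    return d.argmin(axis=1)

def dequantize(token_ids: np.ndarray) -> np.ndarray:
    """Inverse lookup: token indices back to codebook vectors, which the
    VQGAN decoder would then turn back into pixels."""
    return codebook[token_ids]

# An image encoded by the ViT encoder into a 16x16 grid of patch latents:
latents = rng.normal(size=(16 * 16, EMBED_DIM))
tokens = quantize(latents)      # 256 integers in [0, CODEBOOK_SIZE)
recovered = dequantize(tokens)  # what the decoder side would see
print(tokens[:8], recovered.shape)
```

The payoff is exactly what the interjection says: once every patch is an integer from a fixed vocabulary, the image really is a token sequence, and standard language model machinery applies to it.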
Essentially, every single component right here is a transformer. And because every single thing is a transformer, you can scale this thing by a lot. By the way, I'm not going to go into the architectural details quite as much. But they do also train an upsampler. So they have images of resolution 256 by 256. Ultimately, they do train an upsampler as well. So here, this is the super resolution upsampler, where they can go from their pipeline, which does 256 by 256, to a 1024 by 1024 picture, essentially. But this is just upsampling, right? So there is, I mean, technically no extra information right here. This doesn't get to look at the prompt or anything like this. It simply gets to look at this image and then make a four times larger image out of that. So where did we leave off? Oh, yeah, I also wanted to say, if you now want to get an image out of this thing, so not training, but inference: what you do is you attach only the prompt right here. You encode the prompt, you put the start of sentence token right here. You let the model generate token one. Then you put that here: token two. Then you put that here: token three, and so on. You let the model generate the image tokens here. You take those image tokens, you arrange them into the latent representation of the VQGAN, and you use the decoder right here in order to generate the final image. So that's the whole flow. And then you put it through the super resolution if you want that. Here you can see the basic architectural layouts. So the smallest model has 350 million parameters. You can see it has 12 encoder and 12 decoder layers. It's a pretty standard transformer architectural layout right here. They go through a 750 million parameter model, 3 billion. And the last one here has 20 billion parameters. So that's a decently sized model. It's not as large as the large language models. And they do use things like sparse attention and things like this. But it is, you know, it's pretty large, I would say. You could not run that at home very easily. So where does that get us? They have a big description right here of how they solve this architecturally, how they shard the model, how they use parallelism, which is very interesting. I'm just not an expert at it. So if you're interested, I'll leave you to read this part. I found at least the drawings pretty cool. So apparently the signal is routed, you know, like so and so, in like a snake type of arrangement, so that you can pipeline, so that one thing is always essentially busy as you send data to the next thing and so on. But as I said, I'm not the expert in this, and I'd rather get to the other things, which are the data sets that they use. So they have three main data sets right here. One is MS-COCO. Now, MS-COCO, as they show right here for the image on the right hand side, simply says a bowl of broccoli and apples with a utensil. So it just kind of is a high level description of what's in the image. Like a simple image caption, right, for this image right here. Whereas for the Localized Narratives data set, you can see that its description is way longer. It's more linguistically prosaic, but it is also much more descriptive of the actual image.
Like, so the top is if you want to tell someone what's in an image, and the bottom is more like if you want to really paint the picture, like, no pun intended, or if you want to describe the picture to someone so that they could maybe recreate it in some way. And it turns out that we are now at the point with these image generation models where they are so good that we need data sets like the bottom one to really push them to their limits. And not only that, but the authors here find that there are even problems with that, because these image data sets, they're always created in a way that an image is given and then the humans are asked to write a description, which is really good, because then you have image and description together. However, the authors here note that this prevents, for example, fantasy pictures: like we saw before, the raccoon in cubism doesn't exist, so it can't be in any data set; Anubis in a leather jacket doesn't exist, so it can't be in any data set. So while we rely on generalization during training for the model to learn these things, we actually need data sets like that to evaluate whether they can really do these things, right? Otherwise, we're left with sort of subjective evaluation. So they come up with their own data set, which is called PartiPrompts. That's actually also the thing they release, as far as I understand. And obviously, as with all of the recent works on big models, this thing isn't released. There's no code. There's no... I mean, the code would be trivial. There's no weights. There's no training recipe. And some of the data sets are proprietary, if I understand correctly. So the paper is more open about what they do, but still, there is no way of accessing this. So, PartiPrompts. This is a data set that essentially only consists of prompts. So there are no images in this data set. And I believe the only way you can really assess things is you can let the model generate stuff and then you can let humans rate it. That's essentially it. The PartiPrompts are pretty interesting, because they create these prompts by letting the prompt engineers sort of... they choose, for example, a challenge. So the challenge might be perspective, right? Which could be, you know, I need a prompt that asks for some object in some specific perspective that is unusual. Or, yeah, quantity. Like, I need a prompt that asks for a given number of things, because we know that these models are not super good at counting, right? I mean, we also thought the models aren't super good at spelling. And now it turns out, well, if we just make them bigger, they are. So, you know, I'm fairly confident they're going to be good at counting in a short while. That's the challenge. There are also, if I recall correctly, in this upper table right here, categories. So there are categories, animals; there are categories, illustrations; and so on. So you can see this is a diverse set of category challenge combinations, and they make a bunch of prompts for each one. I think they have about 1600 prompts in total in this PartiPrompts eval set, which is a pretty neat thing to have, even if it comes without images. So now they train the thing with their whole architectural shebang, with the parallelism and the pipelining and the yada, yada, yada, on TPU v4, I think. So this is a huge operation. So what does that give us? I want to just jump over the evals here on the metrics, because, yes, yes, yes, they're very good, very good.
They're also very good as rated by humans. Very good. What's interesting is they have, for example, a retrieval baseline, which simply retrieves images from the training data set. And obviously, on image text match, the Parti model wins, because you can actually create an image and not retrieve one. But even in image realism, you can see the retrieval is only slightly higher in realism, right? Every single image that the retrieval baseline retrieves is real. And still the humans rate the realism of Parti almost the same, which quite speaks for the model. The loss curves are also pretty interesting, especially interesting that the 20 billion model here, it takes quite some time to come down here, right? It kind of gets surpassed by the three billion model initially and then overtakes it, which maybe means that we haven't exactly found the right training recipes yet for these largest of models. So this now is the cool part, where they put the models next to one another. So this is the same prompt with all of these different models. And you can just see where scale gets you. This is a portrait photo of a kangaroo wearing an orange hoodie and blue sunglasses standing on the grass in front of the Sydney Opera House, holding a sign on the chest that says welcome friends. And you can see, these things right here, this and this, they may be like DALL-E Mini kind of style pictures. And they are also at that scale. All right. And then we go to the three B model. And this is something that would be familiar maybe from something like DALL-E, maybe between DALL-E and DALL-E 2, right? These things, you can see, they're bad at spelling. But as soon as you go bigger, all of a sudden: welcome friends. Bada boom. There it is. Not bad at spelling anymore. All you need is scale. That's crazy. The sign: very deep learning. Look, as the model learns to spell, initially it can only do Russian or whatever. And... it would actually be funny if that was like actual Russian and it said very deep learning. Can you imagine how crazy that would be? In any case, and also the Grand Canyon, right? So there's kind of structure here and so on. But this: very, very deep learning. Perfect. A blue Porsche parked in front of a yellow brick wall. You can see it doesn't always work. But it works better and better and better with scale. Crazy. And here, this is maybe... like, is this a direct shot at Gary Marcus? Because the challenge is, like, an astronaut riding a horse. So, astronaut riding a horse in the forest; even the three billion model... Oh, no, it's going to be a horse riding an astronaut, which is going to come up later. And I promise it's going to be funny. But yeah, an astronaut riding a horse in the water, in front of them water lilies, and so on. A map of the United States made out of sushi. So as you can see, these results are fairly insane. Infinity, the back of a violin, four cats surrounding a dog. So now they're really testing these individual categories. Infinity is an abstract concept. Back of violin is perspective. Four cats surrounding a dog is this quantity metric. You can see there are four cats, right? So, yeah, I'm pretty confident that with scale, these types of problems are going to be solved. A squirrel gives an apple to a bird. Yeah, so what's interesting is they have this narrative of what they call growing a cherry tree.
So obviously, these samples here are cherry picked, which means that they take out whatever they think are good samples to present in the paper. However, they detail fairly extensively how they arrive at this thing. So what they do is they don't just come up with these long prompts by themselves. Well, these aren't long, okay, but, you know, these long prompts with Anubis in a leather jacket in front of the Los Angeles skyline, they don't just come up with them on the spot. They have a process of coming up with them, and the process is detailed here. So, for example, they have this idea of combining, like, a sloth with a van. Right. So they start by just exploring the model and entering things like a smiling sloth, like, what comes out? Right. And a van parked on grass. There are always good images and bad images that turn out, and they sort of learn how they have to tweak the prompt to get what they want. Once they're happy, they go on. So they modify the prompt a bit. So there is the smiling sloth wearing a leather jacket, a cowboy hat and a kilt, or wearing a bow tie and holding a quarterstaff. So they kind of explore, they go more and more, as you can see; as you go down this tree, this cherry tree, as they call it, they go down and down. They detail... well, sometimes there are problems. This one, I believe, has two arms on this side, and so on. But still, they refine and refine and refine. They finally try to combine them. Right. Yeah. Here is a combination. They refine again. They try to combine the two prompts again. And at the end, they get to something that they might be happy with. For example, the thing here on the left, like this one right here. But I found this pretty interesting, like, this process of arriving at these things. So you can't just enter any old long sentence and expect the model to do well. But what will often work better, at least as they describe it, is to go through this process right here, which also means that full artistic freedom is still a bit away. So it is almost like, yes, you are guiding the model with your inputs, but also the model is kind of guiding you by what it does well and what it doesn't do well, if you go via this process. And if you don't go via this process, then I guess you can expect that it might not work as well. So they also have some big failure cases, which is pretty cool. For example, failure cases like color bleeding, where you describe the color of one of the things in the image and sort of the other things take on that color. There are also counting failures and so on, localization failures. For example, here the prompt is... oh yeah, the Great Pyramid of Giza situated in front of Mount Everest. The bottom two pictures should be that. You can see this. Okay, I mean, this isn't too bad. But this here is just like the pyramid with sort of a Mount Everest cover, right? You can see these models, if they can't fulfill the prompt directly, they'll kind of mix; they'll just try to get it done somehow and get it really close in text embedding space. That's exactly what you can see right here. There are a bunch of examples. And this one, I told you, it's the horse riding on an astronaut. So they have to actually specify the horse is sitting on an astronaut, because riding just indicates too much that the horse is on the bottom. But I just found the horse riding on the astronaut to be absolutely hilarious, especially this one.
Yeah, but all in all, I guess what I wanted to say is that this is complaining on a very, very high level, right? The paper itself is like moving the goal posts already by sort of criticizing itself for: oh, well, I specified, like, nine apples in a perfect arrangement, or, right, ten red apples, and it's only eight red apples. Like, what a loser model. Look at that. I mean, it is crazy good how these models are, and the failure cases here are, you know, yes, they're failure cases. But if you told me three, four years ago that this is the type of error that we're now solving, I don't think I would have said, yeah, I believe that. I would have guessed we're still at the point where, you know, we have mode collapses, we can't create most of the text stuff, we have artifacts and all kinds of things. And I think, yeah, it's kind of mind blowing how fast the progress here is. Obviously, half a year ago or so, yeah, I would have expected something like this, but I believe a lot of people must be very surprised, including me. Yeah, like spelling mistakes, like complaining that, you know, sometimes text is still not spelled right, you know, even though, right, DALL-E couldn't do it at all. And now this thing is doing it almost perfectly, as you can see right here. Combining abstract concepts: look at the thing on top. It's insane. Or here, like, oh, this leg is behind the race car. Come on. This is better than, I guess, anyone had expected. So, yeah, I don't want to waste your time too much more. I just thought this was absolutely cool. And I'm very excited to see where this is going next. Of course, huge bummer that we don't get access to this. I hope this finds its way into some products that we can use. As you know, I'm all for these companies making money with their inventions. I mean, I think it's cool that they are inventing, and, you know, if they want to make some cash off of it, good for them. But I do hope that we actually get to use it. And it's going to be a fun future where, for every presentation or anything, if you need, like, an illustration, you just type it. You don't go to the Internet to search for an appropriate stock photo. You just type it. It's so cool. Or you want to change something in a picture, you just erase it. You just say, whatever, here, change that part to something else. So cool. No Photoshop skills anymore. No drawing skills anymore. Just you and your mind and your creativity. All right. That was it. As I said, the paper presenting this new system is fairly simple. All it does is scale a bunch of transformers in sequence. Essentially, it presented an evaluation benchmark, these PartiPrompts, and it presented their model, which is ridiculously insane. That was it for me. Let me know what you think and I'll see you around. Bye bye.
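To close the loop on the inference walkthrough from earlier (attach only the prompt, autoregressively sample image tokens one at a time, then decode them with the VQGAN detokenizer), here is a tiny pseudo-implementation of that sampling procedure. This is a sketch under heavy assumptions: all function bodies are random or constant stand-ins for the trained networks, and the names, shapes, vocabulary size, and greedy sampling are mine, not from the (unreleased) code.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8192       # assumed image-token vocabulary size (see the sketch above)
SEQ_LEN = 16 * 16  # assumed number of image tokens for one 256x256 image

def encode_prompt(prompt: str) -> np.ndarray:
    """Stand-in for the transformer text encoder: prompt -> text features."""
    return rng.normal(size=(len(prompt.split()), 64))

def decoder_step(image_tokens: list, text_features: np.ndarray) -> np.ndarray:
    """Stand-in for the decoder with cross-attention into the text
    features; returns next-token logits over the image-token vocabulary."""
    return rng.normal(size=(VOCAB,))

def detokenize(image_tokens: list) -> np.ndarray:
    """Stand-in for the VQGAN detokenizer: token grid -> 256x256 RGB image."""
    return np.zeros((256, 256, 3))

def generate(prompt: str) -> np.ndarray:
    text_features = encode_prompt(prompt)   # encoded once, reused every step
    tokens: list = []                       # just the start position at first
    for _ in range(SEQ_LEN):
        logits = decoder_step(tokens, text_features)
        tokens.append(int(logits.argmax()))  # greedy pick, for simplicity
    return detokenize(tokens)                # then optionally 4x upsampling

img = generate("a smiling sloth wearing a leather jacket and a cowboy hat")
print(img.shape)  # (256, 256, 3)
```

The structural point is that the text encoder runs once per prompt, while the decoder runs once per image token, which is the main reason autoregressive generation at this resolution is comparatively slow.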
[ { "start": 0, "end": 7, "text": " Not a day goes by in AI research in which we don't get a new image generation model these days." }, { "start": 7, "end": 13, "text": " So take a look at the top row right here and listen to the prompt that generated them." }, { "start": 13, "end": 18, "text": " Oil on canvas painting of a blue night sky with roiling energy." }, { "start": 18, "end": 22, "text": " A fuzzy and bright yellow crescent moon shining at the top." }, { "start": 22, "end": 29, "text": " Below the exploding yellow stars and radiating swirls of blue, a distant village sits quietly on the right." }, { "start": 29, "end": 37, "text": " Connecting earth and sky is a flame-like cypress tree with curling and swaying branches on the left." }, { "start": 37, "end": 42, "text": " A church spire rises as a beacon over rolling blue hills." }, { "start": 42, "end": 48, "text": " That is a 67 word description of Starry Night by Vincent van Gogh." }, { "start": 48, "end": 52, "text": " And it is also the prompt that generated the top row of images." }, { "start": 52, "end": 66, "text": " And the paper does this to show that image generation models, specifically this one, they have become super duper capable of incorporating not only wild concepts," }, { "start": 66, "end": 73, "text": " as you can see here, co-locating the Eiffel Tower with the Sydney skyline and fireworks and whatnot," }, { "start": 73, "end": 80, "text": " but also, you know, minute details about things in the image and where things are and how things look." }, { "start": 80, "end": 94, "text": " So we've gone from essentially conditional GANs where we could create one of 10 classes to something where we can input like a little essay about what we want to see and get it out." }, { "start": 94, "end": 100, "text": " So this is by a group of researchers out of Google Research." }, { "start": 100, "end": 107, "text": " And they are a parallel work to the Imogen model that you might have seen." }, { "start": 107, "end": 114, "text": " So this model or the paper is called Scaling Autoregressive Models for Content-Rich Text to Image Generation." }, { "start": 114, "end": 121, "text": " But the model is called, let me grab if I can, let me grab a pen." }, { "start": 121, "end": 126, "text": " The model is called PARTI." }, { "start": 126, "end": 129, "text": " And I have no clue how to pronounce this." }, { "start": 129, "end": 133, "text": " This could be party." }, { "start": 133, "end": 146, "text": " Maybe the pronunciation is on the art or on the part because it's pathways like it's, or partai or I have no idea." }, { "start": 146, "end": 148, "text": " Let's call it PARTI." }, { "start": 148, "end": 153, "text": " And PARTI is a model that generates images from text as we have so many models." }, { "start": 153, "end": 160, "text": " However, it doesn't do this in the same style as like Imogen, which is a diffusion model." }, { "start": 160, "end": 163, "text": " It is an autoregressive model." }, { "start": 163, "end": 166, "text": " So here you can see a bunch of other outputs like this." }, { "start": 166, "end": 167, "text": " This is insane." }, { "start": 167, "end": 169, "text": " Look at the left side right here." }, { "start": 169, "end": 175, "text": " A photo of a frog reading the newspaper named Toaday." }, { "start": 175, "end": 177, "text": " The newspaper is named Toaday." }, { "start": 177, "end": 180, "text": " Like how crazy is that?" }, { "start": 180, "end": 183, "text": " That in itself is pretty funny." 
}, { "start": 183, "end": 191, "text": " But we know that these image to sorry, these text to image models are pretty bad at spelling stuff in images." }, { "start": 191, "end": 195, "text": " Well, not this model, as you can see right here, it gets it completely right." }, { "start": 195, "end": 199, "text": " It doesn't always get it right, but it gets it right often enough." }, { "start": 199, "end": 212, "text": " Or this one, portrait of a statue of the Egyptian god Anubis wearing aviator goggles like another connoisseur of fine eyewear." }, { "start": 212, "end": 214, "text": " White t-shirt and a leather jacket." }, { "start": 214, "end": 217, "text": " The city of Los Angeles is in the background." }, { "start": 217, "end": 219, "text": " High res DSLR photograph." }, { "start": 219, "end": 224, "text": " That's literally that's the academic version of the Unreal Engine trick right here." }, { "start": 224, "end": 227, "text": " And you can see the images spot on." }, { "start": 227, "end": 240, "text": " So this requires a lot of knowledge, not only of, you know, what a DSLR photograph is, but also how the skyline of Los Angeles looks, how the Egyptian got an Anubis looks right." }, { "start": 240, "end": 251, "text": " And the composition of things together like these, this god was never in a leather jacket depicted, I guess, maybe on the internet you'll find anything." }, { "start": 251, "end": 254, "text": " But you can see a bunch of more examples right here." }, { "start": 254, "end": 258, "text": " I specifically love the thing on the left side here." }, { "start": 258, "end": 261, "text": " You can see that they generated images." }, { "start": 261, "end": 273, "text": " So the prompt is three quarters front view of a XYZ coming around a curve in a mountain road looking over a green valley on a cloudy day." }, { "start": 273, "end": 276, "text": " So X here is any of the colors blue, red and yellow." }, { "start": 276, "end": 280, "text": " Y is any of the numbers." }, { "start": 280, "end": 284, "text": " 1977, 1997 and 2017." }, { "start": 284, "end": 288, "text": " And Z is any of these car types." }, { "start": 288, "end": 296, "text": " And now look that the model can essentially track the historical evolution of these cars." }, { "start": 296, "end": 304, "text": " So not only does it know what a Porsche is, it also knows how a Porsche in 77 looked like." }, { "start": 304, "end": 309, "text": " Maybe it's not exactly the correct year, but this is pretty crazy." }, { "start": 309, "end": 311, "text": " You can see a bunch more examples right here." }, { "start": 311, "end": 313, "text": " They do a lot of examples with animals." }, { "start": 313, "end": 319, "text": " I specifically like the raccoon here in the style of cubism." }, { "start": 319, "end": 324, "text": " So this is going to be very, very powerful technology." }, { "start": 324, "end": 338, "text": " We can immediately see that, you know, the quality of these models gets fast, gets quickly, sorry, gets well, gets better so quickly that in the foreseeable future," }, { "start": 338, "end": 343, "text": " we're going to have super powerful tools to just create and edit images from text." }, { "start": 343, "end": 348, "text": " Look at the left side here, a giant cobra snake made from salad." }, { "start": 348, "end": 356, "text": " You know, I'm sure they even say these are cherry picked, but still this is insane." 
}, { "start": 356, "end": 367, "text": " Now, I would love to tell you that behind all of this cool development is a really cool idea like is a smart architecture and something like this." }, { "start": 367, "end": 369, "text": " But I'm afraid it is not." }, { "start": 369, "end": 373, "text": " It is simply scale and not simply scale." }, { "start": 373, "end": 377, "text": " I mean, you have to have the sort of correct base architecture." }, { "start": 377, "end": 386, "text": " There is nothing like particularly there's no cool invention in architecture or a neat trick involved or anything like this." }, { "start": 386, "end": 395, "text": " It's really just plug basic things together, make them really big, train them for long on a lot of data and you'll get quality." }, { "start": 395, "end": 401, "text": " So this is the model overview right here, the overview of this party or part time model." }, { "start": 401, "end": 409, "text": " This is, as I already said, in contrast to image and it is an auto regressive model, so not a diffusion model." }, { "start": 409, "end": 416, "text": " What happens is that on this side here, you have this VQ GAN image encoder and decoder." }, { "start": 416, "end": 422, "text": " Well, they don't call them encoder and decoder, they call them tokenizer and de tokenizer." }, { "start": 422, "end": 431, "text": " So if you are not aware, auto regressive models, they work on tokens." }, { "start": 431, "end": 438, "text": " Now, tokens in usually in natural language processing are words or part of words." }, { "start": 438, "end": 443, "text": " So these would be tokens, token one, token two, and so on until token N." }, { "start": 443, "end": 448, "text": " And then what you would try to do is you would try always to predict the next token." }, { "start": 448, "end": 450, "text": " That's what makes it auto regressive." }, { "start": 450, "end": 455, "text": " You feed in parts of a token sequence, like parts of a sentence, you try to predict the next one." }, { "start": 455, "end": 459, "text": " That's exactly what you see right here in the architecture." }, { "start": 459, "end": 465, "text": " So you pass in the start of sentence token, you try to predict the first token, then you pass in the first token." }, { "start": 465, "end": 469, "text": " And then from these two, you try to predict the second token." }, { "start": 469, "end": 474, "text": " And then you put that here from these three, you try to predict the third token and so on." }, { "start": 474, "end": 476, "text": " That's the auto regressivity." }, { "start": 476, "end": 483, "text": " In text, that works well. However, in images, it's not quite obvious how to do that." }, { "start": 483, "end": 490, "text": " That's why you first need to get from the image space to the token space." }, { "start": 490, "end": 496, "text": " So we need a way for any given image that we get out a sequence of tokens." }, { "start": 496, "end": 500, "text": " And it can't be the pixels themselves." }, { "start": 500, "end": 510, "text": " We would like to have tokens that are kind of latent and have sort of a bit of meaning, not just individual pixels," }, { "start": 510, "end": 512, "text": " because that, first of all, is too many pixels." }, { "start": 512, "end": 521, "text": " And second of all, there's not too much, let's say, information in the single pixel." }, { "start": 521, "end": 524, "text": " So what we do is we have these image tokenizer and detokenizer." 
}, { "start": 524, "end": 530, "text": " This is a VQGAN that is powered by a vision transformer." }, { "start": 530, "end": 535, "text": " So essentially, this is a model that takes this image, it ships it through a bunch of layers." }, { "start": 535, "end": 542, "text": " And at the end, so let's say the image at the beginning has a bunch of rows, a bunch of columns with its pixels." }, { "start": 542, "end": 547, "text": " This goes through a series of maybe downscalings and so on." }, { "start": 547, "end": 549, "text": " No, actually, it's because it's a vision transformer." }, { "start": 549, "end": 555, "text": " It probably even tokenizes, like it patches the image at the very beginning." }, { "start": 555, "end": 557, "text": " So these would be image patches." }, { "start": 557, "end": 561, "text": " Then these are transformed by a transformer to a latent space." }, { "start": 561, "end": 565, "text": " Maybe they are compressed." }, { "start": 565, "end": 569, "text": " And then you get tokens." }, { "start": 569, "end": 577, "text": " So at the end, you can take these things right here or the things that correspond to them in the latent representation." }, { "start": 577, "end": 585, "text": " You can take those as image tokens and you can unroll essentially this image and then feed it into this model." }, { "start": 585, "end": 589, "text": " Hey, just a short interjection here from Janek from the future." }, { "start": 589, "end": 600, "text": " The idea, I forgot, the idea behind the whole setup here is behind the whole VQGAN is obviously that these things here are tokens," }, { "start": 600, "end": 603, "text": " which means that they come from a set vocabulary." }, { "start": 603, "end": 613, "text": " So the way you train a VQGAN isn't just to give you this latent representation of like token like things, but then you also quantize them." }, { "start": 613, "end": 621, "text": " So there is also a vocabulary somewhere where you have a set defined set of tokens." }, { "start": 621, "end": 626, "text": " I believe in their case, they have like 8,000 tokens or so." }, { "start": 626, "end": 633, "text": " And your image, your image tokens must be of these 8,000." }, { "start": 633, "end": 640, "text": " So the image has a bunch of tokens, but they all must be one of the things in the vocabulary here." }, { "start": 640, "end": 642, "text": " Now, the vocabulary is also learned." }, { "start": 642, "end": 645, "text": " There are some techniques by which to learn the vocabulary." }, { "start": 645, "end": 656, "text": " But this quantization is actually what then enables you to treat essentially to treat it as a sequence of language tokens, which also come from a vocabulary." }, { "start": 656, "end": 657, "text": " All right." }, { "start": 657, "end": 659, "text": " Back to Janek in the past." }, { "start": 659, "end": 671, "text": " The image tokenizer is trained as an as it says here as a VQGAN, which means that you encode and then you decode again and you try to get out the same image." }, { "start": 671, "end": 678, "text": " And at the end, this representation here in the middle is really valuable because it's a tokenized representation of an image." }, { "start": 678, "end": 684, "text": " So you put that into the transformer right here." }, { "start": 684, "end": 687, "text": " And this is, as we said, an autoregressive model." 
}, { "start": 687, "end": 697, "text": " So it gets as an input, obviously, the sequence so far, it tries to predict the next image token, but also gets as an input, the text." }, { "start": 697, "end": 701, "text": " So this is the prompt that the user puts in." }, { "start": 701, "end": 712, "text": " So the prompt is encoded in a transformer encoder and is then fed in as a side input as a target for attention." }, { "start": 712, "end": 723, "text": " So whenever in the layer here you have queries, keys and values, I'm going to guess the query can also look at the transformer encoder." }, { "start": 723, "end": 725, "text": " The query can also look at the keys right here." }, { "start": 725, "end": 730, "text": " So over here, you'd only have keys and values." }, { "start": 730, "end": 740, "text": " If you don't know what the attend, what this all of this means, I have a video on attention is all you need where you can learn how attention mechanisms work." }, { "start": 740, "end": 744, "text": " So essentially, the way this is trained is the following." }, { "start": 744, "end": 750, "text": " You attach a sentence here or a description of an image and you attach an image right here." }, { "start": 750, "end": 752, "text": " The image is then patched." }, { "start": 752, "end": 758, "text": " It is fed through the VQGAN encoder." }, { "start": 758, "end": 760, "text": " Its latent representation is obtained." }, { "start": 760, "end": 764, "text": " That latent representation is put here." }, { "start": 764, "end": 777, "text": " And then you essentially train a decoder language model that has cross attention into the text representation of the prompt." }, { "start": 777, "end": 784, "text": " So you simply train this thing right here like you would train a GPT model or any other model." }, { "start": 784, "end": 790, "text": " And this thing right here is trained, as I said, as an imagery construction model." }, { "start": 790, "end": 794, "text": " And this thing right here is trained, I guess, jointly with this." }, { "start": 794, "end": 795, "text": " Actually don't know." }, { "start": 795, "end": 799, "text": " This could this could not be true, but I think it is true." }, { "start": 799, "end": 801, "text": " I think it is trained jointly." }, { "start": 801, "end": 805, "text": " So that's the model, as I said, is very basic." }, { "start": 805, "end": 811, "text": " I wish I could tell you something more interesting right here, but I can't." }, { "start": 811, "end": 815, "text": " It's a standard, you know, bunch of transformers in sequence." }, { "start": 815, "end": 819, "text": " Essentially, every single component right here is a transformer." }, { "start": 819, "end": 826, "text": " And because every single thing is a transformer, you can scale this thing by a lot." }, { "start": 826, "end": 834, "text": " By the way, here you can see a bunch of the I'm not going to go into the architectural details." }, { "start": 834, "end": 837, "text": " Quite quite as much." }, { "start": 837, "end": 840, "text": " But they do also train an up sampler." }, { "start": 840, "end": 844, "text": " So they have images of resolution 256 by 256." }, { "start": 844, "end": 863, "text": " Ultimately, they do train an up sampler as well, where so here this is the up sampler super resolution up sampler where they can go from their pipeline, which does 256 by 256 to a 1024 by 1024." }, { "start": 863, "end": 865, "text": " Picture essentially." 
}, { "start": 865, "end": 867, "text": " But this is just up sampling." }, { "start": 867, "end": 868, "text": " Right." }, { "start": 868, "end": 872, "text": " So there is, I mean, technically no extra information right here." }, { "start": 872, "end": 876, "text": " This doesn't get to look at the prompt or anything like this." }, { "start": 876, "end": 882, "text": " It simply gets to look at this image and then make a four times larger image out of that." }, { "start": 882, "end": 885, "text": " So where did we leave off?" }, { "start": 885, "end": 891, "text": " Oh, yeah, I also wanted to say if you now want to get an image out of this thing, so not training, but inference." }, { "start": 891, "end": 896, "text": " What you do is you attach only the prompt right here." }, { "start": 896, "end": 901, "text": " You encode the prompt, you put the start of sentence token right here." }, { "start": 901, "end": 903, "text": " You let the model generate one." }, { "start": 903, "end": 905, "text": " Then you put that here, too." }, { "start": 905, "end": 908, "text": " Then you put that here, three and so on." }, { "start": 908, "end": 911, "text": " You let the model generate the image tokens here." }, { "start": 911, "end": 918, "text": " You take those image tokens, you feed, you arrange it into the latent representation of the VQ again." }, { "start": 918, "end": 923, "text": " And you use the decoder right here in order to generate the final image." }, { "start": 923, "end": 926, "text": " So that's the whole flow." }, { "start": 926, "end": 930, "text": " And then you put it through the super resolution if you want that." }, { "start": 930, "end": 934, "text": " Here you can see the basics, the basic architectural layouts." }, { "start": 934, "end": 938, "text": " So there is the smallest model has 350 million parameter." }, { "start": 938, "end": 942, "text": " You can see it has 12 encoder and 12 decoder layer." }, { "start": 942, "end": 946, "text": " It's pretty standard transformer scaling laws right here." }, { "start": 946, "end": 951, "text": " I mean, scaling laws, pretty standard transformer architectural laws." }, { "start": 951, "end": 956, "text": " They go through a 750 million parameter model, 3 billion." }, { "start": 956, "end": 961, "text": " And the last one here has 20 billion parameters." }, { "start": 961, "end": 963, "text": " So that's a decently sized model." }, { "start": 963, "end": 966, "text": " It's not as large as the large language models." }, { "start": 966, "end": 972, "text": " And they do use things like sparse con attention and things like this." }, { "start": 972, "end": 976, "text": " But it is, you know, it's pretty large, I would say." }, { "start": 976, "end": 980, "text": " You could not run that at home very easily." }, { "start": 980, "end": 983, "text": " So where does that get us?" }, { "start": 983, "end": 992, "text": " They have a big description right here how they solve this architecturally, how they short the model, how they use parallelism, which is very interesting." }, { "start": 992, "end": 995, "text": " I'm just not an expert at it." }, { "start": 995, "end": 999, "text": " So if you're interested, I'll leave you to read this part." }, { "start": 999, "end": 1003, "text": " I found the at least the drawings are pretty cool." }, { "start": 1003, "end": 1013, "text": " So apparently this the signal is routed like, you know, like so, like so and so." 
}, { "start": 1013, "end": 1027, "text": " So like in like a snake type of arrangement so that always you can pipeline so that always one thing is essentially busy as you send data to the next thing and so on." }, { "start": 1027, "end": 1037, "text": " But as I said, I'm not the expert in this and I'd rather want to get to the other things, which are the data sets that they use." }, { "start": 1037, "end": 1040, "text": " So they have three data sets, three main data sets right here." }, { "start": 1040, "end": 1042, "text": " One is Emma's Coco." }, { "start": 1042, "end": 1050, "text": " Now, Emma's Coco, as they show right here for the image on the right hand side, it simply says a bowl of broccoli and apples with a utensil." }, { "start": 1050, "end": 1054, "text": " So it just kind of is a high level description of what's in the image." }, { "start": 1054, "end": 1059, "text": " Like an image, simple image caption right for this image right here." }, { "start": 1059, "end": 1067, "text": " Whereas the localized narratives data set, you can see that its description is way longer." }, { "start": 1067, "end": 1076, "text": " It's more linguistically prosaic, but it is also much more descriptive of the actual image." }, { "start": 1076, "end": 1086, "text": " Like so the top is if you want to tell someone what's in an image and the bottom is more like if you want to like really paint the picture, like no pun intended." }, { "start": 1086, "end": 1094, "text": " Or if you want to describe the picture to someone so that they could maybe recreate it in some way." }, { "start": 1094, "end": 1106, "text": " And it turns out that we are now at the point with these image generation models where they are so good that we need data sets like the bottom one to really push them to their limits." }, { "start": 1106, "end": 1118, "text": " And not only that, but the authors here find that there are even problems with that because these image data sets, they're always created in a way that an image is given and then the humans are asked to write a description," }, { "start": 1118, "end": 1123, "text": " which is really good because then you have image and description together." }, { "start": 1123, "end": 1136, "text": " However, the authors here know that this prevents, for example, fantasy pictures like we saw before the raccoon and cubism that it doesn't exist." }, { "start": 1136, "end": 1141, "text": " So it can't be in any data set or anubis in a leather jacket doesn't exist." }, { "start": 1141, "end": 1143, "text": " So it can't be in any data set." }, { "start": 1143, "end": 1157, "text": " So while we rely on generalization during training for the model to learn these things, we actually need data sets like that to evaluate whether they can really do these things." }, { "start": 1157, "end": 1161, "text": " Right. Otherwise, we're left with sort of subjective evaluation." }, { "start": 1161, "end": 1167, "text": " So they come up with their own data set, which is called party prompts." }, { "start": 1167, "end": 1180, "text": " That's actually also the thing they release as far as I understand. And obviously, as all of the recent works in big models, this thing isn't released." }, { "start": 1180, "end": 1185, "text": " There's no code. There's no I mean, the code would be trivial. There's no weights." }, { "start": 1185, "end": 1187, "text": " There's no training recipe." }, { "start": 1187, "end": 1193, "text": " There's no some of the data sets are proprietary, if I understand correctly." 
}, { "start": 1193, "end": 1199, "text": " So the paper is more open about what they do, but still that there is no way of accessing this." }, { "start": 1199, "end": 1204, "text": " So party prompts. This is a data set that essentially only consists of prompts." }, { "start": 1204, "end": 1207, "text": " So there's no images in this data set." }, { "start": 1207, "end": 1217, "text": " And I believe the only way you can really assess thing is you can let the model generate stuff and then you can let humans rate it." }, { "start": 1217, "end": 1219, "text": " That's essentially it." }, { "start": 1219, "end": 1232, "text": " The party prompts. It is pretty interesting because they create these prompts by letting the prompt engineers sort of they choose, for example, a challenge." }, { "start": 1232, "end": 1236, "text": " So the challenge might be perspective. Right." }, { "start": 1236, "end": 1247, "text": " Which could be, you know, I need a prompt that asks for some object in some in some specific perspective that is unusual." }, { "start": 1247, "end": 1250, "text": " Or, yeah, quantity." }, { "start": 1250, "end": 1260, "text": " Like I need a prompt that a that asks for a given number of things because we know that these models, they're not super good at counting." }, { "start": 1260, "end": 1265, "text": " Right. I mean, we also thought the models aren't super good at spelling." }, { "start": 1265, "end": 1268, "text": " And now it turns out, well, if we just make them bigger, they are." }, { "start": 1268, "end": 1275, "text": " So, you know, I'm fairly confident they're going to be good at counting in short while." }, { "start": 1275, "end": 1278, "text": " That's the challenge." }, { "start": 1278, "end": 1284, "text": " There's also, if I recall correctly, this is this upper table right here, like categories." }, { "start": 1284, "end": 1289, "text": " So there are categories, animals, there are categories, illustrations and so on." }, { "start": 1289, "end": 1297, "text": " So you can see this is a diverse set of category challenge combinations and they make a bunch of prompts for each one." }, { "start": 1297, "end": 1304, "text": " I think they have about 1600 prompts in total in this party prompt eval set, which is a pretty neat thing to have." }, { "start": 1304, "end": 1307, "text": " Even if it comes without images." }, { "start": 1307, "end": 1320, "text": " So now they train the thing with their whole architectural shebangs with the parallelism and the pipelining and the yada, yada, yada on TPU v4, I think." }, { "start": 1320, "end": 1323, "text": " So this is a huge operation. So what does that give us?" }, { "start": 1323, "end": 1331, "text": " I want to just jump the evals here on the metrics because yes, yes, yes, they're very good, very good." }, { "start": 1331, "end": 1343, "text": " They're also very good as rated by humans, humans, very good, which is what's interesting is they have, for example, a retrieval baseline, which simply retrieves images from the training data set." }, { "start": 1343, "end": 1353, "text": " And even if the if the obviously image text match, the party model wins because you can actually create an image and not retrieve one." }, { "start": 1353, "end": 1361, "text": " But even in image realism, you can see the retrieval is only slightly higher in realism, right?" }, { "start": 1361, "end": 1366, "text": " Every single image is real that the retrieval retrieves." 
}, { "start": 1366, "end": 1375, "text": " And still the humans rate the realism of party almost the same, which is quite speaking for the model." }, { "start": 1375, "end": 1385, "text": " The loss curves are also pretty interesting, especially interesting that the 20 billion model here, it takes quite a time to come down here." }, { "start": 1385, "end": 1401, "text": " Right. It kind of has to get surpassed by the three billion model initially and then overtakes it, which maybe means that we haven't exactly found the right training recipes yet for these largest of models." }, { "start": 1401, "end": 1410, "text": " So this now is the cool part where they put the model, the models next to one another." }, { "start": 1410, "end": 1415, "text": " So this is the same prompt with all of these different models." }, { "start": 1415, "end": 1418, "text": " And you can just see where scale gets you." }, { "start": 1418, "end": 1430, "text": " This is a portrait photo of a kangaroo wearing an orange hoodie and blue sunglasses standing on the grass in front of the Sydney Opera House, holding a sign on the chest that says welcome friends." }, { "start": 1430, "end": 1440, "text": " And you can see my this these these things right here, this and this there may be like Dolly Mini kind of style pictures." }, { "start": 1440, "end": 1442, "text": " And there are also that scale." }, { "start": 1442, "end": 1445, "text": " All right. And then we go to the three B model." }, { "start": 1445, "end": 1455, "text": " And this is something that would be familiar maybe from something like Dolly or Dolly, maybe between Dolly and Dolly to write these things." }, { "start": 1455, "end": 1461, "text": " You can see they're bad at spelling. But as soon as you go bigger, all of a sudden, welcome friends." }, { "start": 1461, "end": 1465, "text": " But a boom. There it is. Not bad at spelling anymore." }, { "start": 1465, "end": 1470, "text": " All you need to scale. That's crazy. The sign very deep learning." }, { "start": 1470, "end": 1477, "text": " Look, as the model learns to spell, initially, it can only do Russian or whatever." }, { "start": 1477, "end": 1486, "text": " And and just eventually it would actually be funny if that was like actual Russian and it said very deep learning." }, { "start": 1486, "end": 1489, "text": " Can you imagine how crazy that would be?" }, { "start": 1489, "end": 1493, "text": " In any case, and also the Grand Canyon, right?" }, { "start": 1493, "end": 1498, "text": " So there's kind of structure here and so on. But this very, very deep learning." }, { "start": 1498, "end": 1501, "text": " Perfect." }, { "start": 1501, "end": 1509, "text": " A blue Porsche parked in front of a yellow brick wall. You can see it doesn't always work." }, { "start": 1509, "end": 1515, "text": " But it works better and better and better with scale." }, { "start": 1515, "end": 1522, "text": " Crazy. And here this is like maybe like is this a direct shot at Gary Marcus?" }, { "start": 1522, "end": 1526, "text": " Because the challenge is like an an astronaut riding a horse." }, { "start": 1526, "end": 1531, "text": " So astronaut riding a horse in the forest, even the three billion model." }, { "start": 1531, "end": 1536, "text": " Oh, no, it's going to be a horse riding an astronaut, which is going to come up later." }, { "start": 1536, "end": 1539, "text": " And I promise it's going to be funny." 
}, { "start": 1539, "end": 1546, "text": " But yeah, an astronaut riding a horse in the water in front of them, water lilies and so on." }, { "start": 1546, "end": 1550, "text": " A map of the United States made out of sushi." }, { "start": 1550, "end": 1559, "text": " So as you can see, these these results are fairly insane. Infinity, the back of a violin, four cats surrounding a dog." }, { "start": 1559, "end": 1564, "text": " So now they're really testing these individual categories. Infinity is an abstract concept." }, { "start": 1564, "end": 1569, "text": " Back of violin is perspective. Four cats surrounding a dog is this quantity metric." }, { "start": 1569, "end": 1572, "text": " You can you can see there are four cats, right?" }, { "start": 1572, "end": 1578, "text": " So, yeah, I'm pretty confident that with with scale, these types of problems are going to be solved." }, { "start": 1578, "end": 1581, "text": " Scroll gives an apple to a bird." }, { "start": 1583, "end": 1592, "text": " Yeah, so what's interesting is they have this narrative of what they call growing a cherry tree." }, { "start": 1592, "end": 1601, "text": " So obviously, these samples here are cherry picked, which means that they take out whatever they think are good samples to present in the paper." }, { "start": 1601, "end": 1608, "text": " However, they detail fairly extensively how they arrive at this thing." }, { "start": 1608, "end": 1614, "text": " So what they do is they don't just come up with these long prompts by themselves." }, { "start": 1614, "end": 1616, "text": " Well, these aren't long. OK." }, { "start": 1616, "end": 1625, "text": " But, you know, these long prompts with Anubis in front in a leather jacket in front of Los Angeles skyline, they don't just come up with them on the spot." }, { "start": 1625, "end": 1632, "text": " They have a process of coming up with them and the process is detailed here." }, { "start": 1632, "end": 1639, "text": " So, for example, they have this idea of combining like a sloth with a van." }, { "start": 1639, "end": 1647, "text": " Right. So they start by just exploring the model and entering things like a smiling sloth, like what comes out." }, { "start": 1647, "end": 1650, "text": " Right. And a van parked on grass." }, { "start": 1650, "end": 1659, "text": " There are always good images and bad images that turn out and they sort of learn how to have to tweak the prompt to get what they want." }, { "start": 1659, "end": 1661, "text": " Once they're happy, they go on." }, { "start": 1661, "end": 1664, "text": " So they modify the prompt a bit." }, { "start": 1664, "end": 1673, "text": " So there is the smiling sloth wearing a leather jacket, a cowboy hat and a kilt or wearing a bow tie and holding a quarterstaff." }, { "start": 1673, "end": 1684, "text": " So they kind of explore, they go more and more, as you can see, as you go down this tree, this cherry tree, as they call it, they go down and down." }, { "start": 1684, "end": 1685, "text": " They detail well." }, { "start": 1685, "end": 1687, "text": " Sometimes there's problems." }, { "start": 1687, "end": 1692, "text": " This one, I believe, has two arms on this side and so on." }, { "start": 1692, "end": 1696, "text": " So, but still they refine and refine and refine." }, { "start": 1696, "end": 1698, "text": " They finally try to combine them." }, { "start": 1698, "end": 1700, "text": " Right. Yeah." }, { "start": 1700, "end": 1701, "text": " Here is a combination." 
}, { "start": 1701, "end": 1703, "text": " They refine again." }, { "start": 1703, "end": 1706, "text": " They try to combine the two prompts again." }, { "start": 1706, "end": 1711, "text": " And at the end, they get to something that they might be happy with." }, { "start": 1711, "end": 1716, "text": " For example, the thing here on the left, like this one right here." }, { "start": 1716, "end": 1722, "text": " But I found this pretty interesting, like this process of arriving at these things." }, { "start": 1722, "end": 1745, "text": " So you can't just enter any old long sentence and expect the model to do well. But what turns, what might, what will work often better, at least as they describe it, is to go through this process right here, which also means that full artistic freedom is a bit away." }, { "start": 1745, "end": 1758, "text": " So it is almost like, yes, you are guiding the model with your inputs, but also the model is kind of guiding you by what it does well and what it doesn't do well if you go via this process." }, { "start": 1758, "end": 1769, "text": " And if you don't go via this process, then I guess you can expect that you can expect that it might not work as well." }, { "start": 1769, "end": 1788, "text": " So they also have some big failure cases, which is pretty cool. For example, the failure cases like color bleeding, where you describe the color of one of the things in the image and sort of the other take on that color." }, { "start": 1788, "end": 1793, "text": " There's also counting failures and so on, localization failures." }, { "start": 1793, "end": 1801, "text": " For example, here the prompt is, the prompt is," }, { "start": 1801, "end": 1814, "text": " Oh yeah, the Great Pyramid of Giza situated in front of Mount Everest. That's the bottom two pictures should be that. You can see this. Okay, I mean, this isn't, this isn't too bad." }, { "start": 1814, "end": 1828, "text": " But this here is just like the pyramid with sort of a Mount Everest cover. Right. You can see these models, they sometimes if they can't fulfill the prompt directly, they'll kind of mix." }, { "start": 1828, "end": 1839, "text": " They'll just try to get it done somehow and get it really close in text embedding space. That's exactly what you can see right here." }, { "start": 1839, "end": 1847, "text": " There's a bunch, a bunch of examples. And this one, I told you, it's the horse riding on an astronaut." }, { "start": 1847, "end": 1859, "text": " So they have to actually specify the horse is sitting on an astronaut because the riding is just, is just riding indicates too much that the horse is on the bottom." }, { "start": 1859, "end": 1869, "text": " But I just found the horse riding on the astronaut to be absolutely hilarious, especially this one." }, { "start": 1869, "end": 1880, "text": " Yeah, but all in all, I guess what I wanted to say is that this is complaining on a very, very high level. Right." }, { "start": 1880, "end": 1894, "text": " The paper itself is like moving the goal posts already by sort of criticizing itself for, oh, well, I specified like nine apples in a perfect arrangement." }, { "start": 1894, "end": 1904, "text": " I don't have or write 10 red apples and it's only eight red apples. Like what a loser model. Look at that." }, { "start": 1904, "end": 1917, "text": " I mean, this is it is crazy good how these models are and the failure cases here are, you know, yes, they're failure cases." 
}, { "start": 1917, "end": 1932, "text": " But I don't think that if you told me three, four years ago that this is the type of error that we're at solving that I would have said, yeah, I believe that I would have way guessed." }, { "start": 1932, "end": 1939, "text": " We're still at the point where, you know, we we have mode collapses. We can't create most of the text stuff." }, { "start": 1939, "end": 1950, "text": " We have artifacts and all kinds of things. And I think this is yeah, it's it's kind of mind blowing how fast the progress here is." }, { "start": 1950, "end": 1964, "text": " Obviously, half a year ago or so. Yeah, I would have expected something like this, but I believe, yeah, a lot of people must be very surprised and including me." }, { "start": 1964, "end": 1971, "text": " Yeah, like spelling mistakes, like complaining that, you know, sometimes text is still not spelled right." }, { "start": 1971, "end": 1983, "text": " Like, you know, even though, right, Dali couldn't do it at all. And now this thing is doing it almost perfectly, as you can see right here, combining abstract concepts." }, { "start": 1983, "end": 1991, "text": " Look at the thing on top. It's it's insane. Or here like, oh, this leg is in the behind the race car." }, { "start": 1991, "end": 1998, "text": " Come on. This is better than I guess anyone had expected." }, { "start": 1998, "end": 2006, "text": " So, yeah, I don't want to waste your time too much more. I just thought this was absolutely cool." }, { "start": 2006, "end": 2015, "text": " And I'm very excited to see where this is going next. Of course, huge bummer that we don't get access to this." }, { "start": 2015, "end": 2026, "text": " I hope this finds its way into some products that we can use. As you know, I'm all for these companies making making money with their inventions." }, { "start": 2026, "end": 2035, "text": " I mean, I think it's cool that they are inventing and, you know, if they want to make some cash off of it, you know, good for them." }, { "start": 2035, "end": 2047, "text": " But I do hope that we actually get to use it. And I it's going to be a fun future where for every presentation or anything, if you need like an illustration, you just you just type it." }, { "start": 2047, "end": 2056, "text": " You don't go to the Internet to search an appropriate stock photo. You just type it. It's so cool. Or you want to change something in a picture." }, { "start": 2056, "end": 2062, "text": " You just erase it. You just say, whatever here, change that part to something else. So cool." }, { "start": 2062, "end": 2069, "text": " No Photoshop skills anymore. No drawing skills anymore. Just you and your mind and your creativity." }, { "start": 2069, "end": 2080, "text": " All right. That was it. As I said, the paper presented in this new system is fairly simple. All it does is scale a bunch of transformers in sequence." }, { "start": 2080, "end": 2092, "text": " Essentially, I presented a evaluation benchmark, these party prompts, and it presented their model, which is ridiculously insane." }, { "start": 2092, "end": 2099, "text": " That was it for me. Let me know what you think and I'll see you around. Bye bye." } ]
H6Qiegq_36c
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Processing Megapixel Images with Deep Attention-Sampling Models
[ "Science & Technology" ]
[ "machine learning", "deep learning", "research", "attention", "attention sampling", "attention model", "attention distribution", "megapixel images", "large images", "artificial intelligence", "megapixel mnist", "street sign dataset", "monte carlo", "speed", "memory", "cnn", "convolutional neural networks", "limited resources", "ai", "image recognition", "image classifier" ]
Current CNNs have to downsample large images before processing them, which can lose a lot of detail information. This paper proposes attention sampling, which learns to selectively process parts of any large image in full resolution, while discarding uninteresting bits. This leads to enormous gains in speed and memory consumption. https://arxiv.org/abs/1905.03711 Abstract: Existing deep architectures cannot operate on very large signals such as megapixel images due to computational and memory constraints. To tackle this limitation, we propose a fully differentiable end-to-end trainable model that samples and processes only a fraction of the full resolution input image. The locations to process are sampled from an attention distribution computed from a low resolution view of the input. We refer to our method as attention sampling and it can process images of several megapixels with a standard single GPU setup. We show that sampling from the attention distribution results in an unbiased estimator of the full model with minimal variance, and we derive an unbiased estimator of the gradient that we use to train our model end-to-end with a normal SGD procedure. This new method is evaluated on three classification tasks, where we show that it allows to reduce computation and memory footprint by an order of magnitude for the same accuracy as classical architectures. We also show the consistency of the sampling that indeed focuses on informative parts of the input images. Authors: Angelos Katharopoulos, François Fleuret
Hi there! Today we're looking at "Processing Megapixel Images with Deep Attention-Sampling Models" by Angelos Katharopoulos and François Fleuret. This is another paper whose talk I saw at ICML, and it's a pretty cool idea: it's pretty simple and apparently it works very well. So consider this image of a street situation and ask yourself: if a self-driving car sees this, what are the kinds of things it needs to be aware of? Of course it needs to be aware of the road, the cars and so on, but also of what's encircled in red here: the street sign. The street sign is especially important because there's a number on it, and you want to see what that number is, otherwise you won't be able to adjust your speed. Now, if this is a really large image, so if the camera is really good and the dimensions of the image are really large, then current machine learning methods have a problem, because they typically only go up to something like 200 by 200 pixels, like the current ImageNet models; some downsample, and so on. So if the image is much larger than that, what current machine learning models would do is simply downsample it, that is, compress its size. And by that, as you see here on the right, a patch that you could cut out of the original image and enlarge would look like this, whereas if you compress the whole image, the same patch now looks like this: blurred. So in the bottom half you'd be able to recognize the number, in the top half you wouldn't. A standard CNN might still be able to recognize the road and the cars at the lower resolution, but not the speed sign. What we want is a method that can selectively pay attention to the parts of the image it finds interesting, look at those parts in full detail, and basically decide to discard other parts completely, such as the sky here. This paper does exactly that, and does so in a very efficient manner. The basic premise is very simple; I'm going to show you on the same image. What you do first is actually compress the image, so this image becomes a smaller image: maybe it is 1000 by 2000 and you compress it down to 100 by 200. Still the same image, but compressed. Here's the road, here's a bunch of trees (I'm very good at drawing trees), here's the street sign, here is a car, here is another car, and there is the sky up here. Now, on this smaller version, you score every location (you could also subsample, but you want to score every single location) on how interesting it is. What they do is take this small image and put it through what they call an attention network, which is just a neural network, in their case a CNN, that outputs a score for each location, so for each of these blue locations. Let's call that score a_ij(x), the attention value at coordinates (i, j) of the image x; all of these blue things here are different (i, j) positions. And if you now normalize correctly, so if you normalize over all the a_ij, this gives you a distribution over the image. If we look at it in 1D, this gives you a distribution over the locations, not a continuous one, in this case a discrete one.
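A minimal sketch of this attention half of the pipeline, assuming a factor-8 downscaling and an arbitrary small CNN as the attention network: downsample, score every location, normalize into a distribution.

```python
import torch
import torch.nn.functional as F

def attention_distribution(attention_net, image, down=8):
    """Score every location of a low-resolution view of `image` and
    normalize the scores into a probability distribution over positions.

    image:         (1, 3, H, W) full-resolution input
    attention_net: any small CNN returning one score per location, (1, 1, h, w)
    """
    small = F.interpolate(image, scale_factor=1.0 / down,
                          mode="bilinear", align_corners=False)
    scores = attention_net(small)                 # a_ij before normalizing
    a = torch.softmax(scores.flatten(), dim=0)    # sums to 1 over all (i, j)
    return a.reshape(scores.shape[-2:])           # h x w attention map
```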
This distribution tells you how interesting each patch is. Once you have it, what you want to do is say which locations are the most interesting. So this one's pretty high and these here are very high, and those might correspond to particular locations in the image. Only at those locations do you extract patches, and only those patches are processed in full resolution. So say you have extracted four patches; each of them is individually run through a second neural network, another CNN, called f, the feature network. The feature network takes a patch and outputs a vector of features. Then your final output, which they call g in the paper, let's call it O here, is a sum over all the extracted patches: for patch number p at location (i_p, j_p), you sum the features f(patch_p), weighing each feature by how much attention it got at that location, so O = sum_p a_{i_p j_p} * f(patch_p). It looks more complicated than it is: you simply compute the features with the feature network only at the positions that the attention network says are interesting, and then you weigh those features by the attention they got in the attention distribution; that is the final output of the network. And it makes intuitive sense: one network decides what is interesting, the other network decides what to do with the interesting things in this image.
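And a sketch of the feature half: sample a few locations from the attention distribution, crop the corresponding full-resolution patches, and combine their features. This version samples i.i.d., in which case a plain average of the sampled features already estimates the attention-weighted sum over all locations (the paper additionally handles sampling without replacement); the patch size, scale factor, and border handling are simplifications.

```python
import torch

def sampled_features(image, a, feature_net, k=4, patch=100, down=8):
    """Monte Carlo version of O = sum_ij a_ij * f_ij using k sampled patches.

    image: (1, C, H, W) full-resolution input
    a:     (h, w) attention distribution over low-res locations (sums to 1)
    """
    h, w = a.shape
    flat = torch.multinomial(a.flatten(), k, replacement=True)
    feats = []
    for idx in flat:
        i, j = int(idx) // w, int(idx) % w
        top, left = i * down, j * down       # map low-res cell to full res
        crop = image[..., top:top + patch, left:left + patch]
        feats.append(feature_net(crop))      # f(patch_p), a feature vector
    # border handling is omitted for brevity; crops near the edge may be short
    return torch.stack(feats).mean(dim=0)    # average approximates the weighted sum
```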
And the cool thing about this is that you can decide how many of these patches you want to extract and at what resolution you want to process the image, and all of these are parameters that you set according to how much time and how much memory you have for your computation. So it's pretty modular: we can scale up, we can scale down. The other cool thing is the theoretical guarantees that they give. They prove that, if you sample patches this way and weigh things correctly (in particular with sampling without replacement), this is actually an unbiased estimator of the true neural network that you would get if you evaluated the full image, meaning every patch, in full resolution. So only taking the patches where the attention focuses gives an unbiased estimator, and not only that: it is in fact the estimator with the smallest variance, the minimum-variance estimator, and that's what they prove here. This is pretty interesting and works pretty well. They also show how to derive the gradient update when you train with this attention sampling. So you now train your machine learning system not on the whole image but only on a subset of the image patches, yet in expectation it behaves as if you had trained on the entire image. Pretty neat. Here they show how this compares to a full CNN, where the picture is simply downsampled and then classified, on what's called Megapixel MNIST. In Megapixel MNIST you have a large image and you put three copies of the same digit in there, for example 5, 5, 5 from the MNIST dataset; you also put in two other random digits, like a 2 and a 3, and a bunch of noise patches. The task is to recognize the dominant digit, in this case the five. If you give this to a regular CNN, you see it does about this well: this is the training loss, this is the test loss, and it takes this much time per epoch and this much time to evaluate. If you now use attention sampling, as I said, you can modulate how many patches you take, and as you take more patches you would expect it to take more time; that is exactly what happens. For example, down here in the test error: if you take five patches per image it takes very little time, but the error is already better than with the CNN, simply because you can pay much more attention to details. As you use more patches, your test error and training loss drop further, so more patches actually give you a better- and better-performing model; you sacrifice a bit of time, but it's never as slow as the full CNN, even though that CNN operates on the downsampled image. So that is very interesting and very cool: they beat the baseline not only in terms of error but also by a lot in terms of speed. It's also worth looking at what the model does as it learns. For a given image (it's always the same image from the dataset), they have marked where the three relevant digits are with red circles.
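The unbiasedness claim is easy to check numerically: drawing locations in proportion to the attention and averaging their features converges to the full attention-weighted sum. A tiny demo with scalar stand-in features:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                    # candidate locations
a = rng.random(n); a /= a.sum()            # attention distribution over them
f = rng.random(n)                          # per-location feature (scalar here)

exact = float((a * f).sum())               # evaluate everything: sum_i a_i f_i
samples = rng.choice(n, size=20000, p=a)   # draw locations i ~ a
estimate = float(f[samples].mean())        # Monte Carlo average

print(exact, estimate)                     # the two agree up to sampling noise
```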
Watching how this attention distribution evolves over the training of the model is pretty interesting. Yellow basically means high attention. At the beginning you have high attention everywhere in the image, and then as training goes on you see that it pays attention to all the locations where there is something in the image; this could be one of the three relevant digits, but also one of the distractors, like the false digits or the noise patches. As training continues, it really learns to only pay attention to the relevant digits and then classify those at full resolution. So this attention distribution really learns something very meaningful. They do more experiments on two datasets. One is a histopathology dataset, where the goal, I think, is to recognize epithelial cells, this type of cell. This here is the baseline and this here is the new method. The baseline does a similar thing, namely it processes the image in patches, but it processes every single patch, maybe in succession, whereas attention sampling only processes the patches that the attention distribution suggests. The other dataset is the street-sign dataset that you saw at the beginning. Again, I think this is the baseline and this is the attention sampling; both learn to pay attention to the street signs, but the attention sampling is much more efficient. Here you see the baseline performance; the attention-sampling performance is similar in terms of test error, but if you look at how much time and memory the baseline uses per sample and compare that to attention sampling, you see that they save at least an order of magnitude in time and memory. The same goes for the street-sign dataset: the test error is similar for attention sampling, but again time and memory are much, much lower. So attention sampling is faster and more memory-efficient than the baseline, and that makes it easy to process these megapixel images; they say you can process megapixel images on a single CPU or GPU. I really like this because it kind of brings this research back to, let's say, regular people, or universities that don't have as much money as large companies. All in all a very cool paper with very neat experiments. There's a lot in the appendix, check it out; they show their attention distributions on these images, and their theoretical analysis is pretty easy to follow if you want to check that out. And with that, thanks for listening, and bye bye!
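To tie the sketches above together, one training step could look roughly as follows. This uses a deterministic top-k variant so that gradients reach both networks through plain autograd; the paper instead samples patches and derives the proper unbiased gradient estimator, so treat this only as an illustration under those simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def training_step(image, label, attention_net, feature_net, classifier, opt,
                  k=4, patch=100, down=8):
    """One illustrative step: take the k highest-attention locations and
    weigh their features by the renormalized attention, so the loss depends
    differentiably on both the attention and the feature network.

    label: shape (1,) class-index tensor
    """
    small = F.interpolate(image, scale_factor=1.0 / down,
                          mode="bilinear", align_corners=False)
    scores = attention_net(small)                             # (1, 1, h, w)
    h, w = scores.shape[-2:]
    a = torch.softmax(scores.flatten(), dim=0)                # distribution
    vals, idx = a.topk(k)
    weights = vals / vals.sum()                               # renormalize over top-k
    feats = []
    for f_ in idx:
        i, j = int(f_) // w, int(f_) % w
        crop = image[..., i * down:i * down + patch, j * down:j * down + patch]
        feats.append(feature_net(crop))                       # feature vector per patch
    out = (weights.unsqueeze(1) * torch.stack(feats)).sum(0)  # weighted pooling
    loss = F.cross_entropy(classifier(out).unsqueeze(0), label)
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)
```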
[ { "start": 0, "end": 4.92, "text": " Hi there, today we're looking at processing megapixel images with deep" }, { "start": 4.92, "end": 12.72, "text": " attention sampling models by Angelos Kateropoulos and François Fleuret." }, { "start": 12.72, "end": 20.88, "text": " This is another paper that I saw the talk of at ICML and it's a pretty cool idea," }, { "start": 20.88, "end": 26.52, "text": " it's pretty simple and apparently it works very well. So consider the" }, { "start": 26.52, "end": 35.72, "text": " following image here of a street situation and ask yourself if a" }, { "start": 35.72, "end": 42.760000000000005, "text": " self-driving car sees this, what are the kind of things it needs to be aware of?" }, { "start": 42.760000000000005, "end": 48.28, "text": " So of course one of the things it needs to be aware of is like the road, the cars" }, { "start": 48.28, "end": 54.36, "text": " and so on but also what's encircled in red here, the street sign and the street" }, { "start": 54.36, "end": 59.88, "text": " sign especially is important because there's a number on it and you want to" }, { "start": 59.88, "end": 65.64, "text": " see what the number is otherwise you won't be able to adjust your speed. So if" }, { "start": 65.64, "end": 70.36, "text": " this is now a really large image, so if the camera is really good and the" }, { "start": 70.36, "end": 75.08, "text": " dimensions of this image are really large, then current machine learning" }, { "start": 75.08, "end": 81.88, "text": " methods have a problem because current machine learning methods kind of go up" }, { "start": 81.88, "end": 88.72, "text": " to maybe something like 200 by 200 pixels or the current image net models," }, { "start": 88.72, "end": 93.92, "text": " some down sample and so on. So if this is much larger than this, what current" }, { "start": 93.92, "end": 98.72, "text": " machine learning models would do is they would simply down sample, like compress" }, { "start": 98.72, "end": 105.46, "text": " the size, just compress it a bit and so on. And by that, as you see here on the" }, { "start": 105.46, "end": 110.32, "text": " right, if the original patch in the image you could cut it" }, { "start": 110.32, "end": 115.8, "text": " out and enlarge it, it would look like this. If you compress the whole image, the" }, { "start": 115.8, "end": 121.72, "text": " same patch would now look like this, blurred. So in the bottom half you'd be" }, { "start": 121.72, "end": 128, "text": " able to recognize the number, in the top half you wouldn't. So a standard CNN might" }, { "start": 128, "end": 132.16, "text": " be able to recognize the road and the car still at the lower resolution but" }, { "start": 132.16, "end": 138.35999999999999, "text": " not the speed sign. What we want is a method that can selectively pay" }, { "start": 138.36, "end": 145.04000000000002, "text": " attention to parts of the image that it finds interesting and then look at those" }, { "start": 145.04000000000002, "end": 150.60000000000002, "text": " parts in full detail while basically deciding to discard other parts" }, { "start": 150.60000000000002, "end": 158.04000000000002, "text": " completely such as the sky here. So this paper is one that does this and does so" }, { "start": 158.04000000000002, "end": 166.12, "text": " in a very efficient manner. So the basic premise is very simple. All right, I'm" }, { "start": 166.12, "end": 172.04, "text": " going to show you on this on the same image. 
So what you do is first you" }, { "start": 172.04, "end": 177.88, "text": " actually compress the image. So this image will become a smaller image, right?" }, { "start": 177.88, "end": 187.24, "text": " So here maybe this is 1000 by 2000, you compress it down to maybe 100 by 200." }, { "start": 187.24, "end": 191.68, "text": " Still the same image but compressed. Here's the road, here's a bunch of" }, { "start": 191.68, "end": 198.56, "text": " trees. I'm very good at drawing trees. And here's this street sign and here is a" }, { "start": 198.56, "end": 207.6, "text": " car and here is another car. All right, so and there is a sky up here. So now" }, { "start": 207.6, "end": 215.56, "text": " what you do is on this smaller version you classify every location. I guess" }, { "start": 215.56, "end": 220.44, "text": " you could classify, you could subsample but you want to classify every single" }, { "start": 220.44, "end": 229.4, "text": " location on it on how interesting is it. And what they do is they take this and" }, { "start": 229.4, "end": 234.44, "text": " just put it through what they call an attention network which is just this it" }, { "start": 234.44, "end": 242.16, "text": " just a neural network. In their case it's a CNN that for each location here for" }, { "start": 242.16, "end": 254.48, "text": " each blue location outputs a function a of a and let's call it a x y at" }, { "start": 254.48, "end": 264.8, "text": " coordinates x and y of this image x. Okay, this is stupid notation. That's a of x" }, { "start": 264.8, "end": 272.12, "text": " so the image is x at coordinates i, j. Right, so all of these blue things here" }, { "start": 272.12, "end": 279.40000000000003, "text": " are i's and j's. Different i's and j's. And then what does this gives you now if" }, { "start": 279.40000000000003, "end": 286.8, "text": " you normalize correctly, so if you normalize over all the a's and i, j, a, i, j. If you" }, { "start": 286.8, "end": 292.08000000000004, "text": " normalize this gives you a distribution over this image. So if we look at it in" }, { "start": 292.08, "end": 299.56, "text": " like 1D this gives you like a distribution not a continuous one in" }, { "start": 299.56, "end": 310.68, "text": " this case a discrete one. How interesting is each patch and at the end if you have" }, { "start": 310.68, "end": 315.71999999999997, "text": " this distribution, so let's finish here, what you want to do is you want to say" }, { "start": 315.71999999999997, "end": 320.68, "text": " which are the most interesting locations. So this one's pretty high and these are" }, { "start": 320.68, "end": 328.6, "text": " very high so that might correspond to over here that might correspond to some" }, { "start": 328.6, "end": 334.12, "text": " location. So this location is very high and these locations are very interesting" }, { "start": 334.12, "end": 341.92, "text": " and only in these locations you take them out and then only those you process" }, { "start": 341.92, "end": 347.16, "text": " in full resolution. So you might have extracted let's say four patches so now" }, { "start": 347.16, "end": 357.36, "text": " you have four of these patches and each of them individually you run through a" }, { "start": 357.36, "end": 364.24, "text": " second neural network which is called another CNN which is called F the" }, { "start": 364.24, "end": 370.56, "text": " feature network. So the feature network will take a patch and output a vector of" }, { "start": 370.56, "end": 379.8, "text": " features. 
So it will feed those in and output the vector of features and" }, { "start": 379.8, "end": 391.8, "text": " then what you do is you simply your final output which they call G, let me" }, { "start": 391.8, "end": 406.56, "text": " colorize this so G which is G is now the final output let's not call it G let's" }, { "start": 406.56, "end": 419.88, "text": " call it O. Output is you sum over all the patches you have extracted down here so" }, { "start": 419.88, "end": 432.04, "text": " the patch number P over all your patches and you sum these features F of patch P" }, { "start": 432.04, "end": 444.15999999999997, "text": " right and P might be at location IJ let's put IJ here so IJ in the extracted" }, { "start": 444.16, "end": 451.32000000000005, "text": " patches and you weigh each feature by how much attention it got at that" }, { "start": 451.32000000000005, "end": 457.56, "text": " location. So it looks more complicated than it is what you do is you" }, { "start": 457.56, "end": 463.56, "text": " simply determine these features by using this neural network only at the position" }, { "start": 463.56, "end": 467.64000000000004, "text": " where this neural network says are interesting then you get the features" }, { "start": 467.64, "end": 474.24, "text": " from the interesting positions and you basically just weigh them by how much" }, { "start": 474.24, "end": 479.36, "text": " attention they got in the attention distribution and that will be your final" }, { "start": 479.36, "end": 484.59999999999997, "text": " output of the network and it makes intuitive sense like one network decides" }, { "start": 484.59999999999997, "end": 489.84, "text": " what is interesting the other network decides what are we going to do with the" }, { "start": 489.84, "end": 497.52, "text": " interesting things in this image. 
And the cool thing about this is you" }, { "start": 497.52, "end": 503.35999999999996, "text": " can basically decide how many of these patches here how many you want to" }, { "start": 503.35999999999996, "end": 508.35999999999996, "text": " extract you can decide at what resolution you want to process this" }, { "start": 508.35999999999996, "end": 516.48, "text": " image and all of this are parameters that you set by how much time you have" }, { "start": 516.48, "end": 522, "text": " for computation and how much memory you have for your computation so that's" }, { "start": 522, "end": 526.64, "text": " pretty cool pretty module we can scale up we can scale down and the another cool" }, { "start": 526.64, "end": 531.52, "text": " thing is the theoretical guarantees that they give so basically here they prove" }, { "start": 531.52, "end": 540.52, "text": " that the way they do it especially by extracting the patch especially if they" }, { "start": 540.52, "end": 545.28, "text": " have an unbiased sorry especially have if they have sampling without replacement" }, { "start": 545.28, "end": 553.52, "text": " is that if they weigh the things correctly and if they do the things" }, { "start": 553.52, "end": 558.6, "text": " correctly they show that this is actually an unbiased estimator of the" }, { "start": 558.6, "end": 566.0799999999999, "text": " true neural network if you were to evaluate on the full image basically on" }, { "start": 566.0799999999999, "end": 575.36, "text": " each patch in full resolution so only taking the ones where the attention" }, { "start": 575.36, "end": 582.88, "text": " focuses is an unbiased estimator and not only is it an unbiased estimator it is" }, { "start": 582.88, "end": 587.52, "text": " in fact the estimator with the smallest variance and that's what they prove" }, { "start": 587.52, "end": 598.32, "text": " here so the minimum variance estimator and this is this is pretty pretty" }, { "start": 598.32, "end": 603.56, "text": " interesting pretty cool and works pretty well they also show how to derive the" }, { "start": 603.56, "end": 609.52, "text": " gradient update when you train with this attention sampling so now you train your" }, { "start": 609.52, "end": 614.28, "text": " neural you train your machine learning system not on the whole image but only" }, { "start": 614.28, "end": 621.4399999999999, "text": " on a subset of the image patches but it still behaves in expectation as if you" }, { "start": 621.4399999999999, "end": 626.8, "text": " were to train on the entire image so pretty neat so here they show how this" }, { "start": 626.8, "end": 635.64, "text": " compares to full CNN in this case we have the full CNN where the picture is" }, { "start": 635.64, "end": 641.6, "text": " simply down sampled and then classified and this is what's called megapixel" }, { "start": 641.6, "end": 647.04, "text": " amnest so in megapixel amnest you have a large image and you put three digits in" }, { "start": 647.04, "end": 652.3199999999999, "text": " there there are the same for example five five five from the amnest data set" }, { "start": 652.3199999999999, "end": 658.84, "text": " you put two random digits others like two three and you put also a bunch of" }, { "start": 658.84, "end": 665.6, "text": " noise noise patches somewhere so the task is to recognize which is the" }, { "start": 665.6, "end": 671.4, "text": " dominant digit here in this case it would be five right five five where was" }, { "start": 671.4, "end": 678.5600000000001, "text": " the 
other one five here so if you give this to a regular CNN you see it does" }, { "start": 678.5600000000001, "end": 683.84, "text": " about this well this is the training loss here training loss and this is the" }, { "start": 683.84, "end": 690.96, "text": " test loss and it takes this much time right time per epoch here and this much" }, { "start": 690.96, "end": 698.84, "text": " time to evaluate sorry if you now use this attention sampling and as I said" }, { "start": 698.84, "end": 702.64, "text": " you can actually modulate how many patches you want to take so as you go" }, { "start": 702.64, "end": 708.44, "text": " down you take more patches we would expect it to take more time this is" }, { "start": 708.44, "end": 712.48, "text": " exactly what happens you see for example down here in the test error if you take" }, { "start": 712.48, "end": 719.4, "text": " five patches per image it takes very little time but the error I mean the" }, { "start": 719.4, "end": 724.44, "text": " error is still better than the if you use the CNN simply because you can now" }, { "start": 724.44, "end": 732.28, "text": " pay attention to details much more as you use more patches your test error" }, { "start": 732.28, "end": 737.28, "text": " drops the also your training loss they drop so using more patches will be" }, { "start": 737.28, "end": 742.28, "text": " actually give you a better and better and better performing model but you" }, { "start": 742.28, "end": 749.3199999999999, "text": " sacrifice a little bit of time but still not never as as slow as with the full" }, { "start": 749.3199999999999, "end": 757.16, "text": " with that with the CNN so even though it's a down sampled CNN right so that" }, { "start": 757.16, "end": 762.64, "text": " is very interesting and very cool that not only do they beat the the baseline" }, { "start": 762.64, "end": 768.92, "text": " in terms of error but also a lot in terms of speed if you look at what the" }, { "start": 768.92, "end": 774.92, "text": " model does as it learns here you see for a given image this is always the same" }, { "start": 774.92, "end": 779.5999999999999, "text": " image from the data set at the beginning they have actually marked where the" }, { "start": 779.5999999999999, "end": 785.8399999999999, "text": " relevant the three relevant digits are in the picture with the red circle so if" }, { "start": 785.8399999999999, "end": 793.64, "text": " you look at how over the training of this model how this distribution evolves" }, { "start": 793.64, "end": 798.76, "text": " is pretty interesting yellow basically means high attention so at the beginning" }, { "start": 798.76, "end": 806.8, "text": " you have high attention everywhere in the image right and then as you go on and" }, { "start": 806.8, "end": 812.24, "text": " on and on you see for example here it pays attention to all the locations" }, { "start": 812.24, "end": 818.8, "text": " where basically where there is something in the image right this could be one of" }, { "start": 818.8, "end": 823, "text": " these three digits but it could also be one of the digits that it's trying to" }, { "start": 823, "end": 827.4399999999999, "text": " that is trying to distract the model like the false digits or the noise" }, { "start": 827.44, "end": 834.2, "text": " patches and as you go more and more and more it really learns to only pay" }, { "start": 834.2, "end": 839.6800000000001, "text": " attention to the relevant digits and then classify those at full resolution" }, { "start": 
839.6800000000001, "end": 845.2800000000001, "text": " so this really shows the this this kind of attention distribution learns" }, { "start": 845.2800000000001, "end": 855.08, "text": " something very meaningful they do more experiments on two data sets namely this" }, { "start": 855.08, "end": 861.76, "text": " is a histopathology data set right here where the goal is I think to recognize" }, { "start": 861.76, "end": 873.44, "text": " this epithelial cells this type of cell and you can see that this here is the" }, { "start": 873.44, "end": 882.6800000000001, "text": " baseline and this here is the new method and the baseline basically what it does" }, { "start": 882.68, "end": 887.4399999999999, "text": " is it does similar thing namely it processes the image in patches but it" }, { "start": 887.4399999999999, "end": 895.12, "text": " processes every single patch maybe in succession but it still processes every" }, { "start": 895.12, "end": 899.64, "text": " single patch where the attention sampling only processes the patches that" }, { "start": 899.64, "end": 906.88, "text": " the attention sampling distribution suggests and this other data set here" }, { "start": 906.88, "end": 912.5999999999999, "text": " is a street sign data set that you saw at the beginning right here and the" }, { "start": 912.6, "end": 920.6, "text": " the again I think this is the baseline and this is the attention sample so both" }, { "start": 920.6, "end": 925.44, "text": " learn to pay attention to the street signs but again the attention sampling" }, { "start": 925.44, "end": 933.52, "text": " much more efficient so here you see the baseline performance the attention" }, { "start": 933.52, "end": 939.6, "text": " sampling performance is similar in terms of test error but if you look at how" }, { "start": 939.6, "end": 945.6, "text": " much time the baseline uses per sample and how much memory and then compare" }, { "start": 945.6, "end": 951.48, "text": " this to the attention sampling you see that they save at least an order of" }, { "start": 951.48, "end": 956.6, "text": " magnitude in time and memory and the same thing goes for the street sign" }, { "start": 956.6, "end": 964.24, "text": " data set you see test error here and then test error is similar for the" }, { "start": 964.24, "end": 973.48, "text": " attention sampling but again time memory much much lower so the attention" }, { "start": 973.48, "end": 982, "text": " sampling is faster and is more memory efficient than the baseline and that" }, { "start": 982, "end": 988.6, "text": " makes it makes it easy to process these megapixel images even on here they say" }, { "start": 988.6, "end": 995.5600000000001, "text": " process megapixel images in a single CPU or GPU and that really I like this" }, { "start": 995.5600000000001, "end": 1001.44, "text": " because it kind of brings their research back to let's say regular people or" }, { "start": 1001.44, "end": 1009.8000000000001, "text": " maybe universities that don't have as much money as large companies and so all" }, { "start": 1009.8000000000001, "end": 1014.6800000000001, "text": " in all very cool paper very neat experiments to have a lot in the" }, { "start": 1014.68, "end": 1020, "text": " appendix check it out where they show their attention distribution in these" }, { "start": 1020, "end": 1025.32, "text": " images their theoretical analysis is pretty easy to follow if you want to" }, { "start": 1025.32, "end": 1045.12, "text": " check that out and with that thanks for listening and bye 
bye" } ]
X2k7n4FuI7c
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding&Generation
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "zeta alpha", "blip", "language vision pre training", "language vision pre-training", "deep learning pre-training", "clip pre-training", "blip pretraining", "parameter sharing", "sequence to sequence", "image captioning", "vqa", "visual question answering", "fine-tuning", "vit", "vision transformer", "salesforce" ]
#blip #review #ai Cross-modal pre-training has been all the rage lately in deep learning, especially training vision and language models together. However, there are a number of issues, such as low quality datasets that limit the performance of any model trained on it, and also the fact that pure contrastive pre-training cannot be easily fine-tuned for most downstream tasks. BLIP unifies different tasks and objectives in a single pre-training run and achieves a much more versatile model, which the paper immediately uses to create, filter, clean and thus bootstrap its own dataset to improve performance even more! Sponsor: Zeta Alpha https://zeta-alpha.com Use code YANNIC for 20% off! OUTLINE: 0:00 - Intro 0:50 - Sponsor: Zeta Alpha 3:40 - Paper Overview 6:40 - Vision-Language Pre-Training 11:15 - Contributions of the paper 14:30 - Model architecture: many parts for many tasks 19:50 - How data flows in the model 26:50 - Parameter sharing between the modules 29:45 - Captioning & Filtering bootstrapping 41:10 - Fine-tuning the model for downstream tasks Paper: https://arxiv.org/abs/2201.12086 Code: https://github.com/salesforce/BLIP Demo: https://huggingface.co/spaces/Salesforce/BLIP Abstract: Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released at this https URL. Authors: Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hey, y'all, this is a comprehensive paper review of the paper on BLIP. This is a model and a technique for bootstrapping one's own data set in vision-and-language pre-training, which is pretty cool. So the video is a comprehensive review: we'll dive into the paper, we'll see what the paper is about, and I'll explain to you what's in it. And by the end of the video, you should have a good understanding of what's in the paper. In the next video, which I'm going to release tomorrow, there's going to be an interview with the authors of the paper. So also be sure to check that out, because that answers a few very, very interesting questions that I had while reading the paper itself. So I wish you a lot of fun. Let me know what you think in the comments and I'll see you around. Bye bye. Hey there, this video is sponsored by Zeta Alpha, which is a new neural discovery and recommendation engine for papers. Yes, for scientific papers, for trends in research and code in AI. Their goal is to become your research assistant and streamline how you organize, share and stay up to date on the latest R&D. This is really cool because the flood of papers in machine learning has been simply overwhelming in recent months. Zeta Alpha uses neural embedding-based search and can give you the best recommendations of research that matches your interests and that you don't want to miss. And what better way than to just try it out? So first I start off searching for today's paper, which is the BLIP paper. And this is really cool, because not only do I get the paper, I also get the GitHub code implementation, and I can directly see the impact on social media that this paper has. This is much better than something like Google Scholar, which would just give me a few links to the paper itself. I can now save this paper under a tagging category that I'm just going to invent right now. And I can use Zeta Alpha to find similar research. Here I'm going to limit my search to the last three months, so I make sure that I don't miss anything that has recently been going on that I should know about when reviewing this paper. Now I also like a bunch of those other papers, so I'm going to save them as well to the same category. Once I have a bunch of papers in my category, I can again use Zeta Alpha's recommendation engine to give me more suggested papers to add to the same category, based on what I already have in there. And I can also share this entire category with my teammates, because everything Zeta Alpha does is not only for individuals, but also for teams. This is really powerful and can dramatically accelerate your discovery of new and relevant research. Now this doesn't only work for categories that you define. Once you interact with the search engine, Zeta Alpha is going to be able to give you a feed of recommendations from arXiv, from conferences, from blogs, from GitHub, and much more. This saves you a ton of time and lets you stay up to date with whatever is happening. If you're at all into ML research, this is hyper-relevant for you, and I definitely invite you to check it out. Now they do have a free tier, but I got you a great deal: if you go over there right now and use code Yannick, you'll get 20% off a personal assistant subscription. Again, go to Zeta-Alpha.com, use code Yannick for 20% off right now. Thanks again so much to Zeta Alpha for sponsoring today's video. And now let's get into it. See ya. Hello there.
Today we'll look at BLIP: Bootstrapping Language-Image Pre-Training for Unified Vision-Language Understanding and Generation, by Junnan Li, Dongxu Li, Caiming Xiong and Steven Hoi of Salesforce Research. So this paper proposes two things. One is a new architecture, and I want to say a new conglomeration of existing things: an arrangement of modules for multitask pre-training. This model will take in an image-text pair and perform multiple tasks on it. It has multiple losses and therefore ends up being able to do multiple things. Now that being said, this is a pre-training method. So the idea is that for any of these modules, you'll take them, you recompose them downstream and you fine-tune them on a task, although they do have some zero-shot results. So this is one thing. And this could be really cool if this alone turns out to be successful, because it paves the way to a future where we have much more dynamic compositions of models, and where we would pre-train these models with a lot of different tasks in one go, rather than pre-training them on just a single task like language modeling. The other thing is a bootstrapping method for the data. And these two things are not necessarily disconnected, although I do lament the fact that it's two things in one paper a little bit. But there's a bootstrapping method for this image-text data set that includes training captioners and filters, which means that there is a part that learns to synthetically generate data, and there is a part that learns to distinguish good from bad data. And that allows them to collect lots and lots of data from the internet and filter out badly, poorly labeled images, of which there exist a lot on the internet, and also allows them to augment the data set by labeling images themselves. So this is also really interesting, and it feeds really well back into their model, because their model is uniquely capable of doing this, being the multitask model that it is. So we're going to go through the architecture and through the data set bootstrapping method. And keep in mind that I think if this catches on, there could be recipes in here for future research that lead us to a much more dynamic world where we compose these modules, much like we compose different low-level modules in deep learning. We could compose these higher-level modules and losses and do lots more multitask pre-training, maybe even dynamically configured. But let's dive in. So vision-language pre-training, they say, has recently been a big hit. For example, think of something like CLIP, and that's not even the pre-training itself; there are lots of architectures that do vision-language pre-training, meaning they take pairs of images and text. So you'll have some sort of an image and you'll have some sort of text that goes with it, and you'll try to come up with a system that connects the two in any way. They say the existing methods have two major limitations. So first of all, what they call the model perspective: they say the existing methods are either encoder-based or an encoder-decoder architecture. So in an encoder-based setup, what you would do is you would take in both of these things, and you would try to come up with, probably, a number that represents how well they fit together. So are they good together or not? This is essentially the CLIP architecture.
So for encoder-based models, they criticize that they are less straightforward to directly transfer to text generation tasks. It's not simple to take CLIP and actually make it produce something. Remember, if you have to produce an actual image with CLIP, you need to do CLIP-guided diffusion or CLIP-guided GANs, VQGANs. So it's really cumbersome to make CLIP generate an image, and it's probably even more cumbersome to make it generate text, because it's not trained on that. So that's their criticism of these methods: it's not easy to make them do generation tasks. Whereas encoder-decoder models have not been successfully adopted for image-text retrieval tasks. So an encoder-decoder model is where you would take the image, probably, and then make it produce the text, and you train it as a language model to autoregressively produce the caption. And that's really neat for producing captions, but you cannot necessarily do this task up here very easily with such a model. You will be able to do some things, but they're not necessarily successful, because the task is really a different task. So both approaches for doing this currently are not ideal. The other thing is the data perspective. They criticize that these models are pre-trained on image-text pairs that are essentially scraped from the internet, so collected from the internet. And they say noisy web text is suboptimal for vision-language learning. We've known for a long time that there is a trade-off between scale of data and quality of data, and ideally you'd have both. However, if you scrape from the internet, so let's say you scrape websites and there is some text and there is an image somewhere, the image will have alt text, and that's what's usually used as the label in these systems. So if you don't know: in HTML, if you have an image tag, that's how the browser knows it's an image. You have the image tag, you have the source attribute, which is a URL, usually, that leads to the image, but then you also have an alt attribute. And it's really recommended that you put an alt property, to the point where frameworks and linters and so on will yell at you if you don't have it. So what does this do? This is specifically for visually impaired people, for screen readers, but also for bots to know what is in the image. So you put the description there. However, a lot of people don't do that. And I think it actually makes it worse that linters and so on almost require you to do it, because if you don't want to do it, you're just going to put some dumb stuff there, like "image", or people do lots of search engine optimization in there. Since the search engines don't usually look at the image itself, but at the alt text, people try to come up with buzzwordy things so that it's ranked high in search results. So, not necessarily the best quality data. And their bootstrapping method right here helps with getting higher-quality data out of the internet. So how do they do this? The first thing they propose is this model, the multimodal mixture of encoder-decoder (MED). They say it can operate either as a unimodal encoder, or an image-grounded text encoder, or an image-grounded text decoder. So yeah, we're going to look at these things. But when they say it can operate either as one or this or that, it's not like the exact same model can do all of this at once.
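To make the two interfaces in that critique concrete, here's a tiny toy sketch; every module, dimension and the GRU decoder below are made-up stand-ins for illustration, not anything from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for real backbones; all dimensions here are made up.
img_enc = nn.Linear(512, 256)   # stands in for a vision transformer
txt_enc = nn.Linear(512, 256)   # stands in for a unimodal text encoder

def encoder_based_score(img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
    # CLIP-style: encode both sides separately, output only a similarity.
    # Great for retrieval and matching, but there is no text-generation path.
    zi = F.normalize(img_enc(img_feat), dim=-1)
    zt = F.normalize(txt_enc(txt_feat), dim=-1)
    return (zi * zt).sum(dim=-1)  # cosine similarity per pair

# Encoder-decoder style: the image conditions an autoregressive language model.
# Great for captioning, but scoring a pair means running the generation pathway,
# which is awkward for retrieval.
decoder = nn.GRU(input_size=256, hidden_size=256, batch_first=True)
vocab_head = nn.Linear(256, 1000)  # toy vocabulary of 1000 tokens

def caption_logits(img_feat: torch.Tensor, token_embs: torch.Tensor) -> torch.Tensor:
    h0 = img_enc(img_feat).unsqueeze(0)   # image as the initial decoder state
    out, _ = decoder(token_embs, h0)      # autoregressive text pathway
    return vocab_head(out)                # next-token logits
```

The point is simply that the first interface only ever yields a score and the second only ever yields tokens, which is why neither transfers cleanly to the other family of tasks. Now, back to the MED model and its three modes.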
It's just that they put all of these models into one big model, and then they use the part of the model that does the particular thing. So it's not necessarily super-duper unified, is what I wanted to say. Yeah, they train the three sub-parts of their model with three objectives, which we're also going to look at. The second part is this captioning and filtering. This is what boosts the data set quality. They say they learn from noisy image-text pairs by cleaning them, by producing more and cleaning them. They train a captioner, whose goal is to produce synthetic captions given web images, and a filter to remove noisy captions from both the original web text and the synthetic text. So the captioner will get images and produce labels for these images, produce alt text. And then the filter goes over both the generated ones and the collected ones and just filters out everything that it deems to be of qualitatively low standard. Of course, this needs to be trained on a high-quality data set. But these sorts of bootstrapping methods, we've seen a number of times in the recent past that they actually work. In fact, this paper seems to be a good accumulation of recognitions and good practices over the last few years, and we're going to point those out as we go through their contributions. Here they say they show that the captioner and the filter work together to achieve substantial performance improvement, which, okay, I don't know what substantial means in these kinds of tasks, but it's an improvement. They achieve state-of-the-art performance in a wide range of vision-language tasks. And interestingly, and this might also be a property of synthetic data generation, they show that more diverse captions yield larger gains. This might also be a good lesson for people who want to go and apply these methods. Lastly, they say next to having state of the art on downstream fine-tuned tasks, they also achieve zero-shot performance when directly transferring their models to two video-language tasks. So they were never trained on video-language tasks, never pre-trained, never fine-tuned, yet still they have good zero-shot performance, which is okay: if you understand images, then there are going to be some video tasks that you're particularly good at. Right. So let's dive into the model. And I've already shown you a diagram of the model. They quickly go through this here. They have three parts. Actually, I want to say four parts to their model. Part one is a visual transformer, a ViT, as the image encoder. So again, they take an image and they take a piece of text and now they do stuff with it. And the first part is they encode the image using a visual transformer. That's all they do with the image: they encode it using a ViT. With the text, they do three different things. The first thing is they also just encode the text unimodally, so they put the text through an encoder. And with those two things already, they've essentially reproduced CLIP. Except they say it's the same as BERT. Yeah. So they've reproduced CLIP with those two things, because now, with this visual transformer and the unimodal encoder, they can set it up as a similarity metric.
So the unimodal encoder will give you some vector in an embedding space, the visual transformer will give you some vector in an embedding space, and you can set up a contrastive loss to check whether these two things go together and whether they are apart from, let's say, any other encoded image or text. You can do this via contrastive learning, you can do it via regularized methods. But essentially, this is what we've come to know as encoder-only models. The second thing they have is this image-grounded text encoder. So the image-grounded text encoder does almost the same thing as the unimodal text encoder. However, it doesn't encode the text separately. It jointly encodes the text while incorporating attention into the visual transformer. We're going to see how that goes in a second. But essentially, it produces a vector, let's say this one, and while producing that, on the path, as it produces that, it incorporates information from the visual transformer. So this here is the output of the visual transformer, and it will incorporate that at multiple layers here, via cross-attention, into the process. So this here is really a joint kind of encoding of the text given the image. That's why it's called an image-grounded text encoder. What this can do is you can build a classifier on top of this, like a binary classifier, because it is a representation of the text that already has the information of the image inside of it. So it's kind of a joint representation of the image and the text. So you can build a classifier, for example, for whether or not the two things go together, but you don't have to use a contrastive loss; you can in fact use a supervised loss and classify, and build a classifier. The third thing is this image-grounded text decoder. Now again, being image-grounded, that is a long... what is going on? Something's up here. There's an image-grounded text decoder. The image-grounded text decoder is much like the image-grounded text encoder, in that it incorporates self- and cross-attention. However, it's a text decoder. So what it will do is it will actually produce text. So it will autoregressively produce the text while incorporating, again, information via cross-attention from the visual representation. You can see that they have a separate section on the pre-training objectives. These just map to these three parts. So there's the image-text contrastive loss, which is the loss for the first part. There is the image-text matching loss, which is the loss for the second part. And again, this is just a binary classification task, where the model uses a linear-layer head, they call it an ITM, an image-text matching head, but it's a linear layer to predict whether an image-text pair is positive, which means matched, or negative, unmatched, given their multimodal feature. The special thing here is they do have a hard-negative mining strategy. So they go to the top part here, to the, no, sorry, to the disjoint encoding, to this part, and they look at which ones are the hard negatives, meaning negatives that have a high contrastive similarity, and they use those specifically to train this loss here. The last loss is a language modeling loss, which is obviously relevant for the third part. This is a cross-entropy loss; it maximizes the likelihood of the text in an autoregressive manner. If we put all of this together, we get this model right here.
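To make those three objectives concrete, here's a minimal sketch of how they could be combined; the shapes, the temperature and the equal weighting are my assumptions for illustration, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def blip_style_objectives(z_img, z_txt, itm_logits, itm_labels,
                          lm_logits, lm_targets, temp=0.07):
    """Combine the three pre-training losses described above.
    z_img, z_txt: L2-normalized (B, D) embeddings from the two unimodal paths."""
    # 1) Image-text contrastive (ITC): matched pairs lie on the diagonal of
    #    the batch similarity matrix, everything else is a negative.
    sim = z_img @ z_txt.t() / temp                          # (B, B)
    targets = torch.arange(sim.size(0), device=sim.device)
    itc = 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets))

    # 2) Image-text matching (ITM): binary classification on the joint,
    #    image-grounded text encoding: "do these two go together?"
    itm = F.cross_entropy(itm_logits, itm_labels)           # (N, 2) vs (N,)

    # 3) Language modeling (LM): autoregressive cross-entropy on the caption,
    #    conditioned on the image through cross-attention.
    lm = F.cross_entropy(lm_logits.flatten(0, 1), lm_targets.flatten())

    return itc + itm + lm
```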
Again, if we go through it, the input data are two things: the image down here, and the piece of text here. Again, we know these go together because we've scraped them from the web. So these two, we know they go together. This is not unsupervised training. This is essentially supervised learning for two things that we know go together. The first thing is we're going to encode the image through the image encoder. That's the image encoder; this is the image representation. This is just a ViT, a visual transformer. I don't think they freeze it, but they may start from a checkpoint. All of this is jointly trained. So all of these losses, as I understand them, are jointly trained. So then we have the vision representation. What we can do is we can put the text, first of all, through the text encoder. You can see we can append different tokens right here to let the encoder know what we're currently doing, because we also have some parameter sharing going on. So the text encoder gets the input text. It will also compute an encoding, and then we have this contrastive loss between the two encodings. They need to be close for pairs that we know go together, and they need to be far apart for other pairs. You can do something like in-batch negatives, or you can, as we said, mine hard negatives from this part. Well, that makes no sense. You can mine hard negatives for that part over here, given this part over here. Which makes me believe, okay, maybe I haven't read closely enough. Maybe they also just train one of the losses per batch, because they have to sample differently for the different things. It doesn't make too much of a difference whether they train it all really jointly, or always activate one of the three text pathways. This would be interesting to figure out. The second thing they do is they give the text to this image-grounded text encoder. Again, this gets the text and a little token to show what's going on. It will encode, and now you can see that it has this cross-attention module. And the cross-attention module, as it encodes, incorporates information that comes from all the way over here, from the image. So the image representation is part of the encoding here, which means this thing has information about both the text and the image. Now, of course, it's still not symmetric, right? The joint encoding is asymmetric, in the sense that it is the text that is encoded based on the image. And that allows them to only compute the image representation once. So they only need to do this pathway on the left here once, and then they can reuse that representation for all of the different text paths here. Yeah, you can see that on the left, this is the difference: on the left here, the cross-attention is skipped. We don't have cross-attention; it's just an encoding of the text itself. And here it's really a joint encoding, which means that this thing here contains information on both the image and the text, and we can perform any sort of task that we want with this joint encoding. In our case, we simply train it on a very similar objective as the contrastive loss, in that it's a binary classification: it needs to figure out whether or not the two things actually go together. The third thing, again almost the same, is this decoder, the text decoder: same input, except there's a little decode token.
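One concrete note on the hard-negative mining mentioned above; a minimal sketch, assuming the (B, B) similarity matrix from the contrastive head is available (the plain argmax is my simplification; sampling proportionally to the softmax of the similarities would also fit the description):

```python
import torch

def hardest_negative_indices(sim: torch.Tensor) -> torch.Tensor:
    """Given the (B, B) image-to-text similarity matrix from the contrastive
    head, pick, for each image, the non-matching text it is most confused by.
    Those pairs are then fed to the ITM head as negatives."""
    masked = sim.clone()
    masked.fill_diagonal_(float("-inf"))   # the true pair is not a negative
    return masked.argmax(dim=1)            # (B,) hardest text index per image

# usage sketch:
# sim = z_img @ z_txt.t()
# neg_idx = hardest_negative_indices(sim)  # build negative ITM pairs from these
```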
Back to the decoder: there is a difference in the attention. The other two modules have bidirectional self-attention, because they are encoders, so they get to use bidirectionality. Here we use causal self-attention, which essentially means that in the text, you only get to attend to things behind you. So if you produce a particular token right here, you only get to attend to tokens that are behind yourself. This is a bit of a hack, because otherwise we couldn't train these things with batches or in parallel. It is definitely possible to use bidirectional self-attention, as long as you mask whatever comes next. So you want to mask the future, but within the past you could totally use bidirectional self-attention. Again, this is just a hack to make training easier, but it's come to be a popular hack, so everyone's doing it. Again, you can see there's cross-attention coming from the image, and here you can really see that it's necessary. If I want to actually produce text, I need some sort of information about what I want to produce. So this language modeling loss here really needs the cross-attention, really needs the input from the image. So again, this comes from here, from the image representation. So there you have it. It's an unholy concoction of many different things in one, and this is all trained jointly. And yeah, I'm excited about this, because I think, not necessarily this particular arrangement... I have lots of stuff to criticize, or lots of choices here that are kind of arbitrary. Why this asymmetry, you know, where I have the image encoded once and I have cross-attention into all the text encoders? Why not the other way around? Why don't we do image generation tasks? Why don't we do any sort of masked modeling, like masked language modeling? This could even be on the image. There's lots of stuff, let's say, to criticize. But I think what this thing shows is that a good recipe for the future could be to combine lots of these different methods together, combine lots of them into one big thing, reusing parts intelligently and then training them jointly. We could even think of frameworks that do this automatically, or that allow you to really easily set this up with a few lines of code, and the framework would figure out by itself what it can compose and how it could reuse. What you can also see right here, I've overshadowed it a little bit with my drawing, is that there's color, and the color indicates shared parameters, which is also really interesting. So you can see that essentially the text encoders aren't three separate encoders; they largely share parameters. For example, the feed-forward parameters are shared. The cross-attention parameters are all shared, except of course they're not active in this encoder. The bidirectional self-attention parameters are shared. The causal self-attention, those ones are separate over here, but if we had some sort of other autoregressive module, they would be shared too. So you'd share whatever you could in these architectures, and that reduces the overhead, but it also really helps in their evaluations, which I guess makes sense. Well, I don't know. If the tasks are too distant, you might get catastrophic forgetting, but in their case it does help. Yes, which I could guess, right? For example, the bidirectional self-attention right here: since these two modules are almost doing the same task, it's reasonable that they would share parameters.
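Putting the sharing pattern and the causal-attention trick into one toy layer; all dimensions and the exact wiring are made up for illustration, not the paper's code:

```python
import torch
import torch.nn as nn

class SharedTextBlock(nn.Module):
    """Toy layer showing the sharing pattern: one feed-forward and one
    cross-attention serve all three text pathways, while the self-attention
    differs (bidirectional for the encoders, causal for the decoder)."""
    def __init__(self, d=256, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d, heads, batch_first=True)         # encoder modes
        self.self_attn_causal = nn.MultiheadAttention(d, heads, batch_first=True)  # decoder mode
        self.cross_attn = nn.MultiheadAttention(d, heads, batch_first=True)        # shared
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))  # shared

    def forward(self, txt, img=None, causal=False):
        if causal:  # decoder pathway: token i only attends to tokens <= i
            t = txt.size(1)
            mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=txt.device), 1)
            txt = txt + self.self_attn_causal(txt, txt, txt, attn_mask=mask)[0]
        else:       # encoder pathways: full bidirectional self-attention
            txt = txt + self.self_attn(txt, txt, txt)[0]
        if img is not None:  # skipped entirely in the unimodal text encoder
            txt = txt + self.cross_attn(txt, img, img)[0]
        return txt + self.ffn(txt)
```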
So we've gone through a whole lot of things that they say down here. They do reason through their choices a little bit, even though I think these choices are either arbitrary or guided by experiments, just seeing what works better. They do bring up some hypotheses of why they think things work and why things don't work. They say that the text encoder and decoder share all parameters except for the self-attention layer. The reason is that the differences between the encoding and decoding tasks are best captured by the self-attention layers. So they're essentially saying that whether you want to encode or decode, that is mostly going to be different in the attention layers, not from the architectural perspective, but from the how-the-task-is-done perspective. And I don't think you can necessarily say this, right? You can't necessarily say the feed-forward layers have a similar job, or have similar features and perform similar functions, whether you're encoding or decoding. I just don't think that's evident out of the box; it really needs to be supported by evidence. So yeah. But it seems to work well in empirical evaluations, so I'm going to go with them on sharing the parameters; the reasonings are more hypotheses. So the second part they go into is this CapFilt. Again, this is a bit disconnected, although it plays well into their model. Here they criticize how these data sets are usually collected. They say alt texts often do not accurately describe the visual content of the images that are scraped from the web. And that's why they have a bootstrapping method. So what they do is they collect a data set from the internet. And yeah, well, I find this diagram here to be a little bit complicated, so we're just going to make our own. So they have the internet; I'm going to... this is a globe with, you know, the lines and so on. We're going to collect a big chunk of data of pairs of images and text, images and alt text, from the web: really noisy. And what we're going to do with this stuff is we're going to train a first BLIP architecture, or, as they call it, MED architecture (the multimodal mixture of encoder-decoder), on top. We're just going to train that with this noisy data, and that's going to be our first-iteration model. Now this is really noisy so far and so on. But what we're going to do then is we're going to fine-tune this. We're going to fine-tune a filter and a captioner. So we're going to fine-tune a filter and a captioner on supervised data. There exist some supervised data sets, and one of them, I believe, is the COCO data set. Yes, the COCO data set. So for this step here, we need supervised data, supervised data of image-text pairs. So human-made captions for existing images, which is sort of a proxy for quality. For these things, we can be sure that the quality is relatively high. If we could find some sort of automated way to get really high-quality image-text pair data, it wouldn't necessarily need to be human-labeled; it just needs to be high in quality. So they use that to train a filter and a captioner. Now what are the filter and the captioning model? These are going to be fine-tuned versions of their MED models. For example, the captioner takes in an image and gives you a caption, a synthetic caption. Now this is something our model can do. If we just take two parts, so we take this part and we take this part right here, this is now a captioning model.
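A rough sketch of what "take those two parts and run them as a captioner" could look like at inference time; the module and tokenizer interfaces here are hypothetical stand-ins, not the actual BLIP API, and greedy decoding is used only to keep it short:

```python
import torch

@torch.no_grad()
def generate_caption(image, image_encoder, text_decoder, tokenizer, max_len=30):
    """Reuse two pre-trained parts as a captioner: encode the image once,
    then decode tokens from the image-grounded text decoder."""
    img_feats = image_encoder(image)             # computed a single time
    tokens = [tokenizer.bos_id]                  # hypothetical start token
    for _ in range(max_len):
        inp = torch.tensor(tokens).unsqueeze(0)                  # (1, T)
        logits = text_decoder(inp, cross_inputs=img_feats)       # (1, T, V)
        next_id = int(logits[0, -1].argmax())                    # greedy pick
        if next_id == tokenizer.eos_id:
            break
        tokens.append(next_id)
    return tokenizer.decode(tokens[1:])
```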
So the idea here, the general idea of BLIP, of this MED model, is that we pre-train all of these things together, and we sub-select or even rearrange the different sub-components and then fine-tune them on a downstream task. And one easy way is to take two components, simply deactivate all others and let them run in inference mode. So now we have a captioning model. The filtering model, on the other hand, is very similar, but it takes an image and a piece of text, both as input, and it will output a score of whether the two things go together or not. Now this, of course, we can achieve in multiple ways, but we can achieve it in probably the highest-quality way by taking the image encoder and taking this part right here that is specifically trained to jointly encode. You might ask, why don't we use this module right here and then use this contrastive estimation? We could also do that, definitely. But usually there are multiple ways of determining similarity. You can have the two-stack encoder: here is the image and here is the text, you have separate encoders for them, and then at the end you determine whether they go together. And that's usually good if you want to do something like a search index, because you can pre-compute a lot of these things. You can pre-compute all the embeddings for the images, and then at inference time, if you have a text query, if you want to search for an image via text, you only need to encode the text. Whereas with a joint encoding, it's really different. You need to input both into the encoder, and that will give you a score at the end. And if you want to build a search engine like this, then every single time you issue a query, what you need to do is go through the whole data set and encode the query together with all of the images, get the score for each one, and then evaluate that. So you can see there is a trade-off: the left side is way friendlier computation-wise if you have an existing data set; the right side is qualitatively higher, because during the computation through these layers, the two things can already attend to one another, whereas really the only interaction on the left is at the end over here. So this is a qualitatively better estimate of whether the two things match or don't match. And that's why we're going to have the filter here. Since we're filtering the data set, we can jointly encode the two things anyway. So we're going to fine-tune that part to become our filter. So now we have fine-tuned parts: one captioner, one filter. What can we do now? Well, we can take our data set, this thing right here, and we can use the captioner to produce another data set, by just taking the images. So we just take the images here, we put them through the captioner, and we get another data set. It's going to have the same images, right, and it's going to have different texts. So this is a synthetic data set. We can then join the two data sets together, and then we can put them both through the filter. And the filter will simply filter out any image-text pair that is not adequate, which means that it will filter out any image-text pair which doesn't match well together, given the fine-tuning of the filter on the supervised, or high-quality, data set.
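To pin down the retrieval trade-off from a moment ago, here are both pathways in sketch form; the interfaces are assumed, and in practice the joint pathway would only rerank a shortlist rather than loop over everything:

```python
import torch
import torch.nn.functional as F

def two_tower_search(query_txt_emb, precomputed_img_embs, k=10):
    """Left pathway: image embeddings are computed once and stored, the text
    query is encoded once, and retrieval is a single matrix multiply.
    Cheap, but the two sides never attend to each other.
    query_txt_emb: (1, D); precomputed_img_embs: (N, D)."""
    sims = F.normalize(query_txt_emb, dim=-1) @ F.normalize(precomputed_img_embs, dim=-1).t()
    return sims.topk(k, dim=-1).indices

def joint_rerank(query_tokens, candidate_images, joint_encoder, itm_head):
    """Right pathway: run the image-grounded encoder on (query, image) for
    every candidate and score with the ITM head. Much more expensive, but
    a qualitatively better match estimate, which is why it becomes the filter."""
    scores = []
    for img in candidate_images:
        joint = joint_encoder(query_tokens, img)      # cross-attention inside
        scores.append(itm_head(joint)[..., 1].squeeze())  # "match" logit
    return torch.stack(scores).argsort(descending=True)
```

With the joint pathway fine-tuned as the filter, the bootstrapping continues.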
So then we end up with a data set, and we can restrict it, say, to only have one caption for each image or something like this. And we end up with a data set of image-text pairs which is large, because we've augmented it with synthetic data, but which is also of high quality, because we have done the filtering. Now, all of this being said, again, this highly relies on the quality of the data set that we fine-tune on, and on the diversity of that data set as well. Because you can also imagine, if that data set doesn't contain much of the domain that you're looking at, then your filter will learn to essentially down-rank everything, because it says, well, my data set says these two things don't go well together, because I actually have just no data in that region. So there's a bit of danger in doing this. You really need to pay attention to what data set you're fine-tuning on. But this is how you bootstrap a good data set. So you can see, go from here to here. And you can think of multiple things. Again, I think this paper is less about the particular method they choose, and more about what could be recipes for the future. And I think in recent times, we've seen a lot of synthetic data generation, first of all, being really helpful. We've seen this in a number of reinforcement learning applications, and in a number of even NLP applications. So synthetic data is really picking up, I want to say, with advances in sim-to-real and so on. And then also this approach of filtering. This has come up more and more in recent years, where generative models are paired with discriminative models that either re-rank their outputs or filter their outputs for quality. This seems to be a very good recipe for achieving generative tasks in general: not only train a generator, but train a ranker or filter on top of that. It's pretty computationally efficient, it's easy to implement, and yeah, I think it's a good recipe for the future. And one can think of various ways here to improve this, like doing this bootstrapping multiple times, or collecting the supervised data set in a different manner, and so on. I think there are a lot of possibilities here that are not yet explored, which I find to be pretty cool. So that's essentially all. Yeah. Okay, no, I was actually wrong here. You can see the filter is actually fine-tuned on both of the objectives, to learn whether a text matches the image. So it's both the contrastive and the single-classifier loss. I do think that what they actually pay attention to at the end, for the filter, is going to be this thing right here, the classification head. But I guess it doesn't hurt to use both losses as you fine-tune it. And since all parameters are shared anyway, it's easy to try and it's not too much of an overhead. So that's the methods. Again, they have this concoction of modules that they all pre-train jointly with their respective losses, and then, on the other hand, they have this bootstrapping method where they can directly use their model. That's how the two integrate. Since they have a model that can do all of these different things, they can fine-tune that model to become a filter or to become a captioner. And the same thing holds for the results downstream. Here they have some examples, by the way, of generated captions: the bottom text is always a generated one, the top text is one from the data set.
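Before looking at those examples, here is the whole CapFilt loop in sketch form; the function names and the score threshold are assumptions meant to mirror the description above, not the released code:

```python
def bootstrap_dataset(web_pairs, images, captioner, filter_fn, threshold=0.5):
    """Minimal sketch of the bootstrapping loop: keep web captions that the
    filter accepts, add synthetic captions that the filter accepts, and
    return the cleaned, enlarged data set. `captioner` and `filter_fn`
    stand for the two fine-tuned modules."""
    cleaned = []
    # pass 1: filter the original noisy web pairs
    for img, alt_text in web_pairs:
        if filter_fn(img, alt_text) > threshold:
            cleaned.append((img, alt_text))
    # pass 2: synthesize new captions for the images and filter those too
    for img in images:
        synthetic = captioner(img)
        if filter_fn(img, synthetic) > threshold:
            cleaned.append((img, synthetic))
    return cleaned
```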
In those examples, anything that's red is filtered out by the filter, and anything that's green is accepted by the filter. Yeah, so they also discuss a little bit the dangers of doing this, of training the filtering and the captioning from the same pre-training state on the same data set, which is that there is going to be some confirmation bias: the filter will up-rank things that the captioner produces, because they essentially learned from the same data. That's why they don't share; they fine-tune them separately to combat this a little bit. But I still think that you're going to have some of that in there, definitely. But, you know, "this is a real data from bridge near my house" might be true, right? But it's not very descriptive, and the filter realizes it. Whereas "a flock of birds flying over a lake at sunset", that's pretty descriptive. Another interesting thing is that they use nucleus sampling here, which is a common strategy. But they do find that using nucleus sampling leads to better performance, and that's because it generates more diverse and surprising captions, which contain more new information that the model could benefit from. They compare this to beam search, and beam search essentially goes for the highest-likelihood sample. It tends to generate safe captions that are common in the data set, hence offering less extra knowledge. I think that's also a really cool recognition right here: if we sample things from generative models, we might have different goals, and therefore it might be good to have an objective or a sampling method that encourages diversity. We've already seen this in AlphaCode, and my question there was already a little bit: do we even have the correct training procedures for this, given that we train with maximum likelihood? Or do we have the correct sampling procedures for this? All of these are interesting questions, and I think this kind of research validates that it's not all the same; depending on what we want to do, our training and sampling procedures need to adjust. I don't want to dive too deep into the results. They are outperforming other things by some margin. I don't necessarily agree that they outperform things as heavily as they advertise, but, you know, that's research currently. Again, they allude to the fact that they share parameters here, and why that is. They say sharing all the layers except for the self-attention leads to better performance compared to not sharing. That's the part I believe, right? Totally. You share, numbers go up, good. But then they say that if the self-attention layers are shared, the model's performance would degrade due to the conflict between the encoding and the decoding tasks. And this, I think, yeah, this stuff needs evidence. Because, I mean, yeah, I'm fine with just going with the numbers. Here you can see the various ways they combine the things. For example, for visual question answering, they first encode the image, then they feed that to the text encoder, then they feed that to the decoder. So you can see, you can not only sub-select modules, but you can rearrange them, right? Because you fine-tune, you can adjust the parameters. So this connection already exists in the previous model, this connection doesn't. So you can sort of rearrange and recombine these modules to do various things. You can see here, we have two images, or a double image encoder, or I guess the image encoder just gets two samples.
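As a quick aside, since nucleus sampling is the reason given for the more diverse captions, this is the standard top-p procedure; the p value is illustrative, not the one from the paper:

```python
import torch

def nucleus_sample(logits: torch.Tensor, p: float = 0.9) -> int:
    """Top-p / nucleus sampling over one step's next-token logits (shape (V,)):
    keep the smallest set of tokens whose cumulative probability exceeds p,
    renormalize, and sample from it."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = probs.sort(descending=True)
    cum = sorted_probs.cumsum(dim=-1)
    keep = cum - sorted_probs < p          # always keeps at least the top token
    sorted_probs[~keep] = 0.0
    sorted_probs /= sorted_probs.sum()     # renormalize the nucleus
    idx = torch.multinomial(sorted_probs, 1)
    return int(sorted_ids[idx])
```

Beam search, by contrast, deterministically chases the highest-likelihood continuation, which is exactly the "safe captions" behavior described above.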
Back to the rearranged architecture: then we also have a duplication of these cross-attention modules, and then we output that into a newly trained merge layer. So this is the exciting part right here. And I really don't want to go too deep into this, because we might go into it in the interview. But I foresee a future where we have frameworks, coding frameworks, where this kind of stuff could be supported in an automatic fashion, where I don't have to, you know, go and really hand-define exactly how I want these things combined, but where I could have a more high-level, descriptive language that allows me to do this whole pre-training arrangement and this recombination for downstream fine-tuning. That's really exciting. All right, I'm going to leave it at that. I hope you had a good overview. If you want to dive into the results, feel free, there are lots of tables in here. And then we have a pro evaluation, which is really cool because it lends a lot of credence to their methods. And with that, let me know what you think in the comments, and bye bye.
[ { "start": 0, "end": 10.24, "text": " Hey, y'all, this is a comprehensive paper review of the paper on blip." }, { "start": 10.24, "end": 16.56, "text": " This is a model and a technique for bootstrapping one's own data set in vision and language" }, { "start": 16.56, "end": 18.76, "text": " pre training, which is pretty cool." }, { "start": 18.76, "end": 24.12, "text": " So the video is a comprehensive review, we'll dive into the paper, we'll see what the paper" }, { "start": 24.12, "end": 27, "text": " is about, I'll explain you what's in it." }, { "start": 27, "end": 31.88, "text": " And by the end of the video, you should have a good understanding of what's in the paper." }, { "start": 31.88, "end": 35.8, "text": " In the next video, which I'm going to release tomorrow, there's going to be an interview" }, { "start": 35.8, "end": 38.36, "text": " with the authors of the paper." }, { "start": 38.36, "end": 43.58, "text": " So also be sure to check that out because that answers a few very, very interesting" }, { "start": 43.58, "end": 46.74, "text": " questions that I had while reading the paper itself." }, { "start": 46.74, "end": 48.8, "text": " So I wish you a lot of fun." }, { "start": 48.8, "end": 51.6, "text": " Let me know what you think in the comments and I'll see you around." }, { "start": 51.6, "end": 52.6, "text": " Bye bye." }, { "start": 52.6, "end": 57.160000000000004, "text": " Hey there, this video is sponsored by Zeta Alpha, which is a new neural discovery and" }, { "start": 57.160000000000004, "end": 59.120000000000005, "text": " recommendation engine for papers." }, { "start": 59.120000000000005, "end": 64.88, "text": " Yes, for scientific papers for trends in research and code in AI." }, { "start": 64.88, "end": 69.72, "text": " Their goal is to become your research assistant and streamline how you organize, share and" }, { "start": 69.72, "end": 72.5, "text": " stay up to date on the latest R&D." }, { "start": 72.5, "end": 77.36, "text": " This is really cool because the flood of papers in machine learning is sheer overwhelming" }, { "start": 77.36, "end": 78.52000000000001, "text": " in recent months." }, { "start": 78.52, "end": 83.78, "text": " Zeta Alpha uses neural embedding based search and can give you the best recommendation of" }, { "start": 83.78, "end": 87.8, "text": " research that matches your interest and that you don't want to miss." }, { "start": 87.8, "end": 90.47999999999999, "text": " And what better way than to just try it out." }, { "start": 90.47999999999999, "end": 94.8, "text": " So first I start off searching for today's paper, which is the blip paper." }, { "start": 94.8, "end": 99.12, "text": " And this is really cool because not only do I get the paper, I also get the GitHub code" }, { "start": 99.12, "end": 104.64, "text": " implementation and I can directly see the impact on social media that this paper has." }, { "start": 104.64, "end": 110.12, "text": " This is much better than something like Google Scholar, which would just give me a few links" }, { "start": 110.12, "end": 111.46000000000001, "text": " to the paper itself." }, { "start": 111.46000000000001, "end": 116.6, "text": " I can now save this paper under a tagging category that I'm just going to invent right" }, { "start": 116.6, "end": 117.6, "text": " now." }, { "start": 117.6, "end": 120.6, "text": " And I can use Zeta Alpha to find similar research." }, { "start": 120.6, "end": 123.88, "text": " Here I'm going to limit my search to the last three months." 
}, { "start": 123.88, "end": 128.2, "text": " So I make sure that I don't miss anything that has recently been going on that I should" }, { "start": 128.2, "end": 130.6, "text": " know about when reviewing this paper." }, { "start": 130.6, "end": 133.08, "text": " Now I also like a bunch of those other papers." }, { "start": 133.08, "end": 135.60000000000002, "text": " So I'm going to save them as well to the same category." }, { "start": 135.60000000000002, "end": 140.84, "text": " Once I have a bunch of papers in my category, I can use again Zeta Alpha's recommendation" }, { "start": 140.84, "end": 146.4, "text": " engine to give me more suggested papers to add to the same category based on what I have" }, { "start": 146.4, "end": 147.8, "text": " already in there." }, { "start": 147.8, "end": 153.56, "text": " And I can also share this entire category with my teammates because everything Zeta" }, { "start": 153.56, "end": 157.96, "text": " Alpha does is not only for individuals, but also for teams." }, { "start": 157.96, "end": 163, "text": " This is really powerful and can dramatically accelerate your discovery of new and relevant" }, { "start": 163, "end": 164, "text": " research." }, { "start": 164, "end": 166.88, "text": " Now this doesn't only work for categories that you define." }, { "start": 166.88, "end": 170.84, "text": " Once you interact with the search engine, Zeta Alpha is going to be able to give you" }, { "start": 170.84, "end": 177.08, "text": " a list a feed of recommendations from archive, from conferences, from blogs, from GitHub," }, { "start": 177.08, "end": 178.08, "text": " and much more." }, { "start": 178.08, "end": 182.68, "text": " This saves you a ton of time and lets you stay up to date with whatever is happening." }, { "start": 182.68, "end": 186.38, "text": " If you're at all into ML research, this is hyper relevant for you." }, { "start": 186.38, "end": 188.38, "text": " And I definitely invite you to check it out." }, { "start": 188.38, "end": 191.72, "text": " Now they do have a free tier, but I got you a great deal." }, { "start": 191.72, "end": 196.84, "text": " If you go over there right now and use code Yannick, you'll get 20% off a personal assistant" }, { "start": 196.84, "end": 197.84, "text": " subscription." }, { "start": 197.84, "end": 203.04, "text": " Again, go to Zeta-Alpha.com, use code Yannick for 20% off right now." }, { "start": 203.04, "end": 206.57999999999998, "text": " Thanks again so much to Zeta Alpha for sponsoring today's video." }, { "start": 206.57999999999998, "end": 208.52, "text": " And now let's get into it." }, { "start": 208.52, "end": 219.24, "text": " See ya." }, { "start": 219.24, "end": 220.24, "text": " Hello there." }, { "start": 220.24, "end": 225, "text": " Today we'll look at Blip Bootstrapping Language Image Pre-Training for Unified Vision Language" }, { "start": 225, "end": 231.44, "text": " Understanding and Generation by Junan Li, Dongxu Li, Taiming Xiong, Stephen Hoy." }, { "start": 231.44, "end": 232.88, "text": " Yeah, that's it." }, { "start": 232.88, "end": 234.66, "text": " Of Salesforce Research." }, { "start": 234.66, "end": 237.38, "text": " So this paper proposes two things." }, { "start": 237.38, "end": 239.56, "text": " One is a new architecture." }, { "start": 239.56, "end": 244.46, "text": " And I want to say a new conglomeration of existing things." }, { "start": 244.46, "end": 249.32000000000002, "text": " So an arrangement of modules for multitask pre-training." 
}, { "start": 249.32, "end": 254.98, "text": " This model will take in an image text pair and perform multiple tasks on it." }, { "start": 254.98, "end": 259.96, "text": " It has multiple losses and therefore ends up being able to do multiple things." }, { "start": 259.96, "end": 262.44, "text": " Now that being said, this is a pre-training method." }, { "start": 262.44, "end": 268.56, "text": " So the idea is that for any of these modules, you'll take them, you recompose them downstream" }, { "start": 268.56, "end": 274.2, "text": " and you fine tune them on a task, although they do have some zero shot results." }, { "start": 274.2, "end": 275.2, "text": " So this is one thing." }, { "start": 275.2, "end": 280.32, "text": " And this could be really cool if this alone turns out to be successful because it leads" }, { "start": 280.32, "end": 288, "text": " the path to a future where we have much more dynamic compositions of models and where we" }, { "start": 288, "end": 295.09999999999997, "text": " would pre-train these models with a lot of different tasks in one thing rather than pre-training" }, { "start": 295.09999999999997, "end": 300, "text": " them on just a single task like language modeling." }, { "start": 300, "end": 304.44, "text": " The other thing is a bootstrapping method for the data." }, { "start": 304.44, "end": 310.32, "text": " And these two things are not necessarily disconnected, although I do lament the fact that it's two" }, { "start": 310.32, "end": 312.76, "text": " things in one paper a little bit." }, { "start": 312.76, "end": 319.16, "text": " But there's a bootstrapping method for these image text data set that includes training" }, { "start": 319.16, "end": 327.04, "text": " captioners and filters, which means that there is a part that learns to synthetically generate" }, { "start": 327.04, "end": 333.42, "text": " data and then there is a part that learns to distinguish good from bad data." }, { "start": 333.42, "end": 341.16, "text": " And that allows them to collect lots and lots of data from the internet and filter out bad," }, { "start": 341.16, "end": 346.72, "text": " badly poorly labeled images, which there exists a lot on the internet, and also allows them" }, { "start": 346.72, "end": 352.16, "text": " to augment the data set by labeling images themselves." }, { "start": 352.16, "end": 357.12, "text": " So this is also really interesting and it feeds really well back into their model because" }, { "start": 357.12, "end": 363.54, "text": " their model is uniquely capable of doing this, being the multitask model that it is." }, { "start": 363.54, "end": 368.84000000000003, "text": " So we're going to go through the architecture through the data set bootstrapping method." }, { "start": 368.84000000000003, "end": 376.7, "text": " And keep in mind that I think if this catches on, there could be a recipes in here for future" }, { "start": 376.7, "end": 382.3, "text": " research that lead us to a much more dynamic world where we compose these modules, much" }, { "start": 382.3, "end": 386.82, "text": " like we compose different modules, low level modules in deep learning." }, { "start": 386.82, "end": 393.48, "text": " We could compose these higher level modules and losses and do lots more multitask pre-training," }, { "start": 393.48, "end": 395.82, "text": " maybe even dynamically configured." }, { "start": 395.82, "end": 397.64, "text": " But let's dive in." 
}, { "start": 397.64, "end": 404.88, "text": " So vision language pre-training, they say, has recently been the hit." }, { "start": 404.88, "end": 410.36, "text": " For example, if you think of something like clip, and that's not even pre-training, but" }, { "start": 410.36, "end": 415.32, "text": " there are lots of architectures that do vision language pre-training, meaning they take pairs" }, { "start": 415.32, "end": 418.04, "text": " of images and text." }, { "start": 418.04, "end": 421.86, "text": " So you'll have like some sort of an image and you'll have like some sort of text that" }, { "start": 421.86, "end": 423.52, "text": " goes with it." }, { "start": 423.52, "end": 428.88, "text": " And you'll try to come up with a system that connects the two in any way." }, { "start": 428.88, "end": 433.18, "text": " They say the major, the existing methods have two major limitations." }, { "start": 433.18, "end": 440.68, "text": " So first of all, the, what they call the model perspective, they say they are either the" }, { "start": 440.68, "end": 446.8, "text": " existing methods are either encoder based or an encoder decoder architecture." }, { "start": 446.8, "end": 452.40000000000003, "text": " So in an encoder based setup, what you would do is you would take in both of these things" }, { "start": 452.40000000000003, "end": 457.68, "text": " and you would try to come up with probably a number that represents how well they fit" }, { "start": 457.68, "end": 458.68, "text": " together." }, { "start": 458.68, "end": 461.24, "text": " So are they good together or not?" }, { "start": 461.24, "end": 465.74, "text": " This is the clip architecture essentially." }, { "start": 465.74, "end": 472.04, "text": " So in encoder based models, they criticize that encoder based are less straightforward" }, { "start": 472.04, "end": 475.62, "text": " to directly transfer to text generation tasks." }, { "start": 475.62, "end": 481.2, "text": " So it's not, it's not simple to take clip and actually make it produce something." }, { "start": 481.2, "end": 486.74, "text": " Remember if we have to, if you have to produce an actual image with clip, we need to do this" }, { "start": 486.74, "end": 492.7, "text": " diffusion clip guided diffusion or clip guided GANs, VQ GANs." }, { "start": 492.7, "end": 497.52, "text": " So it's really cumbersome to make clip generate an image and it's probably even more cumbersome" }, { "start": 497.52, "end": 501.64, "text": " to make it generate text because it's not trained on that." }, { "start": 501.64, "end": 503.52, "text": " So they criticize on these methods." }, { "start": 503.52, "end": 506.84, "text": " It's not easy to make them do generation tasks." }, { "start": 506.84, "end": 512.3199999999999, "text": " Whereas encoder decoder models have not been successfully adopted for image text retrieval" }, { "start": 512.3199999999999, "end": 513.3199999999999, "text": " tasks." }, { "start": 513.3199999999999, "end": 520.12, "text": " So an encoder decoder model is where you would take the image probably and then make it produce" }, { "start": 520.12, "end": 521.12, "text": " the text." }, { "start": 521.12, "end": 526.44, "text": " And then you train it as a language model to autoregressively produce the caption." }, { "start": 526.44, "end": 532.68, "text": " And that's really neat for producing captions, but you cannot necessarily do this task up" }, { "start": 532.68, "end": 536.5, "text": " here very easily with such a model." 
}, { "start": 536.5, "end": 541.66, "text": " You will, you will be able to do some things, but they're not necessarily successful because" }, { "start": 541.66, "end": 544.54, "text": " the task is really a different task." }, { "start": 544.54, "end": 550.12, "text": " So both, both approaches for doing this currently are not ideal." }, { "start": 550.12, "end": 552.84, "text": " The other thing is the data perspective." }, { "start": 552.84, "end": 558.68, "text": " They criticize that these models are pre-trained on image text pairs that are essentially scraped" }, { "start": 558.68, "end": 559.68, "text": " from the internet." }, { "start": 559.68, "end": 562.42, "text": " So collected from the internet." }, { "start": 562.42, "end": 566.76, "text": " And they say noisy web text is suboptimal for vision language learning." }, { "start": 566.76, "end": 571.52, "text": " We've known for a long time that there is a trade off between scale of data and quality" }, { "start": 571.52, "end": 572.52, "text": " of data." }, { "start": 572.52, "end": 574.5600000000001, "text": " And ideally you'd have both." }, { "start": 574.56, "end": 580.8399999999999, "text": " However, if you scrape from the internet, so let's say you scrape websites and there" }, { "start": 580.8399999999999, "end": 585.3199999999999, "text": " is like some text and there is an image somewhere and the image will have alt text." }, { "start": 585.3199999999999, "end": 589.7199999999999, "text": " And that's what's usually used as the label in these systems." }, { "start": 589.7199999999999, "end": 595.3599999999999, "text": " So if you don't know in the HTML, if you have an image tag, that's how, that's how the browser" }, { "start": 595.3599999999999, "end": 596.3599999999999, "text": " knows it's an image." }, { "start": 596.3599999999999, "end": 601.1999999999999, "text": " You have the image tag, you have the source attribute, which leads, it's a URL usually" }, { "start": 601.2, "end": 605.5200000000001, "text": " that leads to the image, but then you also have an alt attribute." }, { "start": 605.5200000000001, "end": 612.6600000000001, "text": " And it's really recommended that you put an alt, an alt property to the point where frameworks" }, { "start": 612.6600000000001, "end": 616.32, "text": " and linters and so on, they will yell at you if you don't have it." }, { "start": 616.32, "end": 618.48, "text": " So what does this do?" }, { "start": 618.48, "end": 623.5200000000001, "text": " This specifically is for visually impaired people, for screen readers, but also for bots" }, { "start": 623.5200000000001, "end": 625.6800000000001, "text": " to know what is in the image." }, { "start": 625.6800000000001, "end": 627.5200000000001, "text": " So you put the description there." }, { "start": 627.52, "end": 631.84, "text": " However, a lot of people don't do that." }, { "start": 631.84, "end": 636.88, "text": " And I think it makes it actually worse that linters and so on almost require you to do" }, { "start": 636.88, "end": 637.88, "text": " it." }, { "start": 637.88, "end": 641.6, "text": " Because if you don't want to do it, you're just going to put like some some dumb stuff" }, { "start": 641.6, "end": 647.6, "text": " there like image, or people do lots of search engine optimizations in there." 
}, { "start": 647.6, "end": 652.0799999999999, "text": " So since you know, the search engines don't usually look at the image itself, but at the" }, { "start": 652.08, "end": 657.5600000000001, "text": " alt text, they try to come up with buzzwordy things, so that it's ranked high in search" }, { "start": 657.5600000000001, "end": 658.5600000000001, "text": " results." }, { "start": 658.5600000000001, "end": 661.88, "text": " So not necessarily the best quality data." }, { "start": 661.88, "end": 669.1600000000001, "text": " And their bootstrapping, their bootstrapping method right here is is helping in that of" }, { "start": 669.1600000000001, "end": 673.34, "text": " getting higher quality data out of the internet." }, { "start": 673.34, "end": 674.6800000000001, "text": " So how do they do this?" }, { "start": 674.6800000000001, "end": 681.8000000000001, "text": " The first thing they propose is this model, the multimodal mixture of encoder decoder." }, { "start": 681.8, "end": 688.64, "text": " They say it can operate either as a unimodal encoder, or an image grounded text, the encoder" }, { "start": 688.64, "end": 691.4399999999999, "text": " or an image grounded text decoder." }, { "start": 691.4399999999999, "end": 694.4799999999999, "text": " So yeah, we're going to look at these things." }, { "start": 694.4799999999999, "end": 701.24, "text": " But I think here they say can operate either as one or this or that." }, { "start": 701.24, "end": 702.24, "text": " It's not like this." }, { "start": 702.24, "end": 704.88, "text": " It's not like that exact same model can do this." }, { "start": 704.88, "end": 709.8199999999999, "text": " It's just that they put all of these models into one big model." }, { "start": 709.82, "end": 714.6, "text": " And then they just use the part of the model that does the particular thing." }, { "start": 714.6, "end": 721, "text": " So it's not necessarily super duper unified is what I wanted to say." }, { "start": 721, "end": 727.2800000000001, "text": " Yeah, they train the three, the three sub parts of their models with three objectives," }, { "start": 727.2800000000001, "end": 728.8000000000001, "text": " which we're also going to look at." }, { "start": 728.8000000000001, "end": 732.08, "text": " The second part is this captioning and filtering." }, { "start": 732.08, "end": 736.6400000000001, "text": " This is what this is what boosts the data set quality." }, { "start": 736.64, "end": 743.3199999999999, "text": " They say they learn from noisy image text pairs by cleaning them by producing more and" }, { "start": 743.3199999999999, "end": 744.36, "text": " cleaning them." }, { "start": 744.36, "end": 750.88, "text": " They train a captioner, which whose goal is to produce synthetic captions given web images" }, { "start": 750.88, "end": 757.06, "text": " and a filter to remove noisy captions from both the original web text and synthetic text." }, { "start": 757.06, "end": 763.48, "text": " So the captioner will get images produce labels for these images or produce alt text." }, { "start": 763.48, "end": 769.52, "text": " And then the filter goes over both the generated ones and the collected ones and just filters" }, { "start": 769.52, "end": 773.3000000000001, "text": " out everything that it deems to be qualitatively low standard." }, { "start": 773.3000000000001, "end": 777.24, "text": " Of course, this needs to be trained on a high quality data set." 
}, { "start": 777.24, "end": 782.24, "text": " But these sort of bootstrapping methods we've seen a number of times in the recent past" }, { "start": 782.24, "end": 783.6, "text": " that they actually work." }, { "start": 783.6, "end": 791.36, "text": " In fact, this model, this paper here seems to be a good accumulation of sort of recognitions" }, { "start": 791.36, "end": 794.12, "text": " and good practices over the last few years." }, { "start": 794.12, "end": 801.24, "text": " And we're going to point those out as we go through their their contributions." }, { "start": 801.24, "end": 805.5600000000001, "text": " Here they say we show that the caption and the filter work together to achieve substantial" }, { "start": 805.5600000000001, "end": 810.98, "text": " performance improvement, which, okay, I don't know what substantial means in these kinds" }, { "start": 810.98, "end": 815.26, "text": " of tasks, but it's I it's an improvement." }, { "start": 815.26, "end": 821.12, "text": " They are they achieve state of the art performance in a wide range of vision language tasks." }, { "start": 821.12, "end": 826.8, "text": " And interestingly, also, this is a property of maybe synthetic data generation, they show" }, { "start": 826.8, "end": 830.28, "text": " more diverse captions yield larger gains." }, { "start": 830.28, "end": 836.44, "text": " This might also be a good lesson for people who want to go and apply these methods." }, { "start": 836.44, "end": 842.68, "text": " Lastly, they say next to having state of the art in downstream fine tune tasks, they also" }, { "start": 842.68, "end": 849.72, "text": " achieve zero short performance when directly transferring our models to two video language" }, { "start": 849.72, "end": 850.72, "text": " tasks." }, { "start": 850.72, "end": 856.88, "text": " So they were they were never trained on video language tasks, never pre trained, never fine" }, { "start": 856.88, "end": 861.08, "text": " tuned, yet still they have a good zero short performance, which is okay." }, { "start": 861.08, "end": 865.24, "text": " Like if you understand images, then there are going to be some video tasks that are" }, { "start": 865.24, "end": 868.64, "text": " your that you're particularly good at." }, { "start": 868.64, "end": 870.44, "text": " Right." }, { "start": 870.44, "end": 873.44, "text": " So let's dive into the model." }, { "start": 873.44, "end": 876.24, "text": " And I've already shown you a diagram of the model." }, { "start": 876.24, "end": 879.0600000000001, "text": " They quickly go through this here." }, { "start": 879.0600000000001, "end": 880.44, "text": " They have three parts." }, { "start": 880.44, "end": 885.48, "text": " They have actually, well, I want to say four parts to their model." }, { "start": 885.48, "end": 891.6800000000001, "text": " One part one is a visual transformer, a VIT as the image encoder." }, { "start": 891.6800000000001, "end": 895.84, "text": " So again, they take an image and they take a piece of text and now they do stuff with" }, { "start": 895.84, "end": 896.84, "text": " it." }, { "start": 896.84, "end": 902.1600000000001, "text": " And the first part is they encode the image using a visual transformer." }, { "start": 902.1600000000001, "end": 907.72, "text": " That's all they do with the image they encoded using a bit with the text, they do three," }, { "start": 907.72, "end": 909.5200000000001, "text": " three different things." 
}, { "start": 909.52, "end": 914.16, "text": " The first thing is they also just encode the text unimodally." }, { "start": 914.16, "end": 917.52, "text": " So put the text through an encoder." }, { "start": 917.52, "end": 922.6, "text": " And that with those two things already, they've essentially reproduced clip." }, { "start": 922.6, "end": 926.52, "text": " Except they say it's the same as BERT." }, { "start": 926.52, "end": 928.0799999999999, "text": " Yeah." }, { "start": 928.0799999999999, "end": 932.92, "text": " So they've reproduced clip with those two things, because now they can set it up this" }, { "start": 932.92, "end": 940.56, "text": " visual transformer and the unimodal encoder, they can set it up as a similarity metric." }, { "start": 940.56, "end": 945.0799999999999, "text": " So the unimodal encoder will give you some vector in an embedding space, the visual transformer" }, { "start": 945.0799999999999, "end": 949.9599999999999, "text": " will give you some vector in an embedding space, you can set up a contrastive loss to" }, { "start": 949.9599999999999, "end": 956.06, "text": " check whether these two things go together and whether they are apart from let's say" }, { "start": 956.06, "end": 960.1999999999999, "text": " any other encoded image or text." }, { "start": 960.2, "end": 965.32, "text": " You can do this via contrastive learning, you can do it via regularized methods." }, { "start": 965.32, "end": 970.24, "text": " But essentially, this is what we've come to known as encoder only models." }, { "start": 970.24, "end": 975.26, "text": " The second thing they have is this image grounded text encoder." }, { "start": 975.26, "end": 982.6800000000001, "text": " So the image grounded text encoder does almost the same thing as the unimodal text encoder." }, { "start": 982.6800000000001, "end": 986.36, "text": " However, it doesn't encode the text separately." }, { "start": 986.36, "end": 993.88, "text": " It jointly encodes the text while incorporating attention into the visual transformer." }, { "start": 993.88, "end": 996.12, "text": " We're going to see how that goes in a second." }, { "start": 996.12, "end": 1001.04, "text": " But essentially, it produces a vector, let's say this one." }, { "start": 1001.04, "end": 1007.76, "text": " And while producing that on the path, as it produces that, it incorporates information" }, { "start": 1007.76, "end": 1009.36, "text": " from the visual transformer." }, { "start": 1009.36, "end": 1014.84, "text": " So it will, this here is the output of the visual transformer, it will incorporate that" }, { "start": 1014.84, "end": 1019.5400000000001, "text": " at multiple layers here via cross attention into the process." }, { "start": 1019.5400000000001, "end": 1026.56, "text": " So this here is really a joint kind of encoding of the text given the image." }, { "start": 1026.56, "end": 1030.04, "text": " That's why it's called image grounded text encoder." }, { "start": 1030.04, "end": 1035.92, "text": " What this can do is you can build a classifier on top of this, like a binary classifier," }, { "start": 1035.92, "end": 1042.64, "text": " because it is a representation of the text that has but that has already the information" }, { "start": 1042.64, "end": 1044.44, "text": " of the image inside of it." }, { "start": 1044.44, "end": 1047.1200000000001, "text": " So it's kind of a joint representation of the image and the text." 
}, { "start": 1047.1200000000001, "end": 1053.0800000000002, "text": " So you can build a classifier, for example, whether or not the two things go together" }, { "start": 1053.0800000000002, "end": 1059.92, "text": " again, but you don't have to use a contrastive loss, you can in fact use a supervised loss" }, { "start": 1059.92, "end": 1063.72, "text": " and classify and build a classifier." }, { "start": 1063.72, "end": 1068.8, "text": " The third thing is this image grounded text decoder." }, { "start": 1068.8, "end": 1075.8, "text": " Now again, being image grounded, that is a long, what is going on?" }, { "start": 1075.8, "end": 1077.68, "text": " Something's up here." }, { "start": 1077.68, "end": 1080.6399999999999, "text": " There's an image grounded text decoder." }, { "start": 1080.6399999999999, "end": 1085.8, "text": " The image grounded text decoder is much like the image grounded text encoder in that it" }, { "start": 1085.8, "end": 1088.6399999999999, "text": " incorporates cell across attention." }, { "start": 1088.6399999999999, "end": 1091.36, "text": " However, it's a text decoder." }, { "start": 1091.36, "end": 1095.08, "text": " So what it will do is it will actually produce text." }, { "start": 1095.08, "end": 1102.1999999999998, "text": " So it will auto aggressively produce the text while incorporating again, information via" }, { "start": 1102.1999999999998, "end": 1106.3, "text": " cross attention from the visual representation." }, { "start": 1106.3, "end": 1111.32, "text": " You can see that they have a different section on the pre training objectives." }, { "start": 1111.32, "end": 1113.62, "text": " These just map to these three parts." }, { "start": 1113.62, "end": 1118.8, "text": " So there's the image text contrastive loss, which is the loss for the first part." }, { "start": 1118.8, "end": 1125.62, "text": " There is the image, the image text matching loss, which is the loss for the second part." }, { "start": 1125.62, "end": 1132.28, "text": " And again, this is just a binary classification task where the model uses a linear layer head," }, { "start": 1132.28, "end": 1139.12, "text": " they call it an ITM, an image text, text matching head, but it's a linear layer to predict whether" }, { "start": 1139.12, "end": 1145, "text": " an image text pair is positive, which means matched or negative unmatched given their" }, { "start": 1145, "end": 1148.48, "text": " multi modal feature." }, { "start": 1148.48, "end": 1153.1200000000001, "text": " The special thing here is they do have a hard negative mining strategy." }, { "start": 1153.1200000000001, "end": 1160.88, "text": " So they go to the top part here, they go to the joint, no, sorry, to the disjoint encoding" }, { "start": 1160.88, "end": 1168.72, "text": " to this part, and they look which ones are the hard negatives, which means that negatives" }, { "start": 1168.72, "end": 1175, "text": " that have a high contrastive similarity, and they use those specifically to train this" }, { "start": 1175, "end": 1177.08, "text": " loss here." }, { "start": 1177.08, "end": 1183.12, "text": " The last loss is a language modeling loss, which is obviously relevant for the third" }, { "start": 1183.12, "end": 1184.12, "text": " part." }, { "start": 1184.12, "end": 1188.56, "text": " This is a cross entropy loss, it maximizes the likelihood of the text in an autoregressive" }, { "start": 1188.56, "end": 1190.32, "text": " manner." 
}, { "start": 1190.32, "end": 1194.6, "text": " If we put all of this together, we get this model right here." }, { "start": 1194.6, "end": 1200.08, "text": " Again, if we go through it, the input data are two things, the input data are the image" }, { "start": 1200.08, "end": 1203.3999999999999, "text": " down here, and the piece of text here." }, { "start": 1203.4, "end": 1208.16, "text": " Again, we know these go together because we've scraped them from the web." }, { "start": 1208.16, "end": 1210.72, "text": " So these two, we know they go together." }, { "start": 1210.72, "end": 1214.1200000000001, "text": " This is not an unsupervised training." }, { "start": 1214.1200000000001, "end": 1220.1200000000001, "text": " This is essentially supervised learning for two things that we know go together." }, { "start": 1220.1200000000001, "end": 1224.8200000000002, "text": " The first thing is we're going to encode the image through the image encoder." }, { "start": 1224.8200000000002, "end": 1226.14, "text": " That's the image encoder." }, { "start": 1226.14, "end": 1228.42, "text": " This is the image representation." }, { "start": 1228.42, "end": 1230.18, "text": " This is just a bit." }, { "start": 1230.18, "end": 1234.24, "text": " This is a visual transformer." }, { "start": 1234.24, "end": 1238.64, "text": " I don't think they freeze it, but they may start from a checkpoint." }, { "start": 1238.64, "end": 1240.44, "text": " All of this is jointly trained." }, { "start": 1240.44, "end": 1246.04, "text": " So all of these losses, as I understand them, are jointly trained." }, { "start": 1246.04, "end": 1248.64, "text": " So then we have the vision representation." }, { "start": 1248.64, "end": 1253.16, "text": " What we can do is we can put the text first of all through the text encoder." }, { "start": 1253.16, "end": 1257.6200000000001, "text": " You can see we can append different tokens right here to let the encoder know what we're" }, { "start": 1257.62, "end": 1262.1399999999999, "text": " currently doing because we also have some parameter sharing going on." }, { "start": 1262.1399999999999, "end": 1265.32, "text": " So the text encoder gets the input text." }, { "start": 1265.32, "end": 1268.6799999999998, "text": " It will also compute an encoding." }, { "start": 1268.6799999999998, "end": 1272.6, "text": " And then we have this contrastive loss between the two encodings." }, { "start": 1272.6, "end": 1279.28, "text": " They need to be close for pairs that we know go together, and they need to be far apart" }, { "start": 1279.28, "end": 1280.28, "text": " for other pairs." }, { "start": 1280.28, "end": 1286.12, "text": " You can do something like in-batch negatives, or you can, as we said, mine hard negatives" }, { "start": 1286.12, "end": 1289.8799999999999, "text": " from this part." }, { "start": 1289.8799999999999, "end": 1291.1999999999998, "text": " Well that makes no sense." }, { "start": 1291.1999999999998, "end": 1301.56, "text": " You can mine hard negatives for that part over here, given this part over here." }, { "start": 1301.56, "end": 1306.2399999999998, "text": " Which makes me believe, okay, maybe I haven't read closely enough." }, { "start": 1306.2399999999998, "end": 1311.7199999999998, "text": " Maybe they also just train one of the losses maybe for each batch because they have to" }, { "start": 1311.7199999999998, "end": 1315.4599999999998, "text": " sample differently for the things." 
}, { "start": 1315.46, "end": 1320.1200000000001, "text": " It doesn't make too much of a difference whether they train it really all jointly, jointly," }, { "start": 1320.1200000000001, "end": 1323.98, "text": " or always activate one of the three text pathways." }, { "start": 1323.98, "end": 1327.8, "text": " This would be interesting to figure out." }, { "start": 1327.8, "end": 1333.64, "text": " So the last thing, the second thing they do is they give it to this image grounded text" }, { "start": 1333.64, "end": 1334.64, "text": " encoder." }, { "start": 1334.64, "end": 1338.8400000000001, "text": " Again, this gets the text and a little token to show what's going on." }, { "start": 1338.8400000000001, "end": 1344.02, "text": " It will encode, and now you can see that it has this cross attention module." }, { "start": 1344.02, "end": 1350.48, "text": " And the cross attention module, as it encodes, it incorporates information that comes from" }, { "start": 1350.48, "end": 1355.6399999999999, "text": " all the way over here, comes all the way over here from the image." }, { "start": 1355.6399999999999, "end": 1360.76, "text": " So the image representation is part of the encoding here, which means this thing has" }, { "start": 1360.76, "end": 1365.08, "text": " information about both the text and the image." }, { "start": 1365.08, "end": 1370.84, "text": " Now yeah, of course, it's still a, it's still, it's not symmetric, right?" }, { "start": 1370.84, "end": 1377.4399999999998, "text": " We don't, the joint encoding is asymmetric in the sense that it is the text that is encoded" }, { "start": 1377.4399999999998, "end": 1379.04, "text": " based on the image." }, { "start": 1379.04, "end": 1383.6, "text": " And that allows them to, you to only compute the image representation once." }, { "start": 1383.6, "end": 1388.72, "text": " So they only need to do this pathway on the left here once, and then they can reuse that" }, { "start": 1388.72, "end": 1394.84, "text": " representation for all of the, for all of the different paths in the text here." }, { "start": 1394.84, "end": 1398.84, "text": " Yeah, you can see that on the left, this is the difference on the left here." }, { "start": 1398.84, "end": 1401.48, "text": " This is skipped, the cross attention is skipped." }, { "start": 1401.48, "end": 1406.1999999999998, "text": " We don't have cross attention, it's just an encoding of the text itself." }, { "start": 1406.1999999999998, "end": 1412.02, "text": " And here it's really a joint encoding, which means that this thing here contains information" }, { "start": 1412.02, "end": 1414.12, "text": " on both the image and the text." }, { "start": 1414.12, "end": 1418.84, "text": " And we can perform any sort of task that we want with this joint encoding." }, { "start": 1418.84, "end": 1423.8799999999999, "text": " In our case, we simply train it on a very similar objective as the contrastive loss" }, { "start": 1423.8799999999999, "end": 1426.76, "text": " in that it's a binary classification." }, { "start": 1426.76, "end": 1432.36, "text": " It needs to figure out whether or not the two things actually go together or not." }, { "start": 1432.36, "end": 1438, "text": " The third thing, again, almost the same is this decoder, the text decoder, same input" }, { "start": 1438, "end": 1441.46, "text": " except there's a little decode token." }, { "start": 1441.46, "end": 1445.6, "text": " There is a difference in that this is bidirectional." 
}, { "start": 1445.6, "end": 1452.24, "text": " The other two modules have bidirectional self-attention because they are encoders, so they get to use" }, { "start": 1452.24, "end": 1455.04, "text": " bidirectionality." }, { "start": 1455.04, "end": 1461.58, "text": " Here we use causal self-attention, which essentially means that in the text you only get to attend" }, { "start": 1461.58, "end": 1462.7, "text": " things." }, { "start": 1462.7, "end": 1467.8, "text": " So if you produce a particular token right here, you only get to attend to tokens that" }, { "start": 1467.8, "end": 1470.46, "text": " are behind yourself." }, { "start": 1470.46, "end": 1476.72, "text": " This is a bit of a hack, because otherwise we couldn't train these things with batches" }, { "start": 1476.72, "end": 1479.42, "text": " or in parallel." }, { "start": 1479.42, "end": 1485.3600000000001, "text": " It is definitely possible to use bidirectional self-attention as long as you cap, as long" }, { "start": 1485.3600000000001, "end": 1487.68, "text": " as you mask whatever comes next." }, { "start": 1487.68, "end": 1493.3200000000002, "text": " So you want to mask sort of the future, but within the past you could totally use bidirectional" }, { "start": 1493.3200000000002, "end": 1494.3200000000002, "text": " self-attention." }, { "start": 1494.3200000000002, "end": 1501.22, "text": " Again, this is just a hack to make training easier, but it's come to be a popular hack," }, { "start": 1501.22, "end": 1503.3200000000002, "text": " so everyone's doing it." }, { "start": 1503.3200000000002, "end": 1508.16, "text": " Again, you can see there's cross-attention coming from the image, and here you can really" }, { "start": 1508.16, "end": 1510.44, "text": " see that it's necessary." }, { "start": 1510.44, "end": 1516.5600000000002, "text": " If I want to actually produce text, I need some sort of information of what I want to" }, { "start": 1516.5600000000002, "end": 1517.5600000000002, "text": " produce." }, { "start": 1517.5600000000002, "end": 1523.3400000000001, "text": " So this language modeling loss here really needs the cross-attention, really needs the" }, { "start": 1523.3400000000001, "end": 1524.8200000000002, "text": " input from the image." }, { "start": 1524.8200000000002, "end": 1529.2, "text": " So again, this comes from here, from the image representation." }, { "start": 1529.2, "end": 1530.3200000000002, "text": " So there you have it." }, { "start": 1530.3200000000002, "end": 1536.1200000000001, "text": " It's an unholy concoction of many different things in one." }, { "start": 1536.12, "end": 1539.1999999999998, "text": " And this is all trained jointly." }, { "start": 1539.1999999999998, "end": 1546.6, "text": " And yeah, I'm excited about this because I think not necessarily this particular arrangement." }, { "start": 1546.6, "end": 1553.7199999999998, "text": " I have lots of stuff to criticize or lots of choices here that are kind of arbitrary." }, { "start": 1553.7199999999998, "end": 1560.08, "text": " Why this asymmetry in, you know, I have the image encoded once and I have cross-attention" }, { "start": 1560.08, "end": 1563.12, "text": " into all the text encoders." }, { "start": 1563.12, "end": 1564.28, "text": " Why not the other way around?" }, { "start": 1564.28, "end": 1566.76, "text": " Why don't we do image generation tasks?" }, { "start": 1566.76, "end": 1571.86, "text": " Why don't we do any sort of masked modeling, like masked language modeling?" 
}, { "start": 1571.86, "end": 1574.12, "text": " This could even be in the image." }, { "start": 1574.12, "end": 1577.48, "text": " There's lots of stuff, let's say, to criticize." }, { "start": 1577.48, "end": 1585.68, "text": " But I think what this thing shows is that a good recipe for the future could be to combine" }, { "start": 1585.68, "end": 1592.76, "text": " lots of these different methods together, combine lots of them into one big thing." }, { "start": 1592.76, "end": 1597.16, "text": " Reusing parts intelligently and then train them jointly." }, { "start": 1597.16, "end": 1603.16, "text": " We could even think of frameworks that do this automatically or that allow you to really" }, { "start": 1603.16, "end": 1607.96, "text": " easily set this up with a few lines of code and it will figure out by itself, like the" }, { "start": 1607.96, "end": 1613.2, "text": " framework would figure out itself, what it can compose and how it could reuse." }, { "start": 1613.2, "end": 1620.72, "text": " What you can also see right here is I've overshadowed it a little bit with my thing right here, but" }, { "start": 1620.72, "end": 1626.56, "text": " there's color and the color indicates shared parameters, which is also really interesting." }, { "start": 1626.56, "end": 1632.8600000000001, "text": " So you can see that essentially the text encoders aren't three separate encoders, but they largely" }, { "start": 1632.8600000000001, "end": 1633.98, "text": " share parameters." }, { "start": 1633.98, "end": 1637.46, "text": " For example, the feedforward parameters are shared." }, { "start": 1637.46, "end": 1642.32, "text": " The cross-attention parameters, they're all shared, except of course they're not active" }, { "start": 1642.32, "end": 1644.24, "text": " in this encoder." }, { "start": 1644.24, "end": 1647.42, "text": " The bidirectional self-attention parameters are shared." }, { "start": 1647.42, "end": 1652.28, "text": " The causal self-attention, those ones are separate over here, but if we had some sort" }, { "start": 1652.28, "end": 1658.5600000000002, "text": " of other autoregressive module, they would be shared too." }, { "start": 1658.5600000000002, "end": 1664.92, "text": " So you'd share whatever you could in these architectures and that reduces the overhead," }, { "start": 1664.92, "end": 1670.72, "text": " but also in their evaluations really helps, which I guess makes sense." }, { "start": 1670.72, "end": 1672.5800000000002, "text": " Well, I don't know." }, { "start": 1672.58, "end": 1677.52, "text": " If the tasks are too distant, you might get this catastrophic forgetting, but in their" }, { "start": 1677.52, "end": 1680.48, "text": " case it does help." }, { "start": 1680.48, "end": 1685.28, "text": " Yes, which I could guess, right?" }, { "start": 1685.28, "end": 1690, "text": " For example, the bidirectional self-attention right here, since these two modules are almost" }, { "start": 1690, "end": 1696.8799999999999, "text": " doing the same task, it's reasonable that they would share parameters." }, { "start": 1696.88, "end": 1702.5400000000002, "text": " So we've gone through a whole lot of things that they say down here." }, { "start": 1702.5400000000002, "end": 1709.2800000000002, "text": " They do reason through their choices a little bit, even though I think these choices, they" }, { "start": 1709.2800000000002, "end": 1714.92, "text": " are either arbitrary or they're guided by experiments, just seeing what works better." 
}, { "start": 1714.92, "end": 1720.9, "text": " They do bring up some hypotheses of what they think, why do things work and why do things" }, { "start": 1720.9, "end": 1722.4, "text": " don't work." }, { "start": 1722.4, "end": 1726.8000000000002, "text": " They say that text encoder and decoder share all parameters except for the self-attention" }, { "start": 1726.8000000000002, "end": 1727.8000000000002, "text": " layer." }, { "start": 1727.8000000000002, "end": 1731.46, "text": " The reason is that the differences between the encoding and decoding tasks are best captured" }, { "start": 1731.46, "end": 1733.42, "text": " by the self-attention layers." }, { "start": 1733.42, "end": 1739.44, "text": " So they're essentially saying that whether you want to encode or decode, that is mostly" }, { "start": 1739.44, "end": 1746.1200000000001, "text": " going to be different in the attention layers, not from the architectural perspective, but" }, { "start": 1746.1200000000001, "end": 1749.68, "text": " from sort of the how the task is done perspective." }, { "start": 1749.68, "end": 1753.52, "text": " And that I don't think necessarily you can say this, right?" }, { "start": 1753.52, "end": 1759.8, "text": " Like you can't necessarily say the feed forward layers have a similar job in or have similar" }, { "start": 1759.8, "end": 1764.52, "text": " features and perform similar functions, whether you're encoding or decoding." }, { "start": 1764.52, "end": 1771.42, "text": " I don't just don't think that's out of the box, really evident that we need to be supported" }, { "start": 1771.42, "end": 1772.6000000000001, "text": " by evidence." }, { "start": 1772.6000000000001, "end": 1774.52, "text": " So yeah." }, { "start": 1774.52, "end": 1781.32, "text": " But it seems to work well in empirical evaluations and so I'm going to I'm going to with them" }, { "start": 1781.32, "end": 1788.02, "text": " sharing the parameters, but the reasoning are more hypotheses." }, { "start": 1788.02, "end": 1791.16, "text": " So the second part they go into is this cap field." }, { "start": 1791.16, "end": 1796.24, "text": " Again, this is a bit disconnected, although it plays well into their model." }, { "start": 1796.24, "end": 1800.6399999999999, "text": " Here they criticize how these data sets are usually collected." }, { "start": 1800.64, "end": 1805.8000000000002, "text": " They say alt text often do not accurately describe the visual content of the images" }, { "start": 1805.8000000000002, "end": 1807.8400000000001, "text": " that are scraped from the web." }, { "start": 1807.8400000000001, "end": 1810.5200000000002, "text": " And that's why they have a bootstrapping method." }, { "start": 1810.5200000000002, "end": 1815.4, "text": " So what they do is they collect a data set from the internet." }, { "start": 1815.4, "end": 1822.6000000000001, "text": " And yeah, well, I find this diagram here to be a little bit complicated." }, { "start": 1822.6000000000001, "end": 1825.1000000000001, "text": " So we're just going to make our own." }, { "start": 1825.1000000000001, "end": 1829.96, "text": " So they have the internet, I'm going to this is a globe with, you know, the lines and so" }, { "start": 1829.96, "end": 1830.96, "text": " on." }, { "start": 1830.96, "end": 1838.08, "text": " So we're going to collect a big chunk of data of pairs of images and text, images and alt" }, { "start": 1838.08, "end": 1841.44, "text": " text from the web, really noisy." 
}, { "start": 1841.44, "end": 1848.32, "text": " And what we're going to do with this stuff is we're going to train a first blip architecture" }, { "start": 1848.32, "end": 1854.56, "text": " or a first now how they call it MED architecture, multi something something, whatever their" }, { "start": 1854.56, "end": 1856, "text": " model is on top." }, { "start": 1856, "end": 1861.6, "text": " We're just going to train that with this noisy data, and that's going to be our first iteration" }, { "start": 1861.6, "end": 1862.68, "text": " model." }, { "start": 1862.68, "end": 1866.92, "text": " Now this is really noisy so far and so on." }, { "start": 1866.92, "end": 1871.72, "text": " But what we're going to do then is we're going to fine tune this." }, { "start": 1871.72, "end": 1875.66, "text": " We're going to fine tune a filter and a captioner." }, { "start": 1875.66, "end": 1881.48, "text": " So we're going to fine tune a filter and a captioner on supervised data." }, { "start": 1881.48, "end": 1886.1200000000001, "text": " There exist some supervised data sets." }, { "start": 1886.1200000000001, "end": 1890, "text": " And one of them, I believe, is the Coco data set." }, { "start": 1890, "end": 1892.52, "text": " Yes, the Coco data set." }, { "start": 1892.52, "end": 1899.96, "text": " So this step here, we need supervised data and supervised data of image text pairs." }, { "start": 1899.96, "end": 1908.1, "text": " So human made captions for existing images, which it's a sort of a proxy for quality." }, { "start": 1908.1, "end": 1912.84, "text": " So of these things, we can be sure that the quality is relatively high." }, { "start": 1912.84, "end": 1918.6, "text": " If we could find some sort of an automated way to get really high quality image text" }, { "start": 1918.6, "end": 1922.84, "text": " pair data, it doesn't necessarily need to be human labeled." }, { "start": 1922.84, "end": 1926.04, "text": " It just needs to be high in quality." }, { "start": 1926.04, "end": 1928.86, "text": " So they use that to train a filter and a captioner." }, { "start": 1928.86, "end": 1932.56, "text": " Now what is the filter and the captioning model?" }, { "start": 1932.56, "end": 1938.76, "text": " Now these are going to be fine tuned versions of their MED models." }, { "start": 1938.76, "end": 1946.44, "text": " For example, the captioner takes in an image and gives you a caption, a synthetic caption." }, { "start": 1946.44, "end": 1949.5, "text": " Now this is something our model can do." }, { "start": 1949.5, "end": 1957.6, "text": " If we just take two parts, so we take this part and we take this part right here." }, { "start": 1957.6, "end": 1960.6799999999998, "text": " This is now a captioning model." }, { "start": 1960.68, "end": 1968, "text": " So the idea here, the general idea of BLIP of this MED model is that we pre train all" }, { "start": 1968, "end": 1975.52, "text": " of these things together and we sub select or we rearrange even the different sub components" }, { "start": 1975.52, "end": 1979.0800000000002, "text": " and then fine tune them on a downstream task." }, { "start": 1979.0800000000002, "end": 1985.3600000000001, "text": " And one easy way is to take two components, simply deactivate all others and let them" }, { "start": 1985.3600000000001, "end": 1986.6000000000001, "text": " run in inference mode." }, { "start": 1986.6000000000001, "end": 1989.3200000000002, "text": " So now we have a captioning model." 
}, { "start": 1989.32, "end": 1995.2, "text": " The captioning, the filtering model on the other hand, very similar, but it takes an" }, { "start": 1995.2, "end": 2002.6799999999998, "text": " image and a piece of text both inside and it will output a score of whether the two" }, { "start": 2002.6799999999998, "end": 2005.22, "text": " things go together or not." }, { "start": 2005.22, "end": 2011.6399999999999, "text": " Now this, of course we can achieve in multiple ways, but we can achieve this in the probably" }, { "start": 2011.6399999999999, "end": 2017.6399999999999, "text": " the most high quality way by taking the image encoder and taking this part right here that" }, { "start": 2017.64, "end": 2020.48, "text": " is specifically trained to jointly encode." }, { "start": 2020.48, "end": 2027.8000000000002, "text": " You might ask, why don't we use this module right here and then use this contrastive estimation?" }, { "start": 2027.8000000000002, "end": 2031.2800000000002, "text": " We could also do that, definitely." }, { "start": 2031.2800000000002, "end": 2037.96, "text": " But usually there are always multiple ways of determining similarity." }, { "start": 2037.96, "end": 2041.8200000000002, "text": " You can have sort of the two stack encoder." }, { "start": 2041.8200000000002, "end": 2044.2, "text": " So here is the image and here is the text." }, { "start": 2044.2, "end": 2048.96, "text": " You can have separate encoders for them and then at the end determine whether they go" }, { "start": 2048.96, "end": 2049.96, "text": " together." }, { "start": 2049.96, "end": 2054.4, "text": " And that's usually good if you want to do something like a search index because you" }, { "start": 2054.4, "end": 2057.12, "text": " can pre-compute a lot of these things." }, { "start": 2057.12, "end": 2062.04, "text": " You can pre-compute all the embeddings for the images and then at inference time, if" }, { "start": 2062.04, "end": 2066.48, "text": " you have a query using text, you want to search an image via text, you only need to encode" }, { "start": 2066.48, "end": 2068.76, "text": " the text." }, { "start": 2068.76, "end": 2072.12, "text": " Whereas with a joint encoding, it's really different." }, { "start": 2072.12, "end": 2080.16, "text": " You need to input both into the encoder and that will give you a score at the end." }, { "start": 2080.16, "end": 2085.68, "text": " And if you want to build a search engine like this, then for every single time you issue" }, { "start": 2085.68, "end": 2091.08, "text": " a query, what you need to do is you need to go through the whole data set and encode the" }, { "start": 2091.08, "end": 2097.72, "text": " query here together with all of the images, get the score for each one and then evaluate" }, { "start": 2097.72, "end": 2098.72, "text": " that." }, { "start": 2098.72, "end": 2103.8799999999997, "text": " And you can see there is a trade-off, the left side is way friendlier computation-wise" }, { "start": 2103.8799999999997, "end": 2105.9599999999996, "text": " if you have an existing data set." }, { "start": 2105.9599999999996, "end": 2114.2799999999997, "text": " The right side is qualitatively higher because during computation through these layers, the" }, { "start": 2114.2799999999997, "end": 2120.56, "text": " two things can already attend to one another, whereas really the only interaction here is" }, { "start": 2120.56, "end": 2123.2, "text": " the end over here." 
}, { "start": 2123.2, "end": 2132.08, "text": " So this is qualitatively better estimate of whether the two things match or don't match." }, { "start": 2132.08, "end": 2140.24, "text": " And that's why we're going to have the filter here." }, { "start": 2140.24, "end": 2143.9199999999996, "text": " Since we're working, since we're filtering the data set, we can jointly encode the two" }, { "start": 2143.9199999999996, "end": 2145.16, "text": " things anyway." }, { "start": 2145.16, "end": 2149.3199999999997, "text": " So we're going to fine tune that part to become our filter." }, { "start": 2149.32, "end": 2153.6400000000003, "text": " So now we have a fine tuned part, one captioner, one filter." }, { "start": 2153.6400000000003, "end": 2155.1200000000003, "text": " What can we do now?" }, { "start": 2155.1200000000003, "end": 2163.0800000000004, "text": " Well, we can take our data set, this thing right here, and we can use the captioner to" }, { "start": 2163.0800000000004, "end": 2167.2400000000002, "text": " produce another data set by just taking the images." }, { "start": 2167.2400000000002, "end": 2172.6000000000004, "text": " So we just take the images here, we put them through the captioner and we get another data" }, { "start": 2172.6000000000004, "end": 2173.6000000000004, "text": " set." }, { "start": 2173.6000000000004, "end": 2177.6000000000004, "text": " So we get another data set, it's going to have the same images, right?" }, { "start": 2177.6, "end": 2179.56, "text": " And it's going to have different texts." }, { "start": 2179.56, "end": 2181.02, "text": " So I'm going to put this." }, { "start": 2181.02, "end": 2185.02, "text": " So this is a synthetic data set." }, { "start": 2185.02, "end": 2189.52, "text": " We can then join the two data sets together." }, { "start": 2189.52, "end": 2196.7599999999998, "text": " So join the two data sets, and then we can put them both through the filter." }, { "start": 2196.7599999999998, "end": 2200.24, "text": " So we're going to put them both through the filter." }, { "start": 2200.24, "end": 2207.46, "text": " And the filter will simply filter out any image text pair that is not adequate, which" }, { "start": 2207.46, "end": 2214.7200000000003, "text": " means that it will filter out any image text pair which doesn't match well together, given" }, { "start": 2214.7200000000003, "end": 2220, "text": " the fine tuning of the filter on the supervised or high quality data set." }, { "start": 2220, "end": 2225.7200000000003, "text": " So then we end up with a data set of, and we can restrict it like to only have one caption" }, { "start": 2225.7200000000003, "end": 2228.04, "text": " for each image or something like this." }, { "start": 2228.04, "end": 2233.68, "text": " And we end up with a data set of image text pairs, which is large because we've augmented" }, { "start": 2233.68, "end": 2240.24, "text": " it with synthetic data, but also is of high quality because we have done the filtering." }, { "start": 2240.24, "end": 2246.04, "text": " Now all of this being said, again, this highly relies on the quality of the data set that" }, { "start": 2246.04, "end": 2250.7999999999997, "text": " we fine tune on and of the diversity of that data set as well." 
}, { "start": 2250.7999999999997, "end": 2257.12, "text": " Because you can also imagine if that data set isn't containing much of the domain that" }, { "start": 2257.12, "end": 2262.62, "text": " you're looking at, then your filter will learn to essentially down rank everything because" }, { "start": 2262.62, "end": 2268.52, "text": " it says, well, my data set says these two things don't go well together because I actually" }, { "start": 2268.52, "end": 2270.52, "text": " have just no data in that region." }, { "start": 2270.52, "end": 2273.2, "text": " So there's a bit of danger in doing this." }, { "start": 2273.2, "end": 2277.12, "text": " You really need to pay attention at what data set you're fine tuning." }, { "start": 2277.12, "end": 2279.68, "text": " But this is how you bootstrap a good data set." }, { "start": 2279.68, "end": 2282.56, "text": " So you can see go from here to here." }, { "start": 2282.56, "end": 2285, "text": " And you can think of multiple things." }, { "start": 2285, "end": 2290.8599999999997, "text": " Again, I think this paper is less about the particular method they choose." }, { "start": 2290.86, "end": 2296.3, "text": " And I think more about what could be recipes for the future." }, { "start": 2296.3, "end": 2302.7200000000003, "text": " And I think in the recent times, we've seen a lot of synthetic data generation, first" }, { "start": 2302.7200000000003, "end": 2304.28, "text": " of all, being really helpful." }, { "start": 2304.28, "end": 2310.6, "text": " We've seen this in a number of reinforcement learning applications, a number of even NLP" }, { "start": 2310.6, "end": 2311.6, "text": " applications." }, { "start": 2311.6, "end": 2318.98, "text": " So synthetic data is really, really picking up, I want to say, with advances in SIM to" }, { "start": 2318.98, "end": 2320.58, "text": " real and so on." }, { "start": 2320.58, "end": 2324.08, "text": " And then also this approach of filtering." }, { "start": 2324.08, "end": 2330.7599999999998, "text": " This has come up more and more in recent years, where generative models are paired with discriminative" }, { "start": 2330.7599999999998, "end": 2336.6, "text": " models that either rerank their outputs or filter their outputs for quality." }, { "start": 2336.6, "end": 2343.88, "text": " This seems to be a very good recipe for achieving generative tasks in general." }, { "start": 2343.88, "end": 2349.08, "text": " Not only train a generator, but train a ranker or filter on top of that." }, { "start": 2349.08, "end": 2351.84, "text": " It's pretty computationally efficient." }, { "start": 2351.84, "end": 2353.6, "text": " It's easy to implement." }, { "start": 2353.6, "end": 2357.52, "text": " And yeah, I think it's a good recipe for the future." }, { "start": 2357.52, "end": 2362.58, "text": " And one can think of various ways here to improve this, like to do this bootstrapping" }, { "start": 2362.58, "end": 2372.2799999999997, "text": " multiple times, to collect the supervised data set in a different manner and so on." }, { "start": 2372.2799999999997, "end": 2378.54, "text": " I think there's a lot of possibilities here that are not yet explored, which I find to" }, { "start": 2378.54, "end": 2381.66, "text": " be pretty, pretty cool." }, { "start": 2381.66, "end": 2384.48, "text": " So that's essentially all." }, { "start": 2384.48, "end": 2385.48, "text": " Yeah." }, { "start": 2385.48, "end": 2387.6, "text": " Okay, no, I was actually wrong here." 
}, { "start": 2387.6, "end": 2393.4, "text": " You can see the filter is actually fine tuned on both of the objectives to learn whether" }, { "start": 2393.4, "end": 2397.52, "text": " a text matches the image." }, { "start": 2397.52, "end": 2404.82, "text": " So this it's both the contrastive and the the single classifier loss." }, { "start": 2404.82, "end": 2413.26, "text": " So I do think I do think the filter like what they actually pay attention to at the end" }, { "start": 2413.26, "end": 2419.7000000000003, "text": " is going to be this thing right here is going to be the classification head." }, { "start": 2419.7000000000003, "end": 2426, "text": " But I guess it doesn't hurt to use both losses as you fine tune it." }, { "start": 2426, "end": 2431.4, "text": " And since all parameters are shared, essentially, you really don't have you really don't have" }, { "start": 2431.4, "end": 2435.36, "text": " you can like it's it's easy to try and it's not too much of an overhead." }, { "start": 2435.36, "end": 2436.6800000000003, "text": " So that's the methods." }, { "start": 2436.6800000000003, "end": 2443.28, "text": " Again, they have this concoction of modules that they all pre train jointly with their" }, { "start": 2443.28, "end": 2445.08, "text": " respective losses." }, { "start": 2445.08, "end": 2450.76, "text": " And then on the other hand, they have this bootstrapping method where they can directly" }, { "start": 2450.76, "end": 2452.56, "text": " use their model, right?" }, { "start": 2452.56, "end": 2455.86, "text": " That's the way these integrate these two." }, { "start": 2455.86, "end": 2460.64, "text": " Since they have a model that can do all of these different things, they can fine tune" }, { "start": 2460.64, "end": 2465.14, "text": " that model to become a filter or to become a captioner." }, { "start": 2465.14, "end": 2469.8799999999997, "text": " And the same thing holds for the results downstream." }, { "start": 2469.8799999999997, "end": 2473.7599999999998, "text": " Here they have some examples, by the way, of generated." }, { "start": 2473.7599999999998, "end": 2477.8799999999997, "text": " And so the bottom text is always a generated one." }, { "start": 2477.8799999999997, "end": 2481.16, "text": " The top text is one from the data set." }, { "start": 2481.16, "end": 2484.72, "text": " Anything that's red is filtered out by the filter." }, { "start": 2484.72, "end": 2490.64, "text": " Anything that's green is accepted by the filter." }, { "start": 2490.64, "end": 2497.2799999999997, "text": " Yeah, so they they also discuss a little bit of the dangers of doing this, of training" }, { "start": 2497.2799999999997, "end": 2502.2, "text": " the filtering and the captioning on this from the same pre training state on the same data" }, { "start": 2502.2, "end": 2509.4399999999996, "text": " set, which is that like there is some going to be some confirmation bias in that the filter" }, { "start": 2509.44, "end": 2515.88, "text": " will up rank things that the captioner produces because they're essentially learn from the" }, { "start": 2515.88, "end": 2517.12, "text": " same data." }, { "start": 2517.12, "end": 2518.76, "text": " That's why they don't share." }, { "start": 2518.76, "end": 2522.42, "text": " They fine tune them separately to combat this a little bit." }, { "start": 2522.42, "end": 2527.48, "text": " But I still think that you're going to have some of that in there definitely." 
}, { "start": 2527.48, "end": 2536.2400000000002, "text": " But you know, it's this is, you know, this is a real data from bridge near my house," }, { "start": 2536.2400000000002, "end": 2537.68, "text": " which might be true, right?" }, { "start": 2537.68, "end": 2540.8399999999997, "text": " But it's not very descriptive and the filter realizes it." }, { "start": 2540.8399999999997, "end": 2544, "text": " Yet a flock of birds flying over a lake at sunset." }, { "start": 2544, "end": 2546.3199999999997, "text": " That's pretty descriptive." }, { "start": 2546.3199999999997, "end": 2552.2799999999997, "text": " Another interesting thing is that they use nucleus sampling here, which is a common strategy." }, { "start": 2552.2799999999997, "end": 2559.52, "text": " But they do find that using nucleus sampling leads to better performance and that because" }, { "start": 2559.52, "end": 2564.72, "text": " it generates more diverse and surprising captions, which contain more new information that the" }, { "start": 2564.72, "end": 2571.3199999999997, "text": " model could benefit from this, they compare this to beam search and beam search essentially" }, { "start": 2571.3199999999997, "end": 2573.7799999999997, "text": " goes for the highest likelihood sample." }, { "start": 2573.7799999999997, "end": 2577.64, "text": " It tends to generate safe captions that are common in the data set, hence offering less" }, { "start": 2577.64, "end": 2578.72, "text": " extra knowledge." }, { "start": 2578.72, "end": 2586.68, "text": " I think that's also really cool recognition right here that if we sample things from generative" }, { "start": 2586.68, "end": 2589.3199999999997, "text": " models, we might have different goals." }, { "start": 2589.32, "end": 2595, "text": " And therefore it might not always be good to like it might be good to have an objective" }, { "start": 2595, "end": 2597.2400000000002, "text": " or a sampling method that encourages diversity." }, { "start": 2597.2400000000002, "end": 2599.2000000000003, "text": " We've already seen this in alpha code." }, { "start": 2599.2000000000003, "end": 2601.6400000000003, "text": " And my question there was already a little bit." }, { "start": 2601.6400000000003, "end": 2605.32, "text": " Do we even have the correct training procedures for this?" }, { "start": 2605.32, "end": 2607.84, "text": " Because we train maximum likelihood?" }, { "start": 2607.84, "end": 2611.6400000000003, "text": " Or do we have the correct sampling procedures for this?" }, { "start": 2611.6400000000003, "end": 2613.56, "text": " All of these are interesting questions." }, { "start": 2613.56, "end": 2619.7599999999998, "text": " And I think this kind of research validates that it's not all the same, like, depending" }, { "start": 2619.7599999999998, "end": 2624.7999999999997, "text": " on what we want to do, our training and sampling procedures need to adjust." }, { "start": 2624.7999999999997, "end": 2627.6, "text": " I don't want to dive too deep into the results." }, { "start": 2627.6, "end": 2632.08, "text": " They are outperforming other things by some margin." }, { "start": 2632.08, "end": 2637.08, "text": " Like I don't necessarily agree that they outperform things so heavily as they advertise." }, { "start": 2637.08, "end": 2639.52, "text": " But you know, that's research currently." }, { "start": 2639.52, "end": 2645, "text": " Again, they allude to the fact that they share parameters here." 
}, { "start": 2645, "end": 2650.6, "text": " And why that is, they say, sharing all the layers except for the self attention leads" }, { "start": 2650.6, "end": 2653.24, "text": " to better performance compared to not sharing." }, { "start": 2653.24, "end": 2654.88, "text": " That's the part I believe, right?" }, { "start": 2654.88, "end": 2655.88, "text": " Totally." }, { "start": 2655.88, "end": 2658.08, "text": " You share numbers go up good." }, { "start": 2658.08, "end": 2661.44, "text": " But then they say, if the shared attention layers are shared, the model's performance" }, { "start": 2661.44, "end": 2666.52, "text": " would degrade to the conflict between the encoding and the decoding tasks." }, { "start": 2666.52, "end": 2673.48, "text": " And this, I think, yeah, this stuff needs needs evidence." }, { "start": 2673.48, "end": 2677.88, "text": " Because I mean, yeah, I'm fine with just going with the numbers." }, { "start": 2677.88, "end": 2683.08, "text": " Here you can see the various ways they combine the things, for example, for visual question" }, { "start": 2683.08, "end": 2688.6, "text": " answering, they first encode the image, then they feed that to the text encoder, then they" }, { "start": 2688.6, "end": 2690.06, "text": " feed that to the decoder." }, { "start": 2690.06, "end": 2695.88, "text": " So you can see, you can not only sub select modules, but you can rearrange them, right?" }, { "start": 2695.88, "end": 2698.4, "text": " Because you fine tune, you can adjust the parameters." }, { "start": 2698.4, "end": 2703.6400000000003, "text": " So this connection already exists in the previous model, this connection doesn't." }, { "start": 2703.6400000000003, "end": 2709.12, "text": " So you can sort of rearrange and recombine these modules to do various things." }, { "start": 2709.12, "end": 2714.88, "text": " You can see here, we have two image or a double image encoder, or I guess the image encoder" }, { "start": 2714.88, "end": 2716.8, "text": " get just gets two samples." }, { "start": 2716.8, "end": 2723.38, "text": " And then we also have two, one, a duplication of these cross attention modules." }, { "start": 2723.38, "end": 2728.92, "text": " And then we output that into a newly trained merge layer." }, { "start": 2728.92, "end": 2731.88, "text": " So this is the exciting part right here." }, { "start": 2731.88, "end": 2736.84, "text": " And I feel I feel really don't want to necessarily go into this because we might go into this" }, { "start": 2736.84, "end": 2739.1, "text": " in the interview." }, { "start": 2739.1, "end": 2746.2000000000003, "text": " But I feel a future where we have frameworks, coding frameworks, where this kind of stuff" }, { "start": 2746.2000000000003, "end": 2751.92, "text": " could be supported in an automatic fashion where I don't have to, you know, go and really" }, { "start": 2751.92, "end": 2755.76, "text": " hand define exactly how I want these things combined." }, { "start": 2755.76, "end": 2761.48, "text": " But I could have a more high level descriptive language that allows me to do this whole pre" }, { "start": 2761.48, "end": 2767.04, "text": " training arrangements and this recombination for downstream fine tuning." }, { "start": 2767.04, "end": 2768.04, "text": " That's really exciting." }, { "start": 2768.04, "end": 2770.08, "text": " All right, I'm going to leave it at that." }, { "start": 2770.08, "end": 2771.94, "text": " I hope you had a good overview." 
}, { "start": 2771.94, "end": 2776.7200000000003, "text": " If you want to dive into the results, you know, feel free, there's lots of tables in" }, { "start": 2776.7200000000003, "end": 2777.7200000000003, "text": " here." }, { "start": 2777.72, "end": 2782.16, "text": " And then we have a pro evaluation, which is really cool because it lends a lot of credence" }, { "start": 2782.16, "end": 2783.8799999999997, "text": " to their methods." }, { "start": 2783.88, "end": 2810.28, "text": " And with that, let me know what you think in the comments and bye bye." } ]
a-VQfQqIMrE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
mixup: Beyond Empirical Risk Minimization (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "classifier", "dnn", "cnn", "high dimensions", "class boundaries", "mixing", "interpolation", "latent", "beta", "regularizer", "regularization", "generalization", "adversarial examples", "smooth" ]
Neural Networks often draw hard boundaries in high-dimensional space, which makes them very brittle. Mixup is a technique that linearly interpolates between data and labels at training time and achieves much smoother and more regular class boundaries. OUTLINE: 0:00 - Intro 0:30 - The problem with ERM 2:50 - Mixup 6:40 - Code 9:35 - Results https://arxiv.org/abs/1710.09412 Abstract: Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks. Authors: Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at mixup: Beyond Empirical Risk Minimization by Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin and David Lopez-Paz. So this paper is actually pretty simple, but it introduces a technique that apparently helps with training classifiers, and I have seen it used in practice, so there must be at least something to it. It is ultimately very simple. So usually in deep learning you input a data point X into your neural network, so f of X, that's your neural network, and your neural network has parameters theta. You get some output y hat, and along with the X you also have a y, a true label, and then you have a loss function that compares what you output with your true label, and then you just try to make that loss smaller. You want to adjust your parameters so next time you see data point X, its output will be a little closer to the true label y. And we call this empirical risk minimization, because you don't actually get to sample X from some data distribution D, like the space of all natural images or the space of all of language; what you actually have is a data set with a finite amount of data that you can sample X and Y from. So instead of minimizing your true risk, you minimize your empirical risk, the empirical risk minimization right here. Now what's the problem with that? The problem is that you can get overly confident about your data points and nothing else, and that will hurt your generalization. So if you have a data point, let's say right here, and another one right here (so this is class 1, this is class 2), your network is going to maybe make decision boundaries like this, and like this, where it says: okay, here is class 1 and here is class 2. But it's very conceivable that here it says: here is class 4, and over here is class 7, and right through here is class 9, and by the way, here, class 4 again. So empirical risk minimization leaves everything in between the data points open. Now what this paper proposes is that we should not only train our classifier on these data points, but on all the data points sort of in between the two. And these are the mixup data points. So this data point here might be constructed, if this is A and this is B, from 0.1 times B plus 0.9 times A, because it's mostly A and a little bit B. And now you think: what are the labels here, if A belongs to class 1 and B belongs to class 2? And of course the label of this data point is 0.1 times the class of B, which is 2, plus 0.9 times the class of A, which is 1. Ultimately, this is because when you want to input a class like class number 2 into a machine learning model, you don't just say it's class number 2. What you input is a distribution that basically has zeros everywhere, so these small things, they're 0, 0, 0, 1, 0, and this here is a 1 at class number 2. So this would be class number 1, class number 2, class number 3, right? You input a distribution like this if you want to express class number 2. Now in our sample right here, what we would input as a label is simply a mix between class 1 and class 2: 0.9 of class 1, 0.1 of class 2, and then 0 everywhere else. So this would be our label for the data point that we construct right here. This would be our, sorry, the top one would be our data point. Formally, you take two data points and you mix them using this lambda mixing factor. That'll give you a new data point that's in between the other data points.
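Written out in the paper's notation, with the mixing coefficient $\lambda$ in $[0, 1]$, the constructed training example is

$\tilde{x} = \lambda x_i + (1 - \lambda) x_j$

$\tilde{y} = \lambda y_i + (1 - \lambda) y_j$

where $y_i$ and $y_j$ are the one-hot label vectors of the two original data points.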
And you take the two corresponding labels and you mix them accordingly as well, as in the second equation. And that will give you the label for that data point. And now your model will learn to basically smoothly interpolate. So you will teach your model: the model on the left here is class number 1, right? That's class number 1. The thing on the right is class number 2. This here is half of class 1 and half of class 2. So the model basically learns a smooth interpolation, where the situation that's here on top is probably not going to happen anymore. What it would do instead is sort of create these iso lines around class 2 and around class 1, where it smoothly gets less and less sure about the class of the data points. But on the way, it is always either class 1 or class 2. And they say that can help the generalization performance, and you can see why, right? The only thing that's not clear from the beginning is whether this kind of interpolation actually makes sense, because it means we linearly interpolate between two images. So if we have two images, we just take half of one and half of the other, and that will not be a natural image. It will be kind of a blurry thing. Otherwise, you know, all our problems would be solved and we could just linearly classify things. But in any case, in practice it actually seems to help, probably because linear interpolations of two images are still much more like a natural image than any random noise you could come up with. So they show this in code right here. The code is pretty simple: you simply mix the two things, and the mixing factor, this lambda here, comes from a beta distribution. And they use an alpha of, I believe, 0.4 or something for that beta distribution. Just want to quickly show you, this is the red line here. So with the red line, as you can see, most of the time they're going to either sample the thing on the very left or the thing on the very right. That means they either sample the first or the second data point. But some of the time, they actually sample something in the middle, and it's fairly uniform in the middle. So it appears like a good distribution to sample from if you want to sample these mixing coefficients. And by adjusting the actual numbers alpha and beta here, you can determine how often you sample close to the original data points versus how often you sample something in the middle. OK. On this toy data set right here, they showcase what mixup can do. So in a classic model, you have the orange and the green data points, and blue is basically where the classifier believes it's class one. You see this very hard border here. It's quite a hard border. Now, you only have two classes here, and so the hard border is sort of a problem in itself, because if you think of, for example, adversarial examples, all they have to do is basically get over that one inch, and the classifier is already super duper sure it's the orange class. Whereas if you use mixup, your border is much, much, much more fuzzy. It's like, yeah, it's only really sure here and out here everywhere, but in the middle, it's sort of like, I don't know. And so that's kind of a more desirable situation. Now, of course, this here works particularly well in this linear 2D setting, but as we can see, the same reasoning applies to higher layers and higher dimensional data points. I seem to have lost the ability to zoom. Oh, no, it's back. OK. And that's basically it for this paper. This is all they do.
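To make the code they show concrete, here is a minimal PyTorch-style training step. This is my own sketch, not the authors' reference implementation; model, criterion, x and y are placeholders assumed to already exist:

import numpy as np
import torch

def mixup_batch(x, y, alpha=0.4):
    # Mixing coefficient from Beta(alpha, alpha); a small alpha puts most
    # of the mass near 0 and 1, i.e. close to one of the two original points.
    lam = float(np.random.beta(alpha, alpha))
    # Pair every example with a randomly chosen partner from the same batch.
    index = torch.randperm(x.size(0))
    mixed_x = lam * x + (1.0 - lam) * x[index]
    return mixed_x, y, y[index], lam

mixed_x, y_a, y_b, lam = mixup_batch(x, y)
out = model(mixed_x)
# Mixing the two losses is equivalent to mixing the one-hot labels,
# because cross-entropy is linear in the label distribution.
loss = lam * criterion(out, y_a) + (1.0 - lam) * criterion(out, y_b)
loss.backward()

Note that mixing the two losses instead of building an explicit soft label vector is what keeps the whole thing this short.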
They propose this method and then they test it. They say something interesting here, that mixup converges to the classical method as alpha approaches zero. So that would push your beta distribution basically all the way down in the middle, and you would only sample from the very left or the very right. So you can smoothly interpolate between this mixing and the classic method. So their main results are: they apply this to classifiers. And what I like is, since a GAN, or rather its discriminator, is also a classifier, they also apply it to GANs, and they outperform and stabilize the classic training of GANs. They show that it's more robust towards adversarial attacks, because it's not so sure about intermediate things, and they generally outperform other methods. But also they do this nice investigation here where they measure the prediction error on in-between data. And what that means is, they say a prediction is counted as a miss if it does not belong to YI or YJ. So you have a sample right here, XI, and a sample right here, XJ, and you look at what the classifier says in between the two data points. So you just interpolate the two data points and measure what the classifier says, and whenever the classifier says either YI or YJ, either label of those two data points, you count it as correct, and you only count it as incorrect if it says something else. And you can see here, if you train with the classic method, ERM, these errors happen much more often. That's exactly the situation I pointed out at the beginning, where in high dimensions it can occur that all sorts of decision boundaries sneak in here between the two data points. And by interpolating between them during training, you reduce that effect a lot. They also show the same thing for the norm of the gradients of the model with respect to inputs in between training data: the norm of the gradients in the middle is also much, much lower. And this investigation I find pretty cool. I have to say I've seen mixup in practice, so it might be useful. I've read a paper, I believe it was the Big Transfer paper, where they basically say it is useful if you have, for example, little data and a big model, so you can sort of regularize the model. It is also useful to know that they did test this with dropout. So they compared it with dropout, and the conclusion is basically that this is something else than dropout; it's not doing the same thing. Dropout, of course, means you drop out some of the units in the intermediate activations, and that sort of gives you a noisy version of the data point. This here can actually be combined with dropout, which means that it gives you an additional benefit. You see right here, most of the best numbers happen when you use mixup plus dropout. So it seems to be just an additional regularization on top of dropout. Pretty cool, pretty cool investigation also. All right. So if you like this, I invite you to read the paper. If you like the video, please subscribe and like and comment. And yeah, have a nice day. Bye bye.
[ { "start": 0, "end": 6.7, "text": " Hi there! Today we'll look at Mix-Up Beyond Empirical Risk Minimization by Hongyi Cheng," }, { "start": 6.7, "end": 11.66, "text": " Mustafa Sis, Jan Endauphin and David Lopez-Paz." }, { "start": 11.66, "end": 22, "text": " So this paper is actually pretty simple, but it introduces a technique that apparently helps with training classifiers," }, { "start": 22, "end": 29.5, "text": " and I haven't seen it used in practice, so there must be at least something to it." }, { "start": 29.5, "end": 32.5, "text": " It is ultimately very simple." }, { "start": 32.5, "end": 41, "text": " So usually you input a data point X into your neural network in deep learning." }, { "start": 41, "end": 48, "text": " So f of X, that's your neural network. Your neural network has parameters theta." }, { "start": 48, "end": 56, "text": " You get some output y hat, and along with the X you also have a y, a true label," }, { "start": 56, "end": 62, "text": " and then you have a loss function that compares what you output with your true label," }, { "start": 62, "end": 65, "text": " and then you just try to make that loss smaller." }, { "start": 65, "end": 75, "text": " You want to adjust your parameters so next time you see data point X, its output will be a little closer to the true label y." }, { "start": 75, "end": 82, "text": " And we call this empirical risk minimization," }, { "start": 82, "end": 90, "text": " because you don't actually think that your X comes from some data distribution D," }, { "start": 90, "end": 94, "text": " like the space of all natural images or the space of all of language," }, { "start": 94, "end": 104, "text": " but what you actually have is a data set of a finite amount of data that you can sample X and Y from." }, { "start": 104, "end": 116, "text": " So instead of minimizing your true risk, you minimize your empirical risk, the empirical misc-renimization right here." }, { "start": 116, "end": 118, "text": " Now what's the problem with that?" }, { "start": 118, "end": 124, "text": " The problem is that you can get overly confident about your data points and nothing else," }, { "start": 124, "end": 126, "text": " and that will hurt your generalization." }, { "start": 126, "end": 132, "text": " So if you have a data point, let's say right here, and another one right here," }, { "start": 132, "end": 138, "text": " your network is basically, so this is class 1, this is class 2," }, { "start": 138, "end": 143, "text": " your network is going to maybe make decision boundaries like this, and like this," }, { "start": 143, "end": 146, "text": " where it says okay, here is class 1 and here is class 2." }, { "start": 146, "end": 155, "text": " But it's very conceivable that here it says, here is class 4, and over here is class 7," }, { "start": 155, "end": 161, "text": " and right here through is class 9, and by the way, here, class 4 again." }, { "start": 161, "end": 170, "text": " So the empirical risk minimization leaves everything in between the data points open." }, { "start": 170, "end": 179, "text": " Now what this paper proposes is that we should not only train our classifier on these data points," }, { "start": 179, "end": 185, "text": " but on all the data points sort of in between the two." }, { "start": 185, "end": 190, "text": " And this is the mix-up data points." 
}, { "start": 190, "end": 195, "text": " So this data point here might be constructed, if this is A and this is B," }, { "start": 195, "end": 208, "text": " from 0.1 times B, right, and plus 0.9 times A, because it's mostly A and it's a little bit B." }, { "start": 208, "end": 214, "text": " And now you think, what are the labels here if A belongs to class 1 and B belongs to class 2?" }, { "start": 214, "end": 222, "text": " And of course the label of this data point is 0.1 times the class of B, which is 2," }, { "start": 222, "end": 227, "text": " plus 0.9 times the class of A, which is 1." }, { "start": 227, "end": 233, "text": " Ultimately, because what you do is you input a class like class number 2," }, { "start": 233, "end": 238, "text": " if you want to input this into a machine learning model, you don't just say it's class number 2." }, { "start": 238, "end": 245, "text": " What you input is a distribution that basically has zeros everywhere." }, { "start": 245, "end": 250, "text": " So these small things, they're 0, 0, 0, 1, 0." }, { "start": 250, "end": 252, "text": " And this here is at class number 2." }, { "start": 252, "end": 255, "text": " So this would be class number 1, class number 2, class number 3, right?" }, { "start": 255, "end": 261, "text": " You input a distribution like this if you want to express class number 2." }, { "start": 261, "end": 269, "text": " Now in our sample right here, what we would input as a label is simply a mix between class 1," }, { "start": 269, "end": 278, "text": " so 0.9, so 0.9 of class 1, 0.1 of class 2, and then 0 everywhere else." }, { "start": 278, "end": 284, "text": " So this would be our label for the data point that we construct right here." }, { "start": 284, "end": 288, "text": " This would be our, sorry, the top one would be our data point." }, { "start": 288, "end": 297, "text": " Formally, you take two data points and you mix them using this lambda mixing factor." }, { "start": 297, "end": 301, "text": " That'll give you a new data point that's in between the other data points." }, { "start": 301, "end": 305, "text": " And you take the two corresponding labels and you mix them accordingly as well." }, { "start": 305, "end": 308, "text": " And that will give you the label for that data point." }, { "start": 308, "end": 314, "text": " And now your model will learn to basically smoothly interpolate." }, { "start": 314, "end": 316, "text": " So you will teach your model." }, { "start": 316, "end": 320, "text": " So the model on the left here is class number 1, right?" }, { "start": 320, "end": 321, "text": " That's class number 1." }, { "start": 321, "end": 323, "text": " The thing on the right is class number 2." }, { "start": 323, "end": 330, "text": " This here is half of class 1 and half of class 2." }, { "start": 330, "end": 335, "text": " So the model basically learns a smooth interpolation where the situation that's here on top" }, { "start": 335, "end": 337, "text": " is probably not going to happen anymore." }, { "start": 337, "end": 344, "text": " But what it would do is it would sort of create these iso lines around class 2" }, { "start": 344, "end": 351, "text": " and around class 1 where it's sort of smoothly getting less and less sure about the class of the data points." }, { "start": 351, "end": 354, "text": " But on the way, it is always either class 1 or class 2." }, { "start": 354, "end": 357, "text": " And they say that can help the generalization performance." 
}, { "start": 357, "end": 359, "text": " And it's visible. Why? Right?" }, { "start": 359, "end": 367, "text": " It's just the only thing that's not clear from the beginning is that this kind of interpolation actually makes sense." }, { "start": 367, "end": 372, "text": " Because this means we sort of linearly interpolate between two images." }, { "start": 372, "end": 375, "text": " So if we have two images, we just take half of one and half of the other." }, { "start": 375, "end": 377, "text": " And that will be not a natural image." }, { "start": 377, "end": 379, "text": " It will be kind of a blurry thing." }, { "start": 379, "end": 382, "text": " Otherwise, you know, all our problems would be solved." }, { "start": 382, "end": 385, "text": " And we could just linearly classify things." }, { "start": 385, "end": 389, "text": " But in any case, in practice, it actually seems to help." }, { "start": 389, "end": 393, "text": " Probably because interpolations of two images, linear interpolations," }, { "start": 393, "end": 401, "text": " are still much more like something like a natural image than any random noise you could come up with." }, { "start": 401, "end": 406, "text": " So they say this in code right here." }, { "start": 406, "end": 407, "text": " Code is pretty simple." }, { "start": 407, "end": 410, "text": " Simply want to mix the two things." }, { "start": 410, "end": 414, "text": " And the mixing factor, this lambda here, comes from a beta distribution." }, { "start": 414, "end": 418, "text": " And they use a beta, I believe, of 0.4 or something." }, { "start": 418, "end": 421, "text": " Just want to quickly show you. This is the red line here." }, { "start": 421, "end": 427, "text": " So the red line, as you can see, mostly, most of the time," }, { "start": 427, "end": 433, "text": " they're going to either sample the thing on the very left or the thing on the very right." }, { "start": 433, "end": 437, "text": " That means they either sample the first or the second data point." }, { "start": 437, "end": 441, "text": " But some of the time, they actually sample something in the middle." }, { "start": 441, "end": 445, "text": " And it's fairly uniform in the middle." }, { "start": 445, "end": 451, "text": " So it appears like a good distribution to sample from if you want to sample these mixing coefficients." }, { "start": 451, "end": 456, "text": " And by adjusting the actual number of alpha and beta here," }, { "start": 456, "end": 464, "text": " you can determine how many times you sample the original data points versus how many times you sample something in the middle." }, { "start": 464, "end": 466, "text": " OK." }, { "start": 466, "end": 472, "text": " On this toy data set right here, they showcase what mixup can do." }, { "start": 472, "end": 476, "text": " So in a classic model, you have the orange and the green data points." }, { "start": 476, "end": 480, "text": " And blue is basically where the classifier believes it's class one." }, { "start": 480, "end": 482, "text": " You see this very hard border here." }, { "start": 482, "end": 484, "text": " It's quite a hard border." }, { "start": 484, "end": 486, "text": " Now, you only have two classes here." }, { "start": 486, "end": 494, "text": " And so the hard border is sort of a problem in itself, because if you think of, for example, adversarial examples," }, { "start": 494, "end": 499, "text": " all they have to do is basically get over that one inch." 
}, { "start": 499, "end": 505, "text": " And the classifier is already super duper sure it's the orange class." }, { "start": 505, "end": 509, "text": " Whereas if you use mixup, your border is much, much, much more fuzzy." }, { "start": 509, "end": 513, "text": " It's like, yeah, it's only really sure here." }, { "start": 513, "end": 516, "text": " And out here everywhere." }, { "start": 516, "end": 520, "text": " But in the middle, it's sort of like, I don't know." }, { "start": 520, "end": 524, "text": " And so that's kind of a more desirable situation." }, { "start": 524, "end": 530, "text": " Now, of course, this here works particularly in this in this linear 2D setting." }, { "start": 530, "end": 540, "text": " But as we can see, the same reasoning applies to sort of higher, higher layers and higher dimensionality data points." }, { "start": 540, "end": 543, "text": " I have seemed to lost the ability to zoom." }, { "start": 543, "end": 545, "text": " Oh, no, it's back." }, { "start": 545, "end": 546, "text": " OK." }, { "start": 546, "end": 549, "text": " And that's basically it for this paper." }, { "start": 549, "end": 550, "text": " This is all they do." }, { "start": 550, "end": 554, "text": " They propose this method and then they test it." }, { "start": 554, "end": 561, "text": " They say something interesting here that mixup converges to the classical method as alpha approaches zero." }, { "start": 561, "end": 565, "text": " So that would push your beta distribution basically in the middle all the way down." }, { "start": 565, "end": 570, "text": " And you would only sample from the very left or the very right." }, { "start": 570, "end": 577, "text": " So you can smoothly interpolate between this mixing and the classic method." }, { "start": 577, "end": 584, "text": " They so their main results are we apply this to classifiers." }, { "start": 584, "end": 588, "text": " And what I like is, since again, is also a classifier." }, { "start": 588, "end": 589, "text": " So the discriminator is a classifier." }, { "start": 589, "end": 595, "text": " They also apply it to GANs and they outperform on stabilized the classic training on GANs." }, { "start": 595, "end": 604, "text": " They show that it's more robust towards adversarial attacks because it's not so sure about intermediate things." }, { "start": 604, "end": 608, "text": " And they generally outperform other methods." }, { "start": 608, "end": 617, "text": " But also they do this nice investigation here where they measure the prediction error of in between data." }, { "start": 617, "end": 624, "text": " And what it means is they say a prediction is counted as a miss if it does not belong to YI or YJ." }, { "start": 624, "end": 629, "text": " So you have a sample right here, XI and a sample right here XJ." }, { "start": 629, "end": 635, "text": " And you look at what the classifier says in between the two data points." }, { "start": 635, "end": 639, "text": " So you just interpolate the two data points and just measure what the classifier says." }, { "start": 639, "end": 646, "text": " And whenever the classifier either says YI or YJ, either either label of those two data points," }, { "start": 646, "end": 652, "text": " you count it as correct and you only counted as incorrect if it says something else." }, { "start": 652, "end": 659, "text": " And you can see here if you train with the classic method ERM, these errors happen much more often." 
}, { "start": 659, "end": 665, "text": " That's exactly the situation I pointed out at the beginning where in the high dimensions," }, { "start": 665, "end": 671, "text": " it can occur that all sorts of decision boundaries sneak here in between the two data points." }, { "start": 671, "end": 681, "text": " And by interpolating between them during training, you sort of much reduce that." }, { "start": 681, "end": 686, "text": " You reduce that effect a lot." }, { "start": 686, "end": 695, "text": " Now, this they also say that the gradient norm of the gradients of the model with respect to input in between training data," }, { "start": 695, "end": 702, "text": " it happens the same thing. The norm of the gradients in the middle is also much, much lower." }, { "start": 702, "end": 711, "text": " And this investigation I find pretty cool. I have to say I've seen mix up in practice, so it might be useful." }, { "start": 711, "end": 716, "text": " I've read a paper where they basically say, oh, it was a big transfer paper." }, { "start": 716, "end": 722, "text": " Yeah, where they basically say it is useful if you have, for example, if you have little data and a big model," }, { "start": 722, "end": 728, "text": " so you can sort of regularize the model and is also useful to know that they did test this with dropout." }, { "start": 728, "end": 734, "text": " So they compared it with dropout. And the conclusion is basically that this is something else than dropout." }, { "start": 734, "end": 743, "text": " So it's not doing the same thing. Dropout, of course, it means you drop out some of the data points in intermediate activations." }, { "start": 743, "end": 747, "text": " And that sort of gives you a noisy version of the data point." }, { "start": 747, "end": 754, "text": " This here can actually be combined with dropout, which means that it gives you an additional benefit." }, { "start": 754, "end": 760, "text": " You see right here, most of the best numbers happen when you use mix up plus dropout." }, { "start": 760, "end": 766, "text": " So it seems to be just an additional regularization on top of dropout." }, { "start": 766, "end": 774, "text": " Pretty cool, pretty cool investigation also. All right. So if you like this, I invite you to read the paper." }, { "start": 774, "end": 781, "text": " If you like the video, please subscribe and like and comment. And yeah, have a nice day. Bye bye." } ]
p-zOeQCoG9c
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Weight Standardization (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "normalize", "batchnorm", "groupnorm", "layernorm", "mean", "center", "std", "standardize", "backpropagation", "convergence", "gradients", "norm", "convolution", "cnn", "convolutional neural networks", "filters", "kernel", "channel", "architecture" ]
It's common for neural networks to include data normalization such as BatchNorm or GroupNorm. This paper extends the normalization to also include the weights of the network. This surprisingly simple change leads to a boost in performance and - combined with GroupNorm - new state-of-the-art results. https://arxiv.org/abs/1903.10520 Abstract: In this paper, we propose Weight Standardization (WS) to accelerate deep network training. WS is targeted at the micro-batch training setting where each GPU typically has only 1-2 images for training. The micro-batch training setting is hard because small batch sizes are not enough for training networks with Batch Normalization (BN), while other normalization methods that do not rely on batch knowledge still have difficulty matching the performances of BN in large-batch training. Our WS ends this problem because when used with Group Normalization and trained with 1 image/GPU, WS is able to match or outperform the performances of BN trained with large batch sizes with only 2 more lines of code. In micro-batch training, WS significantly outperforms other normalization methods. WS achieves these superior results by standardizing the weights in the convolutional layers, which we show is able to smooth the loss landscape by reducing the Lipschitz constants of the loss and the gradients. The effectiveness of WS is verified on many tasks, including image classification, object detection, instance segmentation, video recognition, semantic segmentation, and point cloud recognition. The code is available here: this https URL. Authors: Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, Alan Yuille Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Weight Standardization by Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen and Alan Yuille of Johns Hopkins University. So weight standardization is a normalization technique for training neural networks, and it goes basically in conjunction with another technique called group normalization. So if you haven't seen my video on group normalization and don't know what it is, I suggest you go watch that first, or read the group norm paper or some blog post, because weight standardization is usually used together with group norm in order to work well, and that's what this paper also says, even though it's pretty much independent. But here you can see their main results. So they compare batch norm, group norm, and weight standardization used with group norm, and as you can see here, they can outperform the other two models in ImageNet top-1 accuracy. And the important part here, as you can see, is that batch norm is trained with large batch sizes, while group norm and group norm plus weight standardization are trained with one image per GPU. So they have a multi-GPU setup, and this is just one image per GPU. And these results over here are on a Mask R-CNN, a region-based detection and segmentation model, where the model is large and therefore you can only have very small batches per worker, and that means batch norm will work less well. Now again, we've discussed why batch norm is not a good thing when you have to go to small batch sizes. Basically, what people have discovered is that it is very beneficial in machine learning to normalize your data before working with it. What do we mean by that? So if you have a bunch of data points right here, let's say like this, it is usually beneficial to first center the data, so basically calculate its mean and shift it, and then to standardize the axes, so basically you divide by the standard deviation in each direction, and your data will look something like this. For many classical methods this will improve the conditioning of the problem, and even for deep learning methods we just know that if you standardize your data like this, it works better. So people have basically come up with these methods where they say: well, if it helps for the data at the beginning of a neural network, then, if after a layer the data is kind of out of whack, which can happen after a layer of a neural network, we should maybe, before we send it to the next layer, do the same thing: center it again and then send it through. And if after the next layer it's again out of whack, we should maybe center it and standardize it again before sending it through the next layer. So in each layer you have these transformations that center and standardize the data, and for the longest time this was batch norm. Batch norm does this across the mini-batches of the data, since you can't pass the entire data set. Now group norm has come along and replaced batch norm, because batch norm is very dependent on the batch size while group norm isn't. The group norm paper has sort of made it clear that in the large batch size regime, batch norm is still the king, batch norm still works better. It's only when you go to very small batch sizes that group norm takes over, and that's what you can see here.
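As a quick reference before going on: all of these activation normalizers apply the same standardization and differ only in which set of activations the mean and variance are computed over,

$\hat{x} = \dfrac{x - \mu}{\sqrt{\sigma^2 + \epsilon}}, \qquad y = \gamma \hat{x} + \beta$

where $\gamma$ and $\beta$ are learned scale and shift parameters and $\epsilon$ is a small constant for numerical stability.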
So here, okay, it's a bit unfair because batch norm is trained with a larger batch size, but even if group norm were to be trained with the large batch size, it would still be in the same place... because, no, it wouldn't, it would not. Sorry, that is not the case, because the batch size still influences the gradient stochasticity and so on. But still, batch norm is better than group norm, as you can see here. But over here, where you have to go to the small batch sizes, batch norm is all of a sudden worse than group norm. And weight standardization is a technique to actually make group norm better than batch norm in any of these, so even in the large batch regime. Okay, so we'll now explore weight standardization. In the group norm paper we've looked at the diagram on the left. So basically, in batch norm, here is the number of data points, this is your batch; this is the channels of the individual images; and this is the height and width of the image. So this is the image itself, a single channel. So a single channel in the image would be a column in this thing right here. Batch norm normalizes across the data points in a single channel. Layer norm, which is a precursor to group norm, normalizes only within a single data point instance, but across all of the channels, as you can see here. Now that removes its dependence on the batch size, each data point is treated individually, but of course it sort of conflates all the channels with each other; it doesn't distinguish them. Instance norm tries to fix this. Instance norm, down here, tries to fix this by saying it was a good idea to normalize each feature individually, and takes it to the extreme, basically normalizing each single channel of a single image on its own. But that loses too much information. Group norm comes along and says: maybe some of the features naturally depend on each other, naturally exhibit the same responses, therefore we should normalize them in groups. So we still take a single image, but we take groups, in this case groups of three channels together, and normalize across that. Now this here is all in data space. This all normalizes the data, like we said up here when we drew this. This is all normalizing the data before passing it through the next layer. Now what actually happens in these layers? What happens here in a convolutional neural network is that the images get convolved with kernels. That's what a neural network layer is. So if you have an image right here of our trusty cat (I haven't drawn whiskers in a while; that nose is very high, the eyes must be like up here, sorry cat), the layer inherently has these things called kernels. Now I'm just going to draw one of these kernels right here. It's a three by three kernel, and what you'll do is you'll slide the kernel across this, right, across like this. You slide it across and across, and for each point you convolve the kernel. So you convolve the values here with the pixels here and sum them up, and that, for each position in the image, means that you'll basically get a new value at each point, and that will be your next layer's data point. Now in these normalization techniques, we usually normalize the data points. So here you have multiple channels, maybe a red, a green and a blue, and so on, and in the intermediate layers you have even more. But you also have multiple kernels. You can see here you have multiple of these kernels, which will then result in multiple output channels.
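In code terms, this whole bank of kernels is one weight tensor. A quick illustration using PyTorch's convention (the concrete channel numbers here are just made up for the example):

import torch.nn as nn

# 3 input channels (e.g. RGB), 64 output channels, 3x3 kernels.
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)
print(conv.weight.shape)  # torch.Size([64, 3, 3, 3]), i.e. (C_out, C_in, kH, kW)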
The old normalization methods, batch norm, layer norm, group norm, they all work in this space, the space of data, whereas weight standardization works in the kernel space. So weight standardization means you want to normalize the weights of the neural network, not the data. And that's why it can be used in conjunction with something like group norm, or actually batch norm or layer norm; it could be used with any of these, but these authors use it in conjunction with group norm. So what does it do? If you have these kernels, a kernel is actually characterized by four numbers. First of all, the height and width of the kernel, which in our case was three by three. It is characterized by two more numbers, which are C-in and C-out, the in channels and the out channels. So the in channels is the number of channels that come into the layer, and the out channels is the number of channels that you want to transform that into. So here you can see the in channels are listed here and the out channels are listed here, and in the up-down direction, which is not labeled here, is the height and width. So this here would actually be a two by two kernel. So each of these slivers here is a two by two kernel in the convolutional network; that would be the orange sliver here, and then the sliver behind that would be the next two by two kernel. Weight standardization says: hey, just as we normalize the data, it might be a good idea to... sorry, that was wrong. One column here, one of these columns, is a two by two filter, and then the column behind it and the column next to it, they're all two by two filters, right? So you have two by two filters in the output direction, and you also have, for each input-output channel combination, a two by two filter. So you have an entire matrix of two by two filters, if you can imagine that, across the out and across the in direction. Weight standardization says: hey, it might be a good idea to look at the weights for a given output channel. That is, we take one output channel and we look at all the filters that transform the input into that one output channel, which is going to be this many times this many times this many numbers, or this many filters, and maybe we should normalize all of these so they don't get out of whack. Because one could imagine that during training, if we initialize our filters somewhere here, you know, maybe this one number here we initialize randomly, we draw it from random, then maybe as we train it actually gets very large. And that is actually plausible, because after that, after this neural network layer, we have this procedure to re-center the data, right? So I could make a very large weight here, multiply the data by a very large weight, because it gets re-centered anyway. But of course, if my weights get large, I'll basically increase the variance and the instability, and the gradients might be high, and so on. So these authors think it might be a good idea to normalize these weights. So just as you normalize the data, you normalize the weights, and this actually turns out to be fairly easy in the sense of how you would do it.
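Here is a minimal sketch of what such a weight-standardized convolution might look like in PyTorch. This is my own illustration rather than the authors' released code; in particular, the small epsilon for numerical stability is my own choice:

import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    # A Conv2d that standardizes its weights on the fly, per output channel.
    def forward(self, x):
        w = self.weight  # shape (C_out, C_in, kH, kW)
        # Statistics over all weights that belong to one output channel.
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5  # avoid division by zero
        w_hat = (w - mean) / std
        # Convolve with the standardized weights; autograd differentiates
        # through the centering and scaling automatically.
        return F.conv2d(x, w_hat, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

You would swap something like this in for every convolution in the network and keep group norm on the activations.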
Now you don't do this you do you have take W and first you subtract the mean of W this is now for a single output channel and then you divide by the standard deviation how many of this is standard deviation of W and that entire thing you now multiply by X. Now since these things here are sorry about that since these things here are just you know deterministic operation you can actually back propagate through it so the forward path of data now looks as follows you come you start you say okay my data comes in I will take my weights that my layer weights and I will first center them then scale them with its standard deviation and then I will use that thing and X in order to obtain my layer output and then I'll send that to the next layer. Now the backprop signal here is interesting because the backprop signal comes in from here and splits up in two ways it splits up into the back prop signal basically you have to back prop through the X times W hat operation we know how to do that that's just a convolutional back prop that you back prop through the convolution operation back to the last layer. Now usually when you back prop through the convolution operation you get two things you get the derivative with respect to X and you get the derivative with respect to the weights W and you can send both on and you would update your weights with that gradient but now what you'll have to do because this is not your actual parameter of the network you have to take that particular signal and you have to basically reverse the standardization and the centering before you can apply the gradient but that's all doable the actually modern frameworks will do it by themselves but it's just that the backprop path here it introduces two new operation to the forward and to the backprop path that you didn't have before but I can imagine this will actually not take you won't even notice that this is happening this is so fast so they the idea is basically pretty basic especially since the entire discussion around normalization has already happened I enjoy that this paper does go into the theory a bit more so they analyze what this weight standardization what effect it has on the Lipschitz constant of the loss for example and they also research what what what contributes more the centering of the weights or the standardization so they kind of run all these ablations where they figure out okay if we just do group norm we have one we you know we have this trajectory here and if we run group non plus equation five which is subtracting the mean you can see the blue and the orange that is quite a bit and if we only do the dividing by the standard deviation you can see it's pretty close together but there is a difference if you do both then again there is a difference to only doing the centering so they they say even though you know probably subtracting the mean gives you most of the benefit since it is so easy you should just do both and I honestly think and here in the in the in the validation error that makes basically no difference at all and they do quite a number of these ablations which I'm not going to go into too much and they do also the so the Lipschitz constant of the loss and the Lipschitz constant of the gradients they basically show that the loss and the gradients are behaved more more well-behaved when you use this weight standardization technique together with group norm they also do quite a bit of experiments where they show that their method outperforms batch norm and especially in the small batch size 
regime, and that is something that I absolutely believe. What happened here? Okay, we actually don't even need to go down there. If you want to read the paper, I invite you to read the paper; it's a very good paper, I enjoyed reading it. Ultimately, they suggest this new method, and I have also seen it replicated across the community a number of times, so it seems to be a thing. I would expect that either it fizzles out and the community decides that it's about the same as batch norm and therefore not worth it, or, and that's what I believe, since we also go in the direction of larger models, which means smaller batches per worker, and generally batch norm is a pain, this is just going to be rather standard in the future. So I'll actually incorporate this, if I can, into my next projects. So that was it for me. If you like this, consider subscribing, consider leaving a like on the video. Thank you for listening. If you have any comments, I will very probably read them. Bye bye.
[ { "start": 0, "end": 7.8, "text": " Hi there! Today we're looking at weight standardization by Si Wen Jiao, Hu Yuh Wang," }, { "start": 7.8, "end": 16.32, "text": " Xianqi Liu, Wei Shen, Alan Yuel of John Hopkins University. So weight standardization" }, { "start": 16.32, "end": 23.22, "text": " is a normalization technique for training neural networks and it goes basically in" }, { "start": 23.22, "end": 28.18, "text": " conjunction with another technique called group normalization. So if you" }, { "start": 28.18, "end": 36.4, "text": " haven't group normalization norm, that is ugly, if you haven't seen my video on" }, { "start": 36.4, "end": 41.36, "text": " group normalization and don't know what it is I suggest you go watch that first" }, { "start": 41.36, "end": 46.64, "text": " or read the group norm paper or some blog post because weight standardization" }, { "start": 46.64, "end": 54.16, "text": " is usually used together with group norm in order to work well and that's what" }, { "start": 54.16, "end": 61.64, "text": " this paper also says. Even though it's pretty much independent but here you can" }, { "start": 61.64, "end": 69.64, "text": " see their main results. So they compare batch norm, group norm and weight" }, { "start": 69.64, "end": 75.47999999999999, "text": " standardization used with group norm then they can as you can see here they" }, { "start": 75.47999999999999, "end": 81.44, "text": " can outperform in the image net top one accuracy the other two models and the" }, { "start": 81.44, "end": 87.36, "text": " important part here as you can see is batch norm is trained with large batch" }, { "start": 87.36, "end": 93.08, "text": " sizes while group norm and group norm plus weight standardization are trained" }, { "start": 93.08, "end": 98.92, "text": " with one image per GPU so they have a multi GPU setup and this is just one" }, { "start": 98.92, "end": 107.36, "text": " image per GPU and these results over here are on a mask rcnn" }, { "start": 107.36, "end": 117.4, "text": " which I believe is a recurrent model where the model is large because the kind" }, { "start": 117.4, "end": 122.2, "text": " of the model is large and therefore you can only have very small batches per" }, { "start": 122.2, "end": 128.56, "text": " worker and that means batch norm will work less. Now again we've discussed why" }, { "start": 128.56, "end": 134.84, "text": " batch norm is not a good thing when you have to go to small batch sizes because" }, { "start": 134.84, "end": 141.08, "text": " basically what people have discovered is that it is very beneficial in machine" }, { "start": 141.08, "end": 144.92000000000002, "text": " learning to normalize your data before working with it. What do we mean by it?" }, { "start": 144.92000000000002, "end": 151.84, "text": " So if you have a bunch of data points right here and let's say like this it is" }, { "start": 151.84, "end": 157.68, "text": " it is usually beneficial to first center the data like this so basically" }, { "start": 157.68, "end": 163.96, "text": " calculate its mean and shift it and then to standardize the axis so basically" }, { "start": 163.96, "end": 168.48000000000002, "text": " you divide it by the standard deviation in each direction and your data will" }, { "start": 168.48000000000002, "end": 172.12, "text": " look something like this. 
Of many classical methods that will improve the" } ]
to7vCdkLi4s
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reinforcement Learning with Augmented Data (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "reinforcement learning", "sac", "ppo", "deep rl", "deep reinforcement learning", "dreamer", "curl", "pixel", "pretraining", "deepmind", "openai", "berkeley" ]
This ONE SIMPLE TRICK can take a vanilla RL algorithm to achieve state-of-the-art. What is it? Simply augment your training data before feeding it to the learner! This can be dropped into any RL pipeline and promises big improvements across the board.
Paper: https://arxiv.org/abs/2004.14990
Code: https://www.github.com/MishaLaskin/rad
Abstract: Learning from visual observations is a fundamental yet challenging problem in reinforcement learning (RL). Although algorithmic advancements combined with convolutional neural networks have proved to be a recipe for success, current methods are still lacking on two fronts: (a) sample efficiency of learning and (b) generalization to new environments. To this end, we present RAD: Reinforcement Learning with Augmented Data, a simple plug-and-play module that can enhance any RL algorithm. We show that data augmentations such as random crop, color jitter, patch cutout, and random convolutions can enable simple RL algorithms to match and even outperform complex state-of-the-art methods across common benchmarks in terms of data-efficiency, generalization, and wall-clock speed. We find that data diversity alone can make agents focus on meaningful information from high-dimensional observations without any changes to the reinforcement learning method. On the DeepMind Control Suite, we show that RAD is state-of-the-art in terms of data-efficiency and performance across 15 environments. We further demonstrate that RAD can significantly improve the test-time generalization on several OpenAI ProcGen benchmarks. Finally, our customized data augmentation modules enable faster wall-clock speed compared to competing RL techniques. Our RAD module and training code are available at https://www.github.com/MishaLaskin/rad.
Authors: Michael Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, Aravind Srinivas
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Hi there. Today we're going to take a short look at Reinforcement Learning with Augmented Data. This paper is by Michael Laskin, Kimin Lee, and others from UC Berkeley and NYU. The reason this is a short look is that the method in the paper is quite small and simple, but the claims attached to it are quite grandiose. So we'll dive into it.

The paper basically combines two things: reinforcement learning and data augmentation. Reinforcement learning we've talked about a number of times. It's the setting where an agent is in a world and has to learn to solve an optimization problem by repeatedly interacting with that world. You can see here, for example, the walker task, where this walker has two feet and basically needs to stand upright and walk forward for a number of steps. The further you go, the better. By repeatedly trying this and getting better and better at it, that is reinforcement learning.

The second part is data augmentation. Data augmentation is a pretty standard practice in supervised learning. What does it mean? If you have a supervised learning task, for example an image classification task where here is a picture of a cat and the label is "cat", then you can feed this through your neural network to arrive at a loss. But you only have so many pictures; maybe your database has, I don't know, one million images. Usually people go through that database a number of times, say 20 or 50 passes, so that the model learns each image multiple times. What turns out to be more successful is data augmentation: you add an in-between step right here that takes the image and modifies it in some small way. For example, it could block out part of the image, simply masking a square. You feed that through the model, and the next time the same image comes up, the step does something different: for example, it randomly crops the image to only the top-right part. The next time it applies a bit of color jitter, and the next time it converts the image to grayscale, and so on.

Supervised learning has found data augmentation to be quite beneficial, because you not only make the model learn what the picture is, you also make it learn small variations of that picture that you can be pretty sure would not change the label. So you're not feeding the model false information, and this generally makes it more robust to test-time discrepancies.

This paper basically claims: if you do reinforcement learning and simply apply data augmentation to the input data of that reinforcement learning, it works much, much better. Of course, since this is a general trick in supervised learning, we can expect it to do something for reinforcement learning as well. But the paper claims that this one plug-in, which you basically drop into your reinforcement learning pipeline, is about as much of a gain as the last five years of research on reinforcement learning for these tasks.

So let's dive into it. The paper proposes just what I said: plug in the data augmentation and then do reinforcement learning on the augmented data. They use these data augmentations. Crop we've already discussed; it's a random crop. Grayscale means that the picture is converted to black and white with a certain probability.
Cutout means that a little patch of the image is masked out, as I said. Cutout-color is the same, but the patch is filled with a random color. Flip means you flip the image horizontally or vertically with some probability. Rotate is the same, but instead of flipping you rotate the image. Random conv means you convolve the image with a random filter, in this case some red or blue or yellow filters. And color jitter means that you perturb the colors in a way that doesn't mess up the image too much: you change the colors, but the overall image still looks the same.

The one thing you have to pay attention to is this. In your reinforcement learning pipeline, if you have a walker like this, you have your network here, and then you have your policy and your value function. If you don't know what these are, I've covered them many times in previous reinforcement learning videos. You often don't want to feed in just the single current observation; you want to feed in a stack of the last few frames, so that the model gets an idea of what happened during, let's say, the last second. For this walker, it's not only important where the legs are, which are up here right now; it's also important what their momentum is, how they're moving, and you can determine that from the last few frames. So it's often beneficial to feed the last few frames.

The important point, they say, is that these augmentations are applied consistently across the stacked frames. You select an augmentation and the scale of that augmentation, and then you apply it to all of the stacked frames in the same way. On the next forward pass, with a different set of stacked frames, you can decide on a different augmentation. That is basically the only difference between the supervised setting and this setting: you have to apply the augmentation consistently across the stack, and consistently within a training step.

They formulate this for the classic proximal policy optimization, which is an actor-critic method. The only time you really have to pay attention is when you plug the observation into these models here: it needs to be the same observation, meaning the observation augmented with the same augmentation procedure. So they say that when applying RAD, the random data augmentation, to SAC, the soft actor-critic, the data augmentations are applied to the observations passed to Q and pi. Soft actor-critic is a state-of-the-art off-policy algorithm for continuous control problems. Here too you have to make sure that the observations you feed in, like here and here, are the same augmented observations. And proximal policy optimization is a state-of-the-art on-policy algorithm for learning a continuous or discrete control policy.

Okay, so as I said, they simply drop this in there, and it turns out they outperform or match the performance of many, many baselines.
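Before we get to the results, here is a minimal sketch of what this consistency requirement could look like in code. This is my own illustration, not the paper's actual implementation: the function name, the crop size of 84, and the (stack, height, width, channels) frame layout are all assumptions on my part.

import numpy as np

def random_crop_stack(frames, out_size=84):
    """Randomly crop a stack of frames, reusing the SAME crop window
    for every frame so the augmentation is consistent across the stack.

    frames: array of shape (stack, height, width, channels)
    """
    _, h, w, _ = frames.shape
    # Sample the augmentation parameters once per observation ...
    top = np.random.randint(0, h - out_size + 1)
    left = np.random.randint(0, w - out_size + 1)
    # ... and apply them identically to every frame in the stack.
    return frames[:, top:top + out_size, left:left + out_size, :]

# In an SAC-style update, the same augmented observation would then be
# fed to both the critic Q and the policy pi (hypothetical names, sketch only):
#   obs_aug = random_crop_stack(obs)   # one draw of the augmentation
#   q_loss  = critic_loss(Q(obs_aug, action), target)
#   pi_loss = policy_loss(pi(obs_aug))

The design point is that the randomness is drawn once per observation rather than once per frame, so the temporal information in the stack, such as the momentum of the legs, survives the augmentation.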
Here you can see CURL, on which I've made a video; it's another way of using augmentation to pre-train representations for reinforcement learning. Then there are state-of-the-art methods like PlaNet and Dreamer, and I've made a video on Dreamer as well. And then pixel SAC and state SAC, where state SAC is sort of a cheating baseline, because it has access to the underlying state, whereas all the other methods only have access to the pixels. You can see that the data augmentation method, which is basically just plain, pure SAC plus the data augmentation, outperforms all of these other baselines in many cases.

Now, here is a criticism from me: they never investigate why this works; they simply say, wow, this reaches the performance of, or outperforms, these other methods. It's important to note that "state of the art" here refers to the DMControl 100k and 500k benchmarks, which means there is a limit on the number of frames you get from these control tasks: either 100k or 500k. So the difficulty is learning from limited data. This is not the state-of-the-art reinforcement learning method overall; it is the state of the art on this particular task of learning from limited data.

While I can believe that the augmentation helps here, it is completely unclear whether the augmentation gives the same benefits as something like Dreamer, or whether the benefits from Dreamer and the benefits from data augmentation are completely orthogonal. Given how simple the claim is that this paper makes, I would expect an investigation of what happens if you do Dreamer plus the data augmentation. Maybe they've done it somewhere and I just haven't seen it, but it seems like they put this on the basic RL algorithm, claim, well, look here, it works well, and never show that combination. It could be that all of Dreamer's architecture simply recovers the gains that you could also get by data augmentation, or it could be that it actually does something different but just reaches the same amount of improvement, in which case combining the two could improve things further. And not just to get a better number: combining them would give a lot of hints as to whether this augmentation works in line with the other methods, or whether the other methods are really doing something meaningfully different. But this is just not done here.

They then go into the question of which data augmentations contribute the most, and they get to the point where they say random crop is extremely effective. They have this table here where they combine two augmentations at a time. For example, this entry means that you apply grayscale and then the rotate augmentation, and that gets you to about 300 points on this walker; if you apply crop and then crop again, it gets you to 920 points and beats everything else. So they say, okay, crop is the most effective.

And I have the sneaking suspicion that these augmentations are so effective simply because of how we set up these tasks. These reinforcement learning tasks don't tend to be real-world tasks; they tend to be simulated. And as you can see here, the image is pretty clear.
You can pretty clearly see the issue: there's no natural background or anything, it's procedurally generated, and while there are these stars that could confuse the model a bit, the task is still so easy visually that I'm going to guess the whole reason these image augmentations help is simply the way these reinforcement learning tasks are currently set up. I would guess that if we did reinforcement learning in something like the real world, the image augmentation methods would help in about the way they help unsupervised tasks on the same kind of data, for example ImageNet. So that is my sneaking suspicion, and I want to say this paper somewhat overclaims how absolutely great this works. Of course it works great on these tasks, but I think there needs to be an investigation of where and why.

Here they have some attention maps showing where the algorithm focuses. You can see that with no data augmentation it sort of focuses on reasonable points, but when you do crop, it focuses on this ridge here, which makes sense, because that's the part that needs to stay roughly vertical for the walker to be stable. If you use other augmentations, you can see that it doesn't really focus, or focuses on different things. So the crop method seems to make the model focus on the most important part of the image. The same goes for the cheetah task here: with no augmentation, or with some of the augmentations, the model actually focuses on some of these background stars, whereas in the cropped version it focuses not on the stars but on the cheetah as a whole, which probably makes sense.

Again, though, I have a bit of a worry with these kinds of experiments, because we already know that crop will give you a much better score. So who's to say that if we could train the other variants to the same score, they wouldn't be paying attention to the same parts? What they're trying to make clear here is that whether the model gets a better grip on the input depends on the particular type of data augmentation. But it is not really a valid comparison when we know that the crop agent achieves a better score: it could simply be that that's the reason the attention is better, namely that it is actually solving the problem better. Of course, the fact that it works better is due to the crop augmentation, but the fact that it focuses on the correct parts may not be a property of the crop augmentation itself, but rather a property of the fact that it reaches a higher score. That was a long-winded complaint, but I hope you get what I mean.

The last thing they do is investigate generalization performance, specifically improving generalization on OpenAI ProcGen. As I understand it, this is a suite of reinforcement learning tasks with procedurally generated levels: you train on a bunch of levels and then test generalization to new levels that you haven't seen before. There is Jumper here and StarPilot, which seem like jump-and-run games, and BigFish; I don't even know what you have to do in BigFish. But you can see the seen levels here, this is one example, and the unseen ones. In this example, the background is very different, and I'm going to guess that in the Jumper task, not only the background but also the generated level structure, how you have to jump, is quite different.
So they investigate whether an agent trained only on the seen levels can generalize to the unseen ones, and this table presents the results. As you can see, RAD with crop or with other augmentations outperforms the pixel-based PPO. Now, there is some nuance to this table. First of all, the crop augmentation is now the winner in only one of the three tasks, namely BigFish. There is another augmentation technique that wins on StarPilot, although the difference is not that high. And on Jumper with 200 levels, so the columns are 100 and 200 training levels, the plain pixel-based PPO is even the best. Here again, I believe this is evidence that it is very much an interaction of these augmentations with the way the task is set up, and not the augmentations themselves or the mere fact that you're augmenting. For example, in BigFish, what seems to change between seen and unseen levels is mainly the background, whereas in the Jumper example the entire level structure seems to change, and then the augmentation is all of a sudden not super effective anymore and actually hurts. So I'm just not super convinced by the claims they're making here.

One claim in particular, this point down here: RAD with random crop achieves a 55.8% gain over pixel-based PPO, and trained with 100 training levels it outperforms the pixel-based PPO trained with 200 training levels on both the BigFish and StarPilot environments. They write that this shows that data augmentation can be more effective in learning generalizable representations compared to simply increasing the number of training environments. I have a problem with this statement. Again, why compare two different things if you don't show whether they're orthogonal? In fact, they probably are orthogonal, because even with 200 levels you still gain over the pixel-based PPO. So why the comparison? And second of all: on 100 levels this method is better than the pixel-based PPO, and then they claim that it is even better on 100 levels than the pixel-based PPO on 200 levels. Of course that is true: if A is bigger than B, then A is probably also bigger than B plus some epsilon. I just don't think that warrants their statement of "look, this is even better", as if those 100 additional training levels were the standard measure of more data. If you're better to begin with, there is always going to be some amount of extra data for which you're still better than the other method with that extra data. I don't find this super surprising, but they make a big claim out of it.

Alright, so in conclusion, I hope I'm not too harsh on this paper; it is a cool paper with cool findings. But I have a big suspicion that the augmentation works so well here simply because of how we set up these RL tasks: they are visually quite easy, let's say. And these augmentations are also our easy abstractions of when an image counts as visually similar. To us as humans, rotating an image probably doesn't change anything, and that is our prejudice; we built this prejudice into the simulators for the RL tasks, so the tasks match up extremely well with these augmentations.
And that's the reason I believe these things work, and maybe not so much the fact that you're augmenting per se. Okay, well, if you liked this video, I invite you to check out the paper, subscribe to this channel, tell all your friends about it, and leave a like and a comment. Thank you very much and bye bye.
[ { "start": 0, "end": 5.66, "text": " Hi there. Today we're going to take a short look at reinforcement learning with augmented" }, { "start": 5.66, "end": 12.58, "text": " data. This paper is by Michael Laskin, Kimmin Lee, and others from UC Berkeley and NYU." }, { "start": 12.58, "end": 17.080000000000002, "text": " So the reason why this is a short look is because I believe the statements made in the" }, { "start": 17.080000000000002, "end": 25.76, "text": " paper are quite short and small, but they are quite grandiose. So we'll dive into it." }, { "start": 25.76, "end": 32.1, "text": " The paper basically combines two things reinforcement learning and data augmentation. Now reinforcement" }, { "start": 32.1, "end": 36.84, "text": " learning, we've talked about a number of times, it's basically where an agent is in a world" }, { "start": 36.84, "end": 42.56, "text": " and has to learn to solve an optimization problem by repeatedly interacting with the" }, { "start": 42.56, "end": 50.260000000000005, "text": " world. You can see here, for example, this is the walker task, where this walker thing," }, { "start": 50.260000000000005, "end": 55.160000000000004, "text": " it has two feet and basically needs to stand upright and walk for a number of steps. The" }, { "start": 55.16, "end": 59.599999999999994, "text": " further you go, the better. So by repeatedly trying this and getting better and better" }, { "start": 59.599999999999994, "end": 66.88, "text": " at it, that is reinforcement learning. The second part is the data augmentation. Now" }, { "start": 66.88, "end": 73.06, "text": " data augmentation is a pretty standard practice in supervised learning. What does it mean?" }, { "start": 73.06, "end": 78.52, "text": " So if you have a supervised learning task, for example, an image classification task," }, { "start": 78.52, "end": 84.88, "text": " here is a picture of a cat, and the label is cat, then you can feed this through your" }, { "start": 84.88, "end": 92.83999999999999, "text": " neural network to arrive at a loss. But you only have so many pictures. You have a database" }, { "start": 92.83999999999999, "end": 100.56, "text": " and maybe you have, I don't know, 1 million images. Usually what people do is they go," }, { "start": 100.56, "end": 107.44, "text": " let's say a number of times, like 20 or 50 times through that database to basically have" }, { "start": 107.44, "end": 113.47999999999999, "text": " the model learn each image multiple times. But what turns out to be more successful is" }, { "start": 113.48, "end": 119.4, "text": " if you do data augmentation, that means you have an in between layer right here that takes" }, { "start": 119.4, "end": 129.34, "text": " this image and some modifies it in some small way. This could be for example, it blocks" }, { "start": 129.34, "end": 136.68, "text": " out part of the image. So it simply blocks out the square here. And then you feed that" }, { "start": 136.68, "end": 142.22, "text": " through the the model. And then the next time the image comes up, it does something different." }, { "start": 142.22, "end": 148.48, "text": " For example, it randomly crops the image to only the top right part here. And then the" }, { "start": 148.48, "end": 155.16, "text": " next time it does a bit of a color jitter. And then the next time it goes to grayscale" }, { "start": 155.16, "end": 160.36, "text": " and so on. 
So supervised learning has found data augmentation to be quite beneficial because" }, { "start": 160.36, "end": 165.76, "text": " not only do you make the model learn what this picture is, but you also make the model" }, { "start": 165.76, "end": 171.4, "text": " kind of learn some small variations of that picture where you can be pretty sure they" }, { "start": 171.4, "end": 176.22, "text": " would not change the label. So you would not feed the model false information that generally" }, { "start": 176.22, "end": 185.12, "text": " makes it more robust to test time discrepancies. So this paper has basically claims. If you" }, { "start": 185.12, "end": 193.28, "text": " want to do reinforcement learning, if you do simply do data augmentation with the input" }, { "start": 193.28, "end": 199.16, "text": " data to that reinforcement learning, it works much, much, much better. Now, of course, we" }, { "start": 199.16, "end": 203.88, "text": " can expect since in supervised learning, this is a general trick that it would do something" }, { "start": 203.88, "end": 210.54, "text": " for reinforcement learning as well. But this paper basically claims that this one plugin" }, { "start": 210.54, "end": 215.85999999999999, "text": " like here, so this is basically you plug this into your pipeline in reinforcement learning," }, { "start": 215.85999999999999, "end": 225.7, "text": " this is basically as much of a gain as pretty much the last five years of research on on" }, { "start": 225.7, "end": 234.94, "text": " reinforcement learning on these things. So let's dive into it. This paper proposes just" }, { "start": 234.94, "end": 239.6, "text": " what I said, just plug in the data augmentation and then do reinforcement learning on the" }, { "start": 239.6, "end": 245.95999999999998, "text": " augmented data. They use these data augmentations. So crop we've already discussed, it's a random" }, { "start": 245.95999999999998, "end": 253.28, "text": " crop, grayscale means that the picture goes to gray, black and white with a certain probability." }, { "start": 253.28, "end": 259.42, "text": " Cut out means that there's a little patch missing, like I said, cut out color the same" }, { "start": 259.42, "end": 265.82, "text": " but in a random color. Flip means you flip the image horizontally or vertically, according" }, { "start": 265.82, "end": 273.6, "text": " to a random probability. Rotate is the same but you instead of flip you rotate it. Random" }, { "start": 273.6, "end": 280.7, "text": " conv means you randomly convolve it with a filter. In this case, some red or blue or" }, { "start": 280.7, "end": 290.86, "text": " yellow filters. And color jitter means that you kind of jitter around the colors in a" }, { "start": 290.86, "end": 296.7, "text": " sort of in a sort of way that doesn't mess up the image too much. So you basically just" }, { "start": 296.7, "end": 301.53999999999996, "text": " kind of change the colors on the image, but the overall image still looks the same. The" }, { "start": 301.53999999999996, "end": 308.06, "text": " only thing you have to you have to pay attention to is that so in your reinforcement learning" }, { "start": 308.06, "end": 312.82, "text": " pipeline, usually if you have a walker like this, what you want to do is you have your" }, { "start": 312.82, "end": 317.02, "text": " network here and then you have you know, your policy and your value function. 
If you don't" }, { "start": 317.02, "end": 323.66, "text": " know what these are, we'll have we have I've treated them many times in reinforcement learning" }, { "start": 323.66, "end": 329.02, "text": " videos. What you want to do is you simply don't want to take this one current observation" }, { "start": 329.02, "end": 335.06, "text": " in here. But sometimes you want to take kind of the stacked of the last few frames so that" }, { "start": 335.06, "end": 340.6, "text": " the model kind of gets an idea what happened during let's say the last one second, right," }, { "start": 340.6, "end": 345.38, "text": " so we can it can determine in this walker, for example, it's not only important where" }, { "start": 345.38, "end": 351.78, "text": " the legs are, which are up here right now. It is also important their momentum how they're" }, { "start": 351.78, "end": 358.1, "text": " moving right and you can you can you can determine that from the last few frames. So sometimes" }, { "start": 358.1, "end": 363.58, "text": " it's beneficial to feed the last few frames. And they say the important thing here is that" }, { "start": 363.58, "end": 368.9, "text": " these augmentations are applied consistently across the stacked frames. So basically you" }, { "start": 368.9, "end": 374.41999999999996, "text": " select on an augmentation and on the scale of that augmentation and then you apply it" }, { "start": 374.41999999999996, "end": 380.06, "text": " to these stacked frames all the same. And then in the next forward pass, you have a" }, { "start": 380.06, "end": 385.02, "text": " different set of stacked frames, then you can decide on a on a different augmentation." }, { "start": 385.02, "end": 388.9, "text": " So that's basically the only difference between the supervised setting and this setting is" }, { "start": 388.9, "end": 397.5, "text": " that you have to consistently apply the augmentation. And you have to consistently apply this here" }, { "start": 397.5, "end": 406.14, "text": " and during training. So they formulate the classic proximal policy optimization here," }, { "start": 406.14, "end": 413.17999999999995, "text": " which is an actor critic method. And the only time you have to really pay attention is when" }, { "start": 413.18, "end": 421.38, "text": " you plug the observation into these models here, right here, then it needs to be the" }, { "start": 421.38, "end": 427.54, "text": " same augmentation. Sorry, the same observation. So that means the observation augmented with" }, { "start": 427.54, "end": 438.18, "text": " the same data with the same augmentation procedures. All right, getting it together. Cool. So when" }, { "start": 438.18, "end": 444.14, "text": " you do this, when you do that, they say when applying our ad, which is the random random" }, { "start": 444.14, "end": 454.02, "text": " data augmentation to SAC, which is soft actor critic, right? Our data augmentations are" }, { "start": 454.02, "end": 459.86, "text": " applied to the observation past the Q and pi. So sorry, this is the thing up here. This" }, { "start": 459.86, "end": 463.74, "text": " is soft actor critic, which is a state of the art of policy algorithm for continuous" }, { "start": 463.74, "end": 469.1, "text": " control problems. And also you have to pay attention that when you feed the observations," }, { "start": 469.1, "end": 475.14, "text": " they're the same observations, like here and here. 
And then proximal policy optimization" }, { "start": 475.14, "end": 479.7, "text": " is the one is a state of the art on policy algorithm for learning a continuous or discrete" }, { "start": 479.7, "end": 492.76, "text": " control policy. Okay, so as I said, they simply drop this in there. And then it turns out" }, { "start": 492.76, "end": 502.7, "text": " they outperform or match performance of many, many baselines. Here you can see curl, I've" }, { "start": 502.7, "end": 510.14, "text": " made a video on curl, which is another way of augmenting or pre training a for reinforcement" }, { "start": 510.14, "end": 516.46, "text": " learning, then state of the art things like planet or dreamer, I've made a video on dreamer" }, { "start": 516.46, "end": 522.3199999999999, "text": " as well. And then pixel SAC and state SAC is sort of a cheating algorithm because it" }, { "start": 522.32, "end": 528.12, "text": " has access to the state whereas all the other methods only have access to the to the pixels." }, { "start": 528.12, "end": 534.86, "text": " And you can see that the data augmentation method, which is basically just plain RL plane" }, { "start": 534.86, "end": 544.7, "text": " pure SAC plus the plus the data augmentation outperforms in many times all of these other" }, { "start": 544.7, "end": 553.46, "text": " baselines. Now, here is a criticism of me. In order they never investigate, they simply" }, { "start": 553.46, "end": 560.0600000000001, "text": " say wow, this reaches the same performance or outperforms these other methods. Now, so" }, { "start": 560.0600000000001, "end": 564.58, "text": " it's the state of the art algorithm. It's important to note here that this is on the" }, { "start": 564.58, "end": 572.3000000000001, "text": " DM control 100k and 500k benchmarks, which means that there's a limit on the number of" }, { "start": 572.3, "end": 578.66, "text": " I believe frames from these control tasks that you get. So you either get 100k or you" }, { "start": 578.66, "end": 585.02, "text": " get 500k frames. So the difficulty is learning from limited data. It's not state of the art" }, { "start": 585.02, "end": 589.9399999999999, "text": " reinforcement learning method. Overall, it is the state of the art on this particular" }, { "start": 589.9399999999999, "end": 596.14, "text": " task on learning from limited data. Now, while I can believe that the augmentation would" }, { "start": 596.14, "end": 605.56, "text": " help here, I it is completely unclear whether or not the augmentation gives the same benefits" }, { "start": 605.56, "end": 610.9399999999999, "text": " as like something like dreamer, or whether the benefits from dreamer and the benefits" }, { "start": 610.9399999999999, "end": 617.34, "text": " from data augmentation are completely orthogonal. So in this paper, given that the claim is" }, { "start": 617.34, "end": 622.74, "text": " so simple that they make, I wouldn't I would expect like an investigation, what happens" }, { "start": 622.74, "end": 631.1, "text": " if I do dreamer plus the data augmentation? Maybe they've done it somewhere, and I just" }, { "start": 631.1, "end": 638.46, "text": " haven't seen it. But it just seems like they, they put this on the base basic RL algorithm," }, { "start": 638.46, "end": 645.0600000000001, "text": " and then they claim, well, look here, it works well, but they never show that. 
So it could" }, { "start": 645.0600000000001, "end": 650.84, "text": " be that dreamer all this architecture, what it simply does is basically recover these" }, { "start": 650.84, "end": 656.26, "text": " gains that you could get by data augmentation, or it could be that it actually does something" }, { "start": 656.26, "end": 662.6600000000001, "text": " different, but just reaches the same amount of gain, right, it just reaches the same amount" }, { "start": 662.6600000000001, "end": 668.3000000000001, "text": " in improvement. And by combining them, you could improve it further. So not not just" }, { "start": 668.3000000000001, "end": 673.14, "text": " to get like a better number, but combining the two would actually give a lot of hints" }, { "start": 673.14, "end": 679.2800000000001, "text": " as to whether or not this augmentation works in line with the other methods, or whether" }, { "start": 679.28, "end": 684.38, "text": " the other methods are really doing something meaningfully different or not. But this is" }, { "start": 684.38, "end": 695.14, "text": " just not done here. And so they go into the they go into a question of which data augmentations" }, { "start": 695.14, "end": 703.9399999999999, "text": " contribute the most, and they get to the point where they say random crop is extremely effective." }, { "start": 703.94, "end": 710.22, "text": " So they have this table here where they just basically combine two augmentations. And do" }, { "start": 710.22, "end": 715.1, "text": " you see, so for example, this thing here means that you apply grayscale, and then the rotate" }, { "start": 715.1, "end": 721.48, "text": " augmentation. And that gets you to whatever 300 points in this Walker, if you apply crop," }, { "start": 721.48, "end": 728.84, "text": " and then crop, it gets you to 920 points and beats everything else. So they they say, okay," }, { "start": 728.84, "end": 738.0400000000001, "text": " crop is the most the most effective. And I have I have the sneaking suspicion that these" }, { "start": 738.0400000000001, "end": 743.44, "text": " augmentations are so effective, simply because of how we set up these tasks, right, these" }, { "start": 743.44, "end": 747.7, "text": " reinforcement learning tasks, they don't tend to be a real world, they tend to be somewhat" }, { "start": 747.7, "end": 755.02, "text": " simulated. And as you can see here, the image is pretty clear. So you can pretty clearly" }, { "start": 755.02, "end": 759.9, "text": " see that here is the thing, there's no natural background or whatnot, it's procedurally generated," }, { "start": 759.9, "end": 766.86, "text": " right, there are these stars that could confuse the model a bit. But still, it is so easy" }, { "start": 766.86, "end": 773.14, "text": " visually, this task, that I'm going to guess the whole reason why these image augmentations" }, { "start": 773.14, "end": 780.12, "text": " help is simply because of the way these reinforcement learning tasks right now are set up. And I'm" }, { "start": 780.12, "end": 786, "text": " would guess that if we had reinforcement learning in something like the real world, that the" }, { "start": 786, "end": 792.42, "text": " image augmentation methods would help in about the way they help unsupervised tasks in in" }, { "start": 792.42, "end": 801.46, "text": " the same data, for example, image net. So that is my sneaking suspicion. 
And this paper," }, { "start": 801.46, "end": 810.04, "text": " I want to say it sort of over claims, it's how how absolutely great this works. Of course," }, { "start": 810.04, "end": 814.9, "text": " it works great on these things. But I think there needs to be an investigation of where," }, { "start": 814.9, "end": 821.14, "text": " why. So here they have some attention maps on where the algorithm focuses. And you can" }, { "start": 821.14, "end": 827.14, "text": " see when there's no data augmentation, it sort of focuses on good points. But when you" }, { "start": 827.14, "end": 833.06, "text": " do crop, it focuses on this ridge here, which makes sense, right, because that's the thing" }, { "start": 833.06, "end": 841.4599999999999, "text": " that needs to be kind of vertical in order for the walker to be stable. And in if you" }, { "start": 841.4599999999999, "end": 848.66, "text": " do other things, then you can see it, it doesn't really focus, it focuses on different things." }, { "start": 848.66, "end": 857.06, "text": " So the crop method seems to make the model focus on the most important part of the image." }, { "start": 857.06, "end": 863.14, "text": " And as the same with the cheetah task here, so if you don't do augmentation, and some" }, { "start": 863.14, "end": 868.14, "text": " of the augmentation, you can see that it actually focuses on some of these background stars," }, { "start": 868.14, "end": 875.3, "text": " whereas in the cropped version, it focuses on not on the stars, but actually on the cheetah" }, { "start": 875.3, "end": 882.6199999999999, "text": " as a whole, which probably makes sense. Now, again, I have a bit of a I have a bit of a" }, { "start": 882.62, "end": 888.26, "text": " worry with these kinds of experiments, because we already know that crop will give you a" }, { "start": 888.26, "end": 894.66, "text": " much better score, right? So who's to say that if we could train this thing here to" }, { "start": 894.66, "end": 902.3, "text": " the same score, it wouldn't be paying attention to the same part. What they're trying to make" }, { "start": 902.3, "end": 907.62, "text": " clear here is that it is dependent on the particular type of data augmentation that" }, { "start": 907.62, "end": 915.62, "text": " the model gets a better grip on the input. But it is not really a valid comparison if" }, { "start": 915.62, "end": 924.9, "text": " we know that the crop agent performs a better score. And it could simply be that that's" }, { "start": 924.9, "end": 931.7, "text": " the reason why the attention is better, right? That that it is actually solving the problem" }, { "start": 931.7, "end": 938.46, "text": " better. So I mean, of course, this the fact that it's working better is due to the fact" }, { "start": 938.46, "end": 944.4200000000001, "text": " that you have crop augmented the data, but the fact that is focusing on the correct parts" }, { "start": 944.4200000000001, "end": 950.4200000000001, "text": " is not a property of the crop augmentation, but the property of the fact that it reaches" }, { "start": 950.4200000000001, "end": 961.34, "text": " a higher score. That was a long winded complaint, but I hope you get what I mean here. The last" }, { "start": 961.34, "end": 967.1800000000001, "text": " thing they do is they investigate generalization performance. So improving generalization on" }, { "start": 967.1800000000001, "end": 973.9, "text": " this open AI proc gen. 
Now, as I understand it, this is a reinforcement learning task" }, { "start": 973.9, "end": 980.6600000000001, "text": " or suite of tasks where you have procedurally generated levels. So you can sort of train" }, { "start": 980.6600000000001, "end": 988.1600000000001, "text": " on a bunch of levels, and then test the generalization to new levels that you haven't seen before." }, { "start": 988.16, "end": 994.4, "text": " So there's a jumper here and star pilot. So they seem like this like a jump and run game" }, { "start": 994.4, "end": 999.3399999999999, "text": " or big fish. I don't even know what you have to do in big fish. But you can see that the" }, { "start": 999.3399999999999, "end": 1007.26, "text": " levels seen here, this is one example, and unseen. So in this example, the background" }, { "start": 1007.26, "end": 1012.8199999999999, "text": " is very different. And I'm going to guess in the jumper thing, not only is the background" }, { "start": 1012.82, "end": 1019.3000000000001, "text": " but also the kind of generated level how you have to jump is quite different. So they investigate" }, { "start": 1019.3000000000001, "end": 1027.5800000000002, "text": " whether or not a agent trained on only the seen ones can generalize to the unseen ones." }, { "start": 1027.5800000000002, "end": 1036.5800000000002, "text": " And this table presents the results. And as you can see, the RAD with the crop or with" }, { "start": 1036.58, "end": 1046.1399999999999, "text": " other things outperform the pixel paste based PPOs. Now, there is some nuance to this table" }, { "start": 1046.1399999999999, "end": 1054.22, "text": " here. First of all, you can see that these this crop thing is now only the winner in" }, { "start": 1054.22, "end": 1062.1799999999998, "text": " one of these three tasks, right in the in the big fish thing. In there is another augmentation" }, { "start": 1062.18, "end": 1067.8600000000001, "text": " technique here that wins over at star pilot. But you can see the difference is not that" }, { "start": 1067.8600000000001, "end": 1078.26, "text": " high. And in the dnd jumper with 200 levels, so this is 100 or 200 levels, the the original" }, { "start": 1078.26, "end": 1086.3400000000001, "text": " method is even the best. So here again, I believe this is evidence that that it is very" }, { "start": 1086.34, "end": 1092.62, "text": " much an interaction of these augmentations with the way the task is set up and not the" }, { "start": 1092.62, "end": 1098.02, "text": " augmentations themselves or the fact that you're augmenting. For example, if we look" }, { "start": 1098.02, "end": 1105.26, "text": " at this big fish, we've seen, okay, the what seems to change here is mainly the background," }, { "start": 1105.26, "end": 1115.62, "text": " where as in the jumper example, the entire level structure seems to change. So" }, { "start": 1115.62, "end": 1122.3799999999999, "text": " then the the augmentation all of a sudden is not super effective anymore actually hurts," }, { "start": 1122.3799999999999, "end": 1127.5, "text": " right? So I'm just not I'm just not super convinced by the claims you're making here." }, { "start": 1127.5, "end": 1134.4599999999998, "text": " And one of the claims I find is, in particular, rad with random crop achieves, no wait, this" }, { "start": 1134.4599999999998, "end": 1145.26, "text": " point down here. Oh, yeah, achieves 55.8% gain over pixel based PPO. Okay. 
trained with" }, { "start": 1145.26, "end": 1151.42, "text": " 100 training levels outperforms the pixel based PPO with 200 training levels on both" }, { "start": 1151.42, "end": 1159.26, "text": " big fish and star pilot environment. This shows that data augmentation can be more effective" }, { "start": 1159.26, "end": 1164.58, "text": " in learning generalizable representations compared to simply increasing the number of" }, { "start": 1164.58, "end": 1172.1, "text": " training environments. I this statement. So again, how, like, why do you compare two different" }, { "start": 1172.1, "end": 1179.02, "text": " things if you don't? Like, if you don't show that maybe they're orthogonal. In fact, they" }, { "start": 1179.02, "end": 1185.5, "text": " are probably orthogonal because even on the 200 levels you you gain over the pixel based" }, { "start": 1185.5, "end": 1195.6999999999998, "text": " PPO, right? So why the comparison and then second of all, so here we see on the 100 levels," }, { "start": 1195.6999999999998, "end": 1201.62, "text": " this method is better than the pixel based PPO. And then they claim that okay, they are" }, { "start": 1201.62, "end": 1210.3, "text": " even better on 100 levels than the pixel based PPO on 200 levels. And why that is true. If" }, { "start": 1210.3, "end": 1220.2199999999998, "text": " you know, if if if a is bigger than b, then probably a is going to be bigger than b plus" }, { "start": 1220.22, "end": 1231.78, "text": " some epsilon. And right, and and that doesn't, I just think that doesn't really warrant their" }, { "start": 1231.78, "end": 1239.34, "text": " statement where they say, Oh, look, this is even better. So as if the 100 levels of additional" }, { "start": 1239.34, "end": 1246.58, "text": " training were the standard measure of more data, like if there is going to be if you're" }, { "start": 1246.58, "end": 1250.8999999999999, "text": " better at the beginning, there's going to be a certain amount of data where you're still" }, { "start": 1250.8999999999999, "end": 1258.6599999999999, "text": " better than the other method with more data. And I don't find this super duper surprising," }, { "start": 1258.6599999999999, "end": 1266.1399999999999, "text": " but they make a big claim here out of this. Alright, so in conclusion, I hope I'm not" }, { "start": 1266.1399999999999, "end": 1270.6599999999999, "text": " too harsh on this paper, it is a cool paper. And of course, it is cool findings. But I" }, { "start": 1270.66, "end": 1276.94, "text": " have a big suspicion that the augmentation here works so well simply because of how we" }, { "start": 1276.94, "end": 1283.78, "text": " set up these RL tasks, because they're visually quite, let's say easy. And therefore, these" }, { "start": 1283.78, "end": 1290.18, "text": " augmentations that are also our sort of easy abstractions of when an image is visually" }, { "start": 1290.18, "end": 1296.02, "text": " similar, because all of these things, right, to us as humans, we say, probably doesn't" }, { "start": 1296.02, "end": 1302.26, "text": " change anything if we just rotate the image. And we this is our prejudice. And we built" }, { "start": 1302.26, "end": 1308.86, "text": " this prejudice into these simulators for the RL tasks. So they will match up extremely" }, { "start": 1308.86, "end": 1314.58, "text": " well with these augmentations. 
And that's the reason I believe these things work and" }, { "start": 1314.58, "end": 1322.9, "text": " maybe not as much the the fact that you're augmenting. Okay, well, if you like this video," }, { "start": 1322.9, "end": 1328.02, "text": " I invite you to check out the paper, subscribe to this channel, tell all your friends about" }, { "start": 1328.02, "end": 1354.26, "text": " it and leave a like in the comment. Thank you very much and bye bye." } ]
rl4nUngiR2k
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
BLEURT: Learning Robust Metrics for Text Generation (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "nlp", "natural language processing", "mt", "machine translation", "transformer", "bert", "lstm", "attention", "wmt", "wikipedia", "backtranslation", "bleu", "rouge", "ngrams", "score", "metric", "comparison", "human raters", "google", "google research", "automatic", "overlap", "distribution shift" ]
Proper evaluation of text generation models, such as machine translation systems, requires expensive and slow human assessment. As these models have gotten better in previous years, proxy scores, like BLEU, are becoming less and less useful. This paper proposes to learn a proxy score and demonstrates that it correlates well with human raters, even as the data distribution shifts. OUTLINE: 0:00 - Intro & High-Level Overview 1:00 - The Problem with Evaluating Machine Translation 5:10 - Task Evaluation as a Learning Problem 10:45 - Naive Fine-Tuning BERT 13:25 - Pre-Training on Synthetic Data 16:50 - Generating the Synthetic Data 18:30 - Priming via Auxiliary Tasks 23:35 - Experiments & Distribution Shifts 27:00 - Concerns & Conclusion Paper: https://arxiv.org/abs/2004.04696 Code: https://github.com/google-research/bleurt Abstract: Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution. Authors: Thibault Sellam, Dipanjan Das, Ankur P. Parikh Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hello there! Today we'll look at BLEURT: Learning Robust Metrics for Text Generation by Thibault Sellam, Dipanjan Das and Ankur P. Parikh. So this paper, on a high level, proposes a new metric for text generation tasks, such as machine translation, by leveraging a BERT model to produce an automated quality metric. And they make this BERT model robust by pre-training it on a very wide array of tasks for which they can use synthetic data. Therefore the model, and the resulting score, is very robust to shifts in distribution, and they advocate that this could be used in the future to assess text generation systems. Alright, as always, if you like content like this, consider subscribing, sharing it out and leaving a like. Tell YouTube that it's good content. Of course, only if you agree. So what's the problem with evaluation for text generation? If you know the machine translation community, basically what they do is they have these datasets where they translate from one language into another, let's say English to French, and they have a training dataset that is fairly okay-ishly large. And then they somehow need to evaluate this. So you have a test dataset, but all you can really do is calculate the perplexity of the language model or translation model that you produce. There's not really a metric for translation, so the gold standard is to give it to humans. So you train on this dataset and you produce a program, your machine translation program, from the data. You let this run on your evaluation dataset and you give the results to a bunch of human raters. These could be regular people, or linguists that are experts in translation in both languages. They will score each of the outputs of the machine translation systems, and at the end you will get a number, like eight: your system is eight good. The problem, of course, is that this process is very, very slow. The machine translation community does this every year, and it's quite slow and quite expensive, as it requires these humans to assess all of these systems' outputs. And you want a sizable amount of output, right? Because you want a good sample of the machine translation system's behavior. So this is not really satisfactory, but an automated score like perplexity is also not satisfactory. What people have done is come up with proxy scores for the humans, and two of those scores are called ROUGE and BLEU. Specifically, BLEU is one of these metrics that people use, and it takes n-grams in the sentences. N-grams would be snippets of, let's say, four words after one another. You take these snippets that the machine translation system produces, then you go into the validation dataset and look at the gold standard translation, which was also produced by humans, look at its snippets of size four as well, and just assess how many of the snippets overlap. Of course the machine translation system has never seen the label, the gold standard, for that particular example, otherwise it wouldn't be fair. But you basically compare n-grams of the output and some gold standard; you can have multiple gold standards and so on.
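To make the n-gram overlap idea concrete, here is a minimal sketch of a BLEU-like clipped n-gram precision in Python. It is a simplification, not the full BLEU (which combines several n-gram orders and adds a brevity penalty), and the example sentences are made up:

from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_precision(candidate, reference, n=4):
    # Each candidate n-gram counts at most as often as it occurs
    # in the reference (the "clipping" in BLEU).
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    return overlap / max(sum(cand.values()), 1)

reference = "the cat sat on the mat".split()
candidate = "the cat sat on a mat".split()
print(clipped_precision(candidate, reference, n=2))  # 0.6 bigram overlap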
So this BLEU metric is more of a heuristic, and it has been found to correlate fairly well with the humans, up until recently, of course, with the explosion of neural machine translation, especially transformer-based machine translation, but also systems that use LSTMs with attention. These systems have become extremely good. I don't know if you've noticed, but Google Translate has been getting better and better really fast. I remember the first years of Google Translate, when people still made fun of it. I don't think many people make fun of it now; at least it's not a meme anymore. So the better these systems became, the more these metrics like BLEU and ROUGE have diverged from the humans, and they're not really reliable anymore, especially if you compare really high-skill systems to each other. BLEU tends to not correlate well with humans, and therefore we're looking for a new metric: a metric that correlates well with humans but can be evaluated automatically. This paper here proposes this BLEURT. Can we just stop with the variants on BERT? We get it, you use BERT for everything, but you know. They say it's a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. What happens in these cases is that the creation of a metric becomes a machine learning task itself. What you'll have is a dataset of gold standard translations by humans together with the outputs of machine translation systems. You put them together: you have the gold standard sentence, which would be the optimal translation, you have whatever the machine translation produced, and then you have a human look at it and create a score, like this 8 right here. It says these two sentences match 8 good, so maybe 8 out of 10: the bottom thing is a very good translation to match the top thing. The human assesses the quality of the sample. Now you have a training dataset. You have an x, which is the gold standard reference, an x tilde, whatever the machine produced, and then y, the human rating, is the label. Your task is now: given x and x tilde, predict whatever the human would say about this. If you can collect a few of these samples from different machine translation systems, then you can make a dataset out of this and formulate a machine learning task. That's exactly what these competitions have done. It's like a meta competition: a competition for designing the best metrics for the other competitions. Of course the difficulty here is that the dataset isn't static, because if you come up with a metric such as BLEU, let's say you come up with a better BLEU, you would want these other tasks to use it in the next years as well. The thing about metrics is you need to be able to compare to previous years and so on. You would want a metric that is still valid for other years, for other slightly different tasks, and also for other machine translation systems. If you just learn on data from this year's competitions, then in five years all of these models will have become so much better, and they'll produce different output, and that's the difficulty: your metric should still be valid at that point. This paper basically deals with the question: can you learn such a metric from data that exists at one point in time such that it will be robust to shifts in distribution? In five years the machine translation systems are better.
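As a sketch of this framing: the training data is just triples of reference, candidate, and human rating, and the metric is a function fitted to them. The example pairs and scores below are invented for illustration:

# Hypothetical metric-learning dataset: reference, system output, human rating.
ratings_data = [
    ("The weather is nice today.", "Today the weather is nice.", 0.9),
    ("The weather is nice today.", "The nice is weather today.", 0.2),
]

# The learning task: fit f(reference, candidate) -> rating, e.g. by
# minimizing squared error over the collected human judgments.
def mean_squared_error(f, data):
    return sum((f(ref, cand) - y) ** 2 for ref, cand, y in data) / len(data)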
They maybe use different language constructs to translate certain things, because they assess those as better. Can you still make a good judgment about which of these systems is better than the other? Can you still assess how humans would rate these systems? They're saying that they found a method to do this, this BLEURT. And not only have they found a method, their method only uses a few thousand possibly biased training examples. They do this via a new pre-training scheme. They say a key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. Why is it important that it only uses a few thousand training examples? Because these are generated by humans, and humans are expensive. It's not like ImageNet, where you do it once and have it for 20 years. This is done year after year, and you need real experts, like translation experts. This is expensive. So the fewer of these actual training examples the method needs, the better. They circumvent this by using millions of synthetic examples to help the model generalize, and they do this in a pre-training step. BLEURT provides state-of-the-art results on the last three years of the WMT metrics shared task, which is this meta task where you're asked to come up with a metric for the other tasks, and on the WebNLG competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out of distribution. Let's have a look at what they do. They ask: what do we need to do to fine-tune BERT for quality evaluation? If you don't know what BERT is, it's basically a model that takes in a bunch of text, and it's a transformer; I've made a video on it if you want to check that out. As outputs you get a sequence of vectors, but most of the time only the first one is important, and you use it in a subsequent task. For example, if you want to do classification, you would put a classification layer on top to classify it into certain classes; if you want to do regression, you can do other things with these outputs. For this particular paper only this CLS output is relevant. You would input this pair of gold standard and machine output, and the output would be a Y, which is either a number or a class label; in this case it's a number. You input these two things, out comes this whole sequence, you take the CLS output vector and you put it through a linear layer, weights and bias, and that outputs a number Y. This number Y you train to be as close as possible to what the human would say about this pair X, the gold standard, and X tilde, the output of the system. This is what you would do if you were simply going about it naively: just take BERT, which is really good at language, and train it on this dataset. However, fine-tuning BERT requires a sizable amount of IID data, which we don't have in these tasks, and which is less than ideal for a metric that should generalize to a variety of tasks and model drift. The problem with just applying BERT here is that you don't have enough data, and it won't be a robust solution; it will only work for this particular dataset that you train it on. The solution, they say, is to pre-train on synthetic data. What does that mean? They say the key aspect of our approach is a pre-training technique that we use to warm up BERT before fine-tuning on the rating data.
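A minimal sketch of this regression head in PyTorch with the Hugging Face transformers library; the checkpoint name and setup are illustrative assumptions, not the paper's exact configuration:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(bert.config.hidden_size, 1)  # w^T * CLS + b

def predict_score(reference, candidate):
    # Encode the pair as one sequence: [CLS] reference [SEP] candidate [SEP].
    inputs = tokenizer(reference, candidate, return_tensors="pt")
    cls_vec = bert(**inputs).last_hidden_state[:, 0]  # the CLS vector
    return head(cls_vec).squeeze(-1)

# Fine-tuning regresses this output onto the human rating y, e.g.:
# loss = torch.nn.functional.mse_loss(predict_score(ref, cand), y)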
You might know BERT training, which is where you do this masked language model pre-training: if you're given a piece of text, you drop out a couple of words, like this one and this one, and ask BERT to reconstruct it, like a denoising autoencoder. That way BERT learns about language. They're not saying you should replace that. What they're saying is: first you should do this masked language model pre-training, second you should do their synthetic pre-training, and third you should do the fine-tuning. In the naive approach you would skip step two. Their claim is that by introducing this step two, you can be a lot better and a lot more robust: you're already exposed to information in this step that will make you more robust to distribution shifts in the fine-tuning data later. I've advocated for this step to be called priming, because otherwise you always have to say: okay, I want to pre-train BERT, but I don't mean pre-pre-training; this is already called pre-training; I want to pre-train after pre-training. So I just vote for this to be called priming. If you come up with stuff like this, probably you've heard it somewhere, so I guess I might not be the inventor of this, but it is a good-sounding word and it sort of fits. They say: we generate a large number of synthetic reference-candidate pairs. What they're going to do is take a bunch of text, in their case I think it's Wikipedia. For each Wikipedia article they're going to draw sentences or paragraphs; these are going to be your Z. Then they're going to muddle with them a bit, disturb them a bit to make them a bit different, to turn them into Z tilde. This simulates the difference between what the machine translation outputs and the gold standard sentence: they're usually not exactly the same; if you translate a sentence, there are many ways you can do it. Their goal is to produce a dataset that has sentences and perturbed versions of the sentences, perturbed not randomly but in a language-knowledgeable way. How do they do this? They have three different ways. First of all, mask filling with BERT: they take a pre-trained BERT that can do language modeling, take a piece of text, drop out two words or so, and fill them in again with BERT. BERT might produce the same words, or it might produce slightly different words. Depending on how many you drop out, you can choose the amount by which you perturb these sentences. The second is back translation: they use a machine translation model, and it doesn't matter which one you take, to map a sentence to another language, say from English to French. This is Z French, and then they map it back again; the Z tilde is now the French-to-English translation. You need two translation models: first you translate to French, then you translate back again. That would sometimes give you the same sentence, but often it will give you a paraphrase of the sentence you had at the beginning. That would be the second way to make pairs of sentences that are sort of similar. The third way is just to drop out words; they simply found this to help.
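A rough sketch of how such perturbed pairs could be generated. Word dropout is concrete; mask filling uses the Hugging Face fill-mask pipeline (the checkpoint is an illustrative choice); back translation is only stubbed in a comment, since it needs two translation models:

import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def word_dropout(sentence, p=0.15):
    # Perturbation 3: randomly drop words.
    tokens = sentence.split()
    kept = [t for t in tokens if random.random() > p]
    return " ".join(kept) if kept else sentence

def mask_fill(sentence):
    # Perturbation 1: mask one random word and let BERT fill it in,
    # possibly with a different word.
    tokens = sentence.split()
    i = random.randrange(len(tokens))
    tokens[i] = fill_mask.tokenizer.mask_token
    return fill_mask(" ".join(tokens))[0]["sequence"]

# Perturbation 2 (back translation) would pipe the sentence through an
# en->fr model and back through fr->en, keeping the round-trip paraphrase
# as z_tilde.

z = "The quick brown fox jumps over the lazy dog."
print(word_dropout(z), "|", mask_fill(z))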
Now they have a giant dataset of sentences and perturbed versions of sentences. What are they going to do with it? The answer is: they're going to take this Z and Z tilde and put them into BERT, into this thing that they now prime. This is the priming stage; the model was pre-trained on masked language modeling, and now they want to prime it. What are they going to do? They're going to take this CLS vector. Of course this is not the final task, and we don't have final labels for these two things, so we need to somehow come up with our own tasks and labels for them. They decide to go with a whole bunch of different tasks. They construct different tasks to perform with these two things. These could be metrics like BLEU or ROUGE or this BERTscore right here: you simply calculate the n-gram overlap between Z and Z tilde, and that would be one of the scores. It could be the back-translation likelihood, which is how likely a back-translation model assesses the sentence to be. Here are all the things: six different tasks. The catch here is this: what would happen, for example with BLEU, is you would take a model and calculate the BLEU score between those two things, but you wouldn't input that into BERT. You would ask BERT to predict the BLEU score. BERT would be outputting B hat, and B would be the actual BLEU score, and you would train BERT to predict the BLEU score of this particular pair of inputs, where one you take as the input and the other one you take as the reference. You would ask BERT to predict the BLEU score of this, to predict the ROUGE score, all of these signals. You ask the same model to predict all of these scores for these two things. You can calculate all of these scores: BLEU is like a script you run, or you have some other model, like a pre-trained translation model, that you ask how good this pair is in terms of that particular task, say back translation, and then you try to predict that score. It's important that you're not training the model to perform these tasks. For these tasks you already have other models that are specialized, and you simply ask them to score the input. You have an entailment model that outputs by how much the second sentence entails the first, which basically means: does the second sentence follow from the first? Of course this is not actually proper input data for that task, but you can still ask the model, and if these are good translations of each other, if these sentences match, then the second one should probably follow fairly well from the first. At least, if you make BERT predict that, it will learn something useful about the relation between the two sentences. So the entire name of the game here is to come up with tasks such that, if BERT learns to predict the scores of these tasks on those two inputs, pretending one is the input and the other one is the output, then BERT will have learned something useful.
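A sketch of this priming objective as multi-task regression over precomputed signals. The head design and dimensions are assumptions for illustration, and the random tensors stand in for BERT's CLS vectors and the specialized scorers' outputs:

import torch

class PrimingHeads(torch.nn.Module):
    # One linear head per pre-training signal, all reading the CLS vector.
    def __init__(self, hidden_size, signal_names):
        super().__init__()
        self.heads = torch.nn.ModuleDict(
            {name: torch.nn.Linear(hidden_size, 1) for name in signal_names})

    def forward(self, cls_vec):
        return {name: head(cls_vec).squeeze(-1)
                for name, head in self.heads.items()}

signals = ["bleu", "rouge", "bertscore", "backtrans_likelihood", "entailment"]
heads = PrimingHeads(hidden_size=768, signal_names=signals)

# targets[name] holds the precomputed score of (z, z_tilde) under task `name`;
# BERT is trained to predict them, not to perform the tasks itself.
cls_vec = torch.randn(32, 768)                        # stand-in for BERT output
targets = {name: torch.rand(32) for name in signals}  # stand-in scores
preds = heads(cls_vec)
loss = sum(torch.nn.functional.mse_loss(preds[n], targets[n]) for n in signals)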
So that's the trick here: come up with these pre-training tasks and train them all at the same time. By doing it all at the same time, and by doing it on many, many different perturbations and on these different tasks, you hope that your model learns something, that it becomes attuned to the variations that language can have and to what it needs to pay attention to. And then you hope that, if you take this model and do step three, which is fine-tuning on the actual data you have, it becomes very good at that data, but also retains all of these abilities and generalizes better to other distribution shifts. So that is the idea here, and on these metric-learning tasks they do outperform all other models right here. What I find interesting is down here, where they now test for the distribution shift. What they're saying is: okay, this is all on data where we train on training data and evaluate on testing data that are sort of the same; they come from the same year and the same machine translation models. We don't really know whether our scores still hold when next year's machine translation models are different. So they try to simulate this by splitting the data, and they introduce this skew factor. Usually, as you can see right here, the ratings, these are the human ratings, are distributed such that the test data and the training data are almost overlapping; you can see the dotted lines right here, the overlap between the test and the train human ratings is very close. Now they say: we can skew that, we can filter the data such that in the training data there are only very bad sentences, and in the test data there are only very good sentences. This simulates the fact that this might be the previous year's data that we train our metric on, and then we evaluate it on the next year's data, where all the systems have become better. What this does, you can see right here: the bottom axis is the test skew, and the color here is the training skew. What we're interested in is to the right and down the colors. As these skews increase, you can see right here that the quality of the metric decreases: the correlation with the human ratings decreases. It still remains fairly high, but especially with the training skew, if you make the training examples really bad, so to say, the score just drops down. And they can show pretty well here that if you add this pre-training, then the score, except in this extreme case, remains relatively high for all of these, and especially remains above the BLEU score, which is always sort of worse, right? So this is pretty neat and shows the power of this pre-training; that's the robustness-to-quality-drift metric. They have a bunch of other metrics right here where they ablate and so on, but I don't want to go too much into that. I'd rather make some comments on this work. First of all, in a paper like this, what I would like to see is the extrapolation right here, to whether and where this ever crosses the BLEU score, because, I mean, okay, a skew of three seems like a big number, but who knows if three is a big number? We can't really assess that.
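The skew construction can be sketched as a simple filter on the human ratings; the thresholds and example scores here are made up for illustration:

def skew_split(examples, train_max=0.3, test_min=0.7):
    # Each example is (reference, candidate, human_rating in [0, 1]).
    # Keep badly rated pairs for training and well rated pairs for testing,
    # simulating a metric trained on last year's weaker systems.
    train = [e for e in examples if e[2] <= train_max]
    test = [e for e in examples if e[2] >= test_min]
    return train, test

data = [("ref a", "cand a", 0.1), ("ref b", "cand b", 0.5), ("ref c", "cand c", 0.9)]
train, test = skew_split(data)
print(len(train), len(test))  # -> 1 1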
What we need to see is really where the crossover point between the models is, to assess where the metric is no longer valid, and so on. The second part is my problem with this method of splitting the data. Yes, okay, you split the bad from the good, but it's not only that these things are getting better. Right now everyone's using transformers, everyone's using BERT for everything, and BERT is a specific architecture that is going to be good at specific things, at specific grammatical constructs in specific languages, so the mistakes it makes are very systematic. Now if in one or two years all of a sudden a new model pops up, I don't know, someone discovers that graph neural networks are really good at machine translation, those models are going to be attuned to a very different set of constructs. They might be better overall, but they're going to make a different sort of mistakes. So I think just assessing the skew by dividing up the data into bad and good ratings doesn't cover the distribution shifts that they set out to cover. What I would have expected, because these tasks are repeated year after year, is for them to, for example, train on 2017 and then evaluate on 2019, or evaluate on 2017, 2018 and 2019; there we would have a much better assessment of a distribution shift over the years. So it is not super convincing to me. And what is most worrisome: if you look at their pre-training tasks, okay, there is BLEU and ROUGE, but there is BERTscore, right? There is entailment, which is also a BERT model, and the back translation, who knows, is probably either going to be a transformer or an LSTM with an attention mechanism, and the attention mechanism is the basis for transformers. So all of these things are basically going to make the same sort of biased mistakes. It's not like there is Gaussian noise on top of these things; all of these things are going to be weak in the same sort of assessments, and they're going to have systematic errors in them with respect to predicting the human scores. And the systems we evaluate are also using exactly that: they're using the same type of models as here, so they're going to fall prey to the same type of mistakes. Then, if we switch over to systems that use different techniques next year, those are going to be maybe not bad in these particular things, but bad in other things, and then this metric will output a systematically biased assessment. So it's sort of a house of cards: if you've seen these images of plugging the power strip into itself to get infinite power, it's like that. To me it seems very dangerous to have such an overlap between the architectures and methods used to evaluate systems and the systems themselves. But I hope this will be regularly checked against human scores and assessed as to how much these metrics are out of sync or in sync with humans. Alright, this was it for me for BLEURT. Check it out, they have the code available, the metric is available, evaluate your stuff with it, and bye bye.
[ { "start": 0, "end": 6.08, "text": " Hello there! Today we'll look at BLERT, Learning Robust Metrics for Text Generation by" }, { "start": 6.08, "end": 12.6, "text": " Thibaut Salam, Tipanjan Das and Ankur P. Parikh. So this paper on a high level" }, { "start": 12.6, "end": 18, "text": " proposes a new metric for text generation tasks such as machine translation by" }, { "start": 18, "end": 24.240000000000002, "text": " leveraging a BERT model to produce like an automated metric, an automated quality" }, { "start": 24.24, "end": 30.479999999999997, "text": " metric. And they make this BERT model robust by pre-training it on a very wide" }, { "start": 30.479999999999997, "end": 36.32, "text": " array of tasks that they can use synthetic data to train it. And therefore" }, { "start": 36.32, "end": 43.16, "text": " the model and the resulting score is very robust to shifts in distribution and" }, { "start": 43.16, "end": 47.599999999999994, "text": " they advocate that this could be used in the future to assess text generation" }, { "start": 47.599999999999994, "end": 53, "text": " systems. Alright, as always, if you like content like this, consider subscribing" }, { "start": 53, "end": 59.96, "text": " and sharing it out and leaving a like. Tell YouTube that it's good content. Of" }, { "start": 59.96, "end": 66.08, "text": " course only if you agree. So what's the problem with evaluation for text" }, { "start": 66.08, "end": 71.56, "text": " generation? So if you know the machine translation community, basically what they" }, { "start": 71.56, "end": 75.76, "text": " do is they have these datasets where they translate from one language into" }, { "start": 75.76, "end": 82.56, "text": " another. Let's say English to French. And they have a training dataset that is" }, { "start": 82.56, "end": 89.88, "text": " fairly okay-ishly large. And then they somehow need to evaluate this. So" }, { "start": 89.88, "end": 94.8, "text": " you have like a test dataset, but all you can really do is sort of calculate the" }, { "start": 94.8, "end": 98.84, "text": " perplexity of a language model that you produce or of a translation model that" }, { "start": 98.84, "end": 103.84, "text": " you produce. There's not really a metric for translation, so the gold standard is" }, { "start": 103.84, "end": 108.6, "text": " to get it to humans. So you train on this dataset, you produce a program." }, { "start": 108.6, "end": 113.36, "text": " This is your machine translation program that you produce from the data. And you" }, { "start": 113.36, "end": 118.6, "text": " let this run on your evaluation dataset and you give the results to a" }, { "start": 118.6, "end": 123.32, "text": " bunch of human raters. These could be regular people, these could be linguists" }, { "start": 123.32, "end": 129.76, "text": " that are experts in translation in both languages. And they will score each" }, { "start": 129.76, "end": 134.07999999999998, "text": " of the outputs of the machine translation systems and at the end you" }, { "start": 134.08, "end": 139.32000000000002, "text": " will get like a number, like eight. Your system is eight good. The problem of" }, { "start": 139.32000000000002, "end": 144.20000000000002, "text": " course is this process is very very slow. 
So the machine translation community does" }, { "start": 144.20000000000002, "end": 148.56, "text": " this every year and it's quite slow and it's quite expensive as you know" }, { "start": 148.56, "end": 153.36, "text": " it requires these humans here to assess all of these systems output. And you want" }, { "start": 153.36, "end": 157.84, "text": " a sort of a sizable output, right? Because you want sort of a good sample" }, { "start": 157.84, "end": 165.16, "text": " of the machine translation system. So this is not really satisfactory but like" }, { "start": 165.16, "end": 169.52, "text": " an automated score like perplexity is also not satisfactory. What people have" }, { "start": 169.52, "end": 173.92000000000002, "text": " done is they've come up with proxy scores for the humans and two of those" }, { "start": 173.92000000000002, "end": 180.92000000000002, "text": " scores are called rouge and blue. And specifically blue is one of these" }, { "start": 180.92000000000002, "end": 187.08, "text": " metrics that people use and it kind of takes n-grams in the sentences. So" }, { "start": 187.08, "end": 191.8, "text": " n-grams would be like snippets of like let's say four words after one another" }, { "start": 191.8, "end": 196.16000000000003, "text": " and there would be these snippets and that the machine translation system" }, { "start": 196.16000000000003, "end": 201.12, "text": " produces and then it would go into the validation data set and look at the gold" }, { "start": 201.12, "end": 205.24, "text": " standard translation that it was also produced by humans. And it would also" }, { "start": 205.24, "end": 210.08, "text": " look at these snippets of size four and it would just kind of assess how many of" }, { "start": 210.08, "end": 214.36, "text": " the snippets overlap. Of course the machine translation system has never" }, { "start": 214.36, "end": 219.36, "text": " seen the label like the gold standard for that particular set otherwise it" }, { "start": 219.36, "end": 226.24, "text": " wouldn't you know be fair. But you basically compare n-grams of output and" }, { "start": 226.24, "end": 230.56, "text": " gold and some gold standard. You can have multiple gold standards and so on. So" }, { "start": 230.56, "end": 235.60000000000002, "text": " this blue metric is more of like a heuristic and it has been found to" }, { "start": 235.60000000000002, "end": 240.44000000000003, "text": " correlate fairly well with the humans up until recently of course with the" }, { "start": 240.44, "end": 245.28, "text": " explosion of neural machine translation and especially transformer based machine" }, { "start": 245.28, "end": 250.56, "text": " translation I guess and but also the their system that use LSTM with" }, { "start": 250.56, "end": 254.96, "text": " attention. These systems have become extremely extremely good. I don't know if" }, { "start": 254.96, "end": 261.12, "text": " you notice but Google Translate has been getting better and better really fast. I" }, { "start": 261.12, "end": 265.72, "text": " remember the the first years of Google Translate when people still made fun of" }, { "start": 265.72, "end": 271.12, "text": " it. I don't think many people make fun of it now. At least it's not a meme anymore." 
}, { "start": 271.12, "end": 278.16, "text": " So the better and better these systems were the more these" }, { "start": 278.16, "end": 285.32000000000005, "text": " metrics like BLÖ and RÖS have diverged from the humans and they're not really" }, { "start": 285.32000000000005, "end": 290.40000000000003, "text": " reliable anymore especially if you compare really high skill systems to" }, { "start": 290.4, "end": 296.76, "text": " each other. BLÖ tends to not correlate well with humans and therefore we're" }, { "start": 296.76, "end": 301.52, "text": " looking for a new metric. A new metric that correlates well with humans but can" }, { "start": 301.52, "end": 311.79999999999995, "text": " be evaluated automatically. This paper here proposes this BLÖRT. Can we" }, { "start": 311.79999999999995, "end": 316.76, "text": " just stop with the variance on BÖRT? We get it, you use BÖRT for everything but you" }, { "start": 316.76, "end": 324.2, "text": " know. They say it's a learned evaluation metric based on BÖRT that can" }, { "start": 324.2, "end": 331.71999999999997, "text": " model human judgments with a few thousand possibly biased training examples." }, { "start": 331.71999999999997, "end": 341.59999999999997, "text": " What you would do in these cases is now the creation of a metric becomes" }, { "start": 341.6, "end": 350.16, "text": " a machine learning task itself. What you'll have is a data set of" }, { "start": 350.16, "end": 356.96000000000004, "text": " things that are gold standard translations by humans. You will have the" }, { "start": 356.96000000000004, "end": 361.64000000000004, "text": " output of the machine translation system. You put them together so you have" }, { "start": 361.64000000000004, "end": 365.48, "text": " the gold standard sentence. This would be the optimal translation." }, { "start": 365.48, "end": 368.96000000000004, "text": " You'll have whatever the machine translation produced and then you'll" }, { "start": 368.96, "end": 374.84, "text": " have a human look at it and create a score like this 8 right here. It says" }, { "start": 374.84, "end": 382.59999999999997, "text": " these two sentences match 8 good. So 8 maybe it's out of 10." }, { "start": 382.59999999999997, "end": 387.56, "text": " This bottom thing is a very good translation for the top thing," }, { "start": 387.56, "end": 393, "text": " to match the top thing. The human assesses the quality of the sample." }, { "start": 393, "end": 402.68, "text": " Now you have a training data set. You have a z and z tilde or something or y." }, { "start": 402.68, "end": 410.76, "text": " They call this y which is the gold standard label. This is y tilde," }, { "start": 410.76, "end": 418.28, "text": " whatever the machine produced and or x tilde and then y is the label." }, { "start": 418.28, "end": 424.44, "text": " Your task is now given x and x tilde predict whatever the human would say" }, { "start": 424.44, "end": 431, "text": " about this. If you can collect a few of these samples right here of" }, { "start": 431, "end": 436.47999999999996, "text": " different machine translation systems then you can formulate, you can make a" }, { "start": 436.47999999999996, "end": 442.11999999999995, "text": " data set out of this and formulate a machine learning task. That's" }, { "start": 442.11999999999995, "end": 447.11999999999995, "text": " exactly what these competitions have done. It's like a meta competition." 
}, { "start": 447.12, "end": 452, "text": " It's a competition for designing the best metrics of the other" }, { "start": 452, "end": 457.68, "text": " competitions. Of course the difficulty here is that the data set" }, { "start": 457.68, "end": 462.52, "text": " isn't static because if you come up with a metric such as blue, let's say you come" }, { "start": 462.52, "end": 468.2, "text": " up with a better blue, you would want these other tasks to use it in" }, { "start": 468.2, "end": 472.24, "text": " the next years as well. The thing about metrics is you need to be able" }, { "start": 472.24, "end": 476.72, "text": " to compare to previous years and so on. You would want a metric that is" }, { "start": 476.72, "end": 481.84000000000003, "text": " still valid for other years and other slightly different" }, { "start": 481.84000000000003, "end": 486.8, "text": " tasks and also for other machine translation systems. If you just learn" }, { "start": 486.8, "end": 495.48, "text": " on data from this year's competitions and in five years all of" }, { "start": 495.48, "end": 498.8, "text": " these models will have become so much better and they'll produce different" }, { "start": 498.8, "end": 504.20000000000005, "text": " output and that's the difficulty. Your metric should still be valid at that" }, { "start": 504.2, "end": 511, "text": " point. This paper basically deals with the fact that can you" }, { "start": 511, "end": 516.84, "text": " learn such a metric from data that exists at one point in time that will" }, { "start": 516.84, "end": 522.72, "text": " be robust to shifts in distribution. In five years the machine translation" }, { "start": 522.72, "end": 526.72, "text": " systems are better. They maybe use different language constructs to" }, { "start": 526.72, "end": 531.4, "text": " translate certain things because they assess that better. Can you still" }, { "start": 531.4, "end": 535.68, "text": " make a good judgment about which of these systems is better than" }, { "start": 535.68, "end": 540.84, "text": " the other system? Can you still assess how humans would rate these systems?" }, { "start": 540.84, "end": 549.24, "text": " They're saying that they found the method to do this. This blurt, as" }, { "start": 549.24, "end": 554.48, "text": " they said, not only have they found the method but their method only uses a" }, { "start": 554.48, "end": 561.12, "text": " few thousand possibly biased training examples. They do this via a new" }, { "start": 561.12, "end": 565.6, "text": " pre-training scheme. They say a key aspect of our approach is a novel" }, { "start": 565.6, "end": 570.84, "text": " pre-training scheme that uses millions of synthetic examples to help the model" }, { "start": 570.84, "end": 575.4, "text": " generalize. Why is it important that it only uses a few thousand training" }, { "start": 575.4, "end": 581.2, "text": " examples? Because these are generated by humans and humans are expensive." }, { "start": 581.2, "end": 589.08, "text": " It's not like ImageNet. You do it once, you have it for for 20 years." }, { "start": 589.08, "end": 594.88, "text": " This is done year after year and you need real experts like translation experts." }, { "start": 594.88, "end": 600.88, "text": " This is expensive. The fewer of these actual training examples that the" }, { "start": 600.88, "end": 609.12, "text": " thing can be efficient on, the better. 
They circumvent this by using" }, { "start": 609.12, "end": 613.6, "text": " millions of synthetic examples to help the model generalize. They do this in a" }, { "start": 613.6, "end": 619.6, "text": " pre-training step. Blurt provides state-of-the-art results on the" }, { "start": 619.6, "end": 624.48, "text": " last three years of the WMT metrics shared tasks. This is this" }, { "start": 624.48, "end": 630.6, "text": " meta task where you're asked to come up with a metric for the other tasks." }, { "start": 630.6, "end": 635.72, "text": " The WebNLG competition dataset, in contrast to a vanilla bird-based" }, { "start": 635.72, "end": 639.8000000000001, "text": " approach, yields superior results even when the training data is scarce and out" }, { "start": 639.8, "end": 648.28, "text": " of distribution. Let's have a look at what they do." }, { "start": 648.28, "end": 655.1999999999999, "text": " They say what do we need to do to fine-tune BERT for quality evaluation?" }, { "start": 655.1999999999999, "end": 661.68, "text": " If you don't know what BERT is, it's basically a model that takes" }, { "start": 661.68, "end": 667.4799999999999, "text": " in a bunch of text. You have a bunch of text and then it's a model that is a" }, { "start": 667.48, "end": 673.08, "text": " transformer. I've made a video on it if you want to check that out." }, { "start": 673.08, "end": 679.04, "text": " As outputs you get a sequence of vectors, but important most of the" }, { "start": 679.04, "end": 684.52, "text": " time is only the first one, which you then use in a subsequent task. For" }, { "start": 684.52, "end": 688.36, "text": " example, if you want to do classification, you would put a" }, { "start": 688.36, "end": 693.76, "text": " classification layer on top to classify it into certain classes. If you want to do" }, { "start": 693.76, "end": 699.08, "text": " regression, you can do other things with these outputs right here," }, { "start": 699.08, "end": 708.28, "text": " but for this particular paper only this CLS output is relevant." }, { "start": 708.28, "end": 714.48, "text": " You would input this pair of gold standard and output of the machine and the" }, { "start": 714.48, "end": 721.36, "text": " output would be an output Y, which is either a number or a class label." }, { "start": 721.36, "end": 731.76, "text": " In this case it's a number. You input these two things and" }, { "start": 731.76, "end": 739.04, "text": " out comes this whole sequence. You take the CLS output vector and you" }, { "start": 739.04, "end": 743.6800000000001, "text": " put it through a linear layer, weights and bias, and that would output" }, { "start": 743.6800000000001, "end": 750.52, "text": " a number Y. The number Y you train to be as close as possible to what" }, { "start": 750.52, "end": 756.76, "text": " the human would say about this pair X, the gold standard, and X tilde, the" }, { "start": 756.76, "end": 763.12, "text": " output of the system. This is what you would do if you were simply going" }, { "start": 763.12, "end": 768.4, "text": " about it. Just take BERT, take the model, so BERT is really good at language," }, { "start": 768.4, "end": 776.28, "text": " take the model and train it on this data set. However, fine-tuning BERT" }, { "start": 776.28, "end": 781.36, "text": " requires a sizable amount of IID data. 
We don't have that in these tasks," }, { "start": 781.36, "end": 786.4399999999999, "text": " which is less than ideal for a metric that should generalize to a variety of" }, { "start": 786.4399999999999, "end": 792.72, "text": " tasks and model drift. The problem with just applying BERT here is that" }, { "start": 792.72, "end": 797.88, "text": " you don't have enough data and it won't be a robust solution." }, { "start": 797.88, "end": 802.92, "text": " It will only work for this particular data set that you train it on. The" }, { "start": 802.92, "end": 808.68, "text": " solution they say is you pre-train on synthetic data. What does that mean?" }, { "start": 808.68, "end": 817.36, "text": " They say the key aspect of our approach is a pre-training technique that we use" }, { "start": 817.36, "end": 823.8, "text": " to warm up BERT before fine-tuning on the rating data. You might know BERT" }, { "start": 823.8, "end": 829.8, "text": " training, which is where you do this masked language model pre-training." }, { "start": 829.8, "end": 834.5999999999999, "text": " If you are given a piece of text, let's say you're given this piece of text" }, { "start": 834.5999999999999, "end": 838.3199999999999, "text": " right here, what you would do is you would drop out a couple of words like" }, { "start": 838.3199999999999, "end": 842.9599999999999, "text": " this one and this one and ask BERT to reconstruct it, like a denoising" }, { "start": 842.9599999999999, "end": 849.2199999999999, "text": " autoencoder. That way BERT learns about language in this" }, { "start": 849.2199999999999, "end": 853.9399999999999, "text": " particular way. They're not saying you should replace that. What they're" }, { "start": 853.94, "end": 859.84, "text": " saying is first you should do this masked language model pre-training," }, { "start": 859.84, "end": 867.08, "text": " second you should do their synthetic pre-training and third you should do the" }, { "start": 867.08, "end": 876.32, "text": " fine-tuning. In the naive approach you would skip this step" }, { "start": 876.32, "end": 881.1600000000001, "text": " too. Their claim is that by introduction of this step too, that you could be a" }, { "start": 881.16, "end": 886.9599999999999, "text": " lot better and a lot more robust. You're already" }, { "start": 886.9599999999999, "end": 892.6, "text": " exposed to information in this step that will make you more robust to" }, { "start": 892.6, "end": 899.48, "text": " distribution shifts in this fine-tuning data later. I've" }, { "start": 899.48, "end": 903.7199999999999, "text": " advocated for this step to be called" }, { "start": 903.72, "end": 911.32, "text": " priming. Because otherwise you always have to say, okay I" }, { "start": 911.32, "end": 915.36, "text": " want to pre-train BERT but I don't mean pre-pre-training, like I don't" }, { "start": 915.36, "end": 920.64, "text": " mean this. This is already called pre-training. I want to pre-train after" }, { "start": 920.64, "end": 929.1, "text": " pre-train, so I just vote for this to be called priming. I have no idea." }, { "start": 929.1, "end": 933, "text": " If you come up with stuff like this, probably you've heard it somewhere." }, { "start": 933, "end": 938.92, "text": " I guess I might not be the inventor of this, but it is a good sounding word and" }, { "start": 938.92, "end": 946.32, "text": " it sort of fits. They say we generate a large number of synthetic" }, { "start": 946.32, "end": 950.88, "text": " reference candidate pairs. 
What they're going to do is take a" }, { "start": 950.88, "end": 956.68, "text": " bunch of text and in their case I think it's Wikipedia. For each Wikipedia" }, { "start": 956.68, "end": 963.16, "text": " article they're going to draw" }, { "start": 963.16, "end": 968.56, "text": " sentences or samples or paragraphs from Wikipedia. These are going to be" }, { "start": 968.56, "end": 976.8399999999999, "text": " your Z and then they're going to kind of muddle with them a bit. They're going to" }, { "start": 976.8399999999999, "end": 982.4799999999999, "text": " disturb them a bit, to make them a bit different, to make them go Z tilde." }, { "start": 982.48, "end": 987.24, "text": " This simulates the difference between what the machine translation" }, { "start": 987.24, "end": 991.64, "text": " outputs and the gold standard sentence. They're usually not exactly the same," }, { "start": 991.64, "end": 995.16, "text": " if you translate a sentence there are many ways you can do it. Their goal" }, { "start": 995.16, "end": 1001.44, "text": " is to produce a data set that has sentences and perturbed versions" }, { "start": 1001.44, "end": 1006.6, "text": " of the sentence, but not perturbed randomly, but perturbed in a" }, { "start": 1006.6, "end": 1012.5600000000001, "text": " language knowledgeable way. How do they do this?" }, { "start": 1012.5600000000001, "end": 1020.6, "text": " They have three different ways. First of all mask filling with BERT. What" }, { "start": 1020.6, "end": 1024.4, "text": " they're doing is they take a BERT that can do language modeling, a" }, { "start": 1024.4, "end": 1028.84, "text": " pre-trained BERT. Let's again say we have this text right here and they" }, { "start": 1028.84, "end": 1035.04, "text": " simply drop out two words or so and fill them in again with BERT. BERT might" }, { "start": 1035.04, "end": 1039.24, "text": " produce the same words or it might produce slightly different words." }, { "start": 1039.24, "end": 1043.2, "text": " Depending on how many you drop out you can choose the amount that you" }, { "start": 1043.2, "end": 1053.8, "text": " perturb these sentences. The second is they back translate. What they do" }, { "start": 1053.8, "end": 1059.8799999999999, "text": " with back translation is they use a machine translation model. It doesn't" }, { "start": 1059.88, "end": 1066.72, "text": " matter which one you take. They use any machine translation model to take a" }, { "start": 1066.72, "end": 1072.92, "text": " sentence and then they map it to another language, say from English to" }, { "start": 1072.92, "end": 1081.8000000000002, "text": " French. This is Z French and then they map it back again. The Z tilde is" }, { "start": 1081.8000000000002, "end": 1087.16, "text": " now the French to English translation. You need two translation models. First" }, { "start": 1087.16, "end": 1092.1200000000001, "text": " you translate it to French and then you translate it back again. That would" }, { "start": 1092.1200000000001, "end": 1095.44, "text": " sometimes give you the same sentence but often it will give you sort of a" }, { "start": 1095.44, "end": 1101.52, "text": " paraphrase of the sentence that you had at the beginning. That would be" }, { "start": 1101.52, "end": 1107.92, "text": " the second version that you could make pairs of sentences that are sort of" }, { "start": 1107.92, "end": 1113.24, "text": " similar. The third way is just to drop out words. They just found this to" }, { "start": 1113.24, "end": 1119.88, "text": " help. 
Now they have a giant data set of sentences and perturbed versions of" }, { "start": 1119.88, "end": 1125.48, "text": " sentences. What are they going to do with that giant data set? The answer is" }, { "start": 1125.48, "end": 1132.52, "text": " they're going to take this Z and Z tilde and you're going to put that into BERT" }, { "start": 1132.52, "end": 1138.8, "text": " into their thing that they prime now. This is the priming stage. This" }, { "start": 1138.8, "end": 1141.84, "text": " was pre-trained on mask language modeling. Now they want to prime it. What" }, { "start": 1141.84, "end": 1146.32, "text": " are they going to do? They're going to take this CLS vector. Of course this" }, { "start": 1146.32, "end": 1151.48, "text": " is not the final task and we don't have final labels for these two things." }, { "start": 1151.48, "end": 1157.6399999999999, "text": " We need to somehow come up with our own tasks and labels for them. They" }, { "start": 1157.6399999999999, "end": 1166.8, "text": " decide to go a whole bunch of tasks. They go like... I don't even" }, { "start": 1166.8, "end": 1172.56, "text": " know. They go eight or so or five or so different tasks. They construct different" }, { "start": 1172.56, "end": 1179.28, "text": " tasks to perform with these two things. This could be metrics like BLÖ or" }, { "start": 1179.28, "end": 1185.12, "text": " RÖSCH or this BERT score right here. You simply calculate the n-gram overlap" }, { "start": 1185.12, "end": 1191.24, "text": " between Z and Z' that would be one of the scores. It could be the back" }, { "start": 1191.24, "end": 1196.3999999999999, "text": " translation likelihood which is how likely does a back translation model" }, { "start": 1196.4, "end": 1202.72, "text": " assess the sentence. Here are all the things. Six different tasks." }, { "start": 1202.72, "end": 1210.24, "text": " The catch here is... What would happen for example with" }, { "start": 1210.24, "end": 1217.0400000000002, "text": " BLÖ is you would take a model and you would calculate the BLÖ score between" }, { "start": 1217.0400000000002, "end": 1221.8400000000001, "text": " those two things. But you wouldn't input that into BERT. You would ask BERT to" }, { "start": 1221.84, "end": 1229.9599999999998, "text": " predict the BLÖ score. BERT would be outputting B hat and B would be" }, { "start": 1229.9599999999998, "end": 1235.72, "text": " the actual BLÖ score. You would train BERT to predict the BLÖ score of this" }, { "start": 1235.72, "end": 1242.28, "text": " particular pair of inputs. One you take as the input and the" }, { "start": 1242.28, "end": 1247.52, "text": " other one you take as the reference. You would ask BERT to predict the" }, { "start": 1247.52, "end": 1252.24, "text": " BLÖ score of this. To predict the RÖSCH score. You would ask all of these" }, { "start": 1252.24, "end": 1257.72, "text": " signals. You ask the same model. You ask to predict all of these scores" }, { "start": 1257.72, "end": 1262.76, "text": " for these two things. You can calculate all of these scores by either" }, { "start": 1262.76, "end": 1269.36, "text": " BLÖ is like a script you run or you have some other model like a pre-trained" }, { "start": 1269.36, "end": 1276.76, "text": " translation model that you use to assess the... that you ask how good is this in" }, { "start": 1276.76, "end": 1282.76, "text": " terms of this particular task back translation and then you try to predict" }, { "start": 1282.76, "end": 1287.44, "text": " that score. 
It's important you're not training the model to perform these" }, { "start": 1287.44, "end": 1293.36, "text": " tasks. These tasks you already have another model that's specialized to" }, { "start": 1293.36, "end": 1299.56, "text": " these particular tasks and you simply ask them to score the input. You have" }, { "start": 1299.56, "end": 1304.52, "text": " an entailment model that outputs how much by how much does the second sentence" }, { "start": 1304.52, "end": 1308.84, "text": " entail the first that basically means does the second sentence follow from" }, { "start": 1308.84, "end": 1314.72, "text": " the first and of course this is not you know it's not actually proper input data" }, { "start": 1314.72, "end": 1318.72, "text": " for that task but you can still ask the model and if these are good" }, { "start": 1318.72, "end": 1322.56, "text": " translations of each other if these sentences match then the second one" }, { "start": 1322.56, "end": 1328.6399999999999, "text": " should probably follow fairly well for the first but at least you can if you" }, { "start": 1328.6399999999999, "end": 1333.08, "text": " make BERT predict that it will learn something useful about the relation" }, { "start": 1333.08, "end": 1338.36, "text": " between the two sentences. So the entire game name of the game in here is to come" }, { "start": 1338.36, "end": 1344.1599999999999, "text": " up with tasks that if BERT learns to predict the score of these tasks on" }, { "start": 1344.1599999999999, "end": 1350.4399999999998, "text": " those inputs sorry on pretending one is the input and the other one is the" }, { "start": 1350.4399999999998, "end": 1356.3999999999999, "text": " output or on the two inputs and then trying to predict the score then BERT" }, { "start": 1356.4, "end": 1364, "text": " would learn something useful. So that's the trick here is to" }, { "start": 1364, "end": 1368.4, "text": " come up with these pre-training tasks and you train them all at the same time" }, { "start": 1368.4, "end": 1373.24, "text": " and by doing it all at the same time and by doing it on many many different" }, { "start": 1373.24, "end": 1377.3600000000001, "text": " perturbations on these different tasks you hope that your model learns" }, { "start": 1377.3600000000001, "end": 1384.2800000000002, "text": " something it's kind of becoming attuned to the variations that language can have" }, { "start": 1384.28, "end": 1389.52, "text": " and what it needs to pay attention to and then you hope that if you then have" }, { "start": 1389.52, "end": 1395.04, "text": " done this then take this model and then do step three which is fine-tuning on" }, { "start": 1395.04, "end": 1399.2, "text": " the actual data you have you would guess that it becomes very good at that data" }, { "start": 1399.2, "end": 1406.6, "text": " but also it retains all of these abilities and generalizes better to" }, { "start": 1406.6, "end": 1416.6799999999998, "text": " other sort of distribution shifts. 
So that is the thing here and" }, { "start": 1416.6799999999998, "end": 1424.24, "text": " on this metric learning tasks they do outperform all other models" }, { "start": 1424.24, "end": 1431.24, "text": " right here and what I find interesting is down here where they now test for" }, { "start": 1431.24, "end": 1443.68, "text": " the distribution shift so what they're saying is okay this is all on" }, { "start": 1443.68, "end": 1448.96, "text": " data basically where you know we train on training data and evaluate on testing" }, { "start": 1448.96, "end": 1452.2, "text": " data and they're sort of the same they come from the same year from the same" }, { "start": 1452.2, "end": 1458.52, "text": " machine translation models and we don't really know how you know next" }, { "start": 1458.52, "end": 1461.8799999999999, "text": " year the machine translation models might be different thus our scores still" }, { "start": 1461.8799999999999, "end": 1468.68, "text": " hold so they try to simulate this by splitting the data and they introduce" }, { "start": 1468.68, "end": 1474.24, "text": " this skew factor so what they'll do is they'll split the data so usually as you" }, { "start": 1474.24, "end": 1478.98, "text": " can see right here the training date the ratings these are the human ratings the" }, { "start": 1478.98, "end": 1487.76, "text": " training data is sort of distributed like this would be the test data and the" }, { "start": 1487.76, "end": 1491.92, "text": " training data would almost be overlapping that if you can see like" }, { "start": 1491.92, "end": 1499.16, "text": " the dotted lines right here or so so you can see the overlap between the test and" }, { "start": 1499.16, "end": 1504.2, "text": " the trained of the human ratings is very close now they say we can we can skew" }, { "start": 1504.2, "end": 1511.28, "text": " that we can sort of filter the data such that in the training data only very bad" }, { "start": 1511.28, "end": 1517.2, "text": " sentences are and in the test data there are only very good sentences okay and" }, { "start": 1517.2, "end": 1522.56, "text": " this simulates the fact that you know we this might be the previous year's data" }, { "start": 1522.56, "end": 1527.56, "text": " that we train our metric on and then we we evaluate it on the next year's data" }, { "start": 1527.56, "end": 1534.72, "text": " where all the systems have become better and what this does is you can see right" }, { "start": 1534.72, "end": 1542.48, "text": " here the bottom axis is the test skew and the color here is the training skew" }, { "start": 1542.48, "end": 1550.68, "text": " okay so what interests what what we're interested in is to the right and down" }, { "start": 1550.68, "end": 1557.72, "text": " the colors so as these skew increases you can see right here that the the" }, { "start": 1557.72, "end": 1562.52, "text": " quality of the metric decreases okay the correlation with the human ratings" }, { "start": 1562.52, "end": 1570.96, "text": " decreases but it it still remains fairly well but especially the training skew if" }, { "start": 1570.96, "end": 1575.6000000000001, "text": " you update the train so if you make the training examples really bad so to say" }, { "start": 1575.6000000000001, "end": 1580.72, "text": " it the score just drops down and they can show pretty well here that if you" }, { "start": 1580.72, "end": 1587.04, "text": " add this pre training then the score except in this extreme case so the" }, { "start": 1587.04, "end": 1591.48, "text": " 
score for all of these it remains relatively high and especially remains" }, { "start": 1591.48, "end": 1598.52, "text": " above the BLEU score which is always sort of worse right so this is is pretty" }, { "start": 1598.52, "end": 1607.08, "text": " as pretty neat and shows this power of this pre training basically that's" }, { "start": 1607.08, "end": 1612, "text": " that's the the robustness to quality drift metric and they have a bunch of" }, { "start": 1612, "end": 1617.32, "text": " other metrics right here where they ablate and so on but I don't want to go" }, { "start": 1617.32, "end": 1624.48, "text": " too much into that I more want to make some comments on on this work so what" }, { "start": 1624.48, "end": 1629.4, "text": " what I think so first of all in a paper like this what what I would like to see" }, { "start": 1629.4, "end": 1637.28, "text": " is like the extrapolation right here to if and where this ever crosses the the" }, { "start": 1637.28, "end": 1642.92, "text": " BLEU score right because I mean okay it seems like yeah this skew of three is a" }, { "start": 1642.92, "end": 1647.96, "text": " is a big number but who knows if three is a big number right who like we can't" }, { "start": 1647.96, "end": 1652.3600000000001, "text": " assess that what we need to see is really the where the crossover point" }, { "start": 1652.36, "end": 1658.1999999999998, "text": " between the models to assess where does it where is it no longer valid and so on" }, { "start": 1658.1999999999998, "end": 1663.84, "text": " the second part here is that my problem with this method of splitting the data I" }, { "start": 1663.84, "end": 1669.08, "text": " mean yes okay you split the bad from the good but it's not it's not only that" }, { "start": 1669.08, "end": 1674.32, "text": " these things are getting better so right now everyone's using transformers for" }, { "start": 1674.32, "end": 1677.84, "text": " everyone's using BERT for everything right and BERT is a specific" }, { "start": 1677.84, "end": 1682.9599999999998, "text": " architecture that is going to be good at specific things at specific grammatical" }, { "start": 1682.9599999999998, "end": 1688.1599999999999, "text": " constructs in specific languages right so it's the mistakes it makes are very" }, { "start": 1688.1599999999999, "end": 1693.76, "text": " systematic now if in one year or two years all of a sudden the a new model" }, { "start": 1693.76, "end": 1698.6, "text": " pops up I don't know like someone discovers that graph neural networks are" }, { "start": 1698.6, "end": 1702.8, "text": " really good at machine translation these models are going to be attuned to a very" }, { "start": 1702.8, "end": 1708.08, "text": " very different set of construct they might be better overall but they're" }, { "start": 1708.08, "end": 1712.8799999999999, "text": " going to make a different sort of mistake and so I think just assessing" }, { "start": 1712.8799999999999, "end": 1719.04, "text": " the skew via just dividing up the data in into bad and good ratings I don't" }, { "start": 1719.04, "end": 1724.1599999999999, "text": " think that covers these distribution shifts that they set out to cover right" }, { "start": 1724.1599999999999, "end": 1728.9199999999998, "text": " what I would have expected is something like them because these tasks are" }, { "start": 1728.92, "end": 1735.04, "text": " repeated year after year and I would have expected them to for example train" }, { "start": 1735.04, "end": 1741.8400000000001, "text": " on 
2017 and then evaluate on 2019 or something like or show like evaluate on" }, { "start": 1741.8400000000001, "end": 1748.4, "text": " 2017 2018 and 2019 and there we would have a much better assessment of a" }, { "start": 1748.4, "end": 1755.5600000000002, "text": " distribution shift over the years right so so it is not super convincing to me" }, { "start": 1755.56, "end": 1760.8799999999999, "text": " and what is most worrisome is if you look at their pre training tasks I mean" }, { "start": 1760.8799999999999, "end": 1767.84, "text": " okay there is a there's BLEU and ROUGE but there is BERT score right there is" }, { "start": 1767.84, "end": 1772.96, "text": " entailment which is also a BERT model and the back translation I mean who" }, { "start": 1772.96, "end": 1778.1599999999999, "text": " knows that's probably either going to be a transformer or a LSTM with an" }, { "start": 1778.1599999999999, "end": 1782.96, "text": " attention mechanism which is the attention mechanism is the basis for" }, { "start": 1782.96, "end": 1788.44, "text": " transformers so all of these things are basically going to make the same sort of" }, { "start": 1788.44, "end": 1794.68, "text": " bias mistakes right they're doing to to it's not it's not like there is Gaussian" }, { "start": 1794.68, "end": 1800.16, "text": " noise on top of these things all of these things are going to be weak in the" }, { "start": 1800.16, "end": 1805.16, "text": " same sort of assessments and not agree with like they're going to have" }, { "start": 1805.16, "end": 1809.8, "text": " systematic errors in them with respect to them predicting the human scores and" }, { "start": 1809.8, "end": 1817.24, "text": " if we evaluate our systems that some are also using exactly that thing right so" }, { "start": 1817.24, "end": 1823.32, "text": " these the systems we evaluate they are using the same type of models as here" }, { "start": 1823.32, "end": 1827.8799999999999, "text": " they're going to fall prey to the same type of mistakes and then if we switch" }, { "start": 1827.8799999999999, "end": 1833.24, "text": " over to systems that use some different right so next year we have some systems" }, { "start": 1833.24, "end": 1840.6, "text": " that use different techniques they're going to be like exactly maybe not bad" }, { "start": 1840.6, "end": 1845.2, "text": " in these particular things but bad in other things and then this thing will" }, { "start": 1845.2, "end": 1849.96, "text": " output a systematically biased assessment so it's sort of a house of" }, { "start": 1849.96, "end": 1855.28, "text": " like if you've seen these images of plugging in the power strip into itself" }, { "start": 1855.28, "end": 1860.48, "text": " and you have infinite power it's like it's very to me it seems very dangerous" }, { "start": 1860.48, "end": 1867.2, "text": " to have as such an overlap of architectures and methods to evaluate" }, { "start": 1867.2, "end": 1874.68, "text": " systems as you have in the systems themselves but I hope this will be" }, { "start": 1874.68, "end": 1880.68, "text": " regularly checked with human scores and assessed as to how much these systems" }, { "start": 1880.68, "end": 1886.08, "text": " are out of sync or in sync with humans all right this was it for me for BLEURT" }, { "start": 1886.08, "end": 1891.1599999999999, "text": " check out they have the code available the metric is available evaluate your" }, { "start": 1891.16, "end": 1919.72, "text": " stuff with it and bye bye" } ]
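To make the priming scheme described in the transcript above concrete, here is a minimal sketch, assuming the Hugging Face transformers and sacrebleu packages are available. The model names, the single-example loop, and the BLEU-only signal are illustrative simplifications, not the paper's actual pipeline.

```python
# Hedged sketch of BLEURT-style pre-training: perturb a sentence with BERT
# mask-filling, compute a BLEU signal with sacrebleu, then regress on that
# signal from the [CLS] vector of a sentence-pair encoding.
import torch
import sacrebleu
from transformers import pipeline, BertTokenizer, BertModel

fill = pipeline("fill-mask", model="bert-base-uncased")

def perturb(sentence: str) -> str:
    # Drop one word and let BERT fill the blank back in (maybe differently).
    words = sentence.split()
    words[len(words) // 2] = fill.tokenizer.mask_token
    return fill(" ".join(words))[0]["sequence"]

tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(bert.config.hidden_size, 1)  # predicts the BLEU signal
opt = torch.optim.Adam(list(bert.parameters()) + list(head.parameters()), lr=1e-5)

z = "The quick brown fox jumps over the lazy dog."
z_tilde = perturb(z)
target = torch.tensor([sacrebleu.sentence_bleu(z_tilde, [z]).score / 100.0])

enc = tok(z, z_tilde, return_tensors="pt")   # pair (Z, Z tilde) as one input
cls = bert(**enc).last_hidden_state[:, 0]    # the CLS vector discussed above
loss = torch.nn.functional.mse_loss(head(cls).squeeze(-1), target)
loss.backward(); opt.step()
```

In the multi-task setup described above, one such regression (or classification) head would exist per pre-training signal, all sharing the same BERT encoder.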
zt_R85Ife_U
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Trash] Automated Inference on Criminality using Face Images
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "trash", "wrong", "phrenology", "physiognomy", "face", "facial", "criminal", "violent", "features", "body", "physical", "visible", "intuition", "smile", "micro", "expression" ]
This paper sets out to build a classifier to distinguish criminals from non-criminals using nothing but a face picture. I explore why the research is trash and what lessons we can learn from it. https://arxiv.org/abs/1611.04135 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, take a look at these faces. Try to decide which of these faces are criminals and which ones are law-abiding citizens. I'll give you a second. Okay, got it. So if you decided that these four here are the criminals, you would be correct. And that makes these three the law-abiding citizens. As for this one, maybe if the crime is being too cool. Of course, none of these faces actually exist in real life. These are compositions of eigenfaces, of datasets, of criminals and non-criminals. Today's paper is an absolute controversy. This is going to get me into so much trouble. So if you see something like this in the news, always, always, always go and check. Now we're going to look at automated inference on criminality using face images by Xiaolin Wu and Xi Cheng. On a high level, they're trying to separate criminals from non-criminals using face images. So basically using classifiers on ID photos. This, of course, has generated quite the uproar. I suggest we just dive into the paper and look at what's happening right here. We study for the first time automated inference on criminality based solely on still face images, which is free of any biases and subjective judgments of human observers. So they say we train a bunch of models, including, as you can see, a CNN, using facial images of one thousand eight hundred and fifty six real persons controlled for race, gender, age and facial expressions. Nearly half of whom were convicted criminals for discriminating between criminals and non-criminals. So this is the outset. This is the kind of research question here. Now, immediately you have people jumping up saying that's not possible. And I would agree. But I think actually there are very, very interesting lessons to be learned from this paper. So they're saying they actually managed to do this with their classifiers, actually with all of these classifiers. Of course, deep learning being the best. Also, some discriminating structural features for predicting criminality have been found by machine learning. So they even tell you why. Above all, the most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds. The variation among criminal faces is significantly greater than that of non-criminal faces. The two manifolds consisting of criminal and non-criminal faces appear to be concentric with the non-criminal manifold lying in the kernel with the smaller span exhibiting a law of normality for faces of non-criminals. Oh, I'm going to be canceled. I don't advocate for this. This is not, this is not, I'm not a fan of this. Just in other words, the faces of general law abiding public have a greater degree of resemblance compared with the faces of criminals. Or criminals have a higher degree of dissimilarity in facial appearance than non-criminals. So basically what they're saying is that the this kind of similarity among the non-criminals in their data set is larger than the similarity among the criminals. OK, so already the outset, right? Then they go into this introduction and in the introduction we won't go through it fully, but they basically introduce the concept of facial recognition. They try to build up kind of an argument where they say faces are different. Some people have hypothesized that it's possible to infer personality traits from facial features. Some studies exist that show that people agree on the perception of these traits. 
So not the actual traits, but people will kind of agree that a face looks extroverted or more agreeable. People tend to agree that the appearance exists. And then they sort of make the next step and say, OK, can facial features also be used not just for predicting the appearance, but to predict the actual personality trait? For validating the hypothesis on the correlations between the innate traits and social behaviors of a person and the physical characteristics of that person's face, it would be hard pushed to find a more convincing experiment than examining the success rate of discriminating between criminals and non-criminals. So actually, you could agree with this, right? Since this is sort of a distinction one can make about behavior, whether or not someone breaks the law, or in this case is caught and convicted, and so on. There are like many, many hurdles in this. In essence, the statement sort of makes sense. Like if you could actually do this from facial features, that would be, first of all, very surprising and, second of all, very drastic. People immediately jump to the conclusion that, OK, if such a thing were found, that means you could somehow precognate criminality, which I don't think has to be the case, because what could also be the case is — they have a quote from Aristotle right here: It is possible to infer character from features if it is granted that body and soul are changed together by the natural affections. One interpretation of mine is that, let's say you break the law for whatever reason — it could be completely moral, like you steal the medicine for the old lady in your house — but you know you broke the law, you know you did something that society doesn't want you to do, and that will exert stress on you. You now have to lie to people about this. You now have to sort of make sure you're not caught. You have to worry. Maybe there's a security tape or something like this. And we know that stress will physically change you. And that could in turn be made out by your features. For example, the stress of being in jail could change your physical features. And since these are all convicted criminals, one might think that it might be possible. It might. Again, not saying it is, it might. So if we throw away all of the kind of prejudgments on this, it could be an interesting research question, right? Could. Now, whether we want to pursue it or not, that's a different question. But the way they build this up here is that they only have the best of intentions in mind. I feel like this might not be the case. So they say something like this right here: At the onset of this study, our gut feeling is that modern tools of machine learning and computer vision will refute the validity of physiognomy, although the outcomes turn out otherwise. This is the part where I just stopped believing them that their intentions were all good and it's just about disproving this so we can lay it to rest, because they then very quickly switch when they find something else. Non-criminals are the normals and the criminals are like the... that just rubs me the wrong way, where you'll have to say no. It's like the KKK going, oh, no, we, you know, we have many social gatherings and our gut feeling is that people aren't really different. And the robes are actually personal protective equipment. It's all actually just a community thing. We all have, you know, good intentions.
Oh, and every now and then we lynch. So you go into this with sort of a mixed bag of feelings, where you'd have a hypothetically valid research question. But also, even the introduction makes it very clear, because it's somewhat over the top, promising to just be neutral and well intended. Not going to fall for it. Sorry. They say, in order to conduct our experiments, they have one thousand eight hundred and fifty-six ID photos. The following criteria: Chinese male, between ages of 18 and 55, no facial hair, no facial scars or other markings. And the data set is called S. Then there's two subsets, S.N. for non-criminals and S.C. for criminals. The non-criminals subset contains ID photos of eleven hundred and twenty-six non-criminals that were acquired from the Internet using the web spider tool. They're from a wide gamut of professions and social status, including waiters, construction workers, blah, blah, blah. OK. The subset of the criminals contains ID photos of seven hundred and thirty criminals, of which three hundred and thirty are published as wanted suspects by the Ministry of Public Security of China and by the Departments of Public Security for the provinces of Guangdong, Jiangsu, Liaoning, et cetera. The others are provided by city police departments in China under a confidentiality agreement. We stress — and here's an important point — we stress that the criminal face images in S.C. are normal ID photos, not police mugshots. So they say they have violent crimes or nonviolent crimes, and so on. So they have these examples here of those images. So the top ones are the criminals and the bottom ones are the non-criminals. Now, people immediately see differences here. And if you spotted that all of these have white collars and none of those have white collars, then you would be correct. Now, you're on the right path. You're not actually correct, but you're on the right path here, because actually what they do is they mask away the collars. So they only extract the face part and the upper neck part. So this white collar part will actually not be on the image that they analyze, to control for clothing, which is good. But it gives you sort of an indication that the origins of the two image groups might not actually be the same. So what you'll have is basically two databases of criminals. The one database is this wanted one. Let's call them W. These are released by the police for wanted criminals. Then the other database is the convicted criminals. Let's call that C. And then on the other side, you have the database of non-criminals, and the non-criminals come from the Internet. So you have three different databases. And of course, these two are going to make up the criminals and this one will make up the non-criminals. And herein lies the problem, right? Even though the white collars are masked out, you have to make sure that whatever you find isn't just a property of how you collected the data. And this doesn't really come through in this paper. So they do data preparation; again, they mask, they resize and so on. They stress again: all our ID images are with frontal lighting. So, yeah. OK, so now they test the classifiers. So they say we test logistic regression, KNN, SVM and CNN on the image data set. So for the CNN, you can just input the original image. But for the other classifiers, you need a set of features.
And what they do is they concatenate three different image feature vectors. So the first one is facial landmark points that you extract by some sort of tool. You can extract whatever, corners of the mouth and so on. The second is a facial feature vector generated by a modular PCA. And the third is a facial feature vector based on local binary pattern histograms. So these are sort of face features that people use for recognizing faces. They concatenate them. That gives you a feature vector. You feed that into the machine learning algorithm. And they perform a tenfold cross-validation for all possible combinations of the three feature classifiers and the four types of feature vectors, plus the data-driven CNN. So they do a tenfold cross-validation, which basically means you partition your data into 10 parts. You take nine to train, predict the one. Then you take the next nine to train, predict the one that you left out, and so on. This way you get a train-test split across all sorts of splits of your data, which is, you know, a valid thing to do. And they discover here that their CNN classifier performs at almost 90 percent accuracy, as you can see here. And even their SVM and the other classifiers perform fairly well in recognizing these criminality faces. And they analyze the ROC curves, and the ROC curves show this is really a classifier that works. You can see the other models, but especially the CNN classifier here, work really well. Of course, the question is, what does it work for? So they basically say, all right, we now have a classifier that distinguishes criminals from non-criminals. And I would say you have a classifier that discriminates your particular pictures of criminals from your particular pictures of non-criminals. And if this were submitted to me as a reviewer, I would expect that any sane author would then go and try to invalidate that. So here's what you'll have to do if you want to convince me that this is not just due to how you collected your data. You need to go and basically say, OK, I have these different methods of collecting data right here. Now, maybe I can go to the police and ask them for a picture from the same database of someone non-convicted — not a non-criminal, but someone that was arrested and then not convicted. Then I have someone from here that I can put in that data set, and then you have to show me that your classifier will correctly predict that that's a non-criminal. And if it predicts it's a criminal, it's due to the data set. You can also take one of the criminals, but find their picture on the Internet, like you collected the non-criminals. And that will give you someone from this database in that data set. And then you have to show me that your classifier correctly predicts that's a criminal. You can further convince me that your classifier is neutral to this separation right here of the wanted and convicted criminals, because they all should be criminals. So if your classifier is neutral to that, then it basically doesn't care where it comes from. This would be a weaker argument, but still one that one could investigate. What do they do for validating their method? Here is where it gets funky. So they say, given the high social sensitivities and repercussions of our topic and skeptics on physiognomy, we try to exercise maximum caution before publishing our results. Yeah, you failed.
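As a rough illustration of the protocol just described — a hedged sketch assuming scikit-learn, with the landmark, PCA and LBP extractors stubbed out by random placeholders, since the paper's exact feature pipeline is not reproduced here:

```python
# Concatenated hand-crafted face features, evaluated with tenfold CV.
# The feature matrices and labels are random stand-ins; real landmark,
# modular-PCA and LBP extraction would replace them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
landmarks = rng.normal(size=(1856, 68 * 2))   # (x, y) landmark coordinates
pca_feats = rng.normal(size=(1856, 40))       # modular-PCA projection
lbp_hist = rng.normal(size=(1856, 59))        # local binary pattern histogram
X = np.concatenate([landmarks, pca_feats, lbp_hist], axis=1)
y = rng.integers(0, 2, size=1856)             # 1 = "criminal" in the paper's setup

for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("knn", KNeighborsClassifier()),
                  ("svm", SVC())]:
    scores = cross_val_score(clf, X, y, cv=10)  # tenfold cross-validation
    print(name, scores.mean())
```

With these random placeholders the printed accuracies are meaningless (around chance); the point is only the shape of the evaluation loop.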
In playing devil's advocate, we design and conduct the following experiments to challenge the validity of the tested classifiers for the task of discriminating between criminals and non-criminals. All right, this is it right here. Here is where you tell us it's not because of how we collected the data, which is the obvious explanation. We randomly label the faces in the very same sample set as negative and positive instances with equal probability and redo all the above experiments of binary classification. Well, how crazy is this? All right, they're basically saying, well, if our classifier were not a criminality classifier, that means we could invalidate it by shuffling the labels. And if that comes out to 50-50, then our classifier obviously works, because it's not 50-50 in this data set. So basically, they're just validating that a classification algorithm can classify something. The criticism here is never that they haven't actually trained a working classifier. The criticism is: what have they trained a classifier for? But their entire validation procedure is basically, we don't have a bug in our code. The outcomes show that the randomly generated negative and positive instances cannot be distinguished at all. Gee, who guessed? A classifier on random labels doesn't generalize. Man. They say, in fact, we go much further along the self-critical path. All right, here it comes. And carry out the same experiments for random labeling on different samples of the same size and with the same variable control. Only this time the selection criteria are standard ID photos of Chinese female young middle-aged, or standard ID photos of Caucasian male young middle-aged, or of Caucasian female young middle-aged, no facial hair, and so on. So basically, if you train on a randomly labeled data set of any sort of pictures, your classifier will not work. Thanks. Thanks. I think that's the academically most valid statement in the entire paper. Oh, man. In none of the three cases, any of the four classifiers managed to achieve a true positive rate higher than 53% on randomly labeled positive and negative instances. So the classifier must be valid because... OK: the above experiments rule out that the good accuracies of the four evaluated classifiers in face inference on criminality are due to data overfitting. No. Otherwise, given the same sample size, they would also be able to distinguish between randomly labeled positive and negative instances with significantly better chances. But they did cross-validation. They did. The cross-validation prevents the overfitting. No one criticizes that you overfit. These people have no idea what they're doing. They have no clue of machine learning. They don't know what the problems with the methods are. They don't know what overfitting is and how you control for it. The big jump of the true positive rate from random labeling to truth labeling on the same set of face images can only be explained by intrinsic separability of SC and SN. That is true. That is true. But why are they separable? That's the question. OK: as different source cameras generated the ID photos in the set S... now they might be on the right track here. Different source cameras. Maybe they get the idea that different data sources lead to different things. They might leave their signatures that, although below perception threshold in signal strength, could mislead machine learning. OK, so they're basically saying different cameras could generate different sorts of artifacts.
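The "devil's advocate" experiment above boils down to this sanity check — a sketch assuming scikit-learn. Note that chance-level accuracy on shuffled labels only shows the pipeline has no bug; it says nothing about what the true labels mean.

```python
# Shuffle the labels and re-run cross-validation; accuracy near 0.5 is the
# expected outcome for any non-overfitting classifier on destroyed labels.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1856, 235))       # stand-in feature matrix
y = rng.integers(0, 2, size=1856)      # stand-in labels

y_shuffled = rng.permutation(y)        # destroy any label-feature link
acc = cross_val_score(SVC(), X, y_shuffled, cv=10).mean()
print(f"accuracy on shuffled labels: {acc:.2f}")  # expected around 0.5
```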
And they rule this out by basically adding noise to the images, such that these artifacts would be washed out, and the noise doesn't change their results. Gee, they were so close. They were so close to actually doing something useful. OK, this section is where it gets even more interesting. Now they're trying to guess from their classifier what the actual features are that make criminals criminals. So, discriminating features right here. Having obtained the above strong empirical evidences for the validity of automated face-induced inference on criminality, one cannot resist the following intriguing questions. What features of a human face betray its owner's propensity for crimes? OK, Shakespeare. And they basically go the explainability route, where they see what the classifier pays attention to. And it turns out the classifier pays attention to the following features. On the left, you can see what the classifier pays attention to. No surprise here, it pays attention to face features, but they kind of parse out the following three features. First of all, the d, the distance between the eyes, in criminals tends to be smaller than in non-criminals. The angle between the nose and the corners of the mouth tends to be smaller in criminals than in non-criminals. And the curvature of the upper lip tends to be higher in criminals than in non-criminals. So let's try, just from this information, to draw the ultimate criminal and non-criminal faces. So first of all, the non-criminal — let's draw the non-criminal as just regular. I'm not very good at this. So here's the nose. And then let's just draw the lips like this. Non-criminal. Perfect. Looks like a law-abiding citizen to me. Criminal, right here. So the eyes are closer together. Here's the nose. And then the curvature of the upper lip is higher. So. Hmm. And then the angle between the nose and the outer corners of the mouth is smaller. How can I make the angle smaller? Could it be that if I... Oh, yes. Ah, that's the trick. Criminal, ladies and gentlemen. So are you telling me that all someone has to do to be a criminal is frown? Yeah, totally valid. So they're so close, right? But they say, oh, these are intrinsic facial features. But come on. All right. So they go on to say that they have some histogram differences of these features. So they basically say these features are what's responsible for this. And then they do face clustering, which is beautiful. So first of all, what they do is they sort of take the average faces for criminals and non-criminals. And these are the average faces. So the top are the actual average eigenfaces, and the bottom is when you kind of shift the facial landmarks around. The seeming paradox that SC and SN can be classified, but the average faces appear almost the same — sorry — the average faces appear almost the same. What a paradox. These are almost the same. I mean, if I just overlay them one over another, they're almost the same. There is no difference at all. I don't see a difference. What could possibly be the difference? What could be the difference? I don't think these are the most honest of intentions. So they basically do some clustering, which I find interesting. I find it interesting, for example, that they don't really explain isomap here. So isomap uses the geodesic distance between two points on the manifold, which is defined as the sum of the edge weights along the shortest path. So they're kind of wishy-washy about isomap.
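For concreteness, a hedged sketch of the three geometric quantities described above, computed from hypothetical (x, y) landmark coordinates; the curvature term is a crude proxy, not the paper's exact definition.

```python
# Three geometric "criminality" features from face landmarks: eye distance d,
# nose-to-mouth-corner angle theta, and an upper-lip curvature proxy rho.
import numpy as np

def face_features(left_eye, right_eye, nose_tip,
                  mouth_left, mouth_right,
                  lip_top_left, lip_peak, lip_top_right):
    d = np.linalg.norm(np.subtract(right_eye, left_eye))     # distance between the eyes
    v1 = np.subtract(mouth_left, nose_tip)
    v2 = np.subtract(mouth_right, nose_tip)
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    theta = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))   # angle at the nose tip
    # curvature proxy: how far the mid-upper-lip peak rises above its endpoints
    rho = lip_peak[1] - (lip_top_left[1] + lip_top_right[1]) / 2.0
    return d, theta, rho
```

Pulling the corners of the mouth downward — a frown — makes the two nose-to-mouth-corner vectors more nearly parallel and shrinks theta, which is exactly the objection raised in the passage above.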
But they then explain k-means in great detail with formulas. And again, I mean, OK, non-machine-learning people can do machine learning. That's fine. But they're not really into the matter here. And they try k-means clustering. And, in their opinion, they find four clusters of criminals and three clusters of non-criminals. Now, why three and four? Usually, you can do something like this by clustering and then measuring the residual variance in your data. So how much does one cluster explain, two clusters, and so on? So here you can see the curves for non-criminals and criminals. Now, they claim that the optimal number of clusters here for non-criminals is three, which makes no sense to me. Like, why three? What you usually want to find is kind of a kink in your curve. Like, if it's steep and then it gets flat, that means that up until then, your clusters actually buy you something good, and from then on, they are basically useless. So if I were to guess, I would divide the criminals into two clusters and the non-criminals into a single cluster, because that's pretty flat. Certainly not the non-criminals into three and the criminals into four. That makes no sense at all. Like, why? And they say, OK, these are the clusters right here. And these are the pictures I showed you at the beginning. What a surprise: the bottom ones, the non-criminals, are smiling, and the top ones aren't. Gee, I wonder why the method works. And the interesting part here is: if we decide on one cluster for non-criminals and two clusters for criminals, what does that remind us of? Oh, yes, that is exactly how we collected the data. That is exactly the fact that we collected the non-criminals with one procedure and the criminals with two different procedures. Gee, their data set replicates exactly how they collected the data. And that convinces me that it says absolutely nothing about the actual criminality of people. It's just that in police photos, even if they're ID photos, people don't smile, and in pictures on the Internet, sometimes people smile. The rest of the paper is pretty much garbage. They did reply to critics, and they kind of take issue with a number of things. So first, name calling — I don't mean to name call, but it's going to happen. Then, I don't get why people call them racist, because the data set is all the same race, so no trouble there. And then, the smiling. Ha. In our experiments, we did control facial expression, but not faint micro-expression. The critique that our methods can be reduced to a simple discriminator of smiling versus not smiling has given us a new angle of scrutiny. They say, well, Westerners think that this is smiling, but our Chinese students and colleagues, even after being prompted to consider the cue of smile, fail to detect the same. So basically, their answer is, yeah, you think so, but we don't. And then they say instead, they only find the faces in the bottom row appearing somewhat more relaxed than those in the top row. And then here's the crucial part: All criminal ID photos are government issued, but not mugshots. They are normal government-issue ID portraits like those on driver's licenses in the USA. In contrast, most of the non-criminal ID-style photos are taken officially by some organizations, such as real estate companies, law firms, et cetera, for their websites. You know what it always says when you take your picture for a government ID? Please don't smile. Imagine if your law firm comes to you and says, we want a picture for our website. Please don't smile.
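The residual-variance argument here corresponds to the standard elbow heuristic — a minimal sketch, assuming scikit-learn:

```python
# Within-cluster variance (inertia) versus k: the "kink" in this curve, not
# fiat, is how one would usually justify a cluster count.
import numpy as np
from sklearn.cluster import KMeans

def inertia_curve(X, max_k=8):
    return [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, max_k + 1)]

X = np.random.default_rng(0).normal(size=(1856, 20))  # stand-in face features
print(inertia_curve(X))  # look for where the drop flattens out
```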
All right, this was it for this paper. If you like this content, please consider subscribing and sharing it out. This is absolute garbage, and there are important lessons to learn here, namely that disregarding Occam's razor is a real problem in research. People often fail to criticize themselves enough and to think: is there maybe a different explanation for why I'm getting the results that I'm getting? And how can I disprove that that is the case? And how can I make sure that the effect that I'm seeing actually comes from the place where I claim it comes from? I think this is a real threat throughout all of research. I've seen many papers that I've reviewed that commit exactly the same fallacy — not on subjects as touchy as this one, but it definitely exists. And I remind everyone to learn a lesson from this. And have a good day. Thank you.
[ { "start": 0, "end": 10.32, "text": " Hi there, take a look at these faces. Try to decide which of these faces are criminals and which ones are law-abiding citizens." }, { "start": 10.32, "end": 12.32, "text": " I'll give you a second." }, { "start": 13.72, "end": 20.52, "text": " Okay, got it. So if you decided that these four here are the criminals, you would be correct." }, { "start": 20.52, "end": 23.92, "text": " And that makes these three the law-abiding citizens." }, { "start": 23.92, "end": 28.76, "text": " As for this one, maybe if the crime is being too cool." }, { "start": 28.76, "end": 31.720000000000002, "text": " Of course, none of these faces actually exist in real life." }, { "start": 31.720000000000002, "end": 38.6, "text": " These are compositions of eigenfaces, of datasets, of criminals and non-criminals." }, { "start": 38.6, "end": 45.8, "text": " Today's paper is an absolute controversy. This is going to get me into so much trouble." }, { "start": 45.8, "end": 51, "text": " So if you see something like this in the news, always, always, always go and check." }, { "start": 51, "end": 59.96, "text": " Now we're going to look at automated inference on criminality using face images by Xiaolin Wu and Xi Cheng." }, { "start": 59.96, "end": 66.84, "text": " On a high level, they're trying to separate criminals from non-criminals using face images." }, { "start": 66.84, "end": 71.6, "text": " So basically using classifiers on ID photos." }, { "start": 71.6, "end": 75.44, "text": " This, of course, has generated quite the uproar." }, { "start": 75.44, "end": 80.32, "text": " I suggest we just dive into the paper and look at what's happening right here." }, { "start": 80.32, "end": 88.24, "text": " We study for the first time automated inference on criminality based solely on still face images," }, { "start": 88.24, "end": 95.55999999999999, "text": " which is free of any biases and subjective judgments of human observers." }, { "start": 95.55999999999999, "end": 100.16, "text": " So they say we train a bunch of models, including, as you can see, a CNN," }, { "start": 100.16, "end": 109.32, "text": " using facial images of one thousand eight hundred and fifty six real persons controlled for race, gender, age and facial expressions." }, { "start": 109.32, "end": 117.96, "text": " Nearly half of whom were convicted criminals for discriminating between criminals and non-criminals." }, { "start": 117.96, "end": 122.32, "text": " So this is the outset. This is the kind of research question here." }, { "start": 122.32, "end": 128.12, "text": " Now, immediately you have people jumping up saying that's not possible." }, { "start": 128.12, "end": 130.32, "text": " And I would agree." }, { "start": 130.32, "end": 137.24, "text": " But I think actually there are very, very interesting lessons to be learned from this paper." }, { "start": 137.24, "end": 143.52, "text": " So they're saying they actually managed to do this with their classifiers, actually with all of these classifiers." }, { "start": 143.52, "end": 145.68, "text": " Of course, deep learning being the best." }, { "start": 145.68, "end": 152.68, "text": " Also, some discriminating structural features for predicting criminality have been found by machine learning." }, { "start": 152.68, "end": 154.96, "text": " So they even tell you why." 
}, { "start": 154.96, "end": 165.20000000000002, "text": " Above all, the most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds." }, { "start": 165.2, "end": 172.32, "text": " The variation among criminal faces is significantly greater than that of non-criminal faces." }, { "start": 172.32, "end": 181.76, "text": " The two manifolds consisting of criminal and non-criminal faces appear to be concentric with the non-criminal manifold lying in the kernel" }, { "start": 181.76, "end": 190.95999999999998, "text": " with the smaller span exhibiting a law of normality for faces of non-criminals." }, { "start": 190.95999999999998, "end": 194.79999999999998, "text": " Oh, I'm going to be canceled." }, { "start": 194.8, "end": 196.48000000000002, "text": " I don't advocate for this." }, { "start": 196.48000000000002, "end": 200.56, "text": " This is not, this is not, I'm not a fan of this." }, { "start": 200.56, "end": 211.96, "text": " Just in other words, the faces of general law abiding public have a greater degree of resemblance compared with the faces of criminals." }, { "start": 211.96, "end": 217.56, "text": " Or criminals have a higher degree of dissimilarity in facial appearance than non-criminals." }, { "start": 217.56, "end": 228.08, "text": " So basically what they're saying is that the this kind of similarity among the non-criminals in their data set is larger than the similarity among the criminals." }, { "start": 228.08, "end": 231.76, "text": " OK, so already the outset, right?" }, { "start": 231.76, "end": 241.4, "text": " Then they go into this introduction and in the introduction we won't go through it fully, but they basically introduce the concept of facial recognition." }, { "start": 241.4, "end": 246.36, "text": " They try to build up kind of an argument where they say faces are different." }, { "start": 246.36, "end": 254.96, "text": " Some people have hypothesized that it's possible to infer personality traits from facial features." }, { "start": 254.96, "end": 261.52000000000004, "text": " Some studies exist that show that people agree on the perception of these traits." }, { "start": 261.52000000000004, "end": 269.44, "text": " So not the actual traits, but people will kind of agree that a face looks extroverted or more agreeable." }, { "start": 269.44, "end": 273, "text": " People tend to agree that the appearance exists." }, { "start": 273, "end": 286.08, "text": " And then they sort of make the next step and say, OK, can facial features also be used not just for predicting the appearance, but to predict the actual personality trait?" }, { "start": 286.08, "end": 297.16, "text": " For validating the hypothesis on the correlations between the innate traits and social behaviors of a person and the physical characteristics of that person's face," }, { "start": 297.16, "end": 306.52000000000004, "text": " it would be hard pushed to find a more convincing experiment than examining the success rate of discriminating between criminals and non-criminals." }, { "start": 306.52000000000004, "end": 310.24, "text": " So actually, you could agree with this, right?" }, { "start": 310.24, "end": 319.8, "text": " Since this is sort of a distinction one can make about behavior, whether or not someone breaks the law or in this case is caught and convicted and so on." }, { "start": 319.8, "end": 322.48, "text": " There are like many, many hurdles in this." 
}, { "start": 322.48, "end": 325.88, "text": " In essence, the statement sort of makes sense." }, { "start": 325.88, "end": 335.88, "text": " Like if you could actually do this from facial features, that would be very, first of all, very surprising and second of all, very drastic." }, { "start": 335.88, "end": 344.44, "text": " People immediately jump to the conclusion that, OK, if such a thing were found, that means you could somehow precognate criminality," }, { "start": 344.44, "end": 354.56, "text": " which I don't think it has to be, because what could also be the case is they have a quote from Aristotle right here." }, { "start": 354.56, "end": 365.16, "text": " It is possible to infer character from features if it is granted that body and soul are changed together by the natural affections." }, { "start": 365.16, "end": 371.08, "text": " One interpretation of me is that, let's say you break the law for whatever, it could be completely moral," }, { "start": 371.08, "end": 378.28, "text": " like you steal the medicine from the old lady in your house and but you know you broke the law," }, { "start": 378.28, "end": 383.64, "text": " you know you did something that society doesn't want you to do and that will exert stress on you." }, { "start": 383.64, "end": 386.32, "text": " You now have to lie to people about this." }, { "start": 386.32, "end": 389.15999999999997, "text": " You now have to sort of make sure you're not caught." }, { "start": 389.15999999999997, "end": 392.76, "text": " You have to worry. Maybe there's a security tape or something like this." }, { "start": 392.76, "end": 398.64, "text": " And the stress will, we know that stress will physically change you." }, { "start": 398.64, "end": 402.47999999999996, "text": " And that could be in turn made out by your features." }, { "start": 402.47999999999996, "end": 407.15999999999997, "text": " For example, the stress of being in jail could change your physical features." }, { "start": 407.15999999999997, "end": 413.36, "text": " And since these are all convicted criminals, one might think that it might be possible." }, { "start": 413.36, "end": 418.36, "text": " It might. Again, not saying it is, it might." }, { "start": 418.36, "end": 428.28000000000003, "text": " So if we throw away all of the kind of prejudgments on this, it could be an interesting research question, right?" }, { "start": 428.28000000000003, "end": 432.8, "text": " Could. Now, whether we want to pursue it or not, that's a different question." }, { "start": 432.8, "end": 437.6, "text": " But the way they build this up here is that they only have the best of intentions in mind." }, { "start": 437.6, "end": 440.44, "text": " I feel like this might not be the case." }, { "start": 440.44, "end": 442.92, "text": " So they say something like this right here." }, { "start": 442.92, "end": 454.24, "text": " At the onset of this study, our gut feeling is that modern tools of machine learning and computer vision will refute the validity of physiognomy," }, { "start": 454.24, "end": 458.08000000000004, "text": " although the outcomes turn out otherwise." }, { "start": 458.08000000000004, "end": 463.44, "text": " This and this is the part where I just stopped believing them that their intentions were like all good." }, { "start": 463.44, "end": 471.36, "text": " And it's just about disproving this so we can just lay it to rest because they then very quickly switch when they find something else." 
}, { "start": 471.36, "end": 480.48, "text": " Non criminals are the normals and the criminals are like the that just rubs me the wrong way where you'll have to say no." }, { "start": 480.48, "end": 488.6, "text": " It's like the Pekuk looks like, oh, no, we, you know, we have many social gatherings and our gut feeling is that people aren't really different." }, { "start": 488.6, "end": 492.04, "text": " And the robes are actually personal protective equipment." }, { "start": 492.04, "end": 494.52000000000004, "text": " It's all actually just a community thing." }, { "start": 494.52000000000004, "end": 497.6, "text": " We all have, you know, good intentions." }, { "start": 497.6, "end": 507.64000000000004, "text": " Oh, and every now and then we lynch you guys going into this with sort of a mixed bag of feelings where you'd have a hypothetically valid research question." }, { "start": 507.64000000000004, "end": 517.5600000000001, "text": " But also even the introduction makes it very clear because it's somewhat over the top promising to just be neutral and be good, good intended." }, { "start": 517.5600000000001, "end": 519.12, "text": " Not going to fall for it. Sorry." }, { "start": 519.12, "end": 524.8000000000001, "text": " They say in order to conduct our experiments, they have one thousand eight hundred and fifty six ID photos." }, { "start": 524.8, "end": 532.3199999999999, "text": " The following criteria, Chinese male between ages of 18 and 55, no facial hair, no facial scars or other markings." }, { "start": 532.3199999999999, "end": 534.52, "text": " And the data set is called S." }, { "start": 534.52, "end": 541.28, "text": " Then there's two subsets, S.N. for non criminals and S.C. for criminals." }, { "start": 541.28, "end": 552.1999999999999, "text": " The non criminals contains ID photos of alone hundred twenty six non criminals that were acquired from the Internet using the web spider tool." }, { "start": 552.2, "end": 557.5200000000001, "text": " They're from a wide gamut of professions and social status, including waiters, construction work, blah, blah, blah." }, { "start": 557.5200000000001, "end": 571.24, "text": " OK. The subset of the criminals contains ID photos of seven hundred and thirty criminals of which three hundred and thirty as are published as wanted suspects by the Ministry of Public Security of China" }, { "start": 571.24, "end": 580.32, "text": " and by the Departments of Public Security for the provinces of Guangdong, Jiangsu, Liaoning, et cetera." }, { "start": 580.32, "end": 586.48, "text": " The others are provided by city police department in China under a confidentiality agreement." }, { "start": 586.48, "end": 589.6800000000001, "text": " We stress and here here's an important point." }, { "start": 589.6800000000001, "end": 599.88, "text": " We stress that the criminal face images in S.C. are normal ID photos, not police mugshots." }, { "start": 599.88, "end": 608.2, "text": " So they say they have violent crimes or nonviolent crimes." }, { "start": 608.2, "end": 612.9200000000001, "text": " And so on. So they have these examples here of those images." }, { "start": 612.9200000000001, "end": 620.44, "text": " So the top ones are the criminals and the bottom ones are the non criminals." }, { "start": 620.44, "end": 625.24, "text": " Now, people immediately see differences here." 
}, { "start": 625.24, "end": 635.4000000000001, "text": " And if you spotted that all of these have white colors and none of those have white colors, then you would be correct." }, { "start": 635.4, "end": 638.3199999999999, "text": " Now, you're on the right path. You're not actually correct." }, { "start": 638.3199999999999, "end": 646, "text": " Correct. But you're on the right path here because actually what they do is they they mask away the colors." }, { "start": 646, "end": 651.64, "text": " So they only extract the face part and the upper neck part." }, { "start": 651.64, "end": 660.68, "text": " So these this white collar part will actually not be on the image that they analyze to control for clothing, which is good." }, { "start": 660.68, "end": 672.16, "text": " But it gives you sort of an indication that the origins of the two image groups might not actually be the same." }, { "start": 672.16, "end": 687.12, "text": " So what you'll have is you'll have basically a database, actually have two databases of criminals, which are so the one database is this wanted." }, { "start": 687.12, "end": 693.24, "text": " Let's call them W. These are released by the police for wanted criminals." }, { "start": 693.24, "end": 698.4, "text": " Then the others database is the convicted criminals." }, { "start": 698.4, "end": 709.76, "text": " Let's call that C. And then on the other side, you have the database of non criminals and the non criminals come from the Internet." }, { "start": 709.76, "end": 711.84, "text": " So you have three different databases." }, { "start": 711.84, "end": 718.88, "text": " And of course, these two make will going to make up the criminals and this will make up the non criminals." }, { "start": 718.88, "end": 724.12, "text": " And the herein lies the problem, right?" }, { "start": 724.12, "end": 735.12, "text": " You even though the white collars are masked out, you have to make sure that whatever you find isn't just a property of how you collected the data." }, { "start": 735.12, "end": 739.5600000000001, "text": " And this doesn't really come through in this paper." }, { "start": 739.56, "end": 745.52, "text": " So they they do data preparation as again, they mask, they resize and so on." }, { "start": 745.52, "end": 749.56, "text": " They stress again, all our idea images with frontal lighting." }, { "start": 749.56, "end": 756.9599999999999, "text": " So, yeah. And they OK, so now they test the classifiers." }, { "start": 756.9599999999999, "end": 768.1199999999999, "text": " So they say we test logistic regression, logistic regression, KNN, SVM and CNN on the image data set." }, { "start": 768.12, "end": 772.64, "text": " So for the CNN, you can just input the original image." }, { "start": 772.64, "end": 775.96, "text": " But for the other classifiers, you need a set of features." }, { "start": 775.96, "end": 781.28, "text": " And what they do is they concatenate three different image feature vectors." }, { "start": 781.28, "end": 787.72, "text": " So the first one is facial landmark points that you extract by some sort of tool." }, { "start": 787.72, "end": 792.36, "text": " You can extract whatever corners of mouth and so on." }, { "start": 792.36, "end": 805.44, "text": " Then the second facial feature vector generated by a modular PCA. And the third is a facial feature vector based on local binary pattern histograms." }, { "start": 805.44, "end": 811.24, "text": " So these are these are sort of face features that people use for recognizing faces." 
}, { "start": 811.24, "end": 815.8000000000001, "text": " They concatenate them. That gives you a feature vector. You feed that into the machine learning algorithm." }, { "start": 815.8, "end": 826.7199999999999, "text": " And they do a we perform a tenfold cross validation for all possible combinations of three feature classifiers and the four types of feature vectors plus the data driven CNN." }, { "start": 826.7199999999999, "end": 834.76, "text": " So they do a tenfold cross validation, right, which basically means you do you partition your data into 10 parts." }, { "start": 834.76, "end": 837.4799999999999, "text": " You take nine to train, predict the one." }, { "start": 837.4799999999999, "end": 841.4799999999999, "text": " Then you take the next nine to train, predict the one that you left out and so on." }, { "start": 841.48, "end": 851.64, "text": " But this kind of you get a train test split across all sorts of splits of your data, which is a it's a you know, it's a valid thing to do." }, { "start": 851.64, "end": 861.04, "text": " And they discover here that their CNN classifier performs at almost 90 percent accuracy, as you can see here." }, { "start": 861.04, "end": 873.76, "text": " And even their SVM and the other classifiers, they perform fairly well in recognizing these criminality faces." }, { "start": 873.76, "end": 879.56, "text": " So. And they analyze the ROC curves and the ROC curves." }, { "start": 879.56, "end": 882.8, "text": " This is a really this is a classifier that works right." }, { "start": 882.8, "end": 892.16, "text": " So you can see in the the the other models, but especially the CNN classifier here, works really well." }, { "start": 892.16, "end": 898.04, "text": " Of course, the question is, what does it work for?" }, { "start": 898.04, "end": 903.92, "text": " So they basically say, all right, we now have a classifier that distinguishes criminals from non-criminals." }, { "start": 903.92, "end": 912.52, "text": " And I would say you have a classifier that discriminates your particular pictures of criminals from your particular pictures of non-criminals." }, { "start": 912.52, "end": 924.1999999999999, "text": " And if this were submitted to me as a reviewer, I would expect that any sane author would then go and try to invalidate that." }, { "start": 924.1999999999999, "end": 930.56, "text": " So here's what you'll have to do if you want to convince me that this is not just due to how you collected your data." }, { "start": 930.56, "end": 939.4, "text": " You need to go and you need to basically say, OK, I have these different methods of collecting data right here." }, { "start": 939.4, "end": 947.68, "text": " Now, maybe I can go to the police and ask them for a picture from the same database of a non convicted," }, { "start": 947.68, "end": 952.52, "text": " not of a non-criminal, someone that was arrested, but then not convicted." }, { "start": 952.52, "end": 958.1999999999999, "text": " And I can have someone from from here." }, { "start": 958.1999999999999, "end": 965.1999999999999, "text": " That can put in that data set, and then you have to show me that your classifier will correctly predict that that's a non-criminal." }, { "start": 965.2, "end": 970.2, "text": " And if it predicts it's a criminal, it's due to the data set." }, { "start": 970.2, "end": 978.5600000000001, "text": " You can also find one of the criminals, but find their picture on the Internet, like you collected the non-criminals." 
}, { "start": 978.5600000000001, "end": 983.0400000000001, "text": " And that will give you someone from this database in that data set." }, { "start": 983.0400000000001, "end": 988.1600000000001, "text": " And then you have to show me that your classifier correctly predicts that's a criminal." }, { "start": 988.16, "end": 998.56, "text": " You can further convince me that your classifier is neutral to this separation right here of the wanted and convicted criminals," }, { "start": 998.56, "end": 1001.36, "text": " because they all should be criminals." }, { "start": 1001.36, "end": 1007.8399999999999, "text": " So if your classifier is neutral to that, then it basically doesn't care where it comes from." }, { "start": 1007.8399999999999, "end": 1013.6, "text": " So this will be a weaker argument, but still one that one could investigate." }, { "start": 1013.6, "end": 1017.16, "text": " What do they do for validating their method?" }, { "start": 1017.16, "end": 1020.28, "text": " Here is where it gets funky." }, { "start": 1020.28, "end": 1028.48, "text": " So they say, given the high social sensitivities and repercussions of our topic and skeptics on physiognomy," }, { "start": 1028.48, "end": 1034.8, "text": " we try to exercise maximum caution before publishing our results." }, { "start": 1034.8, "end": 1036.56, "text": " Yeah, you failed." }, { "start": 1036.56, "end": 1047, "text": " In playing devil's advocate, we design and conduct the following experiments to challenge the validity of the tested classifiers." }, { "start": 1047, "end": 1050.6, "text": " For the task of discriminating between criminals and non-criminals." }, { "start": 1050.6, "end": 1052.52, "text": " All right, this is it right here." }, { "start": 1052.52, "end": 1059.04, "text": " Here is where you give us where you tell us it's not because of how we collected the data." }, { "start": 1059.04, "end": 1062.96, "text": " Which is the obvious explanation." }, { "start": 1062.96, "end": 1070.8, "text": " We randomly label the faces in the very sample set as negative and positive instances with equal probability" }, { "start": 1070.8, "end": 1080, "text": " and redo all the above experiments of binary classification." }, { "start": 1080, "end": 1081.84, "text": " Well, how crazy is this?" }, { "start": 1081.84, "end": 1088.68, "text": " All right, they're basically saying, well, if our classifier were not a criminality classifier," }, { "start": 1088.68, "end": 1093.1599999999999, "text": " that means we could invalidate it by shuffling the labels." }, { "start": 1093.16, "end": 1104.8000000000002, "text": " And if that comes out to 50-50, then our classifier obviously works because it's not 50-50 in this data set." }, { "start": 1104.8000000000002, "end": 1112.44, "text": " So basically, they're just validating that a classification algorithm can classify something." }, { "start": 1112.44, "end": 1118.6000000000001, "text": " The criticism here is never that they haven't actually trained a working classifier." }, { "start": 1118.6000000000001, "end": 1122.76, "text": " The criticism is what have they trained a classifier for?" }, { "start": 1122.76, "end": 1132.12, "text": " But their entire validation procedure is basically, we don't have a bug in our code." }, { "start": 1132.12, "end": 1141.16, "text": " The outcomes show that the randomly generated negative and positive instances cannot be distinguished at all." }, { "start": 1141.16, "end": 1143.4, "text": " Gee, who guessed?" 
}, { "start": 1143.4, "end": 1147, "text": " A classifier on random labels doesn't generalize." }, { "start": 1147, "end": 1155.88, "text": " Man, they say, in fact, we go much further along the self-critical path." }, { "start": 1155.88, "end": 1158.56, "text": " All right, here it comes." }, { "start": 1158.56, "end": 1173.08, "text": " And carry out the same experiments for random labeling on different samples of the same size and with the same variable control." }, { "start": 1173.08, "end": 1179.32, "text": " Only this time in the selection criteria are standard ID photos of Chinese female young middle-aged," }, { "start": 1179.32, "end": 1186.3999999999999, "text": " or standard ID photos of Caucasian male young middle-aged, of Caucasian female young middle-aged nofacial." }, { "start": 1186.3999999999999, "end": 1198, "text": " So basically, if you train on a randomly labeled data set on any sort of pictures, your classifier will not work." }, { "start": 1198, "end": 1200.12, "text": " Thanks. Thanks." }, { "start": 1200.12, "end": 1206.52, "text": " Maybe that's, I think that's the academically most valid statement in the entire paper." }, { "start": 1206.52, "end": 1217.36, "text": " Oh, man, in none of the three cases, any of the four classifiers managed to achieve a true positive rate higher than 53% on randomly labeled positive and negative instances." }, { "start": 1217.36, "end": 1222.32, "text": " So the classifier must be valid because..." }, { "start": 1222.32, "end": 1233.4399999999998, "text": " OK, the above experiments rule out that the good accuracies of the four evaluated classifiers in phase inference on criminality are due to data overfitting." }, { "start": 1233.4399999999998, "end": 1236.12, "text": " No." }, { "start": 1236.12, "end": 1246.9199999999998, "text": " Otherwise, given the same sample size, they would also be able to distinguish between randomly labeled positive and negative instances with significantly better chances." }, { "start": 1246.9199999999998, "end": 1249.2, "text": " But they did cross validation." }, { "start": 1249.2, "end": 1254.76, "text": " They did. The cross validation prevents the overfitting." }, { "start": 1254.76, "end": 1257.32, "text": " We no one criticizes that you over." }, { "start": 1257.32, "end": 1260.72, "text": " These people have no idea what they're doing." }, { "start": 1260.72, "end": 1263.0800000000002, "text": " They have no clue of machine learning." }, { "start": 1263.0800000000002, "end": 1267.3600000000001, "text": " They don't know what the problems with methods are." }, { "start": 1267.3600000000001, "end": 1277.8400000000001, "text": " They don't know what overfitting is and and how you control for it." }, { "start": 1277.84, "end": 1287.12, "text": " The big jump of the true positive rate from random labeling to truth labeling on the same set of phase images can only be explained by intrinsic separability of SC and SN." }, { "start": 1287.12, "end": 1289.24, "text": " That is true. That is true." }, { "start": 1289.24, "end": 1292.1599999999999, "text": " But why are they separable?" }, { "start": 1292.1599999999999, "end": 1293.6799999999998, "text": " That's the question." }, { "start": 1293.6799999999998, "end": 1300.9199999999998, "text": " OK, as different source cameras generated the ID photos in the set S, now they might be on the right track here." }, { "start": 1300.9199999999998, "end": 1303.3999999999999, "text": " Different source cameras." 
}, { "start": 1303.4, "end": 1311, "text": " Maybe they get the idea that different data sources lead to different things." }, { "start": 1311, "end": 1320.44, "text": " They might leave their signatures that, although below perception threshold in signal strength, could mislead machine learning." }, { "start": 1320.44, "end": 1326.72, "text": " OK, so they basically saying different cameras could generate different sort of artifacts." }, { "start": 1326.72, "end": 1338.6000000000001, "text": " And they rule this out by basically adding noise to the images such that these artifacts would be washed out and the noise doesn't change their results." }, { "start": 1338.6000000000001, "end": 1341.76, "text": " Gee, they were so close." }, { "start": 1341.76, "end": 1346.64, "text": " They were so close to actually doing something useful." }, { "start": 1346.64, "end": 1349.72, "text": " OK, this section is where it gets even more interesting." }, { "start": 1349.72, "end": 1358.44, "text": " Now they're trying to guess from their classifier what are the actual features that make criminals criminals." }, { "start": 1358.44, "end": 1361.8, "text": " So discriminating features right here." }, { "start": 1361.8, "end": 1369.1200000000001, "text": " Having obtained the above strong empirical evidences for the validity of automated phase induced inference on criminality," }, { "start": 1369.1200000000001, "end": 1373.04, "text": " one cannot resist the following intriguing questions." }, { "start": 1373.04, "end": 1383.8, "text": " What features of a human face betray its owners propensity for crimes?" }, { "start": 1383.8, "end": 1385.36, "text": " OK, Shakespeare." }, { "start": 1385.36, "end": 1392.56, "text": " And they basically they basically go and explain ability route where they see what the classifier pays attention to." }, { "start": 1392.56, "end": 1397.6, "text": " And it turns out the classifier pays attention to the following features on the left." }, { "start": 1397.6, "end": 1400.6399999999999, "text": " You can see where the classifier pays attention to." }, { "start": 1400.64, "end": 1409.68, "text": " No surprise here, it pays attention to face features, but they kind of parse out the following three features." }, { "start": 1409.68, "end": 1422, "text": " First of all, the D, the distance between the eyes in criminals tends to be smaller than in non criminals." }, { "start": 1422, "end": 1432.64, "text": " The angle between the nose and the corners of the mouth tends to be smaller in criminals than in non criminals." }, { "start": 1432.64, "end": 1440.04, "text": " And the curvature of the upper lip tends to be higher in criminals than in non criminals." }, { "start": 1440.04, "end": 1447.16, "text": " So let's let's try just from this information to draw the ultimate criminal and non criminal faces." }, { "start": 1447.16, "end": 1455.44, "text": " So first of all, the non criminal, let's draw the non criminal as just regular." }, { "start": 1455.44, "end": 1456.88, "text": " I'm not very good at this." }, { "start": 1456.88, "end": 1459.1200000000001, "text": " So here's the nose." }, { "start": 1459.1200000000001, "end": 1464, "text": " And then let's just draw the lips like this." }, { "start": 1464, "end": 1464.88, "text": " Non criminal." }, { "start": 1464.88, "end": 1465.68, "text": " Perfect." }, { "start": 1465.68, "end": 1468.5600000000002, "text": " Looks like a law abiding citizen to me." 
}, { "start": 1468.5600000000002, "end": 1471.0800000000002, "text": " Criminal." }, { "start": 1471.0800000000002, "end": 1472.44, "text": " Right here." }, { "start": 1472.44, "end": 1477.44, "text": " So the eyes are closer together." }, { "start": 1477.44, "end": 1480.24, "text": " Here's the nose." }, { "start": 1480.24, "end": 1483.52, "text": " And then the curvature of the upper lip is higher." }, { "start": 1483.52, "end": 1486.3600000000001, "text": " So." }, { "start": 1486.3600000000001, "end": 1488.16, "text": " Hmm." }, { "start": 1488.16, "end": 1496.16, "text": " And then the angle between the nose and the outer corners of the mouth is smaller." }, { "start": 1496.16, "end": 1498.88, "text": " How can I make the angle smaller?" }, { "start": 1498.88, "end": 1503.7600000000002, "text": " Could it be that if I." }, { "start": 1503.7600000000002, "end": 1505.8000000000002, "text": " Oh, yes." }, { "start": 1505.8000000000002, "end": 1511.48, "text": " Ah, that's the trick." }, { "start": 1511.48, "end": 1514.48, "text": " Criminal, ladies and gentlemen." }, { "start": 1514.48, "end": 1523.8000000000002, "text": " So are you telling me that all someone has to do to be a criminal is frown?" }, { "start": 1523.8000000000002, "end": 1528.68, "text": " Yeah, totally valid." }, { "start": 1528.68, "end": 1532.72, "text": " So they're so close, right?" }, { "start": 1532.72, "end": 1535.44, "text": " But they say, oh, these are intrinsic facial features." }, { "start": 1535.44, "end": 1537.5600000000002, "text": " But come on." }, { "start": 1537.5600000000002, "end": 1538.0800000000002, "text": " All right." }, { "start": 1538.0800000000002, "end": 1544.68, "text": " So they go on to say that they have some histogram differences of these features." }, { "start": 1544.68, "end": 1548.28, "text": " So they basically say these features are what's responsible for this." }, { "start": 1548.28, "end": 1551.96, "text": " And then they do face clustering, which is beautiful." }, { "start": 1551.96, "end": 1555.92, "text": " So first of all, what they do is they sort of take the average faces" }, { "start": 1555.92, "end": 1558.92, "text": " for criminals and non-criminals." }, { "start": 1558.92, "end": 1560.52, "text": " And these are the average faces." }, { "start": 1560.52, "end": 1562.88, "text": " So the top are the actual average eigen faces." }, { "start": 1562.88, "end": 1567.2, "text": " And the bottom is when you kind of shift the facial landmarks around." }, { "start": 1567.2, "end": 1573.52, "text": " The seeming paradox that SC and SN can be classified," }, { "start": 1573.52, "end": 1578.3600000000001, "text": " but the average faces appear almost the same can." }, { "start": 1578.3600000000001, "end": 1582.1200000000001, "text": " Sorry, average faces appear almost the same." }, { "start": 1582.1200000000001, "end": 1585.3600000000001, "text": " The average faces appear almost the same." }, { "start": 1585.36, "end": 1586.4799999999998, "text": " What a paradox." }, { "start": 1586.4799999999998, "end": 1588.12, "text": " These are almost the same." }, { "start": 1588.12, "end": 1594.8, "text": " I mean, if I just overlay them one over another, they're almost the same." }, { "start": 1594.8, "end": 1597.8, "text": " There is no difference at all." }, { "start": 1597.8, "end": 1599.9599999999998, "text": " I don't see a difference." }, { "start": 1599.9599999999998, "end": 1604.08, "text": " What could possibly be the difference?" 
}, { "start": 1604.08, "end": 1605.7199999999998, "text": " What could be the difference?" }, { "start": 1605.7199999999998, "end": 1609.32, "text": " I don't think these are the most honest of intentions." }, { "start": 1609.32, "end": 1614.6399999999999, "text": " So they basically do some clustering, which I find interesting." }, { "start": 1614.64, "end": 1618.48, "text": " I find interesting, for example, that they don't really explain isomap here." }, { "start": 1618.48, "end": 1623.1200000000001, "text": " So isomap uses the geodesic distance between two points on the manifold," }, { "start": 1623.1200000000001, "end": 1626.1200000000001, "text": " which is defined between the sum of the weights." }, { "start": 1626.1200000000001, "end": 1629.0400000000002, "text": " So they kind of washy washy isomap." }, { "start": 1629.0400000000002, "end": 1635.4, "text": " But they then explain k-means in great detail with formulas." }, { "start": 1635.4, "end": 1641.88, "text": " And again, I mean, OK, non-machine learning people" }, { "start": 1641.88, "end": 1642.8000000000002, "text": " can do machine learning." }, { "start": 1642.8000000000002, "end": 1643.4, "text": " That's fine." }, { "start": 1643.4, "end": 1647.68, "text": " But they're not really into the matter here." }, { "start": 1647.68, "end": 1650, "text": " And they try k-means clustering." }, { "start": 1650, "end": 1656.52, "text": " And they find, in their opinion, they find four clusters of criminals" }, { "start": 1656.52, "end": 1658.96, "text": " and three clusters of non-criminals." }, { "start": 1658.96, "end": 1661.8000000000002, "text": " Now, why three and four?" }, { "start": 1661.8000000000002, "end": 1664.16, "text": " And usually, you can do something like this by clustering" }, { "start": 1664.16, "end": 1667.0800000000002, "text": " and then measuring the residual variance in your data." }, { "start": 1667.0800000000002, "end": 1670.76, "text": " So how much does one cluster explain, two clusters, and so on?" }, { "start": 1670.76, "end": 1675, "text": " So here you can see the curves for non-criminals and criminals." }, { "start": 1675, "end": 1679.24, "text": " Now, they claim that the optimal number of clusters here for non-criminals" }, { "start": 1679.24, "end": 1683.04, "text": " is three, which makes no sense to me." }, { "start": 1683.04, "end": 1683.8799999999999, "text": " Like, why three?" }, { "start": 1683.8799999999999, "end": 1687.64, "text": " What you usually want to find is kind of a kink in your curve." }, { "start": 1687.64, "end": 1690.08, "text": " Like, if it's steep and then it gets flat," }, { "start": 1690.08, "end": 1694.48, "text": " that means that up until then, your clusters actually buy you something good." }, { "start": 1694.48, "end": 1698.32, "text": " And from then, they basically are useless." }, { "start": 1698.32, "end": 1704.4399999999998, "text": " So if I were to guess, I would divide the criminals into two clusters" }, { "start": 1704.4399999999998, "end": 1709.48, "text": " and the non-criminals into a single cluster, because that's pretty flat." }, { "start": 1709.48, "end": 1714.76, "text": " Certainly not the non-criminals into three and the criminals into four." }, { "start": 1714.76, "end": 1717.72, "text": " That makes no sense at all." }, { "start": 1717.72, "end": 1719.6399999999999, "text": " Like, why?" }, { "start": 1719.6399999999999, "end": 1723.12, "text": " And they say, OK, these are the clusters right here." 
}, { "start": 1723.12, "end": 1727.3999999999999, "text": " And these are the pictures I showed you at the beginning." }, { "start": 1727.4, "end": 1728.76, "text": " What surprise?" }, { "start": 1728.76, "end": 1735.5600000000002, "text": " The bottom ones, the non-criminals, are smiling and the top ones aren't." }, { "start": 1735.5600000000002, "end": 1739.64, "text": " Gee, I wonder why the method works." }, { "start": 1741.96, "end": 1748.52, "text": " And the interesting part here is that how can we justify?" }, { "start": 1748.52, "end": 1752.24, "text": " How can we say if we decide on one cluster for non-criminals" }, { "start": 1752.24, "end": 1755.44, "text": " and two clusters for criminals, what does" }, { "start": 1755.44, "end": 1758.64, "text": " that remind us of?" }, { "start": 1758.64, "end": 1764.56, "text": " Oh, yes, that is exactly how we collected the data." }, { "start": 1764.56, "end": 1768.96, "text": " That is exactly the fact that we collected the non-criminals" }, { "start": 1768.96, "end": 1773.56, "text": " with one procedure and the criminals with two different procedures." }, { "start": 1773.56, "end": 1779.92, "text": " Gee, their data set replicates exactly how they collected the data." }, { "start": 1779.92, "end": 1783.52, "text": " And that convinces me that it says absolutely nothing" }, { "start": 1783.52, "end": 1786.24, "text": " about the actual criminality of people." }, { "start": 1786.24, "end": 1792.4, "text": " It's just that police, even if it's ID photos, they don't smile." }, { "start": 1792.4, "end": 1797.56, "text": " And pictures on the internet, sometimes people smile." }, { "start": 1797.56, "end": 1799.76, "text": " The rest of the paper is pretty much garbage." }, { "start": 1799.76, "end": 1804.72, "text": " They did reply to critics and they kind of take issue with a number of things." }, { "start": 1804.72, "end": 1806.48, "text": " So first, name calling." }, { "start": 1806.48, "end": 1810.68, "text": " I don't mean to name call, but it's going to happen." }, { "start": 1810.68, "end": 1817.28, "text": " I don't get why people call them racist because it's all the same." }, { "start": 1817.28, "end": 1820.16, "text": " Doesn't, no, no, trouble." }, { "start": 1820.16, "end": 1822.28, "text": " And smiley." }, { "start": 1822.28, "end": 1822.78, "text": " Ha." }, { "start": 1826.28, "end": 1828.68, "text": " In our experiments, we did control facial expression," }, { "start": 1828.68, "end": 1832.5600000000002, "text": " but not faint micro expression." }, { "start": 1832.5600000000002, "end": 1834.5600000000002, "text": " The critique that our methods can be reduced" }, { "start": 1834.5600000000002, "end": 1837.24, "text": " to a simple discriminator of smiling versus not smiling" }, { "start": 1837.24, "end": 1842.68, "text": " has given us a new angle of scrutiny." }, { "start": 1842.68, "end": 1845.88, "text": " They say, well, Westerners think that this is smiling," }, { "start": 1845.88, "end": 1849.24, "text": " but our Chinese students and colleagues, even after being prompted" }, { "start": 1849.24, "end": 1855.04, "text": " to consider the cue of smile, fail to detect the same." }, { "start": 1855.04, "end": 1859.08, "text": " So basically, their answer is, yeah, you think so, but we don't." 
}, { "start": 1859.08, "end": 1863.08, "text": " And then they say instead, they only find the faces in the bottom row" }, { "start": 1863.08, "end": 1869.1999999999998, "text": " appearing somewhat more relaxed than those in the top row." }, { "start": 1869.1999999999998, "end": 1871.36, "text": " And then here's the crucial part." }, { "start": 1871.36, "end": 1875.6, "text": " All criminal ID photos are government issues, but not mock shots." }, { "start": 1875.6, "end": 1878.32, "text": " They are normal government issue ID portraits" }, { "start": 1878.32, "end": 1881, "text": " like those driver license in the USA." }, { "start": 1881, "end": 1883.98, "text": " In contrast, most of the non-criminal ID style photos" }, { "start": 1883.98, "end": 1886.3999999999999, "text": " are taken officially by some organizations," }, { "start": 1886.4, "end": 1893.44, "text": " such as real estate companies, law firms, et cetera, for their website." }, { "start": 1893.44, "end": 1896.8400000000001, "text": " You know what it always says when you take your picture for a government ID?" }, { "start": 1896.8400000000001, "end": 1898.44, "text": " Please don't smile." }, { "start": 1898.44, "end": 1900.3200000000002, "text": " Imagine if your law firm comes to you and say," }, { "start": 1900.3200000000002, "end": 1901.92, "text": " we want a picture for our website." }, { "start": 1901.92, "end": 1904, "text": " Please don't smile." }, { "start": 1904, "end": 1906, "text": " All right, this was it for this paper." }, { "start": 1906, "end": 1911.2800000000002, "text": " If you like this content, please consider subscribing and sharing it out." }, { "start": 1911.2800000000002, "end": 1913.0800000000002, "text": " This is absolute garbage." }, { "start": 1913.0800000000002, "end": 1916.0800000000002, "text": " And there is important lessons to learn here," }, { "start": 1916.08, "end": 1921.52, "text": " namely Occam's razor is a real problem in research." }, { "start": 1921.52, "end": 1925.12, "text": " People often fail to criticize themselves enough" }, { "start": 1925.12, "end": 1928.56, "text": " and to think, is there maybe a different explanation for why" }, { "start": 1928.56, "end": 1930.4399999999998, "text": " I'm getting the results that I'm getting?" }, { "start": 1930.4399999999998, "end": 1934.32, "text": " And how can I disprove that that is the case?" }, { "start": 1934.32, "end": 1938.6799999999998, "text": " And how can I make sure that the effect that I'm seeing actually" }, { "start": 1938.6799999999998, "end": 1942.76, "text": " comes from the place where I claim it comes from?" }, { "start": 1942.76, "end": 1946.6, "text": " I think this is a real threat throughout all of research." }, { "start": 1946.6, "end": 1949.08, "text": " I've seen many papers that I've reviewed that" }, { "start": 1949.08, "end": 1954.2, "text": " are exactly of the same fallacy, not as touchy subjects as this one," }, { "start": 1954.2, "end": 1956.36, "text": " but it definitely exists." }, { "start": 1956.36, "end": 1961, "text": " And I remind everyone that learn a lesson from this." }, { "start": 1961, "end": 1962.92, "text": " And have a good day." }, { "start": 1962.92, "end": 1973.0800000000002, "text": " Thank you." } ]
DLq1DUcMh1Q
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
A bio-inspired bistable recurrent cell allows for long-lasting memory (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gru", "lstm", "schmidhuber", "bistable", "bistability", "neurons", "biological", "spiking", "tanh", "stable", "attractor", "fixed points", "memory", "memorize", "sparse", "long sequence", "history", "storage", "remember", "rnn", "recurrent neural network", "gated recurrent unit", "forget", "backpropagation", "biologically inspired" ]
Even though LSTMs and GRUs solve the vanishing and exploding gradient problems, they have trouble learning to remember things over very long time spans. Inspired from bistability, a property of biological neurons, this paper constructs a recurrent cell with an inherent memory property, with only minimal modification to existing architectures. OUTLINE: 0:00 - Intro & Overview 1:10 - Recurrent Neural Networks 6:00 - Gated Recurrent Unit 14:40 - Neuronal Bistability 22:50 - Bistable Recurrent Cell 31:00 - Neuromodulation 32:50 - Copy First Benchmark 37:35 - Denoising Benchmark 48:00 - Conclusion & Comments Paper: https://arxiv.org/abs/2006.05252 Code: https://github.com/nvecoven/BRC Abstract: Recurrent neural networks (RNNs) provide state-of-the-art performances in a wide variety of tasks that require memory. These performances can often be achieved thanks to gated recurrent cells such as gated recurrent units (GRU) and long short-term memory (LSTM). Standard gated cells share a layer internal state to store information at the network level, and long term memory is shaped by network-wide recurrent connection weights. Biological neurons on the other hand are capable of holding information at the cellular level for an arbitrary long amount of time through a process called bistability. Through bistability, cells can stabilize to different stable states depending on their own past state and inputs, which permits the durable storing of past information in neuron state. In this work, we take inspiration from biological neuron bistability to embed RNNs with long-lasting memory at the cellular level. This leads to the introduction of a new bistable biologically-inspired recurrent cell that is shown to strongly improves RNN performance on time-series which require very long memory, despite using only cellular connections (all recurrent connections are from neurons to themselves, i.e. a neuron state is not influenced by the state of other neurons). Furthermore, equipping this cell with recurrent neuromodulation permits to link them to standard GRU cells, taking a step towards the biological plausibility of GRU. Authors: Nicolas Vecoven, Damien Ernst, Guillaume Drion Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at a bio-inspired bistable recurrent cell allows for long-lasting memory by Nicolas Vecoven, Damien Ernst and Guillaume Drion of the University of Liège. This paper here is not a paper that wants to push state-of-the-art on anything, it is a paper that takes a concept from the biological research on actual neurons, which is the bistability property, and tries to introduce it to recurrent neural networks. And on toy data or small data they show that this has the interesting property that these recurrent neural networks can then remember important things for much much longer than our current recurrent architectures can do. I believe this is a very interesting paper and it's a nice refresher from the whole state-of-the-art number pushing papers. So dive in with me to explore this. If you like content like this also consider subscribing if you aren't and sharing it out and leaving a like and a comment if you have any sort of comments. They basically say recurrent neural networks provide state-of-the-art performance in a wide variety of tasks that require memory, which is true. So we have these recurrent neural networks and what the recurrent neural networks do is they're basically... So a classic recurrent neural network goes something like this. There is a hidden state at time step T and there is a sequence of inputs that you have to work with. So we'll call them x1, x2, x3, x4 and so on. And then at some point you have to provide an output. This could be at every single time step or sometimes it's just at the end you have to provide an output y. So for example this here could be a piece of text and you need to decide whether or not that piece of text, maybe it's an email, whether or not that's spam. This could be a time series of a patient in an ICU and you need to decide whether or not to give some medication to the patient. So the applications of this are very wide and any sort of series data will do. So there's this hidden state and at each time step this hidden state is updated to a new hidden state. So let's call this h0. It's updated to a new hidden state by incorporating the input. So somehow the input x and the previous hidden state are made into a new hidden state. And then the next input is taken in and by using this hidden state a new hidden state is made and so on. So one property here is that the newest hidden state always only depends on the previous hidden state and it doesn't really directly depend on like the hidden state two before itself. It only depends on the hidden state right before itself and the input that corresponds to it. So this is the information flow. The other important property here is that these connections that make a hidden state into the next hidden state and also that incorporate the input, they're always the same. So these functions here that incorporate the input, they're always the same in each time step. So the parameters are shared between them and the same goes for the functions here that transform one hidden state into the next hidden state. Of course there is a joint function between the two that actually produces the next hidden state. So these weights are all shared for each time step and that's what makes the network recurrent. So we call the single time step here, we call that a recurrent cell. And the question now is how do you construct a recurrent cell?
Usually recurrent neural networks they run into this problem of either gradient explosion or vanishing gradients because usually this here, if you are into neural networks you know this, this is a weight matrix multiplied by the previous hidden state and if you just multiply the same weight matrix over and over and over again it pretty much depends on the singular value of that weight matrix if the top singular value is higher than one then the signal is going to explode and if it's lower than one the signal is going to fade over time and there's pretty much nothing you can do. So classic RNNs have looked like this right here. So the next hidden state is a nonlinear function G and G can be some non-linearity like a sigmoid or a hyperbolic tangent but it's a function of the current input and the last hidden state by simply matrix multiplying these two things by some weight matrices and then adding them up. So that's what we've just looked at. Now this is problematic as I said because of the vanishing or exploding gradients and therefore people have come up with methods to solve this and you might know things like LSTMs and GRUs that are used to solve this. Now these cells here are much more complicated than the standard cell that we saw here but they also are much more effective because they don't have these vanishing or exploding gradient problems. Their promise is that they can remember things for longer because they allow the gradient to flow without these problems during backpropagation. Now how does one of these look? In this paper they mainly look at the GRU, the gated recurrent unit which is a simpler version of the LSTM. The LSTM is slightly more complex but the principles are the same. So they look at the GRU right here. What does the GRU do? These are the formulas for the GRU and we're going to try to deconstruct these formulas. So as you can see the inputs are the same. The inputs are going to be this time step's input and the last hidden state. Those are all the quantities that we need and we need to output somehow the next hidden state. The last hidden state is then used to predict the y by the way in all of these cases. So first we need to calculate two things called Z and R and both of them are computed the same way. They're multiplying these two things by weight matrices and then running them through a sigmoid non-linearity. Let's do that. Let's say we have the last hidden state here and we have Xt here. So first we're going to calculate the Zt and the Rt from that. Now every one of these arrows right here is a multiplication by a weight matrix. So every one of these arrows is transforming the input and let's join this into a sigmoid node and that gives you Zt and let's join these into a sigmoid that gives you Rt. Okay so far so good. Now we need to combine all of this in this last line right here. So you can see that the Z thing here sort of acts as a switch. So Z is the result of a sigmoid and therefore it's between 0 and 1 and here this is the Hadamard product. This is the element-wise product between vectors which sort of means this is like a gating. This is like a switch. If it's 1 it selects this quantity and if it's 0 it selects the quantity over here and of course it can be between 0 and 1 but those are the ends of the spectrum. So Z is a switch that selects between the last hidden state. So let's draw that right here. So the last hidden state goes here and is one of the options of the output. And the option is given by Z.
So Zt, let's draw a switch like this maybe. So Zt is responsible for modulating this switch right here. Okay this gives you the next hidden state. You see Zt modulates that switch so Ht is the one possibility that the switch can select. What's the other possibility? The other possibility is this quantity right here which is a hyperbolic tangent of whatever that is. So that is combined of X. So let's go from the back right here. Tanh. What's the input to the tanh? It's two things. First of all the X is an input to the tanh so we can draw directly a line from here. The X modulated. Every arrow as you might remember can be a function. Not all arrows are functions, like this arrow right here is not a function, it's just an arrow. Maybe that's confusing. You get what I mean. And the next thing is R times the last hidden state, or the last hidden state modulated by this matrix. So R is acting as another gate. R can be again between 0 and 1 because it's the result of a sigmoid. So this hidden state will also go here. It will be modulated, running out of colors here, it will be modulated by R here as a sort of gate. So R can either close or open this gate right here and then that's fed into the tanh. So it's a rather complicated setup as you can see right here. So let's analyze this. First of all the hidden state is either the last hidden state or it is something new and that's modulated by this Z right here. And Z is calculated from the hidden state and the current input. So this allows the cell to basically look at these: the hidden state is sort of the information of what happened so far and the current input is the new information that it gets from the sequence. And it sort of gets to look at these two things and decides do I even want to update my hidden state? If not I can just select this path right here and then nothing happens to the hidden state. The next hidden state will be exactly the same as the last hidden state. If it decides, if it thinks wow this new thing in the sequence that's actually important I should remember that. Because remember the task of the network sometimes is to remember things from the sequence. I think we drew this over here. So if this is an email and we want to detect whether it's spam then this word in the sequence right here might be really important because it would say something like gold, like buy gold. These two things might be buy gold and you need to remember that in the hidden state because the only way that information from X is going to flow to Y is through the hidden states. So you would want at this point, you would want to remember this input in the hidden state so you would actually want to update the hidden state. And then this here might be not as important so you might want to say, I don't want that, I still want my hidden state to be the old hidden state. So Z is that gate that allows us to do this. If we decide to update the hidden state then what do we do? Again if we decide to update the hidden state we will incorporate the new input, but we can also decide how to mix that new input with the old hidden state. So if we decide to update the hidden state we don't simply discard the old hidden state because the old hidden state will still have a path to be sort of still there, to be remembered, but it's a longer path and it needs to go through this thing here and through this thing here. So this thing here decides which parts of the old hidden state pass through.
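To tie the pieces together, a minimal numpy sketch of the GRU update walked through here, with the z switch, the r gate and the tanh candidate (the element-wise detail is elaborated just below); leaving out biases and using tiny random weights are simplifications, not the paper's setup.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    # z gates old state vs. new candidate; r gates which parts of the
    # old state are allowed into the candidate.
    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        w = lambda *shape: rng.normal(scale=0.1, size=shape)
        self.Wz, self.Uz = w(n_hid, n_in), w(n_hid, n_hid)
        self.Wr, self.Ur = w(n_hid, n_in), w(n_hid, n_hid)
        self.Wh, self.Uh = w(n_hid, n_in), w(n_hid, n_hid)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)        # update switch
        r = sigmoid(self.Wr @ x + self.Ur @ h)        # forget/reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return z * h + (1 - z) * h_cand               # convex mix of old and new
```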
So at each, you can see right here this is an element-wise product, this R is going to be between 0 and 1 at each point in the vector. So at each point in the vector the R decides is this worth remembering or not? And if it's not worth remembering then this is going to be 0 and that position of the old hidden state is going to be 0 and then that's going to be forgotten and that's the opportunity for the hidden state to incorporate new information because then it can delete this old information and then it can incorporate the new input and that will result then, on this path, in the new hidden state. So there's two major things. First we can decide whether or not to even incorporate new information, that's achieved by the Z gate, and then we can decide which parts of the old hidden state, if we want to update it, which parts to forget, that's the R gate, and how to update it is then basically a result of the weight matrix that's associated with this function right here. Alright so that's the gated recurrent unit and it works a lot better than the classic RNNs. So having said that they now turn to this property of neuronal bistability that happens in actual neurons. So this here is sort of a model of a neuron with this property. Now forget everything we said about GRUs, we're just going to look at this right now. What happens in a neuron usually is, this is a single neuron, you have input synapses from other neurons, so these are connections coming from other neurons into you. They are accumulated right here. Usually, in a classic model of a neuron, they're just summed up, you would sum up all these input signals and then you decide, you'd run it through like a step function. So if the sum of all the things is smaller than a particular threshold the output would be just nothing and if it's higher than a particular threshold then the output of the neuron would be sort of a firing of the neuron. This can be weighted and whatnot but in this case it's just a function of the inputs and that gives you your input signal. So this is like, this is your input signal to the neuron. Now there is this property right here that makes it interesting. The signal goes out here and is integrated. This is an integrator and that's going to be in the output signal, but there's this connection, this back connection right here, and that means that the signal that comes out at time step t is going to be fed back into the signal and actually added to the signal before itself, sort of self modulating, right, the signal comes out, is sent back, is added to this input, and then sent through again, and this here is just an integrator that's integrating everything that's happening. So if you look a bit closer you'll see that there is a minus here, so it's actually not added, it's subtracted, and there is an F here which means that this is a nonlinear function. Now if this weren't a nonlinear function, we can just sort of try, or let's say this is a monotonic function, we can sort of try to estimate what happens.
If all of this right here is very high, it's a high number, a big number, this will be a big number, then this sum will be a big number, this output will be a big number, what happens is this here will be a big number, this is monotonic so it will also be a big number, and that means it will subtract that big number, so that means whenever the neuron is going to be very excited this feedback would actually push it back, now when it is not very excited, so when it's a very low number, very negatively excited, then the feedback would work in the exact opposite direction, this will be very negative, this will be very negative, and this here would push it towards the positive, so this neuron somehow self stabilizes over time to the zero point right here, and that's simply if this F is the identity function right now, so you can sort of see how this property works, now we'll make it a bit more complicated in that we'll assume that this F here is not the identity function, but let's say they have it somewhere, but this right here, so the f of V post is this here, it's V post minus alpha tanh of V post, or is this the entire f, yes, that's this thing right here, if this is the case, if this is the signal minus the tanh, then something very very interesting happens, and that's depending on this alpha right here, in that if the alpha is between 0 and 1 then we simply have our monotonic function, so here you can see how big V post is, so how big the output signal is here, that's the experiment we made before, and here you can see what the feedback signal is, okay, or the integrator, the integrated signal, maybe this is in the wrong place and maybe F is just minus the tanh, I'm not sure, but in any case the way they build it after in the GRU it's pretty explicit, so this is the thing we said before, namely if the signal is very high then this signal here will be high as well, and because it's subtracted right here it's going to push the signal back towards zero, again if this is lower than zero then this thing here will also be lower than zero and because it's subtracted it's going to push the signal towards zero, so this thing here is the stable point, it will always push it back towards zero, however if we change the function and we change just the parameter alpha to be 1.5 a very different thing happens, that you can see right here, then it turns out if your output signal is very high the same thing happens, it's going to be pushed back, but if your output signal is between zero and this point right here, there is a regime where actually even though the output signal is positive you will be pushed towards this point right here, and therefore there are these two stable points now, and the stable point basically means if the signal deviates from it, it's going to be pushed back towards that point, and you can see these two stable points, they're not at zero, they're actually at these two points here, and that's pretty interesting because that means you can potentially remember things with the cell, right, an output signal of zero is basically not informative, but here you can be in either the state here or in the state here, and little perturbations will still keep you in that state, so you could potentially be in this state right here as an output, and the cell will just keep updating itself and be stable and always output that signal right here, and then you could go ahead and if you then provide some huge input signal right
here you could potentially throw this over to the other side, over this hill, and then it would stabilize at this point, so this is sort of a way to remember things within these biological cells, pretty cool, now this here is a non filled out circle, that means it's an unstable point, it's technically stable in the sense that if you're exactly at zero you will remain at zero, but if you perturb even a little bit you will go away from it, okay, I hope this sort of property is clear, and why this is so fascinating, because we can use this fact, that the stable points are not at zero and that there is more than just one stable point, for remembering things, and they're now trying to build this into the gated recurrent unit, so they call this the bistable recurrent cell BRC, and the formulas are these right here, maybe a little smaller, come on, can't zoom anymore, okay, it looks almost the same as the GRU, so the formulas are these, this and this, so let's analyze the differences to the GRU, the first most striking difference is that a lot of weight matrices here have become single numbers, or single vectors, this here used to be a weight matrix and this used to be a matrix multiplication, and you'll see this sort of throughout, whenever the last hidden state is incorporated into these things, then you'll see that it is no longer a weight matrix but is in fact a product with a vector, an element-wise product, and that has a reason, namely what they want to model is individual neurons, so on a biological level a neuron can only feed back onto itself, if there is a layer of neurons right here they can only each feed back onto themselves, whereas in a recurrent neural network my hidden vector, my hidden state, is a vector, and if I transform this into the next hidden state or any quantity, let's say I transform this H into this R right here, and this R is a vector too, then any interaction is possible, so any cell, any entry in the vector here, can influence any other entry, because there's a big weight matrix in the middle, they want to leave this away, they want to model this as close as possible to actual layers of neurons, and therefore they say okay, the input X can, you know, be distributed to all the neurons, because technically the input comes from some other neurons down here and they can all have connections to these neurons, but these feedbacks, we only really observe them within an individual neuron, this feedback cycle, so that's why they model these recurrent weight products by just element-wise products with vectors, and then the second difference, you again see that there is this switch right here, this C switch, and the C switch is like before, it's a sigmoid where we combine the input and the previous hidden state, there's nothing new here, so this switch is the same, the cell has the possibility of letting in new information or just ignoring the new current information, the Xt, the second thing is here, and this is the same as well, right, the tanh, this is a combination of the new information, in case we want to let in the new information, and you need to decide what things of the old information to forget or remember, now the difference here is in this a, so this a used to be again this sigmoid of the combination, and now it's just slightly different, it used to be sigmoid, now it's one plus tanh, this is a very very slight modification, it's tanh because tanh is between minus one and one instead of zero and one like the sigmoid, and the one plus makes it such that this is between zero and two.
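The two regimes are easy to verify numerically. With zero input and the update gate fully open, the cell recurrence reduces to h ← tanh(a·h); this is a simplification read off the equations, not code from the paper.

```python
import numpy as np

def settle(a, h0, steps=200):
    # Iterate the zero-input recurrence h <- tanh(a * h) and see where it ends up.
    h = h0
    for _ in range(steps):
        h = np.tanh(a * h)
    return h

for a in (0.5, 1.5):
    print(a, settle(a, +0.2), settle(a, -0.2))
# a = 0.5: both runs decay to 0, the single stable point (a = 1 is the boundary case).
# a = 1.5: they settle near +0.86 and -0.86, two stable points separated by the
# unstable point at 0 -- the sign of the state can store one bit.
```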
And we've seen before this critical behavior, there are two regimes to these functions, when it's between zero and one this behaves like a classic gated recurrent unit, like a classic GRU, but when it's between one and two then you have that exact behavior that we saw before of the bistability, okay, so depending on what the a is, if the a is zero to one it's a classic cell, and if the a is one to two it's a bistable cell, and the network can decide by itself what it wants to do, because it can actually learn how to do that, all right, so this is the only change, the only change really, apart from it only being individual neurons feeding back on themselves, is that now this is no longer between zero and one with the sigmoid, this is now between zero and two because it's one plus the tanh, very simple change, but the effect of this is pretty cool, so here is like a schematic drawing of this, if this a is between zero and one, again you have this stable state that's at zero, but if it's between one and two you have two stable states at two non zero points, and again, we already saw this, but now this is for, I believe, this recurrent cell, this bistable recurrent cell, not for the neuron itself, and here they give an example of what happens when you run this particular signal, this particular time series, through a cell like this while fixing the C and the a parameters, so now the C and the a parameters aren't learned, they're just fixed, and you see what happens, now as you can see, the blue should be the classic behavior, so in this blue case, what happens, you see right here this C is moderately low, so we saw the C is the switch of whether to leave in old information or take up new information, if it's low it means we want to take up new information, this is reasonably low, and that's why when the signal goes up here the blue line goes up as well, and when the signal goes down the blue line goes down again, and so on, so the blue line pretty straightforwardly follows the signal right here, okay, now in contrast to this, the red line is over this threshold, so a is fixed at 1.5, C is still at 0.2, so again when this line goes up then this line goes up, but because this is near this stable point, if it goes down again it doesn't appear to go down enough, it sort of remembers that state it was in, it doesn't go down with the signal, only now that it goes down even further, it's over this threshold, so we were in this situation, and the first bump down was only like to here and that pushed it up again, but now it jumps over here because the signal is even lower, and then the cell sort of switches to another state, as you can see here it goes down, but then this bump here is not enough to bring it up again, so it kind of remains in this state, so you can see it sort of remembers the input, and small deviations or small changes in signal don't manage to throw it away from that, only larger things, the signal needs to go very much down in order for it to change state, so that's pretty cool, that there's this remembering behavior, and now remember, in the actual implementation these C and A parameters, this C and this A right here, aren't fixed, they are also determined by the cell itself, and therefore the cell can decide by itself when it wants to remember things, how hard it wants to remember things, and so on, so we're going to check this out in an actual implementation.
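Putting the BRC equations, as read off here, into the same numpy form as the GRU sketch above; note the per-neuron recurrent weights are vectors (element-wise products), and the feedback gain a lives in (0, 2). Initialization details are assumptions, not from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BRCCell:
    # Bistable recurrent cell: GRU-like, but each neuron only feeds back
    # onto itself (vector weights wa, wc instead of matrices).
    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        self.Ua = rng.normal(scale=0.1, size=(n_hid, n_in))
        self.Uc = rng.normal(scale=0.1, size=(n_hid, n_in))
        self.U = rng.normal(scale=0.1, size=(n_hid, n_in))
        self.wa = rng.normal(scale=0.1, size=n_hid)
        self.wc = rng.normal(scale=0.1, size=n_hid)

    def step(self, x, h):
        a = 1.0 + np.tanh(self.Ua @ x + self.wa * h)  # feedback gain in (0, 2); > 1 => bistable
        c = sigmoid(self.Uc @ x + self.wc * h)        # update switch, as in the GRU
        return c * h + (1 - c) * np.tanh(self.U @ x + a * h)
```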
So there's this one last modification they make, where they say okay, they tried this and it doesn't really work, because it works sometimes, but there is this issue of these neurons connecting only back on themselves, which really makes the model much less powerful than a classic recurrent cell, it's closer to biology but it's much less powerful, and there is this property they say of neuromodulation, where technically, in real neurons, the one neuron here could influence another neuron by modulating these A and C parameters, okay, these A and C parameters, this is called neuromodulation, so there are interconnections between the neurons that influence how much other neurons remember and forget things, so they decide let's model that, and lo and behold we're now back to having weight matrices right here, so this is, sort of, they say, not really a super biologically plausible way of implementing neuromodulation, but it's an easier way and it brings us back closer to the GRU, and yeah, so now the only difference to the GRU is the fact that where there was a sigmoid, now it's a 1 plus tanh, okay, I find this pretty cool, so now also the only difference here is this property of bistability, this is the only difference, and now we can actually compare, so let's compare, they first do these sort of benchmarks, which are pretty neat, so they have this first benchmark where it's the copy first input benchmark, I'm having some trouble here moving this paper around with my fingers, so the copy first input benchmark is simply a time series, in this benchmark the network is presented with a one-dimensional time series of T time steps and each entry is a random number, after receiving the last time step the network output value should approximate the very first input step, okay, so all the network needs to do is remember the first thing it sees, and that should be learnable, right, that should be learnable because, so it's not specified whether the zeroth hidden state, the initial hidden state, is given into the network, but technically it doesn't matter because it can just learn whatever that is, it can learn to have a designated bit in this hidden state, so this hidden state is of size 100 I believe, one designated bit in the hidden state of whether it has already encountered the first thing or not, if it has not encountered it, that means it's at the first time step, therefore it should incorporate the new information into the hidden state and also set this bit, and then for each subsequent step it can see, I've already set this bit, and it can simply close that gate that makes it incorporate new information, so it should be able to carry this information all the way to the end by simply always closing that gate after the first step, and what happens, so as you can see, all the results are up here, so they train it for 300,000 gradient descent iterations, and you can see that when the series are pretty small the LSTM or the GRUs tend to perform well, but you can see that these BRCs, they don't perform poorly, they're just performing worse, right, it's still in the 0.01 regime or something like this of error, however when you go up to like 300 steps then you can see the GRUs and the LSTM, they start to fail, because they are not made explicitly to remember for that long, they don't have this bistability property, whereas now these things excel, you can see they're still pretty low, and at 600 steps these things completely fail, they completely forget the input.
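Generating this benchmark is a one-liner; a sketch (the Gaussian input distribution is an assumption, the paper may scale things differently):

```python
import numpy as np

def copy_first_input_batch(batch, T, seed=0):
    # One-dimensional series of T random values; the target is the first entry.
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(batch, T, 1))
    y = x[:, 0, 0]
    return x, y

x, y = copy_first_input_batch(32, 600)  # T = 600: where LSTM/GRU reportedly fail
```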
Now we can actually compare, and they first run a couple of benchmarks, which are pretty neat. The first is the copy-first-input benchmark. Here the network is presented with a one-dimensional time series of T time steps, where each entry is a random number; after receiving the last time step, the network's output should approximate the very first input. So all the network needs to do is remember the first thing it sees, and that should be learnable. It's not specified whether the initial hidden state is given to the network, but technically it doesn't matter, because the network can learn to reserve a designated bit in the hidden state (which is of size 100, I believe) that marks whether it has already encountered the first input. If the bit isn't set, the network is at the first time step, so it should incorporate the new information into the hidden state and set the bit; for every subsequent step it can see that the bit is set and simply close the gate that lets in new information. That way it can carry the first input all the way to the end by always keeping that gate closed after the first step.

And what happens? The results are all up here, after training for 300,000 gradient descent iterations. When the series are pretty short, the LSTMs and GRUs tend to perform well; the BRCs don't perform poorly, they just perform worse, with the error still in the 0.01 regime or so. However, at around 300 steps the GRUs and LSTMs start to fail, because they're not built to explicitly remember for that long; they don't have the bistability property. This is where the new cells shine: their error stays pretty low. At 600 steps the classic cells fail completely and forget the input, while the nBRC at least is still able to remember the first input pretty well. Still on this benchmark, you can also see that even at 100 steps, where the GRU does learn the task, it learns it much, much later than the BRC, which learns it pretty fast; only when the series are just five steps long does the GRU slightly outperform the BRC.

The general notion here is that the classic cells are more powerful on classic tasks, whereas the new cells shine exactly where the classic cells fail because they can't remember things for very long. These new cells are not state-of-the-art yet; possibly there are still modifications to be made. We've had a pretty long history of optimizing GRUs and LSTMs; they haven't always worked as well as they do now, we've simply learned how to handle them. I expect that if these cells take off, especially the nBRC, then with time we'll become just as proficient at handling them, and they'll probably reach parity with or even outperform LSTMs and GRUs on everyday tasks while being especially good whenever you have to remember things. For now, though, they're outperformed by LSTMs and GRUs.
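If you want to play with this copy-first-input task yourself, the data is trivial to generate. A sketch, my own and assuming standard Gaussian inputs:

```python
import numpy as np

def copy_first_input_batch(batch_size, T):
    """Each sample is a length-T series of random numbers;
    the regression target is the very first entry of the series."""
    x = np.random.randn(batch_size, T, 1).astype(np.float32)
    y = x[:, 0, 0]  # the network must carry this value to the end
    return x, y

x, y = copy_first_input_batch(32, 300)  # 300 steps: where GRUs/LSTMs start to fail
```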
The second experiment is more interesting: the denoising benchmark. They note that the copy-input benchmark highlights the memorization capacity of the recurrent network, but it does not test its ability to exploit complex relationships between different elements of the input signal to predict the output. So they introduce a new benchmark: the network is presented with a two-dimensional time series of T time steps, and five different time steps are sampled uniformly and marked for the network. Let me just tell you what's going on. The series is two-dimensional: in the first dimension you simply have a bunch of random numbers, think 5, 8, 2, 9, 3, 4, 0, 2 and so on, though they're actually sampled from a Gaussian. In the second dimension you have a -1 almost everywhere, a 1 at the five marked positions, and a 0 at the very last step, which is simply the end-of-sequence marker. What the network needs to do is output, in order, all the values of the first dimension where the second dimension had a 1 (in the little example here, 9 and 4). So think about what it needs to learn: every time it sees a 1 in the second dimension, it has to take the value in the first dimension, put it somehow into the hidden state, and carry that hidden state forward. When it sees a 1 again, it has to put the second value into the hidden state as well, without overriding the first one; if it just naively stored each new value, it would almost surely override the previous information. It needs to be able to say: I've already stored a bunch of stuff in this part of my hundred-dimensional hidden vector, so maybe I should store the new thing over here. These are fairly complex things to remember, and technically GRUs and LSTMs are able to do it, but as we'll see, not as well.

The results are in this table, and the key is the parameter N, which controls how far from the end the marked ones can be. When N is 0, the ones can be anywhere; when N is, say, 5, the last five time steps surely don't contain a one, so only the first T minus 5 steps can. The higher N is, the harder the task, because your learning signal sits farther and farther away from the point where you produce the output. You can see that when N is low, the GRUs and LSTMs perform pretty well, and the new cells also perform well, just not quite as well. However, when the task gets harder and you actually need to learn from a sparse signal over a long period of time, with no signal in between, the GRUs and LSTMs fail while the BRCs are still able to learn. That's fairly cool. From a researcher's perspective, I wonder if they first tried the task as I described it, discovered the classic cells can still do it, and then asked how to make the task harder so that a difference shows up. Did they always have the idea with the N, or did they introduce it only after failing to produce a difference in the first place? I'm not sure. In any case, they show that these cells can incorporate this information and reason about what they need to remember.
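The data for this denoising benchmark is also easy to sketch. The marker conventions below follow the description above, while the exact placement details (how the markers avoid the last N steps and the end slot) are my guess at the setup rather than the paper's code:

```python
import numpy as np

def denoising_batch(batch_size, T, n=0, k=5):
    """Dim 0 holds random values; dim 1 is -1 everywhere except at k
    marked positions (1) and the end-of-sequence marker (0). The target
    is the k marked values in order. The last n steps are kept free of
    markers, so a larger n means a longer wait before any feedback."""
    x = np.full((batch_size, T, 2), -1.0, dtype=np.float32)
    x[:, :, 0] = np.random.randn(batch_size, T)
    y = np.zeros((batch_size, k), dtype=np.float32)
    for b in range(batch_size):
        pos = np.sort(np.random.choice(T - n - 1, size=k, replace=False))
        x[b, pos, 1] = 1.0   # mark the k values the network must remember
        y[b] = x[b, pos, 0]
    x[:, -1, 1] = 0.0        # end-of-sequence marker
    return x, y
```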
They have one more benchmark: sequential MNIST, where an MNIST digit is fed in pixel by pixel and the network's final output has to be the class of the digit. Here they have a parameter called N_black: they take an MNIST digit, say a three, unroll it into a single vector, feed it step by step into the recurrent network, and then attach a certain number of empty, black pixels; only after those does the network have to predict the class. If they ask for the class immediately after the digit is done, the GRU and the LSTM perform fairly well, as do the BRCs. But if you attach a whole bunch of black pixels (remember, an MNIST digit has 784 pixels, so attaching 300 black pixels is quite significant relative to the length of these sequences), the GRUs and LSTMs can't learn anymore. They can't learn to ignore the padding, because the learning signal is just too far away, but the new cells can, because they exploit the bistability property and simply remember. Again, I wonder how this experiment came to be; it seems pretty funny.

The last thing they do is investigate what happens inside their cells, and this, I feel, is the most interesting part. They do it on the denoising benchmark, the task we looked at before, where you need to remember five randomly selected numbers indicated by the second dimension. They show a sequence where the five marked numbers occur at steps 3, 100, 246, 300 and 376 (the five positions where the sequence tells the network to remember the value in the first dimension and later output it), and they analyze two things. First, the proportion of bistable neurons: they look at the a quantities and count how many neurons in a layer have an a higher than one, which means they are in the bistable mode. Second, the average value of c: remember, if c is high the cell doesn't let in new information, and if it's low it does.

If you first look at c, you can see that every single time the second dimension marks one of the inputs to remember, the network immediately drops the c values. The different colors are different layers; the network has multiple layers of these cells, as is usual in recurrent neural networks. This c goes up pretty quickly, but as soon as one of the marked inputs appears, c drops, which means the network realizes it must now let in the new information, and then c immediately shoots back up. So as long as the inputs carry a -1 in the second dimension, the network recognizes there's no reason to incorporate that information, and as soon as a marked input arrives, c shoots down again. Looking at the last layer of the network, the highest layer with the most abstract information, you can see that from input to input the value of c gets higher and higher: the spikes still go down, but they go down to a higher and higher point. The network recognizes it needs to let in new information, but it lets in less and less of it the more things it already has to remember. So not only does it recognize "I need to remember this", it also recognizes "I probably shouldn't completely forget what I stored previously, because that's important too". That's a pretty cool demonstration: the fact that c drops at the marked inputs, and that it generally settles higher after each new input has been incorporated into the hidden state, is a pretty good indication that what they claim is really happening.

The second plot shows almost the same story for the number of neurons that are actually in the bistable mode. Especially in the last layer, the number of bistable neurons goes up and up after each of these steps, and the spikes again correspond exactly to the points where the network has to let in new information. I find this last experiment the coolest, because they can actually show: look, here's a pretty good indication that the thing we built does what we say it does. They also have a proof of the bistability for a greater than one; I won't go through it here, but if you want, you can look at it.
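As a rough illustration of what that analysis computes, here is how you might track those two quantities with the NBRCell sketched earlier. This is again my own illustration of the idea, not the authors' analysis code:

```python
import torch

def nbrc_diagnostics(cell, x_seq, h):
    """Per time step: the fraction of units in the bistable regime (a > 1)
    and the mean value of the c gate, mirroring the paper's analysis plots."""
    frac_bistable, mean_c = [], []
    with torch.no_grad():
        for x in x_seq:  # x_seq: iterable of (batch, input_size) tensors
            a = 1 + torch.tanh(cell.Ua(x) + cell.Wa(h))
            c = torch.sigmoid(cell.Uc(x) + cell.Wc(h))
            h = c * h + (1 - c) * torch.tanh(cell.Uh(x) + a * h)
            frac_bistable.append((a > 1).float().mean().item())
            mean_c.append(c.mean().item())
    return frac_bistable, mean_c
```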
I'm excited to see what happens with these kinds of architectures in the future, because this is a pretty minor modification, and maybe with a bit more tuning, if we figure out what it takes to make these cells compete with the classic GRUs and LSTMs in regimes where long memory isn't necessary, this could become a kind of standard building block in the recurrent neural network toolkit, even though recurrent networks have been sort of outperformed by transformers in recent years. Alright, that was it for me. I hope you had fun with this paper; I invite you to check it out. Bye bye.
"start": 2494.68, "end": 2501.2, "text": " be able to learn these kinds of things so that's that's fairly cool now it's if" }, { "start": 2501.2, "end": 2506.2, "text": " from a researcher's perspective I wonder if they just first tried this task you" }, { "start": 2506.2, "end": 2510.2799999999997, "text": " know as I described it and then they discovered like ah crap they can still" }, { "start": 2510.2799999999997, "end": 2514.96, "text": " do it and like okay how can we make it such that there's a difference okay" }, { "start": 2514.96, "end": 2519.24, "text": " let's actually make the task harder like this and then they did that I wonder if" }, { "start": 2519.24, "end": 2526.04, "text": " they always had the idea with the end here or just introduced this after after" }, { "start": 2526.04, "end": 2531, "text": " it they they failed to produce a difference in the first place I'm not" }, { "start": 2531, "end": 2534.9199999999996, "text": " sure but they have they have another benchmark but they basically show that" }, { "start": 2534.92, "end": 2539.8, "text": " these cells are actually good can incorporate this information can reason" }, { "start": 2539.8, "end": 2544.4, "text": " about what they need to remember and whatnot and in the end they also have" }, { "start": 2544.4, "end": 2549.7200000000003, "text": " this sequential MNIST where they just feed an MNIST digit digit by digit and" }, { "start": 2549.7200000000003, "end": 2555.2000000000003, "text": " at the end I think that the output of the neural network needs to be the class" }, { "start": 2555.2000000000003, "end": 2561.96, "text": " of the of the MNIST digit and again here they have a parameter called N black" }, { "start": 2561.96, "end": 2567.92, "text": " which means that so they have an MNIST digit it's like a three they unroll it to" }, { "start": 2567.92, "end": 2574.64, "text": " a single vector right they feed this one by one into the recurrent network and" }, { "start": 2574.64, "end": 2580, "text": " then after that they attach a certain number of just empty pixels black pixels" }, { "start": 2580, "end": 2586.4, "text": " and after that the network needs to predict the Y you can see if they ask" }, { "start": 2586.4, "end": 2591.7200000000003, "text": " the network the class of the digit immediately after it's done then the G" }, { "start": 2591.72, "end": 2598.3599999999997, "text": " are using the LSTM perform fairly well as do the BRCs but if you attach a whole" }, { "start": 2598.3599999999997, "end": 2603.6, "text": " bunch of these black pixels remember an MNIST digit has some seven sorry seven" }, { "start": 2603.6, "end": 2612.3999999999996, "text": " hundred and eighty four maybe entries so attaching 300 black pixels is quite" }, { "start": 2612.3999999999996, "end": 2617.64, "text": " significant in in terms of the length of these sequences and then the GRUs and" }, { "start": 2617.64, "end": 2623.64, "text": " the LSTMs they can't learn they can't learn to ignore these things because the" }, { "start": 2623.64, "end": 2630.72, "text": " learning signal is just too far away right here but these things they can" }, { "start": 2630.72, "end": 2636.6, "text": " because they can exploit this by stability property and remember things" }, { "start": 2636.6, "end": 2642.16, "text": " again I wonder how this came to be it seems pretty funny but the last thing" }, { "start": 2642.16, "end": 2647, "text": " they do is they investigate what happens in their cells and this I feel is the" }, { "start": 2647, 
"end": 2651.96, "text": " most interesting part and they do this on this denoising benchmark so the task" }, { "start": 2651.96, "end": 2656.96, "text": " we've looked at before where you need to remember five randomly selected numbers" }, { "start": 2656.96, "end": 2662.4, "text": " that are indicated by the second dimension here they show a sequence" }, { "start": 2662.4, "end": 2670.12, "text": " where the five numbers occur at 3100 246 at 300 and at 376 so these are the five" }, { "start": 2670.12, "end": 2675.48, "text": " positions where the sequence indicates that the network should remember the" }, { "start": 2675.48, "end": 2681.88, "text": " thing in the first dimension and then output they analyze two things they" }, { "start": 2681.88, "end": 2685.76, "text": " analyze the proportion of bi-stable neurons so basically they analyze these" }, { "start": 2685.76, "end": 2691.96, "text": " out these a quantities and they analyze how many of the neurons in the layer" }, { "start": 2691.96, "end": 2696.68, "text": " have an a that's higher than one which means that they are in this bi-stable" }, { "start": 2696.68, "end": 2702.68, "text": " mode and also they analyze what's the average value of C so see if you" }, { "start": 2702.68, "end": 2707.68, "text": " remember if this is high it means it doesn't let in new information and if" }, { "start": 2707.68, "end": 2712.6, "text": " this is low it means it lets in new information if you first look at the C" }, { "start": 2712.6, "end": 2717.7999999999997, "text": " you can see that every single time when the second dimension indicates that this" }, { "start": 2717.7999999999997, "end": 2722.8399999999997, "text": " is one of the inputs to remember this the network drops immediately drops the" }, { "start": 2722.8399999999997, "end": 2726.8399999999997, "text": " C values the different colors here are different layers they build they have a" }, { "start": 2726.8399999999997, "end": 2732.52, "text": " recurrent network has multiple layers of these cells as is usual in the" }, { "start": 2732.52, "end": 2738.52, "text": " recurrent neural networks so this C as you can see it goes up pretty quickly" }, { "start": 2738.52, "end": 2744.52, "text": " but then as soon as one of these inputs appear the C drops which basically means" }, { "start": 2744.52, "end": 2749.8, "text": " that the network realizes it now must let in the new information and then it" }, { "start": 2749.8, "end": 2756.44, "text": " immediately shoots back up makes it seem like so the network says okay as long as" }, { "start": 2756.44, "end": 2760.52, "text": " so all of these inputs here they have the negative one in the second dimension" }, { "start": 2760.52, "end": 2764.56, "text": " right so it recognizes it says there's no reason for me to incorporate that" }, { "start": 2764.56, "end": 2769.36, "text": " information it's not important and as soon as the second input comes it" }, { "start": 2769.36, "end": 2774.52, "text": " immediately shoots down again now you can see this here is the last layer of" }, { "start": 2774.52, "end": 2780.24, "text": " the network the highest layer so sort of the highest abstractive information and" }, { "start": 2780.24, "end": 2786.8, "text": " you can see that from input to input this value of C gets higher and higher" }, { "start": 2786.8, "end": 2791.6800000000003, "text": " and these spikes as they go down but they go down to a higher and higher" }, { "start": 2791.6800000000003, "end": 2798.5600000000004, "text": " point which you 
know is is the fact that it recognizes it needs to let in new" }, { "start": 2798.5600000000004, "end": 2805.4, "text": " information but it lets in less and less new information the more things it needs" }, { "start": 2805.4, "end": 2808.6000000000004, "text": " to remember so not only does it recognize wait I need to remember this" }, { "start": 2808.6000000000004, "end": 2813.0800000000004, "text": " it also recognizes but I probably shouldn't shouldn't you know completely" }, { "start": 2813.08, "end": 2819.84, "text": " forget what I had previously because I it is important for me to remember these" }, { "start": 2819.84, "end": 2824.48, "text": " previous things so that's a pretty cool demonstration the fact that these go" }, { "start": 2824.48, "end": 2830.2799999999997, "text": " down at the input and the fact that generally they go up every time after a" }, { "start": 2830.2799999999997, "end": 2835.7599999999998, "text": " new input is incorporated into the hidden state this basically this shows" }, { "start": 2835.7599999999998, "end": 2841.7999999999997, "text": " that the or this is a pretty good indication that what they're saying is" }, { "start": 2841.8, "end": 2848.2400000000002, "text": " really happening right okay the second thing shows almost the same it shows how" }, { "start": 2848.2400000000002, "end": 2852.6000000000004, "text": " many of these neurons are actually in their bi-stable mode and you can also" }, { "start": 2852.6000000000004, "end": 2860.2400000000002, "text": " see right here that especially in the last layer you can see that the number" }, { "start": 2860.2400000000002, "end": 2866.1200000000003, "text": " of neurons in the bi-stable mode goes up and up and up and up after each of these" }, { "start": 2866.1200000000003, "end": 2871.6400000000003, "text": " steps and these spikes here correspond to always the points where they have to" }, { "start": 2871.64, "end": 2881.3199999999997, "text": " let in new information okay cool so I find that I find this to be pretty cool" }, { "start": 2881.3199999999997, "end": 2885.52, "text": " and I find this last experiment to be the coolest where they can actually show" }, { "start": 2885.52, "end": 2891, "text": " look here there's a pretty good indication that the thing we we build" }, { "start": 2891, "end": 2897.92, "text": " does what we say it does they also actually have a proof here of the" }, { "start": 2897.92, "end": 2903.76, "text": " bi-stability when this a is higher than one I won't go through this right here" }, { "start": 2903.76, "end": 2909.7200000000003, "text": " but if you want you can look at that I'm excited to see what happens with these" }, { "start": 2909.7200000000003, "end": 2913.6, "text": " kinds of architectures in the future because it seems to be a pretty minor" }, { "start": 2913.6, "end": 2919.16, "text": " modification and maybe with a little bit of more modification or if we sort of" }, { "start": 2919.16, "end": 2923.2000000000003, "text": " just tune this a little bit and kind of figure out what we have to do to make" }, { "start": 2923.2, "end": 2929.52, "text": " these things actually compete with the classic GRUs and LSTMs in regimes where" }, { "start": 2929.52, "end": 2934.3599999999997, "text": " a long memory isn't necessary I feel this could be a you know kind of a" }, { "start": 2934.3599999999997, "end": 2940.04, "text": " standard building block in the recurrent neural network toolkit even though it's" }, { "start": 2940.04, "end": 2944.9199999999996, "text": 
" been sort of outperformed by transformers in previous years alright" }, { "start": 2944.9199999999996, "end": 2950.02, "text": " that was it for me and I hope you had fun with this paper I invite you to" }, { "start": 2950.02, "end": 2953.56, "text": " check it out and bye bye" } ]
AR3W-nfcDe4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Auditing Radicalization Pathways on YouTube
[ "Science & Technology" ]
[ "machine learning", "data science", "empirical", "study", "youtube", "radicalization", "alt-right", "alt-lite", "idw", "intellectual dark web", "alt right", "alt lite", "jordan peterson", "joe rogan", "pipeline", "recommended", "network", "diffusion", "social graph", "infected", "ideology", "radical", "analysis", "suggested", "filter bubble", "fringe" ]
This paper claims that there is a radicalization pipeline on YouTube pushing people towards the Alt-Right, backing up their claims with empirical analysis of channel recommendations and commenting behavior. I suggest that there is a much simpler explanation of this data: A basic diffusion process. Abstract: Non-profits and the media claim there is a radicalization pipeline on YouTube. Its content creators would sponsor fringe ideas, and its recommender system would steer users towards edgier content. Yet, the supporting evidence for this claim is mostly anecdotal, and there are no proper measurements of the influence of YouTube's recommender system. In this work, we conduct a large-scale audit of user radicalization on YouTube. We analyze 331,849 videos of 360 channels which we broadly classify into: control, the Alt-lite, the Intellectual Dark Web (I.D.W.), and the Alt-right; channels in the I.D.W. and the Alt-lite would be gateways to fringe far-right ideology, here represented by Alt-right channels. Processing more than 79M comments, we show that the three communities increasingly share the same user base; that users consistently migrate from milder to more extreme content; and that a large percentage of users who consume Alt-right content now consumed Alt-lite and I.D.W. content in the past. We also probe YouTube's recommendation algorithm, looking at more than 2 million recommendations for videos and channels between May and July 2019. We find that Alt-lite content is easily reachable from I.D.W. channels via recommendations and that Alt-right channels may be reached from both I.D.W. and Alt-lite channels. Overall, we paint a comprehensive picture of user radicalization on YouTube and provide methods to transparently audit the platform and its recommender system. Authors: Manoel Horta Ribeiro, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, Wagner Meira https://arxiv.org/abs/1908.08313 YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Minds: https://www.minds.com/ykilcher BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/
Hi there! Today we're going to look at Auditing Radicalization Pathways on YouTube by Manoel Horta Ribeiro et al. So this paper is a bit different from the ones we usually look at, but since I'm a YouTuber and this is in the data science realm, I thought it fits neatly. So yeah, we'll have a look. And this is mostly going to be an analysis and my opinion on it, so take that for what it is. This is, in my opinion, a paper where you can see very well what it looks like when you deceive yourself: when you have a hypothesis of something and then only collect data that matches it, and you don't think of simpler explanations for the data, and therefore you don't think of experiments that could differentiate the simple explanations from what you propose. So it's a good example of how you can kind of trick yourself into believing you found something. And this isn't about YouTube specifically; this has happened to me so many times. It always pays off to take a step back and ask: is there a simpler explanation for what's happening? And that is what I think is exactly happening here. So I'll present to you their hypothesis, and then I'll present to you what I think is going on, and a model that explains the data much more simply and actually better. So let's dive in. This paper basically claims the following. On YouTube there are channels, and channels are, you know, independent channels; they make videos, and you can actually arrange these channels. So each dot here is a channel. You can arrange these channels in kind of a network. Two channels can be considered connected, and there can be a connection strength or whatever. For simplicity, they can be connected if, for example, their topics are similar, if they reference each other, if they are recommended by YouTube from each other, or if the same users watch the videos of both channels. There are a number of metrics by which you could make channels connected, but all of them will turn out similar, like they will give you a similar structure of channels being connected. Oh, that's connected twice. So you can kind of build a graph of how these channels are connected, and what you can do then is cluster them. You don't have to build a graph to cluster them, but you can cluster the channels, and what will emerge are parts of the graph that are very well connected. Right here, this might be connected with this and with this. Parts of the graph that are very well connected within themselves and more sparsely connected to others, which also have a larger distance between them. So if you start out from one channel and you're kind of watching recommended videos and recommended channels and so on, you'll stroll along here; you will get to these things much faster than to the other things. So these are usually called communities in this kind of social network analysis. On YouTube, you know, there is a community for makeup, there's a community for sports; within sports there is a community for soccer, there's one for basketball and so on. So these are all communities that you can discover by clustering. This paper mainly deals with three communities. The first is the IDW, which is the intellectual dark web. They discuss this here.
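To make the clustering idea concrete, here is a minimal sketch of what such community detection could look like. The channel names and overlap weights are made up, and greedy modularity maximization is just one plausible choice of algorithm; the paper itself is not tied to this exact method:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Made-up channel pairs with a similarity weight (e.g. audience overlap).
edges = [
    ("channel_a", "channel_b", 0.9),
    ("channel_b", "channel_c", 0.8),
    ("channel_c", "channel_a", 0.7),
    ("channel_d", "channel_e", 0.85),
    ("channel_e", "channel_f", 0.75),
    ("channel_c", "channel_d", 0.05),  # weak bridge between two dense groups
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Greedy modularity maximization groups densely connected channels together.
communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")
```

On this toy graph, the two densely connected triangles come out as two communities, with the weak bridge between them ignored; that is exactly the "well connected within, sparsely connected to others" structure described above.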
So the intellectual dark web they describe as a group of individuals that are in a rolling conversation with each other about topics that are, let's say, usually kind of difficult to talk about, such as gender differences, or intelligence research in certain areas, or even, you know, regular politics. The intellectual dark web is a wide variety of people that are basically conversing with each other about such topics. The description is a bit vague, but the main aspect is conversation, and maybe topics that are kind of on the edge of what's acceptable to talk about. But the opinions range widely on these topics. The second group is the alt-right. And the alt-right here is defined as ethno-nationalists; for example, here: fringe ideas such as a white ethno-state, white supremacist ideology and so on. So specifically ethno-nationalists: nationalists who think nations should be organized along the lines of ethnicity. And the goal of the paper is actually to show that there is a kind of dangerous pipeline on YouTube that will drive people to the alt-right and into these radical ideas of the alt-right. Kind of in between is the alt-light, which is here defined as civic nationalists, which, as I understand it, simply means that people should be organized into nations not along ethnicity, but should just organize themselves into sovereign communities. That would be more your libertarian, classically liberal people, whereas the alt-right would be more your, let's say, authoritarian right-wing person. Besides these three communities, they have a fourth community, which is what they call a control group. The control group consists of what they say are kind of mainstream channels on YouTube, simply to differentiate them from these three and to see what's going on with them and whether there is a difference. So this is kind of the setup, and as I said, the hypothesis is the following. People go on YouTube (so YouTube is here), they go around, they explore a bit, and all of a sudden they find IDW videos. These are recommended by YouTube on a fairly regular basis; that may mean they're interesting, people find them interesting and so on. And then from the IDW there are recommendations and links to the alt-light. So as I read this paper, there is kind of an undertone that the IDW and the alt-light are still okay: they discuss ideas that are sometimes political and so on, but the real worry is the alt-right, the kind of radical right-wing ethnic nationalists. And I mean, yes, that formulation I can agree with. And then they claim: so you find the IDW, and they have links, I mean recommendations and so on, to the alt-light. And from the alt-light, and to a certain degree also from the IDW, you can then find the alt-right. So even though a user that goes on YouTube at first isn't likely to find alt-right videos, because they're fringe, extreme and so on, through the YouTube recommendation algorithm, basically by going to the IDW, then from there finding the alt-light, and from there and from the IDW finding the alt-right, they claim that there's this pathway of radicalization that kind of pushes people towards the alt-right. And that's their hypothesis. They claim that they have evidence to support this, and I claim that there is a simpler explanation, namely...
So first of all, let me state: I don't like the alt-right. I think their ideas are despicable. It should go without saying, though I have said it now, so you know, just as a disclaimer, I'm not defending anyone here. I'm simply saying this paper's data has a simpler explanation. Namely, what I think is happening here is: YouTube, again, is channels. Each dot here is a channel. Channels can be clustered as such, right there, as we saw before. I'm just drawing more of them right now: channels, channels, channels, channels. So what I think is happening is there is a control group, what they call the control group, over here; it's large, right? It's a bunch of channels, which is kind of mainstream media. Then over here there is, let's say, alternative media, where all of these three groups belong. So at some point you will have the IDW, maybe a bit further away from the control group; very close to the IDW you would have the alt-light; and very close to the two, maybe here, you would have the alt-right. So notably, in my model, the IDW and the alt-light are kind of close together in terms of comparative distance. So if you cluster these channels, by, let's say, audience or topics and so on, it will turn out that all of these three are far, far away from the control group, those two are very close to each other, and then here there is some distance, but how much distance is a question. Of course it's going to be a smaller distance than the distance to the control group here. I mean, a more accurate picture of the alt-right would maybe be something like this. Whatever; the details don't matter, but the distance here is smaller than the distance to the control group. In this model a second thing is also important: the alt-right, as you can see here, is much, much smaller than the IDW and the alt-light, and these again are much smaller than the control group. And this, I think, accounts for most of it: the distance relations between these clusters and the sizes of the clusters (by size I mean mainly number of channels and also audience) account for the data better than their model. So just keep this in mind. My model, of course, doesn't include any kind of pipeline like the one they suggest. So first of all, they go ahead and collect channels. They collect data for this, and you know, we could go over how they collect the data and criticize that and so on. They do human annotation and they start from already published reports and so on, which themselves can be criticized. I'm not going to go into their data collection methodology; it can have mistakes, but then any collection methodology can have mistakes. What they end up with is a number of channels, and here are the top channels from each category. As you can see: alt-right, alt-light, intellectual dark web, and control. Already here you can see pretty clearly the model I have in mind; they acknowledge all of this, by the way. Look at the size of the biggest alt-right channels compared to the size of the alt-light and the intellectual dark web: they're much, much smaller in number of views. And then compare this to the size of the control group; the control group again is larger than the other two groups. So just keep it in mind. Second thing to keep in mind: look at these channels. Maybe you know some of them: Joe Rogan, Sargon of Akkad, Paul Joseph Watson, Styxhexenhammer.
These are YouTubers: individuals making YouTube clips, creating content for YouTube, being on this platform. Whereas if you compare it with the control group, what's here? Vox, GQ, Wired, Business Insider. These aren't YouTubers. These are websites, or traditional media companies, or their own kind of blogs and so on, that have a YouTube channel, where YouTube is one of the outlets of the media company. So I think there's a giant discrepancy here in the control group that can explain some of the data that you see. So keep that in mind. They say they don't try to capture the user dynamics with the control group, but I think that there are many problems with this control group, including the fact that these are kind of traditional mainstream media that just have YouTube as an outlet. Moreover, a lot of these, like Vox or Vice, are clickbait and rage-bait media. That has worked as a business model for a number of years, but the algorithms are becoming more attuned to clickbait, and these channels are crashing fast, whereas the more YouTuber-like people are not as susceptible to this decline of clickbait. Alright, so this is the data. They have all these channels, they have all these videos, and they first of all give some stats on them. On the bottom is always the year, so they do this over time, and you see the active channels, which are channels that have uploaded videos within some time window. See, the control group again is larger, but has started to flatten out in the last few years, whereas these communities are relatively flourishing. Another interesting point is that the paper somehow tries to tie this to the election of Donald Trump in 2016, but I think this is just kind of in there to gain relevance. A lot of these trends, you'll see, already start before that. So the start of the rise here, these bumps here and so on, a lot of them start before 2016. So as we go through this, make up your own mind about how much this is actually tied to the election or not. I think it's much more about the years when clickbait started to go down as a business model. Never mind, though. So the active channels are growing, though the control group is not growing as much. Videos published: even though the control group isn't growing so much, they still publish the most videos. But you can see, generally the site is growing; generally YouTube is growing. Like counts: here you see something interesting starting to happen, namely these communities, especially the alt-light and the intellectual dark web, are starting to catch up. And this is one of the things the paper also states: if you look at, for example, comments per video, the alt-light and the intellectual dark web outperform the control group vastly. Also if you look at views per video and likes per video: the control group channels simply don't have an engaged audience, which I think is, first of all, because they produce clickbait; second of all, they're just not that interesting; and third of all, they're not YouTubers. This isn't their thing; they're simply an outlet. But yeah, so that's just a bunch of metrics that they show here. The next table is a bit more interesting. In the next table they do a user intersection. So what they do is they collect all these videos and then they collect all the comments on these videos. And a comment, of course, always comes with a username; you need to be logged into YouTube to make a comment.
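Just to make that setup concrete: from such comment data you would typically build one user set per community, something like the sketch below. The records here are made up (the paper processes 79 million real comments), and the intersections of these sets are what the similarity metrics that follow operate on:

```python
from collections import defaultdict

# Made-up comment records: (username, community of the commented video).
comments = [
    ("alice", "IDW"), ("alice", "alt-light"),
    ("bob", "control"), ("bob", "control"),
    ("carol", "alt-light"), ("carol", "alt-right"),
    ("dave", "IDW"),
]

# One set of commenting users per community.
users_by_community = defaultdict(set)
for user, community in comments:
    users_by_community[community].add(user)

print({c: sorted(u) for c, u in users_by_community.items()})
```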
And they see which users comment on multiple videos, or on videos of multiple categories, and then they can look at how many users of category A also comment in category B and vice versa. So they have two metrics here: the Jaccard similarity, which for two communities A and B is the number of users commenting on both A and B divided by the number of users commenting on A or B; and the second, the overlap coefficient, which is the number of users commenting on both A and B divided by the size of the smaller of the two user sets. They say that the overlap coefficient is more useful for comparing communities of different sizes. So we'll look at that. The top graphs are always the Jaccard similarity and the bottom ones are the overlap coefficient. The first graphs, though, are the number of commenting users per year. And you already see that even though the control group has many more views and probably many more videos, being much larger, the comments don't follow: the users of the alt-light and the intellectual dark web are much more engaged. Also comments per user, this is the cumulative distribution function: most people that comment on control group videos maybe comment once, but in these other communities they comment more. Self-similarity means year after year: always compared to the year before, how many users are the same; so, how well do these communities retain users? And you can already see here: the control group is actually very bad at retaining users. It does have a high overlap coefficient, but it has a low Jaccard self-similarity, which basically, if you think of the formula of the Jaccard similarity, means that the intersection is small while the union is large, which means that last year's users aren't this year's users. So they constantly have to appeal to new users, because they're losing old users, because, well, I guess they're boring. Whereas the alt-light and the intellectual dark web are much better at retaining users. Interestingly, the alt-right is not as good at retaining users as the other two. This could also be an effect of size: if your community is smaller, the users might wander away more quickly. But I think this already speaks against the radicalization pipeline: if YouTube were radicalizing people towards the alt-right, I think we would see the alt-right on top of user retention. Then here they have intersections between communities. So green here is alt-light and IDW, while one blue is alt-right and alt-light and the other blue is alt-right and IDW; so basically the green is alt-light and IDW, and the blues are the other two. And we see that the overlap in terms of the overlap coefficient is similar, but in terms of Jaccard similarity, the alt-light and the IDW share users much more, which, in the picture I painted, makes sense if you think my model is valid. My model explains this very well, in that these two communities are quite close together and therefore share a similar user base. The alt-right is smaller and a bit further apart, therefore not as similar, though more similar than to the control group, which is the last graph. Sorry, the last graph is how similar these communities are to the control group, and here we see the IDW and the alt-light are kind of similar; the alt-right not as similar, though in the overlap coefficient they're about the same. So the paper here claims: oh, look at the similarity, this is definitely a radicalization.
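Going back to the two metrics for a second: both are one-liners, written here exactly as defined above, with made-up user sets. The example also shows why the overlap coefficient is the better choice for very differently sized communities:

```python
def jaccard_similarity(a: set, b: set) -> float:
    """Size of the intersection divided by the size of the union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def overlap_coefficient(a: set, b: set) -> float:
    """Size of the intersection divided by the size of the smaller set."""
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

# Made-up user sets of very different sizes:
alt_right = {"u1", "u2", "u3"}                      # small community
alt_light = {"u1", "u2", "u3", "u4", "u5",
             "u6", "u7", "u8", "u9", "u10"}         # larger community

print(jaccard_similarity(alt_right, alt_light))     # 0.3, dominated by the big set
print(overlap_coefficient(alt_right, alt_light))    # 1.0, small set fully contained
```

The Jaccard value is dragged down by the size of the larger set even when the smaller community is entirely contained in it, while the overlap coefficient reports that containment directly.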
So they don't claim yet that this is a radicalization pipeline, but they claim that there's a higher similarity. If you actually look at the numbers, it's not so clear: I mean, here you're around 50% similarity, and here at the end you're also around 50% similarity with the control group; so this is within these groups, and this is with the control group. Also here, if I look at the kind of mean, you're at whatever, 18-20%, and here you may be a bit lower, but you're also going towards that. What it looks like to me, rather than there being a radicalization pipeline: if you look at the shape of this, and where it starts in 2013-2014, it starts to go up here; and if you look at the shape of this, it's simply the same shape, delayed. I mean, there's no reason why this graph wouldn't go up in the future and reach the exact same numbers as here. It seems that the graph is simply shifted, which makes total sense if you think these communities are (I'm going to draw the same picture here) right: IDW, alt-light, and over here control. If you think they're laid out like that, then simply think: well, YouTube is growing, users are growing, users are starting somewhere and then spreading out pretty much randomly. Users start here and spread out, users start here and spread out, spreading out everywhere. There's a diffusion process going on, not in a particular direction like they claim. If there is just a diffusion process going on, what would you expect? You would expect users that started here to reach the IDW and the alt-right much sooner than they reach the control group; but ultimately, as the diffusion continues, all users will have commented on most videos if you run YouTube infinitely, and these numbers would converge. That's why the numbers go up: if you just let it go, the diffusion process will go along, and it simply takes a longer time to go from here all the way over here than it takes between these communities. So to me, we're looking at a simple diffusion process here that is shifted in time, and that explains very well the discrepancy in the numbers, but also the shape of the curves, which is exactly the same but shifted. Their model does not explain the shape of the curve; they simply say, well, here it's 75% and here it's only 50%, so these communities must be kind of shipping users towards each other. So I think my explanation is simpler.
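That "same shape, just shifted in time" point is easy to illustrate with a toy saturation curve. The logistic form and the takeoff years below are invented for illustration; nothing here is fitted to their data:

```python
import numpy as np

def diffusion_curve(t, t0, rate=1.0):
    """Logistic saturation: fraction of users who have 'reached' a community by year t.
    t0 controls when the curve takes off; a more distant community takes off later."""
    return 1.0 / (1.0 + np.exp(-rate * (t - t0)))

years = np.arange(2006, 2019)
near = diffusion_curve(years, t0=2013)  # e.g. alt-light <-> IDW (close together)
far = diffusion_curve(years, t0=2016)   # e.g. those <-> control (further away)

for y, a, b in zip(years, near, far):
    print(f"{y}: near={a:.2f} far={b:.2f}")
# Same sigmoid shape, just shifted in time: the 'far' curve lags the 'near' one
# but is headed to the same saturation value.
```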
So they concede that this alone doesn't show that there is a pipeline; what they do next, however, is, they claim, the experiment that really shows that there is a pipeline. What they do is define what they call an infection. They say: okay, for example in this row here, we're taking users that are alt-light users at the beginning, in this time window; basically, they only comment on alt-light videos during this time. So discard all users that comment on anything else; just retain the ones that only comment on alt-light videos during this time. Then we're going to follow them over time and see how many of them have at least one comment on an alt-right video. So this is only directed from the community over here towards the alt-right. And then they call a user infected: specifically, if they comment on one or two alt-right videos, they're lightly infected; if they comment on three to five, they're mildly infected; and if they comment on more, they're severely infected. So as you can see, users starting from the alt-light, or from the IDW, or from both, will in part become infected over time. And since the tendencies between the groups are similar, we'll simply look at the light infections here. So they say, okay: by 2018 about 8 to 10 percent of the users have become infected in these groups (you see about the same trajectories here), whereas in the control group it's less. Though honestly, I don't think it's that much less; I think, again, there's a normal diffusion process here. They do this similarly with the other ones. And to them this makes total sense: oh yeah, users that start in these communities migrate, they get infected by the alt-right, they go towards the alt-right because you can find it so easily. And to me this simply looks like a normal diffusion process; and by the way, the control group isn't that much different. Here's what you would need if you wanted to show that there is a pipeline in this direction: you would need this exact same graph in the other direction, and you would need to show that people that started in the alt-right do not go back in the same fashion towards the alt-light or the IDW, and especially do not go to the control group. You would need to show this basically between each pair of these, and you would need to show that the direction of infection is only in a single direction, namely towards radicalization. Otherwise you're just looking at a normal diffusion process between differently distanced and differently sized groups. So they go on to analyze, and they ask: how much of the alt-right audience is made up of people that have been radicalized, that have been infected? So this infection is kind of their proxy for what they call radicalization. And if you become infected, then basically you're counted, even though you might have commented something negative, might actually engage with their ideas and call them crap; in any case, you're now infected. And they ask themselves: how much of the alt-right audience are these infected users? So basically, how much of the alt-right audience is made up of people that in the past were not alt-righters, that were exclusively commenting on alt-light or IDW videos? And they find that, for example, 23% of the alt-right audience are former alt-lighters that have now made at least one comment on an alt-right video. So their claim is: well, there is a sizable portion of the alt-right that at the beginning wasn't alt-right, that basically became infected, and therefore that kind of shows this radicalization pipeline, that the alt-right audience mainly consists of people that were not alt-right previously but have become so. And to me, again, this is simply a function of the size of these communities. If you think of this again, and you assume people start randomly somewhere on YouTube: what's the probability that you're going to start in the alt-right? Very small, right? So the natural size of the alt-right, before users go and migrate, is very tiny; not many users are going to be what you would count as originally alt-righters. Basically, what this thing measures is: where is your first comment, and are any of your subsequent comments alt-right? If your first comment is not in the alt-right, then you become a potential candidate for infection, and if any later comment is on the alt-right, then you're infected. So what's the probability that your first comment is alt-right? Well, you're going to land somewhere on YouTube, YouTube is huge, the alt-right is very small, thus that probability is extremely small.
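That size argument can be checked with a toy null model. Every number below (the community sizes, the move probabilities) is invented, and there is deliberately no radicalization mechanism in the simulation; the point is just that "most alt-right commenters didn't start there" falls out of the size asymmetry alone:

```python
import random

random.seed(0)

# Toy null model: users start in a community proportional to its size and then
# drift randomly, with moves between nearby communities more likely than far ones.
sizes = {"control": 0.70, "alt_light": 0.14, "IDW": 0.14, "alt_right": 0.02}
moves = {  # community -> where a user might comment next (closeness-based weights)
    "control":   ["control"] * 90 + ["IDW"] * 5 + ["alt_light"] * 5,
    "alt_light": ["alt_light"] * 70 + ["IDW"] * 20 + ["alt_right"] * 6 + ["control"] * 4,
    "IDW":       ["IDW"] * 70 + ["alt_light"] * 20 + ["alt_right"] * 5 + ["control"] * 5,
    "alt_right": ["alt_right"] * 70 + ["alt_light"] * 15 + ["IDW"] * 15,
}

def simulate_user(steps=20):
    start = random.choices(list(sizes), weights=sizes.values())[0]
    history = [start]
    for _ in range(steps - 1):
        history.append(random.choice(moves[history[-1]]))
    return history

users = [simulate_user() for _ in range(100_000)]

# "Infected" in the paper's sense: first comment outside the alt-right,
# some later comment inside it. With no directed pipeline in the model,
# most alt-right commenters still look "infected", simply because the
# alt-right is tiny and almost nobody starts there.
alt_right_commenters = [h for h in users if "alt_right" in h]
infected = [h for h in alt_right_commenters if h[0] != "alt_right"]
print(len(infected) / len(alt_right_commenters))
```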
And then you simply let people diffuse: some will end up in the alt-right, and since the alt-right is so small to begin with, most people that will at some point comment on an alt-right video will have their first comment somewhere outside the alt-right videos. It's simply a numbers game: the alt-right is so small that this is virtually guaranteed. So what they find here is, again, simply evidence of a regular diffusion process between these differently sized groups, and the claims they make from it are just over the top. Again, their comparison to the control group: if you look at the numbers, they're actually not that different from the IDW numbers; they're different from the alt-light numbers, substantially different, but again, that is simply a function of the distance between these clusters, in my opinion. Lastly, they look at the YouTube recommender system. They say: okay, if we look at these videos and channels, which other videos and channels are recommended? So if you have a video on YouTube, you have the video here, and here you have the recommended videos. Similarly, when you have a channel (this is a person, yeah, I'm this person), the person can, first of all, have featured channels, where they say, look, these are channels that I find cool, go check them out; and then there are also recommended channels that are given by YouTube as recommendations. So here YouTube controls basically everything, while here the creator controls part and YouTube controls the other part. So they look at both. First the channel recommendations, so these are both sections here, and they look at: if you start at an alt-light video and you do a random walk, how likely are you to end up in the alt-right, or in the intellectual dark web, or in the control group, after one step, two steps, three steps, four steps? The big line is the random walker, and the dashed line is the distance if you were to deliberately go in the direction of such a video, like, what's the minimum number of clicks you need?
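Before the numbers, the random-walker probe itself is easy to sketch. It assumes a recommendation graph where each video points to its recommended videos; the graph below is hypothetical, not their crawled data:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical recommendation graph: video -> list of recommended videos.
recs = {
    "al_1": ["al_2", "idw_1", "ctrl_1"], "al_2": ["al_1", "ar_1", "idw_2"],
    "idw_1": ["idw_2", "al_1", "ctrl_1"], "idw_2": ["idw_1", "al_2"],
    "ar_1": ["ar_1", "al_2"], "ctrl_1": ["ctrl_1", "idw_1"],
}
# Each video's community is encoded in its name prefix.
community = lambda v: {"al": "alt-light", "idw": "IDW",
                       "ar": "alt-right", "ctrl": "control"}[v.split("_")[0]]

def random_walk_distribution(start, steps, n_walks=10_000):
    """Where does a random follower of recommendations end up after `steps` clicks?"""
    ends = Counter()
    for _ in range(n_walks):
        v = start
        for _ in range(steps):
            v = random.choice(recs[v])
        ends[community(v)] += 1
    return {c: n / n_walks for c, n in ends.items()}

print(random_walk_distribution("al_1", steps=2))
```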
And you can see: if you start at the alt-light, after one or two steps the random walker has about a 2% chance of ending up at an alt-right video, about a 25% chance of ending up at an intellectual dark web video, and about a 50% chance of ending up again at an alt-light video. The scales here are really different, so it's very difficult to judge how this compares to the control group, which is kind of at zero here. To me, again, this is a reflection of the size of these communities, and to then claim that these are easily "reachable" (a 2% chance of landing on an alt-right video), I'm not sure. But if you start from the control group there's almost no chance you'll end up at an alt-right video, so I guess the comparison with the control group is okay. If you look at videos, however: starting at the alt-light, after one step you are approximately 25% likely to be at an IDW video and a bit over 50% likely to stay at an alt-light video. However, compare this to channels: you're almost super unlikely to end at a control channel if you start at an alt-light channel, but in video recommendations you actually also have about a 25% chance of ending at a control group video, whereas, look at the scale here, you're only about 0.03% likely to end up at an alt-right video. And also here: if you start at an IDW video, the chance that you're going to end up at a control video is again super high, much higher than at an alt-light video, whereas with the channel recommendations this was completely turned around. So we see the alt-right completely loses when it comes to video recommendations, and mainly the control group gains, compared to the channel recommendations. Here's what I think: I think this is due to this section here, where the creators have power, versus this section here, where YouTube is recommending. I think they're putting a lot of work into the video recommendations and not that much work into these channel recommendations; and by work I mean actually manually intervening and deciding what are kind of good videos and bad videos. And the control group, there's probably big advertisement money in that, so they might be pushed up a bit in the video recommendations, since most people go by video recommendations; I've actually never used the channel recommendations feature. The channel recommendations: first of all, the creator has power over part of them, and also YouTube may not put as much work into these related channels. Both have the effect that, I would say, the data here, first of all, doesn't convince me of a radicalization pipeline; it simply convinces me that some communities are larger or smaller and closer together or further apart. But second of all, this down here, if you forget about the alt-right for a moment (yeah, they're irrelevant), compared to up here maybe shows a bit of evidence of an algorithmic promotion of these mainstream media channels, compared to how the communities are actually clustering, of which this up here might be the much more accurate picture. So, you know, that's just kind of a funky thing in the data. Yeah, the alt-right is irrelevant to this part because they're just too small. So this is kind of my take on this whole question of whether this is a pipeline, and I don't think so. You've now heard my idea and you've heard their idea; decide for yourself. But I think it's a good example of how, if you are convinced of an underlying mechanism, you're going to collect evidence in support of that mechanism; and if you catch yourself doing that, really, really think: isn't there an easier explanation for this? Alright, that was it for me. Have fun!
[ { "start": 0, "end": 5.44, "text": " Hi there! Today we're going to look at Auditing Radicalization Pathways on" }, { "start": 5.44, "end": 12.96, "text": " YouTube by Manuel Horta-Riberio at AL. So this paper is a bit different than the" }, { "start": 12.96, "end": 19.52, "text": " one we're usually looking at, but since I'm a YouTuber and this is in the kind" }, { "start": 19.52, "end": 26.52, "text": " of a data science realm, I thought it fits neatly. So yeah, we'll have a look." }, { "start": 26.52, "end": 34.04, "text": " And this is mostly going to be an analysis and my opinion on it, so take" }, { "start": 34.04, "end": 42.4, "text": " that for what it is. This is, in my opinion, a paper where you can see very" }, { "start": 42.4, "end": 50.96, "text": " well what it looks like when you deceive yourself. So when you have a" }, { "start": 50.96, "end": 57.92, "text": " hypothesis of something and then only collect data that matches that, and you" }, { "start": 57.92, "end": 64.08, "text": " don't think of simpler solutions that explain the data, and" }, { "start": 64.08, "end": 68.56, "text": " therefore you don't think of experiments that could differentiate the simple" }, { "start": 68.56, "end": 72.84, "text": " solutions from what you propose. So it's a good example of how you can kind of" }, { "start": 72.84, "end": 77.96000000000001, "text": " trick yourself into believing you found something. And this isn't" }, { "start": 77.96, "end": 83.36, "text": " now about YouTube or anything. This happened to me so many times. It always" }, { "start": 83.36, "end": 89.83999999999999, "text": " pays off to take a step back and say, is there a simpler explanation for what's" }, { "start": 89.83999999999999, "end": 94.19999999999999, "text": " happening? And this is what I think is exactly happening here. So I'll present" }, { "start": 94.19999999999999, "end": 101.6, "text": " to you their hypothesis and then I'll present to you my kind of what I think" }, { "start": 101.6, "end": 108.55999999999999, "text": " is going on and a model that explains the data much much much easier and" }, { "start": 108.55999999999999, "end": 117.67999999999999, "text": " simpler and actually better. So let's dive in. This paper basically claims" }, { "start": 117.67999999999999, "end": 124.47999999999999, "text": " the following. So on YouTube there are channels and channels are, you know," }, { "start": 124.47999999999999, "end": 128.72, "text": " independent channels. They make videos and you can actually arrange these" }, { "start": 128.72, "end": 134.84, "text": " channels. So each dot here is a channel. You can arrange these channels in kind" }, { "start": 134.84, "end": 139.96, "text": " of a network. And two channels you can claim they're connected and they can be" }, { "start": 139.96, "end": 145.16, "text": " a connection strength or whatever. For simplicity they can be connected if, for" }, { "start": 145.16, "end": 150.64, "text": " example, their topics are similar, if they reference each other, if they are" }, { "start": 150.64, "end": 155.12, "text": " recommended by YouTube from each other, if they have the same users watching" }, { "start": 155.12, "end": 159.64000000000001, "text": " those same channels or the videos of these channels. 
There are a number of" }, { "start": 159.64000000000001, "end": 166.64000000000001, "text": " metrics where you could make channels connected but all of them" }, { "start": 166.64000000000001, "end": 172.72, "text": " will turn out similar, like will give you the similar structure of channels" }, { "start": 172.72, "end": 179.16, "text": " being connected. Oh that's connected twice. So you can kind of build a" }, { "start": 179.16, "end": 183.6, "text": " graph of how these channels are connected and what you can do then is you" }, { "start": 183.6, "end": 188, "text": " can cluster them. You don't have to build a graph to cluster them but you" }, { "start": 188, "end": 193.92, "text": " can cluster the channels and what will emerge are parts of the graph that are" }, { "start": 193.92, "end": 199.64, "text": " very well connected. Right here this might be connected with this and with" }, { "start": 199.64, "end": 206.88, "text": " this. Parts of graph that are very well connected and are kind of well" }, { "start": 206.88, "end": 211.35999999999999, "text": " connected within and more sparsely connected to others, like also have a" }, { "start": 211.36, "end": 217.88000000000002, "text": " larger distance in between them. So if you start out from one channel and you're" }, { "start": 217.88000000000002, "end": 222.16000000000003, "text": " kind of watching recommended videos and recommended channels and so on, you'll" }, { "start": 222.16000000000003, "end": 227.32000000000002, "text": " stroll along here, you will get much faster to these things than to the other" }, { "start": 227.32000000000002, "end": 231.16000000000003, "text": " things. So these are called communities usually in these kind of" }, { "start": 231.16000000000003, "end": 235.76000000000002, "text": " social network analysis. So on YouTube you know there is a community for" }, { "start": 235.76, "end": 242.35999999999999, "text": " makeup, there's a community for sports, within sports there is a community for" }, { "start": 242.35999999999999, "end": 246.51999999999998, "text": " soccer, there's one for basketball and so on. So these are all these kind of" }, { "start": 246.51999999999998, "end": 251.07999999999998, "text": " communities that you can discover by clustering. This paper mainly deals with" }, { "start": 251.07999999999998, "end": 257.71999999999997, "text": " three communities. Namely the first of all is the IDW, which is the" }, { "start": 257.71999999999997, "end": 263.36, "text": " intellectual dark web. They discuss this here. So the intellectual dark web is" }, { "start": 263.36, "end": 272.28000000000003, "text": " they describe as a group of individuals that are in a rolling conversation with" }, { "start": 272.28000000000003, "end": 278.72, "text": " each other about topics that are, let's say, usually kind of difficult to talk" }, { "start": 278.72, "end": 285.40000000000003, "text": " about, such as gender differences or intelligence research in certain areas" }, { "start": 285.4, "end": 293.71999999999997, "text": " or even you know regular politics, but kind of the intellectual dark web are a" }, { "start": 293.71999999999997, "end": 300.4, "text": " wide variety of people that basically are conversing with each other about" }, { "start": 300.4, "end": 307.59999999999997, "text": " topics. 
The description is a bit vague but the main aspect is conversation" }, { "start": 307.59999999999997, "end": 315.08, "text": " and maybe topics that are kind of on the edge of what's acceptable to talk" }, { "start": 315.08, "end": 322.44, "text": " about. But the opinions range widely on these topics. The second group is the alt-right." }, { "start": 322.44, "end": 331.88, "text": " And the alt-right here is kind of the, they're defined as ethno-nationalists." }, { "start": 331.88, "end": 339.47999999999996, "text": " For example, here is an example, the fringe ideas such as white ethno-state," }, { "start": 339.48, "end": 345.8, "text": " white supremacist ideology and so on. So specifically ethno-nationalists," }, { "start": 345.8, "end": 350.96000000000004, "text": " nationalists that I think nations should be organized to along the lines of" }, { "start": 350.96000000000004, "end": 357.72, "text": " ethnicity. And the goal of the paper is actually to show that there is a" }, { "start": 357.72, "end": 364.44, "text": " kind of a dangerous pipeline on YouTube that will drive people to the alt-right" }, { "start": 364.44, "end": 370.28, "text": " and drive people into these radical ideas of the alt-right. Kind of in between is" }, { "start": 370.28, "end": 377.44, "text": " the alt-light, which is here defined as civic nationalists, which is simply as I" }, { "start": 377.44, "end": 382.76, "text": " understand it means that people should be organized into nations, not along" }, { "start": 382.76, "end": 386.64, "text": " ethnicity, but just should organize themselves into sovereign communities." }, { "start": 386.64, "end": 396.52, "text": " And it would be more of your libertarian, classically liberal people, whereas the" }, { "start": 396.52, "end": 404.96, "text": " alt-right would be more of your, let's say, authoritarian right-wing person." }, { "start": 404.96, "end": 409.68, "text": " So these three communities, they have a fourth community which is what they call a" }, { "start": 409.68, "end": 413.47999999999996, "text": " control group. And the control group consists of what they say are kind of" }, { "start": 413.48, "end": 420.92, "text": " mainstream channels on YouTube, simply to differentiate them from these three" }, { "start": 420.92, "end": 427.64000000000004, "text": " and two, see what's going on with them and if there is a difference. So this is" }, { "start": 427.64000000000004, "end": 432.40000000000003, "text": " kind of the setup and as I said the hypothesis is the following." }, { "start": 432.40000000000003, "end": 438.84000000000003, "text": " People go on YouTube, so YouTube is here, YouTube, people come on YouTube, they go" }, { "start": 438.84, "end": 444.52, "text": " around, they explore a bit and all of a sudden they find IDW videos. These are" }, { "start": 444.52, "end": 449.28, "text": " recommended by YouTube on a fairly regular basis. That may mean they're" }, { "start": 449.28, "end": 453.12, "text": " interesting, people find it, they find it interesting and so on. And then there from" }, { "start": 453.12, "end": 460.91999999999996, "text": " the IDW there are recommendations and links to the alt-light. And the alt-light" }, { "start": 460.91999999999996, "end": 467.15999999999997, "text": " are still, so as I read this paper there is kind of an undertone, kind of the IDW" }, { "start": 467.16, "end": 472.44, "text": " and the alt-light are still okay. 
Like they discuss ideas that are" }, { "start": 472.44, "end": 477.88000000000005, "text": " sometimes political and so on, but the real worry is the alt-right, the" }, { "start": 477.88000000000005, "end": 486.04, "text": " kind of radical right-wing ethnic nationalists. And I mean yes, the" }, { "start": 486.04, "end": 492.44000000000005, "text": " formulation I can agree with. And then they claim, so you find IDW," }, { "start": 492.44, "end": 497.52, "text": " that they have links to the alt-light or links, I mean recommendations and so on." }, { "start": 497.52, "end": 502.64, "text": " And from the alt-light and to a certain degree also from the IDW you can then" }, { "start": 502.64, "end": 510.36, "text": " find the alt-right. So even though a user that goes on YouTube at first isn't" }, { "start": 510.36, "end": 517.16, "text": " likely to find the alt-right videos because it's fringe, it's extreme and so" }, { "start": 517.16, "end": 521.84, "text": " on, by through the YouTube recommendation algorithm basically by" }, { "start": 521.84, "end": 527.24, "text": " going to the IDW finding this, then from there they'll find the alt-light and" }, { "start": 527.24, "end": 534.96, "text": " from there and from the IDW they will then find the alt-right. So they claim" }, { "start": 534.96, "end": 542.26, "text": " that there's this pathway of radicalization here that kind of pushes" }, { "start": 542.26, "end": 551.76, "text": " people towards the alt-right. And that's their hypothesis. And they claim" }, { "start": 551.76, "end": 558.84, "text": " that they have evidence to support this and I claim that there is a simpler" }, { "start": 558.84, "end": 565.28, "text": " solution, namely... So first of all let me state I don't like the alt-right. I think" }, { "start": 565.28, "end": 574.64, "text": " their ideas are despicable. I should go without saying, though I have said it now," }, { "start": 574.64, "end": 581.28, "text": " so you know, just as a disclaimer I'm not defending anyone here. I'm simply saying" }, { "start": 581.28, "end": 586.56, "text": " this paper has a simpler explanation for their data. Namely, what I think is" }, { "start": 586.56, "end": 595.6, "text": " happening here is YouTube again is channels. Each dot here is a channel." }, { "start": 595.6, "end": 601.36, "text": " Channels can be clustered as such, right there, as we saw before. I'm just drawing" }, { "start": 601.36, "end": 606.88, "text": " more of them right now. Channels, channels, channels, channels, channels, channels, channels." }, { "start": 606.88, "end": 614.12, "text": " So what I think is happening is there is a control group, what they call the" }, { "start": 614.12, "end": 621.6, "text": " control group. It's over here, it's large control, right? It's a bunch of channels." }, { "start": 621.6, "end": 630.28, "text": " Then, which is kind of mainstream media, then over here there is, let's say," }, { "start": 630.28, "end": 635.56, "text": " alternative media where all of these three groups belong into. So at some" }, { "start": 635.56, "end": 642.28, "text": " point you will have the IDW, then maybe a bit further away from the control group," }, { "start": 642.28, "end": 647.68, "text": " but very close to the IDW you would have the alt-light, and very close to the two," }, { "start": 647.68, "end": 656.0799999999999, "text": " maybe here you would have the alt-right. 
So notably, in my model, the" }, { "start": 656.0799999999999, "end": 662.76, "text": " IDW and the alt-light are kind of close together. They are in terms of" }, { "start": 662.76, "end": 667.84, "text": " comparative distance. So if you cluster these channels, let's say audience or" }, { "start": 667.84, "end": 674.88, "text": " topics or and so on, it will turn out that all of these three are far, far" }, { "start": 674.88, "end": 679.68, "text": " away from the control group. Those two are very close to each other and then" }, { "start": 679.68, "end": 686.72, "text": " here there is some distance, but how much distance is a question?" }, { "start": 686.72, "end": 691.28, "text": " But of course it's going to be smaller distance than the distance to the" }, { "start": 691.28, "end": 697.6, "text": " control group here. I mean I could draw the alt-right, maybe a more" }, { "start": 697.6, "end": 705.0799999999999, "text": " accurate picture would be something like this. So whatever, I mean" }, { "start": 705.0799999999999, "end": 710.8, "text": " it doesn't matter the details, but the distance here is smaller" }, { "start": 710.8, "end": 719.12, "text": " than the distance to the control group. In this model a second thing is" }, { "start": 719.12, "end": 725.52, "text": " also important, namely the alt-right, as you can see here, is much much smaller" }, { "start": 725.52, "end": 731.84, "text": " than the IDW and the alt-light. And these again are much smaller than the" }, { "start": 731.84, "end": 737.96, "text": " control group. And this I think accounts for most, so the distance relations" }, { "start": 737.96, "end": 749.6, "text": " between these and the size of the clusters account for most. So with" }, { "start": 749.6, "end": 754.9200000000001, "text": " size I mean mainly number of channels and also audience. This accounts" }, { "start": 754.9200000000001, "end": 761.0400000000001, "text": " for most of the data better than their model. So just keep this in mind." }, { "start": 761.0400000000001, "end": 767.36, "text": " And my model of course doesn't include any kind of pipeline that they" }, { "start": 767.36, "end": 776.08, "text": " suggest. So first of all they go ahead and they say, alright, they collect" }, { "start": 776.08, "end": 781.5600000000001, "text": " channels. So they collect data for this and you know we could go over how they" }, { "start": 781.5600000000001, "end": 786.2, "text": " collect the data and criticize that and so on. They do human annotation and they" }, { "start": 786.2, "end": 791.8000000000001, "text": " start from already published reports and so on, which themselves can be criticized." }, { "start": 791.8000000000001, "end": 796.46, "text": " I'm not gonna go into their data collection methodology. It can have" }, { "start": 796.46, "end": 803.5600000000001, "text": " mistakes, but then any collection methodology can have mistakes. What they" }, { "start": 803.5600000000001, "end": 807.5600000000001, "text": " end up with is a number of channels and here are the top channels from each" }, { "start": 807.5600000000001, "end": 814, "text": " category. And as you can see alt-right, alt-light, intellectual dark web," }, { "start": 814, "end": 821.2800000000001, "text": " and control. So already here you can see pretty clearly the model I have in mind." }, { "start": 821.28, "end": 827.16, "text": " They acknowledge all of this by the way. 
Look at the size of the alt-right" }, { "start": 827.16, "end": 832.56, "text": " channels, the biggest ones, compared to the size of the alt-light and the" }, { "start": 832.56, "end": 838.0799999999999, "text": " intellectual dark web. They're much much smaller in number of views. And then" }, { "start": 838.0799999999999, "end": 843.52, "text": " compare this to the size of the control group. The control group again is again" }, { "start": 843.52, "end": 849.9599999999999, "text": " larger than the other two groups. So just keep it in mind. Second thing to keep in" }, { "start": 849.96, "end": 856, "text": " mind, look at these channels. Maybe you know some of them. Joe Rogan, Sargon of" }, { "start": 856, "end": 864.14, "text": " Akkad, Paul Joseph Watson, Sticks Hexenhammer. These are" }, { "start": 864.14, "end": 870.88, "text": " youtubers. These are individuals making YouTube clips, creating content for" }, { "start": 870.88, "end": 876.52, "text": " YouTube, being on this platform. Whereas if you compare it with the control group," }, { "start": 876.52, "end": 884.56, "text": " what's here? Vox, GQ, Wired, Business Insider. These aren't youtubers. These are" }, { "start": 884.56, "end": 890.1999999999999, "text": " websites or traditional media companies or their own kind of" }, { "start": 890.1999999999999, "end": 895.84, "text": " blogs and so on that have a YouTube channel where YouTube is one of the" }, { "start": 895.84, "end": 904.24, "text": " outlets of this media company. So I think there's a giant discrepancy" }, { "start": 904.24, "end": 909.34, "text": " here in the control group that can explain also some of this data that you" }, { "start": 909.34, "end": 915.04, "text": " see. So keep that in mind. I think the control group, they say they don't try to" }, { "start": 915.04, "end": 919.24, "text": " capture the user dynamic with the control group, but I think that there's" }, { "start": 919.24, "end": 923.6800000000001, "text": " many problems with this control group, including the fact that these are" }, { "start": 923.6800000000001, "end": 929.6800000000001, "text": " kind of traditional mainstream media that just have YouTube as an outlet." }, { "start": 929.68, "end": 936.28, "text": " Moreover, a lot of these like Vox or Vice, they are clickbait media and" }, { "start": 936.28, "end": 943.28, "text": " rage bait media that it has worked for a number of years, but the algorithms" }, { "start": 943.28, "end": 949.8399999999999, "text": " are becoming more attuned to clickbait and these are crashing fast." }, { "start": 949.8399999999999, "end": 958.4, "text": " Whereas the more youtuber people, they are not susceptible to" }, { "start": 958.4, "end": 965.12, "text": " that much to kind of the abolishment of clickbait. Alright, so this is" }, { "start": 965.12, "end": 970.68, "text": " the data. They have all these channels, they have all these videos and they first of" }, { "start": 970.68, "end": 979.28, "text": " all give some stats on it. Here you see on the bottom is always the year." }, { "start": 979.28, "end": 987.38, "text": " So they do this over time and you see the active channels which are channels" }, { "start": 987.38, "end": 993.72, "text": " that have uploaded videos in some time. See the control group again is larger" }, { "start": 993.72, "end": 1001.68, "text": " but has started to flatten out in the last few years. Whereas these" }, { "start": 1001.68, "end": 1007.52, "text": " communities, they are relatively flourishing. 
Another interesting point" }, { "start": 1007.52, "end": 1015.08, "text": " is that the paper somehow tries to tie this to the election of Donald Trump in" }, { "start": 1015.08, "end": 1022.08, "text": " 2016. But I think this is just kind of in there to gain" }, { "start": 1022.08, "end": 1027.8400000000001, "text": " relevance. A lot of these kind of trends and so on you'll see already start" }, { "start": 1027.8400000000001, "end": 1035.2, "text": " before that. So the start of the rise here, if you see these" }, { "start": 1035.2, "end": 1041.96, "text": " bumps here and so on, a lot of them start before 2016. So as we go through this" }, { "start": 1041.96, "end": 1046.48, "text": " make up your own mind of how much this is actually tied to the election" }, { "start": 1046.48, "end": 1054.92, "text": " or not. I think it's much more the years when clickbait started to go" }, { "start": 1054.92, "end": 1060.4, "text": " down as a business model. Never mind though. So the active channels" }, { "start": 1060.4, "end": 1069.16, "text": " growing, though the control group not growing as much. Videos published, even" }, { "start": 1069.16, "end": 1073, "text": " though the control group isn't growing so much, they still publish the most" }, { "start": 1073, "end": 1079.5600000000002, "text": " videos. But you can see generally the site is growing. Generally YouTube is" }, { "start": 1079.5600000000002, "end": 1085.88, "text": " growing. Like counts. And here you see something interesting starting to happen." }, { "start": 1085.88, "end": 1089.6000000000001, "text": " Namely these communities, especially the alt-light and the intellectual dark web," }, { "start": 1089.6000000000001, "end": 1094.2, "text": " they're starting to catch up. And this is one of the things that the paper also" }, { "start": 1094.2, "end": 1100.4, "text": " states is that if you look at for example comments per video, this" }, { "start": 1100.4, "end": 1107.6000000000001, "text": " light and the intellectual dark web outperform the control group vastly." }, { "start": 1107.6000000000001, "end": 1117.1200000000001, "text": " Also if you look at views per video and likes per video, the control" }, { "start": 1117.1200000000001, "end": 1123, "text": " group simply don't have an engaged audience. Which I think first of all is" }, { "start": 1123, "end": 1127.68, "text": " because they produce clickbait. Second of all they're just not that interesting." }, { "start": 1127.68, "end": 1132.32, "text": " And third of all they're not youtubers. Like this isn't their thing. They're" }, { "start": 1132.32, "end": 1140.4, "text": " just simply an outlet. But yeah so that's kind of a one, just kind of a" }, { "start": 1140.4, "end": 1149.76, "text": " bunch of metrics that they show here. The next table is a bit more" }, { "start": 1149.76, "end": 1155.44, "text": " interesting. In the next table they do a user intersection. So what they do is they" }, { "start": 1155.44, "end": 1159.76, "text": " collect all these videos and then they collect all the comments of these" }, { "start": 1159.76, "end": 1165.28, "text": " videos. And the comment of course always comes with a username. You need to be" }, { "start": 1165.28, "end": 1170.84, "text": " logged into YouTube to make a comment. And they see which users comment on" }, { "start": 1170.84, "end": 1176.08, "text": " multiple videos or on videos of multiple categories. 
And then they can look at" }, { "start": 1176.08, "end": 1181.9199999999998, "text": " how many users of category A also comment in category B and vice versa." }, { "start": 1181.9199999999998, "end": 1188.28, "text": " So they have two metrics here. Jucard similarity which is for two" }, { "start": 1188.28, "end": 1193.84, "text": " communities A and B, number of users commenting on A and B divided" }, { "start": 1193.84, "end": 1199.04, "text": " by number of users commenting on A or B. And the second the overlap coefficient" }, { "start": 1199.04, "end": 1205.32, "text": " is number of users commenting on A and B divided by the minimum size of A and B." }, { "start": 1205.32, "end": 1212.2, "text": " They say that the overlap coefficient is more useful to compare communities of" }, { "start": 1212.2, "end": 1220.1599999999999, "text": " different sizes. So we'll look at that. The top graphs are always always" }, { "start": 1220.1599999999999, "end": 1226.8799999999999, "text": " jacquard difference and the jacquard similarity in the bottom one are" }, { "start": 1226.8799999999999, "end": 1232.32, "text": " overlap coefficient. The first graphs though are number of commenting users" }, { "start": 1232.32, "end": 1238.28, "text": " per year. And you already see that even though the control group has much more" }, { "start": 1238.28, "end": 1245, "text": " views and probably much more videos, much larger, the comments don't... so the" }, { "start": 1245, "end": 1250.04, "text": " again the the users of the all light and the intellectual dark web are much more" }, { "start": 1250.04, "end": 1258.2, "text": " engaged. Also comments per user. This is the cumulative distribution function." }, { "start": 1258.2, "end": 1264.44, "text": " Most people that comment on control group videos maybe comment once" }, { "start": 1264.44, "end": 1271.64, "text": " and then but these other communities they comment more. Self similarity" }, { "start": 1271.64, "end": 1277.1200000000001, "text": " means year after year. So always compared to the year before how many users are" }, { "start": 1277.1200000000001, "end": 1283.52, "text": " similar. So how well do these communities retain users. And you can" }, { "start": 1283.52, "end": 1289.04, "text": " already see here the control group is actually very bad at retaining users. It" }, { "start": 1289.04, "end": 1295.2, "text": " does have this overlap coefficient high but it has the jacquard self similarity" }, { "start": 1295.2, "end": 1299.72, "text": " low which basically if you think of the formula of the jacquard similarity means" }, { "start": 1299.72, "end": 1308.32, "text": " that this number is small and this number is high which means that A and" }, { "start": 1308.32, "end": 1314.96, "text": " B are very disjoint which means that the last year's users aren't this year's" }, { "start": 1314.96, "end": 1321.6, "text": " users basically. So they they constantly have to appeal to new users because" }, { "start": 1321.6, "end": 1327.36, "text": " they're losing old users because well I guess they're boring. Whereas the" }, { "start": 1327.36, "end": 1332.9199999999998, "text": " all light and intellectual dark web are much more are much better at retaining" }, { "start": 1332.92, "end": 1342.6000000000001, "text": " users. Interestingly the alt right not as good as retaining users as the other two." 
}, { "start": 1342.6000000000001, "end": 1347.2, "text": " This could also be an effect of size like if your community is smaller the" }, { "start": 1347.2, "end": 1354.1200000000001, "text": " users might wander away more quickly. But I think this already speaks against the" }, { "start": 1354.1200000000001, "end": 1360.88, "text": " radicalization pipeline. If the if the alt right if YouTube was radicalizing" }, { "start": 1360.88, "end": 1368.8000000000002, "text": " people towards alt right we I think we would see a the alt right being on top" }, { "start": 1368.8000000000002, "end": 1379.68, "text": " of user retention. Then here they have intersections between communities. So" }, { "start": 1379.68, "end": 1390.8400000000001, "text": " green here is alt light and IDW while the blue is alt right and alt light and" }, { "start": 1390.84, "end": 1396.4399999999998, "text": " the other blue is alt right and IDW. So basically the green is alt light and IDW" }, { "start": 1396.4399999999998, "end": 1404.8, "text": " and the blues are the other two. And we see that the overlap in terms of overlap" }, { "start": 1404.8, "end": 1411.52, "text": " coefficient is similar. The overlap in terms of jacquard similarity the alt" }, { "start": 1411.52, "end": 1418.8, "text": " light and the IDW are very much more sharing users which in the picture I" }, { "start": 1418.8, "end": 1425.8799999999999, "text": " painted makes sense if you think my model is valid. My model explains this" }, { "start": 1425.8799999999999, "end": 1434.04, "text": " very well in that these two communities are quite close together therefore share" }, { "start": 1434.04, "end": 1438.52, "text": " a similar user base. The alt right smaller and a bit further apart" }, { "start": 1438.52, "end": 1445.54, "text": " therefore not as similar though more similar than the control group which is" }, { "start": 1445.54, "end": 1450.96, "text": " the last graph. The last graph is sorry the last graph is how similar are these" }, { "start": 1450.96, "end": 1461.12, "text": " communities to the control group and here we see the IDW and the alt light" }, { "start": 1461.12, "end": 1467.36, "text": " kind of similar. The alt right not as similar though in the overlap" }, { "start": 1467.36, "end": 1476.56, "text": " coefficient they're about the same. So the paper here claims oh look at the" }, { "start": 1476.56, "end": 1481.6, "text": " similarity this is definitely a radicalization. So they don't claim yet this" }, { "start": 1481.6, "end": 1485.56, "text": " is a radicalization pipeline but they claim that there's a higher similarity." }, { "start": 1485.56, "end": 1491.36, "text": " If you actually look at the numbers it's not so I mean here you're" }, { "start": 1491.36, "end": 1496.9599999999998, "text": " around the 50% similarity and here at the end you're also around the 50%" }, { "start": 1496.96, "end": 1500.76, "text": " similarity with the control group. So this is within these groups and this is" }, { "start": 1500.76, "end": 1506.8400000000001, "text": " here with the control group. Also here if I look at the kind of mean here" }, { "start": 1506.8400000000001, "end": 1513.8, "text": " you're at whatever 20-18% and here you're also you may be a bit lower but" }, { "start": 1513.8, "end": 1519.6000000000001, "text": " you're also going towards this. 
What it looks to me like rather than there being" }, { "start": 1519.6000000000001, "end": 1525.16, "text": " a radicalization pipeline if you look at the shape of this and kind" }, { "start": 1525.16, "end": 1532.44, "text": " of where it starts in 2013-2014 it starts to go up here and you look at the" }, { "start": 1532.44, "end": 1538.64, "text": " shape of this it's simply the same shape delayed and I mean there's no reason why" }, { "start": 1538.64, "end": 1547.64, "text": " this graph wouldn't go up here wouldn't go up here in the future and reach the" }, { "start": 1547.64, "end": 1551.88, "text": " exact same numbers as here. It seems that the graph is simply shifted which makes" }, { "start": 1551.88, "end": 1557.0800000000002, "text": " total sense if you think these communities are... I'm gonna draw the same" }, { "start": 1557.0800000000002, "end": 1568.3600000000001, "text": " picture here... right IDW, alt light and over here control. If you think they're" }, { "start": 1568.3600000000001, "end": 1574.2800000000002, "text": " they're like that if you think simply think well YouTube is growing users are" }, { "start": 1574.2800000000002, "end": 1580.8400000000001, "text": " growing users are starting somewhere here and then spreading out pretty much" }, { "start": 1580.84, "end": 1585.12, "text": " randomly like they're spreading out spreading out spreading out users start" }, { "start": 1585.12, "end": 1588.32, "text": " here spreading out users start here spreading out here spreading out" }, { "start": 1588.32, "end": 1593.52, "text": " everywhere users just kind of there's a diffusion process going on not in a" }, { "start": 1593.52, "end": 1597.56, "text": " particular direction like they claim if there is just a diffusion process going" }, { "start": 1597.56, "end": 1604.24, "text": " on what would you expect you would expect users that started here to reach" }, { "start": 1604.24, "end": 1611.68, "text": " the IDW and alt right much sooner than they reach the control group but" }, { "start": 1611.68, "end": 1617, "text": " ultimately as the diffusion continues all users will have commented on most" }, { "start": 1617, "end": 1621.72, "text": " videos if you run YouTube infinitely and these numbers would go that's why the" }, { "start": 1621.72, "end": 1626.92, "text": " numbers go up right if you just let it go the diffusion process will go along" }, { "start": 1626.92, "end": 1633.04, "text": " and it simply takes a longer time to go from here all the way over here then it" }, { "start": 1633.04, "end": 1639.8, "text": " goes then between these communities so to me we're looking at a simple diffusion" }, { "start": 1639.8, "end": 1647.92, "text": " process here that is shifted in time and that explains very much the discrepancy" }, { "start": 1647.92, "end": 1651.6, "text": " in number but also the shape of the curve that is exactly the same but" }, { "start": 1651.6, "end": 1656.04, "text": " shifted their model does not explain the shape of the curve they simply say well" }, { "start": 1656.04, "end": 1663.1599999999999, "text": " here it's 75% and here it's only 50% that means that these communities are" }, { "start": 1663.1599999999999, "end": 1671.1599999999999, "text": " kind of shipping users towards each other so I think the explanation is" }, { "start": 1671.1599999999999, "end": 1677.24, "text": " easier then so they claim this does not alone kind of show that there is a" }, { "start": 1677.24, "end": 1683.32, "text": " pipeline what they now do however will 
show that basically so they claim this" }, { "start": 1683.32, "end": 1689.12, "text": " is the experiment that really shows that there is it is pipeline so what they do" }, { "start": 1689.12, "end": 1697.84, "text": " is they define what they call an infection so what they say is okay we" }, { "start": 1697.84, "end": 1706.36, "text": " are for example this this row here we're taking users that are alt light users" }, { "start": 1706.36, "end": 1713.4799999999998, "text": " at the beginning in this time so basically they only comment on the only" }, { "start": 1713.4799999999998, "end": 1720.08, "text": " comment on alt light videos during this time right so discard all users that" }, { "start": 1720.08, "end": 1724.6799999999998, "text": " comment on anything else just retain the ones that only comment on alt light" }, { "start": 1724.6799999999998, "end": 1730.76, "text": " videos during this time then we're going to follow them over time and see how" }, { "start": 1730.76, "end": 1738.84, "text": " many of them have at least one comment in an alt right video so this is only" }, { "start": 1738.84, "end": 1744.56, "text": " directed from the community over here towards the alt right and then they call" }, { "start": 1744.56, "end": 1750.72, "text": " a user infected specifically if they comment on one or two alt right videos" }, { "start": 1750.72, "end": 1757.2, "text": " they're lightly infected if they comment on three to five they're mildly infected" }, { "start": 1757.2, "end": 1765.04, "text": " and if they comment on more they're severely infected so as you can see" }, { "start": 1765.04, "end": 1773.92, "text": " users starting from the alt light or from the IDW or from both they will" }, { "start": 1773.92, "end": 1781.4, "text": " become in some will become infected over time namely and I postulate we simply" }, { "start": 1781.4, "end": 1785.8400000000001, "text": " look at the since that the tendencies between the groups are similar we'll" }, { "start": 1785.84, "end": 1794.56, "text": " simply look at the light infections here so they say okay after you know in 2018" }, { "start": 1794.56, "end": 1799.04, "text": " about 8 to 10 percent of the users become infected in these groups you see" }, { "start": 1799.04, "end": 1806.9199999999998, "text": " here here about the same trajectories whereas it so whereas in the control" }, { "start": 1806.92, "end": 1817.1200000000001, "text": " group it's less here though honestly I don't think it's that much less right I" }, { "start": 1817.1200000000001, "end": 1823.52, "text": " think that again I think there's a normal diffusion process here they do" }, { "start": 1823.52, "end": 1831.44, "text": " this similarly with the with the other ones and to me like to them this makes" }, { "start": 1831.44, "end": 1836.4, "text": " total sense like oh yeah users that start in these communities they migrate" }, { "start": 1836.4, "end": 1839.88, "text": " to get infected by the alt right they go towards the alt right because you can" }, { "start": 1839.88, "end": 1844.0800000000002, "text": " find it so easily and to me this simply looks like a normal diffusion process" }, { "start": 1844.0800000000002, "end": 1850.64, "text": " here's what you need if you want and by the way the control group isn't that" }, { "start": 1850.64, "end": 1855.72, "text": " much different here's what you need if you want to show that there is a" }, { "start": 1855.72, "end": 1863.5600000000002, "text": " pipeline in this direction you need this exact same graph 
in the other direction" }, { "start": 1863.56, "end": 1872.24, "text": " and you need to show that people that started in the alt right do not go back" }, { "start": 1872.24, "end": 1878.56, "text": " in the same fashion towards the alt light or the IDW and they do especially" }, { "start": 1878.56, "end": 1883.6, "text": " not go to the control group you need to show this basically between each pair of" }, { "start": 1883.6, "end": 1889.96, "text": " these and you need to show that the direction of infection is only in a" }, { "start": 1889.96, "end": 1895.4, "text": " single direction namely towards radicalization otherwise you're just" }, { "start": 1895.4, "end": 1899.56, "text": " looking at a normal diffusion process between differently distance and" }, { "start": 1899.56, "end": 1907.64, "text": " differently sized groups so they go on to analyze and they say well how much" }, { "start": 1907.64, "end": 1914.56, "text": " basically how much of the alt right audience makes is made up by people that" }, { "start": 1914.56, "end": 1919.2, "text": " have been radicalized that have been infected so that this infection is kind" }, { "start": 1919.2, "end": 1923.3600000000001, "text": " of their proxy for what they call a radicalization and if you become" }, { "start": 1923.3600000000001, "end": 1930.72, "text": " infected then basically you're not part of the alt right or something even though" }, { "start": 1930.72, "end": 1936.56, "text": " you might have you might have commented something negative actually the might" }, { "start": 1936.56, "end": 1942.32, "text": " engage with their ideas and call them their crap but in any case you're now" }, { "start": 1942.32, "end": 1948.8, "text": " infected and they ask themselves how much of the alt right audience has" }, { "start": 1948.8, "end": 1954.44, "text": " are of these infected so basically how much of the alt right audience have our" }, { "start": 1954.44, "end": 1960.9199999999998, "text": " people that in the past have been not alt writers have been exclusively" }, { "start": 1960.9199999999998, "end": 1970.76, "text": " commenting on alt light or IDW videos and they find that for example for alt" }, { "start": 1970.76, "end": 1978.6, "text": " light 23% of the alt right audience are former alt lighters and have our former" }, { "start": 1978.6, "end": 1984.6, "text": " alt lighters that have now made one comment on an alt right video so that" }, { "start": 1984.6, "end": 1992.52, "text": " their claim is well there is a sizable portion of the alt right that at the" }, { "start": 1992.52, "end": 1998.12, "text": " beginning wasn't alt right that basically became infected and therefore" }, { "start": 1998.12, "end": 2002.84, "text": " that that kind of shows this radicalization pipeline that the alt" }, { "start": 2002.84, "end": 2009.8, "text": " right audience is mainly consistent of people that have not been alt right" }, { "start": 2009.8, "end": 2017.1599999999999, "text": " previously but have become so and to me again this is simply a function of the" }, { "start": 2017.1599999999999, "end": 2024.36, "text": " size of these communities right if if you think of this again and you start" }, { "start": 2024.36, "end": 2028.6, "text": " randomly somewhere on YouTube let's let's make this assumption people start" }, { "start": 2028.6, "end": 2033.56, "text": " randomly somewhere on YouTube what's the probability that you're going to start" }, { "start": 2033.56, "end": 2040.4399999999998, "text": " in the alt right very small right 
so what's the the kind of natural let's say" }, { "start": 2040.4399999999998, "end": 2048.24, "text": " the natural size of alt right before users go and migrate is very tiny right" }, { "start": 2048.24, "end": 2054.4, "text": " so not many users are going to be what you would consult originally alt writers" }, { "start": 2054.4, "end": 2058.12, "text": " whatever their their first comment basically what this thing measures is" }, { "start": 2058.12, "end": 2064.2799999999997, "text": " where is your first comment and are any of your subsequent comments alt right if" }, { "start": 2064.2799999999997, "end": 2068.68, "text": " your first comment is not in the alt right then you become a potential" }, { "start": 2068.68, "end": 2073.3199999999997, "text": " candidate for infection and if any comment is on the alt right then you're" }, { "start": 2073.3199999999997, "end": 2077.4, "text": " infected so what's the probability that your first comment is not alt right well" }, { "start": 2077.4, "end": 2080.8399999999997, "text": " you're gonna land somewhere on YouTube YouTube is huge the alt right is very" }, { "start": 2080.84, "end": 2088.96, "text": " small thus that probability is extremely small and then you let you simply let" }, { "start": 2088.96, "end": 2095, "text": " people diffuse let them diffuse let them diffuse some will end up in the alt" }, { "start": 2095, "end": 2099.96, "text": " right and since the alt right is so small to begin with actually most people" }, { "start": 2099.96, "end": 2106.2400000000002, "text": " that will comment at some point on an alt right video will will have their" }, { "start": 2106.24, "end": 2114.3999999999996, "text": " first comment from somewhere outside the alt right videos simply simply a" }, { "start": 2114.3999999999996, "end": 2119.56, "text": " numbers game right simply the alt right is so small that this is virtually" }, { "start": 2119.56, "end": 2124.64, "text": " guaranteed so what they find here is again simply an evidence of a regular" }, { "start": 2124.64, "end": 2130.9599999999996, "text": " diffusion process between these differently sized groups and the claims" }, { "start": 2130.9599999999996, "end": 2136.2, "text": " they make from this are just over the top again that their comparison to" }, { "start": 2136.2, "end": 2140.3199999999997, "text": " the control group if you if you look at the numbers they're actually not that" }, { "start": 2140.3199999999997, "end": 2147.7599999999998, "text": " different from this from the IDW numbers there they're different than the alt" }, { "start": 2147.7599999999998, "end": 2156.9199999999996, "text": " light here substantially different but again that simply a function of distance" }, { "start": 2156.9199999999996, "end": 2164.96, "text": " in my opinion in these in these clusters lastly they look at the YouTube" }, { "start": 2164.96, "end": 2173.4, "text": " recommender system and they say okay if we look at these videos and the channels" }, { "start": 2173.4, "end": 2179.8, "text": " and we look at on these videos what other videos are recommended and what" }, { "start": 2179.8, "end": 2183.64, "text": " other channels are recommended so if you have like a video on YouTube you have" }, { "start": 2183.64, "end": 2187.6, "text": " the video here and here you have like recommended videos similarly when you" }, { "start": 2187.6, "end": 2191.7200000000003, "text": " have a channel right you have a channel this is a person yeah I'm this person" }, { "start": 2191.72, "end": 2195.68, 
"text": " the person can have first of all they can have featured channels where they" }, { "start": 2195.68, "end": 2200.8399999999997, "text": " say look these are channels that I find cool I go check them out and then they" }, { "start": 2200.8399999999997, "end": 2204.7599999999998, "text": " also have recommended channels that are kind of given by YouTube as" }, { "start": 2204.7599999999998, "end": 2211.08, "text": " recommendations so here YouTube controls basically everything here the creator" }, { "start": 2211.08, "end": 2217.72, "text": " controls part and the YouTube controls dollar part so they look to both first" }, { "start": 2217.72, "end": 2225.3599999999997, "text": " of all the channels channels recommend recommendations so these are both" }, { "start": 2225.3599999999997, "end": 2233.7999999999997, "text": " sections here and they look at if you start on a alt light video how likely if" }, { "start": 2233.7999999999997, "end": 2240.52, "text": " you do a random walk are you to end up in the alt right or in the intellectual" }, { "start": 2240.52, "end": 2245.7999999999997, "text": " dark web or control group after one step two steps three steps four steps so that" }, { "start": 2245.8, "end": 2251.8, "text": " the big line is the random Walker and actually the dashed line is the distance" }, { "start": 2251.8, "end": 2257.0800000000004, "text": " if you were to target Lee go into the direction of such a video like what's" }, { "start": 2257.0800000000004, "end": 2268, "text": " the minimum number of clicks you need and you can see here the the if you" }, { "start": 2268, "end": 2274.36, "text": " start at alt light after one or two steps the random Walker is kind of a 2%" }, { "start": 2274.36, "end": 2282.32, "text": " chance to end up at an alt right video and about a 25% chance here of ending up" }, { "start": 2282.32, "end": 2289.08, "text": " in a intellectual dark web video and about a 50% chance of ending up again at" }, { "start": 2289.08, "end": 2293.56, "text": " an alt light video the scales here really different so it's very difficult" }, { "start": 2293.56, "end": 2301, "text": " to judge how it compares to the control group which is kind of at zero here but" }, { "start": 2301, "end": 2306.68, "text": " to me again this is a reflection of the size of these communities and I think" }, { "start": 2306.68, "end": 2313.68, "text": " it's a bit you know we are to to then claim oh these are reachable basically so" }, { "start": 2313.68, "end": 2321.56, "text": " 2% chance of landing on an alt right video um I'm not sure but again if you" }, { "start": 2321.56, "end": 2326.48, "text": " compare if you start from the control group there's almost no chance you'll" }, { "start": 2326.48, "end": 2335, "text": " end up in a alt right video so I guess the comparison is is okay if you compare" }, { "start": 2335, "end": 2344.92, "text": " to control group if you start look at videos however again if you start at alt" }, { "start": 2344.92, "end": 2355, "text": " light after one step you are approximately 25% likely to be in an IDW" }, { "start": 2355, "end": 2360.8, "text": " video you're a bit over 50% likely to stay in an alt light video however" }, { "start": 2360.8, "end": 2367.32, "text": " compare this to channels you're almost super unlikely to end at a control" }, { "start": 2367.32, "end": 2372.16, "text": " channel if you start at an alt light channel but in video recommendations" }, { "start": 2372.16, "end": 2379.6, "text": " you're actually also about 25% 
chance of ending in a control group video where" }, { "start": 2379.6, "end": 2388, "text": " as look at the scale here you're only about 0.03% likely to end up in an alt" }, { "start": 2388, "end": 2399.64, "text": " right video and also here so here even look at this if you start an IDW video" }, { "start": 2399.64, "end": 2405.92, "text": " the chance that you're going to end up in a control again super high much" }, { "start": 2405.92, "end": 2413.32, "text": " higher than an alt light video whereas with the channel recommendations this" }, { "start": 2413.32, "end": 2418.4, "text": " was completely turned around so we see the alt right completely loses when it" }, { "start": 2418.4, "end": 2423.7200000000003, "text": " comes to video recommendations and mainly the control group gains compared" }, { "start": 2423.7200000000003, "end": 2430.84, "text": " to the channel recommendations I think here's what I think I think this is due" }, { "start": 2430.84, "end": 2437.08, "text": " to this section here this section here where the creators have power and also" }, { "start": 2437.08, "end": 2442.2400000000002, "text": " this section here YouTube recommending I think they're putting a lot of work" }, { "start": 2442.2400000000002, "end": 2447.1600000000003, "text": " into the video recommendations I think they're putting not that much work into" }, { "start": 2447.1600000000003, "end": 2451.76, "text": " these recommendations and by work I mean actually manually intervening and" }, { "start": 2451.76, "end": 2457.08, "text": " deciding what's kind of good videos and bad videos and the the control group" }, { "start": 2457.08, "end": 2463.7999999999997, "text": " they're probably there's probably big advertisement money in that so they" }, { "start": 2463.7999999999997, "end": 2467.36, "text": " might be pushed up a bit in the video recommendations since most people are" }, { "start": 2467.36, "end": 2472.2, "text": " going by video recommendations I've actually never used the channel" }, { "start": 2472.2, "end": 2476.12, "text": " recommendations feature and the channel recommendations first of all the" }, { "start": 2476.12, "end": 2481.24, "text": " creator has power over part of it and then also YouTube may not put as much" }, { "start": 2481.24, "end": 2491.08, "text": " work into these related channels so both have in the effect that I would say that" }, { "start": 2491.08, "end": 2496.56, "text": " that the data here first of all it doesn't doesn't convince me of a" }, { "start": 2496.56, "end": 2500.9599999999996, "text": " radicalization pipeline it simply convinces me that some communities are" }, { "start": 2500.9599999999996, "end": 2506.8599999999997, "text": " larger smaller and closer together but second of all that this down here if you" }, { "start": 2506.86, "end": 2512.36, "text": " forget about the alt-right for a moment yeah they're irrelevant this down here" }, { "start": 2512.36, "end": 2518.28, "text": " actually compared to up here shows maybe a bit of evidence of an algorithmic" }, { "start": 2518.28, "end": 2527.48, "text": " promotion of these mainstream media channels compared to how the communities" }, { "start": 2527.48, "end": 2531.84, "text": " are actually clustering which I think this this up here might be a much more" }, { "start": 2531.84, "end": 2541.1600000000003, "text": " accurate picture so you know that it's just kind of a funky thing in the data" }, { "start": 2541.1600000000003, "end": 2546.96, "text": " yeah that alt-right is irrelevant to this 
part because they're they're just" }, { "start": 2546.96, "end": 2556.08, "text": " too small so this is this is kind of my take on this they didn't give" }, { "start": 2556.08, "end": 2562.96, "text": " recommendations and is this a pipeline and so on and I don't think so you've" }, { "start": 2562.96, "end": 2571.24, "text": " now heard my idea and you've heard their idea decide for yourself but I think" }, { "start": 2571.24, "end": 2578.64, "text": " it's a good example of how if you are convinced of an underlying mechanism" }, { "start": 2578.64, "end": 2584.48, "text": " you're going to collect evidence in support of that mechanism and if you" }, { "start": 2584.48, "end": 2588.76, "text": " catch yourself doing that really really think isn't there an easier explanation" }, { "start": 2588.76, "end": 2618.5200000000004, "text": " for this all right that was it for me have fun" } ]
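As an aside on the commenter-overlap metrics used in the analysis above: the Jaccard similarity and the overlap coefficient that the paper computes between communities are simple set statistics. Here is a minimal sketch in Python (my own illustration, not the paper's code; the user names and community sets are made up) that also shows why the overlap coefficient is the more useful of the two when communities differ a lot in size, as the alt-right and the control group do:

```python
def jaccard(a: set, b: set) -> float:
    """|A intersect B| / |A union B|: shared commenters relative to all commenters."""
    return len(a & b) / len(a | b)

def overlap_coefficient(a: set, b: set) -> float:
    """|A intersect B| / min(|A|, |B|): less sensitive to a size mismatch between A and B."""
    return len(a & b) / min(len(a), len(b))

alt_light = {"u1", "u2", "u3", "u4"}   # a larger community (illustrative)
alt_right = {"u3", "u4"}               # a much smaller community (illustrative)
print(jaccard(alt_light, alt_right))              # 0.5
print(overlap_coefficient(alt_light, alt_right))  # 1.0: every alt-right commenter overlaps
```

With a small community fully contained in a larger one, the Jaccard similarity stays low while the overlap coefficient saturates at 1, which is why the paper reports both when comparing differently sized groups.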
3qxJ2WD8p4w
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "attention", "attention mechanism", "lambda", "lambdaresnet", "residual networks", "local attention", "quadratic", "memory", "transformer", "transformers", "keys", "values", "queries", "architecture", "input size", "iclr", "lambdanet", "lambdanets", "lambdaresnets", "efficientnet", "tradeoff", "routing", "linear function", "functional programming" ]
#ai #research #attention 

Transformers, having already captured NLP, have recently started to take over the field of Computer Vision. So far, the size of images as input has been challenging, as the Transformers' Attention Mechanism's memory requirements grow quadratically in its input size. LambdaNetworks offer a way around this requirement and capture long-range interactions without the need to build expensive attention maps. They reach a new state-of-the-art in ImageNet and compare favorably to both Transformers and CNNs in terms of efficiency.

OUTLINE:
0:00 - Introduction & Overview
6:25 - Attention Mechanism Memory Requirements
9:30 - Lambda Layers vs Attention Layers
17:10 - How Lambda Layers Work
31:50 - Attention Re-Appears in Lambda Layers
40:20 - Positional Encodings
51:30 - Extensions and Experimental Comparisons
58:00 - Code

Paper: https://openreview.net/forum?id=xTJEN-ggl1b
Lucidrains' Code: https://github.com/lucidrains/lambda-networks

Abstract: We present a general framework for capturing long-range interactions between an input and structured contextual information (e.g. a pixel surrounded by other pixels). Our method, called the lambda layer, captures such interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. Lambda layers are versatile and may be implemented to model content and position-based interactions in global, local or masked contexts. As they bypass the need for expensive attention maps, lambda layers can routinely be applied to inputs of length in the thousands, enabling their applications to long sequences or high-resolution images. The resulting neural network architectures, LambdaNetworks, are computationally efficient and simple to implement using direct calls to operations available in modern neural network libraries. Experiments on ImageNet classification and COCO object detection and instance segmentation demonstrate that LambdaNetworks significantly outperform their convolutional and attentional counterparts while being more computationally efficient. Finally, we introduce LambdaResNets, a family of LambdaNetworks, that considerably improve the speed-accuracy tradeoff of image classification models. LambdaResNets reach state-of-the-art accuracies on ImageNet while being ∼4.5x faster than the popular EfficientNets on modern machine learning accelerators.

Authors: Anonymous 

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Another day, another state-of-the-art result in machine learning land on ImageNet. This time coming from a thing called LambdaResNets. As you can see here, they outperform EfficientNets and ResNets, not only in terms of top-1 accuracy, but also in terms of the trade-off between accuracy and training time. Here it says LambdaResNets are about 4.5 times faster than EfficientNets and substantially improve the speed-accuracy trade-off of image classification models across different scales. So this is something new that we have not seen in recent times. In recent times we've seen transformers take over image classification and so on, but that came either with downsampling the image, like these 16-by-16 patches and so on, or with just throwing massive amounts of data or massive amounts of compute at it. This paper here promises that they have something more efficient: it can reach good accuracy, or for the same efficiency it can reach better accuracy. So today we're going to look at this paper, LambdaNetworks: Modeling Long-Range Interactions Without Attention, by anonymous authors. It's under review at ICLR 2021. I'm not going to de-anonymize this paper, mostly because this one is a bit harder and would require a bit of research, but also because I think I've made my point. I maintain that double-blind reviewing isn't really what it's set out to be in the ideal case. But let's actually look at this paper, because the paper itself is quite hard to understand, and I still don't know if I understand it correctly. We'll just go through it, I will talk about what I understand, and then I guess we can have a discussion. For a discussion, as always, leave a comment if you want and join our Discord. There are many, many competent people there that have opinions, way better opinions than I do. All right. So they say: we present a general framework for capturing long-range interactions between an input and structured contextual information, e.g. a pixel surrounded by other pixels. Our method, called the lambda layer, captures such interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. Lambda layers are versatile and may be implemented to model content- and position-based interactions in global, local or masked contexts. So as you read this, there are a number of things right here that we are going to blatantly disregard while reading this paper. First of all, they present a general framework; let's, you know, screw the general framework. They're going to apply this to image classification. We'll look at it in the context of, well, first sequence classification, and then image classification, because it comes out of the transformer area, and transformers classically have been applied to sequence or set classification. So we're going to look at it in that framework; general framework, blah, blah, blah, right. Okay. So, for capturing long-range interactions between an input and structured contextual information, e.g. a pixel surrounded by other pixels: when you hear "long-range interactions", you should immediately think of something like a transformer, like an attention mechanism, because that's exactly what they're going for here. And they're trying to frame this as this lambda layer: the fact that they build linear functions, termed lambdas, from lambda calculus, and apply these linear functions to each input separately.
Now, any time you multiply a matrix by a vector, that's what you're doing. But the framing here is different, and we'll see why the framing is like this; it sort of introduces a new terminology. Lambda layers are versatile, yada, yada, yada. The tricky part, or the important part, is this: as they bypass the need for expensive attention maps, lambda layers can routinely be applied to inputs of length in the thousands, enabling their application to long sequences or high-resolution images. The resulting neural network architectures, the LambdaNetworks, are computationally efficient and simple to implement using direct calls to operations available in modern neural network libraries. Okay, so they have a bunch of things here. They now get into the framing of: it's kind of like attention, but we do not need these expensive attention maps. And they're going to show why they do not need the attention maps that an attention layer would compute. And we will look at what the trade-off is here; there's always a trade-off. Attention is a very, very general computational framework. It's super general, it's like dynamic routing of information, and they don't do that. So we're going to see where the trade-off is. What they gain, of course, is that they don't need to compute these expensive attention maps. Know that the limiting factor in transformers is memory. It's also a bit time, but we can just let it run for longer; with memory, we can't really just wait longer and then get more memory, we have the memory that we have. So since they don't have that limitation, they can take inputs of length in the thousands, you know, they can apply these things to high-resolution images. And we're going to see that applying these things to high-resolution images is, let's say, shaky. Let me just say: they can't do that without going to what's called local attention. And what I mean by this is the following. Attention mechanisms, extremely briefly: if you have a sequence and you transform it into another sequence, that's what an attention mechanism is for. The attention mechanism works like this: from each top part here, it emits a query q. Wow, that's a big q. Each top part emits a query q, each bottom thing emits a key k, and then it builds what's called an attention map. An attention map, in this case, is just a matrix, here a five-by-five matrix, and this matrix specifies how each of the inputs is routed to the outputs. With this five-by-five matrix, as you can see pretty clearly, if I make the sequence up here longer, one of the axes is going to get longer, and if I make this sequence longer, the other axis is going to get longer. And normally, or in what's called self-attention, these sequences are the same sequence, so you'll have the sequence paying attention to itself. And if you have an image, what that means is the following: the image is already a matrix, but it's kind of a collection of pixels. What you would do is see the image as a sequence of pixels, and then each pixel needs to attend to each other pixel. So you can see pretty easily, if the image is something like 200 by 200, that's 40,000 pixels, so your matrix up here would be 40,000 by 40,000, which is impossible, right? That's the trouble here. Now people have gotten around this by doing what's called local attention.
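To put a number on this before looking at the workaround, here is a minimal sketch (my own illustration, not code from the paper; the feature dimension and the function name `attention_map` are made up) of a single-head dot-product attention map. For the five-element sequence the map is five by five; flattening a 200-by-200 image into a pixel sequence would require a 40,000-by-40,000 map:

```python
import torch

def attention_map(x, wq, wk):
    # One attention map over a sequence x of shape (n, d):
    # row i says how output i routes (weights) the n inputs.
    q, k = x @ wq, x @ wk                                   # queries and keys
    return torch.softmax(q @ k.t() / x.shape[-1] ** 0.5, dim=-1)

d = 64                                                      # feature dim (illustrative)
x = torch.randn(5, d)                                       # a 5-element sequence
print(attention_map(x, torch.randn(d, d), torch.randn(d, d)).shape)  # (5, 5)

# Self-attention over a 200x200 image flattened to a pixel sequence:
n = 200 * 200                                               # 40,000 pixels
entries = n * n                                             # 1.6 billion map entries
print(f"{entries:,} entries, ~{entries * 4 / 1e9:.1f} GB in float32, per map")
```

That is roughly 6.4 GB for a single attention map of a single head in a single layer, which is why this does not scale to high-resolution images.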
And local attention means: well, you know, you, pixel, don't need to pay attention to all of the other pixels — you actually only need to pay attention to the pixels in your neighborhood. Which is sort of a convolution, right? Except that local attention is a dynamic convolution: usually in a convolution you have a fixed convolutional kernel; local attention is simply a dynamic convolutional kernel. Like global attention is a dynamic feed-forward layer instead of a fixed feed-forward layer, local attention is a dynamic convolution instead of a fixed convolution. They are going to do something similar here to process high resolution images: they are going to restrict their context to a local field of view around the pixel that they're interested in — just so you don't get super hyped by the abstract right here. So we'll go into what these lambda layers do, and I'm going to skip a whole bunch of things in the paper, just so we get to the meat of the thing. So they say, look at these images — and this is just what we said, right? So usually, for each pixel, you wonder: how should I transform this to the next layer? You imagine your neural network as having layer, layer, layer, layer, and each time you can imagine you have this image and you want to transform it into an intermediate representation that still looks like an image — maybe it has a different number of channels and so on, and maybe it's a different resolution — but still, you want to forward-propagate this image into its intermediate representations. And the question is: for each location in the image, so for each pixel, how should I transform that particular location into its next intermediate representation? That's what a neural network does. In this framework, what we want to do is look at this pixel and then say: okay, well, we can't just look at the pixel itself, we somehow need to look at all the other pixels, so we know how to transform it — it's going to be a really boring neural network if we just look at each pixel individually. So we are going to look at all the other pixels in the picture; as we said, we're going to pay attention to all the other pixels, and that determines how we should transform the current pixel into the next representation. That would be what they call a global context, or global attention in the attention framework. However, as we already said, what we're going to do here is simply restrict how far the pixel can look at the other pixels — what they call the local context. So the pixels are going to be transformed into what's called queries, like in the attention framework. The context can be something else, but usually it's going to be the same as the input: the input is this picture, and the context is also going to be the picture. But now we are additionally, for each location, going to restrict the context around that location. So what local attention would do is build for each pixel an attention map, and the attention map, as we said, defines how the pixel should pay attention to all the surrounding pixels. So you can see right here, this is the attention map for this one pixel.
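To make the dynamic-convolution view concrete, here is a minimal sketch of local attention — my own toy code, not anything from the paper — where each pixel computes a softmax over its r by r window, i.e. a per-pixel, input-dependent convolution kernel:

```python
import torch
import torch.nn.functional as F

def local_attention(q, k, v, r=3):
    """Each pixel attends only to the r x r window around itself."""
    B, C, H, W = q.shape
    pad = r // 2
    # gather the r*r neighbourhood of every pixel
    k_win = F.unfold(k, r, padding=pad).view(B, C, r * r, H * W)
    v_win = F.unfold(v, r, padding=pad).view(B, C, r * r, H * W)
    q_flat = q.view(B, C, 1, H * W)
    # per-pixel attention over the window: a *dynamic* convolution kernel
    attn = (q_flat * k_win).sum(dim=1).softmax(dim=1)   # (B, r*r, H*W)
    out = (attn.unsqueeze(1) * v_win).sum(dim=2)        # (B, C, H*W)
    return out.view(B, C, H, W)

x = torch.randn(2, 16, 10, 10)   # toy feature map
y = local_attention(x, x, x)     # self-attention with a local context
```

Note how the softmax weights play the role of the kernel: they are recomputed for every pixel and every input, unlike a fixed convolution.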
So you can imagine that if I were to construct an attention map for all the pixels in the image, every pixel is going to have an attention map like this, telling it how it should aggregate all the pixels around itself. And you can easily see that if we make the context as large as the image itself, each context map is going to be as large as the image, and we need that for each pixel. So if this is height and this is width, we're going to end up with height squared times width squared memory requirements. So the difference in the lambda layers is that the lambda layers take the context and abstract it into a matrix: they are going to summarize the context first, without looking at the query. They're going to take the context and make it into this lower-dimensional linear function. What they're trying to make sure you see in the picture is that the left thing is basically restricted to be pixel-by-pixel in size, while on the right side you're going to have some freedom over how you want to construct that matrix. They are going to abstract the context into a function, and then they're simply going to multiply this by the query. So the whole operation here is going to be a linear function — as opposed to the attention operation, where you look at the interactions between queries and keys and then take a softmax over that, which makes it a nonlinear function. This is going to be a linear function. Okay. But the rhetoric around this — you can already see, they say: we abstract the context into a linear function, and then we apply that linear function to each query separately. The problem right here is that there is one context per query, right? As soon as you go to the next pixel, like right here, your context is going to be shifted. So it's not like the global case: if you had the global context, you could simply compute this context function once and then apply it to each pixel individually — that would be the gain in, let's say, time. But here, not so much. So the trade-offs that they make in space immediately result in the breakdown of their narrative — at least, I feel like this. Now, how can you understand this, just from here, before we go into the formula? Again, I would say we go back to the sequence narrative. The sequence narrative is the following: we want to transform the sequence into its next-layer representation. In attention, we take a look here and we look at how this pays attention to each of the inputs right here, depending on what the inputs are — depending on what these queries and what these keys are. So that's going to be really important. What we do instead in the lambda network is: we're going to take the context, which is this thing — and now we're dealing with a global context, because we don't restrict it, so we are closer to their terminology — and we're going to summarize it, we're going to just summarize this into a function. And the function is represented by a matrix, and we can even choose how big this matrix is, right? We're just going to summarize the context without looking at the queries, and then the queries don't look at the individual parts of the context — we don't do that.
We simply take the queries and pull them through this function to get the next higher-level representation. We take the query, put it through the same function, get the higher-level representation. So the context is summarized into one single linear function that transforms all queries the same. Now, it's not exactly what they do — they have positional encodings and so on — but in essence, that's what they are advertising in the first place. Alright, so let's dive into the formulas. The formulas are fairly complex; it took me a while until I grasped all of this. So this is the first half, as you can see right here, and how you get from here to the outputs, that's another set of equations right here. It's, as I said, fairly complex, and that's not all: then there is translation equivariance, then there is the convolutional lambda, and so on, and the analysis. But let's break this down and see where the lambda layer is different and how it works. So we start out with the input and the context — that is here. These are the inputs to the lambda layer, x and c. First of all, let's build up a little diagram over here: we have x and we have c coming in, and we'll annotate them with their respective sizes. So x is n by d, and c is m by d. Now keep in mind that x and c are often the same thing, or similar if c is restricted, and so on — keep that in mind. n here is what would be referred to as the input size. And if n is equal to m, if x is equal to c, then the problem is going to be: whenever there is a term n by m, that is going to be quadratic in the input size, and that is going to blow up. So if this is an image, n here is going to be whatever, 225 by 225 — that's the image resolution, that's n; d is going to be the channels. So n itself is going to be this giant number, and you can see that n by m is going to be that squared. So whenever there is a term like this, that's going to be a problem. So what do we do in attention? Let's make a little diagram: in attention, we have x and we have c; this is n by d, this is m by d. In attention, we're going to transform x by means of WQ — these W's are learnable parameters — where WQ is d by k. So it transforms the inputs into queries, and the queries are going to be n, one query per input, by the key dimension k, which is a parameter you can choose. Then we're going to transform the context by means of WK, which is also d by k, into the keys, which are m by k, and we're going to transform c by means of WV into values. For the values, there would be an additional parameter for the value dimension, but very often, since the output dimension is going to be d again, we'll just call WV d by d, which makes the values m by d. Okay, so these are now your standard attention parameters, let's say. So you are going to take the queries and the keys and multiply them together to get the attention map. You can see what happens if you multiply those two things together.
So you do query times key transposed, you get n by m, and you're going to softmax this — let's write it as a little sigma — so it's normalized over m, and then you take the values and calculate the outputs y from this, and the outputs y are going to be n by d. Alright, so you can see that the nonlinearity is right here. The nonlinearity determines how you aggregate the context — which is transformed into the values linearly — into the output; that's determined by this attention map. And most notably, you have this n by m object right here. This is a matrix you have to construct; you can't get around it, because you have to apply the nonlinearity to it, so you can't decompose it. And that's the problem. So far so easy; now it's about to get complicated. First, we take the inputs and we're going to again apply a WQ, that's d by k, to get the queries. The queries are going to be n by k — so far, so good. So we got the queries, and as you can see right here, WQ is d by k, and the queries are constructed like this. Now, there's a mistake here — authors, anonymous authors, if you're looking, this is wrong; this should be something like n by k. And not even that: this u here is like an intra-depth parameter, which we're just going to scrap — it's equal to one for our purposes. You can do all the things with u equal to more stuff, but we're just going to leave it at one, if that's okay. So yeah, scrap this. Alright, so we got our queries; keys and values are constructed just the same as in attention. Let's quickly go over here and do that: we're going to transform the context using WK, which is d by k, and as well using WV, which they say is d by v, but we'll just always say d by d — they relax that later on and so on. So this gives you keys, m by k, and values, m by d. And now the difference happens — we're getting to the positional embeddings in a minute. What we're going to do now is apply a softmax to the keys — just the keys. So we're going to take the keys and do a softmax operation along m, along the m dimension, which gives us the normalized keys, still m by k. Now this is a little bit weird — why would we apply the softmax to an individual thing like this? We're going to see in a minute what that does. For now, we simply create a key matrix that is m by k and apply a softmax over the m dimension. And that means we now have k attention maps — k different attention maps over m inputs. Every time you take a softmax, you basically make a distribution, and that defines how you aggregate information. Up here, we had n different attention maps of size m; now we have k different attention maps of size m. This is going to be the difference, right? It's not that attention vanishes in this model — it's that the attention shifts where it is. And you're going to see that quickly.
When you look here at this content contribution and position contribution — this is where we're now going to multiply the keys by the values. The position part we're going to look at in a minute, but we're now going to multiply the keys by the values — the queries are nowhere to be found. And if we go down here, you can see that we multiply the keys by the values and then contract over m. So this is a multiplication right here: we're going to take the values and the keys, and we're going to contract over m. In this case, we'll simply do key transposed times V. Yeah, that sounds about right. Which gives us — what do they call it? — lambda C. Now we have to pay attention: the C up here is not a dimension, it's just the name. This is lambda C, which is going to be of size k by d — k by v in their notation, but k by d in our case — contracting over m. So here you see that there is kind of a tricky trick in here. This whole thing stands by itself and does kind of an attention to itself: the context summarizes itself. And you can see that at the end, there is no more m — m has vanished from this. So we have summarized the context and abstracted away the m before we ever had a chance to let it interact with the n. And this is exactly where this differs from attention. So the last step here is going to be: we take this lambda C and we take the queries, and we multiply those together. This is simply a linear function right here — we're doing q times lambda C — and that is going to give us our output y, and y is going to be n by d. So each input's next-layer representation is simply a linear function of its query and a summary of the context. What you don't have is fine-grained interaction between positions. A transformer can say: well, I am this pixel here and I am green, and you are this pixel there and you are red — I am going to pay x amount of attention to you. And you, this pixel here, you are yellow — I'm going to pay more attention to you. This is no longer possible: the pixels in the context will go among themselves and decide, okay, you're red, I'm yellow, and so on — how much attention should anyone be able to pay to the two of us? They will put that into a summary vector, basically, and then the query can only look at that summary vector and decide what it wants to do with it. In essence, I have multiple frameworks for how you can understand this. Notably, what you can understand this as is: the whole blue part here constructs a vector space — and this k is going to be very important — not really a vector space of k dimensions, but like a subspace of k dimensions in the d-dimensional vector space. k is usually pretty small. So we're going to have this subspace of k vectors in the d-dimensional space that is constructed, and all the queries can do is select a point in that.
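To see the difference in code, here is a minimal sketch — my own toy code with made-up sizes, not the authors' implementation — of the content pathway next to standard attention, with u equal to one and no heads, as above:

```python
import torch

n, m, d, k = 8, 8, 32, 16          # toy sizes; here x == c, so n == m
x = torch.randn(n, d)              # inputs  (n, d)
c = x                              # context (m, d)
Wq = torch.randn(d, k)
Wk = torch.randn(d, k)
Wv = torch.randn(d, d)             # "d by d" as in the video

q    = x @ Wq                      # queries (n, k)
keys = c @ Wk                      # keys    (m, k)
v    = c @ Wv                      # values  (m, d)

# standard attention: the (n, m) map is materialized and softmaxed
attn   = (q @ keys.T).softmax(dim=-1)   # (n, m)  <- the expensive map
y_attn = attn @ v                       # (n, d)

# lambda content path: the softmax is over the keys alone, along m,
# and m is contracted away before the queries ever see it
sigma_k = keys.softmax(dim=0)      # (m, k): k attention maps over m
lam_c   = sigma_k.T @ v            # (k, d): the content lambda
y_lam   = q @ lam_c                # (n, d): a linear function per query
```

The point is visible in the shapes: attention materializes the (n, m) map before the softmax, while the lambda path softmaxes the keys alone and contracts m away into the (k, d) matrix lam_c, so the queries only ever see a k-by-d linear function.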
The meaning here is that the context — no, let's go a step back and talk about this softmax operation. So it might be a bit weird to apply the softmax to just a single matrix of keys, but that's not exactly what's happening. In attention, you have a softmax over the queries times the keys, and both are computed from the input: the queries are computed from the input and the keys are computed from the input. And the question of how information should be aggregated from the values is determined by those two things. Now, in this case, you might say: well, it's just the keys that decide, so there is no interaction. But there is. If you write out what the keys are, the keys are the context times this matrix WK. And you can see this as the analog to the situation before: the context here is the input, so it plays the role the query matrix played — except that the query matrix is a linear transformation of the input, but it still comes from the input. Whereas this here is no longer like the key matrix from above — this here is actually fixed. So the keys in this world are fixed. How you can imagine that is: each layer constructs a sort of pseudo-sequence of size k. What it first does is summarize the input sequence — I'll draw it like I drew this before. So instead of transforming this sequence into this sequence, it constructs an intermediate pseudo-sequence of, let's say, length three, and this pseudo-sequence always, always has the same queries. Okay, actually you have to swap the two: this is kind of like the keys, and this is like the queries. So this pseudo-sequence always has the same queries, and the sequence down here is going to send information to that pseudo-sequence. So this pseudo-sequence always aggregates information in the same way, independent of what the input is. That's how it aggregates; it no longer transforms this directly into the upper sequence right here. Of course it does so in the second step, but that step is now just linear. So this part here is attention, and this part here is linear. This is kind of reminiscent of the Linformer and so on — that kind of concept that projects the intermediate sizes of the sequences down. It's just done in a different way, in that the attention is shifted to this first part here and is sort of fixed. I don't even want to call it attention, because it's kind of fixed: the queries are always the same, they are learned — a bit like the DETR paper, if you remember, where we have learned queries. So what does this mean? It means something like: each layer learns these different dimensions that it can aggregate in the context. So it asks: this particular context element, what kind of color does it have? It could be higher-level features: tell me if there is a corner, if this is an image; or if this is a sequence, tell me what kind of word it is, tell me its grammatical meaning — I don't know — or its label, like whether it's a noun or a verb.
And here you kind of get what I mean: it constructs this space of properties of the context elements, and each query from up here can then come and basically decide how important each of these is. So these blue arrows here refer directly to the pseudo-sequence, which is of length k, and then the query simply selects a point in this and aggregates information from that. I don't know if that's entirely clear, but the point is that the attention operation is now shifted: instead of transforming a sequence into its higher representation, it transforms it into kind of an intermediary pseudo-sequence that has nothing to do with the queries in question — it is just dependent on the context. Then the projection to the next-level representation, where the queries actually come in, is simply a linear operation. It constructs this kind of subspace that has these axes, and in this subspace it's just a linear operation to get to the next layer. Okay, so: summarize the context using attention. The trick here is that you don't summarize the context into a vector — you actually summarize the context into a bunch of vectors. So the context can say: my color is green; my corner-ness over the whole thing is, like, I've got lots of corners. And each of these properties is a vector, as you can see here. So maybe it's better characterized as a list, a list of size k, where each entry in this list has a particular meaning, like color, and each one is a vector. So the context will be summarized into a collection of k vectors. Each context can have a different collection of k vectors, but it's always k of them. And then the query can decide how it wants to aggregate: how important is color to me? Like, five — color is five important. And then it sees, oh, you're green. Okay, cool. How important is corner-ness to me? Eight. Okay, cool. The important part is what the query cannot do: it cannot go look at what the color actually is and then decide how important it is. That's what makes it different from attention. In attention, the query can see it and go: oh, you're green — well, that's not that important to me. Here, the query must decide: okay, I myself am a red pixel, I'm going to pay five attention to the color of other pixels; if I am yellow, I'm going to pay seven attention. But it can't look at the other pixels, because they're all summarized. It can only look at the summary and decide how important that is. So, enough ranting from me — there is a second part to this, which is the position encoding. They have noticed — probably they've tried it like this and it just doesn't work, and it shows in their ablations — that what's actually important is the additional positional encodings. And that's what they have right here. So what they have now are these encodings E, and as you can see right here, E is already indexed by n and m. So E is going to be an n by m by k tensor. The inputs are n by d and m by d, and E is n by m by k. Now, these are positional encodings: they are a fixed set of learned parameters, kind of like positional encodings in a transformer. But in a transformer, it would simply be m by k, because you just put the positional encodings onto the context — or onto the input, in which case it would be n by k. Here we have an n by m by k.
So these are actually learned attention weights, kind of. This is going to be a matrix that is n by m, with a k-dimensional vector for each entry: each n-m pair has a vector, an embedding, associated with it. This kind of destroys the whole notion of summarizing the context first, right? Because now we're building up basically a learned attention map. The advantage here is that this thing is learned, not computed: it is learned per layer, and it cannot be changed from example to example. That's the difference from the attention map. So the stuff that is computed dynamically is not n by m, and the stuff that is n by m is not computed dynamically. And that has the big advantage that if I have a batch size in front, then these things here all get the batch size added — n by d by b, m by d by b — while this thing has no b. So this thing is fixed, and all you have to do is hold n by m once in memory; you don't have to grow it with the batch size. And since we are reducing m anyway, because we are only paying attention to a local context, that's going to be feasible. But you can see that you can't get around the fact that you have to have these attention maps, and therefore, in this framework, you probably can't get around having some sort of local restriction. Because if it weren't for this part, there would be no n by m anywhere — never ever an n by m — and therefore you wouldn't have this giant blow-up: the attention mechanism is over m by k, as you can see here, and as long as you can keep k small, that could actually work with a global context. But not with the position embeddings — and it doesn't work without the position embeddings. And they are not really position embeddings; they are attention embeddings, or interaction embeddings — to call them position embeddings is a bit of a stretch. I mean, they say it's a positional embedding for the relation of n to m. It's important to note that these, again, are not computed from the input — they are simply fixed. They simply say: if a pixel is on the top left and the other pixel is on the bottom right, then their relation is given by this vector right here. So for each pair of pixels, there is an entry in this matrix. Now, how do we use those? Kind of similarly: we start down here, we multiply them with the values, and you contract over m in the subsequent equation — where is it? Right here, you contract over m, which gives you this thing right here, and you can see there is no m here anymore, but now there is an n. So what you get naturally is one positional lambda per input position. So yeah, as I said, it sort of destroys this notion of first summarizing the context, because now it depends on the position again. So you're going to take the values and this thing, and you're going to compute from this the lambda p, the positional lambda, which is of size — you can see it — n by k by d. And you're going to take the queries over here — it's going to get complicated — and compute the output y p, which is going to be n by d. Yes, this is n — you do it once per position. And then you're going to add the y's together; there is a plus for the final y.
So you can see these are two completely linearly separable pathways — this is y c, the content y — one comes from these positional encodings, and one comes from the context. And the positional encodings are actually more important: in the experiments, if they leave those away, nothing works; if they leave the summarizing away, stuff pretty much still works. So it's fair to say that the power here comes from the positional encodings. And that, again, is a bit counter to their narrative, because I feel that the whole point of the lambda layers is to do this stuff right here, and this here is something that you need to make it work. But in any case, what you do is you take these positional encodings and multiply them by the values. So this lambda p is a special object: as you can see, it creates an n times k times d tensor, and that's a big tensor. What does it do? For each of the n pieces in the input, it creates one of these k-sized lists of vectors, as we've seen before — but it does so differently for each position. So for each position, it creates a different table, and the query q again indexes into this table, but at the position where it is. So if you take the query from a particular position in the output, it's going to look at its table and aggregate according to what it's interested in. So the positional encodings basically say: if you are the first element in the sequence, then you have to aggregate information according to this particular scheme; but if you're the second element, you have to aggregate information according to that particular scheme. Again, it can't look at the contents of what these particular things are — it can only define a linear operation. However, it can kind of look at the contents of the query, because usually x and c are the same; so by incorporating v in here, with m being equal to n most often, it can actually do that. And again, we see in the results that most of the information actually goes through this path. The good thing, again, is that here you have n by m, but you don't have a b, a batch size. The batch size appears here, because there is actually a batch size, and then the batch size would appear right here — but the moment the batch size appears, the n by m term falls away. There is no m right here: you contract over m as you introduce the batch size. So again, there is nowhere an n by m tensor to be held that is scaled by the batch size, and that's where this kind of performance increase comes from. But you can already see: we had this nice construction where the whole context constructs one table of vectors and the query aggregates it, and here we construct a separate table for each element in the input.
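Continuing the toy sketch from before — again my own illustrative code, with the batch dimension omitted — the positional pathway looks roughly like this, where E is a fixed, learned per-layer parameter rather than something computed from the input:

```python
import torch

n, m, d, k = 8, 8, 32, 16
q = torch.randn(n, k)              # queries, as in the content sketch
v = torch.randn(m, d)              # values
E = torch.randn(n, m, k)           # learned per-layer embeddings (n, m, k)

# one positional lambda per query position: contract over m
lam_p = torch.einsum('nmk,md->nkd', E, v)     # (n, k, d)
y_pos = torch.einsum('nk,nkd->nd', q, lam_p)  # (n, d)

# the layer output is the sum of the two pathways: y = y_content + y_pos
```

Note that lam_p carries the n index, so there is one k-by-d table per query position — which is exactly the point made above.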
And then the query, according to its position, aggregates that, and the layer simply adds those two aggregations together. Most of the performance comes from the bottom path right here. You can sort of see this as: if you have y equals W x plus b, the W here is like these tables right here, because they actually depend on what the x is — in this case, the position of the x — and the b is just something that comes on top for every single position there is. Okay, this is a giant mess, but that's about how it works, and I hope you didn't get completely lost in this. So they have a whole bunch of extensions, as I said. They have translation equivariance, because they build their positional encodings as relative encodings, which makes it very easy to then build this lambda convolution. So you can actually implement this operation here as a convolutional operation to get the positional lambda. And their whole point is kind of: if I do local attention, then this thing only pays attention to these three, and this thing only pays attention to these three — kind of like a convolution. But because it's attention, for each of these things I need to build my attention map, and if I want to batch this, if I want to do this at once, my interaction matrix kind of looks like this downward-descending staircase or something like that — and that is not well supported in current frameworks, which makes it really slow. They say: look, even though we use the same amount of, let's say, memory as local attention — or time, sorry, time — we can implement it using these convolutional primitives, and they are much faster. So they are going to outperform local attention in that sense. They do compare here, in terms of time and space, to an attention layer. Now, they split this into content interactions, which is that first pathway, and position interactions. The content part is basically irrelevant, because it's smaller than the position interactions, and the position interactions give the performance. So you can see clearly: in space, for the attention layer, you have b times n times m — h is the number of heads, we don't care much about that right now — and that's the problem. Here, in contrast, you have an n times m term, but no b, and you have b times n, but no m. So that is the gain right here, as long as you can keep k small — this intermediate sequence. Which makes sense, right? The attention goes to this intermediate sequence, so as long as you can keep that intermediate sequence small and fixed, you don't have a problem with this quadratic memory — or at least you have a problem right here, but that's not modulated by the batch size. In terms of time, you can see there is still a b times n times m: you still have that time complexity, because after all, you need to do these multiplications and contractions just the same. So not much of a difference in terms of time — the time argument is more that they can implement it using convolutional operators rather than this kind of strided attention maps. They also do this with multi-query, multi-head variants and so on.
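As a hedged illustration of the lambda-convolution idea — a 1-D toy version, my own code; for images the same trick needs a higher-dimensional convolution — when E depends only on the relative position n minus m, the contraction over m becomes an ordinary convolution of the values:

```python
import torch
import torch.nn.functional as F

m, d, k, r = 16, 8, 4, 5           # r = size of the local scope
v = torch.randn(1, d, m)           # values as a (batch, d, m) signal
E_rel = torch.randn(k, 1, r)       # one k-vector per relative offset

# the same kernel is applied to every value channel: groups == d
weight = E_rel.repeat(d, 1, 1)                          # (d * k, 1, r)
lam_p = F.conv1d(v, weight, padding=r // 2, groups=d)   # (1, d * k, m)
lam_p = lam_p.view(1, d, k, m)     # a (k, d) lambda at every position n

q = torch.randn(1, m, k)           # queries
y_pos = torch.einsum('bnk,bdkn->bnd', q, lam_p)         # (1, m, d)
```

The design point is that a grouped convolution with the kernel shared across value channels reproduces the outer-product structure of lambda p, using a primitive that frameworks execute fast — which is the speed argument made above.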
And you can see right here that it outperforms other systems, including systems with self-attention, especially in terms of memory: if you do global self-attention, it uses a lot of memory — in fact, an out-of-memory error on their machine. Axial self-attention, local self-attention — these are all kinds of limits to self-attention. Local self-attention comes closest to what they do, but then you suffer a massive drop in performance, whereas their lambda layer right here keeps a lot of performance. And you can see the performance gain: this is k — I believe k is equal to 16 in this example. If they go to k equals 8 — and we know that the attention interaction in the lambda networks is not n by m, but actually m by k — you can already see there is a massive jump in the number of examples you can push through the network. So that kind of gives evidence to my hypothesis of what is going on right here. Lastly, I've already shown you this table: it outperforms the EfficientNets. And this is a special version of lambda networks, the LambdaResNets, where they take a ResNet and only replace a part of it. If you look at the table down here, these are the different architectures where they could replace things in the ResNet — for example, the ResNet-50 right here. This is all convolutions; this is kind of the baseline, and you can see that it's like 7200 samples per second. If you replace everything by a lambda layer, you're down to like 1160 examples per second. Interestingly, if you replace only the first layer by a lambda layer, the performance also drops enormously — and that is because, of course, the sizes of the images get smaller and smaller, so your n gets smaller and smaller as you go up the layers. As you can see right here, if you only replace the last layer by a lambda layer, then you can gain back almost all of that performance, and interestingly still outperform the completely convolutional network. And it also has fewer parameters — you can see 18 instead of 25. Alright, so that was my rant on this paper. Again, I hope this wasn't too convoluted. There's a lot more to this paper. I want to quickly shout out LucidRains, who made — I've got to show you, this is hilarious — he implemented this as the paper came out. And of course, we don't know if Phil Wang is the author of this paper — maybe, maybe not, chances are not — but it's still cool that he goes ahead and implements these things. I especially love the conciseness of using einops right here. As you can see, this is it — that's it, that's all. The use of einops to do these rearrange and einsum operations, which are much more concise than the reshape, squeeze, unsqueeze, whatnot. So that's pretty cool. And the coolest thing is lambda — actual Greek letters in the code. Thank you, Python. So yeah, I invite you to check out this implementation — I'll of course link it. Tell me what you think of the paper, and I'll see you next time. Bye bye.
}, { "start": 1064.5600000000002, "end": 1069, "text": " And that's not all like there and there, then there is translation, equivariants, then there" }, { "start": 1069, "end": 1075.48, "text": " is the convolutional lambda, and so on, and the analysis." }, { "start": 1075.48, "end": 1082.6, "text": " But let's break this down and see where the lambda layer is different and how it works." }, { "start": 1082.6, "end": 1090.64, "text": " So we start out with the input and the context, right, that is that is here." }, { "start": 1090.64, "end": 1094.76, "text": " These are the inputs to the lambda layer, x and c." }, { "start": 1094.76, "end": 1102.96, "text": " Now, keep in first of all, okay, let's let's build up a little diagram over here, we have" }, { "start": 1102.96, "end": 1109.96, "text": " x and we have c coming in, and we'll annotate them with their respective sizes." }, { "start": 1109.96, "end": 1113.8799999999999, "text": " So x is n by d, and c is m by d." }, { "start": 1113.8799999999999, "end": 1120.04, "text": " So that's n by d, and m by d." }, { "start": 1120.04, "end": 1127.28, "text": " Now, keep in mind, okay, that x and c are often the same thing." }, { "start": 1127.28, "end": 1131.32, "text": " First of all, right, or similar if c is restricted and so on." }, { "start": 1131.32, "end": 1133.84, "text": " But keep keep that in mind." }, { "start": 1133.84, "end": 1139.32, "text": " So x and c are often the same thing, n here is what would be referred to as the input" }, { "start": 1139.32, "end": 1142.68, "text": " size, input size, right." }, { "start": 1142.68, "end": 1151.8400000000001, "text": " And if n is equal to m, if x is equal to c, then the problem is going to be whenever there" }, { "start": 1151.8400000000001, "end": 1158.76, "text": " is a term m by n, then that is going to be quadratic in the input size, and that is going" }, { "start": 1158.76, "end": 1159.76, "text": " to blow up." }, { "start": 1159.76, "end": 1165.2, "text": " So in terms of in when if this is an image, and this here is going to be whatever 225" }, { "start": 1165.2, "end": 1169.04, "text": " by 225, that's the image resolution." }, { "start": 1169.04, "end": 1171.0800000000002, "text": " That's that's n, right?" }, { "start": 1171.0800000000002, "end": 1172.52, "text": " n is this." }, { "start": 1172.52, "end": 1175.32, "text": " We're not talking d is going to be the channels." }, { "start": 1175.32, "end": 1177.84, "text": " So n itself is going to be this giant number." }, { "start": 1177.84, "end": 1183.04, "text": " So you can see that n by m is going to be that squared." }, { "start": 1183.04, "end": 1188.36, "text": " So whenever there is a term like this, that's going to be a problem." }, { "start": 1188.36, "end": 1195.48, "text": " So in attention, what do we do in attention, let's make a little thing here in attention," }, { "start": 1195.48, "end": 1197.56, "text": " we have x and we have c." }, { "start": 1197.56, "end": 1203.52, "text": " This is n by d, this is m by d." }, { "start": 1203.52, "end": 1212, "text": " In attention, what we're going to do is we're going to transform x by means of w q, but" }, { "start": 1212, "end": 1220.2, "text": " this is these are learnable parameters, the w, w q is d by k." 
}, { "start": 1220.2, "end": 1227.52, "text": " So it transforms the inputs into queries and the queries are going to be n one query per" }, { "start": 1227.52, "end": 1235.68, "text": " input, by the key dimension, which is often which is a parameter you can choose, then" }, { "start": 1235.68, "end": 1244.6399999999999, "text": " we're going to transform the context by means of w k, which is also d by k into the keys," }, { "start": 1244.6399999999999, "end": 1256.8799999999999, "text": " which are now m by k, sorry, and we're going to transform the c into w also into values." }, { "start": 1256.88, "end": 1262.18, "text": " And the values, I mean, there would be an additional parameter of the value dimension," }, { "start": 1262.18, "end": 1267.2, "text": " but very often, since the output dimension is going to be d again, we'll just say this" }, { "start": 1267.2, "end": 1269.0400000000002, "text": " is m by d." }, { "start": 1269.0400000000002, "end": 1279.44, "text": " Sorry, no, this is, let's call that d by d, which makes the values m by d." }, { "start": 1279.44, "end": 1287.3200000000002, "text": " Okay, so these are now your standard attention parameters, let's say." }, { "start": 1287.3200000000002, "end": 1293.3200000000002, "text": " So you are going to take the queries and the keys and you're going to multiply them together" }, { "start": 1293.3200000000002, "end": 1295.24, "text": " to get the attention map." }, { "start": 1295.24, "end": 1298.3600000000001, "text": " Okay, you can see if you multiply those two things together." }, { "start": 1298.3600000000001, "end": 1307.92, "text": " So query, you do query times key transposed, you get n by m, and you're going to softmax" }, { "start": 1307.92, "end": 1316.92, "text": " this, let's do it like a little sigma, so which is going to be the normalized by m," }, { "start": 1316.92, "end": 1323.74, "text": " and you're going to take the values and calculate the outputs y from this and the outputs y" }, { "start": 1323.74, "end": 1327.88, "text": " are going to be n by d." }, { "start": 1327.88, "end": 1335.5600000000002, "text": " All right, so you can see that the nonlinearity is right here." }, { "start": 1335.56, "end": 1344.52, "text": " Okay, so the nonlinearity determines how do you aggregate the context which is transformed" }, { "start": 1344.52, "end": 1350.52, "text": " into the values linearly, how do you aggregate the context to the output that's determined" }, { "start": 1350.52, "end": 1354.6, "text": " by the nonlinearity, it's determined by this attention map." }, { "start": 1354.6, "end": 1359.6799999999998, "text": " And most notably, you have this n by m parameter right here." }, { "start": 1359.6799999999998, "end": 1363.26, "text": " This is a matrix you have to construct, you can't get around it because you have to apply" }, { "start": 1363.26, "end": 1367.08, "text": " nonlinearity to it can decompose it." }, { "start": 1367.08, "end": 1369.42, "text": " And that's the problem." }, { "start": 1369.42, "end": 1373.96, "text": " So now, it's about to get complicated." }, { "start": 1373.96, "end": 1374.96, "text": " Really easy." }, { "start": 1374.96, "end": 1384.46, "text": " First of all, we take the inputs, and we're going to again, apply a WQ, that's d by k" }, { "start": 1384.46, "end": 1386, "text": " to get the queries." }, { "start": 1386, "end": 1392.12, "text": " Okay, the queries are going to be n by k so far, so good." 
}, { "start": 1392.12, "end": 1400.6, "text": " So we got these, we got the query, as you can see right here, it's d by k." }, { "start": 1400.6, "end": 1403.3999999999999, "text": " And the queries are constructed like this." }, { "start": 1403.3999999999999, "end": 1405.76, "text": " Now there's a there's a mistake here." }, { "start": 1405.76, "end": 1409.8799999999999, "text": " Authors, anonymous authors, if you're looking, this is wrong." }, { "start": 1409.8799999999999, "end": 1414.12, "text": " Yes, this should be something like n by k." }, { "start": 1414.12, "end": 1416.34, "text": " Okay, not even you." }, { "start": 1416.34, "end": 1422.1, "text": " So you here is like an inter dimension parameter, this, we're just going to scrap this, this" }, { "start": 1422.1, "end": 1426.52, "text": " is equal to one for our purposes." }, { "start": 1426.52, "end": 1431.56, "text": " You can, you know, you can you can do all the things with the with the u equal to more" }, { "start": 1431.56, "end": 1436.48, "text": " stuff, but we're just going to leave it at one if that's okay." }, { "start": 1436.48, "end": 1440.6399999999999, "text": " So yeah, scrap this." }, { "start": 1440.6399999999999, "end": 1447.9399999999998, "text": " Alright, so we got we got our queries and you can see keys and values just the same." }, { "start": 1447.94, "end": 1453.44, "text": " So we're going to transform the context into keys and values just the same as in attention." }, { "start": 1453.44, "end": 1457.56, "text": " Let's quickly go over here and do that." }, { "start": 1457.56, "end": 1466.1200000000001, "text": " Here we're going to transform this using WK, which is d by k, and we're going to transform" }, { "start": 1466.1200000000001, "end": 1476.5800000000002, "text": " it as well using WV, which is D. Now, they're going to say D by V, but we'll just always" }, { "start": 1476.58, "end": 1481.6, "text": " say D by D. They are going to relax that later on and so on." }, { "start": 1481.6, "end": 1491.52, "text": " But yeah, D by D. So this gives you keys and this gives you values and sorry, m by k, and" }, { "start": 1491.52, "end": 1503.08, "text": " now m by D. And now the the difference is is happening." }, { "start": 1503.08, "end": 1506.86, "text": " We're getting to the positional embeddings in a minute." }, { "start": 1506.86, "end": 1514.08, "text": " So now what we're going to do is we're going to apply a softmax to the keys, just the keys." }, { "start": 1514.08, "end": 1521.76, "text": " Okay, so we're going to take the keys and we're going to do a softmax operation along" }, { "start": 1521.76, "end": 1522.76, "text": " m." }, { "start": 1522.76, "end": 1529.1599999999999, "text": " So we'll maybe say along which dimension here is along m along the m dimension." }, { "start": 1529.1599999999999, "end": 1532.28, "text": " Okay, so which gives us the key m by k." }, { "start": 1532.28, "end": 1534.06, "text": " Now this is a little bit weird." }, { "start": 1534.06, "end": 1537.8799999999999, "text": " Why would we apply the softmax to like an individual thing?" }, { "start": 1537.8799999999999, "end": 1541.28, "text": " And we're going to see in a minute what that does." }, { "start": 1541.28, "end": 1547.6, "text": " But for now, this simply create, we create a key matrix." }, { "start": 1547.6, "end": 1550.2, "text": " The key matrix is m by k." }, { "start": 1550.2, "end": 1554.56, "text": " And then we're going to apply a softmax over the m dimension." 
}, { "start": 1554.56, "end": 1560.36, "text": " And that means that means we now have k attention maps." }, { "start": 1560.36, "end": 1564.28, "text": " We have k different attention maps over m inputs." }, { "start": 1564.28, "end": 1569.8, "text": " All right, and every time you make a softmax, you basically make a distribution." }, { "start": 1569.8, "end": 1573.62, "text": " And that defines how you aggregate information." }, { "start": 1573.62, "end": 1580.28, "text": " And so we have k different distributions as here, you can see our attention map was we" }, { "start": 1580.28, "end": 1585.7199999999998, "text": " had n different attention maps of size m." }, { "start": 1585.7199999999998, "end": 1589.04, "text": " And now we have k different attention maps of size m." }, { "start": 1589.04, "end": 1591.92, "text": " This is going to be the difference, right?" }, { "start": 1591.92, "end": 1595.3999999999999, "text": " Here, it's not that attention vanishes in this model." }, { "start": 1595.3999999999999, "end": 1599.1599999999999, "text": " It's that the attention shifts where it is." }, { "start": 1599.1599999999999, "end": 1601.98, "text": " And you're going to see that quickly." }, { "start": 1601.98, "end": 1608.8799999999999, "text": " When you look at here, this content contribution and position contribution is where we're going" }, { "start": 1608.8799999999999, "end": 1613.68, "text": " to now multiply the keys by the values." }, { "start": 1613.68, "end": 1616.32, "text": " And yeah, the position we're going to look in a minute." }, { "start": 1616.32, "end": 1618.24, "text": " But we're now going to multiply the keys by the value." }, { "start": 1618.24, "end": 1622.24, "text": " So the queries are nowhere to be found." }, { "start": 1622.24, "end": 1627.88, "text": " And if we go down here, you can see that we multiply the keys by the values and then contract" }, { "start": 1627.88, "end": 1628.88, "text": " over m." }, { "start": 1628.88, "end": 1635.28, "text": " So this is this is a a multiplication right here." }, { "start": 1635.28, "end": 1644.16, "text": " So we're going to take the values, whoopsie, the values and the keys, and we're going to" }, { "start": 1644.16, "end": 1646, "text": " contract over m." }, { "start": 1646, "end": 1656.76, "text": " So in this case, we'll simply do whatever key key like key transposed times V, maybe." }, { "start": 1656.76, "end": 1661.44, "text": " Yeah, that makes sense." }, { "start": 1661.44, "end": 1663.76, "text": " Or the other way around." }, { "start": 1663.76, "end": 1668.12, "text": " No, that that sounds sounds about right." }, { "start": 1668.12, "end": 1671.12, "text": " Which gives us what what do they call it?" }, { "start": 1671.12, "end": 1673.8, "text": " I think they call it lambda." }, { "start": 1673.8, "end": 1675.8, "text": " They call it lambda C." }, { "start": 1675.8, "end": 1677.48, "text": " Now we have to pay attention." }, { "start": 1677.48, "end": 1682.12, "text": " The C up here is going to be this is not a dimension." }, { "start": 1682.12, "end": 1693.36, "text": " This is just the name of this is lambda C, which is going to be of size k by D. Okay." }, { "start": 1693.36, "end": 1695.56, "text": " Do we get this right?" }, { "start": 1695.56, "end": 1697.08, "text": " This is going to be of size." }, { "start": 1697.08, "end": 1703.36, "text": " Yes, k by V in this case, but k by D in our case and contracting over m." 
}, { "start": 1703.36, "end": 1711.12, "text": " So here you see that it's kind of a it's kind of a tricky trick in here." }, { "start": 1711.12, "end": 1716.24, "text": " So this whole thing is sort of by itself." }, { "start": 1716.24, "end": 1719.8799999999999, "text": " And it does kind of an attention to itself." }, { "start": 1719.8799999999999, "end": 1723.08, "text": " It's the context summarizes itself." }, { "start": 1723.08, "end": 1726.24, "text": " And you can see at the end, there is no more m." }, { "start": 1726.24, "end": 1731.7199999999998, "text": " So m, there is there's no more m, m is vanished from this." }, { "start": 1731.72, "end": 1739.24, "text": " So we have summarized the context in in and abstracted the m before we ever had a chance" }, { "start": 1739.24, "end": 1743.3600000000001, "text": " to let it interact with the end." }, { "start": 1743.3600000000001, "end": 1747.92, "text": " And this is exactly where the this differs from attention." }, { "start": 1747.92, "end": 1756, "text": " So the last step here is going to be that we're going to take this this lambda C, and" }, { "start": 1756, "end": 1758.92, "text": " we're going to take the queries." }, { "start": 1758.92, "end": 1761.18, "text": " And we're going to multiply those together." }, { "start": 1761.18, "end": 1764.96, "text": " So this is simply a linear function right here." }, { "start": 1764.96, "end": 1772.3600000000001, "text": " This is a linear function, we're doing q times lambda C." }, { "start": 1772.3600000000001, "end": 1775.28, "text": " And that is going to give us our output y." }, { "start": 1775.28, "end": 1786.74, "text": " Okay, and y is going to be n by D. So each of the inputs have this is each of the inputs" }, { "start": 1786.74, "end": 1788.5600000000002, "text": " next layer representation." }, { "start": 1788.56, "end": 1796.22, "text": " So each of the inputs next layer representation is simply a linear function of its query." }, { "start": 1796.22, "end": 1801.84, "text": " And its context, and the context is a summary of the context." }, { "start": 1801.84, "end": 1808.72, "text": " So what you don't have is fine grained interaction between position, a transformer can say, well," }, { "start": 1808.72, "end": 1811.4199999999998, "text": " I am this pixel here." }, { "start": 1811.4199999999998, "end": 1812.8799999999999, "text": " And I am green." }, { "start": 1812.8799999999999, "end": 1815.8999999999999, "text": " And you are this pixel there." }, { "start": 1815.8999999999999, "end": 1817.76, "text": " And you are red." }, { "start": 1817.76, "end": 1822.16, "text": " I am going to pay x amount of attention to you." }, { "start": 1822.16, "end": 1827.02, "text": " This is no law and you this pixel here you are yellow, I'm going to pay more attention" }, { "start": 1827.02, "end": 1828.02, "text": " to you." }, { "start": 1828.02, "end": 1829.02, "text": " You can't do that." }, { "start": 1829.02, "end": 1834.8799999999999, "text": " The pixels in the context, they will go among themselves, they will decide, okay, you're" }, { "start": 1834.8799999999999, "end": 1836.8799999999999, "text": " red, I'm yellow, and so on." }, { "start": 1836.8799999999999, "end": 1842.9, "text": " How much attention should anyone be able to pay to the two of us, they will put that into" }, { "start": 1842.9, "end": 1846.06, "text": " a summary vector, basically." 
}, { "start": 1846.06, "end": 1851.86, "text": " And then the query can only look at that summary vector and decide what it wants to do with" }, { "start": 1851.86, "end": 1853.08, "text": " it." }, { "start": 1853.08, "end": 1859.6599999999999, "text": " In essence, I have a multiple frameworks of how you can understand this." }, { "start": 1859.6599999999999, "end": 1867.98, "text": " Notably, what you can understand this as is the whole blue part here, what it does is" }, { "start": 1867.98, "end": 1875.54, "text": " it kind of constructs a vector space, okay, it constructs a vector space of k dimensions," }, { "start": 1875.54, "end": 1878.36, "text": " you can see here, this k is going to be very important." }, { "start": 1878.36, "end": 1883.8999999999999, "text": " So it constructs a vector space of k, not of k dimensions." }, { "start": 1883.8999999999999, "end": 1889.1, "text": " But it comes, yeah, like a subspace of k dimensions in the D dimensional vector space." }, { "start": 1889.1, "end": 1891.44, "text": " Okay, is usually pretty small." }, { "start": 1891.44, "end": 1899.6599999999999, "text": " So we're going to have this k subspace of k vectors in the D dimensional space that" }, { "start": 1899.66, "end": 1908.3000000000002, "text": " is constructed, and all the queries can do is they can select a point in that, okay." }, { "start": 1908.3000000000002, "end": 1916.98, "text": " The meaning here is that the context, no, let's go a step back and talk about this softmax" }, { "start": 1916.98, "end": 1918.78, "text": " operation." }, { "start": 1918.78, "end": 1925.8200000000002, "text": " So it might be a bit weird to apply the softmax just to like a single matrix of keys." }, { "start": 1925.82, "end": 1929.7, "text": " But that's not exactly what's happening." }, { "start": 1929.7, "end": 1936.06, "text": " So in the attention, what you'll have is you'll have a softmax over the queries times the" }, { "start": 1936.06, "end": 1938.08, "text": " keys, right." }, { "start": 1938.08, "end": 1944.1, "text": " And the both are computed, the queries are computed from the input and the keys are computed" }, { "start": 1944.1, "end": 1945.5, "text": " from the input." }, { "start": 1945.5, "end": 1952.06, "text": " And the question is, how, how should they how should information be aggregated from" }, { "start": 1952.06, "end": 1956.94, "text": " the values that's determined by the two things, okay." }, { "start": 1956.94, "end": 1964.3, "text": " Now, in this case, you might say, well, it's just the keys that decide, so there is no" }, { "start": 1964.3, "end": 1965.3, "text": " interaction." }, { "start": 1965.3, "end": 1967.1799999999998, "text": " But there is." }, { "start": 1967.1799999999998, "end": 1976.26, "text": " If you write the keys out what the keys are, the keys are the context times this matrix" }, { "start": 1976.26, "end": 1978.46, "text": " WK." }, { "start": 1978.46, "end": 1986.82, "text": " Okay, and what this is now, you can see this as the analog to the one before." }, { "start": 1986.82, "end": 1991.9, "text": " So this here is the input that's kind of like the query matrix, except the query matrix" }, { "start": 1991.9, "end": 1993.94, "text": " is a linear transformation of the input." }, { "start": 1993.94, "end": 1996.26, "text": " But it's sort of like it comes to the input." }, { "start": 1996.26, "end": 2002.02, "text": " But this here is now no longer like the key matrix from above, this here is actually fixed." 
}, { "start": 2002.02, "end": 2007.18, "text": " So the keys in this world are fixed." }, { "start": 2007.18, "end": 2014.7, "text": " How you can imagine that is each layer constructs a sort of like a pseudo sequence, a pseudo" }, { "start": 2014.7, "end": 2026.46, "text": " sequence of K of K different of size K. And what it first does is it kind of summarizes" }, { "start": 2026.46, "end": 2030.78, "text": " the input sequence, it will draw it will draw it like I drew this before." }, { "start": 2030.78, "end": 2036.7, "text": " So instead of transforming this sequence into this sequence, what it does is it constructs" }, { "start": 2036.7, "end": 2044.02, "text": " a pseudo sequence of let's say length three intermediate, and this pseudo sequence, this" }, { "start": 2044.02, "end": 2050.02, "text": " intermediate sequence always, always, always has the same queries." }, { "start": 2050.02, "end": 2055.26, "text": " Now, okay, you have to swap the two actually." }, { "start": 2055.26, "end": 2059.26, "text": " This this is kind of like the keys." }, { "start": 2059.26, "end": 2061.78, "text": " This is like the queries." }, { "start": 2061.78, "end": 2067.3, "text": " Okay, so this pseudo sequence always has the same queries." }, { "start": 2067.3, "end": 2073.42, "text": " And the the this this sequence down here is now going to send information to that pseudo" }, { "start": 2073.42, "end": 2074.42, "text": " sequence." }, { "start": 2074.42, "end": 2078.94, "text": " So this pseudo sequence always aggregates information in the same way, independent of" }, { "start": 2078.94, "end": 2081.1800000000003, "text": " what the input is." }, { "start": 2081.1800000000003, "end": 2086.3, "text": " And after and after, so that's how it aggregates the output." }, { "start": 2086.3, "end": 2091.52, "text": " So no longer transforms this into this upper sequence right here." }, { "start": 2091.52, "end": 2097.58, "text": " And then, of course, it does in the second step, but this now is just linear." }, { "start": 2097.58, "end": 2104.74, "text": " So this here, this part here is attention." }, { "start": 2104.74, "end": 2110.74, "text": " And then this part here is linear, this is kind of reminiscent of the Lin former and" }, { "start": 2110.74, "end": 2116.86, "text": " so on that that kind of concept that project the sizes, the intermediate sizes of the sequences" }, { "start": 2116.86, "end": 2117.86, "text": " down." }, { "start": 2117.86, "end": 2122.98, "text": " It's just done in a different way is that the attention is shifted to this first part" }, { "start": 2122.98, "end": 2125.7000000000003, "text": " here and is sort of fixed." }, { "start": 2125.7000000000003, "end": 2128.7000000000003, "text": " I don't even want to call it attention." }, { "start": 2128.7000000000003, "end": 2134.34, "text": " Because it's kind of like fixed, the queries are always the same, they are learned a bit" }, { "start": 2134.34, "end": 2139.7000000000003, "text": " like, if you remember the DETR paper where we have learned queries." }, { "start": 2139.7000000000003, "end": 2142.98, "text": " So what does this mean?" }, { "start": 2142.98, "end": 2152.66, "text": " It means something like you each layer learns these different dimensions that it could that" }, { "start": 2152.66, "end": 2157.9, "text": " it can aggregate in the in the context." }, { "start": 2157.9, "end": 2161.38, "text": " So this could be like color." 
}, { "start": 2161.38, "end": 2169.54, "text": " So it says this context, what what kind of what what, or this particular context element," }, { "start": 2169.54, "end": 2172.58, "text": " what kind of a color does it have?" }, { "start": 2172.58, "end": 2177.98, "text": " It could be it could be higher level features, it could be like, is there is there give me" }, { "start": 2177.98, "end": 2185.42, "text": " the give me if there is a corner, if this is an image, there's a corner, or if this" }, { "start": 2185.42, "end": 2190.98, "text": " is a sequence, tell me whether or not like what kind of word it is, tell me it's it's" }, { "start": 2190.98, "end": 2197.46, "text": " grammatical meaning, I don't know, even though it's grammatical meaning, or its label, like" }, { "start": 2197.46, "end": 2200.58, "text": " whether it's a noun or a verb." }, { "start": 2200.58, "end": 2208.88, "text": " And here, you kind of get what I mean that there it constructs this space of properties" }, { "start": 2208.88, "end": 2212.06, "text": " of the context elements." }, { "start": 2212.06, "end": 2223.44, "text": " And each, each query can then come and basically decide how important each query from up here" }, { "start": 2223.44, "end": 2226.34, "text": " can decide how important each of these is." }, { "start": 2226.34, "end": 2234.82, "text": " So this these blue arrows here refer directly to the pseudo sequence, which is of length" }, { "start": 2234.82, "end": 2235.82, "text": " k." }, { "start": 2235.82, "end": 2244.6600000000003, "text": " And then the query simply selects a point in this and aggregates information in that." }, { "start": 2244.6600000000003, "end": 2245.6600000000003, "text": " Okay." }, { "start": 2245.6600000000003, "end": 2249.86, "text": " I don't know if that's if that's entirely clear." }, { "start": 2249.86, "end": 2255.6800000000003, "text": " But the point is that the attention operation is now shifted to instead of transforming" }, { "start": 2255.68, "end": 2260.8599999999997, "text": " a sequence into its higher representation, it's transforming it into kind of an intermediary" }, { "start": 2260.8599999999997, "end": 2266.3999999999996, "text": " pseudo sequence that has nothing to do with the with the queries in question is just dependent" }, { "start": 2266.3999999999996, "end": 2268.54, "text": " on the context." }, { "start": 2268.54, "end": 2275.56, "text": " Then the projection to the next level representation where the queries actually come in is simply" }, { "start": 2275.56, "end": 2286.22, "text": " a linear operation constructs this kind of subspace that has these axes." }, { "start": 2286.22, "end": 2292.02, "text": " And then it in this subspace, it's just a linear operation to get to the next layer." }, { "start": 2292.02, "end": 2297.2999999999997, "text": " Okay, so summarize the context using attention." }, { "start": 2297.2999999999997, "end": 2302.42, "text": " So the trick here is you don't summarize the context into a vector, you actually summarize" }, { "start": 2302.42, "end": 2306.66, "text": " the context into a bunch of vectors." }, { "start": 2306.66, "end": 2311.86, "text": " So the context can say my color is green." }, { "start": 2311.86, "end": 2318.7400000000002, "text": " My my corner reness over the whole like, I got lots of corners." }, { "start": 2318.7400000000002, "end": 2323.58, "text": " And each of these each of these properties is a vector, as you can see here." 
}, { "start": 2323.58, "end": 2330.66, "text": " And then so maybe it's better characterized as a list, a list of size k." }, { "start": 2330.66, "end": 2336.58, "text": " And each entry in this list has a particular meaning like color, and each one is a vector." }, { "start": 2336.58, "end": 2342.62, "text": " So the context will be summarized into a collection of k vectors." }, { "start": 2342.62, "end": 2347.14, "text": " Like this, okay, so each context can have a different collection of k vectors, but still" }, { "start": 2347.14, "end": 2348.14, "text": " it's k." }, { "start": 2348.14, "end": 2354.62, "text": " And then the query, the query can decide how it wants to aggregate how important is color" }, { "start": 2354.62, "end": 2355.7799999999997, "text": " to me." }, { "start": 2355.7799999999997, "end": 2358.06, "text": " It's like five, five important color." }, { "start": 2358.06, "end": 2360.7799999999997, "text": " And then sees like, oh, you're you're green." }, { "start": 2360.7799999999997, "end": 2362, "text": " Okay, cool." }, { "start": 2362, "end": 2364.86, "text": " How important is corner reness to me?" }, { "start": 2364.86, "end": 2365.86, "text": " Eight." }, { "start": 2365.86, "end": 2367.62, "text": " Okay, cool." }, { "start": 2367.62, "end": 2376.06, "text": " The important part is what the query cannot do is it cannot go look, it cannot look at" }, { "start": 2376.06, "end": 2379.34, "text": " what the color is and then decide how important it is." }, { "start": 2379.34, "end": 2381.5, "text": " That's what makes it different from attention." }, { "start": 2381.5, "end": 2385.18, "text": " So in attention, the query can see and it's like, oh, you're green." }, { "start": 2385.18, "end": 2387.16, "text": " Well, that's not that important to me." }, { "start": 2387.16, "end": 2395.58, "text": " The query must decide, ah, okay, I myself am a red pixel, I'm going to pay five attention" }, { "start": 2395.58, "end": 2398.22, "text": " to the color of other pixels." }, { "start": 2398.22, "end": 2403.42, "text": " If I am yellow, I'm going to pay seven attention, but it can't look at the other pixels, because" }, { "start": 2403.42, "end": 2405.2999999999997, "text": " they're all summarized, right?" }, { "start": 2405.2999999999997, "end": 2410.22, "text": " It can't go look at all the other pixels, it can only look at the summary, decide how" }, { "start": 2410.22, "end": 2412.8799999999997, "text": " important is that." }, { "start": 2412.88, "end": 2419.7400000000002, "text": " So enough ranting from me, there is a second part to this, which is the position encoding." }, { "start": 2419.7400000000002, "end": 2422.6600000000003, "text": " So they have noticed probably they've tried it like this." }, { "start": 2422.6600000000003, "end": 2424.38, "text": " And this just doesn't doesn't work." }, { "start": 2424.38, "end": 2432.1600000000003, "text": " And it shows in their ablations, what's actually important is the additional positional encodings." }, { "start": 2432.1600000000003, "end": 2434.5, "text": " And that's what they have right here." }, { "start": 2434.5, "end": 2447.68, "text": " So the what they have now is these encodings E and E, as you can see, right here, E is" }, { "start": 2447.68, "end": 2451.14, "text": " already indexed by n and m." }, { "start": 2451.14, "end": 2457.22, "text": " So E is going to be an n by m by k tensor." 
}, { "start": 2457.22, "end": 2469.22, "text": " You see the inputs are n by d, and m by d, and E is going to be n by m by k." }, { "start": 2469.22, "end": 2472.02, "text": " Now these are positional encodings." }, { "start": 2472.02, "end": 2477.66, "text": " So what they do is they are a fixed set of learn parameters kind of like positional encodings" }, { "start": 2477.66, "end": 2486.7799999999997, "text": " in a transformer, but in a transformer, it would simply be like m by k, right?" }, { "start": 2486.78, "end": 2491.78, "text": " That's what it would be because you just put the positional encodings onto the context" }, { "start": 2491.78, "end": 2492.78, "text": " or on the input." }, { "start": 2492.78, "end": 2494.42, "text": " In that case, it would be n by k." }, { "start": 2494.42, "end": 2496.34, "text": " Here we have an n by m by k." }, { "start": 2496.34, "end": 2501.5, "text": " So these are actually learned attention weights kind of." }, { "start": 2501.5, "end": 2514.46, "text": " So these are going to be a matrix that is n by m and is going to be a k dimensional" }, { "start": 2514.46, "end": 2515.46, "text": " vector for each." }, { "start": 2515.46, "end": 2523.54, "text": " So each n by m pair has a vector associated with it and embedding." }, { "start": 2523.54, "end": 2529.1, "text": " This kind of destroys the whole notion of this summarizing the context first, right?" }, { "start": 2529.1, "end": 2534.7, "text": " Because now we're building up basically a learned attention map, a learned attention" }, { "start": 2534.7, "end": 2535.78, "text": " map." }, { "start": 2535.78, "end": 2541.2200000000003, "text": " The advantage here is that this thing is learned, this thing is not computed, it is learned" }, { "start": 2541.22, "end": 2548.2999999999997, "text": " per layer, and it cannot be kind of changed from example to example." }, { "start": 2548.2999999999997, "end": 2550.62, "text": " So that's the difference between the attention map." }, { "start": 2550.62, "end": 2557.58, "text": " So the stuff that is computed dynamically is not dependent on n by m." }, { "start": 2557.58, "end": 2561.5, "text": " And the stuff that is n by m is not computed dynamically." }, { "start": 2561.5, "end": 2567.3399999999997, "text": " And that has the big advantage that if I have a batch size in front, then these things here" }, { "start": 2567.34, "end": 2577.6200000000003, "text": " are all going to be adding the batch size n by d by b, n by d by b, while this thing" }, { "start": 2577.6200000000003, "end": 2580.5, "text": " no b, okay?" }, { "start": 2580.5, "end": 2583.6600000000003, "text": " So this thing is fixed." }, { "start": 2583.6600000000003, "end": 2589.94, "text": " And all you have to do is you have to hold n by m once in memory." }, { "start": 2589.94, "end": 2594.94, "text": " And you don't have to hold it, you don't have to grow it with the batch size." }, { "start": 2594.94, "end": 2600.54, "text": " And since we are reducing n and m anyway, because or m at least, because we are only" }, { "start": 2600.54, "end": 2604.98, "text": " paying attention to local context, that's going to be feasible." }, { "start": 2604.98, "end": 2609.34, "text": " You can see that you can't get around the fact that you have to have these attention" }, { "start": 2609.34, "end": 2610.34, "text": " maps." 
}, { "start": 2610.34, "end": 2614.38, "text": " And therefore, you probably in this framework can't get around to the fact that you have" }, { "start": 2614.38, "end": 2618.1, "text": " to have some sort of local restriction." }, { "start": 2618.1, "end": 2623.04, "text": " Because if it weren't for that, this thing right here, there is no n by m, never ever" }, { "start": 2623.04, "end": 2629.98, "text": " an n by m, and therefore, you don't have this giant blow up, the attention mechanism is" }, { "start": 2629.98, "end": 2632.94, "text": " over m by k, as you can see here." }, { "start": 2632.94, "end": 2639.98, "text": " And as long as you can keep k small, that could actually work with a global context." }, { "start": 2639.98, "end": 2642.34, "text": " Okay, not with the position embedding." }, { "start": 2642.34, "end": 2645.18, "text": " And it doesn't work without the position embeddings." }, { "start": 2645.18, "end": 2648.86, "text": " And they are not position embeddings, they are attention embeddings." }, { "start": 2648.86, "end": 2656.38, "text": " Okay, let's or interaction embeddings, to call them position embeddings would be a little" }, { "start": 2656.38, "end": 2658.3, "text": " bit a little bit." }, { "start": 2658.3, "end": 2662.58, "text": " I mean, they say it's a positional bedding for their relation n to m." }, { "start": 2662.58, "end": 2667.1, "text": " It's important to note that these, again, are not computed from the input, they are" }, { "start": 2667.1, "end": 2672.7400000000002, "text": " simply fixed, they're simply say, if a pixel is on the top left, and the other pixels on" }, { "start": 2672.74, "end": 2682.3399999999997, "text": " the bottom right, then they are, their relation is given by this vector right here." }, { "start": 2682.3399999999997, "end": 2687.8599999999997, "text": " Okay, so for each pair of pixel, there is an entry in this matrix." }, { "start": 2687.8599999999997, "end": 2691.54, "text": " Now how do we use those?" }, { "start": 2691.54, "end": 2698.06, "text": " Kinda similar, we just start down here, we multiply them with the value." }, { "start": 2698.06, "end": 2707.1, "text": " And you can see that you will and you contract over m in subsequent equation." }, { "start": 2707.1, "end": 2708.2599999999998, "text": " Where is it?" }, { "start": 2708.2599999999998, "end": 2714.2599999999998, "text": " Right here, you contract over m, which gives you this thing right here, which you can see" }, { "start": 2714.2599999999998, "end": 2717.74, "text": " there is nothing here, now there is an n here." }, { "start": 2717.74, "end": 2722.98, "text": " So what you'll get naturally is one positional embedding per input." }, { "start": 2722.98, "end": 2728.5, "text": " So yeah, as I said, it sort of destroys this this notion of first summarizing the context," }, { "start": 2728.5, "end": 2732.08, "text": " because now it's, it's on again." }, { "start": 2732.08, "end": 2741.1, "text": " So you're going to take the values and this thing, and you're going to compute from this," }, { "start": 2741.1, "end": 2751.3, "text": " this lambda p positional lambda, which is of size, and you can see it, it's n by k by" }, { "start": 2751.3, "end": 2753.94, "text": " d." }, { "start": 2753.94, "end": 2763.98, "text": " And you're going to take, you're going to take the queries, it's going to get complicated." }, { "start": 2763.98, "end": 2771.82, "text": " So you're going to take the queries over here." 
}, { "start": 2771.82, "end": 2782.2200000000003, "text": " And you're going to compute the output y p, which is going to be n by d." }, { "start": 2782.2200000000003, "end": 2790.6800000000003, "text": " Yes, this is n, this is n, you're going to do it once per, and then you're going to add" }, { "start": 2790.6800000000003, "end": 2792.42, "text": " the y's together." }, { "start": 2792.42, "end": 2795.6600000000003, "text": " So this is a plus for the final y." }, { "start": 2795.66, "end": 2803.06, "text": " So you can see these are two completely linear, this is y c, the content y, two completely" }, { "start": 2803.06, "end": 2807.72, "text": " linearly separable pathways, one comes from these positional encodings, and one comes" }, { "start": 2807.72, "end": 2811.7, "text": " from these from the context." }, { "start": 2811.7, "end": 2815.2599999999998, "text": " And the positional encodings are actually more important in the experiments." }, { "start": 2815.2599999999998, "end": 2816.92, "text": " If they leave those away, nothing works." }, { "start": 2816.92, "end": 2822.44, "text": " If they leave this summarizing away, then stuff pretty much works still." }, { "start": 2822.44, "end": 2830.26, "text": " So you know, it's fair to say that the power here comes from the positional encodings." }, { "start": 2830.26, "end": 2835.9, "text": " And that, again, a bit, it's a bit counter to their to their narrative, because I feel" }, { "start": 2835.9, "end": 2840.94, "text": " that the whole point of the lambda layers is to do this stuff right here." }, { "start": 2840.94, "end": 2843.98, "text": " And this here is something that you need to make it work." }, { "start": 2843.98, "end": 2849.46, "text": " But in any case, what you do is you take, you take these positional encodings and you" }, { "start": 2849.46, "end": 2852.62, "text": " multiply them by the values." }, { "start": 2852.62, "end": 2859.54, "text": " So what this does is this here, this is a special object, this lambda p, as you can" }, { "start": 2859.54, "end": 2866.38, "text": " see, it creates n times k times d tensor." }, { "start": 2866.38, "end": 2868.34, "text": " And this is it's a big tensor." }, { "start": 2868.34, "end": 2875.38, "text": " So what does it do for each of the n pieces in the input?" }, { "start": 2875.38, "end": 2882.1, "text": " For each of the n pieces in the input, it creates a one of these lists, right, one of" }, { "start": 2882.1, "end": 2888.54, "text": " these k sized lists, k sized lists of the vectors, as we've seen before, but it does" }, { "start": 2888.54, "end": 2893.1400000000003, "text": " so differently for each position." }, { "start": 2893.1400000000003, "end": 2894.86, "text": " Okay." }, { "start": 2894.86, "end": 2900.34, "text": " So for each position, it creates a different table." }, { "start": 2900.34, "end": 2907.9, "text": " And the queue again indexes into this table, but into, you know, at the position where" }, { "start": 2907.9, "end": 2908.9, "text": " it is." }, { "start": 2908.9, "end": 2914.34, "text": " So if you take the query from a particular position in the output, it's going to look" }, { "start": 2914.34, "end": 2920.58, "text": " to its table, aggregated according to what it's interested in." 
}, { "start": 2920.58, "end": 2929.82, "text": " So the positional encodings basically say, if you if if if this element in the context," }, { "start": 2929.82, "end": 2936.26, "text": " if you are the first element in the sequence, then you have to aggregate information according" }, { "start": 2936.26, "end": 2939.04, "text": " to this particular scheme." }, { "start": 2939.04, "end": 2943.6800000000003, "text": " But if you're the second element, you have to aggregate information according to this" }, { "start": 2943.6800000000003, "end": 2945.1600000000003, "text": " particular scheme." }, { "start": 2945.1600000000003, "end": 2953.06, "text": " So again, it can't look at the contents of what these particular things are, it can only" }, { "start": 2953.06, "end": 2955.54, "text": " kind of define a linear operation." }, { "start": 2955.54, "end": 2962.86, "text": " However, it can kind of look at the contents of the query, because usually x and c are" }, { "start": 2962.86, "end": 2963.86, "text": " the same." }, { "start": 2963.86, "end": 2971.9, "text": " So by incorporating v in here, m being equal to n, most often, it can actually do that." }, { "start": 2971.9, "end": 2976.2599999999998, "text": " And again, we see in the results that most of the information actually goes through this" }, { "start": 2976.2599999999998, "end": 2977.9, "text": " path." }, { "start": 2977.9, "end": 2985.7400000000002, "text": " The good thing, again, is that so here you have n by m, but you don't have a B, you don't" }, { "start": 2985.7400000000002, "end": 2987.82, "text": " have a batch size." }, { "start": 2987.82, "end": 2992.62, "text": " Here the batch size appears because there is actually a batch size, right, there is" }, { "start": 2992.62, "end": 2995.2200000000003, "text": " a batch size here." }, { "start": 2995.2200000000003, "end": 2997.92, "text": " And then the batch size would appear right here." }, { "start": 2997.92, "end": 3002.82, "text": " But at the moment the batch size appears, the n by m term falls away." }, { "start": 3002.82, "end": 3008.06, "text": " So there is no m right here, you contract over m as you introduce the batch size." }, { "start": 3008.06, "end": 3016.46, "text": " So again, there is nowhere an n by m tensor to be held as you add that that is scaled" }, { "start": 3016.46, "end": 3017.94, "text": " by the batch size." }, { "start": 3017.94, "end": 3023.26, "text": " So there is again, this this kind of performance increase." }, { "start": 3023.26, "end": 3028.34, "text": " But you can already see here you have we had these nice construction where all the whole" }, { "start": 3028.34, "end": 3034.6600000000003, "text": " context constructs this table of vectors, and then the query aggregates it." }, { "start": 3034.6600000000003, "end": 3041.48, "text": " And here we construct a separate table for each element in the input." 
}, { "start": 3041.48, "end": 3046.8, "text": " And then the query, according to its position, aggregates that and it simply adds those two" }, { "start": 3046.8, "end": 3053.7000000000003, "text": " aggregations together, most of the performance comes from the bottom right here, which you" }, { "start": 3053.7, "end": 3061.02, "text": " can sort of see this as if you know if you have like y equals w x plus b, you can sort" }, { "start": 3061.02, "end": 3070.8599999999997, "text": " of see the w here as these tables right here, because they actually depend on what the x" }, { "start": 3070.8599999999997, "end": 3077.58, "text": " is, in this case, the position of the x and the b is just something that comes on top" }, { "start": 3077.58, "end": 3083.06, "text": " to every single position that that there is." }, { "start": 3083.06, "end": 3085.7799999999997, "text": " Okay, this is a giant mess." }, { "start": 3085.7799999999997, "end": 3087.2599999999998, "text": " But that's about how it works." }, { "start": 3087.2599999999998, "end": 3093.46, "text": " And I hope you didn't you didn't completely you didn't get completely lost in this." }, { "start": 3093.46, "end": 3101.14, "text": " So they have a whole bunch of extensions, as I said, so they have translation equivalence," }, { "start": 3101.14, "end": 3109.58, "text": " then because they build their positional encodings as relative encodings, which makes it very" }, { "start": 3109.58, "end": 3113.22, "text": " easy to then build this lambda convolution." }, { "start": 3113.22, "end": 3120.86, "text": " So you can actually implement this operation here as a convolutional operation to get this" }, { "start": 3120.86, "end": 3124.44, "text": " positional lambda." }, { "start": 3124.44, "end": 3130.66, "text": " And their whole point is kind of that if I do local attention, right, if I do local attention," }, { "start": 3130.66, "end": 3136.86, "text": " what I need to do is I kind of if I do local attention, then this thing only pays attention" }, { "start": 3136.86, "end": 3141.6200000000003, "text": " to these three, and this thing only pays attention to these three kind of like a convolution." }, { "start": 3141.6200000000003, "end": 3146.34, "text": " But because it's an attention for each of these things, I need to build my attention" }, { "start": 3146.34, "end": 3148.8, "text": " map, I need to build my attention map." }, { "start": 3148.8, "end": 3154.34, "text": " And that kind of if I want to batch this, if I want to do this at once, I need to sort" }, { "start": 3154.34, "end": 3161.6600000000003, "text": " of if this is my interaction matrix, it kind of looks like this, this downward descending" }, { "start": 3161.6600000000003, "end": 3165.78, "text": " stairs or something like this." }, { "start": 3165.78, "end": 3170.1000000000004, "text": " And that is not well supported in current frameworks." }, { "start": 3170.1000000000004, "end": 3173.46, "text": " And that makes it a lot like really slow." }, { "start": 3173.46, "end": 3180.7400000000002, "text": " They say, look, even though we use the same amount of let's say memory, as local attention" }, { "start": 3180.7400000000002, "end": 3189.76, "text": " or time, sorry time, we can implement it using these primitives, and they are much faster." }, { "start": 3189.76, "end": 3194.92, "text": " So they are they are going to outperform local attention in that sense." 
}, { "start": 3194.92, "end": 3200.54, "text": " They do compare here in terms of time and space to an attention layer." }, { "start": 3200.54, "end": 3206.78, "text": " Now, they split this into content interactions, which is that first pathway and position interactions" }, { "start": 3206.78, "end": 3213.78, "text": " like this here, this is absolutely irrelevant because it's smaller than the position interaction" }, { "start": 3213.78, "end": 3216.64, "text": " and the position interactions give the performance." }, { "start": 3216.64, "end": 3226.74, "text": " So you can see clearly that there is in space we have B times n times m, h is the number" }, { "start": 3226.74, "end": 3230.12, "text": " of heads, we don't care much about that right now." }, { "start": 3230.12, "end": 3234.3399999999997, "text": " So B times n times for the attention layer, which is the problem." }, { "start": 3234.3399999999997, "end": 3242.3199999999997, "text": " And here you see you have n times m here, but no B. And you have B times n, but no M." }, { "start": 3242.32, "end": 3249.54, "text": " So that is kind of the the gain right here, as long as you can keep the K small, right," }, { "start": 3249.54, "end": 3254.1800000000003, "text": " this intermediate sequence, which makes sense, right, this attention goes to this intermediate" }, { "start": 3254.1800000000003, "end": 3255.26, "text": " sequence." }, { "start": 3255.26, "end": 3259.26, "text": " So as long as you can keep that intermediate sequence small and fixed, you don't have a" }, { "start": 3259.26, "end": 3265.94, "text": " problem with this quadratic memory, at least you have a problem right here, but that's" }, { "start": 3265.94, "end": 3268.2200000000003, "text": " not modulated by the batch size." }, { "start": 3268.22, "end": 3275.4199999999996, "text": " In terms of time, it's still you can see there is a B times n times m, you still have that" }, { "start": 3275.4199999999996, "end": 3279.66, "text": " time complexity, because after all, you need to do these multiplications and contracts" }, { "start": 3279.66, "end": 3281.52, "text": " just the same." }, { "start": 3281.52, "end": 3285.06, "text": " So not much of a difference in terms of time." }, { "start": 3285.06, "end": 3291.7799999999997, "text": " The time argument is more like they can implement it using convolutional operators rather than" }, { "start": 3291.7799999999997, "end": 3296.72, "text": " the this kind of striding attention maps." }, { "start": 3296.72, "end": 3300.2999999999997, "text": " They also do this in multi query, multi like multi head and so on." }, { "start": 3300.2999999999997, "end": 3312.54, "text": " And you can see right here that it outperforms outperforms other systems, including like" }, { "start": 3312.54, "end": 3318.8199999999997, "text": " systems with self attention, especially in terms of if you see the memory, if you do" }, { "start": 3318.8199999999997, "end": 3322.5, "text": " global self attention, it uses a lot of memory." }, { "start": 3322.5, "end": 3327.78, "text": " In fact, like an out of memory error on their machine axial self attention, these are all" }, { "start": 3327.78, "end": 3334.54, "text": " kind of limits to self attention, local self attention, which comes closest to what they" }, { "start": 3334.54, "end": 3335.54, "text": " do." 
}, { "start": 3335.54, "end": 3341.74, "text": " But then what you suffer is a massive drop in performance, whereas their lambda layer" }, { "start": 3341.74, "end": 3344.8, "text": " right here." }, { "start": 3344.8, "end": 3347.58, "text": " It has a lot of performance." }, { "start": 3347.58, "end": 3350.5, "text": " And you can see the performance gain, right?" }, { "start": 3350.5, "end": 3353.62, "text": " This is k, I believe k is equal to 16." }, { "start": 3353.62, "end": 3358.9, "text": " In this example, if they go k to eight, and we know that the attention interaction in" }, { "start": 3358.9, "end": 3364.42, "text": " the lambda networks is not n by m, but actually m by k." }, { "start": 3364.42, "end": 3369.24, "text": " So if you have k, you can already see there is a massive jump in the number of examples" }, { "start": 3369.24, "end": 3373.86, "text": " you can throughput through the network." }, { "start": 3373.86, "end": 3382.58, "text": " Okay, so that kind of gives evidence to what we are what what my hypothesis is is going" }, { "start": 3382.58, "end": 3384.34, "text": " on right here." }, { "start": 3384.34, "end": 3390.34, "text": " Okay, lastly, I've already shown you this table as it outperforms kind of the efficient" }, { "start": 3390.34, "end": 3391.5, "text": " nets." }, { "start": 3391.5, "end": 3396.6200000000003, "text": " And this is a special version of lambda networks, the lambda res nets, where they take a res" }, { "start": 3396.6200000000003, "end": 3402.84, "text": " nets and they only they only replace a part of the resnet." }, { "start": 3402.84, "end": 3410.1400000000003, "text": " So if you look at the table down here, these are the different architectures where they" }, { "start": 3410.1400000000003, "end": 3415.08, "text": " could replace things in the resnet, for example, the resnet 50 right here." }, { "start": 3415.08, "end": 3417.7000000000003, "text": " So this is all convolutions." }, { "start": 3417.7000000000003, "end": 3424.26, "text": " This is kind of the baseline and you can see that it's like 7200 samples per second." }, { "start": 3424.26, "end": 3431.1000000000004, "text": " If you replace everything by a lambda layer, you're down to like 1160 examples per second." }, { "start": 3431.1, "end": 3437.54, "text": " Interestingly, if you replace the first layer by a lambda layer, you are also the performance" }, { "start": 3437.54, "end": 3440.3199999999997, "text": " drops enormously." }, { "start": 3440.3199999999997, "end": 3445.38, "text": " And that is because of course, the the sizes of the of the of the images get smaller and" }, { "start": 3445.38, "end": 3446.38, "text": " smaller." }, { "start": 3446.38, "end": 3450.54, "text": " So your your n gets smaller and smaller as you go up the layers." }, { "start": 3450.54, "end": 3457.2599999999998, "text": " As you can see right here, if you only replace the last layer by a lambda layer, then you" }, { "start": 3457.26, "end": 3465.26, "text": " can gain all back almost all of that performance and interestingly still outperform the complete" }, { "start": 3465.26, "end": 3469.5400000000004, "text": " convolutional layer." }, { "start": 3469.5400000000004, "end": 3477.0200000000004, "text": " And it also has less parameters, you can see the 25 instead of the 18." }, { "start": 3477.0200000000004, "end": 3480.6600000000003, "text": " Alright so that was my rant on this paper." }, { "start": 3480.6600000000003, "end": 3483.38, "text": " Again, I hope this wasn't too convoluted." 
}, { "start": 3483.38, "end": 3485.86, "text": " There's a lot more to this paper." }, { "start": 3485.86, "end": 3496.3, "text": " I want to kind of quickly shout out LucidRains and made a made a I got to show you." }, { "start": 3496.3, "end": 3498.3, "text": " This is hilarious." }, { "start": 3498.3, "end": 3503.1800000000003, "text": " He implemented this so." }, { "start": 3503.1800000000003, "end": 3511.98, "text": " Yes, thank you." }, { "start": 3511.98, "end": 3514.1800000000003, "text": " Implemented this as the paper came out." }, { "start": 3514.18, "end": 3522.18, "text": " And of course, well, we don't know if Phil Wang is the author of this paper." }, { "start": 3522.18, "end": 3525.7, "text": " We don't know maybe maybe not." }, { "start": 3525.7, "end": 3530.94, "text": " Chances are not but still cool that he goes ahead and implements these things." }, { "start": 3530.94, "end": 3536.66, "text": " I especially I love the conciseness using the INOPs right here." }, { "start": 3536.66, "end": 3540.2999999999997, "text": " So there are as you can see, like this is it." }, { "start": 3540.2999999999997, "end": 3541.3399999999997, "text": " That's it." }, { "start": 3541.3399999999997, "end": 3542.8999999999996, "text": " That's all." }, { "start": 3542.9, "end": 3548.6600000000003, "text": " The use of INOPs right here to like do this rearrange and INSOM operations, which are" }, { "start": 3548.6600000000003, "end": 3555.02, "text": " much more concise than the reshape, squeeze, unsqueeze whatnot." }, { "start": 3555.02, "end": 3556.58, "text": " So that's pretty cool." }, { "start": 3556.58, "end": 3561.98, "text": " And the coolest thing is lambda actual Greek letters in the code." }, { "start": 3561.98, "end": 3563.78, "text": " Thank you, Python." }, { "start": 3563.78, "end": 3567.42, "text": " So yeah, I invite you to check out this implementation." }, { "start": 3567.42, "end": 3569.26, "text": " I'll of course link it." }, { "start": 3569.26, "end": 3572.14, "text": " Tell me what you think of the paper and I'll see you next time." }, { "start": 3572.14, "end": 3572.3799999999997, "text": " Bye bye." } ]
gJR28onlqzs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
How much memory does Longformer use?
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "google", "attention mechanism", "attention", "transformer", "tensor2tensor", "rnn", "recurrent", "seq2seq" ]
A calculation of the memory requirements of the Longformer. Original video: https://youtu.be/_8KNb5iqblE Paper: https://arxiv.org/abs/2004.05150 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
So I wanted to come back to this paper here about the longformer. I have done a video on this. If you haven't seen it then this video is probably not going to make much sense to you. But in the video I go over what the longformer is, what it does, how it compares and so on. And the gist of the longformer is that it can now do a transformer model on a long document as you can read here. So I've gotten a lot of questions of like does that mean we can now have much longer documents, right? The BERT model doesn't fit into my memory, can this solve my problem? And I just kind of want to go into the math of the longformer memory requirements here because I think I've alluded to it but it is quite a... I think the graphics here are just a bit misleading from the way they implement it. Now I've already gone over something like this in the last video. So Roberta, let's spell this correctly, Roberta, that is their baseline, has a size, let's call that N0, of 512. So they can have 512 tokens at the same time. So if you have a sequence that is way longer than 512 you need to chunk it up into pieces of 512 and usually you do something like overlapping pieces or something like this, right? And now the promise of the longformer as it is in the paper is that you can put all of this into the longformer, right? And it will do this sliding window attention thing where it basically slides a window here, this window, across this input sequence and only does this local attention, right, within the window. And then it also has some global attention that it constantly has. Now what I find interesting is that in their experiments their window size here, so the longformer window size, is 512, right? So within that window you have the classic N squared full attention, right? So let's just go into that. How much memory does the longformer really use? We've already calculated it here a bit but I want to take this apart a bit more. So as you can see on the left here you have N times W that you have for this middle band, right? So this middle band is N times W. Then you want to add the global attention, right? So the global attention, you can already see it right here, if you have one, two, three, four locations of global attention you have four times two because you also have them in this direction, right? You have them in both directions times your full sequence length. So plus two times full sequence length times the number of global attentions. I call this S over here. So as we saw up here the window size here was N zero in their experiments. So let's replace this window size by N zero and actually let's factor out the N. So we'll get N times (N zero plus 2S). Alright, so you can already see that Roberta originally had N zero squared. Now if N is larger than N zero that means you already use more here. The kind of trick, it's not really a trick, it is true that this is order of N, right, if N is your input sequence length, but this here is technically order of N squared if the window size itself is on the order of N. But the sequence length in Roberta was the window size of the longformer. So this is N zero squared, right? And here technically you'd have to say this is N times N zero. So if N is larger than N zero you can see that this uses more memory. So in their experiments they use a model that on paper uses more memory than the baseline model, and saying that it scales linearly with sequence length is, I mean, of course it scales linearly, because they can now input these long sequences, right?
And the attention, sorry, the memory requirement scales basically linearly with the sequence length and also linearly with the window size. Now the window size still needs to be apparently large-ish in order to achieve the performance. So the fact that the performance is equal or better is not really a secret because it uses more memory, right? It's not like this model uses less memory but outperforms the old one, it uses more. If you want to look at it you have to ask, okay I have Roberta and right now I can do N squared. So this is N, this is N, so there's N zero, N zero. This is my sequence length that I can put into Roberta. You have to ask yourself what kind of sequence do I want to put in? And if you say I want to put in a sequence that's twice as long, right? I want to put in this long of a sequence, so N here would be twice N zero. Then you have to take this, put it here, put it here and then you realize, yes, that your window size of the longformer can only be half, right? So if you have the same amount of memory you can double your sequence length at the cost of halving your window size, but that doesn't yet include the cost of the global attention. So any global attention you do will come basically on top of the window size. You see this here, right? So, let's do it like this: you decide on how long you want your input sequence length to be, and that means that's this rectangle here, then you decide how many global attentions do I want, and here I say I want one global attention, and you have to cross out as many rows here as you want global attention and what remains is your window. Actually you have to cross out twice as many, but we only have one row left here, but you get the point. You have to cross out two times S rows, where S is how many global attentions you want, and what remains will be your window size. In this case it's just a window size of one. So that's how you would construct a longformer that takes the same amount of memory as your classic model but can take a full sequence length of N. Alright? So I just wanted to kind of make that clear, go through the calculation myself and I hope that helped. Thanks for listening and if you liked this consider subscribing and liking, and bye bye.
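The back-of-the-envelope calculation above can be written out in a few lines of Python. This is a sketch with illustrative numbers only; it counts attention cells and ignores heads, constants and the feed-forward parts.

def attention_cells(n, w, s):
    # n: sequence length, w: window size, s: number of global-attention locations
    # sliding-window band plus the global rows and columns: n*w + 2*s*n
    return n * (w + 2 * s)

n0 = 512                                          # Roberta's sequence length, also the Longformer window in the paper
roberta = n0 * n0                                 # full n^2 attention: 262144 cells
longformer = attention_cells(n=4096, w=n0, s=0)   # 2097152 cells, i.e. more than the baseline

# Same memory budget as Roberta, but twice the sequence length:
# solve n * (w + 2*s) = n0**2 for the window size w.
n, s = 2 * n0, 1
w = n0 * n0 // n - 2 * s                          # = 254: roughly half the window, minus the global-attention cost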
[ { "start": 0, "end": 7.32, "text": " So I wanted to come back to this paper here about the longformer. I have done a" }, { "start": 7.32, "end": 11, "text": " video on this. If you haven't seen it then this video is probably not going to" }, { "start": 11, "end": 16.32, "text": " make much sense to you. But in the video I go over what the longformer is, what it" }, { "start": 16.32, "end": 21.52, "text": " does, how it compares and so on. And the gist of the longformer is that it can" }, { "start": 21.52, "end": 31.36, "text": " now do a transformer model on a long document as you can read here. So I've" }, { "start": 31.36, "end": 35.96, "text": " gotten a lot of questions of like does that mean we can now have much longer" }, { "start": 35.96, "end": 41.2, "text": " documents, right? The BERT model doesn't fit into my memory, can this solve my" }, { "start": 41.2, "end": 47.519999999999996, "text": " problem? And I just kind of want to go into the math of the longformer memory" }, { "start": 47.52, "end": 54.88, "text": " requirements here because I think I've alluded to it but it is quite a..." }, { "start": 54.88, "end": 62.400000000000006, "text": " I think the graphics here are just a bit misleading from the way they implement" }, { "start": 62.400000000000006, "end": 68.36, "text": " it. Now I've already gone over something like this in the last thing. So Roberta," }, { "start": 68.36, "end": 77.12, "text": " let's spell this correctly, Roberta, that is their baseline, has a size, a" }, { "start": 77.12, "end": 88.4, "text": " let's call that N0 of 512. So they can have 512 tokens at the same time. So if" }, { "start": 88.4, "end": 94.48, "text": " you have a sequence that is way longer than 512 you need to chunk it up into" }, { "start": 94.48, "end": 100.60000000000001, "text": " pieces of 512 and usually you do something like overlapping pieces or" }, { "start": 100.60000000000001, "end": 106, "text": " something like this, right? And now the promise of the longformer as it is in" }, { "start": 106, "end": 114.28, "text": " the paper is that you can put all of this into the longformer, right? And it" }, { "start": 114.28, "end": 121.84, "text": " will do this sliding window attention thing where it basically slides a window" }, { "start": 121.84, "end": 129.24, "text": " here, this window, across this input sequence and only does this local" }, { "start": 129.24, "end": 134.56, "text": " attention, right, within the window. And then it also has some global attention" }, { "start": 134.56, "end": 140.56, "text": " that it constantly has. Now what I find interesting is that in their experiments" }, { "start": 140.56, "end": 149.28, "text": " their window size here, so the longformer window size is 512, right? So" }, { "start": 149.28, "end": 158.84, "text": " within that window you have the classic N squared full attention, right? So let's" }, { "start": 158.84, "end": 166.92000000000002, "text": " just go into that. How much memory does the longformer really do? We've" }, { "start": 166.92000000000002, "end": 175.68, "text": " already calculated it here a bit but I want to take this still apart a bit. So" }, { "start": 175.68, "end": 185.92000000000002, "text": " as you can see on the left here you have N times W that you have for this middle" }, { "start": 185.92, "end": 194, "text": " band, right? So this middle band is N times W. Then you want to add the global" }, { "start": 194, "end": 200.04, "text": " attention, right? 
So the global attention, you can already see it right here, if you" }, { "start": 200.04, "end": 209.04, "text": " have one, two, three, four locations of global attention you have four times two" }, { "start": 209.04, "end": 214.16, "text": " because you also have them in this direction, right? You have them in both" }, { "start": 214.16, "end": 221.96, "text": " directions times your full sequence length. So plus two times full sequence" }, { "start": 221.96, "end": 231.07999999999998, "text": " length times the number of global attention. I call this S over here. So as" }, { "start": 231.07999999999998, "end": 242.07999999999998, "text": " we saw up here the window size here was N zero in their experiments. So let's" }, { "start": 242.08, "end": 251.24, "text": " replace this window size by N zero and actually let's factor out the N. So we'll" }, { "start": 251.24, "end": 268.04, "text": " get to N times N zero plus 2S. Alright, so you can already see that Roberta" }, { "start": 268.04, "end": 278.96000000000004, "text": " originally had N zero squared. Now if N is larger than N zero that means you" }, { "start": 278.96000000000004, "end": 287.84000000000003, "text": " already use more here. The kind of trick, it's not really a trick, it is" }, { "start": 287.84000000000003, "end": 297.44, "text": " true that this is order of N, right? If N is your input sequence length but in" }, { "start": 297.44, "end": 307.4, "text": " this here is technically order of N squared if N, if this is N. But the" }, { "start": 307.4, "end": 314.42, "text": " sequence length in Roberta was the window size of the long former. So this" }, { "start": 314.42, "end": 320.4, "text": " is N zero squared, right? And here technically you'd have to say this is N" }, { "start": 320.4, "end": 330.32, "text": " times N zero. So if N is larger than N zero you can see that this uses more" }, { "start": 330.32, "end": 338.03999999999996, "text": " memory given that. So in their experiments they use a model that on" }, { "start": 338.03999999999996, "end": 345.76, "text": " paper uses more memory than the baseline model and saying that it scales" }, { "start": 345.76, "end": 351.88, "text": " linearly with sequence length is because, I mean of course it scales linearly" }, { "start": 351.88, "end": 358.56, "text": " because they can now input these long sequences, right? And the attention, sorry" }, { "start": 358.56, "end": 364.15999999999997, "text": " the memory requirements scales basically linear and also linear with the window" }, { "start": 364.15999999999997, "end": 371.48, "text": " size. Now the window size still needs to be apparently large-ish in order to" }, { "start": 371.48, "end": 376.08000000000004, "text": " achieve the performance. So the fact that the performance is equal or better is" }, { "start": 376.08000000000004, "end": 386.32, "text": " not really a secret because it uses more memory, right? It's not like this model" }, { "start": 386.32, "end": 395.28000000000003, "text": " uses less memory but outperforms the old one, it uses more. If you want to look at" }, { "start": 395.28, "end": 407.52, "text": " it you have to ask, okay I have Roberta and right now I can do N squared. So this" }, { "start": 407.52, "end": 413.35999999999996, "text": " is N, this is N, so there's N zero, N zero. This is my sequence length that I can put" }, { "start": 413.35999999999996, "end": 419.35999999999996, "text": " into Roberta. 
You have to ask yourself what kind of sequence do I want to put" }, { "start": 419.36, "end": 429.44, "text": " in? And if you say I want to put in a sequence that's twice as long, right? I" }, { "start": 429.44, "end": 436.88, "text": " want to put in this long of a sequence, so N here would be twice N zero. Then you" }, { "start": 436.88, "end": 444.96000000000004, "text": " have to take this, put it here, put it here and then you realize, yes, that your" }, { "start": 444.96, "end": 451.32, "text": " window size of the long former can only be half, right? So if you have the same" }, { "start": 451.32, "end": 456.12, "text": " amount of memory you can double your sequence length at the cost of having" }, { "start": 456.12, "end": 462.91999999999996, "text": " your window size but that doesn't yet include the cost of the global" }, { "start": 462.91999999999996, "end": 468.56, "text": " attention. So any global attention you do will come basically on top of the window" }, { "start": 468.56, "end": 478.12, "text": " size. You see this here, right? So you decide on, let's do it like this, you" }, { "start": 478.12, "end": 484.04, "text": " decide on how long you want your thing, your input sequence length to be, then" }, { "start": 484.04, "end": 489.04, "text": " you decide, and that means that's this rectangle here, then you decide how many" }, { "start": 489.04, "end": 496.48, "text": " global attentions do I want and here I say I want one global attention and you" }, { "start": 496.48, "end": 502.16, "text": " have to cross out as many rows here as you want global attention and what" }, { "start": 502.16, "end": 507.84000000000003, "text": " remains is your window. Actually you have to cross out twice but we don't have, we" }, { "start": 507.84000000000003, "end": 513.52, "text": " only have one left, but you get the point. You have to cross out two times S rows" }, { "start": 513.52, "end": 520.12, "text": " of how many global attentions you want and what remains will be your window" }, { "start": 520.12, "end": 525.9200000000001, "text": " size. In this case it's just a window size of one. So that's how you would" }, { "start": 525.92, "end": 533.1999999999999, "text": " construct a longformer that takes in the same amount of memory as a your classic" }, { "start": 533.1999999999999, "end": 541.92, "text": " model but can take a full n sequence length. Alright? So I just wanted to kind" }, { "start": 541.92, "end": 549.8399999999999, "text": " of make that clear, go through the calculation myself and I hope that helped." }, { "start": 549.84, "end": 556.44, "text": " Thanks for listening and if you liked this consider subscribing, liking and" }, { "start": 556.44, "end": 580.36, "text": " bye bye." } ]
Xc9Rkbg6IZA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
SinGAN: Learning a Generative Model from a Single Natural Image
[ "Science & Technology" ]
[ "ml", "ai", "machine learning", "artificial ingelligence", "gan", "generative", "image processing", "deep learning", "image editing", "deep dream", "style transfer", "convolutional neural networks", "generative adversarial networks", "photoshop" ]
With just a single image as an input, this algorithm learns a generative model that matches the input image's patch distribution at multiple scales and resolutions. This enables sampling of extremely realistic looking variations on the original image and much more. Abstract: We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks. Authors: Tamar Rott Shaham, Tali Dekel, Tomer Michaeli https://arxiv.org/abs/1905.01164 https://github.com/tamarott/SinGAN Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at SinGAN, Learning a Generative Model from a Single Natural Image by Tamar Rott Shaham, Tali Dekel and Tomer Michaeli. So this paper, as it says, it's dealing with learning a generative model from just one image. And this kind of needs to be stressed because most generative models, even if they produce single image samples, they're kind of trained on a large image database beforehand to kind of learn what an image is. But this algorithm really starts out clean-slate, right? The algorithm starts out with nothing and then you give it this one single training image. And from that it can then generate all of these things, without ever having seen any other images during training. And the second row is simply a second example where you start clean-slate, input this image and then produce these. And you can see there's quite a bit of variety in the samples you produce from this image. So basically the task is, if you're just given one image, learn something about the distribution. And this paper specifically deals with patch distributions at different scales. So this could be: learn about the distribution of grass versus sky here, or learn about the individual birds and so on. And then at lower scales learn about how the border of this grass looks. So the generative model learns that there's always kind of grass at the bottom, since there's just this one image at the largest scale. But then at lower scales sometimes the border looks like a sharp corner and sometimes the border is relatively flat, like here. So it can vary up those things and it can make the border different. Also the birds, it kind of learns how the individual birds look and how they're distributed and therefore it can change that. You see there's quite a bit of variety here. You can also change the aspect ratio and you can actually do much more, much weirder things with it. For example, here are some examples of applications. First there is paint to image. So these are different tasks here. So the top row is always the training image. This is the single image you give the algorithm. And then you have a row of input and then this is what the algorithm outputs. So in paint to image you input a training image and you input a painting, you can do this in MS Paint or something, of kind of the way you want the image to look. So what you want the algorithm to do is take the style of this image and put it into the form of that image and it produces this. Looks pretty good. In editing you can tell the algorithm, alright I want this, I want this tower to go lower down, right? I want this house to be wider. So you'll get an image like this and you can see there are clear kind of contours here and here that are not nice and also the house is, you know, pixel-stretched and so on. So this algorithm, this generative algorithm, can produce this image from it which looks much better here around the borders and kind of fills in missing windows to match of course the patch statistics that it sees in this top image, right? You always have to think that all this algorithm sees is the topmost image to learn from. Harmonization is a task where you have an input image and then you like copy paste some object in it and what it does is it will kind of adjust the patch statistics of that object to the surrounding image. And super resolution, finally, finally we get what every single action movie, and the NSA, can do. It's like, ah here is the security camera footage. Zoom in, enhance.
Yeah, so I doubt that, you know, hidden number plates here, pixelated number plates, all of a sudden can become readable and identifiable, but still this is very cool. And lastly you can do animation from this, as you can guess, I guess. It's now a movie. All right, let's look at how they do all of this kind of stuff. All of this is the same model that can be tasked to do these different things through various probing. At its essence it's this multi-scale GAN, and the GAN is trained to have a series of generators and a series of discriminators and you always train them one by one. So first you train the lowest resolution and then you keep it fixed and then train the next resolution and so on until you're at the highest resolution. So in each layer, so at the bottom layer, we simply feed in noise to a generator of a GAN and the generator generates an image. Now you take this image and you take a downsampled version of your training image. Remember you just have one training image. You take a downsampled version of that and you let the discriminator decide which one is real, which one's fake, and you train the generator to fool the discriminator as much as possible. Now if you were to do this with the entire image, of course the generator would simply learn to reproduce the original image. So that's no good. So what this paper does instead is that the discriminator actually doesn't work on the entire image but just on patches of the image. And that's so that they basically can't memorize the entire image. So the discriminator will pick these patches, these overlapping patches basically. You can imagine it's something like these overlapping patches and it will try to decide for each one: is this patch real or is this patch fake? So the generator produces the entire image, but the discriminator can only see the image in patches, in overlapping patches. And that's what makes this paper kind of work. Otherwise they would just remember the single training image, because you only have one training image. You kind of need some variety. This is at the lowest scale. Remember you input the noise, and the lowest scale in this example is for example 25 by 25 pixels. You scale down your original image here also to 25 by 25 and then you let the discriminator decide. So once you've trained this generator to make very good 25 by 25 pixel images that, in this patch-wise way, fool the discriminator, you keep it fixed. For the next stage what you want to do is you always want to go through this layer first. So forget this discriminator now. We've trained this stage. Keep this generator fixed. Input noise; the output is whatever the generator produces. Then take this and upscale it. For example multiply each side by 2, to 50 by 50 pixels. Input this together with some new noise into the next stage generator. And then the same as before. This generator produces an image. You scale down your original image. You scale it down to now 50 by 50 pixels and you let the discriminator decide again in patches. Since the discriminator patches are always the same size but we scale down the image less and less, the effective patch size of the discriminator becomes much smaller. Now this discriminator only sees the image in patches like so. Also the generated image that comes in here, it also sees in these patches, and it tries to decide: are these patches from real or from fake images?
You can see that the lowest layer here, this layer, is trained to kind of get the coarse-grained structure of the image. The discriminator will kind of see very large patches. So the generator must match the kind of large-scale structure. These patches won't be very very high resolution because we downscaled the image, but they will be large across the image. So the generator must match the coarse low resolution stuff in the image. But as you go up the layers, up and up the layers, your discriminator sees less and less of the picture at once. So this discriminator here in the topmost layer can only concentrate on very small patches and therefore this generator will only have to produce things that look real at a very very small scale. So in essence you have this series of generators, trained such that each one is tasked with basically modeling details at a finer and finer scale, until you come to the final scale. And the input of each one is the output of the last one. So basically you take whatever the last one has produced, and the last one is really good at doing coarser-grained things, and you add to it your details of this level. And this will in the end give you a very realistic image that matches, at every level of resolution, the patch statistics of this real image. So that's the whole point of this thing. To have this series of generators one after the other, each one adds its own details at its own scale. And this works super well apparently. So each generator is just built like this. It takes some noise and the image of the lower scale, it adds them, sorry for these artifacts, it puts it through five convolutional layers and then simply combines it with the input. And this will produce this image at this scale. That's each layer, it's just five conv layers. And since they're fully convolutional you can actually change the aspect ratio at inference time, you can change the resolution and so on. It seems pretty neat. Of course from experience I can tell you that this probably didn't work at the first try and there is a lot of work in here even though it seems pretty easy. Keep that in mind. So for training this there are actually two different losses. First of all you have what's called the adversarial loss. And the adversarial loss is your classic GAN loss, where the generator tries to fool the discriminator and the discriminator tries to catch the generator. But then also you have a reconstruction loss. And the reconstruction loss specifically applies at each layer. At each layer you train the generator to reconstruct the original image when you put in a zero noise, except at the lowest layer. But essentially what you want to do is you want to say: well, when I don't input any noise, then please reconstruct the original image. And that reconstruction part seems to be important for the setup, so that the generative model is basically able to reconstruct the original image as a whole. So these two losses are combined to form the training objective. And again this is not trained on a dataset. It is trained on a single image. And the productions are pretty cool. So again here are more samples from just the single training image on the left side. And then you have random samples from the single image. You can do things like super resolution, where this picture has been super-resolved to that picture. And I like that they investigate the effects of kind of their setup.
So they ask okay what happens if we just have basically two different scales in this scaling setup. Then you see the kind of patch statistics will be very very fine-grained and it won't match any sort of coarse-grained structure. If you have very many scales, the more scales you have, the better, basically. The more different scales you capture. Even more interesting is: what if, so at this stack where we have G, G, G, you scale up, scale up, scale up and so on, what you could do is you could not start here, but you say okay, scrap this layer. What we actually do is we take the original image and we scale it down and we input that into here, instead of inputting the output from the lower layer. So basically you start at, let's say, the ground truth, and that effect is shown here. So if you start at the lowest layer in this particular example you see that sometimes there are weird things. But what you can do is start at, let's say, an intermediate layer with the original image, and then, because you kind of keep the coarse-grained structure the same, the variety you get will only be in the finer scales. Right, we said there are different layers, but you now eliminate these two layers and replace them with your original image at that scale. So the variety you get will only come from these finer-grained, smaller patch scales. So for example as you can see here the zebra samples now differ in how exactly their stripes are manifested. This seems pretty cool. So you have kind of a handle on how fine-grained you want your details or your changes to be. They do a bunch more experiments where you can do a lot of kind of playful things with this thing. There is code available, for example here you can see editing again as an example, where they compare also with content-aware move, which I think is implemented in Photoshop, and paint harmonization as we saw before. So all of these kind of things are very playful and very cool and I encourage you to check out this paper and the code, it seems pretty easy. I have a remark though: this again is only learned from a single image and that's the kind of cool part, but it should be possible to combine this with some sort of approach over a dataset. Like if I have a model that is really good at a single image, right, at producing something that looks like a single image, I should be able to combine it with a model that has been learned from a database. It's kind of like a Bayesian approach where you say: okay, I want to produce the best image, so I want to maximize the probability of this image given the other image. But then you can also say, aha, but that's kind of proportional to p of j given i times p of i, right, you know, Bayes' rule, and it seems that this paper is dealing mostly with kind of maximizing the likelihood of the output, while you could probably combine it with some sort of prior over natural images and come up with an even better model. Of course then you'd need an actual database of images and a training procedure and you need a way to combine these two models. So maybe that's a bit of a challenge. Anyway cool paper, check it out, bye bye.
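The per-scale generator described above (noise plus the upsampled coarser image, five conv layers, then a residual combination with the input) can be sketched in PyTorch roughly like this; kernel sizes, channel counts and the normalization are my guesses, the official repository has the exact details:

import torch.nn as nn

class ScaleGenerator(nn.Module):
    # One stage of the SinGAN pyramid.
    def __init__(self, channels=32):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.2),
            )
        # five convolutional layers in total
        self.body = nn.Sequential(
            block(3, channels), block(channels, channels),
            block(channels, channels), block(channels, channels),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, prev_upsampled, noise):
        # add the noise to the coarser-scale image, refine it, and put the
        # input back on top: each stage only adds details at its own scale
        return prev_upsampled + self.body(prev_upsampled + noise)

Because every stage is fully convolutional, nothing in this module fixes the spatial size, which is why the aspect ratio and resolution can be changed at inference time.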
[ { "start": 0, "end": 6, "text": " Hi there! Today we'll look at SINGAN, Learning a Generative Model from a Single" }, { "start": 6, "end": 13.96, "text": " Natural Image by Tamar Rott-Schaum, Tali Dekal and Tomer Mikhaili. So this paper," }, { "start": 13.96, "end": 19.04, "text": " as it says, it's dealing with learning a generative model from just one image. And" }, { "start": 19.04, "end": 22.92, "text": " this kind of needs to be stressed because most generative models, even if" }, { "start": 22.92, "end": 27.28, "text": " they produce single image samples, they're kind of trained on a large image" }, { "start": 27.28, "end": 32.68, "text": " database beforehand to kind of learn what an image is. But this" }, { "start": 32.68, "end": 38.2, "text": " algorithm really starts out clean-slate, right? The algorithm starts out with nothing" }, { "start": 38.2, "end": 44.040000000000006, "text": " and then you give it this one single training image. And from that it can then" }, { "start": 44.040000000000006, "end": 49.44, "text": " generate all of these things, without ever having seen any other images" }, { "start": 49.44, "end": 55.120000000000005, "text": " during training. And the second row is simply a second example where you start" }, { "start": 55.12, "end": 61.519999999999996, "text": " clean-slate, input this image and then produce these. And you can see there's" }, { "start": 61.519999999999996, "end": 65.16, "text": " quite a bit of variety in the samples you produce from this image. So basically" }, { "start": 65.16, "end": 71, "text": " the task is, if you're just given one image, learn something about the" }, { "start": 71, "end": 75.8, "text": " distribution. And this paper specifically deals with patch distributions at" }, { "start": 75.8, "end": 81.2, "text": " different scales. So this could be learn about the distribution of these" }, { "start": 81.2, "end": 90.60000000000001, "text": " grass to sky here. So learn about the individual birds and so on. And then at" }, { "start": 90.60000000000001, "end": 97.28, "text": " lower scales learn about how the border of this grass looks. So the" }, { "start": 97.28, "end": 102.24000000000001, "text": " generative model learns that there's always kind of grass at the" }, { "start": 102.24000000000001, "end": 107.36, "text": " bottom, where there's just one image at the largest scale. But then at lower" }, { "start": 107.36, "end": 114, "text": " scales sometimes the border looks like a sharp corner and sometimes the" }, { "start": 114, "end": 119.88, "text": " border is relatively flat, like here. So it can vary up those things and it can" }, { "start": 119.88, "end": 125.8, "text": " make the border different. Also the birds, it kind of learns how" }, { "start": 125.8, "end": 130.2, "text": " the individual birds look and how they're distributed and therefore it" }, { "start": 130.2, "end": 135.16, "text": " can change that. You see there's quite a bit of variety here. You can also change" }, { "start": 135.16, "end": 139.88, "text": " the aspect ratio and you can actually do much more, much weirder things with it." }, { "start": 139.88, "end": 146.12, "text": " For example, here are some examples of applications. First there is paint to" }, { "start": 146.12, "end": 151, "text": " image. So these are different tasks here. So the top row is always the training" }, { "start": 151, "end": 155.96, "text": " image. This is the single image you give the algorithm. 
And then you have a row of" }, { "start": 155.96, "end": 160.56, "text": " input and then this is what the algorithm outputs. So in paint to image" }, { "start": 160.56, "end": 167.08, "text": " you input a training image and you input a, you can do this in MS Paint or" }, { "start": 167.08, "end": 173.24, "text": " something, kind of the way you want the image to look. So what you want" }, { "start": 173.24, "end": 178.32, "text": " the algorithm to do is take the style of this" }, { "start": 178.32, "end": 184.48000000000002, "text": " image and put it into the form of that image and it produces this. Looks" }, { "start": 184.48, "end": 192.88, "text": " pretty good. In editing you can tell the algorithm, alright I want this, I want" }, { "start": 192.88, "end": 199.17999999999998, "text": " this tower to go lower down, right? I want this house to be more wide. So you'll get" }, { "start": 199.17999999999998, "end": 204.35999999999999, "text": " an image like this and you can see there are clear kind of contours here and here" }, { "start": 204.35999999999999, "end": 210.28, "text": " that are not nice and also the house is, you know, pixel stretched and so on. So" }, { "start": 210.28, "end": 216.16, "text": " this algorithm, this generative algorithm, can produce this image from it" }, { "start": 216.16, "end": 220.52, "text": " which looks much better here around the borders and kind of fills in missing" }, { "start": 220.52, "end": 227.28, "text": " windows to match of course the patch statistics that it sees in this top" }, { "start": 227.28, "end": 232.36, "text": " image, right? You always have to think that all this algorithm sees is the" }, { "start": 232.36, "end": 237.52, "text": " topmost image to learn from. Harmonization is a task where you have" }, { "start": 237.52, "end": 243.76000000000002, "text": " an input image and then you like copy paste some object in it and what it does" }, { "start": 243.76000000000002, "end": 248.4, "text": " is it will kind of adjust the patch statistics of that object to the" }, { "start": 248.4, "end": 255.48000000000002, "text": " surrounding image. And super resolution, finally, finally we get what every single" }, { "start": 255.48000000000002, "end": 262.24, "text": " action movie, just the NSA, can do. It's like, ah here is the security camera" }, { "start": 262.24, "end": 272.56, "text": " footage. Zoom in, enhance. Yeah, so I doubt that, you know, hidden" }, { "start": 272.56, "end": 276.12, "text": " number plates here, pixel-ish number plates, all of a sudden can become" }, { "start": 276.12, "end": 283.2, "text": " readable and identifiable but still this is very cool. And lastly you can do" }, { "start": 283.2, "end": 292.92, "text": " animation from this, as you can guess, I guess. It's not a movie." }, { "start": 292.92, "end": 297.48, "text": " All right, let's look at how they do all of this kind of stuff. All of this is the" }, { "start": 297.48, "end": 301.8, "text": " same model that can be tasked to do these different things through various" }, { "start": 301.8, "end": 309, "text": " probing. At its essence it's this multi-scale GAN and the GAN is trained" }, { "start": 309, "end": 314.76, "text": " to have a series of generators and a series of discriminators and you always" }, { "start": 314.76, "end": 320.32, "text": " train them one by one. 
So first you train the lowest resolution and then you keep" }, { "start": 320.32, "end": 323.84, "text": " it fixed and then train the next resolution and so on until you're at" }, { "start": 323.84, "end": 330.68, "text": " the highest resolution. So in each layer, so at the bottom layer, we simply feed in," }, { "start": 330.68, "end": 338.52, "text": " we simply feed in noise to a generator of a GAN and the generator generates" }, { "start": 338.52, "end": 345.47999999999996, "text": " an image. Now you take this image and you take a down sampled version of" }, { "start": 345.47999999999996, "end": 349.15999999999997, "text": " your training image. Remember you just have one training image. You take a" }, { "start": 349.15999999999997, "end": 355.47999999999996, "text": " down sampled version of that and you let the discriminator decide which one is" }, { "start": 355.47999999999996, "end": 359.64, "text": " real, which one's fake and you train the generator to fool the discriminator as" }, { "start": 359.64, "end": 363.64, "text": " much as possible. Now if you were to do this with the entire image, of course the" }, { "start": 363.64, "end": 369.12, "text": " generator would simply learn to reproduce the original image. So that's" }, { "start": 369.12, "end": 375.44, "text": " no good. So what this paper does more is that the discriminator" }, { "start": 375.44, "end": 380.8, "text": " actually doesn't work on the entire image but just on patches of the image." }, { "start": 380.8, "end": 388.8, "text": " And that's so that they basically can't memorize the" }, { "start": 388.8, "end": 396.36, "text": " entire image. So the discriminator will pick these patches, these overlapping" }, { "start": 396.36, "end": 400.5, "text": " patches basically. You can imagine it's something like this overlapping patches" }, { "start": 400.5, "end": 406.8, "text": " and it will try to decide for each one is this patch real or is this patch fake?" }, { "start": 406.8, "end": 412.76, "text": " So the generator produces the entire image. This is what the" }, { "start": 412.76, "end": 419.92, "text": " generator produces the entire image but the discriminator can only see the image" }, { "start": 419.92, "end": 426.4, "text": " in patches, in overlapping patches. And that's what makes this paper kind of" }, { "start": 426.4, "end": 432.64, "text": " work. Otherwise they would just remember the single training image" }, { "start": 432.64, "end": 437.88, "text": " because you only have one training image. You kind of need some variety." }, { "start": 437.88, "end": 445.24, "text": " This is at the lowest scale. Remember you input the noise and the lowest" }, { "start": 445.24, "end": 451.64, "text": " scale in this example is for example 25 by 25 pixel. You scale down" }, { "start": 451.64, "end": 456.44, "text": " your original image here also to 25 by 25 and then you let the discriminator" }, { "start": 456.44, "end": 461.92, "text": " decide. So once you've trained this generator to make very good" }, { "start": 461.92, "end": 469.64000000000004, "text": " 25 by 25 pixel images, that in this patch way fool the discriminator. You keep" }, { "start": 469.64000000000004, "end": 474.68, "text": " it fixed. For the next stage what you want to do is you always want to go" }, { "start": 474.68, "end": 480.8, "text": " through this layer first. So forget this discriminator now. We've trained" }, { "start": 480.8, "end": 487.28000000000003, "text": " this stage. Keep this generator fixed. 
Input noise, output, whatever the" }, { "start": 487.28, "end": 494.32, "text": " generator produces. Then take this upscale it. For example multiply each" }, { "start": 494.32, "end": 501.64, "text": " side by 2 to 50 by 50 pixels. Input this together with some new noise into the" }, { "start": 501.64, "end": 506.11999999999995, "text": " next stage generator. And then the same as before. This generator produces an" }, { "start": 506.11999999999995, "end": 512.4, "text": " image. You scale down your original image. You scale it down to now 50 by 50" }, { "start": 512.4, "end": 518.76, "text": " pixels and you let the discriminator decide again in patches. Since the" }, { "start": 518.76, "end": 523.0799999999999, "text": " discriminator patches are always the same size but we scale down the image" }, { "start": 523.0799999999999, "end": 527.72, "text": " less and less, the effective patch size of the discriminator becomes much lower." }, { "start": 527.72, "end": 537.36, "text": " Now this discriminator only sees the image in patches like so. Also the" }, { "start": 537.36, "end": 542.28, "text": " generated image that comes in here. It also sees in these" }, { "start": 542.28, "end": 549.88, "text": " patches and tries to decide are these patches from real or from fake images." }, { "start": 549.88, "end": 559.24, "text": " You can see that the lowest layer here, this layer, is trained to kind of get the" }, { "start": 559.24, "end": 566.9200000000001, "text": " coarse-grained structure of the image. The discriminator will" }, { "start": 566.92, "end": 573.5999999999999, "text": " kind of see very large patches. So the generator must match the kind of" }, { "start": 573.5999999999999, "end": 578.52, "text": " large-scale structure. These patches won't be very very high resolution" }, { "start": 578.52, "end": 582.8399999999999, "text": " because we downscaled the image, but they will be large across the image. So the" }, { "start": 582.8399999999999, "end": 589.7199999999999, "text": " generator must match the coarse low resolution stuff in the image. But as you" }, { "start": 589.72, "end": 597.6800000000001, "text": " go up the layers, up and up the layers, your discriminator sees less and less of" }, { "start": 597.6800000000001, "end": 604.1600000000001, "text": " the picture at once. It sees less and less of the picture at once." }, { "start": 604.1600000000001, "end": 610.44, "text": " So this discriminator here in the topmost layer can only concentrate on" }, { "start": 610.44, "end": 616.6, "text": " very small patches and therefore this generator will only have to produce" }, { "start": 616.6, "end": 625.44, "text": " things that look real at a very very small scale. So in essence you have" }, { "start": 625.44, "end": 631.6, "text": " this series of generators trained that each one is tasked with basically" }, { "start": 631.6, "end": 636.8000000000001, "text": " modeling details at a finer and finer scale until you come to the last final" }, { "start": 636.8000000000001, "end": 642.2, "text": " scale. But then each input of each one is the output of the last one. So" }, { "start": 642.2, "end": 646.52, "text": " basically you take whatever the last one has produced and the last one is really" }, { "start": 646.52, "end": 653.36, "text": " good at doing coarser grain things and you add to it your details of this level." 
}, { "start": 653.36, "end": 660.12, "text": " And this will in the end give you a very realistic image that matches at every" }, { "start": 660.12, "end": 666.4399999999999, "text": " level of resolution, matches the kind of statistics, the patch statistics of this" }, { "start": 666.4399999999999, "end": 674.3199999999999, "text": " real image. So that's the whole point of this thing. To have" }, { "start": 674.32, "end": 679.2, "text": " this series of generators one after the other, each one adds their own details" }, { "start": 679.2, "end": 685.5600000000001, "text": " at its own scale. And this works super well apparently. So each generator is" }, { "start": 685.5600000000001, "end": 690.96, "text": " just built like this. It takes some noise and the image of the lower" }, { "start": 690.96, "end": 696.7600000000001, "text": " scale, it adds them, sorry for these artifacts, it puts it through five" }, { "start": 696.7600000000001, "end": 704.2, "text": " convolutional layers and then simply combines it with the input. And this" }, { "start": 704.2, "end": 711.1600000000001, "text": " will produce this image at this scale. That's each layer, it's just five" }, { "start": 711.1600000000001, "end": 716.2, "text": " conv layers. And since they're fully convolutional you can actually change" }, { "start": 716.2, "end": 723.2800000000001, "text": " the aspect ratio at inference time, you can change the resolution and so on." }, { "start": 723.2800000000001, "end": 731.2, "text": " It seems pretty neat. Of course from experience I can tell you that this" }, { "start": 731.2, "end": 736.84, "text": " probably didn't work at the first try and there is a lot of work even though" }, { "start": 736.84, "end": 742.32, "text": " it seems pretty easy. Keep that in mind. So for training this there are" }, { "start": 742.32, "end": 746.76, "text": " actually two different losses. First of all you have what's called the" }, { "start": 746.76, "end": 753.12, "text": " adversarial loss. And the adversarial loss is your classic GAN loss, where" }, { "start": 753.12, "end": 756.84, "text": " the generator tries to fool the discriminator and the" }, { "start": 756.84, "end": 760.72, "text": " discriminator tries to catch the generator. But then also you have a" }, { "start": 760.72, "end": 765.76, "text": " reconstruction loss. And the reconstruction loss specifically deals" }, { "start": 765.76, "end": 775.6, "text": " at each layer. At each layer you train the generator to reconstruct the" }, { "start": 775.6, "end": 781.1600000000001, "text": " original image when you put in a zero noise, except at the lowest layer. But" }, { "start": 781.1600000000001, "end": 786.64, "text": " essentially what you want to do is you want to say well when I don't input" }, { "start": 786.64, "end": 792.48, "text": " any noise then please reconstruct the original image. And that seems to be" }, { "start": 792.48, "end": 797.76, "text": " important for the setup to include this noise so that the" }, { "start": 797.76, "end": 804.36, "text": " generative model is basically able to reconstruct the original image as a whole." }, { "start": 804.36, "end": 809.4399999999999, "text": " So these two losses are combined to form the training objective. And" }, { "start": 809.4399999999999, "end": 815.84, "text": " again this is not trained on data set. It is trained on a single image." }, { "start": 815.84, "end": 824.32, "text": " And the productions are pretty cool. 
So again here are more samples from just" }, { "start": 824.32, "end": 828.48, "text": " the single training images at the left side. And then you have random samples" }, { "start": 828.48, "end": 833.0600000000001, "text": " from the single image. You can do things like super resolution, where this picture" }, { "start": 833.0600000000001, "end": 840.7800000000001, "text": " has been super resoluted to that picture. And I like that they investigate the" }, { "start": 840.7800000000001, "end": 845.72, "text": " effects of kind of their setup. So they ask okay what happens if we just have" }, { "start": 845.72, "end": 851.9200000000001, "text": " basically two different scales in this scaling setup. Then you see" }, { "start": 851.9200000000001, "end": 859.24, "text": " the kind of patch statistics will be very very fine-grained and it won't match" }, { "start": 859.24, "end": 865.32, "text": " any sort of coarse-grained structure. If you have very many scales, the" }, { "start": 865.32, "end": 872.52, "text": " more scales you have better basically. The more different scales you capture." }, { "start": 872.52, "end": 881.56, "text": " Even more interesting is what if, so at this layer where we have G, G, G," }, { "start": 881.56, "end": 886.52, "text": " you scale up, scale up, scale up and so on. What you could do is you could not" }, { "start": 886.52, "end": 892, "text": " start here, but you say okay scrap this layer. What we actually do is we" }, { "start": 892, "end": 896.92, "text": " take the original image and we scale it down and we input that into here instead" }, { "start": 896.92, "end": 901.12, "text": " of inputting the output from the lower layer. So basically you start at let's" }, { "start": 901.12, "end": 908.84, "text": " say the ground truth and that effect is shown here. So if you" }, { "start": 908.84, "end": 916.84, "text": " start at the lowest layer in this particular example you see that" }, { "start": 916.84, "end": 923.12, "text": " sometimes there are weird things. But what you can do is start at a let's say" }, { "start": 923.12, "end": 928.52, "text": " an intermediate layer with the original image and then the variety you get" }, { "start": 928.52, "end": 932.8, "text": " because you kind of keep the coarse-grained structure the same. The" }, { "start": 932.8, "end": 936.6, "text": " variety you get will only be in the right we said there are different" }, { "start": 936.6, "end": 941.52, "text": " layers and but you now eliminate these two layers and replace them with your" }, { "start": 941.52, "end": 945.68, "text": " original image at the scale. So the variety you get will only be from these" }, { "start": 945.68, "end": 951.72, "text": " finer grained lower resolution patches things. So for example as you can see" }, { "start": 951.72, "end": 958.76, "text": " here the zebra samples now differ in how exactly their stripes are manifested." }, { "start": 958.76, "end": 965.76, "text": " This seems pretty cool. So you have kind of a handle on how fine" }, { "start": 965.76, "end": 971.48, "text": " grained you want your details or your changes to be. They do a bunch of" }, { "start": 971.48, "end": 978.36, "text": " more experiments where you can do a lot of kind of playful things with this" }, { "start": 978.36, "end": 984.8000000000001, "text": " thing. 
There is code available for example here you can see editing again" }, { "start": 984.8000000000001, "end": 990.88, "text": " as an example where they compare also with content aware move which I think is" }, { "start": 990.88, "end": 999.76, "text": " implemented in Photoshop and paint harmonization as we saw before. So all of" }, { "start": 999.76, "end": 1003.88, "text": " these kind of things are very playful are very cool and I encourage you to" }, { "start": 1003.88, "end": 1008.6, "text": " check out this paper and the code it seems pretty easy. I have a remark though" }, { "start": 1008.6, "end": 1013.24, "text": " this again is only learned from a single image and that's the kind of" }, { "start": 1013.24, "end": 1020.24, "text": " cool part but it should be possible to combine this with some sort of approach" }, { "start": 1020.24, "end": 1028.42, "text": " over a data set. Like if I have a model that is really good at a single" }, { "start": 1028.42, "end": 1032.56, "text": " image right producing something that looks like a single image I should be" }, { "start": 1032.56, "end": 1039.24, "text": " able to combine it with a model that has been learned from a database." }, { "start": 1039.24, "end": 1043.72, "text": " It's kind of like a Bayesian approach where you say okay I want to produce" }, { "start": 1043.72, "end": 1052.6799999999998, "text": " the best image so I want to maximize the probability of this image given the" }, { "start": 1052.6799999999998, "end": 1060.32, "text": " other image. But then you can also say aha but that's kind of" }, { "start": 1060.32, "end": 1069.6399999999999, "text": " proportional to j given i times p of i right you know Bayes rule and it seems" }, { "start": 1069.6399999999999, "end": 1075.2, "text": " that this paper is dealing mostly with kind of maximizing the likelihood of the" }, { "start": 1075.2, "end": 1080.36, "text": " output while you could probably combine it with some sort of prior over natural" }, { "start": 1080.36, "end": 1086.32, "text": " images and come up with an even better model. Of course then you'd need an" }, { "start": 1086.32, "end": 1092.1599999999999, "text": " actual database of images and training procedure and you need a way to combine" }, { "start": 1092.1599999999999, "end": 1096.76, "text": " these two models. So maybe that's a bit of a challenge. Anyway cool paper check" }, { "start": 1096.76, "end": 1116.92, "text": " it out bye bye." } ]
2lkUNDZld-4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "cnn", "resnet", "simclr", "simclr2", "simclrv2", "simclr v2", "v2", "hinton", "geoff", "brain", "wide", "deep", "convolutional", "convolutions", "self-supervised", "contrastive", "moco", "momentum", "projection", "semi-supervised", "unsupervised", "distillation", "teacher", "student" ]
This paper proposes SimCLRv2 and shows that semi-supervised learning benefits a lot from self-supervised pre-training. And stunningly, that effect gets larger the fewer labels are available and the more parameters the model has. OUTLINE: 0:00 - Intro & Overview 1:40 - Semi-Supervised Learning 3:50 - Pre-Training via Self-Supervision 5:45 - Contrastive Loss 10:50 - Retaining Projection Heads 13:10 - Supervised Fine-Tuning 13:45 - Unsupervised Distillation & Self-Training 18:45 - Architecture Recap 22:25 - Experiments 34:15 - Broader Impact Paper: https://arxiv.org/abs/2006.10029 Code: https://github.com/google-research/simclr Abstract: One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to most previous approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of a big (deep and wide) network during pretraining and fine-tuning. We find that, the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2 (a modification of SimCLR), supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge. This procedure achieves 73.9\% ImageNet top-1 accuracy with just 1\% of the labels (≤13 labeled images per class) using ResNet-50, a 10× improvement in label efficiency over the previous state-of-the-art. With 10\% of labels, ResNet-50 trained with our method achieves 77.5\% top-1 accuracy, outperforming standard supervised training with all of the labels. Authors: Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, Geoffrey Hinton Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we'll look at "Big Self-Supervised Models are Strong Semi-Supervised Learners" by Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi and Geoffrey Hinton of Google Brain. On a high level, this paper, also known as SimCLRv2, demonstrates that if you want to do semi-supervised learning, you're very well served by starting out with self-supervised learning and then doing fine tuning, much like NLP models do, rather than the kind of semi-supervised approaches that image tasks had so far. They present SimCLRv2, which is an improvement over the SimCLR approach to self-supervised pre-training, and they demonstrate that it outperforms a lot of the baselines. Alright, so if you like content like this, don't forget to share it out, leave a like and tell me what you think in the comments. So this paper is sort of a club-together of different things. They present this new method, SimCLRv2, which is a modification of SimCLR, and we'll go over that, but they also try to make a scientific claim, namely that somehow bigger models are better for this pathway of learning, and we'll try to untangle all of these things. So first of all, we're in the semi-supervised learning regime right here. This basically means that you have a data set, and you only have labels for a part of that data set. So this could be like here, the bottom 10% or so, because labels might be expensive to get. So you only have a few of them, but you have much more data that's unlabeled. Now sometimes this problem is formulated as: this here is your data set, and then this here is a different data set, but one that's close enough such that you can learn from it. That's usually the case in NLP. Your data set is, say, a sentiment classification task, but you have all of Wikipedia, which is not labeled, it's just text, so you can sort of pre-train on it. In this case, we'll be in a situation where we artificially construct a small data set. So this entire thing here is going to be the ImageNet data set, and this right here is going to be our labeled portion, where we have labels. Now usually one has labels for all of ImageNet, but we artificially restrict ourselves to simulate a situation where we have lots of data and only a fixed budget, because to obtain labels, oftentimes you have to ask humans to label images. Let's say we are a company and we've collected this big data set, but we only have maybe 500 bucks on Amazon Mechanical Turk, and we only managed to get 1% of our data set labeled. Now we're in the regime of semi-supervised learning.
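Just to make that setup concrete, here is a minimal sketch of how you could simulate such a label budget; the function name and the exact 1% fraction are illustrative choices of mine, not from the paper:

```python
import random

def simulate_label_budget(num_examples, labeled_fraction=0.01, seed=0):
    # Pretend we could only afford labels for a tiny fraction of the data:
    # shuffle all example indices and keep labels only for the first few.
    rng = random.Random(seed)
    indices = list(range(num_examples))
    rng.shuffle(indices)
    cut = max(1, int(labeled_fraction * num_examples))
    return indices[:cut], indices[cut:]  # labeled part, unlabeled part

labeled_idx, unlabeled_idx = simulate_label_budget(1_281_167)  # ImageNet-1k train size
```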
This is slightly different from what NLP does. As I said, in NLP you usually assume you have different data sets, the large one coming from a different distribution, whereas in the semi-supervised regime you often assume that it is actually the same data distribution, but you only have labels for some of the examples. But there should be a fair bit of overlap between the two settings. I've recently made a video about OpenAI's ImageGPT that goes into the same direction as this work, which basically says pre-training on unlabeled data, like this whole data set without the labels, can be a very good preconditioner for fine tuning later. And this paper says the same thing. So basically, in the good old days, what you would do is devise a method that takes in a mini batch. And in the mini batch you'd have your data samples, and some of them would be labeled, right here you'd have a Y and here you'd have a Y, but most of them would not be labeled. And you'd have some sort of loss function that puts special weight on the ones that are labeled, or somehow handles the unlabeled ones, for example with some sort of consistency loss such that if they are very near neighbors of the labeled ones in feature space, they should have similar labels, or things like this. So these semi-supervised methods basically try to solve the problem at once, taking in labeled and unlabeled data together. This paper goes into a different direction. It's actually three stages right here, and they have a diagram, so I don't need to draw. They have a three-stage approach. The one on the left is unsupervised pre-training. So they say, let's forget about the labels for now, even on the data where we have them, and let's just do unsupervised pre-training. Unsupervised pre-training in this kind of setting is also known as self-supervised pre-training. This first stage is done using a contrastive loss, very similar to the SimCLR contrastive loss. And they describe it very well here: given a randomly sampled mini batch of images, each image is augmented twice using random crop, color distortion and Gaussian blur, creating two views of the same example. Okay, so you have an image in your mini batch. You take each image and make two versions of it, and each version you random crop somewhere. So version one could be random cropped here, version two could be random cropped here. Then you put some Gaussian blur on it and so on, a little bit of, as you can see, random crop, color distortion, Gaussian blur. So what you want is two different versions of the same image. Each of these versions has been augmented in a different way, cropped in a different way, blurred in a different way. It's two slightly different versions of the same image. And now you want to put these through your network. So ultimately, as you can see on the right side here, what you want to end up with is a network. And, okay, we'll forget about this part right now. What you want to train is this network right here, actually including these projection layers, and we'll get to those later. So you take your unlabeled data, you take an image, you make two versions of it, and you put those through the network until the end right here. So you'll get Z1 and Z2, the outputs of the network for the two views. Then you take another image that's not this image, also put it through the network, maybe also augmented first, and you get Z3. So now you have the outputs of two things that are supposed to come from the same image and one thing that's supposed to come from a different image. And now your loss is simply going to be: make those first two things close together and push the other ones apart. So the loss, and this is the contrastive loss of self-supervised learning, doesn't need any labels. You simply say the things that come from the same image should be close together.
And the things that come from different images should be far apart. This relies heavily on these data augmentations that you do right here. They also employ some other tricks, like the momentum encoder from MoCo, from momentum contrast, and so on, but this is the main part; you can pull a lot of strings here to get like another percent of performance. Ultimately, they want Zi and Zj, which are the outputs for the two views of the same image, to be similar, and then this down here, Zi with Zk, where K goes over all the other images, they want to be dissimilar. And you can do this in a mini-batch fashion. So this is self-supervised learning, and the reason why you do it is that you don't need labels, and we know it tends to give very, very good representations. So what this network here will learn will be very good, for some reason. We still don't exactly know why combining augmentation with a self-supervised loss, with a contrastive loss for example, gives such good performance. There have been papers recently that modify the loss and so on, but it's not super well understood yet. But if you do it like this, the network will already give you very, very good representations. And we know this because we can take a network like this and simply train a linear classifier on top of it on a data set and achieve very, very good performance. And mind you, you have trained it with unlabeled data, right? The network has never been trained to solve ImageNet classification. It has simply been trained to look at the pictures and determine if two versions of a picture come from the same picture or from different pictures. And now, if you simply train a linear classifier on top of these representations, you're doing extremely well already. So we know these representations actually learn something about these images. So that's the first part.
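To make that objective concrete, here is a minimal PyTorch sketch of the standard normalized temperature-scaled cross entropy (NT-Xent) contrastive loss that this description corresponds to; it's my own simplification, not the paper's exact implementation (which adds further tricks like the momentum mechanism mentioned above):

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    # z1[i] and z2[i] are the embeddings of two augmented views of image i.
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d) unit vectors
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a view is not its own positive
    # the positive for row i is the other augmented view of the same image
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)       # dummy batch of 8 embeddings
print(nt_xent_loss(z1, z2))
```

Minimizing this pulls the two views of each image together and pushes everything else in the batch apart, which is exactly the behavior described above.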
Then stage two, let's cancel all that. Stage two is supervised fine tuning. Now you already see that the arrow here is not coming out of this task-agnostic big CNN; the arrow is actually coming out of those yellow boxes, and the yellow boxes are these projection heads. So in the original SimCLR paper, what they wanted was to train this network right here. This is a ResNet-50; it's pretty standard in these kinds of self-supervised or few-label approaches to train a standardized network like a ResNet-50. So in the original SimCLR paper, they said: we want to make the ResNet-50 as strong as possible, but in order to compute this loss right here, we are going to attach this projection head, just because the dimensionality here is, I think, 2048, and we want to do this inner product in a lower dimension, like maybe 256 or so. So these are just multi-layer perceptrons, just fully connected layers that compress the representation down. And once we're done with the unsupervised pre-training, we're going to throw those away, right? This ResNet is the thing we really care about. Now here they claim, okay, it actually works better, and they have experiments to show this, if you actually leave one of these layers in. So in the end, I guess they converge on three projection head layers, and then they only throw away the top two. And they make this big deal out of the fact that, you know, I can just call this part right here the encoder now. And I don't exactly see the giant deal here. You just made your network one layer bigger, and now you consider that to be your encoder, and the projection head is now two layers. That would be much easier than calling the projection head three layers but saying we leave one layer and train from the middle layer. In any case, they have this additional layer right here compared to the old SimCLR, and the representation of that goes into supervised fine tuning. Now this is pretty easy; it's exactly what it sounds like. You now use only the part of the data set that has labels, and you do the fine tuning, which is simply supervised learning: you train this network in a supervised fashion on that small fraction of data that has class labels. That already performs pretty well, and they show this in experiments. But then you can go a step further and do what's known as distillation, or self-training. And what is distillation or self-training? Distillation is when you have a network that you call the teacher network, and that network has been trained to do some classification, maybe into three classes, pretty well. Okay. But now this is very large and you maybe want a smaller model, like this tiny model, because you want to ship it on a mobile device, right? But it's supposed to do the same task. And you know that if you just directly train this smaller one, which is called the student model, it doesn't perform as well as the teacher model. There is a better way: if you have the teacher model, you can sort of transfer the knowledge to the student model, you can distill the knowledge. And how do you do that? Well, what would you do in supervised training? In supervised training, you would take an image, put it in, take the label that comes along with the image, put it up here, and compare the output to the label; that gives you the loss function. Right? So you do that right here. If you distill, you put the image into both models. Now the teacher is already trained, so its output will be a distribution over classes, not a single label. It will be like, okay, 90% class one, 10% class two, 0% class three, something like this. And now you take this entire distribution as a kind of pseudo-label, you put it here, and you compare the output of the student to that of the teacher; that's your loss function. So the teacher might have learned to put some nuance into the classification, to say, well, I'm pretty sure this is class one, but I'm not 100% sure, and it can transfer that knowledge to the student. And that makes the student better than if you had just trained it from the beginning with just the labels. Right? So this is distillation, and you can even do what they call self-distillation here, or self-training. Apparently this even helps if the student model is the same as the teacher model. Now why does it help in this case? I think it is not exactly the same model in this paper, because they always say their teacher model has this extra projection layer, and the student model doesn't have that, even when they do self-training. But why does it help? I mean, it's kind of shocking, and I'm pretty sure it helps in any case, but in this particular case it helps because now you're using the unlabeled data again.
So you have a teacher model, and the teacher model is trained first using unsupervised training, this is the teacher model right here. Then the teacher model is further fine tuned on the small labeled data. Right? So it is now already pretty good at the task, but how can you get a student model that's even better than the teacher model? It's by using this unlabeled data again. You have this giant amount of data, so what you'll do is take an image from the unlabeled data and ask the teacher model: teacher model, what do you think about that image? And the teacher model will give you a prediction, let's say again this 90%, 10%, 0%, and then you take the student model, input that image, and compare its output to what the teacher said. You freeze the teacher model, right? The teacher model is only trained until here; you take it from there. The student model is now able to absorb basically everything that the teacher model knows, not only about this data, but about all the data. It kind of gets to ask the teacher model: what do you think about this? What do you think about this? What do you think about this? And it can incorporate all that knowledge about all of this unlabeled data. That's why the student model here in the end, if it's the same size, will probably end up even better than the teacher model. So distillation, I think, is also still kind of a mystery, namely why you get a better model. I mean, if you make the student a lot smaller, usually you don't end up with a better model, but you end up with a pretty good model that you couldn't have gotten by just training the small model directly, and that's already pretty cool. But why you get a better model when they're the same size, I don't think that's well understood yet. So that's the three-stage approach. To recap: first, use all of the data without labels to do unsupervised or self-supervised contrastive pre-training. Second, use only the data that has labels to do fine tuning. Third, either distill the learned classifier to a smaller model, or distill it to a model of the same size; in both cases, you again use all of the unlabeled data. That's the three-step approach, that's SimCLRv2 in its full form. So they go into fine tuning right here. They say again that they elaborate with a three-layer projection head, so that's the three-layer projection head, where this here is the output of the ResNet-50, sigma is a ReLU non-linearity, and we ignore the bias term for brevity, and so on. And they contrast this: for fine tuning, SimCLR uses this right here, which is basically just a classifier on top of the output of the ResNet-50; that is fine tuning from the input layer of the projection head. To fine tune from the first layer of the projection head, we have a new encoder function, which is the ResNet followed by fully connected layers. And you see, they take the ResNet-50 output, ship it through the first projection layer, and then there is a task-specific classifier. Now again, I don't even see why they make this ginormous deal out of it, especially since the last layer of the ResNet-50, I'm not entirely sure, but are they taking the log? No, they're probably not taking the log. Okay. But it's just weird. Like, is there even a non-linearity at the end right here? Or is this really just two matrix multiplications in a row? I'm going to guess there's a big chance that that's the case, that the last layer of this encoder is actually not even followed by a non-linearity, and therefore you just kind of change the dimension. And I don't see why you can't just incorporate this into the model, instead of saying over and over again that this is a new special thing, right? Again, this is equivalent to tuning from a middle layer of the projection head instead of the output layer. Okay, you just make your model a bit bigger.
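For illustration, here is a hypothetical PyTorch sketch of that architecture; the class and method names are mine, the dimensions are assumptions, and the real SimCLRv2 head also has details like batch norm that I'm omitting:

```python
import torch.nn as nn

class SimCLRv2Sketch(nn.Module):
    def __init__(self, encoder, feat_dim=2048, proj_dim=128, num_classes=1000):
        super().__init__()
        self.encoder = encoder  # e.g. a ResNet-50 trunk with its own head removed
        # three-layer projection head, split so the first layer can be kept
        self.proj_first = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.proj_rest = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                       nn.Linear(feat_dim, proj_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)  # task-specific head

    def pretrain_forward(self, x):
        # stage 1: full projection head; the output feeds the contrastive loss
        return self.proj_rest(self.proj_first(self.encoder(x)))

    def finetune_forward(self, x):
        # stage 2: throw away the top two layers, fine tune from the first one
        return self.classifier(self.proj_first(self.encoder(x)))
```

So "fine tuning from the first layer of the projection head" really just means the effective encoder is one fully connected layer deeper than the bare ResNet.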
Yeah. So the third step is self-training or knowledge distillation, and they give two variants right here. This variant, as you can see, is just the cross entropy, but instead of having the true labels Y right here, you have what the teacher model thinks Y is, given X. Okay, so that's cross entropy, not with the true labels, but with the output of the teacher model. And you can even mix that: as you can see right here, you can mix this with an actual supervised loss, so this would be the supervised loss. Yeah, I guess I was wrong there, I guess P of Y is always one in that case. But they don't use this particular mixed variant, I think, except in one of the ablations.
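Sketched in code, the distillation step might look like the following; again, this is a minimal illustration with my own naming, and the temperature and loop structure are assumptions rather than the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, tau=1.0):
    # cross entropy against the frozen teacher's (softened) class distribution
    teacher_probs = F.softmax(teacher_logits / tau, dim=1)
    log_student = F.log_softmax(student_logits / tau, dim=1)
    return -(teacher_probs * log_student).sum(dim=1).mean()

student_logits = torch.randn(4, 1000)   # dummy logits for 4 images, 1000 classes
teacher_logits = torch.randn(4, 1000)
print(distillation_loss(student_logits, teacher_logits))

# Training-loop sketch: the teacher is frozen, the student learns from the
# teacher's predictions on *unlabeled* images.
# for images in unlabeled_loader:
#     with torch.no_grad():
#         teacher_logits = teacher(images)
#     loss = distillation_loss(student(images), teacher_logits)
#     # mixed variant (used only in an ablation): on labeled batches, add
#     # alpha * F.cross_entropy(student(images), labels)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```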
So how does this work? It works pretty well. In one of their experiments, as you see up here, if you have only 1% of the ImageNet labels, which they say is at most 13 labeled images per class (there are a thousand classes, so you have 13 labels per class or less), and the encoder you train is a ResNet-50, then, with the dashed line here being the supervised baseline, you almost get to the supervised baseline with 1% of the labels. And if you have a larger ResNet, then you reach the supervised performance with only 1% of the labels, i.e. without the other 99%. And if you have 10% of the labels, you pass the supervised baseline. The supervised baseline is trained on 100% of the labels, mind you, and you only have 10% and this outperforms it. Now of course, you could have another graphic where you show what happens if you do the whole procedure with 100% of the labels: first the self-supervision without labels, then fine tuning on 100% of the data, and then this distillation again; you would of course be even better. I think they have this somewhere in a table, but this is already pretty impressive. And another claim they make right here is about the model sizes. This figure now relates to the title: they say bigger models yield larger gains when fine tuning with fewer labeled examples. There are three comparative words in one sentence, so let's unpack this. Bigger models yield larger gains, so the bigger the model the better, let's say, when fine tuning with fewer labeled examples. Let's just look at the graph, it's really clear. Here we have the number of parameters going across, so these are the different models they look at and how many parameters they have, and here is the relative improvement in percent in top-1 ImageNet accuracy. So if you do this whole thing with 100% of the labels, I'm going to guess this here is where they start out, and you can see that as you grow your models, you grow the performance. And this is just by increasing the model size, right? You have the same data set, the same amount of labels, the same number of training steps, and so on; just by making your model bigger, you gain in performance. Okay, now you can see that these curves are above one another, and these curves refer to having fewer and fewer labels. So if you only have 10% of the labels, your relative gains are larger. That doesn't mean that you perform better with 10% of the labels than with 100% of the labels; that would be ridiculous. Well, I guess in this day and age nothing is ridiculous, but for now we're still performing better by having more labels if we do the same procedure, right? It's not like up here, where the supervised baseline only does supervised training, which is why we can outperform it with fewer labels; here we do the same procedure. This is relative improvement, right? So this right here, the starting point, would be if you had 10% of the labels and a 25 million parameter model, and this right here, for example, is if you have the same amount of labels but a 200 million parameter model. What the graph says is that the relative improvement is higher the more parameters you have, which is the further you go to the right, and that effect in itself is stronger the fewer labels you have, which are the different curves. And you can see that right here: if you have fewer and fewer labels, it becomes more and more important that you have bigger models. And that's really counterintuitive, right? Because you would expect that bigger models can overfit much more easily to the fewer labels. But that doesn't seem to be the case. So this self-supervision really seems to be sort of a counter to this notion of overfitting. If you have larger and larger models, that's what they argue in the paper, you might be able to learn more and more features that might be useful for classification. So with a larger model you're going to learn more kinds of features, and then you'll outperform because there's a higher chance that these features are going to be useful for classification. But I don't think they really make a statement as to why that happens more when you have fewer labels. So let's think about this. If I have very, very few labels, why does it help me even more if I have a big model? Well, with the same argumentation we could say, and maybe they actually say this already, so I might be copying them involuntarily: let's say we have all the labels; that's probably too many, right? If we can learn a task with some accuracy, we probably had too many labels. It's like, if we can't learn a task, we know we have too few; somewhere there is a border where we have just enough, but that's like one single number, and everything above it is too many, technically speaking, learning-theoretically speaking. So usually we have too many labels. And what does that mean? It probably means that there are multiple ways, multiple different features we can pick up to learn, multiple different paths to reach our goal. So if we have ImageNet, and there's this weird task to recognize a three, and we get lots and lots and lots of examples of threes, right?
We can decide on a feature. We can say: oh, all the threes that I see have this bow down here, or all the threes that I see have this bend here, and so on. But if I only have very few labels, there might only be a single feature that is even theoretically possible to learn from the labels I'm given. And therefore, if I have a bigger model in self-supervised pre-training, because the pre-training happens with the same amount of data, right, a bigger model is going to learn more features. And then there's a higher chance that the one feature that these very few labels allow me to learn something from is among those features. So that's kind of how I make sense of it, in combination with what they're saying right here. Okay, so those were the main points. They do a lot of empirical studies showing the effects of these sizes. They stress that it's important to have both deep and wide networks, and they also add this additional attention mechanism over the convolution filters; I don't want to go into that particularly. They also do linear evaluation compared to supervised learning, compared to fine tuning with 100% of the labels. So they do a very thorough empirical investigation, and yeah, I do appreciate that, and they kind of show the same things. Here they show the number of layers in the projection head: as you increase the number of layers in the projection head and train from the optimal layer in the middle, your performance goes up, as you can see. But this effect is also stronger when you have fewer labels, right? You can see the differences here are greater than the differences here, or even here, when you have 100% of the labels. So the fewer the labels, the more benefit you get from the architecture right here. And here they show that it's not always optimal to train from the last projection layer, but from the first one here. So I guess they converge on three projection layers, and you always want to keep the first one around after self-supervised training, as we mentioned before. They investigate different distillation losses and show that it is actually important that you do the distillation loss on both labeled and unlabeled sets. You can see here: if you only train with the labels after fine tuning, you get poor performance. If you do the label and distillation loss, but only on the data set where you have labels, you get more performance. If you do the label and distillation loss, but also include your unlabeled data, you get even more performance. And then if you do that but drop the label loss (before, we've seen you can mix the distillation loss with the label loss if you have labels), you drop in performance again. And you can see right here, the drop in performance is proportional to how many labeled examples you have, and that's natural, right? If you have the labels, you can actually mix that information into the distillation loss, and that will make you better. Here they drop 0.1%, and here they drop less than 1%, by leaving away the label loss. But their point basically is that it is more important to distill using the unlabeled data as well than it is to include the label loss in the distillation, and it's much easier to not include the label loss, so they don't do it, I guess. All right, so I think that was it.
They compare, as I said, self-distillation, where you distill into an equally sized model, and distillation downwards, where you distill into a smaller model, or maybe that's vice versa. And they do a lot of comparisons to other methods. So this is a very thorough work, I feel, and if you want more about the exact experiments, I invite you to look at the paper. Let's just have a final look at the broader impact statement right here. Remember, the broader impact statement is supposed to force you to think about how society might be impacted at large by your work. It says: the findings described in this paper can potentially be harnessed to improve accuracy in any application of computer vision where it is more expensive or difficult to label additional data than to train larger models. Such applications are clearly beneficial to society. For example, in medical applications, where acquiring high-quality labels requires careful annotation by clinicians, better semi-supervised learning approaches can potentially help save lives. Application of computer vision to agriculture can increase crop yields, which may help to improve the availability of food. However, we also recognize that our approach can become a potential component of harmful surveillance systems. Moreover, there is an entire industry built around human labeling services, and technology that reduces the need for these services could lead to a short-term loss of income for some of those currently employed or contracted to provide labels. So ask yourself how much of that statement has to do with the actual novelty of this paper? And the answer is of course zero, right? You can replace "our method" in this thing with "machine learning" or "computer vision" in general. Like, really, SimCLRv2 specifically can increase crop yields? That specific invention of this paper will lead to higher crop yields, will lead to surveillance systems? So, yeah, you know, I'm not going to get too upset about these; I think it's quite funny. But again, I wonder whether the people advocating for these things are happy with these statements, because clearly this is just a template that you copy-paste from paper to paper, replacing a few words. And if it's computer vision, you write "oh no, deepfakes", and if it's NLP, it's "oh no, fake news". And yeah, I wonder whether anything in particular has really changed; I wonder whether these people are happy now. And if they are, I wonder whether it's really for the reason they claim, that oh, now we have a statement here of how it impacts society, because I could have told you what the broader impact statement was going to be before I even read the title of the paper. In any case, rant too long. Check out the paper, share it out, leave a like, comment if you disagree or agree, and yeah, bye bye.
[ { "start": 0, "end": 6.72, "text": " Hi there, today we'll look at big self-supervised models are strong semi-supervised learners" }, { "start": 6.72, "end": 12.96, "text": " by Ting Chen, Simon Kornblith, Kevin Swirsky, Mohamed Nourouzi and Jeffrey Hinton of Google" }, { "start": 12.96, "end": 14.32, "text": " Brain." }, { "start": 14.32, "end": 21.42, "text": " So this paper on a high level, it's also known as Sinclair v2, demonstrates that if you want" }, { "start": 21.42, "end": 28.18, "text": " to do semi-supervised learning, that you're very well served by starting out with self-supervised" }, { "start": 28.18, "end": 34.92, "text": " learning, and then doing fine tuning much like NLP models do, rather than the kind of" }, { "start": 34.92, "end": 40.36, "text": " semi-supervised approach that image tasks had so far." }, { "start": 40.36, "end": 45.34, "text": " And they present this Sinclair v2, which is an improvement over the Sinclair approach" }, { "start": 45.34, "end": 51.96, "text": " to self-supervised pre-training, and they demonstrate it outperforms a lot of the baselines." }, { "start": 51.96, "end": 58.32, "text": " Alright, so if you like content like this, don't forget to share it out, and leave a" }, { "start": 58.32, "end": 62.08, "text": " like and tell me what you think in the comments." }, { "start": 62.08, "end": 70.44, "text": " So this paper, it sort of is kind of a club together thing of different things." }, { "start": 70.44, "end": 77.26, "text": " So they present this new method, like this Sinclair v2, which is a modification of Sinclair," }, { "start": 77.26, "end": 86.92, "text": " and we'll go over that, but they also try to make like a scientific claim, namely that" }, { "start": 86.92, "end": 93.80000000000001, "text": " somehow bigger models are better for this pathway of learning, and we'll try to untangle" }, { "start": 93.80000000000001, "end": 95.64, "text": " all of these things." }, { "start": 95.64, "end": 101.56, "text": " So first of all, we're in the semi-supervised learning regime right here." }, { "start": 101.56, "end": 108.16, "text": " This basically means that you have a data set, and you only have labels for a part of" }, { "start": 108.16, "end": 109.48, "text": " that data set." }, { "start": 109.48, "end": 115.4, "text": " So this could be like here, the bottom 10% or so, because labels might be expensive to" }, { "start": 115.4, "end": 116.4, "text": " get." }, { "start": 116.4, "end": 121.76, "text": " And so you only have a few of them, but you have much more data that's unlabeled." }, { "start": 121.76, "end": 127.76, "text": " Now sometimes this problem is formulated as this here is your data set, and then this" }, { "start": 127.76, "end": 132.56, "text": " here is like a different data set, but one that's close enough such that you can learn" }, { "start": 132.56, "end": 133.56, "text": " from it." }, { "start": 133.56, "end": 135.52, "text": " And that's usually in NLP." }, { "start": 135.52, "end": 141.44, "text": " You'll have your data set is like a sentiment classification task, but you have all of Wikipedia" }, { "start": 141.44, "end": 143.44, "text": " that is not labeled, but it's just text." }, { "start": 143.44, "end": 146.48000000000002, "text": " So you can sort of pre-train on it." }, { "start": 146.48000000000002, "end": 152.72, "text": " In this case, we'll be in a situation where we'll artificially construct a small data" }, { "start": 152.72, "end": 153.72, "text": " set." 
}, { "start": 153.72, "end": 159.84, "text": " So this entire thing here is going to be the ImageNet data set, and this right here is" }, { "start": 159.84, "end": 163.88, "text": " going to be our labeled portion, like we have labels." }, { "start": 163.88, "end": 170.36, "text": " Now usually one has labels for ImageNet as well, but we artificially restrict ourselves" }, { "start": 170.36, "end": 177.04, "text": " to simulate a situation where we have lots of data and we only have a fixed budget." }, { "start": 177.04, "end": 182.2, "text": " So we can only, because to obtain labels, oftentimes you have to ask humans to label" }, { "start": 182.2, "end": 183.2, "text": " images." }, { "start": 183.2, "end": 190.48, "text": " And let's say we are a company and we've collected this big data set, but we only have like maybe" }, { "start": 190.48, "end": 196.72, "text": " 500 bucks on Amazon Mechanical Turk, and we only managed to get a very like 1% of our" }, { "start": 196.72, "end": 198.44, "text": " data set labeled." }, { "start": 198.44, "end": 204.56, "text": " Now we're in the regime of semi-supervised learning." }, { "start": 204.56, "end": 208.2, "text": " This is slightly different from what NLP does." }, { "start": 208.2, "end": 212.44, "text": " As I said, in NLP, usually assume you have different data sets, the large one being a" }, { "start": 212.44, "end": 218.2, "text": " different distribution, and in the semi-supervised regime, you often assume that it is actually" }, { "start": 218.2, "end": 221.84, "text": " the same data distribution, but you only have labels for some of them." }, { "start": 221.84, "end": 226.56, "text": " But there should be a fair bit of overlap between the two things." }, { "start": 226.56, "end": 235.28, "text": " So I've recently made a video about OpenAI's ImageGPT that kind of goes into the same direction" }, { "start": 235.28, "end": 241.12, "text": " as this work right here that basically says pre-training on unlabeled data, like this" }, { "start": 241.12, "end": 248.68, "text": " whole data set without the labels, can be a very good preconditioner for fine tuning" }, { "start": 248.68, "end": 249.68, "text": " later." }, { "start": 249.68, "end": 251.44, "text": " And this paper says the same thing." }, { "start": 251.44, "end": 258.4, "text": " So basically, in the good old days, what you would do is you would devise a method that" }, { "start": 258.4, "end": 265.72, "text": " somehow takes, you know, takes in a device, a method that takes in a mini batch." }, { "start": 265.72, "end": 271.44000000000005, "text": " And in the mini batch, you'd have your data samples, and then some of them would be labeled," }, { "start": 271.44000000000005, "end": 276.56, "text": " right here, you'd have a Y and here you'd have a Y, but most of them would be not labeled." }, { "start": 276.56, "end": 281.92, "text": " And you'd have like some sort of loss function that would put special weight on the ones" }, { "start": 281.92, "end": 287.32000000000005, "text": " that are labeled or somehow handle these ones that are unlabeled in a way, you might be" }, { "start": 287.32000000000005, "end": 293.8, "text": " doing like some sort of a consistency loss such that if they are very nearest near neighbors" }, { "start": 293.8, "end": 298.76, "text": " to these in the feature space, they should have similar labels or things like this." }, { "start": 298.76, "end": 305.16, "text": " So these semi supervised methods, they basically try to solve the problem at once." 
}, { "start": 305.16, "end": 309.96000000000004, "text": " But while taking data that is labeled and not labeled, this paper goes into a different" }, { "start": 309.96000000000004, "end": 310.96000000000004, "text": " direction." }, { "start": 310.96000000000004, "end": 317.08000000000004, "text": " This paper says, first, we should, it's actually three stages right here, and they have a diagram," }, { "start": 317.08000000000004, "end": 319.32, "text": " so I don't need to draw." }, { "start": 319.32, "end": 322, "text": " They have a three stage approach." }, { "start": 322, "end": 323.12, "text": " Three stages." }, { "start": 323.12, "end": 327.08, "text": " The one on the left is unsupervised pre training." }, { "start": 327.08, "end": 333, "text": " So they say, let's forget about the labels right now, even like your unlabeled data." }, { "start": 333, "end": 337.62, "text": " So even the data where we have the labels, let's forget about the labels." }, { "start": 337.62, "end": 340.96, "text": " And let's just do unsupervised pre training." }, { "start": 340.96, "end": 346.28000000000003, "text": " Now unsupervised pre training in this kind of setting is also known as self supervised" }, { "start": 346.28000000000003, "end": 347.56, "text": " pre training." }, { "start": 347.56, "end": 355.84, "text": " And this first stage is done using a contrastive loss, and that's very similar to sim clear" }, { "start": 355.84, "end": 356.88, "text": " to this contrastive loss." }, { "start": 356.88, "end": 361.12, "text": " So what you'll do, and they describe it very, very well here." }, { "start": 361.12, "end": 367, "text": " So what you'll do is given a randomly sampled mini batch of images, each image is augmented" }, { "start": 367, "end": 373.04, "text": " twice using random crop color distortion and Gaussian blur, creating two views of the same" }, { "start": 373.04, "end": 374.04, "text": " example." }, { "start": 374.04, "end": 377.04, "text": " Okay, so you have an image in your mini batch." }, { "start": 377.04, "end": 380.56, "text": " Each image you take and you make two versions of it." }, { "start": 380.56, "end": 383.70000000000005, "text": " And each version you crop, you random crop somewhere." }, { "start": 383.70000000000005, "end": 385.84000000000003, "text": " So version one could be random cropped here." }, { "start": 385.84000000000003, "end": 388.78000000000003, "text": " Version two could be random cropped here." }, { "start": 388.78000000000003, "end": 392.56, "text": " And then you put some Gaussian blur on it and so on." }, { "start": 392.56, "end": 398.24, "text": " So a little bit of, as you can see, random crop color distortion, Gaussian blur." }, { "start": 398.24, "end": 402.84000000000003, "text": " So what you'll want is two different versions of the same image." }, { "start": 402.84, "end": 408.03999999999996, "text": " Each of these versions has been augmented in a different way, cropped in a different" }, { "start": 408.03999999999996, "end": 410.96, "text": " way, blurred in a different way." }, { "start": 410.96, "end": 414.44, "text": " It's two slightly different versions of the same image." }, { "start": 414.44, "end": 421.53999999999996, "text": " And now you want to enforce, you want to put this through your network." }, { "start": 421.53999999999996, "end": 428.59999999999997, "text": " So ultimately, as you can see on the right side here, what you want to end up is a network." 
}, { "start": 428.59999999999997, "end": 432.35999999999996, "text": " And then, okay, we'll forget about this right now." }, { "start": 432.36, "end": 437.44, "text": " What you want to train is this network right here, actually including these projection" }, { "start": 437.44, "end": 438.44, "text": " layers." }, { "start": 438.44, "end": 439.44, "text": " We'll get to them later." }, { "start": 439.44, "end": 441.28000000000003, "text": " This is the network that you want to train." }, { "start": 441.28000000000003, "end": 446.24, "text": " So you want to put, you take your unlabeled data, you take an image, you make two versions" }, { "start": 446.24, "end": 448.12, "text": " of it." }, { "start": 448.12, "end": 453.96000000000004, "text": " And you put those through the network, right, until the end right here." }, { "start": 453.96000000000004, "end": 457.24, "text": " So you'll get Z1, Z2." }, { "start": 457.24, "end": 461.92, "text": " These are the outputs of the network for the two images." }, { "start": 461.92, "end": 467.28000000000003, "text": " And then what you want to do is you want to take another image that's not this image," }, { "start": 467.28000000000003, "end": 471.04, "text": " and also put it through the network, maybe also augmented first." }, { "start": 471.04, "end": 473.36, "text": " And then you have Z3." }, { "start": 473.36, "end": 478.56, "text": " So now you have the outputs of two things that are supposed to come from the same image" }, { "start": 478.56, "end": 481.44, "text": " and one thing that's supposed to come from a different image." }, { "start": 481.44, "end": 489.16, "text": " And now your loss is simply going to be make those two things close together and push those" }, { "start": 489.16, "end": 493.56, "text": " two things apart, or those three actually." }, { "start": 493.56, "end": 499.76000000000005, "text": " So the loss, and this is the contrastive loss of self supervised learning." }, { "start": 499.76000000000005, "end": 502.72, "text": " As you know, you don't need any labels right here." }, { "start": 502.72, "end": 506.40000000000003, "text": " You simply say the things that come from the same image should be close together." }, { "start": 506.40000000000003, "end": 510.12, "text": " And the things that come from different images should be far apart." }, { "start": 510.12, "end": 516.4, "text": " And this relies heavily on these data augmentations that you do right here." }, { "start": 516.4, "end": 521.4399999999999, "text": " They also employ some other tricks like the momentum encoder from MoCo, from momentum" }, { "start": 521.4399999999999, "end": 523.4399999999999, "text": " contrast and so on." }, { "start": 523.4399999999999, "end": 525.64, "text": " But this is the main part." }, { "start": 525.64, "end": 531.84, "text": " So you can pull a lot of strings here to get like another percent of performance." }, { "start": 531.84, "end": 540.4, "text": " But ultimately, they won't see the similarity of ZI and ZJ, which are the outputs of the" }, { "start": 540.4, "end": 543.92, "text": " same image to be close together." }, { "start": 543.92, "end": 552.4399999999999, "text": " And then this down here, they want to be far apart, ZI with ZK, where K is all the other" }, { "start": 552.4399999999999, "end": 553.4399999999999, "text": " images." }, { "start": 553.4399999999999, "end": 554.4399999999999, "text": " Okay." }, { "start": 554.4399999999999, "end": 556.64, "text": " And you can do this in a mini batch fashion." 
}, { "start": 556.64, "end": 558.0799999999999, "text": " So this is self supervised learning." }, { "start": 558.0799999999999, "end": 561.88, "text": " And the reason why you do this is you don't need labels." }, { "start": 561.88, "end": 567.24, "text": " And it tends, we know it tends to give very, very good representations." }, { "start": 567.24, "end": 570.26, "text": " So I'm past that." }, { "start": 570.26, "end": 576.16, "text": " So what this network here will learn will be very good for some reason." }, { "start": 576.16, "end": 581.88, "text": " We still don't exactly know why combining augmentation with the self supervised loss" }, { "start": 581.88, "end": 587.52, "text": " with contrastive loss, for example, gives such good performance." }, { "start": 587.52, "end": 593, "text": " There have been papers recently that modify the loss and so on." }, { "start": 593, "end": 595, "text": " But it's not super well understood yet." }, { "start": 595, "end": 601.52, "text": " But if you do it like this, the network here will give you already very, very good representation." }, { "start": 601.52, "end": 607.54, "text": " And we know this because we can take a network like this and then simply train a linear classifier" }, { "start": 607.54, "end": 613.72, "text": " on top of that on a data set and achieve very, very good performance." }, { "start": 613.72, "end": 617.92, "text": " And mind you, you have trained it with unlabeled data, right?" }, { "start": 617.92, "end": 622.52, "text": " So the network has never been trained to solve like ImageNet classification." }, { "start": 622.52, "end": 627.6, "text": " It has simply been trained to look at the pictures and determine if two versions of" }, { "start": 627.6, "end": 630.4399999999999, "text": " a picture come from the same picture or from different pictures." }, { "start": 630.4399999999999, "end": 635.84, "text": " And now, if you simply train a linear classifier on top of these representations, you're doing" }, { "start": 635.84, "end": 637.64, "text": " extremely well already." }, { "start": 637.64, "end": 642.6999999999999, "text": " So we know these representations, they actually learn something about these images." }, { "start": 642.6999999999999, "end": 644.64, "text": " So that's the first part." }, { "start": 644.64, "end": 649.0799999999999, "text": " Then stage two, let's cancel all that." }, { "start": 649.08, "end": 653.5200000000001, "text": " Stage two is you want to do supervised fine tuning." }, { "start": 653.5200000000001, "end": 661.6800000000001, "text": " Now you already see that the arrow here coming out is not this task agnostic big CNN." }, { "start": 661.6800000000001, "end": 665.26, "text": " The arrow is actually coming out of those yellow boxes." }, { "start": 665.26, "end": 668.1600000000001, "text": " And the yellow boxes are these projection heads." }, { "start": 668.1600000000001, "end": 675.1, "text": " So in the original SimClear paper, what they did was they wanted originally, they wanted" }, { "start": 675.1, "end": 678.3000000000001, "text": " to train this network right here." }, { "start": 678.3, "end": 679.92, "text": " This is like a ResNet-50." }, { "start": 679.92, "end": 685.88, "text": " It's pretty standard in these kind of self-supervised approaches and so on to train or these few" }, { "start": 685.88, "end": 690.06, "text": " label approaches to train a standardized network." }, { "start": 690.06, "end": 692.28, "text": " And this is like a ResNet-50." 
}, { "start": 692.28, "end": 698.4399999999999, "text": " So in the original SimClear paper, they said we want to make ResNet-50 as strong as possible." }, { "start": 698.4399999999999, "end": 705.0799999999999, "text": " But in order to do this loss right here, we are going to attach this projection head just" }, { "start": 705.08, "end": 709.96, "text": " to because the dimensionality here I think is like 2048." }, { "start": 709.96, "end": 715.96, "text": " And we want to do this inner product in a lower dimension of like maybe 256 or so." }, { "start": 715.96, "end": 719.86, "text": " So these are just multi-layer perceptrons." }, { "start": 719.86, "end": 726.24, "text": " These are just fully connected layers that compress the representation down to that." }, { "start": 726.24, "end": 730.48, "text": " And once we're done with the unsupervised pre-training, we're going to throw those away," }, { "start": 730.48, "end": 731.48, "text": " right?" }, { "start": 731.48, "end": 734.8000000000001, "text": " And this ResNet is the thing that we really care about." }, { "start": 734.8, "end": 738.4799999999999, "text": " Now here they claim, OK, it actually works better." }, { "start": 738.4799999999999, "end": 744.88, "text": " And they have experiments to prove this or to show this if you use one, if you actually" }, { "start": 744.88, "end": 747.3399999999999, "text": " leave one of these layers here." }, { "start": 747.3399999999999, "end": 752.68, "text": " So in the end, I guess they converge on three projection head layers." }, { "start": 752.68, "end": 755.76, "text": " And then they only throw away the top two." }, { "start": 755.76, "end": 761.8399999999999, "text": " And like they make this big deal out of the fact where, you know, I can just call this" }, { "start": 761.84, "end": 765.52, "text": " part right here now the encoder." }, { "start": 765.52, "end": 771.72, "text": " And I don't so I don't know exactly like I don't see the giant deal here." }, { "start": 771.72, "end": 774.96, "text": " Like you just made your network one layer bigger." }, { "start": 774.96, "end": 778.32, "text": " And now you consider that to be your encoder." }, { "start": 778.32, "end": 780.76, "text": " And the projection head is now two layers." }, { "start": 780.76, "end": 784.52, "text": " And that will be much easier than calling the projection head three layers." }, { "start": 784.52, "end": 787.64, "text": " But we leave one layer and we train from the middle layer." }, { "start": 787.64, "end": 793.28, "text": " In any case, they have this layer, additional layer right here compared to the old Sinclair." }, { "start": 793.28, "end": 797.16, "text": " And then the representation of that goes into supervised fine tuning." }, { "start": 797.16, "end": 798.48, "text": " Now, this is pretty easy." }, { "start": 798.48, "end": 799.98, "text": " This is exactly what it sounds like." }, { "start": 799.98, "end": 805.22, "text": " So now you use only only the data set that has labels." }, { "start": 805.22, "end": 809.8, "text": " So the part of the data set that has labels, and you do the fine tuning and fine tuning" }, { "start": 809.8, "end": 811.92, "text": " is simply supervised learning." }, { "start": 811.92, "end": 817.6, "text": " You train this network in a supervised fashion on that small fraction of data that has cloud" }, { "start": 817.6, "end": 820.0400000000001, "text": " class labels." }, { "start": 820.0400000000001, "end": 822.28, "text": " And that already performs pretty well." 
}, { "start": 822.28, "end": 824.16, "text": " And they show this in experiments." }, { "start": 824.16, "end": 832.6800000000001, "text": " But then you can go a step further and do what's known as distillation or self training." }, { "start": 832.6800000000001, "end": 835.8000000000001, "text": " And what's distillation or self training?" }, { "start": 835.8000000000001, "end": 841.88, "text": " It's so distillation is when you have a network that you call the teacher network." }, { "start": 841.88, "end": 849.28, "text": " And that network has been trained to do some classification maybe into three classes pretty," }, { "start": 849.28, "end": 850.28, "text": " pretty well." }, { "start": 850.28, "end": 851.28, "text": " Okay." }, { "start": 851.28, "end": 855.32, "text": " But now this is very large and you want maybe a smaller model." }, { "start": 855.32, "end": 860.28, "text": " So you just want like this tiny model because you want to ship it on a mobile device, right?" }, { "start": 860.28, "end": 863.52, "text": " But it's also supposed to do this." }, { "start": 863.52, "end": 868.68, "text": " And you know that if you just directly train this, which is called the student model, it" }, { "start": 868.68, "end": 871.14, "text": " doesn't perform as well as the teacher model." }, { "start": 871.14, "end": 872.3199999999999, "text": " There is a better way." }, { "start": 872.3199999999999, "end": 877.68, "text": " If you have the teacher model, you can sort of transfer the knowledge to the student model." }, { "start": 877.68, "end": 879, "text": " You can distill the knowledge." }, { "start": 879, "end": 880.4399999999999, "text": " And how do you do that?" }, { "start": 880.4399999999999, "end": 884.4399999999999, "text": " You do that by, so what would you do in supervised training?" }, { "start": 884.4399999999999, "end": 890.04, "text": " In supervised training, you would take an image, put it in, and then put the label that" }, { "start": 890.04, "end": 891.68, "text": " comes along with the image." }, { "start": 891.68, "end": 896.84, "text": " You put it up here and you compare the output to the label and that gives you the loss function." }, { "start": 896.84, "end": 897.84, "text": " Right?" }, { "start": 897.84, "end": 901.4, "text": " So you do that right here." }, { "start": 901.4, "end": 904.9200000000001, "text": " If you distill, you put the image into both." }, { "start": 904.9200000000001, "end": 907.2, "text": " Now the teacher is already trained." }, { "start": 907.2, "end": 910.76, "text": " So its output will be a distribution over classes." }, { "start": 910.76, "end": 912.2800000000001, "text": " It won't be a single label." }, { "start": 912.2800000000001, "end": 918.6, "text": " It will be like, okay, 90% class one, 10% class two, 0% class three, something like" }, { "start": 918.6, "end": 919.6, "text": " this." }, { "start": 919.6, "end": 925.6800000000001, "text": " And now you take this as like a pseudo label, this entire distribution, and you put it here" }, { "start": 925.68, "end": 930.16, "text": " and you compare the output of the student to that of the teacher and that's your loss" }, { "start": 930.16, "end": 931.1999999999999, "text": " function." }, { "start": 931.1999999999999, "end": 936.56, "text": " So this kind of, the teacher might have learned to put some nuance into the classification" }, { "start": 936.56, "end": 941.78, "text": " to say, well, I'm pretty sure this is class one, but I'm not 100% sure." 
}, { "start": 941.78, "end": 945.0999999999999, "text": " And it can transfer that knowledge to the student." }, { "start": 945.0999999999999, "end": 951.64, "text": " And that makes the student better than had you just trained it from the beginning from," }, { "start": 951.64, "end": 953.0799999999999, "text": " with just the labels." }, { "start": 953.0799999999999, "end": 954.0799999999999, "text": " Right?" }, { "start": 954.08, "end": 960, "text": " So this is distillation and you can do this even what they call self distillation here" }, { "start": 960, "end": 961.64, "text": " or self training." }, { "start": 961.64, "end": 968.76, "text": " So apparently this even helps if the teacher is, if the student model is the same as the" }, { "start": 968.76, "end": 970.08, "text": " teacher model." }, { "start": 970.08, "end": 972, "text": " Now why does it help in this case?" }, { "start": 972, "end": 976.74, "text": " And I think it is not exactly the case in this case because they always say their teacher" }, { "start": 976.74, "end": 979.08, "text": " model has this extra projection layer." }, { "start": 979.08, "end": 980.08, "text": " Right?" }, { "start": 980.08, "end": 983.96, "text": " And then the student model doesn't have that even if they do self training." }, { "start": 983.96, "end": 985.9200000000001, "text": " But why does it help in this case?" }, { "start": 985.9200000000001, "end": 990.44, "text": " I mean, it's, it's kind of shocking and I'm pretty sure it helps in any case, but in this" }, { "start": 990.44, "end": 997.76, "text": " particular case it helps because now you're using the unlabeled data again." }, { "start": 997.76, "end": 1004.46, "text": " So you have a teacher model and the teacher model is trained first using unsupervised" }, { "start": 1004.46, "end": 1009.1600000000001, "text": " like this is the teacher model right here using unsupervised training." }, { "start": 1009.1600000000001, "end": 1013.24, "text": " Then the teacher model is further fine tuned on the small data." }, { "start": 1013.24, "end": 1014.24, "text": " Right?" }, { "start": 1014.24, "end": 1020.72, "text": " So it is now already pretty good at the task, but how can you get a student model that's" }, { "start": 1020.72, "end": 1022.88, "text": " even better than the teacher model?" }, { "start": 1022.88, "end": 1025.2, "text": " It's by using again this unlabeled data." }, { "start": 1025.2, "end": 1027.16, "text": " You have this giant amount of data." }, { "start": 1027.16, "end": 1031.88, "text": " So what you'll do is you take an image from the unlabeled data and you ask the teacher" }, { "start": 1031.88, "end": 1035.04, "text": " model, teacher model, what do you think about that image?" }, { "start": 1035.04, "end": 1036.04, "text": " Right?" }, { "start": 1036.04, "end": 1039.6, "text": " And the teacher model will give you a prediction." }, { "start": 1039.6, "end": 1045.8799999999999, "text": " Like let's say again, this 90%, 10%, 0% and then you take the student model, you input" }, { "start": 1045.8799999999999, "end": 1051.1, "text": " that image and you compare its output to what the teacher said." }, { "start": 1051.1, "end": 1054.1799999999998, "text": " So this combines the teacher model." }, { "start": 1054.1799999999998, "end": 1055.76, "text": " You freeze the teacher model, right?" }, { "start": 1055.76, "end": 1058.9599999999998, "text": " The teacher model is only trained until here." 
}, { "start": 1058.9599999999998, "end": 1060.6799999999998, "text": " You take it from here." }, { "start": 1060.6799999999998, "end": 1065.3, "text": " The student model is now able to take basically the teacher." }, { "start": 1065.3, "end": 1073.36, "text": " It takes everything that the teacher model knows, not only about this data, but about" }, { "start": 1073.36, "end": 1074.36, "text": " all the data." }, { "start": 1074.36, "end": 1077.68, "text": " So it kind of gets to ask the teacher model, what do you think about this?" }, { "start": 1077.68, "end": 1078.68, "text": " What do you think about this?" }, { "start": 1078.68, "end": 1079.76, "text": " What do you think about this?" }, { "start": 1079.76, "end": 1084.8799999999999, "text": " And it can incorporate all that knowledge about all of this unlabeled data." }, { "start": 1084.8799999999999, "end": 1091.6, "text": " And that's why the student model here in the end, if it's the same size, will probably" }, { "start": 1091.6, "end": 1094.96, "text": " end up even better than the teacher model." }, { "start": 1094.96, "end": 1100, "text": " So distillation, I think also is still kind of a mystery of why you get a better model" }, { "start": 1100, "end": 1106.44, "text": " or, I mean, to make it smaller, if you make it a lot smaller, usually you don't end up" }, { "start": 1106.44, "end": 1109.92, "text": " with a better model, but you end up with a pretty good model that you couldn't have gotten" }, { "start": 1109.92, "end": 1114.4, "text": " by just training the small model." }, { "start": 1114.4, "end": 1115.8400000000001, "text": " So that's already pretty cool." }, { "start": 1115.8400000000001, "end": 1123, "text": " But why you get a better model when they're the same size, I don't think that's well understood" }, { "start": 1123, "end": 1124.3600000000001, "text": " yet." }, { "start": 1124.36, "end": 1127.3799999999999, "text": " So that's the three stage approach." }, { "start": 1127.3799999999999, "end": 1133.9599999999998, "text": " So recap, first, use all of the data without labels to do unsupervised or self supervised" }, { "start": 1133.9599999999998, "end": 1135.9199999999998, "text": " contrastive pre-training." }, { "start": 1135.9199999999998, "end": 1141.4399999999998, "text": " Second, use only the data that has labels to do fine tuning." }, { "start": 1141.4399999999998, "end": 1150.76, "text": " Third, either distill the learned classifier to a smaller model or distill it to a model" }, { "start": 1150.76, "end": 1152.1599999999999, "text": " of the same size." }, { "start": 1152.16, "end": 1160.8400000000001, "text": " Then in both cases, you would again use the unlabeled, all of the unlabeled data." }, { "start": 1160.8400000000001, "end": 1162.3200000000002, "text": " And that's the three step approach." }, { "start": 1162.3200000000002, "end": 1168.72, "text": " That's SEMCLEAR v2 in all of its form." }, { "start": 1168.72, "end": 1172.76, "text": " So they go into fine tuning right here." }, { "start": 1172.76, "end": 1180.68, "text": " And yeah, so they say again, we elaborate with a three layer projection head." }, { "start": 1180.68, "end": 1182.48, "text": " So that's the three layer projection head." }, { "start": 1182.48, "end": 1190.04, "text": " This here is the output of ResNet-50, where Sigma is a ReLU non-linearity and we ignore" }, { "start": 1190.04, "end": 1193.2, "text": " the bias term for brevity, blah, blah, blah, blah, blah." 
}, { "start": 1193.2, "end": 1194.52, "text": " So they contrast this here." }, { "start": 1194.52, "end": 1200.68, "text": " For fine tuning, SEMCLEAR uses this right here, which is just, it's basically just a" }, { "start": 1200.68, "end": 1210.28, "text": " classifier on top of the output of the ResNet-50." }, { "start": 1210.28, "end": 1214.28, "text": " This is fine tuning from the input layer of the projection head." }, { "start": 1214.28, "end": 1220.52, "text": " To fine tune from the first layer of the projection head, we have a new encoder function as this," }, { "start": 1220.52, "end": 1223.94, "text": " which is ResNet followed by fully connected layers." }, { "start": 1223.94, "end": 1229.76, "text": " And you see they take the ResNet-50 output and they ship it through the first projection" }, { "start": 1229.76, "end": 1233.16, "text": " layer and then there is a task specific classifier." }, { "start": 1233.16, "end": 1239.6399999999999, "text": " Now, again, why, I don't even see why they make like this ginormous deal out of it, especially," }, { "start": 1239.64, "end": 1242.5600000000002, "text": " especially since the last layer of the ResNet-50." }, { "start": 1242.5600000000002, "end": 1248.68, "text": " I'm not, okay, here is, I'm not entirely sure, but are they taking the log?" }, { "start": 1248.68, "end": 1250.3200000000002, "text": " No, they're probably not taking the log." }, { "start": 1250.3200000000002, "end": 1251.72, "text": " It's okay." }, { "start": 1251.72, "end": 1255.68, "text": " But it's, yeah, it's just weird." }, { "start": 1255.68, "end": 1259.5600000000002, "text": " Like is there even a non-linearity at the end right here?" }, { "start": 1259.5600000000002, "end": 1264.76, "text": " Or is this really just like two matrix multiplications in a row, which I'm going to guess there's" }, { "start": 1264.76, "end": 1269.24, "text": " a big chance that that's the case, that the last layer of this encoder is actually not" }, { "start": 1269.24, "end": 1274.52, "text": " even followed by non-linearity and therefore you'll just kind of make the dimension different." }, { "start": 1274.52, "end": 1279.8, "text": " And I don't see why you can't just incorporate this into the model and have to like say it" }, { "start": 1279.8, "end": 1283.44, "text": " over and over again that this is a new special thing, right?" }, { "start": 1283.44, "end": 1287.28, "text": " Again, this is equivalent of tuning from a middle layer of the projection head instead" }, { "start": 1287.28, "end": 1288.76, "text": " of the output layer." }, { "start": 1288.76, "end": 1291.84, "text": " Okay, you just make your model a bit bigger." }, { "start": 1291.84, "end": 1292.84, "text": " Yeah." }, { "start": 1292.84, "end": 1297.24, "text": " So the third step is self-training or knowledge distillation." }, { "start": 1297.24, "end": 1298.96, "text": " And they give two variants right here." }, { "start": 1298.96, "end": 1304.04, "text": " This variant, as you can see here, this is just the cross entropy." }, { "start": 1304.04, "end": 1313.24, "text": " But instead of having labels right here, Y, you have what the teacher model thinks Y is" }, { "start": 1313.24, "end": 1314.24, "text": " given X." }, { "start": 1314.24, "end": 1321.16, "text": " Okay, that's cross entropy, but not with the true labels, but with the output of the teacher" }, { "start": 1321.16, "end": 1322.16, "text": " model." }, { "start": 1322.16, "end": 1323.66, "text": " And you can even mix that." 
}, { "start": 1323.66, "end": 1330.88, "text": " So you can, as you can see right here, you can mix this with an actual supervised loss." }, { "start": 1330.88, "end": 1333.1200000000001, "text": " So this would be the supervised loss, whatever." }, { "start": 1333.1200000000001, "end": 1335.1200000000001, "text": " Yeah, I guess that I was wrong." }, { "start": 1335.1200000000001, "end": 1340.5800000000002, "text": " That wasn't, I guess P of Y is always one in that case." }, { "start": 1340.5800000000002, "end": 1347.68, "text": " But they don't use this particular kind, I think, except in one of the ablations." }, { "start": 1347.68, "end": 1349.0400000000002, "text": " So how does this work?" }, { "start": 1349.0400000000002, "end": 1352.28, "text": " It works pretty well." }, { "start": 1352.28, "end": 1359.68, "text": " And so one of their experiments, as you see up here, it works pretty well in that if you" }, { "start": 1359.68, "end": 1368.44, "text": " have 1% of the labels, only 1% of ImageNet labels, which they say is smaller or equal" }, { "start": 1368.44, "end": 1375.28, "text": " than 13 images per class, so there's a thousand classes and you only have 13 labels per class" }, { "start": 1375.28, "end": 1377.28, "text": " or less." }, { "start": 1377.28, "end": 1388.32, "text": " If you, and they differentiate, if your encoder that you train is a ResNet 50, then you get," }, { "start": 1388.32, "end": 1391.8999999999999, "text": " and you can see the dashed line here is a supervised baseline." }, { "start": 1391.8999999999999, "end": 1396, "text": " You almost get to the supervised baseline with 1% of the labels." }, { "start": 1396, "end": 1401.52, "text": " And if you actually have a larger ResNet, then you get to the supervised performance" }, { "start": 1401.52, "end": 1405.3799999999999, "text": " without 99% of the labels." }, { "start": 1405.38, "end": 1413.24, "text": " And if you have, excuse me, 10% of the labels, you pass the supervised baseline." }, { "start": 1413.24, "end": 1419.72, "text": " So the supervised baseline is on 100% of the labels, mind you, and you only have 10% and" }, { "start": 1419.72, "end": 1421.88, "text": " this outperforms the supervised baseline." }, { "start": 1421.88, "end": 1427.5200000000002, "text": " Now of course, you could, here you could have another graphic where you show, oh, 100%." }, { "start": 1427.5200000000002, "end": 1431.5200000000002, "text": " What if we, you know, what if we do the whole procedure with 100% of the labels?" }, { "start": 1431.52, "end": 1438.52, "text": " So first we don't label the data, we do supervised, self-supervision, then we fine tune on a 100%" }, { "start": 1438.52, "end": 1439.52, "text": " of the data." }, { "start": 1439.52, "end": 1443.44, "text": " And then we do this distillation again, you would of course be even better." }, { "start": 1443.44, "end": 1448, "text": " And I think they have this somewhere in a table, but this is already pretty, pretty" }, { "start": 1448, "end": 1451.2, "text": " impressive." }, { "start": 1451.2, "end": 1456.24, "text": " And another claim they make right here is about the model sizes." }, { "start": 1456.24, "end": 1463.64, "text": " So and this figure is description, this now relates to the title." }, { "start": 1463.64, "end": 1470.44, "text": " They say bigger models yield larger gains when fine tuning with fewer labeled examples." }, { "start": 1470.44, "end": 1475.72, "text": " So there are three comparative statement words in one sentence." 
}, { "start": 1475.72, "end": 1479, "text": " Let's unpack this." }, { "start": 1479, "end": 1482.4, "text": " Bigger models yield larger gains." }, { "start": 1482.4, "end": 1491.52, "text": " So the bigger the model, the better the good, let's say, when fine tuning with fewer labeled" }, { "start": 1491.52, "end": 1492.52, "text": " examples." }, { "start": 1492.52, "end": 1493.52, "text": " Let's just look at the graph." }, { "start": 1493.52, "end": 1494.68, "text": " It's pretty, it's really clear." }, { "start": 1494.68, "end": 1497.88, "text": " So here we have number of parameters going over." }, { "start": 1497.88, "end": 1502.72, "text": " So these are the different models they look at, how many parameters they have to do this" }, { "start": 1502.72, "end": 1504.0800000000002, "text": " whole procedure." }, { "start": 1504.0800000000002, "end": 1511.2, "text": " And here is the relative improvement in percent over the top ImageNet 1 top accuracy." }, { "start": 1511.2, "end": 1518.8, "text": " So if you do this whole thing with 100% of the labels, right, I'm going to guess this" }, { "start": 1518.8, "end": 1521.8400000000001, "text": " here, this here is where they start out." }, { "start": 1521.8400000000001, "end": 1528.04, "text": " And you can see as you grow your models, you grow the performance." }, { "start": 1528.04, "end": 1534.24, "text": " And this, this is just by increasing the model size, right, you have the same data set, you" }, { "start": 1534.24, "end": 1538.76, "text": " have the same amount of labels, you have the same number of steps that you train for, and" }, { "start": 1538.76, "end": 1546.52, "text": " so on, just by the fact that you make your model bigger, you gain in performance." }, { "start": 1546.52, "end": 1553.4, "text": " Okay, now you can see that these curves here are above one another." }, { "start": 1553.4, "end": 1558.28, "text": " And these curves refer to getting small, less and less labels." }, { "start": 1558.28, "end": 1564.4, "text": " Okay, so if you only have 10% of the labels, your relative gains are larger." }, { "start": 1564.4, "end": 1569.76, "text": " That doesn't mean that you perform better with 10% of the labels than with 100% of the" }, { "start": 1569.76, "end": 1572.68, "text": " labels, that would be like ridiculous." }, { "start": 1572.68, "end": 1575.9, "text": " Well, I guess in this day and age, nothing is ridiculous." }, { "start": 1575.9, "end": 1582.5600000000002, "text": " But for now, we're still performing better by having more labels if we do the same procedure," }, { "start": 1582.5600000000002, "end": 1583.5600000000002, "text": " right?" }, { "start": 1583.5600000000002, "end": 1585.4, "text": " It's not like here." }, { "start": 1585.4, "end": 1591.4, "text": " So here, this baseline, the supervised baseline only does supervised training, right?" }, { "start": 1591.4, "end": 1595.3200000000002, "text": " So that's why we can outperform it with less of labels." }, { "start": 1595.3200000000002, "end": 1597.6000000000001, "text": " But here, we do the same procedure." }, { "start": 1597.6000000000001, "end": 1599.76, "text": " This is relative improvement, right?" }, { "start": 1599.76, "end": 1608.72, "text": " So this right here, the starting point would be if you had 10% of labels and a 25 million" }, { "start": 1608.72, "end": 1611.5600000000002, "text": " model, parameter model." 
}, { "start": 1611.5600000000002, "end": 1617.44, "text": " And this right here, for example, is if you have the same amount of labels, but a 200" }, { "start": 1617.44, "end": 1618.76, "text": " million parameter model." }, { "start": 1618.76, "end": 1622.66, "text": " And this is relative improvement, okay?" }, { "start": 1622.66, "end": 1631.64, "text": " But what the graph says is that the relative improvement is larger, the relative improvement" }, { "start": 1631.64, "end": 1639.12, "text": " is higher, the more parameters you have, which is the more you go to the right." }, { "start": 1639.12, "end": 1645.92, "text": " And that effect in itself is higher, the fewer labels you have, which is the different graphs." }, { "start": 1645.92, "end": 1647.6, "text": " And you can see that right here." }, { "start": 1647.6, "end": 1652.74, "text": " So if you have fewer and fewer labels, it becomes more and more important that you have" }, { "start": 1652.74, "end": 1654.24, "text": " bigger models." }, { "start": 1654.24, "end": 1656.78, "text": " And that's really counterintuitive, right?" }, { "start": 1656.78, "end": 1663.76, "text": " Because you would expect that the bigger models, they can overfit much more easily to the fewer" }, { "start": 1663.76, "end": 1664.76, "text": " labels." }, { "start": 1664.76, "end": 1665.76, "text": " But that doesn't seem the case." }, { "start": 1665.76, "end": 1671.9199999999998, "text": " So this self supervision, it really seems to be sort of a counter to this notion of" }, { "start": 1671.9199999999998, "end": 1673.6, "text": " overfitting." }, { "start": 1673.6, "end": 1677.8, "text": " And if you have larger and larger models, that's what they argue in the paper, you might" }, { "start": 1677.8, "end": 1683.4199999999998, "text": " be able to learn more and more features that might be useful for classification." }, { "start": 1683.4199999999998, "end": 1688.48, "text": " So if you have a larger model, you might, you're going to learn more kinds of features," }, { "start": 1688.48, "end": 1692.9199999999998, "text": " and then you're going to outperform because you have more chance that these features are" }, { "start": 1692.9199999999998, "end": 1695.4599999999998, "text": " going to be useful for classification." }, { "start": 1695.4599999999998, "end": 1701.1999999999998, "text": " And I don't think they really make a statement as to why that happens more with the, if you" }, { "start": 1701.1999999999998, "end": 1703.4399999999998, "text": " have less labels." }, { "start": 1703.44, "end": 1704.8, "text": " So let's think about this." }, { "start": 1704.8, "end": 1712, "text": " If I have very few labels, very, very few labels, why does it help me even more if I" }, { "start": 1712, "end": 1713, "text": " have a big model?" }, { "start": 1713, "end": 1717.28, "text": " Well, with the same argumentation, we could say, and maybe they actually say this already." }, { "start": 1717.28, "end": 1722.3200000000002, "text": " So I might be copying them involuntarily." }, { "start": 1722.3200000000002, "end": 1729.4, "text": " Maybe with fewer and fewer labels, like let's say we have all the labels, that's probably" }, { "start": 1729.4, "end": 1731.0800000000002, "text": " too many, right?" }, { "start": 1731.08, "end": 1736.6, "text": " If we can learn a task with some accuracy, we probably had too many labels." }, { "start": 1736.6, "end": 1737.6, "text": " Okay." 
}, { "start": 1737.6, "end": 1740.9199999999998, "text": " It's like, if we can't learn a task, we know we have too few." }, { "start": 1740.9199999999998, "end": 1745.32, "text": " Somewhere there is a border where we have enough, but that's like kind of one number." }, { "start": 1745.32, "end": 1751.54, "text": " And everything else is too many, technically speaking, like learning theoretically speaking." }, { "start": 1751.54, "end": 1755.36, "text": " So usually we have too many labels." }, { "start": 1755.36, "end": 1756.6399999999999, "text": " And what does that mean?" }, { "start": 1756.6399999999999, "end": 1759, "text": " That probably means that there are multiple ways." }, { "start": 1759, "end": 1763.84, "text": " Like if we have too many labels, there are multiple different features we can pick up" }, { "start": 1763.84, "end": 1764.84, "text": " to learn." }, { "start": 1764.84, "end": 1767.76, "text": " There are multiple different paths to learn our goals." }, { "start": 1767.76, "end": 1773.88, "text": " So if we have ImageNet, and like there's this weird task to recognize a three, and we get" }, { "start": 1773.88, "end": 1778.72, "text": " lots and lots and lots of examples of threes, right?" }, { "start": 1778.72, "end": 1780.2, "text": " We can decide on a feature." }, { "start": 1780.2, "end": 1784.62, "text": " We can say, oh, all the threes that I see, they have this bow down here, or all the threes" }, { "start": 1784.62, "end": 1787.68, "text": " that I see, they have this bend here, and so on." }, { "start": 1787.68, "end": 1793.5600000000002, "text": " But if I only have very few labels, there might only be like a single feature that is" }, { "start": 1793.5600000000002, "end": 1798, "text": " even theoretically possible to learn from the labels I'm given." }, { "start": 1798, "end": 1802.98, "text": " And therefore, if I have a bigger model in cell in pre-training, because the pre-training" }, { "start": 1802.98, "end": 1806.8, "text": " happens with the same amount of data, right?" }, { "start": 1806.8, "end": 1813.44, "text": " If I have a bigger model that does the self-supervised pre-training, it's going to learn more features." }, { "start": 1813.44, "end": 1820.72, "text": " And then there's a higher chance that that one feature that these very few labels that" }, { "start": 1820.72, "end": 1825.46, "text": " I am able to learn something from is going to be in these features." }, { "start": 1825.46, "end": 1831.4, "text": " So that's kind of how I make sense of it in combination with what they're saying right" }, { "start": 1831.4, "end": 1832.72, "text": " here." }, { "start": 1832.72, "end": 1836.9, "text": " Okay, so this was the main points." }, { "start": 1836.9, "end": 1841.56, "text": " They do a lot of empirical studies showing the effects of these sizes." }, { "start": 1841.56, "end": 1848.52, "text": " They stress that it's important to have both deep and wide networks." }, { "start": 1848.52, "end": 1853.06, "text": " And they also do this additional attention mechanism over the convolution filters." }, { "start": 1853.06, "end": 1856.8799999999999, "text": " I don't want to go into that particularly." }, { "start": 1856.8799999999999, "end": 1864.96, "text": " But they also do linear evaluation compared to supervised, compared to fine tuning with" }, { "start": 1864.96, "end": 1866.6799999999998, "text": " 100% of the labels." }, { "start": 1866.68, "end": 1872.88, "text": " So they do a very thorough empirical investigation." 
}, { "start": 1872.88, "end": 1876.48, "text": " And yeah, I do appreciate that." }, { "start": 1876.48, "end": 1880.16, "text": " And they kind of show the same things." }, { "start": 1880.16, "end": 1883.92, "text": " And here they show the number of layers in the projection head." }, { "start": 1883.92, "end": 1890.3, "text": " So as you increase the number of layers in the projection head and train from the optimal" }, { "start": 1890.3, "end": 1893.94, "text": " layer in the middle, your performance goes up, as you can see." }, { "start": 1893.94, "end": 1899.72, "text": " But it also this effect is stronger when you have fewer labels, right?" }, { "start": 1899.72, "end": 1904.26, "text": " You can see the differences here are greater than the differences here or even here when" }, { "start": 1904.26, "end": 1906.6000000000001, "text": " you have 100% of the labels." }, { "start": 1906.6000000000001, "end": 1913.3400000000001, "text": " So the fewer labels, the fewer the labels, the more benefit you have from the architecture" }, { "start": 1913.3400000000001, "end": 1914.3400000000001, "text": " right here." }, { "start": 1914.3400000000001, "end": 1919.0800000000002, "text": " And here they show that it's not always optimal to train from the last projection layer, but" }, { "start": 1919.0800000000002, "end": 1920.3200000000002, "text": " here the first one." }, { "start": 1920.32, "end": 1925.12, "text": " So I guess they converge on three projection layers, and you always want to keep the first" }, { "start": 1925.12, "end": 1932.1599999999999, "text": " one around after self supervised training, as we mentioned before." }, { "start": 1932.1599999999999, "end": 1938.08, "text": " They investigate different distillation losses and show that it is actually important that" }, { "start": 1938.08, "end": 1944.12, "text": " you do the distillation loss on labeled and unlabeled sets." }, { "start": 1944.12, "end": 1952.7199999999998, "text": " You can see here if you only train with the labels after fine tuning, you get poor performance." }, { "start": 1952.7199999999998, "end": 1959.32, "text": " If you do the label and distillation loss, but only do it on the data set where you have" }, { "start": 1959.32, "end": 1962.5, "text": " labels, then you get more performance." }, { "start": 1962.5, "end": 1967.82, "text": " If you do label and distillation loss, but also include your unlabeled data, you get" }, { "start": 1967.82, "end": 1969.8, "text": " even more performance." }, { "start": 1969.8, "end": 1974.56, "text": " And then if you do that, but you don't do the label loss." }, { "start": 1974.56, "end": 1980.82, "text": " So before we've seen you can mix the distillation loss with the label loss, if you have lots" }, { "start": 1980.82, "end": 1984.32, "text": " of labels, then you drop in performance again." }, { "start": 1984.32, "end": 1988.8799999999999, "text": " And you can see right here, the drop in performance is proportional to how many labeled examples" }, { "start": 1988.8799999999999, "end": 1989.8799999999999, "text": " you have." }, { "start": 1989.8799999999999, "end": 1991.3999999999999, "text": " And that's natural, right?" }, { "start": 1991.3999999999999, "end": 1996.84, "text": " If you have the labels, you can actually mix that information in with the distillation" }, { "start": 1996.84, "end": 1999, "text": " loss and that will make you better." 
}, { "start": 1999, "end": 2006.96, "text": " And here they drop 0.1% and here they drop less than 1% by leaving away the label." }, { "start": 2006.96, "end": 2014.48, "text": " But their point basically is that it is more important to distill using also unlabeled" }, { "start": 2014.48, "end": 2020.44, "text": " data, then it is to distill, including the label loss." }, { "start": 2020.44, "end": 2022.64, "text": " And it's much easier to not include the label loss." }, { "start": 2022.64, "end": 2026.56, "text": " So they don't do it, I guess." }, { "start": 2026.56, "end": 2030.44, "text": " All right, so I think that was it." }, { "start": 2030.44, "end": 2035.6799999999998, "text": " They compare, as I said, they compare like self distillation, where you distill into" }, { "start": 2035.6799999999998, "end": 2042.6799999999998, "text": " an equally sized model and down distillation, where you distill into a smaller model, maybe" }, { "start": 2042.6799999999998, "end": 2044.36, "text": " that's vice versa." }, { "start": 2044.36, "end": 2047.12, "text": " And they do a lot of comparison to other methods." }, { "start": 2047.12, "end": 2050.52, "text": " So this is a very thorough work, I feel." }, { "start": 2050.52, "end": 2057.68, "text": " And yeah, if you want more about the exact experiments, I invite you to look at the paper." }, { "start": 2057.68, "end": 2063.96, "text": " And let's just have a final look at the broader impact statement right here." }, { "start": 2063.96, "end": 2071.84, "text": " So the broader, remember the broader impact statement is supposed to force you to think" }, { "start": 2071.84, "end": 2078.88, "text": " about how society might be impacted at large by your work." }, { "start": 2078.88, "end": 2082.48, "text": " So it says, the finding described in this paper can potentially be harnessed to improve" }, { "start": 2082.48, "end": 2087.56, "text": " accuracy in any application or computer vision, where it is more expensive or difficult to" }, { "start": 2087.56, "end": 2090.96, "text": " label additional data than to train larger models." }, { "start": 2090.96, "end": 2094.44, "text": " Such applications are clearly beneficial to society." }, { "start": 2094.44, "end": 2099.12, "text": " For example, in medical applications where acquiring high quality labels requires careful" }, { "start": 2099.12, "end": 2103.88, "text": " annotation by clinicians, better semi supervised learning approaches can potentially help save" }, { "start": 2103.88, "end": 2105.32, "text": " lives." }, { "start": 2105.32, "end": 2109.2400000000002, "text": " Application of computer vision to agriculture can increase crop yields, which may help to" }, { "start": 2109.2400000000002, "end": 2111.6000000000004, "text": " improve availability of food." }, { "start": 2111.6000000000004, "end": 2115.92, "text": " However, we also recognize that our approach can become a potential component of harmful" }, { "start": 2115.92, "end": 2118.04, "text": " surveillance systems." }, { "start": 2118.04, "end": 2123.76, "text": " Moreover, there is an entire industry built around human labeling services and technology" }, { "start": 2123.76, "end": 2128, "text": " that reduces the need for these services could lead to short term loss of income for some" }, { "start": 2128, "end": 2131.7200000000003, "text": " of those currently employed or contracted to provide labels." 
}, { "start": 2131.72, "end": 2139.68, "text": " So ask yourself how much of that statement has to do with the actual novelty of this" }, { "start": 2139.68, "end": 2141.52, "text": " paper?" }, { "start": 2141.52, "end": 2144.24, "text": " And the answer is of course, zero, right?" }, { "start": 2144.24, "end": 2150.6, "text": " Like you can replace like our method in this thing with like machine learning or computer" }, { "start": 2150.6, "end": 2157.9599999999996, "text": " vision in general, like, oh, really SIMClear V2 specifically can increase crop yields?" }, { "start": 2157.96, "end": 2164.68, "text": " Like that specific invention of this paper will lead to higher crop yields, will lead" }, { "start": 2164.68, "end": 2167.12, "text": " to surveillance systems." }, { "start": 2167.12, "end": 2174.56, "text": " So I'm, yeah, you know, I think like, I'm not gonna get too upset about these." }, { "start": 2174.56, "end": 2178.76, "text": " I mean, this, I think it's quite funny." }, { "start": 2178.76, "end": 2188.84, "text": " But just, again, I wonder whether the people advocating for these things are happy with" }, { "start": 2188.84, "end": 2195.5600000000004, "text": " these statements, because clearly, clearly, this is just a template that you copy paste" }, { "start": 2195.5600000000004, "end": 2199.48, "text": " from paper to paper, replacing like a few words." }, { "start": 2199.48, "end": 2202.76, "text": " And if it's computer vision, you're like, oh, my deep fakes." }, { "start": 2202.76, "end": 2206.7200000000003, "text": " And if it's an NLP, it's like, oh, I'm a fake news." }, { "start": 2206.72, "end": 2217.4399999999996, "text": " And yeah, I wonder if really anything like particularly is has I wonder whether these" }, { "start": 2217.4399999999996, "end": 2218.7599999999998, "text": " people are happy now." }, { "start": 2218.7599999999998, "end": 2220.64, "text": " Yeah, I just I wonder." }, { "start": 2220.64, "end": 2227.04, "text": " And if, if they are, I wonder whether it's really for the reason that they claim that," }, { "start": 2227.04, "end": 2232.9599999999996, "text": " oh, now we have a statement here of how it impacts society, because I could have told" }, { "start": 2232.9599999999996, "end": 2233.9599999999996, "text": " you that before." }, { "start": 2233.96, "end": 2237.6, "text": " I even read the title of the paper, right, what the broader impact statement is going" }, { "start": 2237.6, "end": 2238.6, "text": " to be." }, { "start": 2238.6, "end": 2244.6, "text": " In any case, rant too long, check out paper, share it out, leave a like, comment if you" }, { "start": 2244.6, "end": 2247.6, "text": " disagree or agree." }, { "start": 2247.6, "end": 2264.44, "text": " And yeah, bye bye." } ]
cllFzkvrYmE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "geoff hinton", "geoff hinton capsule networks", "geoff hinton neural networks", "geoffrey hinton", "geoffrey hinton deep learning", "geoffrey hinton glom", "hinton glom", "glom model", "deep learning tutorial", "introduction to deep learning", "capsule networks", "computer vision", "capsule networks explained", "google brain", "google ai", "schmidhuber", "transformer", "attention mechanism", "consensus algorithm", "column" ]
#glom #hinton #capsules Geoffrey Hinton describes GLOM, a Computer Vision model that combines transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders and RNNs. GLOM decomposes an image into a parse tree of objects and their parts. However, unlike previous systems, the parse tree is constructed dynamically and differently for each input, without changing the underlying neural network. This is done by a multi-step consensus algorithm that runs over different levels of abstraction at each location of an image simultaneously. GLOM is just an idea for now but suggests a radically new approach to AI visual scene understanding. OUTLINE: 0:00 - Intro & Overview 3:10 - Object Recognition as Parse Trees 5:40 - Capsule Networks 8:00 - GLOM Architecture Overview 13:10 - Top-Down and Bottom-Up communication 18:30 - Emergence of Islands 22:00 - Cross-Column Attention Mechanism 27:10 - My Improvements for the Attention Mechanism 35:25 - Some Design Decisions 43:25 - Training GLOM as a Denoising Autoencoder & Contrastive Learning 52:20 - Coordinate Transformations & Representing Uncertainty 57:05 - How GLOM handles Video 1:01:10 - Conclusion & Comments Paper: https://arxiv.org/abs/2102.12627 Abstract: This paper does not describe a working system. Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language Authors: Geoffrey Hinton Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there. Today we'll look at how to represent part-whole hierarchies in a neural network, by the legend himself, Geoffrey Hinton. He describes a system called GLOM that is a new approach to processing visual information using neural networks. Interestingly, the paper starts off by saying: this paper does not describe a working system. So this is an idea paper, Geoffrey Hinton's suggestion of how we should go about solving, or at least furthering, vision in the AI community. He says openly: these are just ideas, please prove me right, prove me wrong, try them out, and so on. And I absolutely welcome this. Idea papers are a thing that I think we have lost as a community, because everything needs to be state of the art and so on. This is super cool, and I encourage more people to do it. I'm not saying you're going to have the same kind of success with an idea paper as Jeff Hinton; he is banking on his name in large part with this, but nevertheless it's just an arXiv paper. I see people complaining that this would never get attention if it weren't him, and yes, people wouldn't pay attention, but you're still welcome to write up your ideas and post them on arXiv, or write a blog post, or make a YouTube video. Anyone can have opinions, so go ahead. So, to the paper itself. GLOM, the name stemming from agglomeration, is a system that, quoting the abstract, instead presents a single idea about representation, which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation, and capsules. GLOM answers the question: how can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language. That's the abstract. We'll dive into the system and see what it's about, and I think I can actually make a suggestion to improve it, though maybe I'm way behind other folks on this. So what is the GLOM system, what are these parse trees about, and why does it combine all of these things? The paper has two core diagrams, and at first sight they have little to do with each other, so let me try to go about it like this. Hinton looks at vision very much in terms of: you have an image or a video, and you want to parse the image into a kind of tree, a tree of objects and their parts. So let's say it's an image of a car; the whole notion is very object-centric. This is my best attempt at a car, and a parse tree for this image would look something like this: the whole thing here is a car, so that's going to be the top node in the parse tree. The car has different parts, namely the cabin, the motor, and the wheels, and those are going to be the children downstream in the parse tree. The cabin itself has segments, the windows and the door area, so that is going to be window, window, door, and so on.
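To make the target of all this concrete, here is a tiny sketch of the kind of parse tree we're talking about, written as plain Python. The node names are just this car example and entirely mine; GLOM of course never builds such a structure explicitly — the whole point of the paper is to make something equivalent emerge from a fixed network.

```python
# A hypothetical parse tree for the car image. Each node is a
# (name, children) pair; this structure has to be different for
# every input image, which is the core difficulty.
parse_tree = (
    "car", [
        ("cabin", [("window", []), ("window", []), ("door", [])]),
        ("motor", []),
        ("wheels", []),
    ],
)
```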
So what we want to do is look at an image and create this parse tree over here. This is very much in the area of GOFAI, good old-fashioned AI, where people want to understand the world in terms of symbolic representations and the relations of the symbols to each other. However, what Hinton is saying is that you can't really do this directly with neural networks; neural networks are continuous, and so on. In addition, we know that the brain doesn't reconfigure itself every single time you get a new input. Even though it has some neuroplasticity, while you look at the world and do inference, the connections stay the same. So we need to come up with a system where, when we input one image, it gives us one parse tree, but when we input another image, it gives us some other parse tree; maybe now there are two objects in the image, and this one has only one descendant, which in turn has two descendants, and so on. You see the point: the tree structure needs to be different each time. This was in part addressed by Hinton's capsule networks. In capsule networks, Hinton's idea was roughly: I'm going to have these capsules in different layers, lots of capsules per layer. And I'm going over capsules here because they're kind of important. The idea was that the first layer of capsules would recognize the smallest parts, so there would be a wheel capsule, a window capsule, and so on, a single capsule for every part that could possibly be in an image. You already see the limitation: if you want to recognize the whole world, you need very many capsules. But nevertheless, this was the idea. A capsule would be active if the given object was in the image. At the next layer there would then be, say, a motor capsule and a cabin capsule, and so on. The window capsule would activate the cabin capsule, the door capsule would also activate the cabin capsule, the wheel (which should probably sit at this level as well) would activate its parent, and then all of these things together would activate the car capsule. So you can see that the parse tree is generated dynamically: this routing in capsule networks comes out differently every time. In the next image there could be a different object, different capsules are activated, different things are routed together, and the parse tree is different. However, you need these many, many capsules, one capsule per possible part in the image, and that was just infeasible; the routing in these capsules was also very cumbersome. So here we go with a new approach, and this new approach is what Hinton describes as: the GLOM architecture is composed of a large number of columns, which all use exactly the same weights. Each column is a stack of spatially local autoencoders that learn multiple levels of representation for what is happening in a small image patch. Okay, so we're going to build up some kind of imagination here. At the bottom level, we have our image.
So our image is going to be lying flat on the ground, maybe you can see it like this, and it is divided into pixels, or small patches, whatever you want; these are called locations. So it is divided like this into different locations (I am not good at perspective drawing). Above each location, there is one of these columns, and the columns stack up into multiple levels: a bottom level, a middle level, higher levels, and so on. Hinton suggests about five levels should probably do. And every single level of a column tries to represent that location of the image, but at a different resolution. Say this is actually an image of a cat, and at this location there's an ear, or a part of an ear. The very bottom level would then represent something like the fine structure of the fur, what's going on really at the micro, location level. The next level would represent what's going on at this location in a broader sense, so it might recognize that this is actually part of an ear; it goes beyond the single location. If you're thinking of convolutional neural networks, you're in the right ballpark, but we're going to implement this differently. The next level up would recognize: this location is part of a cat's head. And the next level would recognize: this thing is a cat. There may be cats at other places too, but at this location, there is a cat, and so on; maybe we don't need more levels for this particular image. Now consider a different column, this column right here over the cat's neck, and look at what's going on there. In its top level it might also say: there's a cat. But below that, it's part of a cat's neck, and then maybe there's a chin, and then the fine fur structure of the chin. So you get the idea: every column builds up these representations, and these are vectors, embedding vectors. At the bottom location of the first column you'd have the fur vector, and above it the ear vector, whereas over here the chin would be a very different vector at the same level. The only thing that agrees here is the cat vector: the cat vector in the top level would agree between both of these columns. I hope you get the idea: you have a column above each of the locations, and every single level in the column represents that particular location, but at a different level of abstraction and, I don't want to say resolution, but considering more and more of its neighbors. The question is: how does it consider its neighbors, and how do you learn these things, these different abstractions? And that's where the columns communicate with each other.
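Just to pin down the bookkeeping, here is a minimal sketch of what the GLOM state could look like in code. Everything here is an assumption for illustration: the grid size, the number of levels, the embedding dimension, and all the names are mine, not from the paper.

```python
import torch

H, W = 32, 32   # grid of locations, one column above each image patch
L = 5           # levels per column; Hinton suggests about five
D = 128         # embedding dimension at every level

# The whole GLOM state: one embedding vector per (location, level).
# state[y, x, l] is the level-l vector of the column above patch (y, x),
# e.g. "fur" at l=0, "ear" at l=1, ..., "cat" at the top level.
state = torch.zeros(H, W, L, D)
```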
So Hinton imagines this as a process over time, where the columns iteratively communicate with each other, and within a column, the levels communicate with each other. And this is the first of the core diagrams: one single column over time, with, say, the fur at the bottom, the cat's ear above it, and cat at the top. The embeddings are updated by sending information around: every single vector at every single level of every single column is updated by simply averaging four things. The embedding at layer l, location x, at time step t+1 is a sum of the following parts. First, the embedding at the same place at the last time step, so this is a sort of recurrent neural network: the new embedding builds on the old embedding. Second, a top-down function, as Hinton calls it, of the embedding at the same location one layer above, layer l+1, at the previous time step. Third, a bottom-up function of the embedding at layer l-1, same location, at time step t. That's what you can see in the diagram. The green arrows mean each level simply passes its embedding to the next time step; if nothing else happens, you just keep your embedding. The blue arrows mean each embedding also sends itself through a neural network to the layer one above itself. Everything is a neural network here, every arrow except the green ones, but the green ones could be too. Sending information upward is intuitive: the ear embedding sends information about itself, saying, hey, I'm a cat ear, and it goes through a neural network because it needs to be transformed; the network has to learn that if there's a cat ear at this level, there might be a cat at the top level. And lastly, every level sends information down, the red arrows, which are also neural networks: the cat ear says, I'm a cat ear, so downstream of me there might be some fur structure. So all of these embeddings try to predict each other, to predict their neighbors, and Hinton's idea is that by aggregating over time, they will reach a consensus about what is in these columns. There are a few things missing here. One, and Hinton points this out, is that all of these different columns that we've drawn use the same weights. He discusses at the end of the paper that this is not really biologically plausible, but there's an ensemble effect; we won't go into that. So the blue arrows are always the same for each time step, but not necessarily the same between different layers: this F up here might be different from that F down there. However, the function passing information from layer l to layer l+1 is the same in every single column across the image. It's a bit like a convolutional network in terms of weight sharing; you can imagine it as a one-by-one convolutional network in that sense.
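Putting those three contributions into code, one consensus step might look roughly like this. This is a sketch under my own assumptions: I use simple two-layer MLPs for the top-down and bottom-up functions, average the contributions with equal weights (the paper says weighted average), and leave out the lateral attention term, which comes up below.

```python
import torch
import torch.nn as nn

D, L = 128, 5

# One bottom-up and one top-down net per level boundary, shared across
# ALL columns -- this is the weight sharing just described.
mlp = lambda: nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, D))
bottom_up = nn.ModuleList([mlp() for _ in range(L - 1)])
top_down = nn.ModuleList([mlp() for _ in range(L - 1)])

def glom_step(state):
    """One time step. state: (H, W, L, D). Lateral attention omitted."""
    new = []
    for l in range(L):
        contrib = [state[:, :, l]]              # green arrow: keep old embedding
        if l > 0:                               # blue arrow: prediction from below
            contrib.append(bottom_up[l - 1](state[:, :, l - 1]))
        if l < L - 1:                           # red arrow: prediction from above
            contrib.append(top_down[l](state[:, :, l + 1]))
        new.append(torch.stack(contrib).mean(0))  # schematic equal weighting
    return torch.stack(new, dim=2)              # back to (H, W, L, D)
```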
But unlike a convolution, the information does not only go up the layers; it also goes down the layers, and over time. As I said, this is an iterative procedure: it goes up, down, and laterally. The second thing is, you might now ask: well, if every single column has the same weights, how can you localize any information? And the answer is that you have a side input, like in a neural field, annotating each location, basically a positional encoding, honestly. So in addition to what the image patch looks like, each column also gets its x-y coordinates, or its coordinates relative to some other coordinate frame, and so the network knows where it is. And that's going to be important, because what Hinton wants to build are these islands. Hinton's imagination is that somewhere in between, say after time step 10 when you want to run it for 100, these islands will emerge. Imagine the image is now a 1D line of locations down here, or imagine the columns in 2D, whatever fits your brain better. He imagines that the bottom vectors will just happily describe whatever is at the very bottom level. But at the next level, at lower resolution and higher abstraction, there must necessarily be vectors that are the same, if the system works: look at these two vectors, and these two, they are the same, because they now describe objects that are larger than one location. The cat's head is larger than a single location; therefore, at the layer that represents the cat's head, you expect, because all the up and down functions in the same layer have the same weights, that the embedding of the cat's head is the same in the different columns. If the system works, this must be the case. And as you go up, you expect more and more of these, what Hinton calls islands, to emerge, regions that agree. The idea behind all this message passing is that over time, all of these things reinforce each other. We looked at a column before, and we said: this vector down here gets information from the top saying, hey, there's a cat here, so you might be a cat ear or a cat eye or something like this. And it gets information from the bottom saying, well, there's a bit of fur here, and there's some cartilage showing, and so on. And it has already sort of figured out that it might be an ear. These pieces of information now reinforce each other: okay, you're saying I'm part of a head, you're saying there's a bit of fur and cartilage, and I already kind of noticed that I'm a bit like an ear, so I'm probably more of an ear. So the idea is that over time you have this consensus algorithm. There's one thing missing, though, and that is how the different columns communicate with each other. I said there are different parts to the update, and there is one missing; I'm just going to call it A, and A is going to be an attention mechanism across all the other columns at the same layer.
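Before getting to that attention term, here is one crude way you could check whether such islands are actually forming. This diagnostic is entirely my own, not something from the paper: just measure how similar each level's vector is to its neighbor's.

```python
import torch.nn.functional as F

def island_map(level_emb):
    """Cosine similarity of each location to its right-hand neighbour at
    one level. level_emb: (H, W, D). Values near 1 mean neighbouring
    columns agree at this level, i.e. an 'island' of near-identical
    vectors has formed there."""
    a = F.normalize(level_emb[:, :-1], dim=-1)
    b = F.normalize(level_emb[:, 1:], dim=-1)
    return (a * b).sum(-1)   # shape (H, W-1)
```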
So if we look here, this cell receives information from above, from below, from itself, and also, in an attention-mechanism way, from all of the different embeddings at the same layer. You can see that Hinton puts in everything we've got in here. Now the attention, he says, is simpler than usual. So these are the four parts right here: at each discrete time, and in each column separately, the embedding at a level is updated to be the weighted average of four contributions. The prediction produced by the bottom-up neural net acting on the embedding at the level below at the previous time; the prediction produced by the top-down neural net acting on the embedding at the level above at the previous time; the embedding vector at the previous time step, these three we have already; and then the attention-weighted average of the embeddings at the same level in nearby columns at the previous time. On "nearby", he later backpedals a bit, I think, on what nearby exactly means. I think this is still up for debate, and this is where I think I can help. But what he wants to do is aggregate via attention, and he wants to simplify attention. So usually, we produce queries, keys and values, which are all different functions of our input, and then we do query times key transposed, softmax of that, times value, and that is our attention mechanism, which allows arbitrary information to be routed around. Hinton says: nope, what I want is simply that the queries, the keys and the values are all just equal to the embeddings themselves. So the attention mechanism works out to be the softmax of x times x transposed, times x. And what that does is: if you yourself are the query, and every vector is also itself the key, what do you attend to? You attend to vectors that are very similar to yourself. And you can see that in Hinton's diagram: the one we circled dark blue, what would it attend to? Well, probably its left-hand neighbor, the one you can see circled; I'm going to circle it. It will probably attend a lot to this one, somewhat less to this one, and to the ones over here it might not attend at all. You see, this is a consensus algorithm. It is not meant as a way to pass information around, it is not meant, like in a transformer, as a way to do computation, because we have no trainable weights in this process. It is simply meant as a consensus algorithm. So Hinton imagines that by doing this, by attending to things that are similar to you and then integrating their values, these islands will form. And you can imagine: if two vectors are already close at the same layer, this mechanism will make them even closer. So this is a sort of clustering algorithm.
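Here is a quick NumPy sketch of that simplified attention (the temperature parameter is my addition for illustration; the paper really just has the softmax of x times x transposed, times x):

```python
import numpy as np

def glom_layer_attention(x: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Hinton's simplified attention within one layer: queries = keys = values = x.
    x has shape (num_columns, dim), one row per column's embedding at this layer.
    Each embedding is replaced by an average of all same-layer embeddings,
    weighted by how similar they are to it."""
    scores = x @ x.T / temperature
    scores -= scores.max(axis=1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x
```

If you apply this repeatedly to a set of vectors, rows that start out similar end up nearly identical, which is exactly the island-forming, clustering behavior described above.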
Now, my question concerns these drawings. If you look at them, they are very specifically constructed, constructed such that a parse tree emerges. When you look at this, you have a clear sense of it; I can probably move all of that crap out of the way. You can see the parse tree, right? The black thing is going to be the top node right here (let's leave the scene-level embedding away for now). And then it has two child nodes, this one and this one, and every one of those has two child nodes of its own. But it doesn't have to be that way in general; this is constructed dynamically. So this is dynamically constructing a parse tree, and the parse tree here looks something like this. And this is pretty cool, but the figure is also drawn deliberately such that a core problem does not arise. The core problem would be something like: what if this vector here was actually also pointing like this? Then it is not in the same area of the parse tree; if you go down the parse tree, it actually belongs over here. Now, if we do what Hinton says, and for this vector we do the aggregation via attention on the same layer, what we will attend to is this vector over there. That is probably not what is meant, because this vector over there may represent the same thing, but you can see it is not in the same path of the parse tree. He mentions this a little bit throughout, but not necessarily clearly, and the drawing makes it seem like there's no problem. But I hope you can see how this is a problem: the attention would pull in information from over there, even though the whole parse tree, the islands on the top layer, suggest that these two things should be parsed independently from each other, and therefore also processed independently from each other. So here is my suggestion to extend this (and maybe Hinton has already thought of it): I would suggest that this attention mechanism is modulated by how close two things are in the parse tree. What would that mean? For a given vector: how much do you attend to this vector right here? Well, a lot, because it agrees with you; the softmax of the inner product would be high, and it is also in the same branch of the parse tree. So that's perfect. This one right here doesn't agree with you, but it is in the same branch, so it could potentially come to agree with you later, through the consensus algorithm. However, this one over here, you probably shouldn't attend to it much at all, even though it points in the same direction, because it's in a different branch of the parse tree. And these branches on top could still change; by sending information there, this one could change the top structure so that it agrees more with your branch of the parse tree, and so on. So my suggestion would be: let's not only take the softmax over the current layer's similarities. Let's have a sum over layers, indexed by k. Say we're at layer l; I'll number the layers, going from the bottom up to the top (I suck at this). From the current layer, I want to go up the hierarchy all the way to the top, and for each layer k along the way, I take the inner products of the representations at that layer, x at layer k times x at layer k transposed, like this.
What we aggregate is still the values on the current layer, but how much we attend to each of them should depend on the parse tree. And we do that like this, maybe with a factor of lambda for each layer we go up; I hope you get what I mean. Now, about how you aggregate: the sum here, as drawn, is a bit off. Hi, it's future Yannick, and I just wanted to write that down again, because I've obviously made some mistakes. The sum should be within the softmax, because you want to aggregate the distributions in log space, so that the softmax still yields a valid distribution. And the lambda is exponentiated by k, where k now properly runs from zero all the way up the stack. So big L would be the total number of layers, and little l the layer you're currently at. And you can clearly see the contribution of these attention matrices: lambda would be something smaller than one, and therefore the contribution of the current layer is the strongest, the next layer up is a bit weaker, one more up is weaker still, and so on. So you'd still have essentially the same mechanism as Hinton is suggesting, but controlling for the fact that things may be in different branches of the parse tree. All right, back to classic Yannick, who is thoroughly confused by these things. Yeah, I'm not good at coming up with math on the spot, but I hope you can see what it's doing. If you only take the k equals zero term, you would simply stay at that layer, and it would be what Hinton said. But what I'm saying is, you should also consider how much the layer one up from you agrees with the layer one up from the thing you want to attend to. So you also compute the inner product between those embeddings, and you add that into the softmax distribution. So initially, the softmax distribution would say: you should attend to this thing and this thing and this thing a lot. But then the next layer up the hierarchy would maybe say, well, we agree here, because these are in the same island, but this one, maybe not so much. And you would add those together, maybe with a lambda factor in there. And then you go one layer up, and it would say, well, okay, everything over here basically agrees, and everything over there basically doesn't. And you would add that too, maybe with a lambda squared. So as you go up the layers, it becomes less and less important, but you'd still consider it. All right. Now, if this is going to work out, cite the channel.
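Putting future-Yannick's corrected version into one formula (my notation: $e^{k}_{i}$ is column $i$'s embedding at layer $k$, little $l$ is the current layer, big $L$ the top layer, and $\lambda < 1$ a discount factor):

```latex
A_{ij} \;=\; \operatorname{softmax}_{j}\!\left(\sum_{k=0}^{L-l} \lambda^{k}\,
        \big\langle e^{\,l+k}_{i},\, e^{\,l+k}_{j} \big\rangle\right),
\qquad
\hat{e}^{\,l}_{i} \;=\; \sum_{j} A_{ij}\, e^{\,l}_{j}
```

Keeping only the k = 0 term recovers Hinton's single-layer attention; a larger $\lambda$ increasingly down-weights columns whose higher-level embeddings, i.e. their branch of the parse tree, disagree with yours, while the values being averaged still come from the current layer.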
Now, back to what Hinton says. This is actually the system, the system in a nutshell: you input the image at the bottom, and Hinton says you could use something like a convnet at the very bottom to get it into the columns. Then, at every time step, you pass information up the columns, down the columns, and between the same layer of the different columns. And at some point this is going to stabilize; I don't know if it has cycles, it probably does not have cycles. So at some point, this comes to an end, and when it does, the object-level embeddings should agree on an object, the part-level embeddings should agree on what parts there are, the sub-parts agree, and so on. And they form these islands, and these islands give rise to a parse tree. The parse tree can tell you what object is there, what it is made of, where these parts are in the image, and so on. So, exactly, that is it. And now we're going to look at what Hinton calls some design decisions. How many levels are there? About five. Okay, we can skip that. How fine-grained are the locations? Hinton says they could be as fine-grained as pixels, or they could correspond to larger image patches, and he says you could use a convolutional neural network to get the input in there. Does the bottom-up net look at nearby locations? He says yes, the bottom-up net (this is not the attention network, this is the bottom-up network) could look at nearby locations. But Hinton imagines that if you have bottom-up, top-down, and attention drawing in information, and if you maybe limit that attention to a neighborhood, then the attention will do the job, because instead of looking at neighboring locations in the bottom-up network, you can aggregate that information in two time steps: bottom-up here, bottom-up there, and then, using the lateral attention mechanism, you can pass that information across. And it also biases the network less toward the immediate neighborhood, since the attention mechanism can look farther, which conflicts with what he said above, that the attention mechanism might only look at the neighbors. I think there are different possibilities here, and only looking at neighbors is actually one solution to the problem of having similar vectors at very distant locations down in the lower levels. But I think it's not as good a solution to simply look at how close things are in pixel space, because even though things are close in pixel space, they might be far apart in parse-tree space. How does the attention work? We've already looked at this: the way one location attends to another is the softmax of the inner product between the embeddings, and the values are also just the embeddings at that layer. The visual input: he says a convolutional net could be used. Color and texture: he gives this example that if an object is entirely pale green, or entirely mauve (I don't even know how to pronounce this), the color of a part is straightforward, but what color is the whole object? This whole notion of capsules, by the way, imagines that these embeddings represent properties of the object, so the cat-ear embedding represents not only the fact that it is a cat ear, but also different properties of the cat ear; even its location in the image is in the embedding. And we know that transformers must be doing something like this, because we feed in positional embeddings at the very bottom, and they can still compute things in terms of positions. So there's an intrinsic connection between capsules and the transformer architecture. He says: one of the motivations of GLOM was the idea that the whole object has a compound color, which might be called pale green or mauve, and at the object level, every location belonging to the object has exactly the same compound color; the object is that compound color all over. When deciding which other locations at the object level to attend to, preference would be given to locations with a similar compound color.
So what he's saying right here is that you could give preference to locations with a similar color when you decide what to attend to. But the color isn't as simple as just the color at the location you're at. If this is green, and this here is blue, then the bottom layer would say, yes, I'm green, and yes, I'm blue; but they could also be saying, well, I am part of a green-blue object. And then the higher layer, attending to or caring about a bigger region, would have the color green-blue, and the consensus could settle on: we are a green-blue object, even though the object isn't pure green or pure blue all throughout. I think this is a side suggestion; maybe he has it as a core motivation behind the system, but it's just interesting to see how he thinks of things. And he extends the color idea to textures and even shapes: the individual texture elements have their own shapes and poses and spatial relationships, but an object with a textured surface has exactly the same texture everywhere, at the object level. GLOM extends this idea to shapes: an object may have parts that are very different from one another, but at the object level it has exactly the same compound shape in all of the locations that it occupies. Basically saying: every pixel that's part of a cat head has the shape of a cat head, even if the individual locations can't recognize that on their own, and that information could be passed around through this consensus mechanism over time. Then, cluster discovery versus cluster formation: we've seen that, and he makes a lot of analogies to face recognition. The islands of similar embedding vectors at a level can be viewed as clusters, but these clusters are not discovered in immutable data; they are formed by the interaction between the intra-level process that favors islands of similarity and the dynamically changing suggestions coming from the location's embeddings at adjacent levels. So the core here is really this consensus algorithm that creates the clusters. The clustering algorithm doesn't work by looking at fixed embeddings and deciding which ones go together; the embeddings themselves update themselves in order to form clusters. Next, replicating embedding vectors. This is a response to a criticism that I guess he got, where someone said: if you have these columns, at the bottom it makes sense to have all the different vectors, but as you go up, you have the same vector for all locations, because it's the same object. Why does it make sense to replicate that everywhere, and not just have one, like in a database? And he basically says that in order to reach the consensus, it's important to have different vectors: they might be slightly different, they might carry some nuance, because they get pulled in different directions by the bottom-up signal than by the consensus algorithm on the same layer. I believe that that is important. So I think this is just a criticism he got, and then he decided to put it in here. Next: learning islands.
So what we haven't discussed yet is how this is trained, and Hinton says it is trained as a denoising autoencoder: let us assume that GLOM is trained to reconstruct at its output the uncorrupted version of an image from which some regions have been removed. So he goes into self-supervised learning with this system. This objective should ensure that information about the input is preserved during the forward pass, and if the regions are sufficiently large, it should also ensure that identifying familiar objects will be helpful for filling in the missing regions. To encourage islands of near identity, we need to add a regularizer, and experience shows that a regularizer that simply encourages similarity between the embeddings of nearby locations can cause representations to collapse: all the embedding vectors may become very small, so that they are all very similar, and the reconstruction will then use very large weights to deal with the very small scale. To prevent collapse, he then says, contrastive learning is the answer. So how do you regularize the model such that this consensus is formed? He says contrastive learning might be useful, but you can't simply apply it straight out of the box. It learns to make the representations of two different crops of the same image agree, and the representations of two crops from different images disagree. But this is not a sensible thing to do if our aim is to recognize objects: if crop one contains objects A and B, and crop two from the same image contains objects B and C, it does not make sense to demand that the representations of the two crops be the same at the object level. Okay, so he says that contrastive learning is good, but you have to pay careful attention to the layer at which you employ it. Because if you go down far enough, then this kind of contrastive learning, especially the type where you crop the image into different parts and say that, since it's the same image, the representations should agree, becomes questionable. Hinton would say: at the top layer, yes, but at the bottom layer, certainly not, because the crops display different things. So you have to be careful where you apply contrastive learning, and he gives a bunch of suggestions on how to solve that, things like: negative examples might not even be needed. The obvious solution is to regularize the bottom-up and top-down neural networks by encouraging each of them to predict the consensus opinion. This is the weighted geometric mean of: the predictions coming from the top-down and bottom-up networks, the attention-weighted average of the embeddings at nearby locations at the previous time step, and (I guess there should be an "and" there) the previous state of the embedding. Training the inter-level predictions to agree with the consensus will clearly make the islands found during feed-forward inference more coherent. So he says you could regularize the model to regress to the consensus opinion; it's sort of a self-regression.
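A minimal sketch of that consensus target and the regularizer (my construction: Hinton only says "weighted geometric mean", so the uniform weights, the positivity assumption, and the squared-error regression are all mine):

```python
import numpy as np

def consensus_opinion(bottom_up, top_down, attention_avg, previous):
    """Weighted geometric mean of the four contributions, taken in log space.
    A geometric mean only makes sense for positive components, which is an
    assumption here; with signed embeddings an arithmetic mean could be used."""
    stack = np.stack([bottom_up, top_down, attention_avg, previous])
    return np.exp(np.log(stack).mean(axis=0))

def consensus_regularizer(bottom_up, top_down, attention_avg, previous):
    """Encourage the bottom-up and top-down predictions to regress toward the
    consensus; the consensus is treated as a fixed target (no gradient through it)."""
    target = consensus_opinion(bottom_up, top_down, attention_avg, previous)
    return np.sum((bottom_up - target) ** 2) + np.sum((top_down - target) ** 2)
```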
And he asks whether or not that will lead to collapse, because if you don't have negative examples, as in contrastive learning, this could lead to a simple collapse. An important question, he writes, is whether this type of training will necessarily cause collapse if it is not accompanied by training the inter-level predictions to be different for negative examples that use the consensus opinions for unrelated spatial contexts. So here is that problem: if you use the consensus opinion for unrelated spatial contexts, that might be a problem. He says using layer or batch norm should reduce the tendency to collapse, but a more important consideration may be the achievability of the goal, and he goes into why regularization could help. He says: if, however, an embedding at one location is free to choose which embeddings at other locations it should resemble, the goal can be achieved almost perfectly by learning to form islands of identical vectors and attending almost entirely to other locations that are in the same island. And I don't know if this is what I suggested. This is kind of a convoluted paragraph; I had to read it multiple times, and I still don't exactly know what he's trying to say. But I think what he's saying is this: what we want to do is regularize the network to produce this consensus. We have a bottom-up signal, a top-down signal, the current value, and the signal from the attention mechanism, and we want to reach a consensus such that these islands form. However, if you attend to things that have nothing to do with you, you might never be able to reach this consensus; I think he's touching on the problem I described before. So what he says is: you should simply attend to things that are already in the same island. If an embedding at one location is free to choose which embeddings at other locations it should resemble, the goal can be achieved by learning to form islands of identical vectors and attending almost entirely to other locations that are in the same islands. Now, I think what he's doing here is making the case for the attention mechanism itself. If we simply drew in information from the whole layer, any old information might come in, and we might collapse, or we might never reach consensus. However, if we only draw in information from selected neighbors that are already in the same island as us, then this consensus algorithm works. So the network is forced to learn to build these islands of similar things in order to make the regularized consensus work. So I believe he makes the case for the attention mechanism. I don't think he considers the next-layer-up islands here; what I would say is that you need to go up the columns in order to decide which locations an embedding should be free to resemble. Okay, I hope you're still half with me. If not, well, I'm a bit confused too, but I think what he's saying is: contrastive learning would be good, you can use it, but you have to be careful at which layer you do it. And another regularizer to form these islands would be to regularize the network to conform to the consensus opinion.
However, if you simply aggregated information from everywhere on the same layer, that wouldn't work, because different things in the same layer might correspond to completely different parts of the image, and drawing in information from there would not help you. How do you solve this? By introducing the very attention mechanism that he introduced, which only draws in information from parts of the same layer that are actually related to you. Okay, the next consideration he discusses is representing coordinate transformations. How does this system represent coordinate transformations? There was a capsule net paper where he explicitly represents coordinate transformations, in a kind of four-dimensional quaternion space, and he says that is probably not needed here. You could represent these by four-by-four matrices; however, if you simply allocate 16 numbers in each embedding vector in order to represent the part-whole coordinate transformation (the transformation that relates the part to the whole), that does not make it easy to represent uncertainty about some aspects of the pose and certainty about others. The problem is this: we know that when humans watch a scene, say a chair with a very tiny person on the chair, we don't necessarily see the coordinate frame of the world. We see the coordinate frame of the chair, maybe with this as the center, and we see the person in relation to the chair. Our brain seems to do this intuitively, and Hinton thinks a system like this should also do it intuitively. So somehow, the coordinate transformations involved, going from the eye to the reference frame of the chair, and then from the chair to the person, should be encoded in this network. However, he also says it's probably not necessary to encode them explicitly as coordinate transformations, because not only would that probably make them harder to learn, you also couldn't represent uncertainty. In fact, you can represent uncertainty much better by having a higher-dimensional thing that you're trying to guess. If you're trying to guess a distribution with three components, and you only have a three-dimensional vector, you have no way of representing uncertainty. However, if you have a nine-dimensional vector, you can hold three opinions about the distribution: this is an opinion, this is an opinion, and this is an opinion. And then you can aggregate and say: well, I'm pretty sure about these two things, because all my opinions are close, but about this one here I'm not so sure, because my individual opinions say different things. All right, this video is too long already. So that's his argument: we don't need an explicit representation of uncertainty, because simply by over-parameterizing, we can already represent uncertainty well.
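A toy numeric version of that argument (entirely my own illustration):

```python
import numpy as np

# One 9-dimensional embedding holding three "opinions" about a 3-component quantity.
embedding = np.array([0.9, 0.1, 0.5,    # opinion 1
                      0.8, 0.1, 0.9,    # opinion 2
                      1.0, 0.1, 0.1])   # opinion 3
opinions = embedding.reshape(3, 3)

consensus = opinions.mean(axis=0)   # aggregated estimate per component
spread = opinions.std(axis=0)       # disagreement between opinions = implicit uncertainty

print(consensus)  # [0.9, 0.1, 0.5]
print(spread)     # small for the first two components, large for the third
```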
And we also don't need disentangled position information, because, again, the network can take care of that. He gives a good example of why you wouldn't want a disentangled coordinate frame: if you have an image, and the picture in it is this, how do you know whether that is a rhomboid shape, or a rectangular piece of paper viewed from the side? I should probably draw it way closer, something like this. I suck at this, but you probably get what I mean. If it is a different object, then the object and the coordinate transformation depend upon each other, and so it makes sense for the neural network to actually entangle the two, because the two things depend on each other. In essence, he's just saying: don't worry about explicitly representing all of the different things. The neural network can do all of them, like uncertainty, or position and pose transformations. Then he compares the system to other architectures: comparison to CNNs, comparison to transformers, comparison to capsule models. And at the end, he goes into video. At the very beginning he says the paper is actually about a video system, and you can kind of see that, because we go through this algorithm in multiple time steps, right? You analyze an image with these columns, which gives you sort of a 3D tensor with the image at the bottom, and at the next time step you have a new 3D tensor, and you pass this whole information around with the image at the bottom. Hinton says: well, why does that need to be the same image? It could also be different images. So you could use the system to analyze video: at the same time as you do these time steps to find agreement, you could swap out the video frame, the x, and feed in a slightly different video frame. And that could actually have a kind of ensemble, regularizing effect: as the whole system of columns comes to a consensus over time, you feed in different information at the bottom. What he says is that if this is a slow enough video, the top layers could probably still reach an agreement while the bottom layers change rapidly, and that could act as an ensemble or regularizing effect. So he intrinsically connects these two time dimensions. They could be kept separate; you could input a video and run the consensus-finding algorithm within each frame. But he says, no, it's actually cool to consider them together, to do the consensus finding while you sort of watch the video. It's just not clear that you always need the same number of consensus-finding steps as you have video frames; maybe you want five consensus steps per video frame, or the other way around. Not sure. In any case, I think that's a pretty cool idea. And he says things like: if the changes are rapid, there is no time available to iteratively settle on a good set of embedding vectors for interpreting a specific frame, which means that the GLOM architecture cannot correctly interpret complicated shapes if the images are changing rapidly. Try taking an irregularly shaped potato and throwing it up in the air in such a way that it rotates at one or two cycles per second; even if you smoothly track the potato, you cannot see what shape it is. Now, I don't have a potato, but I can give you an avocado. So, if you give me a second, how's that? Could you track the shape? I don't know. Probably Hinton's correct. All right. Then he asks: is this biologically plausible? And I don't want to go too much into this.
He discusses some restrictions, like: yes, we still use backprop, and is backprop plausible? And so on. I love this sentence: "In the long run, however, we are all dead", with the footnote saying there are alternative facts. But yes, he discusses whether it's biologically plausible, and how you could modify it to make it more plausible. For example, for contrastive learning, there is evidence that you do contrastive learning during sleep, that dreams produce the negative examples, and that during the day you collect the positive examples, and so on. So this is a more speculative part of the paper, but it's pretty cool to read. And lastly, the discussion, where he also says this paper is too long already, so I'll just talk about it briefly. He trashes the neuro-symbolic people a bit, the people that say neural networks can never do whatever, and he says pretty clearly: look, neural networks can represent trees, I've given you a system, and also BERT can output parse trees. So, shut up, I guess. And he comes up with this GLOM-BERT name, which, you know, is already coined, so if you wanted to do GLOM-BERT, that's already taken, sorry. I also, by the way, just coined the name "may-go-mania", right now. If you want to use it, it better be a pretty cool machine learning system, and be based on GLOM. Right, that was the paper. I think it's a cool system. It has a bunch of parts that are maybe not super friendly to hardware at this time, like this iterative procedure, but honestly, it is not much more than a recurrent neural network with very complicated recurrence functions. The video extension might be a bit tricky, and the regularization might be a bit tricky. The exact objective, the denoising autoencoder objective, isn't super detailed in the paper; he simply says: reconstruct the corrupted version of the input. How exactly the input happens, whether there's a CNN, whether the CNN feeds information into multiple layers, none of that is exactly specified. So there's lots to figure out. I do think the ideas are very cool, and I love idea papers, and therefore I recommend that if you're interested, give this thing a read. Give this video a like, share it out, and I'll see you next time. Bye bye.
[ { "start": 0.96, "end": 6.32, "text": " Hi there. Today we'll look at how to represent part-whole hierarchies in a neural network" }, { "start": 6.32, "end": 15.120000000000001, "text": " by the legend himself Jeffrey Hinton. He describes a system also known as GLOM that is a new approach" }, { "start": 15.120000000000001, "end": 22.96, "text": " to processing visual information using neural networks. And interestingly, the paper starts" }, { "start": 22.96, "end": 31.12, "text": " off by saying this paper does not describe a working system. So this is an idea paper," }, { "start": 31.12, "end": 38.32, "text": " Jeffrey Hinton's suggestion of how we should go about solving vision or furthering vision" }, { "start": 38.32, "end": 45.6, "text": " in the AI community. He says openly, these are just ideas. Please prove me right, prove me wrong," }, { "start": 45.6, "end": 53.120000000000005, "text": " try them out, and so on. And I absolutely welcome this. Idea papers is a thing that I think we have" }, { "start": 53.120000000000005, "end": 56.96, "text": " lost as a community because everything needs to be state of the art and so on." }, { "start": 58.480000000000004, "end": 63.36, "text": " This is super cool, and I encourage more people to do it. I'm not saying you're going to have the" }, { "start": 63.36, "end": 69.84, "text": " same kind of success with an idea paper as Jeff Hinton. He is banking on his name in large part" }, { "start": 69.84, "end": 76.24000000000001, "text": " with this, but nevertheless it's just an archive paper. I see people complaining, this would never" }, { "start": 76.24000000000001, "end": 81.28, "text": " be possible if it wasn't. Yeah, it wouldn't. People wouldn't pay attention, but you're welcome to" }, { "start": 81.28, "end": 88.48, "text": " write your ideas and post them on archive, or write a blog post, make a YouTube video." }, { "start": 88.48, "end": 97.44, "text": " Anyone has opinions? So go ahead. So to the paper itself, GLOM, as you can see here," }, { "start": 97.44, "end": 108.08, "text": " GLOM stems from agglomeration, is a system that instead presents a single idea about" }, { "start": 108.08, "end": 113.84, "text": " representation, which allows advances made by several different groups to be combined into" }, { "start": 113.84, "end": 119.68, "text": " an imaginary system called GLOM. The advances include transformers, neural field, contrastive" }, { "start": 119.68, "end": 126.96, "text": " representation learning, distillation, and capsules. GLOM answers the question, how can a" }, { "start": 126.96, "end": 133.12, "text": " neural network with fixed architecture parse an image into a part-whole hierarchy, which has" }, { "start": 133.12, "end": 140.07999999999998, "text": " different structure for each image? The idea is simply to use islands of identical vectors to" }, { "start": 140.07999999999998, "end": 146.56, "text": " represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve" }, { "start": 146.56, "end": 152.07999999999998, "text": " the interpretability of the representations produced by transformer-like systems when applied" }, { "start": 152.08, "end": 158, "text": " to vision or language. That's the abstract. We'll dive into the system. We'll see what it's about." }, { "start": 158, "end": 166.56, "text": " I think I can actually make a suggestion to improve it. But maybe I'm way behind other folks. So" }, { "start": 167.52, "end": 173.44, "text": " what is the GLOM system? 
And what are these parse tree about? And why does it combine all of these" }, { "start": 173.44, "end": 181.68, "text": " things? And for that, we look at so it has two core diagrams here. This is the first diagram." }, { "start": 181.68, "end": 187.68, "text": " This is the second diagram. And at first sight, they have little to do with each other. So let" }, { "start": 187.68, "end": 194.4, "text": " me try to go about it like this. If you have an image, and Hinton looks at vision very much in" }, { "start": 194.4, "end": 203.92000000000002, "text": " terms of you have an image or a video, and you want to parse the image into kind of a tree." }, { "start": 203.92000000000002, "end": 210.8, "text": " And the tree should be sort of like a tree of objects and their parts. So let's say it's" }, { "start": 210.8, "end": 219.12, "text": " an image of a car. So the whole notion is very, very object centric. So this is like my best attempt" }, { "start": 219.12, "end": 228.8, "text": " at a car. And a parse tree for this image would look something like this. All right. So this whole" }, { "start": 228.8, "end": 234.72000000000003, "text": " thing here is a car. So that's going to be your top node in the parse tree. The car has different" }, { "start": 234.72, "end": 243.84, "text": " parts, namely, it has this cabin, it has a motor and has wheels. So that is going to be those are" }, { "start": 243.84, "end": 251.84, "text": " going to be kind of downstream of that parse tree. Then the cabin itself is going to have two" }, { "start": 251.84, "end": 258.56, "text": " segments here, windows, and maybe here is the door area. So that is going to be window window door," }, { "start": 259.2, "end": 264.48, "text": " and so on. So you get that we what we want to do is we want to look at an image, sort of create" }, { "start": 264.48, "end": 272, "text": " this parse tree over here, this is very much into the into the area of go fi good old fashioned AI" }, { "start": 272, "end": 279.84000000000003, "text": " people that want to understand a the world in terms of their symbolic representations and relation" }, { "start": 279.84000000000003, "end": 286.88, "text": " of the symbols to each other. However, what Hinton is saying is that if you simply do this, it's," }, { "start": 286.88, "end": 291.36, "text": " it's, you know, you can't really do this with neural networks, neural networks are continuous," }, { "start": 291.36, "end": 298.16, "text": " and so on. So what would you have to do in In addition, we know that the brain doesn't" }, { "start": 298.16, "end": 305.44, "text": " reconfigure itself every single time you get a new input. So the brain, even though it has some" }, { "start": 305.44, "end": 312, "text": " neuroplasticity, while you look at the world and do inference in the world, the connections stay" }, { "start": 312, "end": 318.16, "text": " the same. So what we need to do is we need to come up with a system that when we input one image," }, { "start": 318.16, "end": 324.40000000000003, "text": " it can give us one parse tree. But when we input another image, it can give us some kind of other" }, { "start": 324.40000000000003, "end": 332.16, "text": " parse tree, maybe now there are two objects in the image. 
And this one has one descendant only," }, { "start": 332.16, "end": 338.96000000000004, "text": " which in turn has two descendants, and so on, you see the point, the tree structure needs to be" }, { "start": 338.96000000000004, "end": 346.40000000000003, "text": " different each time. This in part was addressed by Hinton's capsule networks. So in the capsule" }, { "start": 346.4, "end": 351.44, "text": " networks, Hinton's idea was sort of, okay, I'm going to have these capsules here in different layers." }, { "start": 352.4, "end": 359.35999999999996, "text": " And I'm going to have kind of lots of capsules in these layers, lots of capsules in these layers." }, { "start": 359.35999999999996, "end": 366.64, "text": " And I'm going over capsules, because it's kind of important here. So Hinton's idea with capsules" }, { "start": 366.64, "end": 374.08, "text": " was that the first layer of capsules would sort of recognize the smallest parts. So this would be" }, { "start": 374.08, "end": 380.8, "text": " kind of the wheel capsule. And this would be sort of the window capsule, and so on. So there would" }, { "start": 380.8, "end": 386.71999999999997, "text": " be a single capsule for every part that could possibly be in an image, right? You already see" }, { "start": 386.71999999999997, "end": 394.32, "text": " the limitations. Because if you want to recognize the whole world, you need many capsules. But" }, { "start": 394.32, "end": 401.36, "text": " nevertheless, this was the idea. So a capsule would be active if there was the given object in the" }, { "start": 401.36, "end": 407.44, "text": " image. And then the next thing here, this would be kind of the the motor capsule. So the motor" }, { "start": 408.88, "end": 417.6, "text": " motor capsule, and this would be the cabin capsule, and so on. So the window would activate the cabin" }, { "start": 417.6, "end": 424.08000000000004, "text": " capsule, but the door capsule would also activate the cabin capsule, and so on. And the wheel would" }, { "start": 424.08000000000004, "end": 430.40000000000003, "text": " maybe activate it would maybe activate, I don't know, the wheel should probably be here as well," }, { "start": 430.4, "end": 436, "text": " wheel at this level would activate that and then all of these things here would activate the car" }, { "start": 436, "end": 446.88, "text": " capsule. So you can see that this parse tree here is generated dynamically, right? These connections," }, { "start": 446.88, "end": 452.71999999999997, "text": " this routing in capsules is generated every time different. So in the next image, there could be a" }, { "start": 452.71999999999997, "end": 457.2, "text": " different object, different capsules are activated, different things are routed together, the parse" }, { "start": 457.2, "end": 462.96, "text": " tree is different. However, you need these many, many capsules for that every one capsule per" }, { "start": 462.96, "end": 470.24, "text": " possible part in the image. And that was just infeasible. And also the routing was very" }, { "start": 470.24, "end": 478.56, "text": " cumbersome in these capsules. So here we go with a new approach. And this new approach is what" }, { "start": 480.32, "end": 486.24, "text": " Hinton describes as the glom architecture is composed of a large number of columns," }, { "start": 486.24, "end": 493.44, "text": " which all use exactly the same weight. 
Each column is a stack of spatially local auto encoders that" }, { "start": 493.44, "end": 500.64, "text": " learn multiple levels of representation for what is happening in a small image patch. Okay, so" }, { "start": 501.84000000000003, "end": 506.48, "text": " we're going to build up some kind of imagination here. At the at the bottom level, we have our" }, { "start": 506.48, "end": 512.48, "text": " image. So our image is going to be lying flat on the ground, maybe you can see it like this." }, { "start": 512.48, "end": 518.48, "text": " And it is going to be divided into pixels or small patches, whatever you want. But these are" }, { "start": 518.48, "end": 527.9200000000001, "text": " would be called locations. So it would be divided like this into different locations. I am not good" }, { "start": 527.9200000000001, "end": 534.24, "text": " at perspective drawing. In any case, above each location, there would be one of these columns." }, { "start": 534.24, "end": 541.12, "text": " And these columns, I can draw one here, these columns would sort of stack up like this." }, { "start": 541.12, "end": 546.08, "text": " And these columns would be divided into multiple levels. So there would be a bottom level," }, { "start": 546.08, "end": 552.16, "text": " which would be this there would be a middle level, higher level, and so on. Hinton suggests about" }, { "start": 552.16, "end": 562.08, "text": " five levels should probably do. And every single level of this column tries to represent the" }, { "start": 562.08, "end": 570, "text": " location at the image, right this location down here in a different resolution. So the very bottom" }, { "start": 570, "end": 577.36, "text": " level might be aware that there is a part of a wheel like let's say this is actually let's say" }, { "start": 577.36, "end": 594.08, "text": " this is a cat. So here, there's probably Yep, yep. Okay, so you can see there is there is an ear or a" }, { "start": 594.08, "end": 602.88, "text": " part of an ear that stays as a part of an ear in this location. So the very bottom thing would" }, { "start": 602.88, "end": 608.56, "text": " probably represent something like the very structure of the fur. So the bottom thing would" }, { "start": 608.56, "end": 615.12, "text": " represent what's going on at you know, the micro level really the location level, the next layer" }, { "start": 615.6, "end": 620.32, "text": " would represent what's going on at this location in a kind of a broader sense. So that might" }, { "start": 620.32, "end": 626.48, "text": " recognize that that that's an that's actually part of an ear, right. So it goes beyond the location." }, { "start": 626.48, "end": 632.08, "text": " If you think convolutional neural networks, you're right. So you're going to have a very" }, { "start": 632.08, "end": 637.9200000000001, "text": " similar network. So if you think convolutional neural networks, you're in the right ballpark," }, { "start": 637.9200000000001, "end": 644, "text": " but we're going to implement this differently. The next layer will recognize well, this location" }, { "start": 644, "end": 654.4000000000001, "text": " is part of a of a cat of a cat's head. And then the next location will recognize well, this thing" }, { "start": 654.4, "end": 662.8, "text": " is a cat. Now there there is a cat at other places. But at this location, there is a cat, and so on." }, { "start": 662.8, "end": 668.48, "text": " So maybe we don't have more and this locate at this particular image. 
But if you consider a" }, { "start": 668.48, "end": 677.84, "text": " different column, like this, this column right here, and you look at what's going on in that column," }, { "start": 677.84, "end": 684.16, "text": " you'll see similar. So in the top layer, let's just consider the cat the top layer, in the top" }, { "start": 684.16, "end": 691.52, "text": " layer, it might say, well, there's a cat too. But it's also part of it's part of a cat's neck," }, { "start": 693.04, "end": 701.1999999999999, "text": " neck. And then here it's maybe there's a bunch of, well, I don't know, a chin." }, { "start": 702.88, "end": 710.24, "text": " And there is also a fine first structure of the chin. So you get the idea, every column will build" }, { "start": 710.24, "end": 716.48, "text": " up these rep these representations. And these are vectors. So these are embedding vectors. So" }, { "start": 716.48, "end": 723.2, "text": " at the bottom location, you'd have the fur vector, and then this vector is the ear, whereas here over" }, { "start": 723.2, "end": 730.08, "text": " here, the chin would be very different, or be a different vector at the same layer. So the only" }, { "start": 730.08, "end": 736.88, "text": " thing that agrees here is the cat vector, the cat vector in this top layer would agree between both" }, { "start": 736.88, "end": 743.04, "text": " of these columns. I hope you get the idea, you have a column above each of the locations," }, { "start": 743.04, "end": 749.04, "text": " every single layer in the column represents that particular location, but at a different" }, { "start": 750, "end": 755.52, "text": " level of abstraction and a different level of I don't want to say resolution, but it it would" }, { "start": 755.52, "end": 762, "text": " consider more and more of its neighbors. The question is, how does it consider its neighbors?" }, { "start": 762, "end": 767.12, "text": " And how do you learn these things, right? So how do you learn these different abstractions?" }, { "start": 767.12, "end": 774.96, "text": " And that's where these columns, they communicate with each other. So Hinton imagines that this is" }, { "start": 774.96, "end": 784.32, "text": " a process over time, where the columns iteratively communicate to each other. And within the column," }, { "start": 784.32, "end": 789.92, "text": " the layers communicate to each other. And this is one of these first diagrams right here." }, { "start": 789.92, "end": 798.56, "text": " So this is one single column over time. Okay, this is this would be the, this would be the fur" }, { "start": 798.56, "end": 805.36, "text": " at the ear, this would be the cat's ear, and this would be cat. Okay, so" }, { "start": 809.12, "end": 816.7199999999999, "text": " the information that so the embeddings are updated by sending information around every single" }, { "start": 816.72, "end": 822.96, "text": " information around every single embedding, which means that every single vector at every single" }, { "start": 822.96, "end": 831.52, "text": " layer of every single column is updated by simply averaging four things. So we have the embedding" }, { "start": 831.52, "end": 843.2, "text": " at layer l, at time step t plus one is going to be sorry at layer l location x is going to be" }, { "start": 843.2, "end": 850.32, "text": " a sum between the four parts, the four following parts, it's going to be the embedding at the last" }, { "start": 850.32, "end": 858.24, "text": " time step, right? So this is sort of a recurrent neural network. 
We the new embedding is the old" }, { "start": 858.24, "end": 868.8000000000001, "text": " embedding plus it's going to be a function at a top down, that's what Hinton calls top down function" }, { "start": 868.8, "end": 877.4399999999999, "text": " of the embedding at the same location in the previous time step at one layer above. So l plus one" }, { "start": 879.68, "end": 887.8399999999999, "text": " it is also going to be receiving information from the upwards, I think bottom up, because the bottom" }, { "start": 887.8399999999999, "end": 895.8399999999999, "text": " up embedding of layer l minus one at the same location at time step t. All right, so this would" }, { "start": 895.84, "end": 905.52, "text": " that's what you can see right here. The green arrows are each level each layer simply passes" }, { "start": 905.52, "end": 913.12, "text": " information to the next time step. This is if any if nothing else happens, you just keep your embedding." }, { "start": 913.76, "end": 922.96, "text": " Then each embedding also sends itself through a neural network one layer above itself. That's the" }, { "start": 922.96, "end": 930.08, "text": " blue arrows. So the blue arrows here are these and you every everything is a neural network here," }, { "start": 930.08, "end": 934.96, "text": " every arrow except the green ones, but the green ones could be too. So every arrow is a neural" }, { "start": 934.96, "end": 942.1600000000001, "text": " network. So this is a neural network sending information above. And this is intuitive, right?" }, { "start": 942.1600000000001, "end": 949.0400000000001, "text": " So the ear embedding would sort of send information about itself like saying like, hey, I'm a cat ear" }, { "start": 949.04, "end": 956.64, "text": " sends it above and it goes through a neural network because it needs to be transformed." }, { "start": 956.64, "end": 964.56, "text": " The neural network has to learn. Well, if it's a cat ear at that level, it might be a cat at the" }, { "start": 964.56, "end": 972.48, "text": " top level. And lastly, every single layer sends information down and that is the red arrows right" }, { "start": 972.48, "end": 980.72, "text": " here. They're also neural networks. So the cat ear says, well, I'm a cat ear. So downstream of myself," }, { "start": 980.72, "end": 988.08, "text": " there might be, you know, some first structure. So all of these embeddings, they try to predict" }, { "start": 988.08, "end": 994.08, "text": " each other, they try to predict the neighbors of themselves. And Hinton's idea is that by" }, { "start": 994.08, "end": 1001.12, "text": " aggregating over time, they will sort of reach a consensus of what is in these columns." }, { "start": 1001.12, "end": 1005.84, "text": " Okay, there are a few things missing right here. The one thing that's missing and Hinton pointed" }, { "start": 1005.84, "end": 1012.96, "text": " this out that all of these different columns that we've drawn, they use the same weights. Okay, so," }, { "start": 1013.76, "end": 1018.16, "text": " and he discusses this at the end of the paper, it's not really biologically plausible," }, { "start": 1018.16, "end": 1024.88, "text": " but there's an ensemble effect. We won't go into that. But all these, these, so the blue" }, { "start": 1024.88, "end": 1032, "text": " arrows are always the same for each time step, but not necessarily the same between different" }, { "start": 1032, "end": 1038.64, "text": " layers. 
So that might be this F might be different from this F down here. However, the function" }, { "start": 1038.64, "end": 1045.0400000000002, "text": " passing information from from layer L to layer L plus one is the same in every single column across" }, { "start": 1045.0400000000002, "end": 1050, "text": " the image. It's a bit like a convolutional network in terms of weight sharing. So you can imagine it" }, { "start": 1050, "end": 1057.12, "text": " as one by one convolutional network in that sense. But except the information does not only go up" }, { "start": 1057.12, "end": 1063.92, "text": " the layers, it also goes down the layers over time. As I said, this is an iterative procedure," }, { "start": 1064.56, "end": 1071.76, "text": " goes up, down, and laterally. The second thing is, now that you ask, oh, well, if every single column" }, { "start": 1071.76, "end": 1080.32, "text": " has the same weights, wouldn't that simply sort of how how can you localize any information?" }, { "start": 1080.32, "end": 1086.32, "text": " And the answer is that you have a side input, like in a neural field, you have a side input" }, { "start": 1086.32, "end": 1094, "text": " annotating each location, basically a positional encoding, honestly. So in in addition to what the" }, { "start": 1094, "end": 1100.32, "text": " image patch looks like, you also get your kind of either your x y coordinates, or you could also get" }, { "start": 1100.32, "end": 1108.8, "text": " your relative coordinates to some other coordinate frame in there. And so the network knows where it" }, { "start": 1108.8, "end": 1117.6, "text": " is. And that's going to be important, because what Hinton wants to build are these islands. So the" }, { "start": 1117.6, "end": 1125.9199999999998, "text": " imagination of Hinton is that this is going to be somewhere in between like after time step 10, and" }, { "start": 1125.92, "end": 1133.44, "text": " you want to run it for 100. And he imagines that there will what will emerge are these sort of" }, { "start": 1133.44, "end": 1142.24, "text": " islands. So imagine the image is now a 1d vector down here. Or you can imagine these columns in 2d," }, { "start": 1142.24, "end": 1149.44, "text": " whatever fits, you know, whatever fits your brain better. But imagine the images, the image is simply" }, { "start": 1149.44, "end": 1156.4, "text": " the image is simply a 1d line right here. He imagines that the bottom vectors, they will just," }, { "start": 1156.4, "end": 1163.2, "text": " you know, happily kind of be describing whatever that is at the very bottom level. But then at the" }, { "start": 1163.2, "end": 1171.04, "text": " next level, once it goes to sort of higher resolution or lower resolution, higher abstraction," }, { "start": 1171.68, "end": 1179.1200000000001, "text": " there will be there must necessarily be vectors that are the same if the system works and look" }, { "start": 1179.12, "end": 1184.8, "text": " at these two vectors and look at these two vectors, they are the same because they now describe" }, { "start": 1184.8, "end": 1191.52, "text": " objects that are larger than one location, right, the cat's head is larger than simply one location." 
}, { "start": 1191.52, "end": 1199.12, "text": " Therefore, at the layer that represents the cat's head, you expect because these are all all neural" }, { "start": 1199.12, "end": 1205.84, "text": " all the up and down functions in the same layer have the same weight, you expect that the embedding" }, { "start": 1205.84, "end": 1214, "text": " of a cat's head is the same in in the different columns. Right, that this is if the system works," }, { "start": 1214, "end": 1220.32, "text": " this must be the case. And then as you go up, you expect more and more of these what what Hinton calls" }, { "start": 1220.32, "end": 1230.72, "text": " islands to emerge, right. So they they agree. And the idea. The idea between all of this message" }, { "start": 1230.72, "end": 1238.72, "text": " passing is that over time, all of these things kind of reinforce each other. So we looked at" }, { "start": 1238.72, "end": 1246.88, "text": " a column before, and we maybe said, okay, so this vector down here, it gets information from the top" }, { "start": 1248, "end": 1254.72, "text": " saying, hey, you know, there's a cat here. So you might be like a cat ear or a cat eye or something" }, { "start": 1254.72, "end": 1258.8, "text": " like this. And then it gets information from the bottom saying, well, there's a bit of there's," }, { "start": 1258.8, "end": 1265.84, "text": " you know, fur here, and there's some cartilage showing and so on. And it has already sort of" }, { "start": 1265.84, "end": 1271.44, "text": " figured out that it might be an ear. And these informations they own they reinforce itself now" }, { "start": 1271.44, "end": 1275.76, "text": " like they'd be like, okay, you know, you're saying I'm part of a head and you're saying there's a bit" }, { "start": 1275.76, "end": 1282.48, "text": " of fur and cartilage. And I already kind of noticed that I'm a bit like an ear. So I'm probably more" }, { "start": 1282.48, "end": 1288.6399999999999, "text": " an ear. So the idea is that over time, you have this consensus algorithm, there's one thing missing." }, { "start": 1288.64, "end": 1295.6000000000001, "text": " And that is, how do the different columns communicate with each other. So I said there" }, { "start": 1295.6000000000001, "end": 1304.16, "text": " are different parts, there is one missing. And that one missing is going to be, I'm just going" }, { "start": 1304.16, "end": 1314.24, "text": " to call it whatever a and a is going to be an attention mechanism across all the other columns" }, { "start": 1314.24, "end": 1320.88, "text": " at the same layer. So if we look here, this cell receives information from above from below from" }, { "start": 1320.88, "end": 1329.52, "text": " itself, and also, in an attention mechanism way, it's going to receive information from all of the" }, { "start": 1329.52, "end": 1335.6, "text": " different, all of the different embeddings at the same layer, you can see" }, { "start": 1335.6, "end": 1343.52, "text": " that, you know, hidden puts in everything we got in here. Now the attention, he says, is easier. And" }, { "start": 1345.1999999999998, "end": 1351.6, "text": " So these are the four parts right here. At each discrete time, and in each column separately," }, { "start": 1351.6, "end": 1355.6, "text": " the embedding at a level is updated to be the weighted average of four contributions." 
}, { "start": 1356.3999999999999, "end": 1362.24, "text": " The prediction produced by the bottom up neural net acting on the embedding at the level below" }, { "start": 1362.24, "end": 1368.88, "text": " acting on the embedding at the level below at the previous time, the prediction produced" }, { "start": 1368.88, "end": 1374.64, "text": " by the top down neural net acting on the embedding at the level above at the previous time," }, { "start": 1375.84, "end": 1382, "text": " the embedding vector at the previous time step, these three we got, and then the attention" }, { "start": 1382, "end": 1386.8, "text": " weighted average of the embeddings at the same level, right at the same level" }, { "start": 1386.8, "end": 1396.56, "text": " in nearby columns at the previous time. So nearby, he, sorry, he later backpedals a bit, I think, on" }, { "start": 1396.56, "end": 1403.52, "text": " nearby and what nearby exactly means. And he at some parts, so this this is idea, I think this is" }, { "start": 1403.52, "end": 1410.6399999999999, "text": " still up for debate. And this is, I think, where I can help. But what he wants to do is he wants to" }, { "start": 1410.64, "end": 1417.2, "text": " aggregate, he wants to attention aggregate, and he wants to simplify attention. So instead," }, { "start": 1418, "end": 1425.8400000000001, "text": " what we usually have is we're going to produce queries, and keys and values, queries, keys," }, { "start": 1425.8400000000001, "end": 1433.6000000000001, "text": " and values, and they're all going to be different functions of our input. And then we're going to do" }, { "start": 1433.6, "end": 1440.8, "text": " query times key transposed softmax of that times value, and that is going to be our attention" }, { "start": 1440.8, "end": 1446, "text": " mechanism that allows you know, arbitrary information to be routed around and so on. Hinton" }, { "start": 1446.32, "end": 1453.28, "text": " says, Nope, what I want is simply that all the queries, the keys and the values, they're all just" }, { "start": 1453.52, "end": 1458, "text": " equal to the embeddings themselves. So" }, { "start": 1458, "end": 1461.92, "text": " the attention mechanism would work out to be the softmax" }, { "start": 1463.44, "end": 1475.12, "text": " of x times x transposed times x. And what that does is if you yourself are the query, and every" }, { "start": 1475.12, "end": 1483.2, "text": " vector also itself is the key, what do you attend to, you attend to vectors that are very similar" }, { "start": 1483.2, "end": 1490.56, "text": " to yourself. And you can see that in Hinton's diagram, the one we circled dark blue, what would" }, { "start": 1490.56, "end": 1497.1200000000001, "text": " it attend to? Well, it would probably attend to its left hand neighbor, the one you can see circled," }, { "start": 1497.1200000000001, "end": 1504.8, "text": " I'm going to circle it. This one, it will probably attend a lot to this one, it might not attend so" }, { "start": 1504.8, "end": 1512.8, "text": " much. And the ones over here, it might not attend at all. So what we're going to do is we're going" }, { "start": 1512.8, "end": 1519.28, "text": " to try to attend to this one to be sure that we have the right thing. 
You see, this is a" }, { "start": 1519.28, "end": 1526.32, "text": " consensus algorithm. It is not meant as a way to pass information around; this is not meant, like" }, { "start": 1526.32, "end": 1533.44, "text": " in a transformer, as a way to do computation, because we have no trainable weights in this process." }, { "start": 1533.44, "end": 1542.48, "text": " It is simply meant as a consensus algorithm. So Hinton imagines that by doing this, by sort of attending" }, { "start": 1542.48, "end": 1548.16, "text": " to things that are similar to you and then integrating their values, there will be these" }, { "start": 1548.16, "end": 1553.44, "text": " islands forming. And that's what you see right here. You can imagine, if two vectors are already" }, { "start": 1553.44, "end": 1560.24, "text": " close at the same layer, this mechanism will make them even closer. So this is a sort of a clustering" }, { "start": 1560.24, "end": 1569.1200000000001, "text": " algorithm. And so my issue is with these drawings: you look at them, they are very" }, { "start": 1569.12, "end": 1577.76, "text": " specifically constructed, they're constructed such that a parse tree is emerging. So when you look at" }, { "start": 1577.76, "end": 1585.4399999999998, "text": " this, you have a clear sense... I can probably move all of that crap out of the way." }, { "start": 1587.36, "end": 1594.8, "text": " You can see the parse tree, right? Because the black thing is going to be the top node right here," }, { "start": 1594.8, "end": 1599.2, "text": " let's leave away the scene level embedding for now, the black thing is going to be the top node." }, { "start": 1600.24, "end": 1607.36, "text": " And then it has two child nodes, this one, and this one. And then it has four: every one of those" }, { "start": 1607.36, "end": 1613.12, "text": " has two child nodes. But it doesn't have to be, in this case. So this, and every" }, { "start": 1613.12, "end": 1618.8799999999999, "text": " one of them, you know, the black ones, is individual. This is dynamically constructing" }, { "start": 1618.88, "end": 1627.68, "text": " a parse tree, right? The parse tree here is something like this." }, { "start": 1630.16, "end": 1636.0800000000002, "text": " So this is pretty cool. But it is also drawn deliberately such that a core problem does not" }, { "start": 1636.0800000000002, "end": 1644.96, "text": " arise. And the core problem would be something like: well, what if this vector here was actually also" }, { "start": 1644.96, "end": 1652.24, "text": " pointing like this? Okay, so it is not in the same area of the" }, { "start": 1652.24, "end": 1660.4, "text": " parse tree, right? If you go down the parse tree, it is actually here. Now, if we do what Hinton says," }, { "start": 1660.4, "end": 1668, "text": " and if for this vector here, we do this aggregation via attention on the same layer," }, { "start": 1668, "end": 1675.84, "text": " what we will attend to is this vector over here. Now, this is probably not meant to be, because this" }, { "start": 1675.84, "end": 1682.24, "text": " vector over here can represent the same thing. But you can see it's not in the same path" }, { "start": 1682.24, "end": 1690.24, "text": " of the parse tree. And he mentions this a little bit throughout, but it's not necessarily made clear." }, { "start": 1691.92, "end": 1697.2, "text": " And the drawing makes it seem like there's no problem. 
But I hope you can see how this is a" }, { "start": 1697.2, "end": 1702.96, "text": " problem. The attention would pull in information from over here. However, the whole parse tree" }, { "start": 1702.96, "end": 1708.24, "text": " here and the island on the top layer suggest that these two things should be parsed independently" }, { "start": 1708.24, "end": 1714.8, "text": " from each other and therefore also processed independently from each other. So here is my" }, { "start": 1714.8, "end": 1723.44, "text": " suggestion to extend this, and maybe Hinton's already thought of this. But I would suggest" }, { "start": 1723.44, "end": 1733.04, "text": " that this attention mechanism here is modulated by how close two things are in the parse tree." }, { "start": 1734, "end": 1740.72, "text": " So what would that be? So for a given vector, it would be: how much do you attend" }, { "start": 1740.72, "end": 1747.1200000000001, "text": " to this vector right here? Well, a lot, because it agrees with you, right? You know, the" }, { "start": 1747.12, "end": 1753.6799999999998, "text": " softmax of the inner product would be high; it agrees with you. And also it is in" }, { "start": 1754.3999999999999, "end": 1760.08, "text": " the same branch of the parse tree. So that's perfect, right? This one right here doesn't agree" }, { "start": 1760.08, "end": 1765.28, "text": " with you, but is in the same branch. So it could potentially later agree with you through a consensus" }, { "start": 1765.28, "end": 1771.4399999999998, "text": " algorithm. However, this one over here, you probably shouldn't attend to that too much," }, { "start": 1771.44, "end": 1777.2, "text": " even though it points in the same direction, because it's in a different branch of the parse" }, { "start": 1777.2, "end": 1783.8400000000001, "text": " tree. You shouldn't attend zero to it, though, because these branches on top, they could change. And you" }, { "start": 1783.8400000000001, "end": 1790.56, "text": " know, by you sending information there, this one could change; the top structure here could come to" }, { "start": 1790.56, "end": 1797.76, "text": " agree more with your branch of the parse tree, and so on. So my suggestion would be: let's not only" }, { "start": 1797.76, "end": 1806.16, "text": " get the softmax of the current layer things, but let's do" }, { "start": 1806.16, "end": 1813.6, "text": " x times... and here we're going to have a sum. So this is going to be k. And let's say" }, { "start": 1813.6, "end": 1821.04, "text": " we're at layer L. And this is layer one, this is layer two, this is layer three, going to number" }, { "start": 1821.04, "end": 1830.1599999999999, "text": " them from the top, actually from the bottom: layer m, layer m minus one, and this is layer L." }, { "start": 1830.1599999999999, "end": 1838.56, "text": " I suck at this. So from the current layer, I want to go up the hierarchy until layer one." }, { "start": 1840.1599999999999, "end": 1850.08, "text": " And I'm going to take the softmax of the representation at layer k, where I'm at:" }, { "start": 1850.08, "end": 1861.4399999999998, "text": " x k times x k transposed, like this. What we aggregate is still the values on the current layer," }, { "start": 1861.4399999999998, "end": 1866.3999999999999, "text": " but how much we should attend to that should be dependent on the parse tree. 
And we do that" }, { "start": 1866.3999999999999, "end": 1876.1599999999999, "text": " like this. And maybe we have like a kind of a lambda to the L minus k. I hope you get" }, { "start": 1876.16, "end": 1884.64, "text": " what I mean. So how much you aggregate... this sum here, the sum here is weird, this should" }, { "start": 1884.64, "end": 1895.6000000000001, "text": " probably go. Hi, it's future Yannick, and I just wanted to write that down again, because I've" }, { "start": 1895.6000000000001, "end": 1903.2, "text": " made some mistakes, obviously. The sum here should be within the softmax, because you want to" }, { "start": 1903.2, "end": 1909.76, "text": " aggregate the distributions in log space, and the softmax should still be a valid, you know," }, { "start": 1909.76, "end": 1919.44, "text": " distribution. And then the lambda is exponentiated by k, and k now properly runs from zero all" }, { "start": 1919.44, "end": 1928.8, "text": " the way up the stack. So big L would be the total number of layers and little l would be the layer" }, { "start": 1928.8, "end": 1936.24, "text": " where you're currently at. And you can clearly see the contribution of these attention matrices:" }, { "start": 1936.96, "end": 1944.6399999999999, "text": " lambda would be something smaller than one. And therefore, the contribution in the" }, { "start": 1944.6399999999999, "end": 1951.04, "text": " current layer is the strongest, the next one up is a bit weaker, one more up is" }, { "start": 1951.04, "end": 1957.6, "text": " even a bit weaker, and so on. So you'd still have essentially the same mechanism as Hinton is suggesting," }, { "start": 1957.6, "end": 1963.4399999999998, "text": " controlling for the fact that things are in different branches of the parse tree. All right," }, { "start": 1963.4399999999998, "end": 1972.24, "text": " back to classic Yannick, who is thoroughly confused by these things. Yeah, I'm not good" }, { "start": 1972.24, "end": 1979.1999999999998, "text": " at coming up with math on the spot. But I hope you can see what it's doing. So if you" }, { "start": 1979.1999999999998, "end": 1984.3999999999999, "text": " simply take the first k, you would simply stay at that layer and it would be what Hinton said." }, { "start": 1984.4, "end": 1993.2, "text": " But what I'm saying is you should also consider how much your higher layer, one layer up" }, { "start": 1993.2, "end": 1999.52, "text": " from you, agrees with one layer up from the thing you want to attend to. So you also compute that" }, { "start": 1999.52, "end": 2006.64, "text": " inner product between the embeddings, and you add that to the softmax distribution. So" }, { "start": 2006.64, "end": 2011.8400000000001, "text": " initially, the softmax distribution would be like: you should attend to this thing and this thing," }, { "start": 2011.84, "end": 2020.1599999999999, "text": " and this thing a lot. But then the next layer up the hierarchy would maybe say, well, we agree," }, { "start": 2020.1599999999999, "end": 2025.1999999999998, "text": " because you know, these are in the same thing, but this one, maybe not so much. 
And you would add" }, { "start": 2025.1999999999998, "end": 2030.32, "text": " those together, maybe with a lambda factor in here, and then you go one layer up and it would say," }, { "start": 2030.32, "end": 2037.04, "text": " well, okay, everything over here basically agrees, right, and here, no, everything over here" }, { "start": 2037.04, "end": 2041.92, "text": " basically doesn't agree. So you would add that maybe with a lambda squared. As you go up the" }, { "start": 2041.92, "end": 2049.2, "text": " layers, it would be less and less important, but still you'd consider it. All right. Now," }, { "start": 2049.2, "end": 2056.8, "text": " if this is gonna work out, cite the channel. Now back to what Hinton says. This is actually" }, { "start": 2056.8, "end": 2065.2, "text": " the system, this is the system in a nutshell: you're gonna input the image at the bottom." }, { "start": 2065.2, "end": 2070.96, "text": " And Hinton says you could use like a convnet at the very bottom to get it into the columns. But" }, { "start": 2070.96, "end": 2076.56, "text": " then, at every time step, you're going to pass information up the columns, down the columns," }, { "start": 2076.56, "end": 2085.3599999999997, "text": " and between the same layer of the different columns. And at some point," }, { "start": 2085.3599999999997, "end": 2089.4399999999996, "text": " this is going to stabilize. I don't know if it has cycles; it probably doesn't have cycles." }, { "start": 2089.44, "end": 2096.48, "text": " This is good. Yeah, it probably does not have cycles. So at some point, this comes to an end. And if" }, { "start": 2096.48, "end": 2103.84, "text": " that comes to an end, it should be that the object level embeddings agree on an object," }, { "start": 2103.84, "end": 2109.6, "text": " the part level embeddings agree on what parts there are, the sub parts agree, and so on. And" }, { "start": 2109.6, "end": 2114.2400000000002, "text": " they form these islands, these islands give rise to a parse tree. And the parse tree can tell you" }, { "start": 2114.24, "end": 2120.4799999999996, "text": " what object is there, what it is made of, and where these parts are in the image, and so on. So" }, { "start": 2123.04, "end": 2132.08, "text": " exactly, that is it. And now we're going to look at what Hinton calls some design decisions. How" }, { "start": 2132.08, "end": 2139.4399999999996, "text": " many levels are there? About five. Okay, we can skip that. How fine grained are the locations?" }, { "start": 2139.44, "end": 2146.16, "text": " Hinton says you could be as fine grained as pixels, or they could correspond to larger image patches." }, { "start": 2146.16, "end": 2151.44, "text": " And he says you could do a convolutional neural network to get it in there." }, { "start": 2152.8, "end": 2160.88, "text": " Does the bottom up net look at nearby locations? He says, yes, the bottom up net, so this is" }, { "start": 2160.88, "end": 2166.7200000000003, "text": " not the attention network, that's the bottom up network, it could look at nearby locations." 
}, { "start": 2166.72, "end": 2173.12, "text": " But Hinton imagines that if you have bottom up, top down, and if you have attention drawing" }, { "start": 2173.12, "end": 2182, "text": " information, and if you maybe limit that attention to a neighborhood, then then the the attention" }, { "start": 2182, "end": 2186.56, "text": " will do the job because you can have instead of looking at neighboring locations in the bottom" }, { "start": 2186.56, "end": 2192.9599999999996, "text": " up network, you can simply in two time steps, aggregate that information. So you can do bottom" }, { "start": 2192.96, "end": 2198.2400000000002, "text": " up here, bottom up here, and then using the attention, the lateral mechanism, you can pass" }, { "start": 2198.2400000000002, "end": 2206.16, "text": " that information around this way. And also, it is not as biasing the network to the immediate" }, { "start": 2206.16, "end": 2213.28, "text": " neighborhood. So the attention mechanism can sort of look farther, which conflicts with what he's" }, { "start": 2213.28, "end": 2219.76, "text": " saying on top that the attention mechanism might only be looking at the neighbors. I think there" }, { "start": 2219.76, "end": 2226.2400000000002, "text": " are different possibilities here. And only looking at neighbors is actually one of the solution" }, { "start": 2226.2400000000002, "end": 2232.48, "text": " to the problem of having, you know, kind of similar vectors at very distant locations at" }, { "start": 2232.48, "end": 2238.7200000000003, "text": " down the levels. But I think it's not as as good a solutions to simply look at how close things" }, { "start": 2238.7200000000003, "end": 2243.6800000000003, "text": " are in pixel space, because even though things are close in pixel space, they might be far away" }, { "start": 2243.68, "end": 2251.12, "text": " in the parse tree space. How does the attention work? We've already looked at this. So the way" }, { "start": 2251.68, "end": 2258.3999999999996, "text": " that one location attends to another location is going to be the softmax of the inner product" }, { "start": 2258.3999999999996, "end": 2265.8399999999997, "text": " between the embeddings here. And the values are also going to be just the embeddings that layer" }, { "start": 2265.84, "end": 2276.8, "text": " at that layer. The visual input, he says convolutional net could be used. Color and texture." }, { "start": 2278.7200000000003, "end": 2286.1600000000003, "text": " He says, he makes he gives this example, like if you know, if an object is entirely pale or" }, { "start": 2286.1600000000003, "end": 2291.6800000000003, "text": " entirely green, or entirely, I don't even know how to pronounce this, the color of a part is" }, { "start": 2291.68, "end": 2298.72, "text": " straightforward. But what color is the whole object. So this entire notion of capsules, by the way," }, { "start": 2299.68, "end": 2308.16, "text": " imagines this as these embeddings represent kind of properties of the object so that the" }, { "start": 2308.7999999999997, "end": 2315.44, "text": " the cat ear embedding represents not only the fact that it is a cat ear, but also different" }, { "start": 2315.44, "end": 2322.32, "text": " properties about the cat ear and even its location in the image is in the embedding. 
And, you know," }, { "start": 2322.32, "end": 2328.2400000000002, "text": " we know that transformers, they must be doing something like this, because we feed in positional" }, { "start": 2328.2400000000002, "end": 2333.76, "text": " embeddings, for example, at the very bottom, and it can still, you know, compute things in terms" }, { "start": 2333.76, "end": 2343.04, "text": " of positions. So there's an intrinsic connection between kind of capsules and the kind" }, { "start": 2343.04, "end": 2350.32, "text": " of transformer architecture. He says, one of the motivations of Glom was the idea that the whole object" }, { "start": 2350.32, "end": 2357.6, "text": " has a compound color, which might be called pale green or mauve. And at the object level," }, { "start": 2358.16, "end": 2362.4, "text": " every location belonging to the object has exactly the same compound color." }, { "start": 2363.84, "end": 2369.7599999999998, "text": " So the object is, whatever, this color all over. When deciding which other locations the object level" }, { "start": 2369.76, "end": 2376.48, "text": " attends to, preference would be given to locations with a similar compound color. So what he's saying" }, { "start": 2376.48, "end": 2383.0400000000004, "text": " right here is that, you know, you could give preference to similarly colored locations," }, { "start": 2383.0400000000004, "end": 2389.6800000000003, "text": " when you decide what you want to attend to. But the color isn't as easy as simply saying what color" }, { "start": 2389.6800000000003, "end": 2399.1200000000003, "text": " is there in the location that you are at. It could be... so if this is green, and this here is blue," }, { "start": 2399.12, "end": 2405.04, "text": " then the bottom layer would say yes, I'm green. And yes, I'm blue. But they could also be saying," }, { "start": 2405.2799999999997, "end": 2411.8399999999997, "text": " well, I am part of a green blue object, right. And then the higher layer here, you know," }, { "start": 2411.8399999999997, "end": 2419.12, "text": " attending to or caring about a bigger region, its color would then be, you know, green" }, { "start": 2419.12, "end": 2424.3199999999997, "text": " blue, and the consensus could be reached on, well, we are a green blue object, even though the object" }, { "start": 2424.32, "end": 2434.4, "text": " isn't a pure green or pure blue all throughout. So, I think, yeah, it's a side" }, { "start": 2434.4, "end": 2442.2400000000002, "text": " suggestion; maybe he has this as a core motivation behind the system. But it's just interesting to" }, { "start": 2442.2400000000002, "end": 2448.32, "text": " see how he thinks of things, and he extends the color here to textures and even shapes." }, { "start": 2448.32, "end": 2454.7200000000003, "text": " Shapes: the individual texture elements have their own shapes and poses and spatial relationships," }, { "start": 2454.7200000000003, "end": 2459.92, "text": " but an object with a textured surface has exactly the same texture everywhere at the object level." }, { "start": 2460.8, "end": 2467.36, "text": " Glom extends this idea to shapes: an object may have parts that are very different from one another," }, { "start": 2467.36, "end": 2472.1600000000003, "text": " but at the object level, it has exactly the same compound shape in all of the locations that it" }, { "start": 2472.16, "end": 2479.44, "text": " occupies. 
Basically saying that, okay, every pixel that's part of a cat head has the" }, { "start": 2479.44, "end": 2484.56, "text": " shape of a cat head, even though the individual locations might not recognize that, and that" }, { "start": 2484.56, "end": 2492.16, "text": " information could be passed around through this consensus mechanism over time. So, cluster" }, { "start": 2492.16, "end": 2498.64, "text": " discovery versus cluster formation: we've seen that, and he makes a lot of" }, { "start": 2498.64, "end": 2505.04, "text": " analogies to face recognition. But yeah: the islands of similar embedding" }, { "start": 2505.04, "end": 2510.72, "text": " vectors at a level can be viewed as clusters, but these clusters are not discovered in immutable data." }, { "start": 2510.72, "end": 2516.7999999999997, "text": " They are formed by the interaction between the intra level process that favors islands of" }, { "start": 2516.7999999999997, "end": 2522.48, "text": " similarity and dynamically changing suggestions coming from the locations embedding at adjacent" }, { "start": 2522.48, "end": 2531.04, "text": " levels. So the core here is really this consensus algorithm that creates these clusters. And yeah," }, { "start": 2531.04, "end": 2535.2, "text": " the clustering algorithm doesn't work by simply looking at embeddings and deciding which ones go" }, { "start": 2535.2, "end": 2540.48, "text": " together; rather, the embeddings update themselves in order to form clusters." }, { "start": 2542.96, "end": 2550.2400000000002, "text": " And yeah, replicating embedding vectors: this is a response to a criticism that I guess he" }, { "start": 2550.24, "end": 2555.68, "text": " got, where someone said, well, why do you replicate? If you have these, you know, these" }, { "start": 2555.68, "end": 2560.3999999999996, "text": " columns at the bottom, it makes sense that you have all the different vectors. But then as you go up," }, { "start": 2560.3999999999996, "end": 2565.4399999999996, "text": " you know, you have kind of the same vector for all locations, because it's the same object." }, { "start": 2565.4399999999996, "end": 2571.8399999999997, "text": " Why does it make sense to replicate that everywhere, and not just have one? Because, you know," }, { "start": 2571.8399999999997, "end": 2579.2799999999997, "text": " in a database, we just have one. And he basically says that, in order to reach the consensus, first" }, { "start": 2579.28, "end": 2583.0400000000004, "text": " of all, it's important to have different vectors; they might be slightly different. So they might" }, { "start": 2583.0400000000004, "end": 2588.4, "text": " have some nuance in them, because, you know, they might get pulled in different directions" }, { "start": 2588.4, "end": 2596.2400000000002, "text": " from the side of the bottom-up signal than from the consensus algorithm on the same layer. So, you" }, { "start": 2596.2400000000002, "end": 2602.4, "text": " know, I believe that that is important. Here, I think this is just a criticism he got," }, { "start": 2602.4, "end": 2610.1600000000003, "text": " and then he decided to put this in here. Learning islands. So what we haven't discussed about this" }, { "start": 2610.1600000000003, "end": 2617.44, "text": " yet is how this is trained, and Hinton says this is trained as a denoising autoencoder. 
Let us" }, { "start": 2617.44, "end": 2624, "text": " assume that Glom is trained to reconstruct at its output, the uncorrupted version of an image from" }, { "start": 2624, "end": 2632.88, "text": " which some region has been have been removed. So he goes into self supervised learning with the system." }, { "start": 2633.84, "end": 2638.96, "text": " This objective should ensure that information about the input is preserved during the forward" }, { "start": 2638.96, "end": 2644.8, "text": " pass. And if the regions are sufficiently large, it should also ensure that identifying familiar" }, { "start": 2644.8, "end": 2653.36, "text": " objects will be helpful for filling in the missing regions. To encourage islands of near identity," }, { "start": 2653.36, "end": 2659.04, "text": " we need to add a regularizer. And experience shows that a regularizer that simply encourages" }, { "start": 2659.04, "end": 2664.2400000000002, "text": " similarity between the embeddings of nearby locations can cause representations to collapse." }, { "start": 2665.1200000000003, "end": 2670.96, "text": " All the embedding vectors may become very small, so that they are all very similar. And the" }, { "start": 2670.96, "end": 2676.48, "text": " reconstruction will then use very large weights to deal with the very small scale to prevent collapse." }, { "start": 2676.48, "end": 2683.76, "text": " And then he says contrastive learning is the answer to this. So how do you regularize the model" }, { "start": 2683.76, "end": 2691.92, "text": " such that this consensus is formed? He says contrastive learning might be useful, but you" }, { "start": 2691.92, "end": 2698.48, "text": " can't simply apply it straight out. So it learns to make representations of two different crops of" }, { "start": 2698.48, "end": 2702.88, "text": " the same image agree, and the representations of two crops from different images disagree." }, { "start": 2702.88, "end": 2709.76, "text": " But this is not a sensible thing to do if our aim is to recognize objects. If crop one contains" }, { "start": 2709.76, "end": 2715.84, "text": " objects A and B and crop two from the same image contains objects B and C, it does not make sense" }, { "start": 2715.84, "end": 2723.04, "text": " to demand that the representation of the two crops is the same at the object level. Okay, so he says" }, { "start": 2723.04, "end": 2729.76, "text": " that contrastive learning is good, but you have to pay very careful attention at which layer you" }, { "start": 2729.76, "end": 2738.88, "text": " employ it. Because if you go down far enough, then contrastive learning, especially this type" }, { "start": 2738.88, "end": 2743.92, "text": " where you crop the image into different parts, and you say, well, since it's the same image," }, { "start": 2743.92, "end": 2749.1200000000003, "text": " the representations should agree. Hinton would say, well, at the top layer, yes, but at the bottom" }, { "start": 2749.1200000000003, "end": 2755.36, "text": " layer, certainly not, because they display different things. So you have to be careful" }, { "start": 2755.36, "end": 2764.96, "text": " where you apply this contrastive learning. And he gives a bunch of suggestions on how to solve that." }, { "start": 2764.96, "end": 2771.2000000000003, "text": " He says things like, well, negative examples, for example, might not might not even be needed." }, { "start": 2772.08, "end": 2776.6400000000003, "text": " Well, that's it. Sorry, that's a different thing. 
So the obvious solution is to regularize" }, { "start": 2777.2000000000003, "end": 2780.96, "text": " the bottom up and top down neural networks by encouraging each of them to predict the" }, { "start": 2780.96, "end": 2790.4, "text": " consensus opinion. Yeah: this is the weighted geometric mean of the predictions coming from" }, { "start": 2790.4, "end": 2795.6, "text": " the top down and bottom up networks, the attention weighted average of the embeddings at nearby" }, { "start": 2795.6, "end": 2802.4, "text": " locations at the previous time step, the previous state of... and I guess there should be an 'and' here:" }, { "start": 2803.12, "end": 2808.88, "text": " and the previous state of the embedding. Training the inter level prediction to agree with the" }, { "start": 2808.88, "end": 2814, "text": " consensus will clearly make the islands found during feed forward inference be more coherent." }, { "start": 2815.2000000000003, "end": 2824, "text": " So he says you could regularize the model to regress to the consensus opinion. So" }, { "start": 2824, "end": 2833.6, "text": " it's sort of like a self regression. And he asks whether or not that will lead to a collapse," }, { "start": 2833.6, "end": 2839.7599999999998, "text": " because if you don't have negative examples in contrastive learning, this could lead to simply a" }, { "start": 2839.7599999999998, "end": 2847.2, "text": " collapse. An important question is whether this type of training will necessarily cause collapse" }, { "start": 2847.2, "end": 2851.44, "text": " if it is not accompanied by training the inter level predictions to be different for negative" }, { "start": 2851.44, "end": 2857.7599999999998, "text": " examples that use the consensus opinions for unrelated spatial contexts. So here is that" }, { "start": 2857.76, "end": 2864.1600000000003, "text": " problem, right: if you use the consensus opinion for unrelated spatial contexts," }, { "start": 2866.88, "end": 2873.76, "text": " that might be a problem. He says using layer batch norm should reduce the tendency to collapse," }, { "start": 2873.76, "end": 2880.4, "text": " but a more important consideration may be the achievability of the goal. He goes into why" }, { "start": 2880.4, "end": 2887.36, "text": " regularization could help. And he says: if, however, an embedding at one location is free to choose" }, { "start": 2887.36, "end": 2891.6, "text": " which embeddings at other locations it should resemble, the goal can be achieved almost" }, { "start": 2891.6, "end": 2896.56, "text": " perfectly by learning to form islands of identical vectors and attending almost entirely to other" }, { "start": 2896.56, "end": 2905.92, "text": " locations that are in the same island. And I don't know if this is what I suggested." }, { "start": 2905.92, "end": 2912.08, "text": " So I guess this is kind of a convoluted paragraph; I had to read it multiple times too, and I" }, { "start": 2912.08, "end": 2918.7200000000003, "text": " still don't exactly know what he's trying to say right here. But I think what he's saying is that" }, { "start": 2919.6800000000003, "end": 2925.76, "text": " what we want to do is sort of regularize the network to produce this consensus," }, { "start": 2925.76, "end": 2932.16, "text": " right. So we have a bottom up signal, a top down signal, we have a current value," }, { "start": 2932.16, "end": 2939.12, "text": " and we have the signal from the attention mechanism. 
Now, what we want to do is" }, { "start": 2939.12, "end": 2947.04, "text": " reach a consensus such that these islands form. However, if you attend to any sort of things here" }, { "start": 2947.04, "end": 2953.44, "text": " that have nothing to do with you, you might not be able to reach this consensus, right. That," }, { "start": 2953.44, "end": 2957.8399999999997, "text": " I think, is the problem; I think he's touching on the problem that I said before." }, { "start": 2957.84, "end": 2966.4, "text": " So what he says is, you know, what you should do is simply attend to things that are" }, { "start": 2966.4, "end": 2973.28, "text": " in the same islands already. So if an embedding at one location is free to choose which embedding" }, { "start": 2973.28, "end": 2979.28, "text": " at other locations it should resemble, the goal can be achieved by learning to form islands of" }, { "start": 2979.28, "end": 2984.88, "text": " identical vectors and attending almost entirely to other locations that are in the same islands." }, { "start": 2984.88, "end": 2991.92, "text": " Now, I think here he makes the case for the attention mechanism itself," }, { "start": 2991.92, "end": 2998.8, "text": " right. So he says, if we simply draw in information from the same layer here," }, { "start": 2998.8, "end": 3004.7200000000003, "text": " you know, any old information might come in, and we might collapse, or we might" }, { "start": 3004.7200000000003, "end": 3010.2400000000002, "text": " never reach consensus, because any old information might come in. However, if we simply draw in" }, { "start": 3010.24, "end": 3017.04, "text": " information from the selected neighbors that already are in the same group, in the same island," }, { "start": 3017.04, "end": 3023.12, "text": " as me, then this consensus algorithm works. So the network is now kind of forced" }, { "start": 3023.12, "end": 3029.9199999999996, "text": " to learn to build these islands of similar things in order to make this consensus work. If we" }, { "start": 3029.9199999999996, "end": 3036.64, "text": " regularize this consensus, then we can actually create a consensus among the things" }, { "start": 3036.64, "end": 3044, "text": " that are in the same group. So I think that's the way to make this consensus work: if we" }, { "start": 3044, "end": 3052.24, "text": " regularize this consensus. So I believe he makes the case for the attention mechanism. I don't think" }, { "start": 3052.24, "end": 3060.08, "text": " he, in this case, considers kind of the next-layer-up islands. What I would say is you need" }, { "start": 3060.08, "end": 3068.48, "text": " to go up the columns in order to decide which things, which locations, right, it's free to" }, { "start": 3068.48, "end": 3075.2, "text": " choose which embeddings at other locations it should resemble. I think, yeah, this is the case" }, { "start": 3075.2, "end": 3086.88, "text": " for the attention mechanism. Okay, I hope you're still half with me. If not, well, I'm a bit confused too," }, { "start": 3086.88, "end": 3092.1600000000003, "text": " because I think what he's doing is he says: contrastive learning would be good, you can use it," }, { "start": 3092.1600000000003, "end": 3100.4, "text": " but you have to be careful at which layer you do it. 
Another regularizer to form these islands" }, { "start": 3100.4, "end": 3108.7200000000003, "text": " would be to regularize the network to conform to the consensus opinion. However, if you" }, { "start": 3108.7200000000003, "end": 3115.6800000000003, "text": " simply aggregate information from the same layer, then that wouldn't work, because, you know, the" }, { "start": 3115.68, "end": 3121.44, "text": " different things in the same layer might correspond to completely different parts of the image." }, { "start": 3121.8399999999997, "end": 3126.8799999999997, "text": " Drawing in information from there would not help you. How do you solve this? By introducing the" }, { "start": 3126.8799999999997, "end": 3133.9199999999996, "text": " very attention mechanism that he introduced, in order to only draw in information from parts of" }, { "start": 3133.9199999999996, "end": 3144.3199999999997, "text": " the same layer that actually are related to you. Okay, the next thing, the next consideration," }, { "start": 3144.32, "end": 3150.2400000000002, "text": " is representing coordinate transformations. So how does this represent coordinate transformations?" }, { "start": 3150.2400000000002, "end": 3157.6800000000003, "text": " There was a capsule net paper where he explicitly represents coordinate transformations in a kind of" }, { "start": 3157.6800000000003, "end": 3166.56, "text": " four-dimensional quaternion space. And he says that is probably not needed here;" }, { "start": 3166.56, "end": 3178, "text": " he says you could represent this by four-by-four matrices. However, if you simply allocate 16" }, { "start": 3178, "end": 3184.32, "text": " numbers in each embedding vector in order to represent the part whole coordinate transformation," }, { "start": 3184.32, "end": 3189.2799999999997, "text": " like the transformation that relates the part to the whole, that does not make it easy to represent" }, { "start": 3189.28, "end": 3196.7200000000003, "text": " uncertainty about some aspects of the pose and certainty about others. So the problem here is that we know" }, { "start": 3196.7200000000003, "end": 3202.5600000000004, "text": " that humans, when they watch something right here, when they watch a scene, like this is a chair," }, { "start": 3203.1200000000003, "end": 3210.88, "text": " and there is a person, a very tiny person, on the chair, we don't necessarily see the coordinate" }, { "start": 3210.88, "end": 3216.88, "text": " frame of the world. What we see is the coordinate frame of the chair, like maybe this is" }, { "start": 3216.88, "end": 3224.7200000000003, "text": " the center, and we see the person in relation to the chair. Our brain seems to do this intuitively," }, { "start": 3224.7200000000003, "end": 3229.6800000000003, "text": " and Hinton thinks that a system like this should also do it intuitively. So somehow," }, { "start": 3229.6800000000003, "end": 3235.04, "text": " the coordinate transformations involved, going from the eye to the reference frame" }, { "start": 3235.04, "end": 3242.32, "text": " of the chair, and then from the chair to the person, should somehow be encoded in this" }, { "start": 3242.32, "end": 3249.52, "text": " network. 
However, he also says that it's probably not necessary to encode them explicitly as, you" }, { "start": 3249.52, "end": 3253.76, "text": " know, explicit coordinate transformations, because not only does that probably make it harder" }, { "start": 3253.76, "end": 3261.44, "text": " to learn, but also, you can't represent uncertainty. In fact, you can represent uncertainty," }, { "start": 3261.44, "end": 3266.6400000000003, "text": " that's the next thing right here, much better by having a higher dimensional thing that you're" }, { "start": 3266.64, "end": 3274.3199999999997, "text": " trying to guess, right? If you are trying to guess a distribution with three components," }, { "start": 3275.12, "end": 3280.3199999999997, "text": " and you simply have a three dimensional vector, you have no way of representing uncertainty." }, { "start": 3280.3199999999997, "end": 3287.12, "text": " However, if you have a nine dimensional vector, you can have three opinions about the distribution." }, { "start": 3287.12, "end": 3294.16, "text": " So this is an opinion, this is an opinion, and then this is an opinion. And then you can sort" }, { "start": 3294.16, "end": 3299.12, "text": " of aggregate and you can say, well, I'm pretty sure about these two things, because all my opinions" }, { "start": 3299.12, "end": 3307.04, "text": " are pretty close. But this one here, I'm not so sure, because my individual opinions say different" }, { "start": 3307.04, "end": 3314.64, "text": " things. All right, this video is too long. So that's his argument right" }, { "start": 3314.64, "end": 3321.6, "text": " here: we don't need explicit representation of uncertainty, because by simply over parameterizing," }, { "start": 3321.6, "end": 3330.88, "text": " we can already represent uncertainty well. And we also don't need disentangled position information" }, { "start": 3330.88, "end": 3341.44, "text": " and so on; sorry, we don't need different position information. Because, again, the network" }, { "start": 3341.44, "end": 3346.72, "text": " can take care of that. And he gives a good example: like, why would you have a disentangled" }, { "start": 3346.72, "end": 3353.8399999999997, "text": " coordinate frame if you have an image, and in the image, the picture in it is this:" }, { "start": 3357.12, "end": 3363.2, "text": " how do you know if that is a rhomboid shape? Or if it is" }, { "start": 3364.8799999999997, "end": 3371.52, "text": " a rectangular piece of paper viewed from the side? I should probably draw it way closer," }, { "start": 3371.52, "end": 3380.96, "text": " something like this. I suck at this. You probably get what I mean. Like," }, { "start": 3380.96, "end": 3386.8, "text": " if it is a different object... the object and the coordinate transformation are" }, { "start": 3386.8, "end": 3393.28, "text": " dependent upon each other. And so it makes sense for the neural network to actually entangle the two," }, { "start": 3393.28, "end": 3400.56, "text": " because the two things depend on each other. In essence, he's just saying, don't worry about" }, { "start": 3400.56, "end": 3407.44, "text": " explicitly representing all of the different things. We got it; like, the neural network can do" }, { "start": 3407.44, "end": 3415.2799999999997, "text": " all of these things, like uncertainty or position and pose transformations. 
So here he compares it" }, { "start": 3415.2799999999997, "end": 3425.44, "text": " to different other architectures. Comparison to CNNs, comparison to transformers, comparison to" }, { "start": 3425.44, "end": 3432.16, "text": " capsule models. And at the end, he goes into video. At the very beginning, he says the paper is" }, { "start": 3432.16, "end": 3439.2000000000003, "text": " actually about a video system. And you can kind of see that, because we go through this algorithm in" }, { "start": 3439.2000000000003, "end": 3445.68, "text": " multiple time steps, right? It's like you analyze an image with these columns, which gives" }, { "start": 3445.68, "end": 3455.7599999999998, "text": " you sort of a 3D tensor with the image at the bottom. And in the next time step, you have" }, { "start": 3455.7599999999998, "end": 3461.52, "text": " a new 3D tensor, right? You pass this whole information around with the image at the bottom." }, { "start": 3462.72, "end": 3468.16, "text": " Hinton says, well, why does that need to be the same image? That could also be different images." }, { "start": 3468.16, "end": 3475.12, "text": " So you could use the system to analyze video. So what he says is: at the same time as" }, { "start": 3475.12, "end": 3481.8399999999997, "text": " you do this time step to find agreement, you could actually swap out the video frame, the X," }, { "start": 3481.8399999999997, "end": 3486.72, "text": " and produce a slightly different video frame. And you could" }, { "start": 3486.72, "end": 3492.72, "text": " actually have a kind of an ensemble regularizing effect. So as the whole" }, { "start": 3492.72, "end": 3499.52, "text": " system of columns here comes to a consensus over time, you feed in different information at the bottom. And what" }, { "start": 3499.52, "end": 3507.68, "text": " he says is that, you know, if this is a slow enough video, then the top layers here could probably" }, { "start": 3507.68, "end": 3513.7599999999998, "text": " still reach an agreement, while the bottom layers would change rapidly. That could" }, { "start": 3513.7599999999998, "end": 3521.6, "text": " even have a sort of ensembling or regularizing effect. So he intrinsically" }, { "start": 3522.16, "end": 3527.84, "text": " connects these two time dimensions, because they could be separate, right: you could input a video," }, { "start": 3527.84, "end": 3535.52, "text": " and then, you know, in each frame, you could do this consensus finding algorithm. But he says:" }, { "start": 3535.52, "end": 3541.04, "text": " no, it's actually cool to consider them together, to do the consensus finding while you sort of" }, { "start": 3541.04, "end": 3546.88, "text": " watch the video. It's just not clear that you always need the same amount of consensus finding" }, { "start": 3546.88, "end": 3552.96, "text": " steps as you have video frames. So maybe you want to take like" }, { "start": 3552.96, "end": 3560, "text": " five consensus steps per video frame, or the other way around. Not sure. In any case, I think that's" }, { "start": 3560, "end": 3568.08, "text": " a pretty cool idea. And he says things like: if the changes are rapid, there is no time available" }, { "start": 3568.08, "end": 3573.04, "text": " to iteratively settle on a good set of embedding vectors for interpreting a specific frame." 
}, { "start": 3573.04, "end": 3577.92, "text": " This means that the GLOM architecture cannot correctly interpret complicated shapes. If the" }, { "start": 3577.92, "end": 3583.92, "text": " images are changing rapidly, try taking an irregularly shaped potato and throwing it up" }, { "start": 3583.92, "end": 3589.52, "text": " in the air such a way that it rotates at one or two cycles per second. Even if you smoothly track" }, { "start": 3589.52, "end": 3596.48, "text": " the potato, you cannot see what shape it is. Now I don't have a potato, but I can give you an avocado." }, { "start": 3596.48, "end": 3611.12, "text": " So if you give me a second, how's that? Could you track the shape? I don't know." }, { "start": 3612.64, "end": 3621.6, "text": " Probably Hinton's correct. All right. He talks about is this biologically plausible? And I don't" }, { "start": 3621.6, "end": 3627.52, "text": " want to go too much into this. He discusses some restrictions like, yeah, we still use backprop" }, { "start": 3627.52, "end": 3633.12, "text": " and is backprop plausible and so on. I love this sentence. In the long run, however, we are all" }, { "start": 3633.12, "end": 3639.8399999999997, "text": " dead. And then the footnote saying there are alternative facts. But yeah, he discusses whether" }, { "start": 3639.8399999999997, "end": 3645.8399999999997, "text": " it's biologically plausible. How could you modify it to make it more plausible? For example," }, { "start": 3645.84, "end": 3652.6400000000003, "text": " when you want to do contrastive learning, there is evidence that dreams during so during sleep," }, { "start": 3652.6400000000003, "end": 3658.7200000000003, "text": " you do contrastive learning, like you produce the negative examples during sleep, and then during" }, { "start": 3658.7200000000003, "end": 3666.48, "text": " the day, you collect the positive examples, and so on. So I think this is a more speculative part" }, { "start": 3666.48, "end": 3675.1200000000003, "text": " of the paper, but it's pretty cool to it's pretty cool to read it. And lastly, he goes into discussion" }, { "start": 3675.12, "end": 3682.88, "text": " he also says that this paper is too long already. I'm going to just briefly talk about this. And he" }, { "start": 3682.88, "end": 3691.2, "text": " trashes the neuro symbolic people a bit like he trashes the people that say no, no, you know," }, { "start": 3691.2, "end": 3697.92, "text": " neural networks can never do whatever. And he says pretty clearly look, neural networks can represent" }, { "start": 3697.92, "end": 3705.92, "text": " trees, I've given you a system also BERT can output parse trees. So shut up, I guess. And he comes" }, { "start": 3705.92, "end": 3714.64, "text": " up with this glom BERT name, which, you know, is is already coined if you wanted to do glom BERT," }, { "start": 3714.64, "end": 3726.64, "text": " that's already taken. Sorry. I also by the way also coined then I coined the name may go mania." }, { "start": 3726.64, "end": 3732.56, "text": " Right now. Okay, if you want to if you want to use it, it better be a pretty cool machine learning" }, { "start": 3732.56, "end": 3741.52, "text": " system and be based on glom. Right, that was the paper. I think it's a cool system. It has a bunch" }, { "start": 3741.52, "end": 3746.96, "text": " of parts that are maybe not super friendly to hardware at the time like this iterative procedure." 
}, { "start": 3746.96, "end": 3752.24, "text": " But honestly, it is not much more than a neural network, sorry, a recurrent neural network with" }, { "start": 3752.24, "end": 3761.12, "text": " very complicated recurrence functions. The video extension might be a bit tricky. And, but the rest" }, { "start": 3761.12, "end": 3765.7599999999998, "text": " and the regularization might be a bit tricky, the exact objective. So the denoising auto encoder" }, { "start": 3765.7599999999998, "end": 3771.2799999999997, "text": " objective isn't super detailed in the paper, he simply says, reconstruct the corrupted version of" }, { "start": 3771.2799999999997, "end": 3777.7599999999998, "text": " the input. How exactly the input happens, maybe there's a CNN, maybe the CNN feeds information" }, { "start": 3777.76, "end": 3784.7200000000003, "text": " into actually multiple layers. None of that is exactly specified. So there's lots to figure out." }, { "start": 3784.7200000000003, "end": 3793.6000000000004, "text": " I do think the ideas are very cool. And I love idea papers. And therefore, I recommend that if" }, { "start": 3793.6000000000004, "end": 3799.2000000000003, "text": " you're interested more, give this thing a read, give this video a like, share it out," }, { "start": 3799.2, "end": 3808.48, "text": " and I'll see you next time. Bye bye." } ]
eCH0M4wzKJs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
WHO ARE YOU? 10k Subscribers Special (w/ Channel Analytics)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "special" ]
An in-depth look at this channel's analytics. Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! We have just crossed 10,000 subscribers on this channel and that is an absolutely mind-blowing number. To everyone who's subscribed, thank you! And today, for a bit of a special occasion, I thought we would look at you. Yes, you, handsome! One of the 10,000 subscribers of this channel. And we're going to dive into the YouTube analytics and see who you are, what you like, and how you behave. So if you've never done YouTube as a creator, this is what it looks like. You can see right here, there are 10,071 subscribers right now. And if you look at the videos, Attention Is All You Need is the most popular video on this channel. It is also one of the oldest videos on this channel. Probably it's in many university curricula now and there's like half a lecture allocated to it, and that is not enough to understand this paper. So people come to this channel. I was still using kind of Adobe Reader at the time and etching in sort of... There's only one color, it's super laggy, but for some reason people like it, and I'm not going to debate with people about what they like and what they don't. I might do another one on Attention Is All You Need, just because I understand Transformers much better nowadays and I think I could do a better job at explaining them more clearly. So a lot of the NLP models tend to do very well as videos, because I think people are interested, practitioners are interested, and they just want to learn about these models and what they're doing. Also, there's the Siraj controversy, very popular. It was a fun event and a sad event. This video right here, Deconstructing Lottery Tickets, just outperforms all the other videos; it absolutely mind-blowingly rockets over everything. But if you look at the retention, I only retain people for two minutes on average, which is the lowest retention of any of my videos. And I could not understand this for the longest time. And then it occurred to me: if you title a video Deconstructing Lottery Tickets and you put a bunch of math on the thumbnail, people are going to click and then be very, very, very disappointed when you don't tell them how to win the lottery. And that takes them about one to two minutes to notice: like, oh crap, I can't make any money using this video. So if you go to the general analytics tab, it's very interesting to see what people like. You can see, in the last 28 days I've uploaded every single day, except here. This is a bit of a gap: I accidentally deleted the backprop in the brain video and had to re-upload it the next day. So if you look at the last year, the views have gone up substantially. The subscribers have gone up, but there are subscriber spikes right here. And these spikes usually happen when some large personality recommends this channel. The channel gains a bunch of subscribers, but that doesn't necessarily translate into more views; it's just people that click on subscribe and then never care anymore. The metric that's most interesting to me personally is watch time. And as you can see here, watch time has gone up substantially in the last month, which I find to be encouraging. One minute of watch time means that I get to transmit one minute of information to the viewer, and that's what really matters to me. Of course, if I'm doing a worse job at explaining, then the viewer has to watch for longer. But that usually doesn't really work out, because they're just going to click away. All right, to the fun part: the audience. Who are you? Now, as you can see here, 69% of you are not subscribed.
What are you doing not being subscribed? Though this has changed significantly in the last month; I believe now it's about half and half. What I find most interesting, though, is that 10% of you have the bell notification on. So 10% of you actively want to be disturbed whenever I upload a video. I am incredibly flattered by this; it is the biggest compliment. Not even I do this for the channels that I follow. Demographics are also very fun. About 93% of you tend to be male and about 6% of you tend to be female, at least according to YouTube's statistics. And that's a pretty good intersection of YouTube being mostly male and the machine learning field also being mostly male. If I'm doing anything to attract any particular type of person, please let me know so I can diversify a bit. Everyone's welcome here. You tend to be above 18, which is good, because we have very, very much adult content on this channel. I'm happy to see that none of you is underage; I think that's just YouTube not reporting underage statistics. Most of you tend to be 18 to 45 years old, though the older viewers are of course also very welcome. Though I'm pretty sure some of these are just because you were underage when you created your account and you told them you were born in 1923. Most of you tend to come from the United States, India or Germany. This is a very incomplete list; I think most people are simply in the "other" category of countries, which I think means YouTube just doesn't know. But it is cool to see that India is so high up. One of the reasons I started this channel, the main reason really, is that it forces me to read papers more thoroughly if I have to explain them. One other main reason I started this channel was that I thought there was a gap between what you could get from a beginner's course, any sort of Coursera course, and where research is right now. And to bridge that gap, you basically have to go to a very good university, and I know that most of the world doesn't have access to these very good universities. So my idea was to bridge that gap: to make a person that has a basic understanding of machine learning able to be up to speed with current research, to be able to read current research papers. And the fact that I have quite a number of people watching from countries where top universities maybe aren't located is very encouraging to me. Thanks for watching from all around the world. One person is watching this with Russian subtitles. Okay, we can go into the advanced statistics here, and that is pretty interesting as well. You can see that most videos kind of spike when they come out, especially the news videos; they tend to be very popular just the moment that I release them, and then they sort of fall off. Traffic sources I find particularly interesting. If you look at the last 90 days, there are these spikes, and these spikes tend to come mainly from Google searches. So at some point, people simply Google search for stuff and then they find this channel, which is encouraging. Most people actually search for Attention Is All You Need or things like this. YouTube doesn't show you anymore what people searched on Google, but YouTube shows you what people searched for on YouTube, and that tends to be mostly Attention Is All You Need, MuZero, my name. Hello! So if you correlate these spikes in searches with geography, some of these spikes tend to be worldwide, like this one here or this one here. But this particular spike is only in the United States.
And if you look at the videos that I released during that time period, it's one of these videos right here. So either it is the Schmidhuber drama, with a lot of people searching for that, or maybe it's ImageNet V2 or the online conferences, I don't know. You can see right here, I didn't gain many subscribers off of these spikes. It's simply people being interested in content, which is pretty cool. It is also interesting to see that these spikes correspond mainly to mobile phone users. So mobile phone users go on Google searching for content on this channel. I have no idea what's going on. All right. Now, the last question to answer is, of course, monetization. How much money does this channel make? And the answer is: none so far. There are multiple reasons why I haven't applied for monetization yet. I find YouTube ads just incredibly annoying, especially now that they've decided to stick two ads in front of videos, and I just don't want to bug users with that. If you look at what I would gain from it, it's not that much, and any money that I would make, I would like to reinvest into the channel. Right now, I just don't have any requirements. That might change in the future, maybe once we get to 800,000 subscribers. All right, that was it for the YouTube analytics of this channel. I hope you are still enjoying this content. If you're not subscribed, please do. The next update will be at a hundred thousand, and I hope that everything is as enjoyable as ever. Thank you for watching, thank you for subscribing, thanks for being here, and here's to the future.
[ { "start": 0, "end": 8.76, "text": " Hi there! We have just crossed 10,000 subscribers on this channel and that is an absolutely mind-blowing number." }, { "start": 8.76, "end": 15.84, "text": " To everyone who's subscribed, thank you! And today, for a bit of a special occasion, I thought we would look at you." }, { "start": 15.84, "end": 21.080000000000002, "text": " Yes, you handsome! One of the 10,000 subscribers of this channel." }, { "start": 21.080000000000002, "end": 27.88, "text": " And we're going to dive into the YouTube analytics and see who you are and what you like and how you behave." }, { "start": 27.88, "end": 31.56, "text": " So if you've never done YouTube as a creator, this is what it looks like." }, { "start": 31.56, "end": 37.76, "text": " You can see right here, there are 10,071 subscribers right now." }, { "start": 37.76, "end": 43.44, "text": " And if you look at the videos, Attention Is All You Need is the most popular video on this channel." }, { "start": 43.44, "end": 46.4, "text": " It is also one of the oldest videos on this channel." }, { "start": 46.4, "end": 52.28, "text": " Probably it's in many university curricula now and there's like half a lecture allocated to it." }, { "start": 52.28, "end": 57.88, "text": " And that is not enough to understand this paper. So people come to this channel." }, { "start": 57.88, "end": 64.52, "text": " I was still using kind of Adobe Reader at the time and etching in sort of..." }, { "start": 64.52, "end": 72.84, "text": " There's only one color, it's super laggy, but for some reason people like it and I'm not going to debate with people about what they like and what they don't." }, { "start": 72.84, "end": 82.04, "text": " I might do another one on Attention Is All You Need just because I understand Transformers much better nowadays and I think I could do a better job at explaining them more clearly." }, { "start": 82.04, "end": 93.36000000000001, "text": " So a lot of the NLP models tend to do very well as videos because I think people are interested and practitioners are interested and they just want to learn about these models and what they're doing." }, { "start": 93.36000000000001, "end": 99.60000000000001, "text": " Also, there's Siraj controversy, very popular. It was a fun event and a sad event." }, { "start": 99.60000000000001, "end": 103.04, "text": " This video right here, Deconstructing Lottery Tickets." }, { "start": 103.04, "end": 110.4, "text": " It just outperforms all the other videos, absolutely mind blowingly rockets over everything." }, { "start": 110.4, "end": 119.2, "text": " But if you look at the retention, I only retain people for two minutes on average, which is the retention is the lowest of any of my videos." }, { "start": 119.2, "end": 132.72, "text": " And I could not understand this for the longest time. And then it occurred to me, if you title a video Deconstructing Lottery Tickets and you put a bunch of math on the thumbnail." }, { "start": 132.72, "end": 140.96, "text": " People are going to click and then be very, very, very disappointed when you don't tell them how to win the lottery." }, { "start": 140.96, "end": 147.56, "text": " And that takes them about one to two minutes to notice like, oh crap, I can't make any money using this video." }, { "start": 147.56, "end": 154.32, "text": " So if you go to the general analytics tabs, it's very interesting to see what people like." 
}, { "start": 154.32, "end": 160.07999999999998, "text": " You see the last 28 days I've uploaded every single day, except here." }, { "start": 160.08, "end": 168.08, "text": " This is a bit of a gap. I have accidentally deleted the back prop in the brain video and had to re-upload it the next day." }, { "start": 168.08, "end": 173.92000000000002, "text": " So if you look at the last year, the views have gone up substantially." }, { "start": 173.92000000000002, "end": 179.20000000000002, "text": " The subscribers have gone up, but there are subscriber spikes right here." }, { "start": 179.20000000000002, "end": 186.20000000000002, "text": " And these spikes are usually sometimes when some large personality recommends this channel." }, { "start": 186.2, "end": 191.04, "text": " The channel gains a bunch of subscribers that doesn't necessarily translate into more views." }, { "start": 191.04, "end": 194.79999999999998, "text": " It's just people that click on subscribe and then never care anymore." }, { "start": 194.79999999999998, "end": 198.28, "text": " The metric that's most interesting to me personally is watch time." }, { "start": 198.28, "end": 205.83999999999997, "text": " And as you can see here, watch time has gone substantially up in the last month, which I find to be encouraging." }, { "start": 205.83999999999997, "end": 213.12, "text": " One minute of watch time means that I get to transmit one minute of information to the viewer." }, { "start": 213.12, "end": 214.92, "text": " And that's what really matters to me." }, { "start": 214.92, "end": 220.28, "text": " Of course, if I'm doing a worse job at explaining, then the viewer has to watch for longer." }, { "start": 220.28, "end": 224.23999999999998, "text": " But that usually doesn't really work out because they're just going to click away." }, { "start": 224.23999999999998, "end": 228.39999999999998, "text": " All right, to the fun part, the audience." }, { "start": 228.39999999999998, "end": 230.72, "text": " Who are you?" }, { "start": 230.72, "end": 235.04, "text": " Now, as you can see here, 69% of you are not subscribed." }, { "start": 235.04, "end": 237.27999999999997, "text": " What are you doing not being subscribed?" }, { "start": 237.27999999999997, "end": 240.48, "text": " Though this has changed in the last month significantly." }, { "start": 240.48, "end": 242.79999999999998, "text": " I believe now it's about half and half." }, { "start": 242.8, "end": 249.04000000000002, "text": " What I find most interesting, though, is that 10% of you have this bell notification on." }, { "start": 249.04000000000002, "end": 255.12, "text": " So 10% of you actively want to be disturbed whenever I upload a video." }, { "start": 255.12, "end": 257.48, "text": " I am incredibly flattered by this." }, { "start": 257.48, "end": 259.32, "text": " This is the biggest compliment." }, { "start": 259.32, "end": 262.64, "text": " Not even I do this for the channels that I follow." }, { "start": 262.64, "end": 264.88, "text": " Demographics also very fun." }, { "start": 264.88, "end": 272.76, "text": " About 93% of you tend to be male and about 6% of you tend to be female, at least according to YouTube statistics." }, { "start": 272.76, "end": 280.56, "text": " And that's a pretty good intersection of YouTube being mostly male and machine learning field also being mostly male." 
}, { "start": 280.56, "end": 288.03999999999996, "text": " If I'm doing anything to attract any particular type of person, please let me know so I can diversify a bit." }, { "start": 288.03999999999996, "end": 290.56, "text": " Everyone's welcome here." }, { "start": 290.56, "end": 298.96, "text": " You tend to be above 18, which is good because we have very, very much adult content on this channel." }, { "start": 298.96, "end": 301.76, "text": " I'm happy to see that none of you is underage." }, { "start": 301.76, "end": 306.96, "text": " I think that's YouTube just not reporting underage statistics." }, { "start": 306.96, "end": 315.84, "text": " But most of you tend to be 18 to 45 years old, though to the older viewers, you're of course also very welcome." }, { "start": 315.84, "end": 325.64, "text": " Though I'm pretty sure that some of these is just because you were underage when you created your account and you just told them you were born in 1923." }, { "start": 325.64, "end": 331.91999999999996, "text": " So most of you tend to come from the United States or India or Germany." }, { "start": 331.91999999999996, "end": 333.64, "text": " This is very incomplete list." }, { "start": 333.64, "end": 341.88, "text": " I think the most people simply are in the other category of countries, which I think means YouTube just doesn't know." }, { "start": 341.88, "end": 344.76, "text": " But it is cool to see that India is so high up." }, { "start": 344.76, "end": 346.91999999999996, "text": " One of the reasons I started this channel." }, { "start": 346.91999999999996, "end": 353.8, "text": " So the main reason is because it forces myself to read papers more thoroughly if I have to explain them." }, { "start": 353.8, "end": 362.32, "text": " One other main reason I started this channel was because I thought I thought there was a gap between what you could get from a beginner's course," }, { "start": 362.32, "end": 367.44, "text": " any sort of Coursera course and where research is right now." }, { "start": 367.44, "end": 372.52, "text": " And to bridge that gap, you basically have to go to a very good university." }, { "start": 372.52, "end": 377.08000000000004, "text": " And I know that most of the world doesn't have access to these very good universities." }, { "start": 377.08, "end": 388.96, "text": " So my idea was to kind of bridge that gap to make that person that has a basic understanding of machine learning be able to be up to speed with current research," }, { "start": 388.96, "end": 391.91999999999996, "text": " to be able to read current research papers." }, { "start": 391.91999999999996, "end": 400.88, "text": " And the fact that I have quite a number of people watching from countries where top universities maybe aren't located is very encouraging to me." }, { "start": 400.88, "end": 404.44, "text": " Thanks for watching from all around the world." }, { "start": 404.44, "end": 410, "text": " One person is watching this with Russian subtitles." }, { "start": 410, "end": 413.76, "text": " Okay, we can go into the advanced statistics here." }, { "start": 413.76, "end": 415.84, "text": " And that is pretty interesting as well." }, { "start": 415.84, "end": 421, "text": " You see here most videos kind of spike when they come out, especially the news videos." }, { "start": 421, "end": 423.92, "text": " They tend to be very popular." }, { "start": 423.92, "end": 428.8, "text": " Just the moment that I release them and then they sort of fall down." 
}, { "start": 428.8, "end": 431.4, "text": " Traffic source I find particularly interesting." }, { "start": 431.4, "end": 440.15999999999997, "text": " If you look at the last 90 days, there are these spikes and these spikes tend to come mainly from from Google searches." }, { "start": 440.15999999999997, "end": 448.52, "text": " So at some point, people simply Google search for stuff and then they find this channel, which is encouraging." }, { "start": 448.52, "end": 452.59999999999997, "text": " Most people actually search for attention is all you need or things like this." }, { "start": 452.59999999999997, "end": 459.79999999999995, "text": " YouTube doesn't show you anymore what people searched on Google, but YouTube shows you what people searched for on YouTube." }, { "start": 459.8, "end": 463.36, "text": " And that tends to be mostly attention is all you need." }, { "start": 463.36, "end": 465.68, "text": " You zero my name." }, { "start": 465.68, "end": 466.6, "text": " Hello." }, { "start": 466.6, "end": 475.32, "text": " So if you correlate these spikes in searches with geography, some of these spikes tend to be worldwide like this one here or this one here." }, { "start": 475.32, "end": 479.32, "text": " But this particular spike is only United States." }, { "start": 479.32, "end": 485.92, "text": " And if you look at the videos that I released during that time period, one of these videos right here." }, { "start": 485.92, "end": 491.40000000000003, "text": " So either it is the Schmidhuber drama, a lot of people searching for that." }, { "start": 491.40000000000003, "end": 496.48, "text": " Maybe maybe it's ImageNet V2 or the online conferences." }, { "start": 496.48, "end": 502.12, "text": " I don't know. You can see right here, I didn't make many subscribers of of these spikes." }, { "start": 502.12, "end": 506.08000000000004, "text": " It's simply people being interested in content, which is pretty cool." }, { "start": 506.08000000000004, "end": 514.72, "text": " It is also interesting to see that these spikes right here, they correspond mainly to mobile phone users." }, { "start": 514.72, "end": 520.96, "text": " So mobile phone users go in Google searching for content on this channel." }, { "start": 520.96, "end": 523.1600000000001, "text": " I have no idea what's going on." }, { "start": 523.1600000000001, "end": 527.48, "text": " All right. Now, the last question to solve is, of course, monetization." }, { "start": 527.48, "end": 530.88, "text": " How much money does this channel make?" }, { "start": 530.88, "end": 534.4, "text": " And the answer is none so far." }, { "start": 534.4, "end": 538.6, "text": " So there are multiple reasons why I haven't applied for monetization yet." }, { "start": 538.6, "end": 546.4, "text": " I find YouTube ads just incredibly annoying, especially now that they've decided to stick two ads in front of videos." }, { "start": 546.4, "end": 549.24, "text": " I just don't want to bug users with that." }, { "start": 549.24, "end": 552.16, "text": " If you look at what I gain from it, it's not that much." }, { "start": 552.16, "end": 556.72, "text": " Any money that I would make, I would like to sort of reinvest into the channel." }, { "start": 556.72, "end": 559.48, "text": " And right now, I just don't have any requirements." }, { "start": 559.48, "end": 564.0400000000001, "text": " That might change in the future, maybe once we get to 800,000 subscribers." }, { "start": 564.0400000000001, "end": 567.28, "text": " All right. 
That was it for YouTube analytics of this channel." }, { "start": 567.28, "end": 569.52, "text": " I hope you are still enjoying this content." }, { "start": 569.52, "end": 571.64, "text": " If you're not subscribed, please do." }, { "start": 571.64, "end": 574.6, "text": " Next update will be at a hundred thousand." }, { "start": 574.6, "end": 579.24, "text": " And I hope that everything is as enjoyable as ever." }, { "start": 579.24, "end": 581.8399999999999, "text": " Thank you for watching. Thank you for subscribing." }, { "start": 581.84, "end": 598.84, "text": " Thanks for being here and to the future." } ]
nQDZmf2Yb9k
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
PonderNet: Learning to Ponder (Machine Learning Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "pondernet", "deepmind", "pondernet learning to ponder", "deepmind pondernet", "pondernet explained", "dynamic computation", "deep learning classic algorithms", "halting probability", "deep learning recurrent computation", "dynamic recurrent network", "broader impact", "deep network learning to stop" ]
#pondernet #deepmind #machinelearning Humans don't spend the same amount of mental effort on all problems equally. Instead, we respond quickly to easy tasks, and we take our time to deliberate hard tasks. DeepMind's PonderNet attempts to achieve the same by dynamically deciding how many computation steps to allocate to any single input sample. This is done via a recurrent architecture and a trainable function that computes a halting probability. The resulting model performs well in dynamic computation tasks and is surprisingly robust to different hyperparameter settings. OUTLINE: 0:00 - Intro & Overview 2:30 - Problem Statement 8:00 - Probabilistic formulation of dynamic halting 14:40 - Training via unrolling 22:30 - Loss function and regularization of the halting distribution 27:35 - Experimental Results 37:10 - Sensitivity to hyperparameter choice 41:15 - Discussion, Conclusion, Broader Impact Paper: https://arxiv.org/abs/2107.05407 Abstract: In standard neural networks the amount of computation used grows with the size of the inputs, but not with the complexity of the problem being learnt. To overcome this limitation we introduce PonderNet, a new algorithm that learns to adapt the amount of computation based on the complexity of the problem at hand. PonderNet learns end-to-end the number of computational steps to achieve an effective compromise between training prediction accuracy, computational cost and generalization. On a complex synthetic problem, PonderNet dramatically improves performance over previous adaptive computation methods and additionally succeeds at extrapolation tests where traditional neural networks fail. Also, our method matched the current state of the art results on a real world question and answering dataset, but using less compute. Finally, PonderNet reached state of the art results on a complex task designed to test the reasoning capabilities of neural networks. Authors: Andrea Banino, Jan Balaguer, Charles Blundell Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there! Today we'll look at PonderNet: Learning to Ponder by Andrea Banino, Jan Balaguer and Charles Blundell. On a high level, this paper introduces a recurrent architecture, or a principle of recurrent computation for deep networks, that essentially says the network recurrently computes its output at each step, and at each step it can decide to stop because it is satisfied with the answer that it has. The idea is that on a complex task you can compute for many steps, because it requires many steps of thinking, and then give the output, and on an easy task the network can decide to output right away because it has already computed the solution. This decision can be done on a per-sample basis, so for each sample the network can decide when it's time to give the final output. This is not necessarily a paper that just makes something bigger and then pushes state-of-the-art on some benchmark, and that's why it piqued my interest: it tries to rephrase a little bit how we think about the connection of deep learning and classic algorithms. Essentially, this is a dynamic if-condition in the algorithm that decides when it's time to stop, and I appreciate that not everything has to be state-of-the-art pushing; this is simply a cool method to do something that's relatively new. Of course, things like this have been done before, and the paper discusses at length how it is different from other papers that do similar things, and it does push state-of-the-art, just not on benchmarks that you might be super duper familiar with. But yeah, it's a cool paper, it's a short paper, the idea is pretty simple, and it appears to work, and that's exciting stuff. So we're going to dive into this paper and have a look at what's new in this particular model and how it works. As always, if you have feedback, leave a comment, subscribe, I'd be happy for that, and thanks for being here. Okay, so in the abstract they say that in a standard neural network the amount of computation used grows with the size of the inputs, but not with the complexity of the problem being learned. Which is true: in a standard neural network you have a forward pass, be that in a fully connected neural network where you have your input and then you go layer, layer, layer, layer, and then you have your output. This computation is always the same, no matter the input. Even in a recurrent neural network, you have kind of an input at the beginning, you have a layer, then you have an input again that goes into the same layer, and then the next input goes into the same layer; even a recurrent neural network usually just does the same forward pass. This is a little bit different if you have something like a language model that can emit at some point an end-of-sentence token, at which point the computation essentially stops, but that's a slightly different thing than what we consider right here. Here we consider a neural network that has to find the answer to a particular problem, and we're going to see the problems later, but one problem that they present is the parity problem.
So in the parity problem you get a string of zeros and ones (I think there are also negative ones in there, but those are a bit of a distraction), and the answer you're looking for, for the string as a whole, is the parity: is the number of ones in this string odd or even? This requires, let's say, an integrated view of computation; this is essentially a classic algorithm that you have to perform over this string, and neural networks, as good as they are in computer vision and speech recognition, have trouble with simple algorithmic tasks like this. So the idea of this paper is that it doesn't make sense to apply a neural network that always does the same amount of compute when I shove this sequence in. It doesn't make sense because if there is just a single one in the string and I see that right away, I can give the answer right away. However, if it's a long string with a bunch of ones, I might need to think about the problem for a while, and thus adapt the number of computation steps I do in my head. Looking at the string, I might first connect these two, that's two; then connect these two, that's two again; then connect those, that's four; there's nothing here, nothing here, okay, four. That's kind of like one, two, three steps of computation, whereas if the string were shorter and more regular, I might need less computation. So they say: to overcome this limitation we introduce PonderNet, a new algorithm that learns to adapt the amount of computation based on the complexity of the problem at hand. PonderNet learns end-to-end the number of computational steps to achieve an effective compromise between training prediction accuracy, computational cost and generalization. We're going to see how they do this. The experimental tasks in this paper are sort of constructed tasks where you know you need this dynamic computation; they're not going to compete on ImageNet or something like this. So the majority of the paper contrasts their model against this ACT model, adaptive computation time. There have been previous attempts at dynamic computation time, yet it turns out they're kind of finicky, and this PonderNet model has a bunch of advantages. They say they present PonderNet, which builds on the previous ideas. It's fully differentiable, which allows for low-variance gradient estimates, unlike REINFORCE; a couple of previous attempts have used reinforcement learning, just learning the number of steps or when to stop with RL, and that, as you might know, is very noisy. It has unbiased gradient estimates, which is also unlike other models in the past. They say this has consequences in all aspects of the model: in PonderNet, the halting node predicts the probability of halting conditional on not having halted before. This kind of seems obvious, but apparently no one has done it this way so far. So what do we need for an architecture for PonderNet? Essentially this: you need an input x, and x is transformed into a hidden state, let's say the hidden state at step one. The hidden state goes into s, the so-called step function, and that's the recurrent function (you can also feed x into the step function at every step). Into this step function you can put anything you want: you can put a CNN inside, you can treat it as an LSTM; since we're going to apply it recurrently, anything can be the step function as long as it can be applied recurrently. The step function gives you the next hidden state, but it also gives you the output at that particular point in time, y1, and it gives you this number lambda 1.
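To make that concrete, here is a minimal PyTorch-style sketch of what such a step function could look like. This is my own illustration under stated assumptions, not the authors' code: the GRU cell, the two linear heads and all names are made up, and any recurrent core could stand in for the GRU.

```python
import torch
import torch.nn as nn

class PonderStep(nn.Module):
    """One ponder step: consumes the input x and the current hidden state,
    and returns the next hidden state, an output guess y_n, and the
    conditional halting probability lambda_n. Sketch only."""

    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.cell = nn.GRUCell(input_dim, hidden_dim)   # any recurrent core works here
        self.output_head = nn.Linear(hidden_dim, output_dim)
        self.halt_head = nn.Linear(hidden_dim, 1)

    def forward(self, x, h):
        h_next = self.cell(x, h)
        y_n = self.output_head(h_next)                  # the answer if we halt now
        lambda_n = torch.sigmoid(self.halt_head(h_next)).squeeze(-1)  # in (0, 1)
        return h_next, y_n, lambda_n
```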
So what are these? From here you could apply the step function again: you'd get h3, you'd get the output y2, and you'd get lambda 2. So it seems like it's just a recurrent neural network, and if I were to push this to the end, h, h, h, and then at the end take y_N and treat that as the output of the computation, then it would be just a recurrent neural network. However, as we said, the network can in this case decide to stop anywhere in between. For example, if it decides to stop at this particular step, then that would be the output of the computation. So at every computation step the network computes a potential output, a suggestion for an output, and then it also thinks about whether or not it really wants to answer with that output, or whether it wants to continue and do another step, essentially take another shot at answering the question, because it doesn't yet have the correct answer. And that's where this lambda comes in. The lambda is essentially a probability of stopping. The output lambda is a number between zero and one, and that is the probability of halting, given that the network hasn't halted before: whenever this is one, the network will halt, conditioned on the fact that it hasn't previously halted. As I said, it seems obvious to formulate it like this, because you can only halt if you haven't previously halted, but apparently previous models have simply output a number that is the probability of halting in general, which doesn't give you an unbiased gradient if you try to backpropagate through it. So if you consider the lambdas like this, and you unroll an entire run, then you can get the probability of halting at any particular step; that is what the previous networks would have estimated directly, whereas this network estimates these conditional lambdas. You can see how you can compute the probability that, for example, the network halts after three steps: by multiplying up the probability that the network has not halted at step one, has not halted at step two, and then the probability that it halts at step three, given that it hasn't halted at the previous steps. That is a valid probability distribution; it's a generalization of the geometric distribution, and essentially it encapsulates a decision tree. At the beginning you can halt or continue; if you continue, then again you can halt or continue, and so on. So if you want the probability that the network halts after the third step, you consider that node in the tree, which means you multiply up the probabilities along the path to it, and that's the probability that it halts after three steps.
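To make this tree of probabilities concrete, here is a tiny sketch (my own, in plain Python) that turns the conditional lambdas into the unconditional halting distribution p_n = lambda_n * prod_{j<n} (1 - lambda_j):

```python
def halting_distribution(lambdas):
    """Convert conditional halting probabilities lambda_n into the
    unconditional probabilities p_n of halting exactly at step n."""
    probs = []
    not_halted = 1.0   # probability of having survived all previous steps
    for lam in lambdas:
        probs.append(not_halted * lam)
        not_halted *= (1.0 - lam)
    return probs

print(halting_distribution([0.1, 0.5, 0.9]))  # [0.1, 0.45, 0.405]
```

With lambdas (0.1, 0.5, 0.9) the distribution is (0.1, 0.45, 0.405), and the remaining 4.5% is the probability of not having halted after three steps.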
So the network can output this lambda at every step, and if the lambda is high, the network halts. At inference, this is done probabilistically; at training time, it is done a little bit differently. I hope you can see how inference works: you simply go forward and you get a lambda. Maybe the lambda in the first step is 0.1, and then you flip a biased coin: if it comes up heads, you stop, with a probability of 0.1; if it comes up tails, which has a 0.9 probability, you continue. Then maybe at the second step it's 0.05, so maybe you stop, but probably you won't. And then at the third step it comes up 0.9; the network thinks, yeah, I should probably stop here, and you sample from that, and indeed, in nine out of ten cases you actually stop there. So that's inference.
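In code, inference could look roughly like the following sketch (again my own, handling a single sample, reusing the hypothetical PonderStep from above and assuming some maximum step budget of at least one step):

```python
import torch

@torch.no_grad()
def ponder_inference(step, x, h, max_steps=20):
    """Run ponder steps until a biased coin flip says 'halt'.
    `step` is assumed to return (h_next, y_n, lambda_n) as sketched above;
    this version handles a single sample (batch size 1)."""
    for n in range(1, max_steps + 1):
        h, y_n, lambda_n = step(x, h)
        # Halt with probability lambda_n, conditional on not having halted yet.
        if torch.bernoulli(lambda_n).item() == 1.0:
            return y_n, n            # the answer, and how many steps it pondered
    return y_n, max_steps            # budget exhausted: return the last guess
```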
How about training? During training, what we do is again input x, our input, into an encoder to get a hidden state. But now we unroll the network for a number of steps, independent of the halting probabilities. Let's say we unroll it for five steps, and at every point we get an output and a lambda: y1, y2, y3, y4, and lambda 1, lambda 2, lambda 3, lambda 4. So at training time we simply unroll until a given step. Now, there are some technical difficulties with unrolling for a finite number of steps, like how you normalize the probability distribution, because essentially this tree can go on until infinity. They find that you can simply unroll until the remaining probability, the probability mass you haven't used yet, is really small, and then just load all of that onto the last step; but these are technical difficulties that you really only care about when you go and implement this. So we unroll for a number of steps, and then we consider all the outputs at the same time. This is one big difference, I believe, to one of the previous networks, to ACT. What ACT does is it always unrolls, and then the output of the network is simply a weighted output, the sum over the lambda_i times y_i. So the output of ACT is always a weighting between the different steps, and the network can decide how it wants to weight the individual outputs. Here it's different: here the output is really either y1, or y2, or y3, or y4. And in order to pack this into a single loss function, we simply ask: what would be the loss if we answered y1, and we weigh that by the probability that the network halts there; what would be the loss of y2, weighted by its probability; and so on. So essentially we compute the expected loss, given the probabilities that the network has output. I hope you can see the difference between the two approaches: both somehow sum up the outputs weighted by some factors, but one considers the actual output of the network to be a weighted combination of the outputs of the individual steps, whereas the other says, no, the network output is actually one of them, we just don't know which one; ergo, for the loss we need to compute the expectation of the loss. That seems to be a more reasonable formulation, though in hindsight you can say many things are reasonable if they work better. So during training you simply compute this expected loss, weighted by the probabilities, and then you can backpropagate through that.
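A sketch of that training loss, under the same assumptions as before (hypothetical names; `loss_fn` is assumed to return a per-sample loss, and the remaining probability mass is dumped onto the last step, as described above):

```python
def expected_reconstruction_loss(step, x, h, y_target, loss_fn, n_steps):
    """Unroll for n_steps and return sum_n p_n * L(y_n, y_target),
    where p_n is the unconditional probability of halting at step n.
    Gradients flow both into the outputs y_n and into the lambdas."""
    not_halted = 1.0   # probability mass of not having halted yet
    total = 0.0
    for n in range(n_steps):
        h, y_n, lambda_n = step(x, h)
        if n == n_steps - 1:
            p_n = not_halted                     # leftover mass goes to the last step
        else:
            p_n = not_halted * lambda_n
            not_halted = not_halted * (1.0 - lambda_n)
        total = total + p_n * loss_fn(y_n, y_target)
    return total
```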
Now, if we backprop this, we have of course two paths of backpropagation, because both of these things go into the loss: the loss is, how bad is this output, times how probable it was. So the backpropagation actually attacks along two different paths. The backprop goes into y, because you want the network to compute a better output, but it also goes into the lambda, because you want the network to get better at estimating when its output is good and when it is not. I see this as a little bit of a tricky situation, because usually, just from experience with other papers, this seems a bit unstable: you backprop through two different things that are multiplied together, and the network can now trade off one versus the other. You might think that's desirable: it can either choose to make its output better, if it wants to keep the probability of outputting this thing high, or it can just reduce the probability that it's going to output whatever it wants to output, and then it doesn't necessarily have to make the output itself correct, because the loss won't be as high for that particular thing if the probability of outputting it is low. So the network essentially has a choice. As I said, this might be desirable, but usually it's kind of unstable, and I think, and this is just my personal opinion, a lot of why this might work might rest on the balance between the complexity of making y better versus adjusting these probabilities. If the output y is very complex, then the same gradient signal might mean much less for it than for simply reducing the probability. So if the output itself is very complex, not the problem, but just how to arrive at an output, say the output is an entire pixel map with dependencies and so on, the network might just choose to always reduce the probability, because it's like, well, how am I going to make this better at all? I don't know; I can just reduce the probability that I'm going to output this crap. And it would probably do this for every single step, which, if it's a complex problem, makes sense, but still, that would be a bit of my fear here, and this is not really discussed in the paper itself. So I think the fact that this works might rely on a balance of the complexity, or information content, that you get from the loss at the output node versus the loss at the probability node. They also discuss things like the maximum number of pondering steps and so on, which I think is a technical detail. And this is interesting: there you have the training loss as we just discussed. We've just discussed this first part, which they call the reconstruction loss, because you have some desired y and a y that comes from the network. And I was a little bit imprecise in my formulation earlier: of course, in the expectation you don't want to take the lambdas, you actually want to take the probabilities that each thing happens, which means you need to compute this p number going along the tree, as we did, because p is the actual probability that you reach that node, whereas lambda is only the conditional probability that you reach a node given you were at the previous node. So consider that, if you are crazy enough to implement things straight as I speak in these videos; lucidrains, shout out. Then there is the second part of the loss, and you can see it is weighted by a hyperparameter, so you're going to trade off two losses here. Because, as we saw, the network can either continue or not continue, and it might actually be easier for the network, if the loss of the output is reasonably complex, to simply say, well, in this case I'm just always going to reduce my probabilities. This term here is what counteracts that: there is a regularization term on these probabilities. We regularize with the KL divergence, which is sort of a distance measure (don't tell this to a mathematician, it's a divergence, not a distance), between the distribution that the network outputs for the steps and a geometric distribution with parameter lambda_p, where lambda_p is another hyperparameter. So what does that mean? What you regularize towards, for the number of steps the network thinks, is a geometric distribution. A geometric distribution computes exactly this tree that we computed: at each step you can stop, and the distribution tells you the probability that you stop after one step, two steps, three steps, considering the fact that in order to stop after four steps, you already have to have made three non-stopping steps. Except, in the geometric distribution, the probability of continuing is always the same, whereas our network can output a different probability for each node in the tree; otherwise there'd be no point, we could simply put in the fixed distribution. What that probability of stopping at each point is, that's exactly this lambda_p hyperparameter. So you regularize with a KL towards this, which means that you tell the network: look, here is a reasonable distribution of when you should stop. It should be somewhat probable that you stop after one step, and somewhat probable, if you've already done one step, that you stop after two steps, and so on. You give it a default probability of stopping after each step. If this is 0.1, for example, you tell the network: at any given step, there's a default 10% chance that you should stop; I, as the designer of the algorithm, think that's a reasonable prior to have. Now the network can decide differently: it can decide, no, I actually want to stop way earlier, putting much more emphasis on the first steps, which in turn, because you need to normalize, puts less emphasis on the later steps. So the network can still decide to violate this prior if that reduces the loss enough.
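A sketch of that regularizer (my own illustration; the prior is renormalized over the finite unroll, matching the trick of loading the leftover mass onto the last step):

```python
import torch

def kl_to_geometric_prior(p, lambda_p, eps=1e-8):
    """KL(p || prior) between the network's halting distribution p
    (a length-N tensor of unconditional halting probabilities) and a
    geometric prior that halts at step n with prob lambda_p * (1 - lambda_p)**n."""
    steps = torch.arange(p.shape[0], dtype=p.dtype)
    prior = lambda_p * (1.0 - lambda_p) ** steps
    prior = prior / prior.sum()                # renormalize over the finite unroll
    return (p * ((p + eps) / (prior + eps)).log()).sum()
```

The full training loss would then be something like `expected_reconstruction_loss(...) + beta * kl_to_geometric_prior(p, lambda_p)`, with beta being the trade-off hyperparameter just mentioned.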
So, as I said, this is a trade-off. There are two hyperparameters: the shape of the geometric distribution and the amount that you regularize by this KL divergence. And now we come to the experimental results, and these are pretty neat, because they're straightforward experimental results. They're not super large-scale results or anything like this, but they show that, look, on tasks where we sort of know that dynamic computation has an advantage, our model outperforms both previous attempts at dynamic computation and especially networks that have no dynamic computation built in whatsoever. This is the parity task, which we're going to look at. The orange is ACT, the previous work that they compare with most, that is most similar to them. In terms of accuracy, PonderNet beats this network by quite a bit; also, appreciate the error bars in this one, they almost overlap, but they don't, so you can say that you're definitely better. And interestingly, in the number of compute steps, even though the error bars overlap as well here, PonderNet itself needs fewer compute steps. I don't know exactly why that happens, but you can speculate that it is because PonderNet fixes on a single answer, it outputs a single answer, whereas ACT outputs this weighting of things. Therefore, when ACT outputs, say, the first step's answer, it always needs to consider that this has to be compatible with potential future steps. So just by how ACT formulates its output, it becomes a lot less dynamic, because the output is always a weighting of different outputs, and therefore at the first steps it can't just output what it thinks is the correct solution; it already has to incorporate the future and estimate, well, if I'm going to continue computing, there's going to be stuff added to my output, and it has to take this into account. So it can, ironically, be a less dynamic network, and that's why I think PonderNet might need fewer steps here. I might be totally wrong, though. So this is the parity task, and specifically they train with string lengths between one and 49, so from length one, instead of the length eight we had before, up to length 49. And this is a little bit important, I think, because their training set contains all of these lengths, which is a bit of an experimental trick. In order for your network to learn the general principle of parity, independent of string length, you construct the training set to be a distribution over string lengths, rather than just strings of a fixed length, and then you assess their parity. That's maybe a bit of a lesson: if you do experiments, construct your tasks themselves such that they help find the correct solution. So they train with strings of length one up to 49, and then they try to extrapolate, which is this part B right here. First, in A, they train on small strings and test on small strings; in B, they train on the same small strings, up to length 49, but then, as I understand it, they test on lengths from 50 up to, I think it says 96 somewhere, just longer strings than they have been trained with.
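For intuition, here is a hypothetical sketch of how such a parity training set could be generated. The exact encoding is my guess from the description: fixed-size vectors whose first `length` entries are random plus or minus ones, zero-padded to the maximum length, labeled by the parity of the number of ones.

```python
import numpy as np

def make_parity_batch(batch_size, max_len=49, vec_len=96, seed=None):
    """Hypothetical generator for the parity task. Each sample is a
    zero-padded vector with `length` random +/-1 entries; the label is
    1 if the number of +1 entries is odd, else 0."""
    rng = np.random.default_rng(seed)
    x = np.zeros((batch_size, vec_len), dtype=np.float32)
    y = np.zeros(batch_size, dtype=np.int64)
    for i in range(batch_size):
        length = rng.integers(1, max_len + 1)    # lengths 1..49, as in training
        bits = rng.choice([-1.0, 1.0], size=length)
        x[i, :length] = bits
        y[i] = int((bits == 1.0).sum() % 2)      # parity of the ones
    return x, y
```

For the extrapolation split, you would instead sample lengths between 50 and `vec_len`.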
Now that the setup is clear, it's also clear why they used different length strings in the training set and not just fixed-length strings: there's a reasonable chance the network would not learn to extrapolate from just one or two particular string lengths. Nevertheless, they test how the network extrapolates to longer strings, and you can see right here that ACT, even though it has also been trained on the dynamic-length strings, is at 50%, which is pure chance; it's a parity test, the output is either odd or even, so ACT just gets random chance as a result. Whereas PonderNet, as you can see, has an accuracy of about 0.9, which I guess is pretty good, especially on strings that are longer than anything it has ever seen. What can we read from this? I'm not exactly sure. There's always the possibility that they've just trained ACT wrong or something like this, but it's also reasonable to say that the previous models, either they didn't learn the concept, or their output is just weird in the way ACT's is, or, since ACT has biased gradient estimates and PonderNet doesn't, yada yada. What we do know is that in their experiments, PonderNet was actually able to solve the extrapolation task. The interesting thing is that if you look at the number of compute steps done, PonderNet at inference time computes for between 2.5 and 3 steps, let's say about three steps, on the smaller strings; that's what it decides on. Yet the same model, trained on the same strings, all of a sudden raises its compute to five steps on the longer strings, whereas ACT, which doesn't work on this task anyway, just decides to stick around two or three steps, as it does in training. So the authors sort of claim that this is good evidence that PonderNet learns to solve the actual task, and as the task gets more complex, PonderNet needs more steps to think about it. And this might be exactly what we saw before: you have some string of zeros and ones, and during training you learn how to take one of these, maybe in multiple steps, and get an output. But now all of a sudden you have a longer string. Well, what you can do is compute an output for this part and an output for that part, and now you have two outputs, and you can learn a series of steps to transform the two outputs into a single output, and that might just need one or two more computation steps, which is exactly what we see happening. So it's a good indication that something like this is going on. I would be wondering, pondering, one might say, haha, how this actually happens: what do the individual computation steps represent? In this parity task, for example, is the network going about the task in a hierarchical fashion, like I've sketched here, or is it something different? Is it going about it in a purely recurrent fashion where, even though, as I understand it, we input the entire string at the beginning, it only looks at the string position by position? How does this work? How does the scaling behave in general as you go up in length?
They only show small strings and large strings, but it would be really interesting to introspect this model a little bit more than simply showing end results of the individual tasks. Okay. What they also find is that the hyperparameter for how you regularize the shape, which we've seen up here, doesn't seem to be terribly important. Again, they compare to ACT, which has another hyperparameter that does a similar thing, regularizing the shape of the desired halting distribution, which they call tau. Tau doesn't mean a particular thing; they say it does not have any straightforward interpretation, though I guess the authors of ACT might disagree. But as you can see, if I draw the means, there is a region where a selection of tau performs well, though that is all around sort of the same value, like 5e-4 or something like this, and for the other values you might set it to, it simply doesn't work at all. So, the authors claim, you have to hit this tau pretty precisely in order to get the network to do anything. Whereas, they claim, in PonderNet this variable, lambda_p, is between 0 and 1, not just an arbitrary value, because it's a probability, and it kind of works for most settings, except this one right here, where you essentially bias the network to just output everything after one step. The trick is that for the geometric distribution you take the inverse, one over lambda_p, and that gives you the expected number of steps that the network would compute according to this prior. So when you put in 0.9, that would essentially be a single step that you ask the network to do, but for all the other values, well, judge for yourself whether this is really good. What you can say is that it goes from 0 to 1, so you have a clear range, and for most of that range the thing seems to work okay-ish. And what they highlight is even down here: even if they set lambda_p to 0.1, which would essentially bias the network towards 10 steps, the prior being please do 10 steps of computation in this parity task, the network doesn't do 10 steps; it also goes towards 3, 4 or 5 steps most of the time. So the network learns to be somewhat robust to this prior distribution. I guess that's also largely a function of the hyperparameter where you trade off the two losses; we don't know the effect of that just from the paper. But even if they set that really low, the network is kind of robust to the choice of lambda_p, and that's good news, because it means you wouldn't have to regularize the model super heavily in order to get it to work.
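That inverse relationship, expected steps equal to one over lambda_p, is just the mean of the geometric distribution; a quick sanity check:

```python
for lambda_p in (0.9, 0.5, 0.2, 0.1):
    print(f"lambda_p = {lambda_p}: expected halting step = {1.0 / lambda_p:.1f}")
# lambda_p = 0.9 biases towards ~1 step, lambda_p = 0.1 towards 10 steps.
```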
They go into two other tasks right here, which again aren't tasks that you might necessarily know; they are tasks where this type of computation shines particularly. As I said, I see the paper more as an interesting niche subtask, you might say, of connecting deep learning and classic algorithms. There are a number of things that I think you can do to extend this. It's completely thinkable that the loss might be a bit different, that you don't ask the network to output the direct answer at each point, but you might want to attach memories and so on at these output nodes, or you might want them to output intermediate results or something like this. Another thing you could do is work with adversarial losses instead of reconstruction losses or whatnot, so you could have some sort of a GAN going on inside of this in order to decide on the stopping probability. There's lots of stuff one can fiddle around with in this type of network, and you can even think of crazier architectures, I don't know, Hopfield-like structures, where you decide how far you iterate, because you may not always want to iterate until a fixed point. I don't know, I'm just talking crap right now. Okay, one last shout-out to the broader impact statement of this paper: what a beautiful, beautiful piece of writing. Essentially they say, well, this enables neural networks to adapt their computational complexity to the tasks they are trying to solve. Neural networks are good, but currently they require much time and expensive hardware, and they often fail; PonderNet expands their capabilities. They say, look, it can do this, it can do that, which makes it particularly well suited for platforms with limited resources, such as mobile phones, which is a good thing. It can also generalize better, which means it's better for real-world problems. And they say: we encourage other researchers to pursue the questions we have considered in this work. We believe that biasing neural network architectures to behave more like algorithms, and less like flat mappings, will help develop deep learning methods to their full potential. And that is indeed the broader impact of this work; that is the impact it had on me, and that's the impact it should have. At today's conferences it might be kicked out, because of course it doesn't say technology good, technology bad, technology biased, but, you know, respect for that. And that was it for me. Let me know what you think, and bye bye.
[ { "start": 0, "end": 5.48, "text": " Hello there! Today we'll look at PonderNet Learning to Ponder by Andrea Bonino," }, { "start": 5.48, "end": 11.48, "text": " Jan Ballager and Charles Blundell. This paper on a high level introduces a" }, { "start": 11.48, "end": 17.92, "text": " recurrent architecture or a principle of recurrent computation for deep networks" }, { "start": 17.92, "end": 23.580000000000002, "text": " that essentially says the network recurrently computes its output at each" }, { "start": 23.580000000000002, "end": 29.400000000000002, "text": " step and at each step it can decide to stop now because it is satisfied with" }, { "start": 29.4, "end": 35.8, "text": " the answer that it has. The idea is that at a complex task you can compute for" }, { "start": 35.8, "end": 42.04, "text": " many steps because it requires many steps of thinking and then give the" }, { "start": 42.04, "end": 47.08, "text": " output and for an easy task the network can decide to output right away because" }, { "start": 47.08, "end": 52.64, "text": " it already has computed the solution. This decision can be done on a per" }, { "start": 52.64, "end": 57.12, "text": " sample basis so for each sample the network can decide when it's time to" }, { "start": 57.12, "end": 64.32, "text": " give the final output. This is not necessarily a paper that just" }, { "start": 64.32, "end": 68.8, "text": " makes something bigger and then pushes state-of-the-art on some benchmark and" }, { "start": 68.8, "end": 75.47999999999999, "text": " that's why it piqued my interest is that it tries to rephrase a little bit how we" }, { "start": 75.47999999999999, "end": 79.88, "text": " think about the connection of deep learning and algorithms like classic" }, { "start": 79.88, "end": 86.47999999999999, "text": " algorithms by themselves. Essentially this is a dynamic if condition in this" }, { "start": 86.48, "end": 91.76, "text": " algorithm that decides when it's when it's time to stop and I appreciate that" }, { "start": 91.76, "end": 97.12, "text": " you know it not everything has to be state-of-the-art pushing here this is" }, { "start": 97.12, "end": 103.76, "text": " simply a cool method to do something that's relatively new. Of course things" }, { "start": 103.76, "end": 108.28, "text": " like this have been done before and they are discussed at length in this paper" }, { "start": 108.28, "end": 114, "text": " how this paper is different from other papers that do similar things and it" }, { "start": 114, "end": 118.88, "text": " does push state-of-the-art just not on benchmarks that you might be super duper" }, { "start": 118.88, "end": 123.68, "text": " familiar with. But yeah it's it's a cool paper it's a short paper the idea is" }, { "start": 123.68, "end": 130.76, "text": " pretty simple and it appears to work and yeah that's exciting stuff. So we're gonna" }, { "start": 130.76, "end": 135.04, "text": " dive into this paper have a look have a look at what's new in this particular" }, { "start": 135.04, "end": 141.92000000000002, "text": " model how it works and as always if you have feedback leave a comment subscribe" }, { "start": 141.92, "end": 148.44, "text": " I'd be happy for that and yeah thanks for being here. 
Okay so in the abstract" }, { "start": 148.44, "end": 153.92, "text": " here they say that in a standard neural network the amount of computation used" }, { "start": 153.92, "end": 159.83999999999997, "text": " grows with the size of the inputs but not with the complexity of the problem" }, { "start": 159.83999999999997, "end": 165.56, "text": " being learned. So which is true right in a standard neural network you have a" }, { "start": 165.56, "end": 171.07999999999998, "text": " forward pass be that in a fully connected neural network where you have" }, { "start": 171.08, "end": 174.20000000000002, "text": " you know you have your input and then you go layer layer layer layer layer" }, { "start": 174.20000000000002, "end": 179.44, "text": " and then you have your output. This computation here is always the same no" }, { "start": 179.44, "end": 185.8, "text": " matter the input even in a recurrent neural network right you have kind of an" }, { "start": 185.8, "end": 189.72000000000003, "text": " input right here at the beginning you have a layer then you have an input" }, { "start": 189.72000000000003, "end": 193.60000000000002, "text": " again and then you have this that goes into the same layer and then you have" }, { "start": 193.60000000000002, "end": 198, "text": " the next input that goes into the same layer even a recurrent neural network" }, { "start": 198, "end": 205.24, "text": " usually usually just does the same forward pass. This is a little bit" }, { "start": 205.24, "end": 209.96, "text": " different if you have something like a language model that can emit at some" }, { "start": 209.96, "end": 216.4, "text": " point a you know a stop token or an end of sentence token at which point the" }, { "start": 216.4, "end": 221.6, "text": " computation essentially stops but it's a little bit of a different thing than we" }, { "start": 221.6, "end": 227.52, "text": " consider right here. Right here we consider a neural network that has to" }, { "start": 227.52, "end": 235.56, "text": " find the answer to a particular problem and we're gonna see the problems down" }, { "start": 235.56, "end": 241.32000000000002, "text": " but one problem that they present is the parity problem. 
So the parity problem is" }, { "start": 241.32000000000002, "end": 246.56, "text": " you get a string of zeros and ones I think there is also negative ones in" }, { "start": 246.56, "end": 251.28, "text": " there but I think they're a bit for a distraction and the answer you're" }, { "start": 251.28, "end": 259.16, "text": " looking for is as a whole is the parity so the amount of ones in this string odd" }, { "start": 259.16, "end": 267.12, "text": " or even right so this requires a let's say an integrated view of computation" }, { "start": 267.12, "end": 271.52, "text": " this is essentially a classic algorithm that you have to perform over this" }, { "start": 271.52, "end": 276.8, "text": " string and neural networks as good as they are in computer vision and speech" }, { "start": 276.8, "end": 284.48, "text": " recognition they are having trouble with simple algorithmic tasks like this so" }, { "start": 284.48, "end": 292.40000000000003, "text": " the idea of this paper here is that well it doesn't make sense to apply a neural" }, { "start": 292.40000000000003, "end": 296.36, "text": " network that always does the same amount of compute right I shove this sequence" }, { "start": 296.36, "end": 302.36, "text": " just like in here it doesn't make sense because you know if there is just a" }, { "start": 302.36, "end": 306.92, "text": " single one in the string and I see that right away I can give the answer right" }, { "start": 306.92, "end": 311.88, "text": " away however if it's a long string and it has a bunch of ones I might" }, { "start": 311.88, "end": 317.48, "text": " need to think about this problem for a while and thus adapt the number of" }, { "start": 317.48, "end": 322.84000000000003, "text": " computation steps I do in my head I might you know first if I look at this" }, { "start": 322.84000000000003, "end": 327.16, "text": " string I might first connect these two you know and then that's two and then I" }, { "start": 327.16, "end": 330.64, "text": " might connect these two that's two again and then I might connect these two" }, { "start": 330.64, "end": 334.71999999999997, "text": " that's four there's nothing here there's nothing here right okay four so that's" }, { "start": 334.71999999999997, "end": 341.03999999999996, "text": " kind of like one two three steps of computation so that's the the rough idea" }, { "start": 341.03999999999996, "end": 346, "text": " whereas this if the string was shorter and and more regular I might need less" }, { "start": 346, "end": 354.76, "text": " computation so they say to overcome this limitation we introduce ponder net a new" }, { "start": 354.76, "end": 358.59999999999997, "text": " algorithm that learns to adapt the amount of computation based on the" }, { "start": 358.6, "end": 365.44, "text": " complexity of the problem at hand ponder net learns end to end the number of" }, { "start": 365.44, "end": 369.44, "text": " computational steps to achieve an effective compromise between training" }, { "start": 369.44, "end": 375.84000000000003, "text": " prediction accuracy computational cost and generalization so we are going to" }, { "start": 375.84000000000003, "end": 383.36, "text": " see how they do this yeah exactly so they then they go into the the tasks" }, { "start": 383.36, "end": 388.84000000000003, "text": " their experimental tasks in this paper are sort of these constructed tasks" }, { "start": 388.84000000000003, "end": 393.8, "text": " where people know you need this dynamic computation they're not gonna they're" }, { 
"start": 393.8, "end": 400.6, "text": " not gonna compete on like image net or something like this so the majority of" }, { "start": 400.6, "end": 410.04, "text": " the paper is in in contra posing their model against this a CT model the" }, { "start": 410.04, "end": 417.04, "text": " adaptive computation time I believe so there have been previous attempts at" }, { "start": 417.04, "end": 426.44, "text": " doing dynamic computation time yet either they have so it turns out they're" }, { "start": 426.44, "end": 432.12, "text": " kind of finicky and this model here this pondernet model has a bunch of" }, { "start": 432.12, "end": 438.48, "text": " advantages they say they present pondernet that builds on the previous" }, { "start": 438.48, "end": 443.28000000000003, "text": " ideas it's fully differentiable which allows for low variance gradient" }, { "start": 443.28000000000003, "end": 448.48, "text": " estimates unlike reinforce so a couple of previous attempts have been with" }, { "start": 448.48, "end": 453.08000000000004, "text": " reinforcement learning so let's just learn the number of steps or when to" }, { "start": 453.08000000000004, "end": 459.52000000000004, "text": " stop using reinforcement learning and that as you might know is very very" }, { "start": 459.52000000000004, "end": 466.12, "text": " noisy it has unbiased gradient estimates which is also unlike other models in the" }, { "start": 466.12, "end": 473.48, "text": " past and yeah so they say this has consequences in all three in all aspects" }, { "start": 473.48, "end": 480, "text": " of the model in pondernet the halting node predicts the probability of halting" }, { "start": 480, "end": 485.68, "text": " conditional on not having halted before this kind of seems obvious but" }, { "start": 485.68, "end": 490.08, "text": " apparently that no one has done this so far so what do we need for an" }, { "start": 490.08, "end": 496.24, "text": " architecture for pondernet they say this down here essentially that's the" }, { "start": 496.24, "end": 500.84, "text": " architecture it's an inline formula which you know but that's the" }, { "start": 500.84, "end": 509.08, "text": " architecture so what you need is you need an input okay you need an input" }, { "start": 509.08, "end": 518.9, "text": " which is X your input and X is transformed into a hidden state this is" }, { "start": 518.9, "end": 524.92, "text": " let's say the hidden state at step one those two or you can also reformulate" }, { "start": 524.92, "end": 530.36, "text": " this as just a hidden state the hidden state is going into s the so-called step" }, { "start": 530.36, "end": 534.72, "text": " function and that's the recurrent function right here so into this step" }, { "start": 534.72, "end": 540.04, "text": " function you can put anything you want you can put like a CNN inside you can" }, { "start": 540.04, "end": 545.96, "text": " treat this as an LSTM since we're going to apply it recursively sorry recurrently" }, { "start": 545.96, "end": 551.64, "text": " and anything you want can be the step function as long as it can be applied" }, { "start": 551.64, "end": 557.36, "text": " recurrently so this step function is going to give you the next hidden state" }, { "start": 557.36, "end": 562.44, "text": " right so you can see it's a recurrent neural network however it is also going" }, { "start": 562.44, "end": 571.2800000000001, "text": " to give you the output at that particular point in time so y1 I guess" }, { "start": 571.28, "end": 579.92, "text": " that 
be here and it's also going to give you this number lambda n now what are" }, { "start": 579.92, "end": 586.4399999999999, "text": " these so from here you could apply the step function again you'd get h3 you get" }, { "start": 586.4399999999999, "end": 595.28, "text": " the output 2 and you'd get lambda sorry that's that's a 1 that's a 2 so it seems" }, { "start": 595.28, "end": 600.04, "text": " like it's just a recurrent neural network and if I were to put push this" }, { "start": 600.04, "end": 606.9599999999999, "text": " to the end right I go give my H H H and then at the end I get my Y N and I treat" }, { "start": 606.9599999999999, "end": 611.5999999999999, "text": " that as the output of the computation then it's just a recurrent neural" }, { "start": 611.5999999999999, "end": 617.8199999999999, "text": " network however as we said the network can in this case decide to stop anywhere" }, { "start": 617.8199999999999, "end": 623.8399999999999, "text": " in between for example if it decides to stop at this particular step then that" }, { "start": 623.8399999999999, "end": 628.18, "text": " would be the output of the computation so every computation step the network" }, { "start": 628.18, "end": 633.8399999999999, "text": " computes and a potential output a suggestion for an output and then it" }, { "start": 633.8399999999999, "end": 638.7399999999999, "text": " also thinks about whether or not it really wants to answer with that output" }, { "start": 638.7399999999999, "end": 645.56, "text": " or whether it wants to continue and to do another step essentially take another" }, { "start": 645.56, "end": 650.68, "text": " shot at answering the question because it doesn't yet have the correct answer" }, { "start": 650.68, "end": 660, "text": " and that's where this lambda thing comes in so the lambda is a probability of" }, { "start": 660, "end": 666.76, "text": " stopping essentially so here you can see the output lambda is a number between" }, { "start": 666.76, "end": 676.4799999999999, "text": " zero and one and that is the probability of halting this is the output considered" }, { "start": 676.48, "end": 684.04, "text": " that the network halts so whenever this is one the network will halt conditioned" }, { "start": 684.04, "end": 690.2, "text": " on the fact that it hasn't previously halted yeah it seemed as I said it seems" }, { "start": 690.2, "end": 693.88, "text": " obvious to formulate it like this because you can you know you can only" }, { "start": 693.88, "end": 699.4, "text": " halt if you haven't previously halted but apparently previous models have simply" }, { "start": 699.4, "end": 705.48, "text": " output a number that is sort of the probability of halting in general which" }, { "start": 705.48, "end": 711.44, "text": " doesn't give you a bias sorry an unbiased gradient if you try to back" }, { "start": 711.44, "end": 717.32, "text": " propagate through it so if you consider the lambdas to be like this if you" }, { "start": 717.32, "end": 724.6, "text": " unroll for an entire training run then you get we get the probability of" }, { "start": 724.6, "end": 731.4, "text": " halting at any particular step this one so this is what this is what the" }, { "start": 731.4, "end": 736.48, "text": " previous networks would have estimated directly however this network estimates" }, { "start": 736.48, "end": 741.52, "text": " these lambdas these ones here you can see how you can compute the probability" }, { "start": 741.52, "end": 747.48, "text": " that for example the network 
halts after three steps by multiplying up the" }, { "start": 747.48, "end": 753.24, "text": " probability that network has not halted which is this one at step one has not" }, { "start": 753.24, "end": 757.88, "text": " halted at step two and then the probability that network halts at step" }, { "start": 757.88, "end": 762.64, "text": " three that it given that it hasn't halted at the previous steps so that is" }, { "start": 762.64, "end": 767.32, "text": " a valid probability distribution it's a generalization of the geometric" }, { "start": 767.32, "end": 774.16, "text": " distribution and essentially it encapsulates a decision tree right so" }, { "start": 774.16, "end": 781.8, "text": " at you're at the beginning you can halt sorry let's go a halt or not or continue" }, { "start": 781.8, "end": 788.76, "text": " if you continue then again you can halt or you can continue if again you can" }, { "start": 788.76, "end": 797.28, "text": " halt or continue and so on and all of this so if you want the probability" }, { "start": 797.28, "end": 803.52, "text": " that the network halts after you know this the third step then you would" }, { "start": 803.52, "end": 809.1999999999999, "text": " consider this node which means that you'd multiply that you multiply up" }, { "start": 809.2, "end": 813.1600000000001, "text": " these paths right here and that's the probability that it holds after three" }, { "start": 813.1600000000001, "end": 821.4000000000001, "text": " steps okay so the network can output this lambda at every step if the lambda" }, { "start": 821.4000000000001, "end": 826.96, "text": " is high then the network halts of course at inference this is done" }, { "start": 826.96, "end": 833, "text": " probabilistically now at training time this is done a little bit differently so" }, { "start": 833, "end": 837.8000000000001, "text": " you I hope you can see at inference time you simply go forward and you get a" }, { "start": 837.8, "end": 844.1999999999999, "text": " lambda maybe the lambda in the first step is point one and then you flip the" }, { "start": 844.1999999999999, "end": 850.3199999999999, "text": " coin a biased coin right if if it comes up heads you stop with the probability" }, { "start": 850.3199999999999, "end": 854.3, "text": " of point one it comes up tails which is a point nine probability you continue" }, { "start": 854.3, "end": 861.0799999999999, "text": " then maybe at the second step it's it's point zero five so maybe maybe you stop" }, { "start": 861.0799999999999, "end": 866.16, "text": " but probably you won't stop and then at the third step it like comes up point" }, { "start": 866.16, "end": 871.1999999999999, "text": " nine the network thinks yeah I should probably stop here and you sample from" }, { "start": 871.1999999999999, "end": 876.8399999999999, "text": " that and yes you you might indeed in nine out of ten cases you actually stop" }, { "start": 876.8399999999999, "end": 883.7199999999999, "text": " there so that's inference how about training how about we train this thing" }, { "start": 883.7199999999999, "end": 892.56, "text": " during training what we do is again we input X our input into an encoder for a" }, { "start": 892.56, "end": 897.2399999999999, "text": " hidden state and as I said you can also input X all the time into your step" }, { "start": 897.2399999999999, "end": 903.8399999999999, "text": " function as you see right here but what you do is you unroll the network for a" }, { "start": 903.8399999999999, "end": 909.9599999999999, 
"text": " number of steps right independent of these output nodes independent of the" }, { "start": 909.9599999999999, "end": 915, "text": " sorry if the halting probability let's say we we unroll it for for five steps" }, { "start": 915, "end": 925.76, "text": " right here and at every point we get a output and a value y3 y4 this is lambda" }, { "start": 925.76, "end": 933.44, "text": " 2 lambda 3 lambda 4 so at training we simply unroll until a given step now" }, { "start": 933.44, "end": 939.4, "text": " there are some technical difficulties with doing with unrolling for a finite" }, { "start": 939.4, "end": 943.4, "text": " amount of step like how do you normalize the probability distribution because" }, { "start": 943.4, "end": 950.0799999999999, "text": " essentially this tree can go on until infinity they find okay we we can simply" }, { "start": 950.0799999999999, "end": 956.52, "text": " unroll until kind of the rest probability the probability we haven't" }, { "start": 956.52, "end": 961.6, "text": " used yet is is really small and then just load that all onto the last step but" }, { "start": 961.6, "end": 967.0799999999999, "text": " these are technical difficulties that you really only care when you then go" }, { "start": 967.08, "end": 976.1600000000001, "text": " and implement however so we unroll for a number of steps and then our we consider" }, { "start": 976.1600000000001, "end": 980.48, "text": " all the outputs at the same time now this is one big difference I believe to" }, { "start": 980.48, "end": 985.84, "text": " one of the previous networks to this a CT so what a CT does is it always unrolls" }, { "start": 985.84, "end": 991.84, "text": " and then the the output of the network so for a CT the output of the network" }, { "start": 991.84, "end": 1000.2800000000001, "text": " would simply be a weighted output of the lambda I y I so the output of the" }, { "start": 1000.2800000000001, "end": 1004.1600000000001, "text": " network is always a waiting between the different steps okay and the network can" }, { "start": 1004.1600000000001, "end": 1008.9200000000001, "text": " decide okay how do I want to wait the individual outputs whereas here it's" }, { "start": 1008.9200000000001, "end": 1017.4200000000001, "text": " different here the output is really either y1 or y2 or y3 or y4 and to in" }, { "start": 1017.42, "end": 1024.24, "text": " order to pack this into a single loss function what we can do sorry I should" }, { "start": 1024.24, "end": 1029.1599999999999, "text": " probably leave this in order to pack this into a single loss function we" }, { "start": 1029.1599999999999, "end": 1035, "text": " simply take okay what's the loss what would be the loss if we answered y1" }, { "start": 1035, "end": 1042.8799999999999, "text": " right what would be the loss and we weigh that by the probability and we say" }, { "start": 1042.88, "end": 1048.68, "text": " okay what would be the loss of y2 we weighed by the probability that the" }, { "start": 1048.68, "end": 1056.0400000000002, "text": " network output so now if we and so on so plus essentially we compute the expected" }, { "start": 1056.0400000000002, "end": 1062, "text": " loss given the probabilities that the network has output so now if we back" }, { "start": 1062, "end": 1068.0400000000002, "text": " prop this we back prop through these losses we have of course two paths of" }, { "start": 1068.04, "end": 1074.44, "text": " back propping so we back prop through the wise which means it's at some so" }, { "start": 
1074.44, "end": 1081.72, "text": " there is a loss right and both these things and these things go into the loss" }, { "start": 1081.72, "end": 1089.8799999999999, "text": " right so the loss is well how bad is this times how probably it was so on so" }, { "start": 1089.8799999999999, "end": 1094.52, "text": " the back propagation path would actually attack at two different paths you can" }, { "start": 1094.52, "end": 1099.76, "text": " see so the back prop goes into why because you want the network to compute" }, { "start": 1099.76, "end": 1110.4, "text": " a a better output but the propagation also goes into the lambda because you" }, { "start": 1110.4, "end": 1116.12, "text": " want the network to get better at estimating when its output is good and" }, { "start": 1116.12, "end": 1123.56, "text": " when not this I see a little bit as a tricky situation because usually this" }, { "start": 1123.56, "end": 1128.76, "text": " this seems a little bit unstable just from experience from other papers and so" }, { "start": 1128.76, "end": 1133.96, "text": " on if you have a back prop through two different things especially that are" }, { "start": 1133.96, "end": 1140.28, "text": " appear to be multiplied together and that you know the network can now trade" }, { "start": 1140.28, "end": 1144.9199999999998, "text": " off one versus the other which might you might think is desirable right it can" }, { "start": 1144.9199999999998, "end": 1153.08, "text": " either choose to make its output better if it wants to keep the probability high" }, { "start": 1153.08, "end": 1157.72, "text": " of outputting this thing or it can just reduce the probability that it's going" }, { "start": 1157.72, "end": 1163.04, "text": " to output whatever it wants to output and you know then it doesn't have to" }, { "start": 1163.04, "end": 1169.32, "text": " necessarily make the output itself correct because the loss the loss won't" }, { "start": 1169.32, "end": 1175.1599999999999, "text": " be as high for that particular thing because the probability of outputting it" }, { "start": 1175.1599999999999, "end": 1181.32, "text": " is low so network essentially has a choice as I said this might be desirable" }, { "start": 1181.32, "end": 1188.28, "text": " but usually that's kind of unstable and I think this is just my personal opinion" }, { "start": 1188.28, "end": 1196.9199999999998, "text": " I think a lot of them why this might work might rest on whether or not or" }, { "start": 1196.9199999999998, "end": 1204.9199999999998, "text": " let's say the complexity itself of assessing of making why better versus" }, { "start": 1204.92, "end": 1214.48, "text": " adjusting these probabilities of course yeah so you see if the output y is very" }, { "start": 1214.48, "end": 1222.92, "text": " complex right then this you know the same gradient signal for that might mean" }, { "start": 1222.92, "end": 1228.0800000000002, "text": " much less than simply reducing the probability okay so if the output is" }, { "start": 1228.0800000000002, "end": 1233.48, "text": " very very complex right not the problem but just the output itself right how to" }, { "start": 1233.48, "end": 1237.88, "text": " arrive at an output if the output is an entire pixel map or something like this" }, { "start": 1237.88, "end": 1243.44, "text": " and that has dependencies and so on the network might just choose to" }, { "start": 1243.44, "end": 1248.04, "text": " always reduce the probability because it's like well how am I gonna how am I" }, { "start": 1248.04, 
"end": 1251.8, "text": " gonna make this better at all I don't know I can just reduce the" }, { "start": 1251.8, "end": 1256.88, "text": " probability I'm going to output this crap right and it will probably do this" }, { "start": 1256.88, "end": 1261.24, "text": " then for every you know single step which you know if it's complex" }, { "start": 1261.24, "end": 1267.04, "text": " problem makes sense but still that's it that would be a bit my my fear here and" }, { "start": 1267.04, "end": 1274.96, "text": " that this is not really discussed in the paper itself so I think the fact that" }, { "start": 1274.96, "end": 1280.2, "text": " this works might rely on sort of a balance of the of the complexity or" }, { "start": 1280.2, "end": 1284.56, "text": " information content that you get from the loss at the output node versus the" }, { "start": 1284.56, "end": 1292.1599999999999, "text": " loss at the probability node so okay enough about that so in yeah during" }, { "start": 1292.1599999999999, "end": 1296.56, "text": " training you simply compute the expected loss weighted by the probabilities and" }, { "start": 1296.56, "end": 1300.84, "text": " then you can back prop through that and I hope you can see the difference between" }, { "start": 1300.84, "end": 1309.04, "text": " these two one is a they both seem to sum up somehow the outputs weighted by these" }, { "start": 1309.04, "end": 1314.32, "text": " these factors however one considers the actual output of the network to be a" }, { "start": 1314.32, "end": 1318.8, "text": " weighted combination of outputs of the individual steps where the other one" }, { "start": 1318.8, "end": 1323.1599999999999, "text": " says no no no the network output is actually one of them we don't know which" }, { "start": 1323.1599999999999, "end": 1328.28, "text": " one ergo for the loss we need to compute the expectation of the loss that seems" }, { "start": 1328.28, "end": 1334.1599999999999, "text": " to be a bit of a let's just say yeah it seems to be a more reasonable" }, { "start": 1334.1599999999999, "end": 1339, "text": " formulation though in hindsight you can say many things are reasonable if they" }, { "start": 1339, "end": 1344.68, "text": " work better right yeah so they discuss things like maximum number of pondering" }, { "start": 1344.68, "end": 1351.92, "text": " steps and so on again which I think is a technical detail and this is interesting" }, { "start": 1351.92, "end": 1357.44, "text": " so there you have the training loss as we just discussed now we've discussed" }, { "start": 1357.44, "end": 1362.32, "text": " this part right here which they call the reconstruction loss because you have" }, { "start": 1362.32, "end": 1369.3999999999999, "text": " some kind of desired y and you have a y that comes from this and I was a little" }, { "start": 1369.3999999999999, "end": 1373.9199999999998, "text": " bit wrong here in my formulation of course the expectation you don't have" }, { "start": 1373.9199999999998, "end": 1377.9199999999998, "text": " you don't want to take the lambdas you actually want to take the probabilities" }, { "start": 1377.9199999999998, "end": 1382.6799999999998, "text": " that each thing happens which means that you need to compute this P number you" }, { "start": 1382.6799999999998, "end": 1388.56, "text": " know going along this tree as we did because the P is the actual probability" }, { "start": 1388.56, "end": 1392.56, "text": " that you reach that node whereas the lambda is only the conditional probability" }, { 
"start": 1392.56, "end": 1398.1599999999999, "text": " that you reach a node given you were at the previous node so yeah consider" }, { "start": 1398.1599999999999, "end": 1404.04, "text": " there that if you if you are crazy enough to implement things straight as I" }, { "start": 1404.04, "end": 1410.6399999999999, "text": " speak in the videos lucid rains shout out the second part of the loss here and" }, { "start": 1410.6399999999999, "end": 1415.2, "text": " you can see this is a hyper parameter so you you're gonna trade off two of two" }, { "start": 1415.2, "end": 1420.88, "text": " losses right here because right now we saw okay you can either continue or not" }, { "start": 1420.88, "end": 1426.0800000000002, "text": " continue and for the network you know it might actually be easier as I said if" }, { "start": 1426.0800000000002, "end": 1430.88, "text": " the loss of the output comes reasonably complex right here it might be easier to" }, { "start": 1430.88, "end": 1437.8, "text": " simply say well in this case I'm just always going to reduce my probabilities" }, { "start": 1437.8, "end": 1442.68, "text": " you might counteract this with having this number of steps not like maximum" }, { "start": 1442.68, "end": 1446.76, "text": " number of steps but essentially this term here is what counteracts that" }, { "start": 1446.76, "end": 1452.44, "text": " really there is a regularization term on these probabilities as you can see right" }, { "start": 1452.44, "end": 1457.96, "text": " here so we regularize with the KL divergence which is sort of a distance" }, { "start": 1457.96, "end": 1465.6000000000001, "text": " measure don't tell this to a mathematician it's a it's a divergence" }, { "start": 1465.6000000000001, "end": 1470.8600000000001, "text": " it's a sort of a distance measure between the distribution that the" }, { "start": 1470.86, "end": 1475.84, "text": " network outputs for the steps and this thing right here which is a geometric" }, { "start": 1475.84, "end": 1480.8799999999999, "text": " distribution with this parameter and this parameter lambda p is another hyper" }, { "start": 1480.8799999999999, "end": 1487.08, "text": " parameter so what does that mean essentially if you consider here the" }, { "start": 1487.08, "end": 1492.6799999999998, "text": " number of steps that the network thinks right think things for what you" }, { "start": 1492.6799999999998, "end": 1498.6, "text": " regularize for this distribution right here is a geometric distribution I'll" }, { "start": 1498.6, "end": 1505.56, "text": " go something like maybe no something like this so essentially a geometric" }, { "start": 1505.56, "end": 1511.8, "text": " distribution is set exactly computes this tree that we computed right so at" }, { "start": 1511.8, "end": 1518.1599999999999, "text": " each step you can essentially stop and the question is after you know this" }, { "start": 1518.1599999999999, "end": 1524.8, "text": " distribution gives you a indication after what's the probability that you" }, { "start": 1524.8, "end": 1529.6, "text": " stop after one step two steps three steps four steps considering the fact" }, { "start": 1529.6, "end": 1534.08, "text": " that in order to stop after four steps you already have to have made three" }, { "start": 1534.08, "end": 1538.84, "text": " non-stopping steps except in the geometric distribution the probability" }, { "start": 1538.84, "end": 1544.36, "text": " of continuing is always the same whereas in our network our network for each node" }, { "start": 
1544.36, "end": 1549.12, "text": " and the tree it can output a different probability otherwise you know there'd" }, { "start": 1549.12, "end": 1554.52, "text": " be no point we can simply put in the fixed distribution now what that" }, { "start": 1554.52, "end": 1559.6399999999999, "text": " probability is of stopping at each point that's exactly this lambda p hyper" }, { "start": 1559.6399999999999, "end": 1567.78, "text": " parameter right here so you regularize for a KL for this which means that you" }, { "start": 1567.78, "end": 1574.84, "text": " tell the network look here is a a reasonable reasonable distribution of" }, { "start": 1574.84, "end": 1581.96, "text": " when you should stop so you should stop so it should be you know somewhat" }, { "start": 1581.96, "end": 1586.1200000000001, "text": " probable that you stop after one step and somewhat probable if you've already" }, { "start": 1586.1200000000001, "end": 1591.4, "text": " done one step that you stop after two steps and so on so you give it sort of a" }, { "start": 1591.4, "end": 1597.76, "text": " default probability of stopping after each step so if this is 0.1 for example" }, { "start": 1597.76, "end": 1603.68, "text": " you tell the network essentially look at any given step there's like a default" }, { "start": 1603.68, "end": 1608.4, "text": " 10% chance that you should stop I as a designer of the algorithm think that's a" }, { "start": 1608.4, "end": 1615.44, "text": " reasonable prior to have now the network can decide differently the network can" }, { "start": 1615.44, "end": 1623.4, "text": " decide no no no no no I actually want to stop way earlier right like like this it" }, { "start": 1623.4, "end": 1628.72, "text": " puts much more emphasis on the first steps which of course in turn because" }, { "start": 1628.72, "end": 1634.68, "text": " you need to normalize put less emphasis on the latter steps so the network can" }, { "start": 1634.68, "end": 1641.8400000000001, "text": " still decide to violate this prior if the if it may reduce the loss for enough" }, { "start": 1641.8400000000001, "end": 1647.68, "text": " so this is as I said a trade-off there are two hyper parameters the geometric" }, { "start": 1647.68, "end": 1653.6000000000001, "text": " distribution shape and the amount that you regularize by this KL divergence" }, { "start": 1653.6000000000001, "end": 1660.8, "text": " and yeah so now we come into the experimental results and these are" }, { "start": 1660.8, "end": 1668.9199999999998, "text": " pretty pretty neat because yeah they I think these are straightforward" }, { "start": 1668.9199999999998, "end": 1674.52, "text": " experimental results they're not super big large-scale results or anything like" }, { "start": 1674.52, "end": 1681.56, "text": " this but they show that look on tasks where we sort of know that this dynamic" }, { "start": 1681.56, "end": 1689.36, "text": " computation has an advantage our model will outperform both previous attempts" }, { "start": 1689.36, "end": 1695.84, "text": " at dynamic computation and especially networks that have no dynamic" }, { "start": 1695.84, "end": 1701.04, "text": " computation built in whatsoever so this is the parity task which we're going to" }, { "start": 1701.04, "end": 1706.4799999999998, "text": " look at as you can see here the orange is this a CT which is the previous work" }, { "start": 1706.4799999999998, "end": 1713.4799999999998, "text": " that they compare most with that is most similar to them you can see in terms of" }, { 
"start": 1713.48, "end": 1720.92, "text": " accuracy pondir net beats this network by quite a bit also appreciate the error" }, { "start": 1720.92, "end": 1726.04, "text": " bars in this one they almost overlap but they don't so you can say that you're" }, { "start": 1726.04, "end": 1733.32, "text": " definitely better and interestingly the number of compute steps even though yeah" }, { "start": 1733.32, "end": 1739.2, "text": " the error bars overlap as well here but pondir net itself needs less compute" }, { "start": 1739.2, "end": 1744.32, "text": " steps which might be you know I don't I don't know why why exactly that happens" }, { "start": 1744.32, "end": 1752, "text": " but you can speculate that it is because pondir net sort of fixes on a single like" }, { "start": 1752, "end": 1758.76, "text": " it outputs a single answer whereas the a CT it outputs this weighing of things and" }, { "start": 1758.76, "end": 1764.48, "text": " therefore when it when it outputs that say the first step answer it always" }, { "start": 1764.48, "end": 1770.2, "text": " needs to consider that this needs to be compatible with potential future steps so" }, { "start": 1770.2, "end": 1778.72, "text": " just formulating so just formulating how a CT output stuff it seems like it" }, { "start": 1778.72, "end": 1784.8, "text": " becomes a lot less dynamic because the output is always a waiting of different" }, { "start": 1784.8, "end": 1790.44, "text": " outputs and therefore the first steps they have to they can't just output what" }, { "start": 1790.44, "end": 1794.8, "text": " they think is the correct solution but they sort of already have to incorporate" }, { "start": 1794.8, "end": 1802.76, "text": " the future and estimate well if I'm going to continue computing then you know" }, { "start": 1802.76, "end": 1807.48, "text": " there's going to be stuff added to my output right here and they have to take" }, { "start": 1807.48, "end": 1815.2, "text": " this into account so it can be ironically less dynamic of a network and that's why" }, { "start": 1815.2, "end": 1820.92, "text": " I think pondir net might need less steps here I might be totally wrong though so" }, { "start": 1820.92, "end": 1826.64, "text": " this is the parity task and specifically they train with string lengths between" }, { "start": 1826.64, "end": 1833.0800000000002, "text": " you know so this is a string length of one and then string length of we've" }, { "start": 1833.0800000000002, "end": 1838.44, "text": " before we had like eight right something like this so they train up from one" }, { "start": 1838.44, "end": 1846, "text": " until 49 lengths one until 49 and this is a little bit important I think" }, { "start": 1846, "end": 1852.64, "text": " because their training set contains all of them which you know this is a little" }, { "start": 1852.64, "end": 1859.16, "text": " bit of an experimental trick right so in order for your network what you wanted" }, { "start": 1859.16, "end": 1863.04, "text": " to learn is kind of the general principle of parity independent of" }, { "start": 1863.04, "end": 1867.76, "text": " string length so you construct the training data set to be sort of a" }, { "start": 1867.76, "end": 1875.48, "text": " distribution of lengths of string rather than just strings of a fixed length and" }, { "start": 1875.48, "end": 1882.32, "text": " then you assess their parity so yeah that that's maybe a bit of a lesson for" }, { "start": 1882.32, "end": 1890.64, "text": " if you do experiments construct your tasks 
themselves already such that they" }, { "start": 1890.64, "end": 1897.12, "text": " help find the correct solution right so they train with strings of length one up" }, { "start": 1897.12, "end": 1904.1599999999999, "text": " up until 49 and then they try to extrapolate which is this B right here" }, { "start": 1904.1599999999999, "end": 1909.8799999999999, "text": " so this is extrapolation where then they test so first here they test they train" }, { "start": 1909.8799999999999, "end": 1915.56, "text": " on small strings they test on small strings here in B they train on the same" }, { "start": 1915.56, "end": 1922.08, "text": " small strings up till length 49 but then as I understand it they give it length" }, { "start": 1922.08, "end": 1932.12, "text": " 50 to what 99 or so in 2 or 96 it says it somewhere just longer strings that it" }, { "start": 1932.12, "end": 1937.6799999999998, "text": " has been trained with right and now that the setup is you know clear it's clear" }, { "start": 1937.6799999999998, "end": 1941.24, "text": " why they did the different length strings in the training set and not just" }, { "start": 1941.24, "end": 1946.6799999999998, "text": " fixed length strings because there's a reasonable chance the network does not" }, { "start": 1946.6799999999998, "end": 1951.4399999999998, "text": " learn to extrapolate just from one particular or two particular lengths of" }, { "start": 1951.44, "end": 1960.48, "text": " string nevertheless they test how does the network extrapolate to longer strings" }, { "start": 1960.48, "end": 1966.4, "text": " and you can see right here that a CT even though it also has been trained on" }, { "start": 1966.4, "end": 1976.8, "text": " the dynamic length strings it is that's 50% right that's pure chance so it's a" }, { "start": 1976.8, "end": 1984.56, "text": " parity test right it's the output is either odd or even so a CT just gets a" }, { "start": 1984.56, "end": 1990.2, "text": " pure random chance as a result whereas the pondernet as you can see has like an" }, { "start": 1990.2, "end": 1996.36, "text": " accuracy of 0.9 which I guess is pretty good especially on strings that are so" }, { "start": 1996.36, "end": 2002.56, "text": " long you've never seen them so what can we read from this I'm not exactly sure" }, { "start": 2002.56, "end": 2007.24, "text": " there's always the possibility that you know they've just trained a CT wrong or" }, { "start": 2007.24, "end": 2012.76, "text": " something like this but it's also it's also reasonable to say that just how the" }, { "start": 2012.76, "end": 2018.3999999999999, "text": " previous models were constructed either they didn't learn the concept or their" }, { "start": 2018.3999999999999, "end": 2025.24, "text": " their output is just weird in the way a CT is or since a CT has biased gradients" }, { "start": 2025.24, "end": 2031.3999999999999, "text": " estimates and pondernet doesn't yada yada we don't know what we do know is" }, { "start": 2031.4, "end": 2037.3200000000002, "text": " that in their experiments this pondernet was actually able to solve the" }, { "start": 2037.3200000000002, "end": 2042.92, "text": " extrapolation task right here the interesting thing is that if you look at" }, { "start": 2042.92, "end": 2050.44, "text": " the number of compute steps done you can see that pondernet in contrast to what" }, { "start": 2050.44, "end": 2059.04, "text": " it was trained with during inference sorry that's an alarm in in contrast to" }, { "start": 2059.04, "end": 2062.52, 
"text": " what it was trained with during inference during inference it has like" }, { "start": 2062.52, "end": 2068, "text": " two point between 2.5 and three steps let's say three steps computes for about" }, { "start": 2068, "end": 2073.7599999999998, "text": " three steps during inference time that's what it decides on for the smaller" }, { "start": 2073.7599999999998, "end": 2078.96, "text": " strings yet the same model right train on the same strings this is the same" }, { "start": 2078.96, "end": 2085.46, "text": " model during inference time on the longer strings all of a sudden it raises" }, { "start": 2085.46, "end": 2093.2, "text": " its compute to five steps whereas a CT okay a CT doesn't work in the in this" }, { "start": 2093.2, "end": 2099.68, "text": " one it just decides to stick around two or three steps as it does in training" }, { "start": 2099.68, "end": 2106.28, "text": " right so the authors sort of claim that this is good evidence that pondernet" }, { "start": 2106.28, "end": 2112.8, "text": " learns to solve the actual task right here and as the task gets more complex" }, { "start": 2112.8, "end": 2119.04, "text": " pondernet needs more steps to think about the task and this might be exactly" }, { "start": 2119.04, "end": 2124.88, "text": " you know what we saw that you have some sort of a string of zeros and ones and" }, { "start": 2124.88, "end": 2131.04, "text": " you learn during training you learn a how to take one of these maybe in" }, { "start": 2131.04, "end": 2134.76, "text": " multiple steps and get an output but now you all of a sudden you have a longer" }, { "start": 2134.76, "end": 2140.84, "text": " string right well so now what you can do is you can also learn an output for this" }, { "start": 2140.84, "end": 2145.1200000000003, "text": " one and now you have two outputs right and now you can learn a series of steps" }, { "start": 2145.1200000000003, "end": 2151.56, "text": " to transform the two outputs here into a single output and that might just need" }, { "start": 2151.56, "end": 2157.84, "text": " one or two more computation steps which is exactly what we see right here" }, { "start": 2157.84, "end": 2163.48, "text": " happening so it's a good it's a good indication that something like this is" }, { "start": 2163.48, "end": 2171.28, "text": " happening I would be wondering pondering one might say haha if you know how this" }, { "start": 2171.28, "end": 2175.8, "text": " actually happens like like what do the individual computation steps represent is" }, { "start": 2175.8, "end": 2182.16, "text": " it in fact a for example in this parity task is the network going about this" }, { "start": 2182.16, "end": 2187.88, "text": " task in a hierarchical fashion you know like like I've shown here is it" }, { "start": 2187.88, "end": 2193.44, "text": " something different is it going about it in sort of a purely recurrent fashion" }, { "start": 2193.44, "end": 2197.84, "text": " where even though we as I understand it we input the entire string at the" }, { "start": 2197.84, "end": 2203.68, "text": " beginning does it only look at the string position by position or you know" }, { "start": 2203.68, "end": 2210.2400000000002, "text": " how does this work how does the scaling behave in general if you know they only" }, { "start": 2210.2400000000002, "end": 2216.12, "text": " show small strings large strings but how does it behave in general as you go up" }, { "start": 2216.12, "end": 2221.92, "text": " the length and so on it would be really interesting to 
introspect this model a" }, { "start": 2221.92, "end": 2229.2000000000003, "text": " little bit more than simply showing kind of end results here of the individual" }, { "start": 2229.2000000000003, "end": 2235.52, "text": " tasks okay what they also find is that the hyper parameter how you regularize" }, { "start": 2235.52, "end": 2242.08, "text": " the shape we've seen this up here how you regularize this shape is you know" }, { "start": 2242.08, "end": 2246.2000000000003, "text": " that is a hyper parameter but it doesn't seem to be terribly important again they" }, { "start": 2246.2000000000003, "end": 2251.2400000000002, "text": " compare to a CT which has another hyper parameter that does the similar thing" }, { "start": 2251.24, "end": 2259.24, "text": " that regularizes the shape of the of the desired halting distribution which they" }, { "start": 2259.24, "end": 2265.7999999999997, "text": " call tau tau doesn't mean a particular thing in so they say it does not have" }, { "start": 2265.7999999999997, "end": 2270.58, "text": " any straightforward interpretation though I guess the authors of a CT might" }, { "start": 2270.58, "end": 2278.72, "text": " disagree but as you can see here so if I draw the the means there is a region" }, { "start": 2278.72, "end": 2285.2799999999997, "text": " where the tau where a selection of tau performs high though you have to say see" }, { "start": 2285.2799999999997, "end": 2291.04, "text": " that is all around sort of the same value of like 5e minus 4 or something" }, { "start": 2291.04, "end": 2295.64, "text": " like this and then for the other values that you might set it for it simply" }, { "start": 2295.64, "end": 2301.52, "text": " doesn't work at all so you the authors claim you have to hit this tau pretty" }, { "start": 2301.52, "end": 2306.4399999999996, "text": " correctly in order to even get the network to do anything whereas they" }, { "start": 2306.44, "end": 2313.92, "text": " claim in pondernet this variable right here first of all it's between 0 and 1" }, { "start": 2313.92, "end": 2320.8, "text": " and not just an arbitrary value right because it's a probability and they" }, { "start": 2320.8, "end": 2329.06, "text": " claim that you know it kind of works for for most things except this one right" }, { "start": 2329.06, "end": 2334.1, "text": " here where essentially you bias the network to just output everything after" }, { "start": 2334.1, "end": 2338.68, "text": " one step so the trick is for the geometric distribution you have to take" }, { "start": 2338.68, "end": 2343.6, "text": " the inverse so one over this lambda p and that will give you the expected" }, { "start": 2343.6, "end": 2348.3199999999997, "text": " number of steps that the network would compute according to this prior so when" }, { "start": 2348.3199999999997, "end": 2354.4, "text": " you put in 0.9 that would essentially be a single step that you ask the network" }, { "start": 2354.4, "end": 2360.6, "text": " to do but for all the other things well you you judge for yourself whether" }, { "start": 2360.6, "end": 2368.08, "text": " this here is really good but what you can say is that look it goes from 0 to 1" }, { "start": 2368.08, "end": 2373.12, "text": " so you have a clear range and for most of that range the the thing seems to" }, { "start": 2373.12, "end": 2381.08, "text": " work okay ish and what they highlight is even down here so even if they do this" }, { "start": 2381.08, "end": 2387.24, "text": " even if they said lambda p to 1 or sorry to point 1 
which would essentially bias" }, { "start": 2387.24, "end": 2392.9599999999996, "text": " the network towards 10 steps that the prior is please do 10 steps of" }, { "start": 2392.9599999999996, "end": 2399.6, "text": " computation in this parity task as I understand it even for that point one" }, { "start": 2399.6, "end": 2406.3199999999997, "text": " you can see the network it doesn't do 10 steps it actually also goes towards 3" }, { "start": 2406.3199999999997, "end": 2413.52, "text": " 4 or 5 steps most of the time so the network learns to be sort of somewhat" }, { "start": 2413.52, "end": 2418.8, "text": " robust to this prior distribution I mean I guess that's also a function largely" }, { "start": 2418.8, "end": 2425.16, "text": " of the hyper parameter here where you trade it off we don't know the effect of" }, { "start": 2425.16, "end": 2430.88, "text": " that just from the paper but even you know even if they set that to really low" }, { "start": 2430.88, "end": 2437.16, "text": " it's it it of course then the network is kind of robust to the choice of the" }, { "start": 2437.16, "end": 2442.36, "text": " lambda p yet it's still good news because that means you would mean you" }, { "start": 2442.36, "end": 2447.28, "text": " wouldn't have to regularize the the model super heavily in order to get it" }, { "start": 2447.28, "end": 2453.6400000000003, "text": " to work okay they go into two other tasks right here again these aren't" }, { "start": 2453.6400000000003, "end": 2458.08, "text": " tasks that you might necessarily know they are tasks where this type of" }, { "start": 2458.08, "end": 2466.32, "text": " computation shines particularly and yeah as I said I see the paper more as sort" }, { "start": 2466.32, "end": 2472.04, "text": " of an interesting an interesting task an interesting niche tasks subtask you" }, { "start": 2472.04, "end": 2477.44, "text": " might say of of connecting deep learning and classic algorithms there are a" }, { "start": 2477.44, "end": 2484.7599999999998, "text": " number of things that I think you can do right here to extend this so it's" }, { "start": 2484.7599999999998, "end": 2491.56, "text": " completely thinkable that you know the loss might be a bit different that you" }, { "start": 2491.56, "end": 2497.56, "text": " don't ask the network to output the direct answer at each point but you know" }, { "start": 2497.56, "end": 2503.36, "text": " you might you might want to attach memories and so on at at these output" }, { "start": 2503.36, "end": 2508.7999999999997, "text": " nodes you might want it want them to output intermediate results or" }, { "start": 2508.7999999999997, "end": 2513.16, "text": " something like this another thing you could do is you could work with sort of" }, { "start": 2513.16, "end": 2519.7599999999998, "text": " adversarial losses instead of of you know kind of reconstruction losses or" }, { "start": 2519.7599999999998, "end": 2526.56, "text": " whatnot so you could you could have some sort of a GAN going on inside of this in" }, { "start": 2526.56, "end": 2531.88, "text": " order to decide on the on the stopping probability that there's lots of stuff" }, { "start": 2531.88, "end": 2540.2, "text": " one can fiddle around with this type of network and you can even think of" }, { "start": 2540.2, "end": 2545.24, "text": " crazier architectures I don't know hopfield like structures where you" }, { "start": 2545.24, "end": 2551.12, "text": " decide you know how far you iterate because you don't you may not always want" }, { 
"start": 2551.12, "end": 2556.12, "text": " to iterate until fixed points I don't know I'm just I'm just talking crap" }, { "start": 2556.12, "end": 2563.04, "text": " right now okay one last shout out to the broader impact statement of this paper" }, { "start": 2563.04, "end": 2572.68, "text": " what a beautiful beautiful piece of of writing so essentially they say well" }, { "start": 2572.68, "end": 2576.6, "text": " this enables neural networks to adapt their computational" }, { "start": 2576.6, "end": 2583.44, "text": " complexity to the tasks they are trying to solve you know neural networks are" }, { "start": 2583.44, "end": 2588.44, "text": " good but currently they require much time expensive hardware they often fail" }, { "start": 2588.44, "end": 2594.28, "text": " pondernet expands the capabilities they say look it you know it can do this it" }, { "start": 2594.28, "end": 2599.16, "text": " can do that makes it particularly well suited for platforms with limited" }, { "start": 2599.16, "end": 2605.12, "text": " resources such as mobile phones which is a good thing right it can also" }, { "start": 2605.12, "end": 2613.42, "text": " generalize better that means it's better for real-world problems and they say it" }, { "start": 2613.42, "end": 2617.2400000000002, "text": " we encourage other researchers to pursue the questions we have considered on this" }, { "start": 2617.2400000000002, "end": 2621.32, "text": " work we believe that biasing neural network architectures to behave more" }, { "start": 2621.32, "end": 2625.84, "text": " like algorithms and less like flat mappings will help developing deep" }, { "start": 2625.84, "end": 2632.64, "text": " learning methods to their full potential and that is indeed the broader impact of" }, { "start": 2632.64, "end": 2638.6, "text": " this work like that is that's the impact it had on me and that's the impact that" }, { "start": 2638.6, "end": 2646.68, "text": " it it should have yeah I'm not like at today's conferences that must might be" }, { "start": 2646.68, "end": 2650.7999999999997, "text": " kicked out because of course it doesn't say technology good technology bad" }, { "start": 2650.7999999999997, "end": 2656.96, "text": " technology biased but you know respect for that and that was it for me let me" }, { "start": 2656.96, "end": 2670.8, "text": " know what you think and bye bye" } ]
SY5PvZrJhLE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
GPT-3: Language Models are Few-Shot Learners (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "transformers", "attention", "nlp", "natural language processing", "gpt3", "gpt-3", "gpt2", "gpt-2", "openai", "language model", "mlm", "autoregressive", "heads", "bert", "turing", "microsoft", "question answering", "news", "glue", "superglue", "sota", "preplexity", "corpus", "common crawl", "wikipedia", "natural questions", "boolq", "math", "strings", "context", "deep language", "zero shot", "few shot", "training data" ]
#gpt3 #openai #gpt-3 How far can you go with ONLY language modeling? Can a large enough language model perform NLP tasks out of the box? OpenAI takes on these and other questions by training a transformer that is an order of magnitude larger than anything that has ever been built before, and the results are astounding. OUTLINE: 0:00 - Intro & Overview 1:20 - Language Models 2:45 - Language Modeling Datasets 3:20 - Model Size 5:35 - Transformer Models 7:25 - Fine Tuning 10:15 - In-Context Learning 17:15 - Start of Experimental Results 19:10 - Question Answering 23:10 - What I think is happening 28:50 - Translation 31:30 - Winograd Schemas 33:00 - Commonsense Reasoning 37:00 - Reading Comprehension 37:30 - SuperGLUE 40:40 - NLI 41:40 - Arithmetic Expressions 48:30 - Word Unscrambling 50:30 - SAT Analogies 52:10 - News Article Generation 58:10 - Made-up Words 1:01:10 - Training Set Contamination 1:03:10 - Task Examples https://arxiv.org/abs/2005.14165 https://github.com/openai/gpt-3 Abstract: Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general. Authors: Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hello there! Today we're looking at Language Models are Few-Shot Learners by Tom B. Brown, Benjamin Mann, Nick Ryder and Melanie Subbiah and a whole slew of authors from OpenAI. This paper, also called GPT-3, just came out recently. GPT-3 is a language model, and it comes out of a succession of language models from OpenAI. The paper is basically an investigation into what you can do with giant language models. Now, this language model is an order of magnitude larger than any language model anyone has ever built, and it can do some absolutely crazy things. So we'll go over the architecture, over what the model does, and over the experimental results. It turns out that if you train a language model on enough data, it is able to solve NLP tasks that it has never seen, just out of the box, and we're going to look into this very cool formulation of the problem. As you can see, the paper is 40 pages long without the appendix; it needs its own table of contents, which is crazy, so we're going to skip a fair bit. First of all, what is a language model? For those of you who don't know, I've done a bunch of videos about language models, and specifically about transformer language models, which you can find in my natural language processing playlist. Let's just take this sentence as an example: "humans do not require large supervised datasets to learn most language tasks". This is an English sentence, and a language model is a model that, if you cross out a portion at the end, like right here, is able to tell you what comes next. So you would input this first part, and the model would tell you the next word is "datasets". That's basically all a language model does, and once you've trained one, you can generate word after word after word from it, or you can ask it which word is most likely to come next. So a language model is nothing but a model that can generate language in a probabilistic way, and the cool thing about language models is that you can train them on any sort of text data, which is what they do here. They train a language model on giant amounts of data; specifically, right here they go into the datasets they use. They use this Common Crawl dataset, which they filter down for quality, and which is basically a crawl of the entire internet, if you will, together with these books datasets, the WebText dataset and the Wikipedia dataset. So they throw all of this text scraped from the internet together and then train a language model on that. Now, the language model right here is called GPT-3, and they train various sizes of it; we'll get into how it's built in a second, but just compare this to a language model like BERT. BERT required this many flops to train, and this is a log scale, so GPT-3 right here is several orders of magnitude more: it's a bigger model, trained for way longer, on more text, so naturally it is going to be a lot better at language modeling. You can see the sizes of the models they trained right here. Remember, the previously largest language model, Microsoft's Turing-NLG, had something like 17 billion parameters, so it would be comparable to this one right here, whereas the full GPT-3 has 175 billion parameters, which is absolutely crazy.
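To make the next-word idea from above concrete, here is a toy sketch of a language model: a simple bigram counter standing in for the giant transformer. This is purely illustrative and obviously not how GPT-3 works internally; a real model replaces the count table with a neural network, but the interface (context in, distribution over the next token out) is the same.

from collections import Counter, defaultdict

def train_bigram_lm(tokens):
    # Count how often each word follows each preceding word.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word_distribution(counts, prev_word):
    # Normalize the counts into a probability distribution over next words.
    c = counts[prev_word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()} if total else {}

tokens = "humans do not require large supervised datasets to learn most language tasks".split()
lm = train_bigram_lm(tokens)
print(next_word_distribution(lm, "supervised"))  # {'datasets': 1.0}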
This is an order of magnitude more than anything that has ever existed. And if you look at the last GPT, the GPT-2 model (if you remember, I made a video about it: "too dangerous to be released"; well, now it has been released, but it was "too dangerous to be released"), it clocked in at about 1.5 billion parameters, so it is comparable to this GPT-3 XL model right here. They train these multiple sizes to estimate the effect of model size, and you can see the largest model has 96 attention layers, each layer has 96 attention heads, each head is 128-dimensional, and it trains on batches of 3.2 million tokens. That is the batch size; absolutely crazy. So they train this on a giant distributed cluster that apparently is provided by Microsoft. Crazy, crazy things. So how does this model look? This model is a transformer model, and right here we don't even get a description of a transformer model; it's just assumed you know what that is. I have made several videos on transformer models, especially on things like Attention Is All You Need or BERT, but for those who don't know: if I have a transformer model and I want to build a language model from it, let's take this sentence right here, I input what's called a context, which is the part I already have, into the transformer, and a transformer is just several layers of attention mechanisms. An attention mechanism is basically a way in which information is routed between the different tokens, and as you go up the layers, the information gets routed around and the model can make various inferences; at the end, the model is supposed to come up with the next word that goes here. Specifically, in this paper they use subword tokens, word-piece-like tokens, as is common in NLP right now, but essentially this is an autoregressive language model. So it's not like BERT, it's not bidirectional; it is autoregressive, it goes from left to right and always produces the next word. It is like GPT-2; they even say "we use the same model and architecture as GPT-2", they just have more and wider layers and more data to train on. So how do they train it? As we already said, they train it simply in a language modeling way, just next-word prediction, that's it. It's not even something fancy like BERT.
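"Autoregressive" in practice just means a decoding loop: predict a distribution over the vocabulary, pick a token, append it to the context, repeat. A runnable sketch, where model_logits is a random stand-in for a real transformer forward pass over the context:

import math, random

VOCAB = ["datasets", "examples", "models", "<eos>"]

def model_logits(context_tokens):
    # Stand-in for a transformer forward pass: returns one unnormalized
    # score per vocabulary entry. A real model would attend over the
    # whole context here.
    rng = random.Random(len(context_tokens))
    return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

def generate(context_tokens, n_steps):
    # Left-to-right decoding: each new token is appended to the context
    # and fed back in; that is all "autoregressive" means.
    for _ in range(n_steps):
        logits = model_logits(context_tokens)
        z = max(logits)
        probs = [math.exp(l - z) for l in logits]
        total = sum(probs)
        probs = [p / total for p in probs]
        context_tokens.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return context_tokens

print(generate("humans do not require large supervised".split(), 3))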
The interesting part is how you then do the individual tasks. What you usually did with something like BERT was: first, pre-train. That is the language modeling right here, the pre-training phase where you teach BERT about the English language by just feeding it a lot of data. Then, second, you had a step called fine-tuning. In the second step you have the task you're actually interested in; let's say that task is sentiment classification. In sentiment classification you have a sentence, blah blah blah, and you want to know whether it carries positive sentiment (is it a happy sentence or a sad sentence?), and you have a database of labeled instances: a bunch of sentences, each labeled positive or negative, plus a smaller test set. You take the pre-trained model, train it on that dataset in a supervised machine learning way, and then test it on the test set. This is called fine-tuning, and that's what they display here: in fine-tuning, "the model is trained via repeated gradient updates using a large corpus of example tasks". The example task right here could be translating to French, so the training database for the translation task would contain pairs like "sea otter" is called "loutre de mer", and you would actually change your model, you would do gradient updates. If you're in the NLP world this seems very natural, but they are going to argue in a second that this isn't the only way to teach a model a task. So this seems very natural: you take your pre-trained model and fine-tune it on the task, and if you have a different task, say question answering, you have a different dataset with a train and a test split, and you take the pre-trained model, fine-tune it on that dataset, and evaluate on that test set. This leaves you with as many models as you have tasks, and for each one you need a big training dataset in order to perform well; sometimes we have that, sometimes we don't. What they are interested in instead is taking the pre-trained model and directly evaluating it on the test dataset, in a sort of zero-shot fashion, though it is not exactly zero-shot, as they will argue. In a true zero-shot fashion, you would just take the language model you pre-trained and input the following text: what they call a task description, and a prompt. That is the whole input, and you simply ask the model, as a language model, to predict the next word: what comes here? What you're counting on is that in the training data the model has seen structures like this often enough to understand what's going on; that somewhere on the internet there was the structure "translate something to something", followed by the translation, and the model has to realize that the translation goes here, as the next word. So basically what you're asking is: if you were to find this text on a website, or on Wikipedia, or in any of the books datasets, what would be the next word in that piece of text?
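Mechanically, this in-context framing is nothing but string formatting. Here is a small sketch of how such prompts can be assembled; the template strings are illustrative (the paper's appendix shows the actual templates per task), and the one-shot and few-shot variants discussed next just add demonstration lines:

def build_prompt(task_description, examples, query):
    # examples: list of (input, target) pairs placed into the context.
    # None of them is ever trained on; they are just text the model reads.
    lines = [task_description]
    for src, tgt in examples:
        lines.append(f"{src} => {tgt}")
    lines.append(f"{query} =>")  # the model is asked to continue from here
    return "\n".join(lines)

# Zero-shot: task description and prompt only.
print(build_prompt("Translate English to French:", [], "cheese"))

# One-shot: a single demonstration in the context.
print(build_prompt("Translate English to French:",
                   [("sea otter", "loutre de mer")],
                   "cheese"))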
And you kind of hope that, if you've trained a good language model, this is enough to actually produce the French translation here. Now, before, I said the language modeling is there to teach the model the English language; that's actually not quite true, because in this Common Crawl corpus you also have many foreign languages, so you basically teach it a general model of the internet. They contrast this with what they call one-shot learning. In one-shot learning, you not only have the task description right here, which is just a string (you never specifically tell the model that this is now a translation task, you simply input a string), and the prompt, but also one example. And the example, this is where they say it's not exactly zero-shot, is going to come from the training dataset of the task you're interested in, but the important part is that you never train on it. You never explicitly train on that example; you simply put it in the context. So you input this string, "Translate English to French", new line, "sea otter => loutre de mer", new line, "cheese =>", into the model as a language model, and you ask it what the next word is right here. I hope this is clear. This is what they call one-shot generalization, and by one-shot they basically mean you simply provide this thing in the context of the language model. The advantage is immediately clear: you only have to train one model, and at inference time you just put the task description and the sort-of training data for the task into its evaluation context, together with the task itself. If the model really does what they claim, it would understand the prompt, understand what it means to translate from English to French, look at the example and say, oh, that's what you want me to do, and then generalize to this input: from the task description and the example I get what you want; the next word here is the French word for cheese. What's cheese in French? Fromage. Now, the way the language model actually interprets this is slightly different. As we said before, the model's view is: if you were to find the following text on a website somewhere, "Translate English to French", new line, "sea otter goes to loutre de mer", new line, "cheese goes to", what would be the next word on that website? That's what the model sees. You have to differentiate between what the human wants and what the model sees; the model is just a language model that determines, if I were to see this text somewhere, what would be the most likely next word. So you have to phrase your tasks in a way that makes sense in that framing. They also have this few-shot setting, where you provide not just one but a bunch of examples in the context, to tell the model more about what it should do. And this doesn't only work in a free mode, where you ask what the next word is; with the exact same model you can also give it a couple of possibilities, say a handful of candidate answers, and basically restrict it to only produce one of those options.
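Restricting the output like this amounts to scoring each candidate continuation under the model and picking the most likely one. A hedged sketch: token_logprob is a placeholder for whatever per-token log-probabilities your language model exposes, and a multi-token candidate would sum its per-token log-probabilities:

import math

def token_logprob(context, token):
    # Placeholder: a real implementation would run the LM on `context`
    # and read off log P(token | context) from its output distribution.
    fake_scores = {"yes": -0.4, "no": -1.2, "fromage": -0.7}
    return fake_scores.get(token, -5.0)

def pick_option(prompt, options):
    # Score log P(option | prompt) for each candidate and take the best.
    best, best_lp = None, -math.inf
    for opt in options:
        lp = token_logprob(prompt, opt)
        if lp > best_lp:
            best, best_lp = opt, lp
    return best

print(pick_option("Is France in Europe? Answer:", ["yes", "no"]))  # yes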
In translation this might not be the way to go, but if you have yes/no questions you can restrict it to that. In a lot of these NLP tasks you have some options given for a question, and you can restrict the model to those; you always have to go with the task at hand. But this is in essence what the model does, and this is, I think, one of the core ideas of this paper, if you take anything from it: there is no new architecture right here, there is no new wisdom in training; they train in a standard way, in a standard language modeling fashion, a standard transformer architecture. It just happens to be ginormous. Most approaches, as we said, would fine-tune and end up with one model per task, and you need a big dataset per task; but since we have such a large language model, it basically already knows how to do these tasks, as long as we formulate them in a language-model way, and they will show that this works surprisingly well throughout this paper. Now we get to the experimental results, first of all on language modeling itself. As you can see here, as you go up with the parameters (the more yellow curves are the bigger models), the validation loss goes down and down and down, and I believe this is a log scale as well, so this is the log probability, the perplexity. And this follows a power law: we know for these big language models that you have to scale up model size, compute time and dataset size together for them to make these gains, but if you do that, the model keeps getting better and better as you scale these things up. The question, of course, is how far we can go with this, but for now it seems to hold quite well that you can make improvements just by scaling up your model, on language modeling at least. So where do we go from here? Before we dive into the results on the individual tasks, here is how they formulate them. They have pure language modeling tasks right here, like "Alice was friends with Bob. Alice went to visit her friend, ___": what's the next word? Bob. Or "George bought some baseball equipment, a ball, a glove, and a ___": what's the next word? I guess this should be "bat". But let's go into the tasks, and one of them, for example, is question answering. In question answering, you get either just a pure question, or a context and a question; they test the setting where you just get the question, so you just get, I don't know, "Who is the Queen of England?" or something like that, and the model is to either produce the answer directly, or to choose from a bunch of answers which one is the most likely, as a language model. And as you can see, as you scale up the language model, the zero-shot, one-shot and few-shot predictions all improve.
In few-shot, you give 64 different examples from the training set in the context. So your context is going to look something like this (they have examples at the bottom; I haven't looked at the exact QA template, but it will be something like this): a task description, say "Answer the following questions"; then your examples, zero of them in zero-shot, one in one-shot, and so on, something like "Who climbed Everest first?" followed by the answer, "Hillary", I think it was Hillary, and "How tall is the Empire State Building?" followed by some number; and at the end the actual question, say "Who is the Queen of England?", and you ask the model to predict the next word right here. You do this in a closed-book setting, meaning the model has no access to Wikipedia or anything; usually these QA systems can go and query Wikipedia, but this system doesn't. So you just want to know what the model has learned about the world by simply absorbing giant amounts of text. If somewhere in the training data the fact that the Queen of England is Elizabeth II is present, it should complete this right here, and it performs surprisingly well, as you can see. It manages to outperform a fine-tuned state-of-the-art model that was actually built and fine-tuned for question answering; this model outperforms it simply by having seen a lot of language. So this here is the result on these open-domain QA tasks, and you see that the few-shot setting outperforms this open-domain model, where open-domain means the model can go and look at some Wikipedia page. So this is pretty cool, but there are other datasets, like Natural Questions, where it underperforms compared to the open-domain systems, and they say this is mainly because Natural Questions is very much about fine-grained factual Wikipedia knowledge; maybe the question we just made up is more of a Natural Questions type of thing, and the model is apparently not as good at that. Still, it's impressive that the model can do this out of the box. Okay, before we go further into the experiments, I want to state a hypothesis of mine, and it's not an uncommon hypothesis: these giant language models, these transformers, layer after layer after layer with their connections in here, what I think is happening is that they are simply storing the training data. They are simply storing the training data in these connections. Usually you'd think of storing the training data in some explicit module, some database module in the neural network that the network learns to query; but ultimately, if you train a neural network, what you have is data, and you train a function with parameters on that data, and what you're doing is distilling the data into those parameters. You hope to learn some regularities from it, but ultimately the information about your training data influences, or determines, the final parameters of your function. Now, I can imagine that if you have such a giant neural network with so many weights, like
175 billion of them, you can actually store the training data pretty efficiently in that model. And when you then ask this model to do something, what it basically does, I think, is not what these people sort of argue, namely that it has learned these language tasks, that it has learned to reason over language and so on; what I think is happening, much more, is that it simply goes to the training data, which it has stored in its weights, pulls out the five to ten to fifty training examples that are most relevant to what you put in, and interpolates. It goes to the training data, pulls out a bunch of training samples relevant to the context you put in right now, and then integrates those into the next word that comes out. And I think a lot of this paper makes sense if you look at it in those terms. You always input a context, and the context is split into a task description, then k different examples, and then the prompt. So the task description is "Please translate from English to French", the k different things are k different translations, and the prompt is half of one of those boxes: the boxes are "blah blah blah turns into blah blah blah", and the prompt is one of those without the right-hand side. I think what the model does is take all of this, go to its training data, which it has stored in its weights, filter that training data, essentially take out the things that pattern-match, that regex-match in a fuzzy way, to this context, and then interpolate these training examples in order to come up with the answer. I don't think there is reasoning happening here, and if you go through the paper with this view, a lot of things actually make sense. And I think this suggests something about explainability. When people think of explainable machine learning, they often think: if I input an image into a classifier and out comes the class "car", explainability means asking which part of the input image, the wheels or the hood, was responsible for that determination. What I think we should do, especially for these language models, is this: when the model predicts something right here, the next word, we should have a method of determining which training examples the model used to interpolate, given this context. I'm pretty sure you would find that, for example, this weight and this weight and this weight were very responsible for making this prediction happen, and I'm pretty sure you could, during training, build an index of which five training examples had the most influence on that particular weight, or on that combination of weights, and then go backwards and say: you made this decision right here, model; please tell me which of the training data samples were responsible for it. I'm actually pretty sure something like that already exists; I'm never the first one to think of these things, though if I am, cite me, cite the channel. It's just an interesting way to think about this model, and an interesting way to think about what explainability would even mean in a model like this.
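Work in this direction does exist; influence functions, for instance, are one line of research on tracing a prediction back to training points. As a toy illustration of the "filter the training data, then interpolate" hypothesis itself, here is a fuzzy nearest-neighbor lookup over a tiny made-up corpus; purely illustrative, and not a claim about what happens inside GPT-3:

from difflib import SequenceMatcher

corpus = [
    "Translate English to French: sea otter => loutre de mer",
    "48 + 72 = 120",
    "The Queen of England is Elizabeth II.",
]

def most_relevant(context, corpus, k=2):
    # Rank stored strings by fuzzy similarity to the current context,
    # a crude stand-in for whatever matching a huge LM might do implicitly.
    scored = [(SequenceMatcher(None, context, doc).ratio(), doc) for doc in corpus]
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]

print(most_relevant("Translate English to French: cheese =>", corpus))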
My argument is: since it interpolates the training data, interpretability should come from identifying which training samples it interpolates. Okay, let's go to translation. In translation, as we said, they simply input the task, then the few examples, and then the output. What you can see right here is that, again, as the model goes up in parameters, performance generally increases. You can also see that performance is pretty good whenever the target language is English, which sort of makes sense, because a large part of the corpus they train on is English: being an English language model, it should be pretty good when asked to produce English, and not as good when asked to go in the other direction. You also see that it doesn't really matter much which language you translate from when the target is English, but it matters a lot which language you go to when the source is English. So this makes sense if the model is just trained on a lot of English data. Sometimes they are on par with state-of-the-art unsupervised methods here, and other times they even outperform them. Those methods are unsupervised in the sense that they don't have a supervised training set going, say, from English to French, but they are built with translation in mind, so they are task-specific without a supervised training set; this model just learns whatever it learns, it just does language-model learning, and in the end, simply because it has seen websites where both languages appear, it can translate reasonably well. The results here are a bit noisy, but it is still interesting that it sometimes gets close to the supervised approaches, though the authors say they are not familiar enough with the translation literature to be sure how meaningful these numbers are. Okay, the next thing is the Winograd schemas, "a classic NLP task that involves determining which word a pronoun refers to, when the pronoun is grammatically ambiguous but semantically unambiguous to a human". These are human-produced sentences where a pronoun could refer to multiple things; I don't have an example at hand. Right here you can see that this model will outperform a fine-tuned BERT-large, but will not outperform a fine-tuned RoBERTa-large, so it is at least competing with fine-tuned models that were made specifically for this task. Again, this is pretty interesting, and you also see that for the larger models it starts to matter whether you give zero, one, or more examples. Okay, now we get to the more interesting things. Where is it... yes, this right here is the physical QA, a bit of common-sense reasoning. There are these science questions, "multiple-choice questions collected from third to ninth grade exams", and the physical QA, PIQA, "asks common sense questions about how the physical world works, and is intended as a probe of grounded understanding of the world". So it has
questions, as I understand it, like: if I drop a ball, will it fall on the ground, and where will it fall, or something like this. And they say they can outperform a fine-tuned state-of-the-art model on this if they just go high enough in model size. You can also see that there isn't much difference between zero-, one- and few-shot here; zero-shot is even higher than one-shot, so that's probably just noise. But then you find out there's an asterisk here, and it means this is a potentially contaminated dataset. They have a potential contamination issue: they found significant overlap between this dataset and their training dataset, and they only realized it too late, because there was a bug in their deduplication code, and then they couldn't change it anymore, because the model is so large that they couldn't restart the training after having already spent so much money and energy on it. This is crazy. I think these language models are getting so large that we should think of building them more like we built the International Space Station: a project where humanity sort of collaborates, a big one-time effort, and whatever you end up with, that's what you have. So these good numbers here could be influenced by this contamination, and I think that's what's happening, even though they make the case that the contamination isn't really an issue. I'd argue it may actually be an issue, because on the other datasets the fine-tuned state-of-the-art models outperform GPT-3 quite a bit, and also because the fact that providing one or many demonstrations doesn't change much kind of tells me the model already knows the answer and doesn't need demonstrations; demonstrations don't help if you have the training data, or the test data, stored. They have a few other results right here: on this CoQA task they perform, let's say, well, but not particularly better than the state of the art, and they perform especially poorly on reading comprehension; sorry, CoQA is one of the reading comprehension tasks. In reading comprehension you have "abstractive, multiple choice, and span based answer formats in both dialogue and single question settings"; basically, you have to read a piece of text and then answer a question about it. This is something where I think you cannot interpolate the training data very well, you can't just pattern-match and interpolate, because you have to do actual reasoning, and I think that's why the model performs poorly here. They also measure on SuperGLUE, an NLP benchmark, and here too it doesn't outperform a fine-tuned state-of-the-art model, but it does slightly outperform a fine-tuned BERT model; the BERT model is fine-tuned on these tasks, whereas GPT-3 isn't. But notice which tasks it does well on and which ones it doesn't, compared to the state-of-the-art model. For example, on BoolQ it doesn't do particularly well: the state of the art is 91 and GPT-3 only gets 76; that's quite a large difference.
I actually have the SuperGLUE benchmark open here, and you can see this is BoolQ. An example would be: "Is France in the same time zone as the UK?", and then there is a passage, and you need to reason from that passage about whether the answer is true or false. This is very much not language modeling; this is reasoning, and that's why the model does poorly here. Whereas on another task, see for example this COPA right here, the model does almost as well as a fine-tuned state of the art, and I have to stress: this model has never actually learned this task in a supervised way; it's simply a language model. I have the COPA task right here, and these are the examples. One example is: premise, "The man broke his toe. What was the cause of this?", and there are two candidate causes: either "he got a hole in his sock" or "he dropped a hammer on his foot". The way you phrase this for the model is to give the premise as the context and then simply ask the model, since it's a language model, which of the two continuations is more probable. And of course it will select the thing that appeared more often in the training data: breaking a toe, the cause of that would be a hammer. It is entirely conceivable that a language model would know this; with enough training data, it can pull from the training data the examples where "hammer", "foot" and "broke toe" appear together a bunch of times, while "hole in sock" would be rather unrelated. So as long as these questions are not constructed adversarially, specifically so that a language model can't solve them, the model is going to perform pretty well right here. It's very interesting to see that if you view this as interpolating the training data, it suddenly makes sense where it's good and where it isn't. So that was SuperGLUE, and then NLI: it performs particularly poorly on NLI, which is "the ability to understand the relationship between two sentences", where the model classifies whether the second sentence logically follows from the first, contradicts the first, or is possibly true (neutral). So again: the reasoning part, this model does not have; it is simply recalling the training data and doing language modeling. Now they say: we can test this with synthetic and qualitative tasks. They invent some tasks of their own, which is pretty easy now, since you don't have to fine-tune the model and don't have to generate an actual training set for a task; you can focus on generating a test set. And that's what they do. They do things like arithmetic: can we come up with a bunch of arithmetic tasks, for example two-digit addition? What the model sees is simply this as the context and prompt, and if you give it examples, so if this is one-shot learning, you would input "Add the following numbers" as a string, then a new line, then one example like "What is 11 plus 12?" together with the answer, "23", and then the prompt: "What is 48 plus 76?", and you ask what the next string token is that comes here.
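Since no training set is needed, evaluating such a synthetic task boils down to prompt generation plus string comparison. A sketch of a tiny harness; the "oracle" stand-in below computes the sum itself, just so the code runs end to end, whereas a real run would query the language model instead:

import random

def make_two_digit_addition(n_shots, rng):
    # Build a few-shot addition prompt and return the expected answer.
    lines = ["Add the following numbers."]
    for _ in range(n_shots):
        a, b = rng.randint(10, 99), rng.randint(10, 99)
        lines.append(f"What is {a} plus {b}? {a + b}")
    a, b = rng.randint(10, 99), rng.randint(10, 99)
    lines.append(f"What is {a} plus {b}?")
    return "\n".join(lines), str(a + b)

def evaluate(lm_complete, n_problems=100, n_shots=1, seed=0):
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_problems):
        prompt, answer = make_two_digit_addition(n_shots, rng)
        if lm_complete(prompt).strip() == answer:
            correct += 1
    return correct / n_problems

def oracle(prompt):
    # Stand-in "model": parse the last line ("What is A plus B?")
    # and actually compute the sum.
    words = prompt.splitlines()[-1].split()
    return str(int(words[2]) + int(words[4].rstrip("?")))

print(evaluate(oracle))  # 1.0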
Now, the inference here is this: these are all strings; the model basically has no clue how to do math, these numbers are just tokens to it, so if the model can do this, it must have learned some kind of reasoning ability, it must have learned to perform some logic inside. So they go into two-digit addition, three-digit addition, four-digit addition, five-digit addition, and even multiplication and subtraction, and the results are right here. As you can see, the lower-parameter models perform pretty poorly, but as you go up in parameters, the big model performs really well in the two-digit range, and look at that, accuracy of 80 to 90 percent in three-digit addition and subtraction. But as soon as you get to four digits, or to two-digit multiplication and so on, the performance drops. Now, they say that's because multiplication is harder, and computationally it is, but the claim is that with two-digit addition and so on the model has learned something about the world. I disagree, because I think you can explain this by the model simply recalling the training data. Look at the two-digit addition: with zero-shot you already get 76 percent, but with one-shot you get 99 percent, and with few-shot you get 100 percent. If you interpret this model as filtering the training data to pattern-match, it makes a lot of sense that the examples give such an improvement: if you have a bunch of examples like "48 plus 72 equals ..." and you give more and more of them, all of a sudden this looks like a table. They say "we made sure these particular strings were not in our training data", so these exact strings never appeared, but I have an issue with this deduplication, because what can appear is a table, and in tables you often have columns where one column is the sum of the columns to its left. If you are asked to pattern-match, you will naturally find websites where the columns contain exactly these numbers and the sum is right there; you filter for websites that match the scheme of your examples, and you find all the websites with a table on them where one column is the addition of the others. And I can actually do that: I went and typed in a bunch of these, like 98 plus 45 is 143, and 18 plus 55 is 73, I believe. Google makes it hard because they localize and personalize everything, but you can still find it: what you're going to find are tables and tables and tables. I also went to DuckDuckGo, where they don't really personalize it to me, and the first thing I find when I type in just these numbers is "math skip counting missing sequence number", a website where the answers are basically already given. Look at that! So all the model has to do is recall this particular training example from the samples it already has, and it will, in quotes, be able to "perform addition". This here is financial data, and here is another one where you have to subtract stuff. So I'm pretty sure all the model is doing here is interpolating the training data, and that's also why it performs worse as you increase the number of digits: longer numbers are simply less frequent in the training data, and multiplication is less frequent still, plus it results in larger numbers, which are again less frequent. So that explains a lot.
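On that deduplication point: overlap checks like the paper's are n-gram based, and here is roughly what such a check looks like, and why it is weak. A verbatim 13-gram match catches exact copies but sails right past tables, reorderings, and paraphrases of the same fact. The texts below are made up for illustration:

def ngrams(text, n=13):
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(test_example, training_ngrams, n=13):
    # Flag a test example only if one of its 13-grams appears verbatim
    # in the training corpus; a paraphrase of the same content passes.
    return bool(ngrams(test_example, n) & training_ngrams)

train_text = ("the united methodist church voted to strengthen the ban "
              "on the ordination of lgbtq clergy at its general conference")
test_text = ("delegates voted to strengthen prohibitions that ban lgbtq "
             "people from being ordained as ministers")

print(is_contaminated(test_text, ngrams(train_text)))  # False, same content anyway

A fuzzier check, for example one based on sentence embeddings, would flag this pair; that is the kind of meaning-level deduplication being asked for here.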
So yeah, I have my issues with people saying this shows some reasoning; I don't think it does. The same goes for word scrambling. In the word scrambling tasks they have different variants; and notice, they checked whether the arithmetic problems were in their training data and found "only 17 matches (0.8%)", to which I say: no, you just haven't searched well enough, and the rest of their deduplication is also pretty weak, I would say, because they just look for 13-gram overlaps between the training data and their test data. So in these word scrambling tasks they scramble words and ask the model to unscramble them; for example, this word right here is "inevitably", scrambled. They give anagrams, they do random insertions into the word, like this one right here, or they reverse the word; I think that's the variant at the very beginning. You can see right here that as the model size goes up, this improves as well, and they again suggest this means maybe some kind of reasoning, but I think this is just learning the language: the model learns that letters make up a word, that letters correspond to, or are associated with, word pieces, and the output is always English. A good check would be this: if you unscramble words, you always end up with an English word, so all the model has to do is check which word has the highest overlap in word pieces. But you could instead ask it to scramble a word, and count the answer as correct for any valid scrambling. So instead of going from the scrambled version to the English word, which you can solve just by knowing the English language, basically without any clue what the task is, you could ask it to go in the other direction, given a few examples. Then it would really need to understand that the task is to actually scramble a word, and it would need to learn that from the examples in its context. But as far as I can see, they don't do that, and again, I think it's recalling the training data.
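Generating test cases for either direction of this task is trivial, which is part of the appeal of the reverse variant suggested above: the generator is the same, only the prompt direction flips. A sketch with my own naming, not the paper's exact generators:

import random

def anagram(word, rng):
    # Shuffle all letters except the first and last one
    # (one of the paper's scrambling variants works this way).
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def unscramble_example(word, rng):
    # Forward task: scrambled version on the left, clean word on the right.
    return f"{anagram(word, rng)} = {word}"

def scramble_example(word, rng):
    # Reverse task proposed above: clean word on the left, and any valid
    # scrambling should be accepted on the right.
    return f"{word} = {anagram(word, rng)}"

rng = random.Random(0)
print(unscramble_example("inevitably", rng))
print(scramble_example("inevitably", rng))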
Then there are the SAT analogies; the SAT is this test US high schoolers take to get into college. A typical example (this is dying on me... no, it just scrolled, okay) is the following, which I find pretty hilarious: "audacious is to boldness as: (a) sanctimonious is to hypocrisy, (b) anonymous is to identity, (c) remorseful is to misdeed, (d) deleterious is to result, (e) impressionable is to temptation". I'm not a native speaker, but this is a hard question, right? And you have to consider that these high schoolers are stressed; this is very much a time-based test, so you need to make decisions quickly, while the model is basically able to sift through its entire training data in the time it takes the GPUs to perform inference. Still, it's funny that GPT-3 achieves 65 percent in the few-shot setting, 59 percent in the one-shot setting and 53 percent in the zero-shot setting, whereas the average score among college applicants was 57 percent. So it outperforms the average college applicant, which is pretty funny, but you would expect a language model to have a pretty good grasp of these kinds of synonyms and relations between words, because these are exactly statistical associations between words. So yeah, I found this to be pretty funny. And the last thing, the one everyone's freaking out over, is the news article generation, where they give the model the beginning of a news article and then let humans decide whether the article was written by a machine or by a human. They say: "mean human accuracy at detecting articles that were produced by the 175 billion parameter model was barely above chance at 52%", and "human abilities to detect model generated text appear to decrease as model size increases; there appears to be a trend towards chance accuracy with model size, and human detection of GPT-3 is close to chance". So what they do, and they have some examples right here, is give the model the following input: the title and the subtitle of an article, and then the word "Article:", and the model is supposed to complete the rest of the article right here. You can also do this in a few-shot setting, so that the model, given a few examples, basically knows it is supposed to produce a news article. Now, there are two ways you can think about this. First way: the model has learned the language so well, it has learned to write coherent language, it has learned to reason, to keep context, and so on. Second way: the model sees this thing right here, it sees the few-shot examples it has before in the context, all news articles, so it filters the training data down to news articles, then filters further to the news articles that pertain largely to the topics or words appearing here, and lastly interpolates those few nearest training examples to produce this thing. They argue this isn't really possible, because they have actually checked that this news article is not in the training data. But I simply took a random substring here, "voted to strengthen the ban on the ordination of", just that substring, put it into Google, and immediately I find a book with "voted to strengthen prohibitions to ban LGBTQ people from being ordained as ministers". It's not the same article, but it's talking about the same incident the article talks about, and it uses the same language; the author probably read such an article and thought, I can't just copy-paste it, that wouldn't be cool, I'll write it in my own words, but it's largely the same thing. And the Associated Press: also a different article, with a different title than this one right here, but about the same thing and with the same language, "voted to strengthen the faith's divisive bans on same-sex marriage and ordination of LGBT clergy". So the argument that this article wasn't in the training data is just not something I buy in this case. The article as such wasn't there, but many articles about this topic were, and I think the model just interpolates those. Now, they say that one was the hardest article for the humans to decide, and this here was the easiest. The title is something like "Star's Tux Promise Draws Megyn Kelly's Sarcasm", and it says "a year ago, Joaquin Phoenix made headlines when he appeared on the red carpet at
the Golden Globes wearing a tuxedo with a paper bag over his head that read 'I'm a shape-shifter'", blah blah. You would guess that Joaquin Phoenix would do something like this. But their human raters were US-based, and you see right here it says "Megyn Kelly was not impressed, and she let him have it on The Tonight Show". Now, The Tonight Show is not where Megyn Kelly is, and US-based people would, I guess, know this and immediately feel that it's wrong. So I think this text is interpolated from a bunch of different news articles about this, and the interpolation just happened to place this person on that show, which she isn't on, and the humans noticed. But that doesn't change the fact that the model probably just went to the training data, filtered a bunch of articles about these words, and mashed them together. It is a good language model; it is very good at grammar, so it can interpolate different passages of text. And I feel the really useful application of this will be as a sort of fuzzy search engine: I could, for example, input my machine learning research ideas, and the output would be sort of an abstract of a paper that is a mash-up of other papers on the same topic; you can think of many applications. I just don't think we have built something really intelligent here. What is pretty cool, though, is this: they give examples like this one, where they make up a word and ask the model to use it in a sentence. "To screeg something is to swing a sword at it. An example of a sentence that uses the word screeg is:", and of course what the model is going to do, I'd argue, is take this, filter the training data for all the instances where this kind of construction appears ("an example of a sentence that uses the word X is"), which is mostly dictionaries, and since it can't know that word, interpolate it from all this data right here. The cool thing is it actually conjugates the word: "we screeged at each other for several minutes and then we went outside and ate ice cream". You can see how this comes about, but I think it would really be fun to have a model that tells us which training data samples were used here. It can also correct English grammar, which is pretty obvious, though again, the input here is always "poor English ... good English ..., poor English ... good English ..., poor English ...", and the good English version is what the model is asked to output. And I'm fairly sure this here shouldn't be in boldface: this part is given to the model, and the model is only asked to produce this last part; otherwise I'd actually be impressed. But yes, "nothing task specific is provided aside from the examples as conditioning and the poor English input / good English output framing", so the good English output here should not be bold. Authors, if you're listening, this should not be bold, thank you. Okay. But again, as you can see, the target is always the good English, whereas if the model really understood the task, it should also be able to do the inverse: it should be able to produce something poor from something good, because then you would eliminate the possibility that it's just a good English language model,
because it can basically produce something like this without having any clue what the task is: it simply conditions on the input and outputs this sentence because the sentence is very likely, it's already almost there, and it outputs it in better English because it is a good English language model. So, they measure this overfitting, the degree to which their test data appears in this Common Crawl corpus, and they say they have a conservative bound on how many percent of the data in each dataset are clean; as you can see here, they measure how much the performance moves, up or down, if you evaluate only on the clean portion of each dataset. But again, their deduplication is so weak; they do n-gram deduplication, whereas I think you should really do a much fuzzier, much more meaning-level deduplication, especially for the news articles, if you then want to argue that the model has learned to reason. If you simply want to argue that the model is a good language model, fine. Also, look at this: I would expect of a test dataset, say Natural Questions, which is constructed from Wikipedia pages, that if you have the Wikipedia page in the training data, either the entire thing is clean or none of it is clean. And the same for this Winograd dataset: if it somehow leaked into the Common Crawl corpus, either the entire thing is clean or none of it is. I just have a problem with the fact that there are so many in-between values right here, so I'm not convinced by this deduplication. I still think it's a cool model, but I think it's mostly a training-data filter and interpolator rather than an actual reasoner. They go through some of the limitations here, and this broader impact statement is like five pages long; okay, bad people can take the model and do bad things, okay, and that's pretty much it. What I do appreciate is that at the bottom they have basically all the results, but also a lot of task descriptions, like how they framed each task, and more outputs; they give even more outputs on their website. So you can see how each of the tasks was framed: this here is what the model sees, and this is what it's asked to produce. You have this for many of these tasks. For SQuAD you have the context and the question (so the context is actually in there, I didn't know that), and the model is asked to complete something right here. So you can look at how the model sees the tasks, and maybe evaluate for yourself how difficult you think they are. All right, I hope this was informative. It is a long paper, therefore it is a long video. If you're still here and haven't subscribed yet, maybe do; if you liked this and want more, leave a like and tell me in the comments what you think, whether you think this is actually AGI or not, and I'll see you next time. Bye bye!
[ { "start": 0, "end": 4.6000000000000005, "text": " Hello there! Today we're looking at language models are few-shot learners by" }, { "start": 4.6000000000000005, "end": 12.040000000000001, "text": " Tom B Brown, Benjamin Mann, Nick Ryder and Melanie Sabaya and a whole slew of" }, { "start": 12.040000000000001, "end": 19.44, "text": " authors from OpenAI. This paper also called GPT-3 just came out recently." }, { "start": 19.44, "end": 26.64, "text": " GPT-3 is a language model and it comes out of a succession of" }, { "start": 26.64, "end": 30.96, "text": " language models of OpenAI. This paper is basically an investigation into what" }, { "start": 30.96, "end": 35.52, "text": " you can do with giant language models. Now this language model is an order of" }, { "start": 35.52, "end": 41.04, "text": " magnitude larger than anyone has ever built a language model and it can do" }, { "start": 41.04, "end": 46.519999999999996, "text": " some absolutely crazy things. So we'll basically go over the architecture, over" }, { "start": 46.519999999999996, "end": 51.480000000000004, "text": " what the model does and over the experimental results. It turns out that" }, { "start": 51.48, "end": 58.31999999999999, "text": " if you train a language model on enough data it is able to solve NLP tasks that" }, { "start": 58.31999999999999, "end": 64.08, "text": " it has never seen just out of the box. We're going to look into this very" }, { "start": 64.08, "end": 69.02, "text": " cool kind of formulation of the problem. As you can see here the paper is 40" }, { "start": 69.02, "end": 74.24, "text": " pages long without the appendix. It needs its own table of contents which is crazy." }, { "start": 74.24, "end": 79.4, "text": " So we're going to skip a fair bit of things. First of all what is a" }, { "start": 79.4, "end": 84.5, "text": " language model? For those of you who don't know I've done a bunch of videos and you" }, { "start": 84.5, "end": 88.52000000000001, "text": " can see those in my natural language processing playlist about language" }, { "start": 88.52000000000001, "end": 93.2, "text": " models and specifically about transformer language models. So a language model" }, { "start": 93.2, "end": 98.24000000000001, "text": " let's just take an example this sentence right here. Just the sentence as such" }, { "start": 98.24000000000001, "end": 103.2, "text": " like third humans do not require large supervised" }, { "start": 103.2, "end": 107.88000000000001, "text": " datasets to learn most language tasks. This is an English sentence and a" }, { "start": 107.88, "end": 113.03999999999999, "text": " language model would be a model that if you cross out a portion from the end" }, { "start": 113.03999999999999, "end": 119.72, "text": " here like this right here it would be able to tell you what comes next. So in" }, { "start": 119.72, "end": 125.32, "text": " a language model you would input this part right here and it will tell you the" }, { "start": 125.32, "end": 130.64, "text": " next word is datasets. So that's basically all the language model does and" }, { "start": 130.64, "end": 135.56, "text": " once you've trained one you can basically generate word after word after" }, { "start": 135.56, "end": 141.08, "text": " word from it or you can ask it a question like which word is most likely" }, { "start": 141.08, "end": 146.44, "text": " to come next or more likely. 
So a language model is nothing but a model" }, { "start": 146.44, "end": 151.2, "text": " that can kind of generate language in a probabilistic way and the cool thing" }, { "start": 151.2, "end": 156.32, "text": " about language models is that you can train it on any sort of text data and" }, { "start": 156.32, "end": 162.8, "text": " that's what they do here. So they train a language model on giant amounts of data" }, { "start": 162.8, "end": 168.08, "text": " specifically right here they go into the datasets they use. They use this" }, { "start": 168.08, "end": 176.48000000000002, "text": " common crawl dataset which they filter down for quality and this is" }, { "start": 176.48000000000002, "end": 182.92000000000002, "text": " basically a crawl of the entire internet if you will together with these books" }, { "start": 182.92000000000002, "end": 188.84, "text": " datasets and the web text dataset and the Wikipedia dataset. So they throw all" }, { "start": 188.84, "end": 192.24, "text": " of this text that they scrape from the internet together and then train a" }, { "start": 192.24, "end": 201.08, "text": " language model on that. Now the language model right here is called" }, { "start": 201.08, "end": 206.4, "text": " GPT-3 and they train various sizes of it and we'll get into how it's built in a" }, { "start": 206.4, "end": 213.32000000000002, "text": " second but just compare this to a language model like BERT. BERT required" }, { "start": 213.32000000000002, "end": 220.24, "text": " this much flops to train and this is a log scale so this is right here" }, { "start": 220.24, "end": 225.84, "text": " this is several orders of magnitude larger and bigger model and is trained" }, { "start": 225.84, "end": 231.12, "text": " for way longer on this text so naturally it is going to be a lot better at" }, { "start": 231.12, "end": 237.72, "text": " language modeling. You can see right here the size of these models that they" }, { "start": 237.72, "end": 244.60000000000002, "text": " trained on. Remember the previous largest language model the Turing NLG of" }, { "start": 244.60000000000002, "end": 248.60000000000002, "text": " Microsoft had something like 17 billion parameters so it would be comparable to" }, { "start": 248.6, "end": 257.24, "text": " this right here whereas GPT-3 has 175 billion parameters which this is" }, { "start": 257.24, "end": 261.56, "text": " absolutely crazy. This is an order of magnitude higher than anything that's" }, { "start": 261.56, "end": 268.24, "text": " ever existed and if you look at the last GPT the GPT-2 model that if you remember" }, { "start": 268.24, "end": 272.96, "text": " I've made a video about it is too dangerous to be released well now it has" }, { "start": 272.96, "end": 278.48, "text": " been released but was too dangerous to be released it clocked in at about 1.5" }, { "start": 278.48, "end": 285.64000000000004, "text": " billion parameters so compared to this GPT-3 XL model right here they train" }, { "start": 285.64000000000004, "end": 289.6, "text": " these multiple models to basically estimate the effect of the model size" }, { "start": 289.6, "end": 297.16, "text": " and you can see here the largest model has 96 attention layers. Each layer" }, { "start": 297.16, "end": 306.72, "text": " has 96 attention heads and each head is 128 dimensional and it trains on batches" }, { "start": 306.72, "end": 312.52000000000004, "text": " of size 3.2 million. 
This is the batch size absolutely crazy so they train" }, { "start": 312.52000000000004, "end": 319.16, "text": " this on a giant distributed cluster that apparently is provided by Microsoft and" }, { "start": 319.16, "end": 326.32000000000005, "text": " yes crazy crazy things. So how does this model look? This model is a transformer" }, { "start": 326.32000000000005, "end": 331.04, "text": " model and right here we don't even have like a description of a transformer" }, { "start": 331.04, "end": 335.72, "text": " model it's just assumed you know what that is. I have made several videos on" }, { "start": 335.72, "end": 340.08000000000004, "text": " transformer models and especially things like attention is all you need or BERT" }, { "start": 340.08000000000004, "end": 344.92, "text": " or something like this but for those who don't know if I have a transformer" }, { "start": 344.92, "end": 349.44000000000005, "text": " model and I want to build a language model from it let's take this sentence" }, { "start": 349.44000000000005, "end": 355.56, "text": " right here I would input a what's called a context which is the thing I already" }, { "start": 355.56, "end": 360.20000000000005, "text": " have right I would input that into a transformer model and a transformer" }, { "start": 360.20000000000005, "end": 365.12, "text": " model is just several layers of attention mechanism. Now an attention" }, { "start": 365.12, "end": 369.6, "text": " mechanism is basically a way where information is routed in between the" }, { "start": 369.6, "end": 375.8, "text": " different tokens right here and as it goes up the layer basically the the" }, { "start": 375.8, "end": 379.96, "text": " information is routed around and the model can make various inferences and at" }, { "start": 379.96, "end": 386, "text": " the end the model is supposed to come up with the next word that you're going to" }, { "start": 386, "end": 392.04, "text": " put here. Specifically in this paper they use sub words like word piece tokens" }, { "start": 392.04, "end": 397.44, "text": " like it is common in NLP right now but essentially this is an autoregressive" }, { "start": 397.44, "end": 401.52000000000004, "text": " language model so it's not like BERT it's not bi-directional it is" }, { "start": 401.52000000000004, "end": 406.24, "text": " autoregressive it goes from left to right always produces the next word it" }, { "start": 406.24, "end": 412.24, "text": " is like GPT-2 they even say this they say we use the same model and" }, { "start": 412.24, "end": 420.88, "text": " architecture as GPT-2 they just have more layers and wider layers and more" }, { "start": 420.88, "end": 429.48, "text": " data to train it on. So how do they train it? Okay that's we already said they" }, { "start": 429.48, "end": 435.08, "text": " train it in simply in simply a language modeling way just next word prediction" }, { "start": 435.08, "end": 439.71999999999997, "text": " that's it okay it so it's not even something fancy like BERT. 
The" }, { "start": 439.71999999999997, "end": 445.52, "text": " interesting part is when you do the now the the single tasks so what you usually" }, { "start": 445.52, "end": 451.59999999999997, "text": " did with something like BERT so with something like BERT you would do first" }, { "start": 451.59999999999997, "end": 457.08, "text": " pre-train so there you would this is the language modeling right here this" }, { "start": 457.08, "end": 461.52, "text": " pre-training phase where you teach BERT about the English language by just" }, { "start": 461.52, "end": 468.24, "text": " feeding it a lot of data and then second you had a step called fine-tuning fine I" }, { "start": 468.24, "end": 474.59999999999997, "text": " can't even write tuning so on the second one you'd have something like the task" }, { "start": 474.6, "end": 477.6, "text": " you're actually interested in and let's say the task you're actually interested" }, { "start": 477.6, "end": 481.72, "text": " in is sentiment classification so in sentiment classification you have like a" }, { "start": 481.72, "end": 488.40000000000003, "text": " sentence like blah blah blah and you want to know is that a positive" }, { "start": 488.40000000000003, "end": 492.8, "text": " sentiment like is a happy sentence or is it a sad sentence and you would have a" }, { "start": 492.8, "end": 498.6, "text": " database of labeled instances of that so in this database you'd have a bunch of" }, { "start": 498.6, "end": 502.92, "text": " sentences and for each one you would know is it good is it is it positive or" }, { "start": 502.92, "end": 508.56, "text": " is it negative and then you'd have like a smaller test set right here and you" }, { "start": 508.56, "end": 513.2, "text": " would you would train you would basically take this pre-trained model" }, { "start": 513.2, "end": 518.36, "text": " train it on this data set in a supervised machine learning way and then" }, { "start": 518.36, "end": 522.8000000000001, "text": " test it on this test set right here this is called fine-tuning that's what they" }, { "start": 522.8000000000001, "end": 530, "text": " display here so in fine-tuning the model is trained via repeated gradient updates" }, { "start": 530, "end": 536.56, "text": " using a large corpus of example tasks all right so the example task right here" }, { "start": 536.56, "end": 540.44, "text": " could be translating to French so in your training database of the" }, { "start": 540.44, "end": 545.28, "text": " translation task would be this would be sea otter is called Luther de Mer and" }, { "start": 545.28, "end": 552.12, "text": " in and and then you'd actually change your model you do a gradient update I" }, { "start": 552.12, "end": 557.44, "text": " mean if if you're in the NLP world this seems very natural but they are going to" }, { "start": 557.44, "end": 563, "text": " argue in a second that this isn't the only way that you can teach a model a" }, { "start": 563, "end": 568.5600000000001, "text": " task right so this this seems very natural right you're going to change" }, { "start": 568.5600000000001, "end": 572.48, "text": " your model you take your pre-trained model and you're going to fine-tune it" }, { "start": 572.48, "end": 576, "text": " on this task and if you have a different task right if you have now" }, { "start": 576, "end": 580.72, "text": " question answering task you're going to have a different data set right here" }, { "start": 580.72, "end": 586.96, "text": " with a train and test data set and you're going to 
take the pre-trained" }, { "start": 586.96, "end": 592.0400000000001, "text": " model and then fine-tune it on that data set and evaluate it on that test set so" }, { "start": 592.0400000000001, "end": 596.8000000000001, "text": " this gives you basically with as many models as you have tasks and you for" }, { "start": 596.8000000000001, "end": 601.64, "text": " each one you need a big big training data set in order to perform well" }, { "start": 601.64, "end": 605.88, "text": " sometimes we have this sometimes we don't what they are interested in is" }, { "start": 605.88, "end": 611.36, "text": " basically to take the pre-trained model and directly go and evaluate it on the" }, { "start": 611.36, "end": 616.5600000000001, "text": " test data set in a sort of a zero-shot fashion now it is not zero shot as they" }, { "start": 616.56, "end": 622.0799999999999, "text": " will argue so what are they doing in a true zero-shot fashion you would just" }, { "start": 622.0799999999999, "end": 627.92, "text": " take your your language model that you pre-trained and you just input the" }, { "start": 627.92, "end": 633.88, "text": " following text you input what they call a task description and a prompt so this" }, { "start": 633.88, "end": 639.52, "text": " is the input and you simply ask the model as a language model to predict the" }, { "start": 639.52, "end": 644.04, "text": " next work it's just what comes here now what you're counting on is basically" }, { "start": 644.04, "end": 649.4, "text": " that in the training data the model has seen a structure like this enough to" }, { "start": 649.4, "end": 653.36, "text": " understand what's going on so that in the training data somewhere in the" }, { "start": 653.36, "end": 658.28, "text": " internet there was the structure of translate something to something and" }, { "start": 658.28, "end": 663.48, "text": " then there would be a word here of something and you know it kind of has to" }, { "start": 663.48, "end": 667.8, "text": " realize that this goes here like that the next word so basically what you're" }, { "start": 667.8, "end": 675.52, "text": " asking it is if you were to find this text on a website or on Wikipedia or in" }, { "start": 675.52, "end": 681.8, "text": " any of the books data set if you were to find this piece of text what would be" }, { "start": 681.8, "end": 688.8399999999999, "text": " the next word in that piece of text and you kind of hope that this this is" }, { "start": 688.8399999999999, "end": 695.88, "text": " enough if you've trained a good language model that this is enough to to to" }, { "start": 695.88, "end": 700, "text": " actually produce the French translation here now before I realize I've said the" }, { "start": 700, "end": 703.56, "text": " language modeling is to teach the model the English language actually not true" }, { "start": 703.56, "end": 708.2, "text": " in this common crawl corpus you also have many foreign languages so you" }, { "start": 708.2, "end": 716.56, "text": " basically teach you the general model of the internet now they trend they they" }, { "start": 716.56, "end": 722.92, "text": " contrast this to what they call one-shot learning so in one-shot learning you not" }, { "start": 722.92, "end": 726.76, "text": " only do you have the task description right here and this is this is a string" }, { "start": 726.76, "end": 730.5999999999999, "text": " right you don't specifically tell the model that this is now a translation" }, { "start": 730.5999999999999, "end": 734.76, "text": " task you 
simply input this as a string so not only do you have the task" }, { "start": 734.76, "end": 740.76, "text": " description and the prompt right here but you also have one example and the" }, { "start": 740.76, "end": 746.7199999999999, "text": " example and this is where they this is where they bring in the where they say" }, { "start": 746.7199999999999, "end": 751.28, "text": " it's not exactly zero shot where's my little drawing here so the example is" }, { "start": 751.28, "end": 758.12, "text": " going to come from the training data set of the task that you're interested in" }, { "start": 758.12, "end": 765.28, "text": " but the important part is you never train on it you never explicitly train" }, { "start": 765.28, "end": 769.52, "text": " on that example you simply put it in the context so you simply put this string so" }, { "start": 769.52, "end": 776.8399999999999, "text": " translate English to French new line sea otter is loutre de mer new line cheese" }, { "start": 776.84, "end": 782.72, "text": " is what you simply input that string into the model as a language model and" }, { "start": 782.72, "end": 789.32, "text": " you ask it what's the next word right here okay so I hope I hope this is clear" }, { "start": 789.32, "end": 795.32, "text": " this is what they call kind of one-shot generalization and by one shot they" }, { "start": 795.32, "end": 801.08, "text": " basically mean you simply provide this thing in the context of the model as a" }, { "start": 801.08, "end": 807.0400000000001, "text": " language model now the the advantage here is immediately clear that you only" }, { "start": 807.0400000000001, "end": 813.44, "text": " have to train one model then and then basically at inference time you can just" }, { "start": 813.44, "end": 820.64, "text": " input the task description and the sort of training data for the task into its" }, { "start": 820.64, "end": 828.48, "text": " its evaluation context and the task itself and it will if if it is if it" }, { "start": 828.48, "end": 834.4, "text": " really does what they claim it does it would be able to sort of understand the" }, { "start": 834.4, "end": 838.5600000000001, "text": " prompt here understand what it means to translate from English to French it" }, { "start": 838.5600000000001, "end": 844.12, "text": " would look at this example and say oh that's what you want me to do okay and" }, { "start": 844.12, "end": 849.6800000000001, "text": " then it would be able to generalize to this input right here to say okay from" }, { "start": 849.6800000000001, "end": 854.48, "text": " the task description and the example I so I get I get what you want me to do I" }, { "start": 854.48, "end": 861.8000000000001, "text": " will the next word here is cheese what's cheese in French I don't remember" }, { "start": 861.8000000000001, "end": 868.6, "text": " fromage now the way the language model is going to interpret that is slightly" }, { "start": 868.6, "end": 872.08, "text": " different as we said before the way the language model is going to interpret is" }, { "start": 872.08, "end": 878.24, "text": " if you were to find the following text on a website somewhere the text is" }, { "start": 878.24, "end": 882.88, "text": " called translate English to French new line sea otter goes to loutre de mer new" }, { "start": 882.88, "end": 888.64, "text": " line cheese goes to what would be the next word on that website so that's what" }, { "start": 888.64, "end": 891.68, "text": " the model sees right you have to differentiate between 
what the human" }, { "start": 891.68, "end": 895.52, "text": " wants and what the model sees the model is just a language model that is going" }, { "start": 895.52, "end": 899.92, "text": " to take the next that is just going to determine if I were to see this text" }, { "start": 899.92, "end": 904.56, "text": " somewhere what will be the most likely next word so you have to phrase your" }, { "start": 904.56, "end": 910.4, "text": " tasks in a way that makes sense in that thing and they also have this few shot" }, { "start": 910.4, "end": 914.72, "text": " thing where you not only provide one context but you provide a bunch of" }, { "start": 914.72, "end": 922.0799999999999, "text": " context to basically tell the model more of what it what it should do right now" }, { "start": 922.0799999999999, "end": 927.52, "text": " this doesn't only work in a free mode where you basically say what's the next" }, { "start": 927.52, "end": 930.88, "text": " word here what you can also do if you have such a language hold with the exact" }, { "start": 930.88, "end": 935.4, "text": " same model you can give it basically a couple of possibilities so you can give" }, { "start": 935.4, "end": 942.84, "text": " it it's you can say like it's either shop or it's from us or it's hotel I" }, { "start": 942.84, "end": 949, "text": " think that has like this so you can you can basically restrict it to only" }, { "start": 949, "end": 954.1999999999999, "text": " produce one of these three things so in translation this might not be you know" }, { "start": 954.1999999999999, "end": 960.24, "text": " the the way to go but in if you have like yes no answers questions you can" }, { "start": 960.24, "end": 964.96, "text": " restrict it to that so in a lot of these NLP tasks you have some options given" }, { "start": 964.96, "end": 969, "text": " for a given question and you can also restrict it so don't you know you always" }, { "start": 969, "end": 974.24, "text": " have to go with the task at hand but this is in essence what the model does" }, { "start": 974.24, "end": 980.2800000000001, "text": " and this is I think this is the new well not the new per se but this is one of" }, { "start": 980.2800000000001, "end": 984.32, "text": " the core ideas of this paper if you take anything from it there's no new" }, { "start": 984.32, "end": 988.9200000000001, "text": " architecture right here there's no new wisdom in training they train in a" }, { "start": 988.9200000000001, "end": 993.4000000000001, "text": " standard way in a standard language modeling fashion a standard transformer" }, { "start": 993.4, "end": 997.52, "text": " architecture this just happens to be ginormous okay this right here this" }, { "start": 997.52, "end": 1001.76, "text": " thing where they say most of these things would fine-tune and then" }, { "start": 1001.76, "end": 1007.24, "text": " basically end up with one model per task and you need a big data set per task but" }, { "start": 1007.24, "end": 1013.64, "text": " we simply can do this since we have such a large language model it is basically" }, { "start": 1013.64, "end": 1018, "text": " already basically already knows how to do these tasks as long as we formulate" }, { "start": 1018, "end": 1023.8, "text": " them in a language model way we can have the model perform these tasks and they" }, { "start": 1023.8, "end": 1029.96, "text": " will show that this works surprisingly well throughout this paper now we get" }, { "start": 1029.96, "end": 1036.64, "text": " into the experimental results right 
here and the experimental results first of" }, { "start": 1036.64, "end": 1043.48, "text": " all on language modeling as you can see here they basically say as you go up" }, { "start": 1043.48, "end": 1048.24, "text": " with the parameters you see the more yellow ones are the parameters you go" }, { "start": 1048.24, "end": 1053.72, "text": " into your validation loss goes down and down and down and down and I believe" }, { "start": 1053.72, "end": 1059.6, "text": " this is sort of a log scale as well so this is the log probability so the" }, { "start": 1059.6, "end": 1067.56, "text": " perplexity and that the this basically follows a trend oh no this is a log" }, { "start": 1067.56, "end": 1074.04, "text": " scale this this is a log scale it follows a trend where as you scale up" }, { "start": 1074.04, "end": 1078.3999999999999, "text": " the model and as you scale up the compute that the model gets and we know" }, { "start": 1078.3999999999999, "end": 1082, "text": " for these big language models we basically know you have to scale up model" }, { "start": 1082, "end": 1088.1599999999999, "text": " size compute time and data set size in the same fashion for them to make these" }, { "start": 1088.1599999999999, "end": 1094.76, "text": " gains but if you do that it follows like a a power law where as you scale up" }, { "start": 1094.76, "end": 1097.8799999999999, "text": " these things the model basically gets better and better and better and the" }, { "start": 1097.8799999999999, "end": 1103.08, "text": " question of course is you know how far how far can we go with this but for now" }, { "start": 1103.08, "end": 1107.64, "text": " it seems to hold quite well that you can just make improvements by scaling up" }, { "start": 1107.64, "end": 1116.68, "text": " your model on language modeling at least so where do we where do we basically go" }, { "start": 1116.68, "end": 1122.32, "text": " from here so before we dive into the actual results of the individual tasks" }, { "start": 1122.32, "end": 1126.72, "text": " so now they're going to formulate these individual tasks so they have like pure" }, { "start": 1126.72, "end": 1130.6399999999999, "text": " language modeling tasks right here like Alice was friends with Bob Alice went to" }, { "start": 1130.6399999999999, "end": 1135.08, "text": " visit her friend and then it's like what's the next word okay is Bob and" }, { "start": 1135.08, "end": 1139.76, "text": " George bought some baseball equipment a ball a glove and a what's the next word" }, { "start": 1139.76, "end": 1147.8799999999999, "text": " and I guess this should be hat sorry bat right here but we're going to go into" }, { "start": 1147.88, "end": 1155.4, "text": " the into the tasks and one of them is for example question answering so in" }, { "start": 1155.4, "end": 1160.2, "text": " question answering you simply get either you get just a pure question or a" }, { "start": 1160.2, "end": 1168.64, "text": " context and a question and they do the facts they test where a situation where" }, { "start": 1168.64, "end": 1172.72, "text": " you just get the question so you just get I don't know who is the queen of" }, { "start": 1172.72, "end": 1179.08, "text": " England or something like this and the model is simply to produce either the" }, { "start": 1179.08, "end": 1185.08, "text": " results direct or to choose from a bunch of answers which one is the most likely" }, { "start": 1185.08, "end": 1192.1200000000001, "text": " as a language model and as you can see as you scale up the 
language model the" }, { "start": 1192.1200000000001, "end": 1197.68, "text": " zero shot one shot and few shot predictions so in few shot you give 64" }, { "start": 1197.68, "end": 1203.3600000000001, "text": " different examples from the training set in the context so you always have so" }, { "start": 1203.3600000000001, "end": 1208.2, "text": " your context is going to look something like this and they have examples at the" }, { "start": 1208.2, "end": 1213.0800000000002, "text": " bottom and haven't looked at the QA task but the the example is going to be" }, { "start": 1213.0800000000002, "end": 1217.28, "text": " something like this you have a task description like answer the following" }, { "start": 1217.28, "end": 1223.48, "text": " questions answer the question and then you have your examples in zero shot" }, { "start": 1223.48, "end": 1228.96, "text": " that's zero and one shot it's one that's what I'd like and then you say how tall" }, { "start": 1228.96, "end": 1239.32, "text": " who sorry who I don't know who climbed Everest the first the rest the first and" }, { "start": 1239.32, "end": 1246.84, "text": " then you say Hillary I think it was Hillary no I don't remember and then you" }, { "start": 1246.84, "end": 1252.88, "text": " say I don't know how how tall is the Empire State building and then you have" }, { "start": 1252.88, "end": 1260.24, "text": " like some number here and at the end you say what was it was it was a question" }, { "start": 1260.24, "end": 1264.92, "text": " from before I don't know who is the Queen of England yeah who is the Queen" }, { "start": 1264.92, "end": 1271.68, "text": " of England and then you ask the model to predict the next word right here okay" }, { "start": 1271.68, "end": 1279.0400000000002, "text": " and you do this in a closed book setting which means you have no access to" }, { "start": 1279.04, "end": 1283.72, "text": " Wikipedia or whatever like usually these systems they can go and query Wikipedia" }, { "start": 1283.72, "end": 1289.04, "text": " but this system doesn't so you just you just want to know what has the model" }, { "start": 1289.04, "end": 1294.92, "text": " learned about the world by simply absorbing giant amounts of text so if" }, { "start": 1294.92, "end": 1299.84, "text": " somewhere in the training data the fact that the Queen of England is Elizabeth" }, { "start": 1299.84, "end": 1305.3999999999999, "text": " the second is present it should complete this right here and it performs" }, { "start": 1305.4, "end": 1311.64, "text": " surprisingly well as you can see here so it manages to outperform a fine-tuned" }, { "start": 1311.64, "end": 1316.0800000000002, "text": " state-of-the-art model that is actually that is fine-tuned on question" }, { "start": 1316.0800000000002, "end": 1319.72, "text": " answering right this has it has been built for question answering and this" }, { "start": 1319.72, "end": 1327.48, "text": " model outperforms it by simply having a lot of language so this here is the" }, { "start": 1327.48, "end": 1336.92, "text": " results on on these open domain QA tasks and you you see right here it the this" }, { "start": 1336.92, "end": 1343.88, "text": " this few shot it outperforms this open domain and open domain means that the" }, { "start": 1343.88, "end": 1356.32, "text": " model can go and look at some Wikipedia page and yeah so so this is pretty cool" }, { "start": 1356.32, "end": 1361.28, "text": " but there are other things like the natural questions where it under" }, { "start": 1361.28, 
"end": 1367.3999999999999, "text": " performs compared to this open domain thing and they say this is mainly due to" }, { "start": 1367.3999999999999, "end": 1372.56, "text": " the natural questions being like it's very much about factual Wikipedia" }, { "start": 1372.56, "end": 1377.52, "text": " knowledge and so on maybe like the question we just made maybe is more of a" }, { "start": 1377.52, "end": 1383.04, "text": " natural question type of thing and since and the model is apparently not as good" }, { "start": 1383.04, "end": 1388.3999999999999, "text": " at that but it's still impressive that the model is able to do this out of the" }, { "start": 1388.3999999999999, "end": 1395.68, "text": " box okay so before I said something like before we go into the experiments I want" }, { "start": 1395.68, "end": 1401.32, "text": " the following so I have like some sort of hypothesis it's not it's an it's not" }, { "start": 1401.32, "end": 1407.56, "text": " an uncommon hypothesis that basically these things these giant language models" }, { "start": 1407.56, "end": 1411.72, "text": " right they're just these transformers layer after layer after layer with their" }, { "start": 1411.72, "end": 1417.88, "text": " connections in here what I think is happening is they are simply storing the" }, { "start": 1417.88, "end": 1423.24, "text": " training data right they are simply storing the training data in these" }, { "start": 1423.24, "end": 1428.08, "text": " connections right here so usually you think of storing the training data in" }, { "start": 1428.08, "end": 1432.16, "text": " some form of maybe we have like some module right here some database module" }, { "start": 1432.16, "end": 1437.28, "text": " in the neural network and it learns to query the module but ultimately if you" }, { "start": 1437.28, "end": 1443.16, "text": " train a neural network what you have is data and you train a function with" }, { "start": 1443.16, "end": 1449.3999999999999, "text": " parameters on that data and ultimately what you're doing is you're distilling" }, { "start": 1449.3999999999999, "end": 1455.28, "text": " the data into these parameters and you kind of hope to learn some regularities" }, { "start": 1455.28, "end": 1460.08, "text": " from it but ultimately the information about your training data influences or" }, { "start": 1460.08, "end": 1465.68, "text": " determines your final parameters of your function now I can imagine that if you" }, { "start": 1465.68, "end": 1472, "text": " have such a giant neural network with so many weights like 17 sorry 170 billion" }, { "start": 1472, "end": 1478.48, "text": " weights that you can pretty efficiently actually store the training data in that" }, { "start": 1478.48, "end": 1486.04, "text": " model and when you ask this model now to do something what it basically does is" }, { "start": 1486.04, "end": 1491.04, "text": " what these people sort of argue is that it has learned these language tasks is" }, { "start": 1491.04, "end": 1495.92, "text": " learned to reason over language and so on what I think is happening much more" }, { "start": 1495.92, "end": 1501.84, "text": " is it will simply go to the training data since it has stored the entire" }, { "start": 1501.84, "end": 1507.2, "text": " training data in its weights and it will sort of pull out the five to ten to fifty" }, { "start": 1507.2, "end": 1513.48, "text": " training examples that are most relevant to what you put in and it will sort of" }, { "start": 1513.48, "end": 1517.56, "text": " interpolate 
right you go to the training data and it'll pull out a bunch of" }, { "start": 1517.56, "end": 1521.8, "text": " training samples that are relevant to the context you put in right now and" }, { "start": 1521.8, "end": 1526.9199999999998, "text": " then it will sort of integrate those into the next word that's going to come" }, { "start": 1526.9199999999998, "end": 1533.3999999999999, "text": " out right here and I think if you look at this paper in terms of this so you" }, { "start": 1533.3999999999999, "end": 1538.84, "text": " always write you input a context and the context is split into a task" }, { "start": 1538.84, "end": 1545.52, "text": " description and then it is split into k different examples and then it is it is" }, { "start": 1545.52, "end": 1549.44, "text": " it has a prompt sorry this year this is the prompt so the task description is" }, { "start": 1549.44, "end": 1553.56, "text": " please translate from English to French and the k different things are k" }, { "start": 1553.56, "end": 1557.8799999999999, "text": " different translations and then the prompt is you know what what you should" }, { "start": 1557.8799999999999, "end": 1563.34, "text": " do so it's like half of a K half of one of these boxes right here so these boxes" }, { "start": 1563.34, "end": 1567.4, "text": " are have blah blah blah turns to blah blah blah and then the prompt is simply" }, { "start": 1567.4, "end": 1574.6399999999999, "text": " without the the right side I think what it does is it will simply take all of" }, { "start": 1574.64, "end": 1581.16, "text": " this and it will go to its own training data which it has stored in its weights" }, { "start": 1581.16, "end": 1587, "text": " and it will filter the training data and basically take out the the things that" }, { "start": 1587, "end": 1593, "text": " sort of pattern match sort of regex match in a fuzzy way to this context and" }, { "start": 1593, "end": 1597.6000000000001, "text": " then it will kind of interpolate these training examples in order to come up" }, { "start": 1597.6, "end": 1604.9199999999998, "text": " with the answer I don't think there is reasoning happening here and we're going" }, { "start": 1604.9199999999998, "end": 1610.36, "text": " to if you go through the paper with this view then you can a lot of things" }, { "start": 1610.36, "end": 1615.9199999999998, "text": " actually make sense and I actually I think that we need we need what we need" }, { "start": 1615.9199999999998, "end": 1620.74, "text": " when think people think of like explainable machine learning they often" }, { "start": 1620.74, "end": 1624.12, "text": " think that if I'm going to input something like I'm going to input an" }, { "start": 1624.12, "end": 1630.2399999999998, "text": " image into a classifier and it comes out a certain class car I like the" }, { "start": 1630.2399999999998, "end": 1635.36, "text": " explainability should be which part of this image was it the wheels was it the" }, { "start": 1635.36, "end": 1639.6399999999999, "text": " the hood which part of the image which part of the input image is responsible" }, { "start": 1639.6399999999999, "end": 1644.04, "text": " for making that determination what I think in especially in these language" }, { "start": 1644.04, "end": 1649.56, "text": " models what we should do is if the model predicts something right here the next" }, { "start": 1649.56, "end": 1655.24, "text": " word I think we should somehow have a method of determining which of the" }, { "start": 1655.24, "end": 1661.08, 
"text": " training examples that the model used to interpolate given this context" }, { "start": 1661.08, "end": 1666.36, "text": " because I'm pretty sure these training is you will find so if you'll find that" }, { "start": 1666.36, "end": 1670.84, "text": " for example this weight and this weight and this weight was very responsible for" }, { "start": 1670.84, "end": 1676.72, "text": " making this prediction happen I'm pretty sure you can somehow during training" }, { "start": 1676.72, "end": 1682.08, "text": " build an index of which of the which five training examples had most influence" }, { "start": 1682.08, "end": 1685.96, "text": " on that particular weight or on this combination of weights and then you can" }, { "start": 1685.96, "end": 1691.88, "text": " sort of go backwards and say you made this decision right here model please" }, { "start": 1691.88, "end": 1696.68, "text": " tell me which of the training data samples were responsible for making that" }, { "start": 1696.68, "end": 1702.08, "text": " decision actually pretty sure that already exists like I'm never the first" }, { "start": 1702.08, "end": 1709.76, "text": " one to think of these things though if I am site me site the channel no but just" }, { "start": 1709.76, "end": 1715.1999999999998, "text": " an interesting way to think about this model and an interesting way to think" }, { "start": 1715.1999999999998, "end": 1719.6799999999998, "text": " about kind of what does what would explain ability even mean in a model" }, { "start": 1719.6799999999998, "end": 1725.1999999999998, "text": " like this and my argument is since it interpolates the training data the" }, { "start": 1725.1999999999998, "end": 1730.1599999999999, "text": " interpret ability should come from the fact of which training samples does it" }, { "start": 1730.16, "end": 1737.28, "text": " interpolate okay let's go to translation so in translation as we said they simply" }, { "start": 1737.28, "end": 1748, "text": " input the like the task and then the few examples and then and then the output" }, { "start": 1748, "end": 1753.68, "text": " okay and you can see right here what you can see is that again as the model goes" }, { "start": 1753.68, "end": 1760.16, "text": " up in parameters the performance generally increases and also you can see" }, { "start": 1760.16, "end": 1765.5600000000002, "text": " that the performance is pretty good every time that this model goes to" }, { "start": 1765.5600000000002, "end": 1771.72, "text": " English so it goes if it if the target language is English which sort of makes" }, { "start": 1771.72, "end": 1777.0800000000002, "text": " sense because like a large part of the corpus they train on is English so being" }, { "start": 1777.0800000000002, "end": 1782.5600000000002, "text": " an English language model it should be pretty good if it is asked to produce" }, { "start": 1782.56, "end": 1787.12, "text": " English and it's not as good if it is asked to go into the different direction" }, { "start": 1787.12, "end": 1793.8799999999999, "text": " now what you also see is that it is not really a difference whether you translate" }, { "start": 1793.8799999999999, "end": 1800.08, "text": " from from which language you translate but if you go to English but it very" }, { "start": 1800.08, "end": 1807.6, "text": " much matters to which language you go if it is from English so this sort of makes" }, { "start": 1807.6, "end": 1812.9599999999998, "text": " sense in that it is just trained on a lot of English data and right 
here" }, { "start": 1812.9599999999998, "end": 1821.28, "text": " sometimes they are on par with the with the state-of-the-art supervised methods" }, { "start": 1821.28, "end": 1825.1599999999999, "text": " and also other times they outperform these methods right here and these" }, { "start": 1825.1599999999999, "end": 1829.1599999999999, "text": " methods are unsupervised but are specifically so they don't have a" }, { "start": 1829.1599999999999, "end": 1833.6799999999998, "text": " supervised training data set that goes let's say from English to French but" }, { "start": 1833.68, "end": 1839.3200000000002, "text": " they are built with this in mind that they need to translate later so they are" }, { "start": 1839.3200000000002, "end": 1844.52, "text": " sort of task specific but don't have a supervised training set and this model" }, { "start": 1844.52, "end": 1853.3600000000001, "text": " right here it just learns whatever it learns and it it just it just does it" }, { "start": 1853.3600000000001, "end": 1857.2, "text": " just does this this language model learning and at the end just because it" }, { "start": 1857.2, "end": 1862.88, "text": " has seen some websites where language of both things appear it can now translate" }, { "start": 1862.88, "end": 1873.7600000000002, "text": " reasonably well okay now yeah so the results here are a bit noisy but it is" }, { "start": 1873.7600000000002, "end": 1876.48, "text": " still interesting to see that it sometimes even gets close to the" }, { "start": 1876.48, "end": 1882.3200000000002, "text": " supervised thing though they say that they are not familiar with the literature" }, { "start": 1882.3200000000002, "end": 1889.22, "text": " and are not sure that these models that these numbers are you know good okay okay" }, { "start": 1889.22, "end": 1897.4, "text": " the next thing is these um Winograd schemes where you do have where is the" }, { "start": 1897.4, "end": 1903.6000000000001, "text": " text here is a classic NLP task that involves determining which word a" }, { "start": 1903.6000000000001, "end": 1909.92, "text": " pronoun refers to when the pronoun is grammatically ambiguous but semantically" }, { "start": 1909.92, "end": 1916.68, "text": " unambiguous to a human so these are sort of human produced sentences where" }, { "start": 1916.68, "end": 1923.04, "text": " it's kind of a pronoun could refer to multiple things I don't have a example" }, { "start": 1923.04, "end": 1931.96, "text": " present but where do we have the right here you can see that this model will" }, { "start": 1931.96, "end": 1939.88, "text": " out produce a fine-tuned large but will not out produce a fine-tuned Roberto" }, { "start": 1939.88, "end": 1947.3200000000002, "text": " large so it is going to it is going to come it is competing at least with the" }, { "start": 1947.3200000000002, "end": 1953.3200000000002, "text": " fine-tuned models that were made specifically for that task right again" }, { "start": 1953.3200000000002, "end": 1960.1200000000001, "text": " this is pretty pretty interesting and you also see that the larger models here" }, { "start": 1960.1200000000001, "end": 1964.68, "text": " it starts to make a difference whether or not you give it one zero or one or" }, { "start": 1964.68, "end": 1976.92, "text": " more examples okay so we'll get into we'll get into the the more interesting" }, { "start": 1976.92, "end": 1986.3600000000001, "text": " things right here in this thing right here where is it yes this is the kind of" }, { "start": 
1986.36, "end": 1995.1599999999999, "text": " a physical physical question physical QA where it is a bit of common-sense" }, { "start": 1995.1599999999999, "end": 2004.8799999999999, "text": " reasoning so you're asked to I don't yeah these are like science questions" }, { "start": 2004.8799999999999, "end": 2010.6, "text": " multiple-choice questions collected from a third to ninth grade exams and the" }, { "start": 2010.6, "end": 2019.1599999999999, "text": " physical QA is physical QA asks common-sense question about how the" }, { "start": 2019.1599999999999, "end": 2024.32, "text": " physical word work world works and is intended as a probe of grounded" }, { "start": 2024.32, "end": 2030.24, "text": " understanding of the world so it has questions as I understand it it has" }, { "start": 2030.24, "end": 2035.9599999999998, "text": " questions like if a drop a ball will it fall on the ground or where will it fall" }, { "start": 2035.96, "end": 2042.6000000000001, "text": " or something like this and they say that they can outperform a fine-tuned state" }, { "start": 2042.6000000000001, "end": 2048.92, "text": " of the art model on this if they go just high enough and you can also see that" }, { "start": 2048.92, "end": 2055.6, "text": " there isn't much of a difference between zero one and few shot the methods of" }, { "start": 2055.6, "end": 2060.56, "text": " this model right here even though zero shot is even higher than one shot so" }, { "start": 2060.56, "end": 2066.56, "text": " this is probably just noise but then you find out that they have an asterisk here" }, { "start": 2066.56, "end": 2075.08, "text": " and this means that this is potentially a contaminated data set so they have" }, { "start": 2075.08, "end": 2079.56, "text": " potential contamination issue so what they found was there was a significant" }, { "start": 2079.56, "end": 2085.72, "text": " overlap between the data set this data set and their training data set and" }, { "start": 2085.72, "end": 2092.16, "text": " they even they only realized this too late because there was a bug in their" }, { "start": 2092.16, "end": 2099.08, "text": " deduplication code and then they couldn't change it anymore like I because" }, { "start": 2099.08, "end": 2103.9599999999996, "text": " this model is so large that they couldn't restart the training because" }, { "start": 2103.9599999999996, "end": 2108.3599999999997, "text": " they've already spent like so much money and energy on it this is crazy I think" }, { "start": 2108.3599999999997, "end": 2112.24, "text": " these language models are getting so large that we should building them we" }, { "start": 2112.24, "end": 2118.04, "text": " should more think of it like we built the the International Space Station or" }, { "start": 2118.04, "end": 2122.8799999999997, "text": " something like this where it's a project where humanity sort of collaborates or" }, { "start": 2122.8799999999997, "end": 2126.3199999999997, "text": " there's a big effort and you build it once and whatever you have you have" }, { "start": 2126.3199999999997, "end": 2135.3999999999996, "text": " right so these these good numbers here are simply or not simply are because or" }, { "start": 2135.3999999999996, "end": 2139.9599999999996, "text": " could be influenced by this contamination and I think that's what's" }, { "start": 2139.96, "end": 2143.92, "text": " happening right here even though they will make the case that this" }, { "start": 2143.92, "end": 2150.16, "text": " contamination isn't really an issue I 
can probably show you that it might be" }, { "start": 2150.16, "end": 2156.32, "text": " it may be actually is an issue because on the other data sets at the the fine" }, { "start": 2156.32, "end": 2165.88, "text": " tuned state-of-the-art model outperform the GPT-3 quite a bit so and also the" }, { "start": 2165.88, "end": 2170, "text": " the fact that the you know if you provide a demonstration or many" }, { "start": 2170, "end": 2173.84, "text": " demonstrations it doesn't actually change that much it kind of tells me" }, { "start": 2173.84, "end": 2177.6, "text": " that the model sort of already knows what the answer is and doesn't really" }, { "start": 2177.6, "end": 2181.44, "text": " need demonstrations because it doesn't help if you have the training data" }, { "start": 2181.44, "end": 2192.04, "text": " stored or the test data you don't really have to get demonstrations right so they" }, { "start": 2192.04, "end": 2197.84, "text": " have a few other a few other things right here where on these CoQA tasks they" }, { "start": 2197.84, "end": 2204.08, "text": " perform pretty poorly compared to others or poorly let's say they perform well" }, { "start": 2204.08, "end": 2213.88, "text": " but not particularly more well than a state-of-the-art and they perform" }, { "start": 2213.88, "end": 2217.88, "text": " especially poorly on the reading comprehension sorry that's the that's" }, { "start": 2217.88, "end": 2225.28, "text": " the CoQA so in reading comprehension what you have to do is abstractive" }, { "start": 2225.28, "end": 2230.28, "text": " multiple choice and span based answer formats in both dialogue and single" }, { "start": 2230.28, "end": 2235.52, "text": " question setting so basically have to read a piece of text like this and then" }, { "start": 2235.52, "end": 2241.6400000000003, "text": " answer a question about the piece of text now this is something where I think" }, { "start": 2241.64, "end": 2248.92, "text": " you cannot really interpolate the training data super well and therefore" }, { "start": 2248.92, "end": 2252.8399999999997, "text": " so you can't really just pattern match and interpret because you have to do" }, { "start": 2252.8399999999997, "end": 2259.52, "text": " actual reasoning and I think that's why the model performs poorly here they do" }, { "start": 2259.52, "end": 2267.8599999999997, "text": " measure this on on SuperGLUE which is a NLP benchmark and also here you can see" }, { "start": 2267.86, "end": 2273.76, "text": " it doesn't outperform a fine-tuned state-of-the-art model on these tasks" }, { "start": 2273.76, "end": 2280.08, "text": " but it does outperform a fine-tuned BERT model slightly the BERT model is fine" }, { "start": 2280.08, "end": 2284.84, "text": " tuned on these things whereas GPT-3 isn't but notice the tasks in which it" }, { "start": 2284.84, "end": 2290.04, "text": " does well and in which it doesn't do well compared to the state-of-the-art" }, { "start": 2290.04, "end": 2296.9, "text": " model so for example in the BoolQ it doesn't do particularly well right" }, { "start": 2296.9, "end": 2301.32, "text": " the state-of-the-art is 91 and only has 76 that's quite a large difference and" }, { "start": 2301.32, "end": 2307.12, "text": " actually have the SuperGLUE benchmark open here and you can see this is the" }, { "start": 2307.12, "end": 2314.12, "text": " BoolQ so an example here would be is France the same time zone as the UK" }, { "start": 2314.12, "end": 2319.28, "text": " and then there is like a passage and 
you need to reason about from this passage" }, { "start": 2319.28, "end": 2326.56, "text": " about whether or not this answer is true or false okay this this is very much not" }, { "start": 2326.56, "end": 2331.36, "text": " language modeling this is reasoning and that's why the model is doing poorly" }, { "start": 2331.36, "end": 2336.52, "text": " here whereas in another thing you see these for example this copa right here" }, { "start": 2336.52, "end": 2342.32, "text": " the model is doing almost as good as a fine-tuned state-of-the-art and I have to" }, { "start": 2342.32, "end": 2347.2799999999997, "text": " stress this model has never actually learned this task in a supervised way" }, { "start": 2347.2799999999997, "end": 2353.72, "text": " it's simply a language model and I have this copa task right here and these are" }, { "start": 2353.72, "end": 2359.9199999999996, "text": " the examples so one example is the premise the man broke his toe what was" }, { "start": 2359.9199999999996, "end": 2364.9199999999996, "text": " the cause of this and you have two different things that it could be either" }, { "start": 2364.9199999999996, "end": 2370.7599999999998, "text": " he got a hole in his sock or he dropped a hammer on his foot and the way you" }, { "start": 2370.7599999999998, "end": 2374.7599999999998, "text": " phrase it in this model is you would give the premise as the context and then" }, { "start": 2374.7599999999998, "end": 2379.24, "text": " you simply ask the model since it's a language model which of these two things" }, { "start": 2379.24, "end": 2386, "text": " is more probable to come and of course it is going to select the thing that" }, { "start": 2386, "end": 2393, "text": " kind of happened more often in the training data and you know broke his toe" }, { "start": 2393, "end": 2398.2799999999997, "text": " the cause of breaking his toe that is a hammer this is entirely conceivable that" }, { "start": 2398.2799999999997, "end": 2403.8399999999997, "text": " a language model would know this and with enough training data could sort of" }, { "start": 2403.8399999999997, "end": 2408.3999999999996, "text": " pull from the training data examples where hammer on foot and broke toe" }, { "start": 2408.4, "end": 2414.96, "text": " appear a bunch of times and hole in sock would be rather unrelated so as long as" }, { "start": 2414.96, "end": 2419.38, "text": " these questions are not too adversarial constructed specifically that a language" }, { "start": 2419.38, "end": 2424.28, "text": " model can't solve them there the model is going to perform pretty well right" }, { "start": 2424.28, "end": 2430, "text": " here right so it is very interesting to see that if you view this as" }, { "start": 2430, "end": 2434, "text": " interpolating the training data it suddenly makes sense where it's good and" }, { "start": 2434, "end": 2446.6, "text": " where it isn't good so this was the super glue and and NLI it is performing" }, { "start": 2446.6, "end": 2453.12, "text": " particularly poorly on NLI which is the ability to understand the relationship" }, { "start": 2453.12, "end": 2458.88, "text": " between two sentences right so where the model classifies whether the second" }, { "start": 2458.88, "end": 2462.68, "text": " sentence logically follows from the first contradicts the first or is" }, { "start": 2462.68, "end": 2469.96, "text": " possibly true neutral okay so this is the reasoning part of this model is not" }, { "start": 2469.96, "end": 2475.3199999999997, "text": " given it 
is simply recalling the training data and doing language modeling" }, { "start": 2475.3199999999997, "end": 2481.12, "text": " now they say oh we can test this we can test this with synthetic and qualitative" }, { "start": 2481.12, "end": 2485.8799999999997, "text": " tasks so they invent some own tasks since you know now it's pretty easy since" }, { "start": 2485.8799999999997, "end": 2489.3199999999997, "text": " you don't have to fine-tune the model you don't have to turn to generate an" }, { "start": 2489.32, "end": 2496.04, "text": " actual training set for a task so you can focus on generating a test set and" }, { "start": 2496.04, "end": 2504.1200000000003, "text": " and you know that's what they do so they do something like arithmetic so they say" }, { "start": 2504.1200000000003, "end": 2509.4, "text": " okay can we come up with a bunch of arithmetic tasks for example to digit" }, { "start": 2509.4, "end": 2514.92, "text": " addition so what the model would see would so this is an example and what the" }, { "start": 2514.92, "end": 2522.36, "text": " model would see is simply this as a context right here for the prompt and if" }, { "start": 2522.36, "end": 2530.04, "text": " you give it examples so if this is like one-shot learning you would input add" }, { "start": 2530.04, "end": 2535.08, "text": " the following numbers the following numbers as a string right then a new" }, { "start": 2535.08, "end": 2544.28, "text": " line and then you would give it one example like what is 11 plus 12 and with" }, { "start": 2544.28, "end": 2550.6800000000003, "text": " the answer together with the answer answer is I don't even know 23 and then" }, { "start": 2550.6800000000003, "end": 2559.76, "text": " you the prompt goes here so what is 48 plus 76 and then you ask what is the" }, { "start": 2559.76, "end": 2566.1600000000003, "text": " next word right here what is the next string token that comes here now the the" }, { "start": 2566.1600000000003, "end": 2571.48, "text": " inference here is that if the model manages to do this it can't simply" }, { "start": 2571.48, "end": 2575.56, "text": " because these are all strings the model basically has no clue how to do math" }, { "start": 2575.56, "end": 2579.64, "text": " these are numbers to the model these are just tokens as strings and the" }, { "start": 2579.64, "end": 2584, "text": " inference is if the model can do this it must have learned you know some kind of" }, { "start": 2584, "end": 2590.48, "text": " reasoning ability it must have learned to like perform some logic inside so" }, { "start": 2590.48, "end": 2594.12, "text": " they go into two-digit addition three-digit addition four-digit" }, { "start": 2594.12, "end": 2601.52, "text": " addition five-digit addition and even multiplication and subtraction and the" }, { "start": 2601.52, "end": 2609.6, "text": " results are right here so as you can see the lower parameter models they perform" }, { "start": 2609.6, "end": 2614.68, "text": " pretty poorly but as you go up the parameters the big model is performing" }, { "start": 2614.68, "end": 2621.48, "text": " really well in the two-digit range is performing also really well so accuracy" }, { "start": 2621.48, "end": 2627.4, "text": " of look that accuracy 80 90 percent in three-digit addition and subtraction but" }, { "start": 2627.4, "end": 2631.2, "text": " then if as soon as you get to the four-digit or the two-digit multiplication" }, { "start": 2631.2, "end": 2637.3, "text": " and so on the performance drops now they say that's 
because multiplication is" }, { "start": 2637.3, "end": 2642.32, "text": " harder and you know it's is logically very computationally you know but the" }, { "start": 2642.32, "end": 2647.08, "text": " two-digit addition and so on model has learned something about the world I" }, { "start": 2647.08, "end": 2658, "text": " disagree because so here's the because what you will do is you will simply and" }, { "start": 2658, "end": 2664.12, "text": " this you simply recall the training data so look at the two-digit addition with" }, { "start": 2664.12, "end": 2669.36, "text": " zero shot you already get 76 percent but with one shot you get 99 percent and" }, { "start": 2669.36, "end": 2675.84, "text": " with few shot you get a hundred percent so if you interpret this model as simply" }, { "start": 2675.84, "end": 2682.96, "text": " filtering the training data to pattern match then it makes a lot of sense that" }, { "start": 2682.96, "end": 2688.76, "text": " the one shot would like the examples here would give you a much improvement" }, { "start": 2688.76, "end": 2696.36, "text": " because if you have a bunch of examples where please add right add and then oh I" }, { "start": 2696.36, "end": 2703.08, "text": " erased our example again so you have like 48 plus 72 equals blah blah blah you" }, { "start": 2703.08, "end": 2709.44, "text": " have these of this if you give more and more example all of a sudden this looks" }, { "start": 2709.44, "end": 2715.84, "text": " like a table and they say we made sure that the strings here these particular" }, { "start": 2715.84, "end": 2719.88, "text": " strings were not in our training data right so these strings never appeared" }, { "start": 2719.88, "end": 2725.3199999999997, "text": " but I just have an issue with this deduplication stuff because what can" }, { "start": 2725.32, "end": 2734.56, "text": " appear actually is not the what can appear is a table and in table often you" }, { "start": 2734.56, "end": 2739.8, "text": " have columns and then another column will be the sum of these columns on the" }, { "start": 2739.8, "end": 2744.6800000000003, "text": " left and if you are asked to pattern match you'll naturally find websites" }, { "start": 2744.6800000000003, "end": 2748.56, "text": " right if you have a few of these examples you'll find websites where the" }, { "start": 2748.56, "end": 2754.1200000000003, "text": " columns exactly refer to these things and then you'll find the sum here and if" }, { "start": 2754.12, "end": 2759.8199999999997, "text": " you filter for websites that appear to match your scheme in the examples you'll" }, { "start": 2759.8199999999997, "end": 2764.56, "text": " find all the website with a table on them where the column one column is an" }, { "start": 2764.56, "end": 2770.7999999999997, "text": " addition of the others and I can actually do that so I went and I typed in" }, { "start": 2770.7999999999997, "end": 2779.24, "text": " just a bunch of these things so 98 plus 45 is 143 18 plus 55 is 73 I believe at" }, { "start": 2779.24, "end": 2784.8399999999997, "text": " least and I can find now Google makes it hard because they localize and" }, { "start": 2784.8399999999997, "end": 2789.68, "text": " everything but you can still find what you're going to find are tables and" }, { "start": 2789.68, "end": 2797.8799999999997, "text": " tables and tables and tables and now I actually went to DuckDuckGo to basically say" }, { "start": 2797.8799999999997, "end": 2802.9199999999996, "text": " you know they they don't you know 
really personalize it to me and what's the" }, { "start": 2802.9199999999996, "end": 2807.7599999999998, "text": " first thing I find when I type in just these numbers is math skip counting" }, { "start": 2807.76, "end": 2815.0800000000004, "text": " missing sequence number and a website where basically the answers are already" }, { "start": 2815.0800000000004, "end": 2820.1600000000003, "text": " given look at that so all the model has to do is recall this particular training" }, { "start": 2820.1600000000003, "end": 2826.6800000000003, "text": " example from the samples it already has right and it will it will basically be" }, { "start": 2826.6800000000003, "end": 2831.48, "text": " able in quotes to perform addition like this is financial data and another one" }, { "start": 2831.48, "end": 2837.1600000000003, "text": " where you have to subtract stuff right so I'm pretty sure all the model is doing" }, { "start": 2837.16, "end": 2843.92, "text": " here is interpolating the training data and that's also why it performs worse if" }, { "start": 2843.92, "end": 2850.8799999999997, "text": " if you up the digits because longer digit numbers are simply less frequent" }, { "start": 2850.8799999999997, "end": 2857.2, "text": " in the in in the training data multiplication is first of all less" }, { "start": 2857.2, "end": 2861.2799999999997, "text": " frequent and second of all it also results in larger numbers which are less" }, { "start": 2861.28, "end": 2870.6400000000003, "text": " frequent right so it explains a lot so I yeah I have my issues with people" }, { "start": 2870.6400000000003, "end": 2877, "text": " saying yeah this this shows some reasoning I don't think it does the same" }, { "start": 2877, "end": 2882.52, "text": " thing here with word scramble so in word scramble they have different things you" }, { "start": 2882.52, "end": 2889.6800000000003, "text": " see okay they they they look whether or not only 17 matches 0.8 percent of the" }, { "start": 2889.68, "end": 2893.7999999999997, "text": " math things are in their training data is like no you haven't searched well" }, { "start": 2893.7999999999997, "end": 2899.44, "text": " enough and the rest of their deduplication by the way is also pretty" }, { "start": 2899.44, "end": 2904.3599999999997, "text": " weak I would say because they just look for like 13 gram overlaps between the" }, { "start": 2904.3599999999997, "end": 2911, "text": " training data and the in the and their their test data so they have these word" }, { "start": 2911, "end": 2917.52, "text": " scrambling tasks where they basically scramble words and they ask the model to" }, { "start": 2917.52, "end": 2923.8, "text": " unscramble it for example this word is inevitably scrambled so they always you" }, { "start": 2923.8, "end": 2928.32, "text": " know they give like anagrams and they give random insertion into the word like" }, { "start": 2928.32, "end": 2935.92, "text": " this word right here or they reverse the word and they say so this I think this" }, { "start": 2935.92, "end": 2944.48, "text": " is the thing at the very beginning but if you can see right here also as the" }, { "start": 2944.48, "end": 2949.72, "text": " model goes up then this this improves and they also say well this means maybe" }, { "start": 2949.72, "end": 2956.2, "text": " some kind of reasoning but I think this is just it's learning the language and" }, { "start": 2956.2, "end": 2963.44, "text": " it's learning that you know the the words in in sorry that the letters make" }, { 
"start": 2963.44, "end": 2969.2400000000002, "text": " up a word and the letters correspond to word pieces or are associated with word" }, { "start": 2969.24, "end": 2975.12, "text": " pieces and it always learns to English a good task to check this would actually" }, { "start": 2975.12, "end": 2979.7999999999997, "text": " be to scramble words so if you unscramble words you always end up with" }, { "start": 2979.7999999999997, "end": 2983.4799999999996, "text": " an English word so all it has to do is basically check which word has the" }, { "start": 2983.4799999999996, "end": 2989, "text": " highest overlap in word pieces but you could do something like please scramble" }, { "start": 2989, "end": 2993.2, "text": " this word and then always count it correctly when any of the scrambling of" }, { "start": 2993.2, "end": 2999.22, "text": " the words so instead of going from this to this which you can simply solve by" }, { "start": 2999.22, "end": 3004.64, "text": " knowing the English language but you would have basically no clue what the" }, { "start": 3004.64, "end": 3008.52, "text": " task is that you don't have to understand that as a model you could ask" }, { "start": 3008.52, "end": 3013.04, "text": " it to go from this to this given a few examples right then it would really need" }, { "start": 3013.04, "end": 3018.3599999999997, "text": " to understand what the task is that it's supposed to actually scramble a word and" }, { "start": 3018.3599999999997, "end": 3023.6, "text": " would need to learn that from its context given examples but they as far" }, { "start": 3023.6, "end": 3030.52, "text": " as I see they don't do that and again I think it's recalling the the training" }, { "start": 3030.52, "end": 3037.68, "text": " data the this is a sat analogy so the SAT or this test that the US high" }, { "start": 3037.68, "end": 3044.44, "text": " schoolers take to get into college and the the this they say a typical example" }, { "start": 3044.44, "end": 3052.48, "text": " this is dying on me no it's scrolled okay a typical example is the following" }, { "start": 3052.48, "end": 3059.92, "text": " this I find I find pretty hilarious all Dacius is to boldness as sanctimonious" }, { "start": 3059.92, "end": 3064.96, "text": " is to hypocrisy anonymous is to identity remorseful is to misdeed" }, { "start": 3064.96, "end": 3069.96, "text": " deleterious is to result or impressionable is to temptation this is a" }, { "start": 3069.96, "end": 3075.4, "text": " as as a okay I'm not a native speaker but this is a hard question right and" }, { "start": 3075.4, "end": 3080, "text": " you have to you know see that these these high schoolers they're stressed" }, { "start": 3080, "end": 3083.8, "text": " like this is very much a time-based test so you need to make a decision quickly" }, { "start": 3083.8, "end": 3087.92, "text": " while the model of course is basically able to sift through its entire training" }, { "start": 3087.92, "end": 3092.8, "text": " data in the time it takes the GPUs to perform inference but it's still funny" }, { "start": 3092.8, "end": 3101.24, "text": " that GPT-3 achieves 50 65 percent in the few shot setting and 59 percent in the" }, { "start": 3101.24, "end": 3106.24, "text": " one shot setting 53 percent is zero shot setting whereas the average score among" }, { "start": 3106.24, "end": 3110.9199999999996, "text": " college applicants was 57 percent so it outperforms the average college applicant" }, { "start": 3110.9199999999996, "end": 3114.8399999999997, "text": " 
it's pretty funny but you would expect the language model to have a pretty good" }, { "start": 3114.8399999999997, "end": 3120.56, "text": " grasp of these kind of synonyms and relations between words because these" }, { "start": 3120.56, "end": 3127.68, "text": " are just absolutely statistical associations between words so yeah this" }, { "start": 3127.68, "end": 3132.6, "text": " I found this to be pretty pretty funny and the last thing and this is what" }, { "start": 3132.6, "end": 3138.92, "text": " everyone's freaking out over is this news article generation where basically" }, { "start": 3138.92, "end": 3146.72, "text": " they give it the beginning of a few of a news article and then they let humans" }, { "start": 3146.72, "end": 3152.04, "text": " decide whether or not the news article is written by a machine or by a human" }, { "start": 3152.04, "end": 3159.44, "text": " and they say here by contrast mean human accuracy at detecting articles that were" }, { "start": 3159.44, "end": 3165.2000000000003, "text": " produced by the 175 billion parameter model it was barely above chance at 52" }, { "start": 3165.2000000000003, "end": 3171.36, "text": " percent human abilities to detect model generated text appear to decrease as" }, { "start": 3171.36, "end": 3176.4, "text": " model size increases there appears to be a trend towards chance accuracy with" }, { "start": 3176.4, "end": 3184.32, "text": " model size and human detection of GPT-3 is close to chance okay so what they do" }, { "start": 3184.32, "end": 3190.1600000000003, "text": " is they give and they have some examples right here they give the model the" }, { "start": 3190.1600000000003, "end": 3196.44, "text": " following input the title the subtitle of an article and then this word article" }, { "start": 3196.44, "end": 3200.6400000000003, "text": " and the model is supposed to complete the rest of the article right here and" }, { "start": 3200.6400000000003, "end": 3205.36, "text": " you can also you know give do this in a few shot setting such that the model" }, { "start": 3205.36, "end": 3211.32, "text": " basically knows that it's if you give it a few a few examples the model knows it" }, { "start": 3211.32, "end": 3218.44, "text": " is supposed to produce a news article right okay so there are two two ways" }, { "start": 3218.44, "end": 3223.96, "text": " that you can think of this first way the model has learned the language so well" }, { "start": 3223.96, "end": 3228.88, "text": " and it writes code it has learned to write coherent language and so on it's" }, { "start": 3228.88, "end": 3235.32, "text": " learned to reason keep context and blah blah blah okay second way the model sees" }, { "start": 3235.32, "end": 3242.34, "text": " this thing right here it sees the few you know K few shot examples that it has" }, { "start": 3242.34, "end": 3248.4, "text": " before in the context it will take them filter the training data to in this case" }, { "start": 3248.4, "end": 3252.0800000000004, "text": " it just sees news articles so do just news articles it will take this thing" }, { "start": 3252.0800000000004, "end": 3256.44, "text": " filter the training data even more to just the news articles that pertain" }, { "start": 3256.44, "end": 3262.96, "text": " largely to topics or words that appear in here and then lastly will interpolate" }, { "start": 3262.96, "end": 3267.52, "text": " the few training examples to produce this thing now they argue that this" }, { "start": 3267.52, "end": 3273.4, "text": " isn't really 
possible because they have actually checked that this news article" }, { "start": 3273.4, "end": 3282.12, "text": " is not in the training data but I have simply gone and taken a I've really" }, { "start": 3282.12, "end": 3286.28, "text": " taken a random substring here I've taken this substring voted to strengthen the" }, { "start": 3286.28, "end": 3293.6400000000003, "text": " ban on the ordination of just this substring and I've put it into Google and" }, { "start": 3293.6400000000003, "end": 3300.88, "text": " Babidi bah I find a book with voted to strengthen prohibitions to ban LGBTQ" }, { "start": 3300.88, "end": 3305.76, "text": " people from being ordained and ministers so it's you know I find this it's not" }, { "start": 3305.76, "end": 3310.5, "text": " the same article but it's talking about the same incident the article talks" }, { "start": 3310.5, "end": 3315.1600000000003, "text": " about and it is using the same language probably read the article and the" }, { "start": 3315.16, "end": 3319.92, "text": " author is like I can't really you know copy paste that would be you know not" }, { "start": 3319.92, "end": 3325.72, "text": " really cool so I'll just kind of you know write it in my own words but largely" }, { "start": 3325.72, "end": 3331.72, "text": " the same thing the Associated Press here also a different article you know see" }, { "start": 3331.72, "end": 3338.8399999999997, "text": " different title than this one right here but about the same thing and also with" }, { "start": 3338.8399999999997, "end": 3343.48, "text": " the same language right here voted to stay to strengthen the faith's divisive" }, { "start": 3343.48, "end": 3350.2400000000002, "text": " bans on same-sex marriage and ordination of LGBT clergy and generally" }, { "start": 3350.2400000000002, "end": 3355.6, "text": " so the argument this article wasn't in the training data is just not really" }, { "start": 3355.6, "end": 3364.4, "text": " something I buy in this in this case so I think it the article as such wasn't" }, { "start": 3364.4, "end": 3369, "text": " there but many articles about this topics were and I think this will just" }, { "start": 3369, "end": 3374.32, "text": " interpolate these now they say this was the hardest article for the humans to" }, { "start": 3374.32, "end": 3382.24, "text": " decide and this here was the easiest so it's it says I don't know" }, { "start": 3382.24, "end": 3387.12, "text": " Star's Tux Promise Draws Megyn Kelly's Sarcasm and it says a year ago Joaquin" }, { "start": 3387.12, "end": 3389.76, "text": " Phoenix made headlines when he appeared on the red carpet at the Golden Globes" }, { "start": 3389.76, "end": 3393.36, "text": " wearing a tuxedo with a paper bag over his head that read I'm a shape-shifter" }, { "start": 3393.36, "end": 3396.88, "text": " blah blah you you would guess that Joaquin Phoenix would do something like" }, { "start": 3396.88, "end": 3401.2400000000002, "text": " this but they say their human raters were US based right and you see right" }, { "start": 3401.2400000000002, "end": 3405.08, "text": " here it says Megyn Kelly was not impressed and she let him have it on the" }, { "start": 3405.08, "end": 3410.36, "text": " Tonight Show now the Tonight Show is not where Megyn Kelly is and US based" }, { "start": 3410.36, "end": 3415.6800000000003, "text": " people would I guess know something like this and would immediately feel like" }, { "start": 3415.6800000000003, "end": 3426, "text": " this is wrong so I think this thing is 
interpolated from is interpolated from a" }, { "start": 3426, "end": 3431.48, "text": " bunch of different news articles about this and the interpolation just let it" }, { "start": 3431.48, "end": 3436.4, "text": " like made it such that this person is on this show which that they aren't and the" }, { "start": 3436.4, "end": 3440.76, "text": " humans noticed right but it doesn't change the fact that it probably just" }, { "start": 3440.76, "end": 3445, "text": " went to the training data filtered a bunch of articles about these words and" }, { "start": 3445, "end": 3449.08, "text": " then interpolated like mash them together it is a good language model" }, { "start": 3449.08, "end": 3453.48, "text": " right it can grammar it's very good at grammar so we can interpolate different" }, { "start": 3453.48, "end": 3460.8, "text": " passages of text and I feel that the the really really useful application of this" }, { "start": 3460.8, "end": 3465.2, "text": " will be sort of as a search engine as a fuzzy search engine so now I can like" }, { "start": 3465.2, "end": 3471.6, "text": " input for example my my machine learning research ideas and what will output will" }, { "start": 3471.6, "end": 3475.4, "text": " be sort of an abstract of a paper that is kind of a mush together of other" }, { "start": 3475.4, "end": 3481.28, "text": " papers on the same thing and that that you know you can think of many" }, { "start": 3481.28, "end": 3487.2400000000002, "text": " applications I don't think we have built something really intelligent here and" }, { "start": 3487.2400000000002, "end": 3492.96, "text": " what this is this is though is pretty cool they they give examples like this" }, { "start": 3492.96, "end": 3497.28, "text": " here where they make up a word and then ask the model to use the word in a" }, { "start": 3497.28, "end": 3503.6000000000004, "text": " sentence so to scree is something sorry to screeg something is to swing a" }, { "start": 3503.6000000000004, "end": 3508, "text": " sword at it an example of a sentence that uses the word screeg is and of" }, { "start": 3508, "end": 3511.96, "text": " course the model what's the model is going to do is it's going to take this" }, { "start": 3511.96, "end": 3517.2, "text": " it's going to filter the training data for all of the instances where sort of" }, { "start": 3517.2, "end": 3521.32, "text": " this construction appears like an example of using the words which is" }, { "start": 3521.32, "end": 3526.2, "text": " mostly so dictionaries then it's going to not know that word but it's you can" }, { "start": 3526.2, "end": 3531.56, "text": " interpolate it from interpolated from all this data right here and the so the" }, { "start": 3531.56, "end": 3537, "text": " cool thing is it actually conjugates the word we screeghed at each other for several" }, { "start": 3537, "end": 3543.8, "text": " minutes and then we went outside and ate ice cream so you can see how this is" }, { "start": 3543.8, "end": 3548.4, "text": " comes to be but I think it would really be fun to have a model that tells us" }, { "start": 3548.4, "end": 3554.56, "text": " which training data samples were used here it can also correct English grammar" }, { "start": 3554.56, "end": 3562.84, "text": " which is pretty obvious though again it can never correct so the the input" }, { "start": 3562.84, "end": 3569.08, "text": " always here is poor English good English poor English good English poor good poor" }, { "start": 3569.08, "end": 3574.4, "text": " English and then good English 
and that's what the model is asked to to output and" }, { "start": 3574.4, "end": 3581.92, "text": " I'm actually not sure pretty sure this here shouldn't be bold I'm fairly sure" }, { "start": 3581.92, "end": 3585.28, "text": " this shouldn't be bold this is given to the model the model is only asked to" }, { "start": 3585.28, "end": 3593.6800000000003, "text": " produce this otherwise I'd be I'd be actually impressed but yes nothing task" }, { "start": 3593.6800000000003, "end": 3597.4, "text": " specific is provided aside from the examples from few examples as" }, { "start": 3597.4, "end": 3602.96, "text": " conditioning and the poor English input good English output framing so the good" }, { "start": 3602.96, "end": 3607.2400000000002, "text": " English output thing here should not be in boldface authors if you're listening" }, { "start": 3607.2400000000002, "end": 3615.1200000000003, "text": " this should not be bold thank you okay but again it is always as you can" }, { "start": 3615.12, "end": 3619.7999999999997, "text": " see it's to good English it's always the target is the good English whereas" }, { "start": 3619.7999999999997, "end": 3625.52, "text": " if the model really understood the task it should also be able to do the inverse" }, { "start": 3625.52, "end": 3629.2, "text": " it should be able to to produce something poor from something good" }, { "start": 3629.2, "end": 3633.7599999999998, "text": " because then you eliminate the fact that it's just a good English language model" }, { "start": 3633.7599999999998, "end": 3640.16, "text": " right because it can basically produce something like this without having a" }, { "start": 3640.16, "end": 3645.48, "text": " clue what the task is it will simply you condition on this input and it will" }, { "start": 3645.48, "end": 3650.8399999999997, "text": " simply output this sentence because it's very likely because it's already almost" }, { "start": 3650.8399999999997, "end": 3656.16, "text": " here and it will output it in better English because it's a good language" }, { "start": 3656.16, "end": 3665.2799999999997, "text": " model right it's it's a good English language model so yeah that so they" }, { "start": 3665.28, "end": 3670.6000000000004, "text": " measure this overfitting the degree to which their training to which their test" }, { "start": 3670.6000000000004, "end": 3676.0400000000004, "text": " data is in this common crawl thing and they say they have a conservative bound" }, { "start": 3676.0400000000004, "end": 3681.0400000000004, "text": " on how many percent of the data in the data set are clean and as you can see" }, { "start": 3681.0400000000004, "end": 3686.2000000000003, "text": " here they measure then how much the performance differs to to up or down if" }, { "start": 3686.2000000000003, "end": 3691.36, "text": " you only evaluate on the clean portion of this data set but again their" }, { "start": 3691.36, "end": 3696.1200000000003, "text": " deduplication is so weak they do like n-gram deduplication whereas I think you" }, { "start": 3696.1200000000003, "end": 3700.6800000000003, "text": " should really like in the news articles you should really do much more fuzzy" }, { "start": 3700.6800000000003, "end": 3707.04, "text": " deduplication much more of a meaning deduplication if you then want to argue" }, { "start": 3707.04, "end": 3710.6400000000003, "text": " that the model has learned to reason like if you simply want to argue that" }, { "start": 3710.6400000000003, "end": 3716.88, "text": " the model is 
a good language model fine right but yeah and also look at this" }, { "start": 3716.88, "end": 3722.88, "text": " like I would expect of a data set a test data set if you know if you have like a" }, { "start": 3722.88, "end": 3726.6400000000003, "text": " natural questions data set is constructed from Wikipedia pages and you" }, { "start": 3726.6400000000003, "end": 3732.2000000000003, "text": " have the Wikipedia page in there you can either either the entire thing is clean" }, { "start": 3732.2000000000003, "end": 3738.48, "text": " or none of it is clean and also this Winograd data set if this data set" }, { "start": 3738.48, "end": 3742.1600000000003, "text": " somehow leaked into the common crawl corpus either the entire thing is clean" }, { "start": 3742.16, "end": 3747.08, "text": " or none of it is clean I just have kind of problems with the fact that there are" }, { "start": 3747.08, "end": 3756.3599999999997, "text": " so many in between things right here and yeah so I'm not I'm not convinced here" }, { "start": 3756.3599999999997, "end": 3763.3999999999996, "text": " that this deduplication I still think it's a cool thing but I don't I think" }, { "start": 3763.3999999999996, "end": 3769, "text": " it's mostly a training data filter and interpolator rather than actual" }, { "start": 3769, "end": 3774.44, "text": " reasoning and they go through some of the limitations here and the broader" }, { "start": 3774.44, "end": 3780.84, "text": " input this broader impact statement is like five pages long and yeah okay you" }, { "start": 3780.84, "end": 3789.04, "text": " can do you can you know bad people take the model to do bad things okay and" }, { "start": 3789.04, "end": 3794.44, "text": " that's pretty much it so what I appreciate here is at the bottom they" }, { "start": 3794.44, "end": 3799.7200000000003, "text": " have basically all the results but also a lot of task descriptions like how" }, { "start": 3799.7200000000003, "end": 3805.4, "text": " they framed each task more outputs and they give more outputs on their website" }, { "start": 3805.4, "end": 3809.32, "text": " right so you can see here how each of the tasks was framed where you always" }, { "start": 3809.32, "end": 3814.04, "text": " have this is what this here is what the model sees and then this is what it's" }, { "start": 3814.04, "end": 3821.92, "text": " asked to produce right so you have this for for all many of these things and so" }, { "start": 3821.92, "end": 3828.4, "text": " on SQuAD you have this context and the question okay so the the context is" }, { "start": 3828.4, "end": 3833.56, "text": " actually in there I didn't know that but you have the context and the question" }, { "start": 3833.56, "end": 3838.8, "text": " and the model is asked to complete something right here so you can look at" }, { "start": 3838.8, "end": 3843.64, "text": " how the model sees tasks and maybe you can evaluate for yourself how you think" }, { "start": 3843.64, "end": 3850, "text": " how difficult you think these tasks are all right I hope this was informative it" }, { "start": 3850, "end": 3854.6, "text": " is a long paper therefore it is a long video if you're still here and haven't" }, { "start": 3854.6, "end": 3862.32, "text": " subscribed yet do maybe if you like this if you want more leave it a like tell me" }, { "start": 3862.32, "end": 3867.04, "text": " in the comments what you think of it whether you think it's actually AGI or" }, { "start": 3867.04, "end": 3881.64, "text": " not and I'll see you next 
time bye bye" } ]
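To make the arithmetic probing discussed in the transcript above concrete, here is a minimal sketch of how such few-shot prompts can be assembled. The exact formatting OpenAI used differs in detail; the layout below (a task description, k solved examples, then the open question) is an illustration of the idea, not the paper's verbatim template, and `addition_prompt` is a hypothetical helper name.

```python
import random

def addition_prompt(k: int, a: int, b: int, seed: int = 0) -> str:
    """Build a k-shot prompt for two-digit addition: a task description,
    k solved examples, then the open question the model must complete."""
    rng = random.Random(seed)
    lines = ["Add the following numbers."]
    for _ in range(k):
        x, y = rng.randint(10, 99), rng.randint(10, 99)
        lines.append(f"Q: What is {x} plus {y}? A: {x + y}")
    lines.append(f"Q: What is {a} plus {b}? A:")
    return "\n".join(lines)

print(addition_prompt(0, 48, 76))   # zero-shot: no solved examples
print(addition_prompt(3, 48, 76))   # few-shot: the context starts to look like a table
```

The contrast the transcript draws is exactly this one: the more solved examples the context contains, the more it resembles the addition tables that occur verbatim on the web, which favors the retrieval-and-pattern-matching explanation over genuine arithmetic.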
9Kec_7WFyp0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Growing Neural Cellular Automata
[ "Science & Technology" ]
[ "machine learning", "deep learning", "cellular automata", "game of life", "conway", "google", "distill", "interactive", "colab", "local", "global", "update" ]
The Game of Life on steroids! This model learns to grow complex patterns in an entirely local way. Each cell is trained to listen to its neighbors and update itself in a way such that, collectively, an overall goal is reached. Fascinating and interactive! https://distill.pub/2020/growing-ca/ https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life Abstract: Most multicellular organisms begin their life as a single egg cell - a single cell whose progeny reliably self-assemble into highly complex anatomies with many organs and tissues in precisely the same arrangement each time. The ability to build their own bodies is probably the most fundamental skill every living creature possesses. Morphogenesis (the process of an organism’s shape development) is one of the most striking examples of a phenomenon called self-organisation. Cells, the tiny building blocks of bodies, communicate with their neighbors to decide the shape of organs and body plans, where to grow each organ, how to interconnect them, and when to eventually stop. Understanding the interplay of the emergence of complex outcomes from simple rules and homeostatic feedback loops is an active area of research. What is clear is that evolution has learned to exploit the laws of physics and computation to implement the highly robust morphogenetic software that runs on genome-encoded cellular hardware. This process is extremely robust to perturbations. Even when the organism is fully developed, some species still have the capability to repair damage - a process known as regeneration. Some creatures, such as salamanders, can fully regenerate vital organs, limbs, eyes, or even parts of the brain! Morphogenesis is a surprisingly adaptive process. Sometimes even a very atypical development process can result in a viable organism - for example, when an early mammalian embryo is cut in two, each half will form a complete individual - monozygotic twins! The biggest puzzle in this field is the question of how the cell collective knows what to build and when to stop. The sciences of genomics and stem cell biology are only part of the puzzle, as they explain the distribution of specific components in each cell, and the establishment of different types of cells. While we know of many genes that are required for the process of regeneration, we still do not know the algorithm that is sufficient for cells to know how to build or remodel complex organs to a very specific anatomical end-goal. Thus, one major lynch-pin of future work in biomedicine is the discovery of the process by which large-scale anatomy is specified within cell collectives, and how we can rewrite this information to have rational control of growth and form. It is also becoming clear that the software of life possesses numerous modules or subroutines, such as “build an eye here”, which can be activated with simple signal triggers. Discovery of such subroutines and a mapping out of the developmental logic is a new field at the intersection of developmental biology and computer science. An important next step is to try to formulate computational models of this process, both to enrich the conceptual toolkit of biologists and to help translate the discoveries of biology into better robotics and computational technology. Imagine if we could design systems of the same plasticity and robustness as biological life: structures and machines that could grow and repair themselves. 
Such technology would transform the current efforts in regenerative medicine, where scientists and clinicians seek to discover the inputs or stimuli that could cause cells in the body to build structures on demand as needed. To help crack the puzzle of the morphogenetic code, and also exploit the insights of biology to create self-repairing systems in real life, we try to replicate some of the desired properties in an in silico experiment. Authors: Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson, Michael Levin Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. Today I thought we would be looking at growing neural cellular automata, which is an article on distill.pub, which I found pretty neat. So this is kind of an interactive article. If you don't know distill.pub, check it out. It is a cool new concept as an alternative to the classical journals or the conference system. So what it allows you to do is to kind of write articles that are a bit more interactive, a bit more engaging and don't have the... There's no PDFs, there's no pages, there are animations and so on. So I thought we'd be looking at this article today, which is kind of a growing neural cellular automata. So if you don't know what cellular automata are, this is a very kind of old concept. The most famous one is called the game of life, where you have these cells. Here you can see every pixel is a cell and they follow some kind of update rule. And usually it's the update rule, something like if my neighbor is alive, I'm going to be alive as well in the next time step. Or if enough neighbors are alive and if only very few neighbors are alive, I'm going to die. So this gives rise to these kinds of patterns. And here the same is done with color. And the update rules are a bit more complicated. So basically, ah, traveler. Oh, nice. Okay. So in the game of life, if you play it, the most prestigious thing to get is are these kind of travelers. I've not... This is the first time I've managed to do this in this thing. So what does it do? So each pixel here is kind of an autonomous thing that is only allowed to look at its neighbors in order to decide whether or not in the next time step it is going to be alive. Look, it's like incorporating again. So each cell looks at its neighbors and then decides what its next state will be. And here it's not only alive or dead. Dead would be white and alive would be anything else. But it is also, I guess this white is... It is also the color. So each cell decides on what color it should have. And then this is a live thing. So it kind of reproduces, right? You can see if I start it new. If you double click here, it grows from somewhere else. And this is completely local. So these cells really only look at their neighbors. That's the special part, right? They don't look at the global structure. It's not like a GAN that can look at the entire picture and decide what's still missing. What these can also do if you destroy part of it, they can kind of grow back just, again, just out of local update rules at the level of the individual cells and their neighbors. They're trained to do these big structures. So let's look at how they do it. So basically, here's how they model a cell. And let's go over here. So each cell, as I said, is made up of 16 channels. And here it's modeled as three by three, but I think each cell is really one pixel. And each cell is allowed to look at its eight neighbors, right? So each cell is allowed to look at its eight neighbors across 16 different channels. And the 16 channels here mean the first three are RGB. So this is the actual color that is seen. Then there is an alive or dead channel. So what they call an alpha channel. So if this channel is high, the cell is considered alive. Otherwise, it is considered dead and not part of the pattern. So a cell can come alive or die, depending on its neighbors. And then the rest, the rest 12 channels are what they call hidden channels. So the cell is allowed to encode some hidden state there. 
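To make that state layout concrete, here is a small NumPy sketch. The channel ordering (RGB first, then alpha, then the 12 hidden channels) follows the description above; the grid size and the exact seed initialization are assumptions for illustration.

```python
import numpy as np

H = W = 64
C = 16                     # 0..2: RGB, 3: alpha (alive/dead), 4..15: hidden state

# Everything starts dead except a single seed cell in the middle.
state = np.zeros((H, W, C), dtype=np.float32)
state[H // 2, W // 2, 3:] = 1.0   # seed: alpha and hidden channels set to one

alpha = state[..., 3]
alive = alpha > 0.1        # the liveness rule discussed further below
print(alive.sum())         # -> 1: only the seed cell is alive
```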
So there's each cell is represented by the 16 dimensional vector, which is not much right. And then each cell is allowed to look at three things. So from the bottom here, it's allowed to look at its own state, so at its own 16 dimensional vectors, and it is allowed to look at its neighbors. And it does this by doing a convolution with a sobel filter. And the sobel filter is simply a fixed filter that you do a three by three convolution with, as you can see here, is basically a gradient filter. So basically measures the difference between what's to the left of the cell and what's to the right of the cell. And here in the sobel y direction, the same in the y direction. So it's basically allowed to look at gradients in states of its neighbors. This is modeled after real cells kind of looking at chemical gradients in their neighborhoods. So this is all this, this is all that the cell has to decide what it's supposed to do next, right. And what we want is we want that each individual cell only looking at its neighbors produces in total, they will produce these kind of very complex pattern. So the update rule is the following, you convolute with the sobel filters and you take the cell identity, you put this all into a vector, you put it through a very, very small neural network. So this is one dense layer, one relu, and then another dense layer to get the next 16 dimensional vector, which is the next state. And that defines your update rules. That doesn't really define the next state that defines the Delta to the next state, kind of like a residual neural network. So basically, which cells need to come alive in the next time step, which cells need to die and how are they to change their colors, right. And then you get the output of the next step, right. So that's, that's basically the entire thing. So all that is learned here is the the update rule of the neural network, right. So basically, the neural network decides, it looks at a cell and its neighbors and decides what the information in the cell in the next step should be, right. And you do this for multiple time steps. That's I actually want to go down here, you do this for multiple time steps, the initial state is simply one cell that is alive here in the middle, everything else is dead, this cell is alive and black, you do this for many steps, right. And then at some point, you get an output. And you compare the output to your desired output, you compute a loss that is differentiable. And because your update rule is differentiable, and your loss is differentiable, you can backprop through time to the original pattern here. And you can basically learn this update rule by backproping through time. This is a bit like an LSTM. And if you see in the architecture here, I think this residual connection is really the key to making this work over time. Because usually, I would not expect something like this to easily emerge over time because you have the problem of vanishing and exploding gradients. And you have no way of mitigating this problem here, this problem here in this simple neural network. But in any case, they backprop through time here. So each of these update steps, which again, this isn't one neural network with many layers, this is the same neural network applied over and over and over and over again, and then there is a loss computed. So basically, the gradients will accumulate over these steps, and they tell the network what it needs to adjust to go from this one single black pixel to this final desired state. 
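As a rough sketch of this update step in NumPy: fixed Sobel filters provide the perception, a tiny two-layer network (the only learned part) is applied identically at every pixel, and its output is added residually. The hidden width of 128, the normalization of the Sobel kernels, and the simplified alive masking are assumptions; the article additionally fires each cell stochastically, which is omitted here.

```python
import numpy as np
from scipy.signal import convolve2d

C = 16
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32) / 8.0
SOBEL_Y = SOBEL_X.T

def perceive(state):
    """Each cell sees its own 16 channels plus the Sobel-x and Sobel-y
    gradients of every channel in its 3x3 neighborhood -> 48 values."""
    gx = np.stack([convolve2d(state[..., c], SOBEL_X, mode="same") for c in range(C)], -1)
    gy = np.stack([convolve2d(state[..., c], SOBEL_Y, mode="same") for c in range(C)], -1)
    return np.concatenate([state, gx, gy], axis=-1)

# The tiny per-pixel network: the same weights are shared by every cell.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, size=(3 * C, 128)).astype(np.float32)
b1 = np.zeros(128, dtype=np.float32)
W2 = np.zeros((128, C), dtype=np.float32)   # zero init: the first updates are no-ops

def update(state):
    p = perceive(state)                  # (H, W, 48)
    h = np.maximum(p @ W1 + b1, 0.0)     # dense layer + ReLU, applied per pixel
    ds = h @ W2                          # (H, W, 16): a *delta*, not the new state
    new_state = state + ds               # residual update
    # A cell stays alive only if it or a 3x3 neighbor has alpha > 0.1.
    alive = convolve2d((new_state[..., 3] > 0.1).astype(np.float32),
                       np.ones((3, 3), dtype=np.float32), mode="same") > 0
    return new_state * alive[..., None]
```

In training, `update` is applied for a few dozen steps, the visible channels of the result are compared to the target image, and the loss is backpropagated through all steps into `W1`, `b1`, `W2` — exactly the backprop-through-time setup described above.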
If you do this over and over again, you learn things, you learn an update rule that will give rise to that pattern, hopefully. Now, here is a kind of an illustration of this alive and dead thing. So what they do is they consider cells that have an alpha channel, one of these channels called alpha, they have an alpha channel above 0.1, it's considered alive, right, and part of the loss. Then the neighbors, the neighbors of these cells that are below 0.1, but are neighboring a cell that is mature, alive, they're called growing, they're also part of the loss, right. So simply by being close to something, someone that is alive, a cell that is alive, you are considered alive as well, but your neighbors aren't, right, only the neighbors of really alive. So there's really alive, kind of alive, and then there is dead. And dead, the meaning of dead here, the gray ones, is they're not, they won't become part of the pattern, part of the loss, right, they're dead. All right, so what will this get you initially? So here is an animation, if they train this just like that, just backprop through time with a target pattern, and then they let it run, you see these patterns actually emerge. So that's pretty cool. But then if you let them run for longer than they've been trained, you basically have no guarantees on what's going to happen. Like these update rules are simply trained to achieve the pattern within a certain number of steps, right. If you run for more than that, and apply the update rules for longer than that, you you have like there's little like you have no guarantee what's going to happen, these update rules will simply continue, as you can see here and produce some weird stuff. So they are trying to fix this. So what they do is basically they train for longer, but they do it in a in a kind of different way. So at each at each step of training, and as a step, I mean, a batch over these number of time steps. So so they sample a batch, initially, it's just all black pixels, right, as we see above. And then they optimize for these number of time steps. And then they're at the end. So what they do is they don't always start from the black pixel. But sometimes they also start from a previously seen end state. So basically, they take the end state of a previous training run, and then they just continue from that instead of starting from the initial point. And you see after some training, they get better and better. So initially, you see the thing on the left here. The thing on the left here being a starting state. And then it progressively gets better. So basically, by starting from end states of other things, you learn to. So if the end state of the other thing isn't very good, you basically learn to go to the good pattern to the pattern you want. But of course, over time, there's going to be more and more of these end states that you train from that are already pretty close to the pattern you want. And so then what that means is you learn to reproduce the pattern. So you are already at a good point, you learn to stay at that good point. And then that enables you to basically learn update rules that if you're not at the pattern you want, they go towards the pattern you want. But also if you run for longer, if you are already are at the pattern you want, then you stay at the pattern you want. So that's what we basically saw in the very initial demonstration where you could, this is a live demonstration like this thing up here, this is a live, this is running, right.
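The bookkeeping for this "sometimes start from a previous end state" protocol is a sample pool. Here is a minimal runnable sketch of just that logic; the pool and batch sizes are illustrative, and `rollout_and_train` is a placeholder standing in for unrolling the CA and backpropagating the loss:

```python
import numpy as np

H, W, C = 64, 64, 16
POOL_SIZE, BATCH, STEPS = 1024, 8, 100

def make_seed():
    s = np.zeros((H, W, C), dtype=np.float32)
    s[H // 2, W // 2, 3:] = 1.0
    return s

def rollout_and_train(batch):
    # Placeholder: unroll the CA for a few dozen steps, compute the pixel
    # loss against the target, backprop through time, return the end states.
    return batch

rng = np.random.default_rng(0)
pool = [make_seed() for _ in range(POOL_SIZE)]   # initially: all fresh seeds

for step in range(STEPS):
    idx = rng.choice(POOL_SIZE, size=BATCH, replace=False)
    batch = [pool[i].copy() for i in idx]
    batch[0] = make_seed()            # keep at least one fresh seed per batch
    end_states = rollout_and_train(batch)
    for i, s in zip(idx, end_states): # write end states back into the pool so
        pool[i] = s                   # later batches start from grown patterns
```

(In the article the sample replaced by a fresh seed is chosen as the highest-loss one; always taking the first is a simplification here.)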
And you see the update rules data, they are continuously applied, they basically stay at the pattern where they are. And that is also that is learned because of this protocol that you train from end states as well as from beginning states. So the next thing is what I'm doing here is I can destroy part of the pattern, and it will kind of regrow right you see that here. So this is also a part so for now we've also only learned to go from a single pixel like here from a black pixel to the pattern. But now we also want to learn to go to regrow when destroyed because that is, you can see this is modeled after kind of live tissue. So here you can see the parts are cut away and then the cells try to regrow. So this is I think initially, this is initially when you just train them, they exhibit some of that property, but not like very satisfying in some cases. So what they do is they train not only do they use end states, like we saw before, but also some of their training samples are simply the pattern destroyed a bit. So as you can see in some of these samples, like these here, they in each sample, they kind of cut out part of the sample and they train the update rules to regrow that part that gives you that now gives you the ability to if you damage to pretty consistently regrow the pattern, as you can see here. And they also train for rotation, which is non trivial if you have these kind of pixel based, pixel based models. But I want to jump that because I want to keep it kind of short here. So the entire goal of this is to kind of model the behavior of natural cells, because the natural cells, they don't have an overarching view, they only have the view of their neighbors, right, and they are able to grow into very complex structures. I invite you to give this a try. The distill.pub journal is very cool. It's very interactive, you can play around with it, you can reproduce things in a Colab. And yeah, shout out to the authors here, Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson and Michael Levin. Yeah, that was it from me. Thanks for watching and bye bye.
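The "pattern destroyed a bit" training samples mentioned above can be sketched as zeroing out a random circular patch of a grown pattern before it is fed back into training. The radius range and the circular shape are assumptions for illustration:

```python
import numpy as np

def damage(state, rng, min_r=3, max_r=12):
    """Zero all 16 channels inside a random circle, mimicking the
    cut-out training samples used for the regeneration experiments."""
    h, w, _ = state.shape
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    r = rng.integers(min_r, max_r + 1)
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    out = state.copy()
    out[mask] = 0.0        # damaged cells are fully dead: all channels zeroed
    return out

rng = np.random.default_rng(0)
grown = np.ones((64, 64, 16), dtype=np.float32)   # stand-in for a grown pattern
print(damage(grown, rng)[..., 3].min())           # -> 0.0: a hole was punched
```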
[ { "start": 0, "end": 8.16, "text": " Hi there. Today I thought we would be looking at growing neural cellular automata, which" }, { "start": 8.16, "end": 16.48, "text": " is an article on distill.pub, which I found pretty neat. So this is kind of an interactive" }, { "start": 16.48, "end": 23, "text": " article. If you don't know distill.pub, check it out. It is a cool new concept as an alternative" }, { "start": 23, "end": 31.32, "text": " to the classical journals or the conference system. So what it allows you to do is to" }, { "start": 31.32, "end": 41.519999999999996, "text": " kind of write articles that are a bit more interactive, a bit more engaging and don't" }, { "start": 41.519999999999996, "end": 48, "text": " have the... There's no PDFs, there's no pages, there are animations and so on. So I thought" }, { "start": 48, "end": 55.12, "text": " we'd be looking at this article today, which is kind of a growing neural cellular automata." }, { "start": 55.12, "end": 61.76, "text": " So if you don't know what cellular automata are, this is a very kind of old concept. The" }, { "start": 61.76, "end": 66.84, "text": " most famous one is called the game of life, where you have these cells. Here you can see" }, { "start": 66.84, "end": 73.2, "text": " every pixel is a cell and they follow some kind of update rule. And usually it's the" }, { "start": 73.2, "end": 78.2, "text": " update rule, something like if my neighbor is alive, I'm going to be alive as well in" }, { "start": 78.2, "end": 84.28, "text": " the next time step. Or if enough neighbors are alive and if only very few neighbors are" }, { "start": 84.28, "end": 88.88, "text": " alive, I'm going to die. So this gives rise to these kinds of patterns. And here the same" }, { "start": 88.88, "end": 96.96000000000001, "text": " is done with color. And the update rules are a bit more complicated. So basically, ah," }, { "start": 96.96, "end": 105.67999999999999, "text": " traveler. Oh, nice. Okay. So in the game of life, if you play it, the most prestigious" }, { "start": 105.67999999999999, "end": 112.6, "text": " thing to get is are these kind of travelers. I've not... This is the first time I've managed" }, { "start": 112.6, "end": 120.28, "text": " to do this in this thing. So what does it do? So each pixel here is kind of an autonomous" }, { "start": 120.28, "end": 125.75999999999999, "text": " thing that is only allowed to look at its neighbors in order to decide whether or not" }, { "start": 125.76, "end": 133.52, "text": " in the next time step it is going to be alive. Look, it's like incorporating again. So each" }, { "start": 133.52, "end": 139.28, "text": " cell looks at its neighbors and then decides what its next state will be. And here it's" }, { "start": 139.28, "end": 146.96, "text": " not only alive or dead. Dead would be white and alive would be anything else. But it is" }, { "start": 146.96, "end": 153.08, "text": " also, I guess this white is... It is also the color. So each cell decides on what color" }, { "start": 153.08, "end": 161.32000000000002, "text": " it should have. And then this is a live thing. So it kind of reproduces, right? You can see" }, { "start": 161.32000000000002, "end": 167.16000000000003, "text": " if I start it new. If you double click here, it grows from somewhere else. And this is" }, { "start": 167.16000000000003, "end": 172.08, "text": " completely local. So these cells really only look at their neighbors. That's the special" }, { "start": 172.08, "end": 176.8, "text": " part, right? 
They don't look at the global structure. It's not like a GAN that can look" }, { "start": 176.8, "end": 183.48000000000002, "text": " at the entire picture and decide what's still missing. What these can also do if you destroy" }, { "start": 183.48000000000002, "end": 190.56, "text": " part of it, they can kind of grow back just, again, just out of local update rules at the" }, { "start": 190.56, "end": 196.76000000000002, "text": " level of the individual cells and their neighbors. They're trained to do these big structures." }, { "start": 196.76000000000002, "end": 205.78, "text": " So let's look at how they do it. So basically, here's how they model a cell. And let's go" }, { "start": 205.78, "end": 213.56, "text": " over here. So each cell, as I said, is made up of 16 channels. And here it's modeled as" }, { "start": 213.56, "end": 220.84, "text": " three by three, but I think each cell is really one pixel. And each cell is allowed to look" }, { "start": 220.84, "end": 226.8, "text": " at its eight neighbors, right? So each cell is allowed to look at its eight neighbors" }, { "start": 226.8, "end": 235.32, "text": " across 16 different channels. And the 16 channels here mean the first three are RGB. So this" }, { "start": 235.32, "end": 240.95999999999998, "text": " is the actual color that is seen. Then there is an alive or dead channel. So what they" }, { "start": 240.95999999999998, "end": 248.35999999999999, "text": " call an alpha channel. So if this channel is high, the cell is considered alive. Otherwise," }, { "start": 248.35999999999999, "end": 254.51999999999998, "text": " it is considered dead and not part of the pattern. So a cell can come alive or die," }, { "start": 254.51999999999998, "end": 259.68, "text": " depending on its neighbors. And then the rest, the rest 12 channels are what they call hidden" }, { "start": 259.68, "end": 267.44, "text": " channels. So the cell is allowed to encode some hidden state there. So there's each cell" }, { "start": 267.44, "end": 272.6, "text": " is represented by the 16 dimensional vector, which is not much right. And then each cell" }, { "start": 272.6, "end": 278.78000000000003, "text": " is allowed to look at three things. So from the bottom here, it's allowed to look at its" }, { "start": 278.78000000000003, "end": 285.52, "text": " own state, so at its own 16 dimensional vectors, and it is allowed to look at its neighbors." }, { "start": 285.52, "end": 291.03999999999996, "text": " And it does this by doing a convolution with a sobel filter. And the sobel filter is simply" }, { "start": 291.03999999999996, "end": 298.2, "text": " a fixed filter that you do a three by three convolution with, as you can see here, is" }, { "start": 298.2, "end": 305.56, "text": " basically a gradient filter. So basically measures the difference between what's to" }, { "start": 305.56, "end": 309.96, "text": " the left of the cell and what's to the right of the cell. And here in the sobel y direction," }, { "start": 309.96, "end": 316.56, "text": " the same in the y direction. So it's basically allowed to look at gradients in states of" }, { "start": 316.56, "end": 324.44, "text": " its neighbors. This is modeled after real cells kind of looking at chemical gradients" }, { "start": 324.44, "end": 330.64, "text": " in their neighborhoods. So this is all this, this is all that the cell has to decide what" }, { "start": 330.64, "end": 337.06, "text": " it's supposed to do next, right. 
And what we want is we want that each individual cell" }, { "start": 337.06, "end": 342.44, "text": " only looking at its neighbors produces in total, they will produce these kind of very" }, { "start": 342.44, "end": 348.9, "text": " complex pattern. So the update rule is the following, you convolute with the sobel filters" }, { "start": 348.9, "end": 354.28, "text": " and you take the cell identity, you put this all into a vector, you put it through a very," }, { "start": 354.28, "end": 359.84000000000003, "text": " very small neural network. So this is one dense layer, one relu, and then another dense" }, { "start": 359.84000000000003, "end": 365.44, "text": " layer to get the next 16 dimensional vector, which is the next state. And that defines" }, { "start": 365.44, "end": 370.28, "text": " your update rules. That doesn't really define the next state that defines the Delta to the" }, { "start": 370.28, "end": 376.6, "text": " next state, kind of like a residual neural network. So basically, which cells need to" }, { "start": 376.6, "end": 381.24, "text": " come alive in the next time step, which cells need to die and how are they to change their" }, { "start": 381.24, "end": 389.1, "text": " colors, right. And then you get the output of the next step, right. So that's, that's" }, { "start": 389.1, "end": 395.64000000000004, "text": " basically the entire thing. So all that is learned here is the the update rule of the" }, { "start": 395.64000000000004, "end": 400.48, "text": " neural network, right. So basically, the neural network decides, it looks at a cell and its" }, { "start": 400.48, "end": 406.88, "text": " neighbors and decides what the information in the cell in the next step should be, right." }, { "start": 406.88, "end": 411.96000000000004, "text": " And you do this for multiple time steps. That's I actually want to go down here, you do this" }, { "start": 411.96000000000004, "end": 417.28000000000003, "text": " for multiple time steps, the initial state is simply one cell that is alive here in the" }, { "start": 417.28, "end": 422.59999999999997, "text": " middle, everything else is dead, this cell is alive and black, you do this for many steps," }, { "start": 422.59999999999997, "end": 428.84, "text": " right. And then at some point, you get an output. And you compare the output to your" }, { "start": 428.84, "end": 434.28, "text": " desired output, you compute a loss that is differentiable. And because your update rule" }, { "start": 434.28, "end": 442.88, "text": " is differentiable, and your loss is differentiable, you can backprop through time to the original" }, { "start": 442.88, "end": 447.76, "text": " pattern here. And you can basically learn this update rule by backproping through time." }, { "start": 447.76, "end": 453.64, "text": " This is a bit like an LSTM. And if you see in the architecture here, I think this residual" }, { "start": 453.64, "end": 459.96, "text": " connection is really the key to making this work over time. Because usually, I would not" }, { "start": 459.96, "end": 465.04, "text": " expect something like this to easily emerge over time because you have the problem of" }, { "start": 465.04, "end": 470.68, "text": " vanishing and exploding gradients. And you have no way of mitigating this problem here," }, { "start": 470.68, "end": 480.84000000000003, "text": " this problem here in this simple neural network. But in any case, they backprop through time" }, { "start": 480.84000000000003, "end": 487.6, "text": " here. 
So each of these update steps, which again, this isn't one neural network with" }, { "start": 487.6, "end": 493.24, "text": " many layers, this is the same neural network applied over and over and over and over again," }, { "start": 493.24, "end": 498.88, "text": " and then there is a loss computed. So basically, the gradients will accumulate over these steps," }, { "start": 498.88, "end": 504.32, "text": " and they tell the network what it needs to adjust to go from this one single black pixel" }, { "start": 504.32, "end": 511.24, "text": " to this final desired state. If you do this over and over again, you learn things, you" }, { "start": 511.24, "end": 518.96, "text": " learn an update rule that will give rise to that pattern, hopefully. Now, here is a kind" }, { "start": 518.96, "end": 525.4, "text": " of an illustration of this alive and dead thing. So what they do is they consider cells" }, { "start": 525.4, "end": 531.28, "text": " that have an alpha channel, one of these channels called alpha, they have an alpha channel above" }, { "start": 531.28, "end": 541.6, "text": " 0.1, it's considered alive, right, and part of the loss. Then the neighbors, the neighbors" }, { "start": 541.6, "end": 550.28, "text": " of these cells that are below 0.1, but are neighboring a cell that is mature, alive," }, { "start": 550.28, "end": 554.68, "text": " they're called growing, they're also part of the loss, right. So simply by being close" }, { "start": 554.68, "end": 560.64, "text": " to something, someone that is alive, a cell that is alive, you are considered alive as" }, { "start": 560.64, "end": 565.88, "text": " well, but your neighbors aren't, right, only the neighbors of really alive. So there's" }, { "start": 565.88, "end": 572.3599999999999, "text": " really alive, kind of alive, and then there is dead. And dead, the meaning of dead here," }, { "start": 572.3599999999999, "end": 578.12, "text": " the gray ones, is they're not, they won't become part of the pattern, part of the loss," }, { "start": 578.12, "end": 590.36, "text": " right, they're dead. All right, so what will this get you initially? So here is an animation," }, { "start": 590.36, "end": 595.68, "text": " if they train this just like that, just backprop through time with a target pattern, and" }, { "start": 595.68, "end": 600.6, "text": " then they let it run, you see these patterns actually emerge. So that's pretty cool. But" }, { "start": 600.6, "end": 606.28, "text": " then if you let them run for longer than they've been trained, you basically have no guarantees" }, { "start": 606.28, "end": 612.68, "text": " on what's going to happen. Like these update rules are simply trained to achieve the pattern" }, { "start": 612.68, "end": 617.4399999999999, "text": " within a certain number of steps, right. If you run for more than that, and apply the" }, { "start": 617.4399999999999, "end": 624.0799999999999, "text": " update rules for longer than that, you you have like there's little like you have no" }, { "start": 624.0799999999999, "end": 629.06, "text": " guarantee what's going to happen, these update rules will simply continue, as you can see" }, { "start": 629.06, "end": 635.4399999999999, "text": " here and produce some weird stuff. So they are trying to fix this. So what they do is" }, { "start": 635.44, "end": 639.7600000000001, "text": " basically they train for longer, but they do it in a in a kind of different way. 
So" }, { "start": 639.7600000000001, "end": 649.7600000000001, "text": " at each at each step of training, and as a step, I mean, a batch over these number of" }, { "start": 649.7600000000001, "end": 656.5200000000001, "text": " time steps. So so they sample a batch, initially, it's just all black pixels, right, as we see" }, { "start": 656.5200000000001, "end": 663.44, "text": " above. And then they optimize for these number of time steps. And then they're at the end." }, { "start": 663.44, "end": 668.7600000000001, "text": " So what they do is they don't always start from the black pixel. But sometimes they also" }, { "start": 668.7600000000001, "end": 678.72, "text": " start from a previously seen end state. So basically, they take the end state of a previous" }, { "start": 678.72, "end": 684.7600000000001, "text": " training run, and then they just continue from that instead of starting from the initial" }, { "start": 684.7600000000001, "end": 693.32, "text": " point. And you see after some training, they get better and better. So initially, you see" }, { "start": 693.32, "end": 701.12, "text": " the thing on the left here. The thing on the left here being a starting state. And then" }, { "start": 701.12, "end": 708.1600000000001, "text": " it progressively gets better. So basically, by starting from end states of other things," }, { "start": 708.1600000000001, "end": 715.6800000000001, "text": " you learn to. So if the end state of the other thing isn't very good, you basically learn" }, { "start": 715.6800000000001, "end": 722.34, "text": " to go to the good pattern to the pattern you want. But of course, over time, there's going" }, { "start": 722.34, "end": 726.94, "text": " to be more and more of these end states that you train from that are already pretty close" }, { "start": 726.94, "end": 734.8000000000001, "text": " to the pattern you want. And so then what that means is you learn to reproduce the pattern." }, { "start": 734.8000000000001, "end": 740.44, "text": " So you are already at a good point, you learn to stay at that good point. And then that" }, { "start": 740.44, "end": 747.6, "text": " enables you to basically learn update rules that if you're not at the pattern you want," }, { "start": 747.6, "end": 753, "text": " they go towards the pattern you want. But also if you run for longer, if you are already" }, { "start": 753, "end": 759.5, "text": " are at the pattern you want, then you stay at the pattern you want. So that's what we" }, { "start": 759.5, "end": 765.36, "text": " basically saw in the very initial demonstration where you could, this is a live demonstration" }, { "start": 765.36, "end": 771.6, "text": " like this thing up here, this is a live, this is running, right. And you see the update" }, { "start": 771.6, "end": 776.6800000000001, "text": " rules data, they are continuously applied, they basically stay at the pattern where they" }, { "start": 776.68, "end": 782.4, "text": " are. And that is also that is learned because of this protocol that you train from end states" }, { "start": 782.4, "end": 791.88, "text": " as well as from beginning states. So the next thing is what I'm doing here is I can destroy" }, { "start": 791.88, "end": 799, "text": " part of the pattern, and it will kind of regrow right you see that here. 
So this is also a" }, { "start": 799, "end": 804.0799999999999, "text": " part so for now we've also only learned to go from a single pixel like here from a black" }, { "start": 804.08, "end": 811.8000000000001, "text": " pixel to the pattern. But now we also want to learn to go to regrow when destroyed because" }, { "start": 811.8000000000001, "end": 823.12, "text": " that is, you can see this is modeled after kind of live tissue. So here you can see the" }, { "start": 823.12, "end": 834, "text": " parts are cut away and then the cells try to regrow. So this is I think initially, this" }, { "start": 834, "end": 840.04, "text": " is initially when you just train them, they exhibit some of that property, but not like" }, { "start": 840.04, "end": 847.32, "text": " very satisfying in some cases. So what they do is they train not only do they use end" }, { "start": 847.32, "end": 854.36, "text": " states, like we saw before, but also some of their training samples are simply the pattern" }, { "start": 854.36, "end": 861.04, "text": " destroyed a bit. So as you can see in some of these samples, like these here, they in" }, { "start": 861.04, "end": 867.92, "text": " each sample, they kind of cut out part of the sample and they train the update rules" }, { "start": 867.92, "end": 875.76, "text": " to regrow that part that gives you that now gives you the ability to if you damage to" }, { "start": 875.76, "end": 884.52, "text": " pretty consistently regrow the pattern, as you can see here. And they also train for" }, { "start": 884.52, "end": 891.72, "text": " rotation, which is non trivial if you have these kind of pixel based, pixel based models." }, { "start": 891.72, "end": 898.64, "text": " But I want to jump that because I want to keep it kind of short here. So the entire" }, { "start": 898.64, "end": 905.76, "text": " goal of this is to kind of model the behavior of natural cells, because the natural cells," }, { "start": 905.76, "end": 910.92, "text": " they don't have an overarching view, they only have the view of their neighbors, right," }, { "start": 910.92, "end": 918.4399999999999, "text": " and they are able to grow into very complex structures. I invite you to give this a try." }, { "start": 918.4399999999999, "end": 923.5999999999999, "text": " The distilled out pop journal is very cool. It's very interactive, you can play around" }, { "start": 923.5999999999999, "end": 931.1999999999999, "text": " with it, you can reproduce things in a collab. And yeah, shout out to the authors here," }, { "start": 931.2, "end": 944.12, "text": " Alexander Morbintsev, Ettore Randazzo, Evan Nicholson and Michael Levin. Yeah, that was" }, { "start": 944.12, "end": 961.48, "text": " it from me. Thanks for watching and bye bye." } ]
hDQNCWR3HLQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
[ "Science & Technology" ]
[ "deep learning", "machine learning", "schmidhuber", "hinton", "seppo", "rummelhardt", "hochreiter", "lstm", "rbm", "backpropagation", "credit", "science" ]
Schmidhuber writes up a critique of Hinton receiving the Honda Prize... AND HINTON REPLIES! Schmidhuber's Blog Entry: http://people.idsia.ch/~juergen/critique-honda-prize-hinton.html Hinton's Reply: https://www.reddit.com/r/MachineLearning/comments/g5ali0/d_schmidhuber_critique_of_honda_prize_for_dr/ Thumbnail Images: By Eviatar Bach - https://de.m.wikipedia.org/wiki/Datei:Geoffrey_Hinton_at_UBC.jpg By ITU/R.Farrell - https://www.flickr.com/photos/itupictures/34343385563, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=75018240 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
On April 21st, Jürgen Schmidhuber tweeted out: Stop crediting the wrong people for inventions made by others. At least in science, the facts will always win in the end. As long as the facts have not yet won, it is not yet the end. No fancy award can ever change that. Hashtag self-correcting science, hashtag plagiarism. And he links to an article on his own website, where he wrote: Critique of Honda Prize for Dr. Hinton. So this is on Schmidhuber's own website, and it's by himself. Don't you love this? How to pronounce his name: Jürgen Schmidhuber. You again. Sorry. This is absolutely great. So both Schmidhuber and Hinton are actually on Twitter; you can tweet at them and follow them. This article here is basically a critique of Honda's press release when they awarded Geoff Hinton for his achievements, and it goes through it step by step. We won't look at the whole thing, but just enough for you to get the flavor. So here Honda says: Dr. Hinton has created a number of technologies that have enabled the broader application of AI, including the backpropagation algorithm that forms the basis of the deep learning approach to AI. And Schmidhuber just goes off. He basically claims: while Hinton and his co-workers have made certain significant contributions to deep learning, the claim above is plain wrong. He says Hinton did not invent backpropagation; the person who invented backpropagation was Seppo Linnainmaa. He says that many papers failed to cite Linnainmaa, who was the original inventor of backprop, and so on, and he goes through a history of this and how it goes back even earlier. I always have a bit of trouble with claims about who invented what, because when is an algorithm really the same thing? When is it a variation on another algorithm? And when is it something completely new? It's never entirely clear. But the point made here is that the backpropagation algorithm existed before Hinton, and also that some of the seminal papers did not cite the correct origin. Statement 2: In 2002 he introduced a fast learning algorithm for restricted Boltzmann machines that allowed them to learn a single layer of distributed representation without requiring any labeled data. These methods allowed deep learning to work better and they led to the current deep learning revolution. And he basically goes: no, Dr. Hinton's interesting unsupervised pre-training for deep neural networks was irrelevant for the current deep learning revolution. In 2010 our team showed that feed-forward networks can be trained by plain backprop and do not at all require pre-training. And he again says: apart from this, Hinton's unsupervised pre-training was conceptually a rehash of my unsupervised pre-training for deep recurrent neural networks. So, as you know, Schmidhuber has done a lot of work on recurrent neural networks, and he basically says it was just a rehash of his algorithm. Now I have to say, first of all, he makes a point here, right, that we don't really do unsupervised pre-training anymore, well, until now, of course. To train an MNIST classifier, you don't have to do that. But it's also doubtful; even if this wasn't a step on the exact path to the current situation, it was a thing that got people excited, maybe. And so the critique is like half valid. And also, it doesn't help Schmidhuber that he always compares it to his own things.
Like, either criticize them for general things, but then avoid bringing your own things in, because otherwise it just sounds like: I did this before. And also, I read some papers from these times; people just wrote papers sometimes. I haven't read this specific one, but sometimes people just wrote papers writing down their ideas, like: one could do this and this and this, never doing any experiments or actually specifying exactly what they mean. They just kind of wrote down a bunch of ideas, and that got published. Especially, there are some reinforcement learning papers where people are just like: oh, imagine agents doing this and learning from that. So again, it is never really clear. Ideas are just had by everyone. I think people mistake this; they think that ideas are unique. It's not ideas that are unique. Many people have the same ideas, but there's also execution and exact formalization and so on, the exact level of specificity. All of this is really hard. And then Honda says: in 2009, Dr. Hinton and two of his students used multilayer neural nets to make a major breakthrough in speech recognition that led directly to greatly improved... And at this, of course, Schmidhuber goes off, because speech recognition is, of course, prime LSTM territory. So you don't want to go near this. And Honda further says he revolutionized computer vision by showing that deep learning worked far better than the existing state of the art. And again, he says the basic ingredients were already there and so on, and that his team in Switzerland already had the first superior award-winning GPU-based CNN and so on; that's what was called DanNet, produced by his group. And again, this seems correct, right? When he lays it out like this, it seems correct, but it doesn't change the fact that AlexNet won ImageNet in 2012, and that was like the start of the deep learning revolution. It was like: wow, you can cut the error rate by something like 30% simply by doing this deep learning stuff. So again, even if, as he says, DanNet blew away the competition, it always seems like Schmidhuber is kind of right, but then also he's not: exact academic work and the idea being there on a paper isn't the only thing that drives progress. And it says: to achieve their dramatic results, Dr. Hinton also invented a widely used new method called dropout, which reduces overfitting. No, like no, just no. Randomly dropping parts in order to make something more robust, that is surely not a new thing. And he also points to the much earlier stochastic delta rule and so on, and he critiques that this paper did not cite it; they just gave the thing its name. This is an idea that is kind of so simple that you wouldn't even necessarily think about researching whether it has existed already. I think they just did it because it's a natural idea, and then they gave it a name, and the name stuck. It's not about the idea itself. And then lastly, they say: of the countless AI-based technological services across the world, it is no exaggeration to say that few would have been possible without the results Dr. Hinton created. I love this. Name one that would not have been possible. And he just gives a list of things from his own group that were basically possible without Hinton's contributions. And this is just a bit of a cheap shot. Clearly, Honda is not saying it would have been physically impossible without his contributions.
But certainly Hinton, even if he hadn't invented any of those things, has created a spark. These things created a splash, got people excited, got people thinking about new ways of applying things, even if all of this is true. But I would like you to notice: this is a critique of what Honda says about Hinton. And if I read through the statements of Schmidhuber, most of them are technically correct. And, you know, that was that, and then I thought: OK, cool. But then someone posted it on Reddit. And then Hinton replies. And this is... don't you love this? So Hinton says: Having a public debate with Schmidhuber about academic credit is not advisable because it just encourages him, and there is no limit to the time and effort that he is willing to put into trying to discredit his perceived rivals. He has even resorted to tricks like having multiple aliases in Wikipedia to make it look as if other people agree with him. The page on his website about Alan Turing is a nice example of how he goes about trying to... These are shots fired. And he says: I'm going to respond once and only once. I have never claimed that I invented backpropagation. David Rumelhart invented it independently, after other people in other fields had invented it. It is true that when we first published, we did not know the history. So he basically says: OK, we did forget to cite it when we first published about backprop, but he doesn't say he invented it. What I have claimed is that I was the person to clearly demonstrate that backprop could learn interesting internal representations and that this is what made it popular. So this goes in a different direction: Schmidhuber is very much about academic contributions, the idea was there before; and Hinton basically says, no, what we did is we showed that it works in this particular way, and we kind of got people excited about it. I did this by forcing that, blah, blah, blah. And he says: it is true that many people in the press have said I invented backprop, and I have spent a lot of time correcting them. Here is an excerpt from 2018, I guess a quote from a book that quotes Hinton, where he says: lots of people invented different versions of backprop before David Rumelhart. They were mainly independent inventions, something I feel I have got too much credit for. It's one of these rare cases where an academic feels he has gotten too much credit for something. My main contribution was to show you can use it for learning distributed representations, so I'd like to set the record straight on that. Then he says: maybe Juergen would like to set the record straight on who invented LSTMs. Boom. Crazy. Shots fired by Hinton here. This is just great. But again, look at what Hinton says. Hinton basically says: yes, I have not invented that, and I have corrected this on the public record in the past. So that's what Hinton says. And the comments here are just gold; I really invite you to read them. And then Schmidhuber, of course, being Schmidhuber, replies again. Down here, he has a response to the reply. And I don't expect Hinton to reply again; I waited for a bit, but I believe Hinton when he says he does it only once. So he goes into this. It says: summary, the facts presented in sections one, two, three, four, five are still valid. So he goes kind of statement by statement. Is this having a public debate, blah, blah, blah; he just says, well, this is an ad hominem attack, which is true. Right. This is true.
And on he even has multiple aliases in Wikipedia, he again just says: another ad hominem attack. And then he goes into the claim that he tries to discredit Alan Turing, and Schmidhuber responds with a big, long, basically, claim that Alan Turing wasn't as important as people make him out to be, and that people invented this kind of Turing machine equivalence before him. Again, it's Schmidhuber's take that the idea basically was already there and these people don't get the correct credit. And he's also correct that this is, it's true, an ad hominem attack. Right. So, be that as it may, this is correct. And then, when Hinton says that he didn't invent backprop, Schmidhuber says: this is finally a response related to my post, which is true. Right. However, it does not at all contradict what I wrote. And it is true that Hinton credited his co-author Rumelhart with the invention, but neither cited Linnainmaa. And on the statement about lots of people, he says it wasn't invented by lots of different people, but by exactly one person. And this I find questionable: can you really say this is the exact time when backprop was invented, even though it probably wasn't in the exact current formulation, and it probably existed somewhat like this before? But again, his main claim is: Dr. Hinton accepted the Honda Prize, although he apparently agrees that Honda's claims are false. He should ask Honda to correct their statements. And to maybe Juergen would like to set the record straight: we invented LSTMs. And, as you may know, Sepp Hochreiter kind of invented LSTMs, with Jürgen Schmidhuber as his PhD advisor. But to summarize: Dr. Hinton's comments and ad hominem arguments diverge from the contents of my post and do not challenge the facts, and so on. And I have to say, after reading this: this is correct. Right. Hinton basically replies, hey, I never claimed I invented backprop, other people have invented it. And Schmidhuber doesn't criticize Hinton in this particular post (he may elsewhere); Schmidhuber doesn't criticize Hinton for claiming it, he criticizes Honda for claiming that Hinton did. And Hinton doesn't claim it; Hinton basically agrees with him. And also, Schmidhuber says: Dr. Hinton accepted the Honda Prize, although he apparently agrees that the claims are false; he should ask Honda to correct their statements. And it is true that Hinton accepted this prize under this release. Right. Now, you might be able to say: Hinton also says he's on the record basically saying he didn't do this. And I guess if you're Hinton, and you've had this successful career and so on, and you have previously really publicly stated that you didn't invent these things and made it clear.
Progress is also made by people being excited, people actually getting something to work, people, you know, doing something at the right time, the right place, which is also correct. But it is fun. It is fun. So I just I enjoyed I enjoy this honestly, like because ultimately this is the kind of discussions also need to happen in science because credit assignment is an important thing in science. And even though sometimes it's over the top, like Schmidhuber always going after it, I think we need people like him just kind of to keep the field in check a bit. And yeah, I will link to all of this. I hope you enjoy this and I wish you a nice rest of the weekend. If you're still here, consider subscribing and leave comment if you want. I usually read them. Bye bye.
[ { "start": 0, "end": 4, "text": " On April 21st, Jürgen Schmidhuber tweeted out," }, { "start": 4, "end": 8, "text": " Stop crediting the wrong people for inventions made by others." }, { "start": 8, "end": 11, "text": " At least in science, the facts will always win at the end," }, { "start": 11, "end": 14, "text": " as long as the facts have not yet won." }, { "start": 14, "end": 16, "text": " It is not yet the end." }, { "start": 16, "end": 19, "text": " No fancy award can ever change that." }, { "start": 19, "end": 24, "text": " Hashtag self-correcting science, hashtag plagiarism." }, { "start": 24, "end": 28, "text": " And links to an article of his own website," }, { "start": 28, "end": 30, "text": " where he wrote," }, { "start": 30, "end": 34, "text": " Critique of Honda Prize for Dr. Hinton." }, { "start": 34, "end": 37, "text": " So this is on Schmidhuber's own website," }, { "start": 37, "end": 39, "text": " and it's by himself." }, { "start": 39, "end": 41, "text": " Don't you love this?" }, { "start": 41, "end": 44, "text": " How to pronounce his name, Jürgen Schmidhuber." }, { "start": 44, "end": 47, "text": " You again. Sorry." }, { "start": 47, "end": 49, "text": " This is absolutely great." }, { "start": 49, "end": 52, "text": " So both, actually, Schmidhuber and Hinton are on Twitter." }, { "start": 52, "end": 55, "text": " You can tweet at them and follow them." }, { "start": 55, "end": 59, "text": " This article here is basically a critique" }, { "start": 59, "end": 62, "text": " of the press release of Honda" }, { "start": 62, "end": 65, "text": " when they awarded Jeff Hinton" }, { "start": 65, "end": 68, "text": " for his achievements." }, { "start": 68, "end": 71, "text": " And it goes through it step by step." }, { "start": 71, "end": 73, "text": " And we won't look at the whole thing," }, { "start": 73, "end": 75, "text": " but just for you to get the flavor." }, { "start": 75, "end": 77, "text": " So here Honda says," }, { "start": 77, "end": 80, "text": " Dr. Hinton has created a number of technologies" }, { "start": 80, "end": 83, "text": " that have enabled the broader application of AI," }, { "start": 83, "end": 85, "text": " including the backpropagation algorithm" }, { "start": 85, "end": 89, "text": " that forms the basis of deep learning approach to AI." }, { "start": 89, "end": 92, "text": " And Schmidhuber just goes off." }, { "start": 92, "end": 94, "text": " He basically claims," }, { "start": 94, "end": 96, "text": " while Hinton and his coworkers have made" }, { "start": 96, "end": 98, "text": " certain significant contributions to deep learning," }, { "start": 98, "end": 101, "text": " the claim above is plain wrong." }, { "start": 101, "end": 105, "text": " He says Hinton did not invent backpropagation." }, { "start": 105, "end": 110, "text": " The person who invented backpropagation was Seppo Linayma." }, { "start": 110, "end": 118, "text": " He says basically many papers failed to cite Linayma," }, { "start": 118, "end": 124, "text": " who was the original inventor of backprop and so on." }, { "start": 124, "end": 127, "text": " And he goes through a history of this" }, { "start": 127, "end": 129, "text": " and how it's even earlier." }, { "start": 129, "end": 131, "text": " I always have a bit of a trouble with claims" }, { "start": 131, "end": 132, "text": " like who invented what," }, { "start": 132, "end": 136, "text": " because when is an algorithm really the same thing?" 
}, { "start": 136, "end": 138, "text": " And when is it a variation on another algorithm?" }, { "start": 138, "end": 140, "text": " And when is it something completely new?" }, { "start": 140, "end": 142, "text": " It's never entirely clear." }, { "start": 142, "end": 146, "text": " But the points here made that the things," }, { "start": 146, "end": 152, "text": " the backpropagation algorithm existed before Hinton." }, { "start": 152, "end": 156, "text": " And also that some of the papers," }, { "start": 156, "end": 160, "text": " some of the seminal papers did not cite the correct origin." }, { "start": 160, "end": 163, "text": " Statement 2. In 2002 he introduced" }, { "start": 163, "end": 167, "text": " a fast learning algorithm for restricted Boltzmann machines" }, { "start": 167, "end": 171, "text": " that allowed them to learn a single layer of distributed representation" }, { "start": 171, "end": 173, "text": " without requiring any labeled data." }, { "start": 173, "end": 176, "text": " These methods allowed deep learning to work better" }, { "start": 176, "end": 179, "text": " and they led to the current deep learning revolution." }, { "start": 179, "end": 184, "text": " And he basically goes, no, Dr. Hinton's interesting unsupervised pre-training" }, { "start": 184, "end": 187, "text": " for deep neural networks was irrelevant" }, { "start": 187, "end": 189, "text": " for the current deep learning revolution." }, { "start": 189, "end": 193, "text": " In 2010 our team showed that the feed forward networks" }, { "start": 193, "end": 195, "text": " can be trained by plain backprop," }, { "start": 195, "end": 198, "text": " do not at all require pre-training." }, { "start": 198, "end": 201, "text": " And he basically again says," }, { "start": 201, "end": 203, "text": " apart from this Hinton's unsupervised pre-training" }, { "start": 203, "end": 207, "text": " was conceptually a rehash of my unsupervised pre-training" }, { "start": 207, "end": 210, "text": " for deep recurrent neural networks." }, { "start": 210, "end": 214, "text": " So he, you know, as you know, Schmidhuber has done a lot of work" }, { "start": 214, "end": 217, "text": " in recurrent neural networks and he basically says" }, { "start": 217, "end": 220, "text": " it was just a rehash of his algorithm." }, { "start": 220, "end": 223, "text": " Now I have to say I have," }, { "start": 223, "end": 227, "text": " so first of all, he makes a point here, right," }, { "start": 227, "end": 231, "text": " that we don't really do unsupervised pre-training anymore" }, { "start": 231, "end": 233, "text": " until now, of course." }, { "start": 233, "end": 236, "text": " But you, like, to train an MNIST classifier," }, { "start": 236, "end": 238, "text": " you don't have to do that." }, { "start": 238, "end": 242, "text": " But it's also doubtful that this was a step," }, { "start": 242, "end": 246, "text": " even though if it wasn't on the exact path" }, { "start": 246, "end": 249, "text": " to the current situation," }, { "start": 249, "end": 252, "text": " it was a thing that got people excited maybe." }, { "start": 252, "end": 255, "text": " And so the critique is like half valid." }, { "start": 255, "end": 258, "text": " And also it doesn't help Schmidhuber" }, { "start": 258, "end": 261, "text": " that he always compares it to his own things." 
}, { "start": 261, "end": 266, "text": " Like it just, like, either criticize them for, you know," }, { "start": 266, "end": 270, "text": " general things, but then avoid bringing your own things in" }, { "start": 270, "end": 273, "text": " because it just sounds like I did this before." }, { "start": 273, "end": 276, "text": " And also I read some papers from these times." }, { "start": 276, "end": 279, "text": " People just wrote papers sometimes." }, { "start": 279, "end": 281, "text": " I haven't read this specific one," }, { "start": 281, "end": 283, "text": " but sometimes people just wrote papers" }, { "start": 283, "end": 285, "text": " writing down their ideas." }, { "start": 285, "end": 288, "text": " Like, one could do this and this and this." }, { "start": 288, "end": 290, "text": " Never doing any experiments" }, { "start": 290, "end": 294, "text": " or actually specifying exactly what they mean." }, { "start": 294, "end": 296, "text": " They just kind of wrote down a bunch of ideas" }, { "start": 296, "end": 298, "text": " and that got published." }, { "start": 298, "end": 302, "text": " Especially, like, there's some reinforcement learning papers" }, { "start": 302, "end": 304, "text": " where people are just like, oh, one," }, { "start": 304, "end": 308, "text": " I imagine agents doing this and learning from that." }, { "start": 308, "end": 313, "text": " So it is, again, it is never really clear." }, { "start": 313, "end": 315, "text": " Ideas are just had by everyone." }, { "start": 315, "end": 318, "text": " I think people mistake this," }, { "start": 318, "end": 320, "text": " that think that ideas are unique." }, { "start": 320, "end": 322, "text": " It's not ideas that are unique." }, { "start": 322, "end": 326, "text": " Many people have the same ideas, but some..." }, { "start": 326, "end": 331, "text": " There's also execution and exact formalization and so on." }, { "start": 331, "end": 333, "text": " And exact level of specificity." }, { "start": 333, "end": 335, "text": " All of this is really hard." }, { "start": 335, "end": 339, "text": " And then the Honda says, in 2009, Dr. Hinton" }, { "start": 339, "end": 341, "text": " and two of his students used multilayer neural nets" }, { "start": 341, "end": 343, "text": " to make major breakthrough in speech recognition." }, { "start": 343, "end": 345, "text": " That led directly to greatly improved." }, { "start": 345, "end": 348, "text": " And this, of course, Schmidrub goes off by this" }, { "start": 348, "end": 355, "text": " because speech recognition is, of course, prime LSTM territory." }, { "start": 355, "end": 359, "text": " So you don't want to go near this." }, { "start": 359, "end": 361, "text": " And the Honda further says," }, { "start": 361, "end": 364, "text": " revolutionized computer vision" }, { "start": 364, "end": 366, "text": " by showing that deep learning worked far better" }, { "start": 366, "end": 368, "text": " than existing state of the art." }, { "start": 368, "end": 372, "text": " And again, he says the basic ingredients" }, { "start": 372, "end": 375, "text": " were already there and so on." }, { "start": 375, "end": 380, "text": " And our team in Switzerland already used" }, { "start": 380, "end": 384, "text": " his first superior award-winning GPU-based CNN and so on." }, { "start": 384, "end": 388, "text": " That's what was called DanNet, was produced by his group." }, { "start": 388, "end": 391, "text": " And again, this seems correct, right?" 
}, { "start": 391, "end": 393, "text": " This seems when he lays it out like this," }, { "start": 393, "end": 398, "text": " it doesn't change the fact that AlexNet won ImageNet in 2012." }, { "start": 398, "end": 403, "text": " And that was like the start of the deep learning revolution." }, { "start": 403, "end": 410, "text": " It was like, wow, you can cut the error rate by something like 30%" }, { "start": 410, "end": 414, "text": " simply by doing this deep learning stuff." }, { "start": 414, "end": 420, "text": " So again, even if DanNet says it blew away the competition," }, { "start": 420, "end": 425, "text": " it always seems like Schmidhuber is kind of right," }, { "start": 425, "end": 429, "text": " but then also he's not." }, { "start": 429, "end": 434, "text": " He's like, exact academic work" }, { "start": 434, "end": 437, "text": " and the idea being there on a paper" }, { "start": 437, "end": 442, "text": " isn't the only thing that drives progress." }, { "start": 442, "end": 445, "text": " And it says, to achieve their dramatic results," }, { "start": 445, "end": 449, "text": " Dr. Hinton also invented a widely used new method called dropout," }, { "start": 449, "end": 451, "text": " which uses overfitting." }, { "start": 451, "end": 456, "text": " No, like no, like no, just no." }, { "start": 456, "end": 461, "text": " Randomly dropping parts in order to make something more robust," }, { "start": 461, "end": 466, "text": " that is surely not a new thing." }, { "start": 466, "end": 473, "text": " And he also says much earlier, there's this stochastic delta rule and so on." }, { "start": 473, "end": 478, "text": " And he also critiques that this paper did not cite this." }, { "start": 478, "end": 480, "text": " They just gave it the name." }, { "start": 480, "end": 483, "text": " This is an idea that is kind of so simple" }, { "start": 483, "end": 487, "text": " that you wouldn't even necessarily think about researching" }, { "start": 487, "end": 489, "text": " whether that has existed already." }, { "start": 489, "end": 493, "text": " I think they just did it because it's a natural idea," }, { "start": 493, "end": 496, "text": " and then they gave it a name, and the name stuck." }, { "start": 496, "end": 499, "text": " It's not about the idea itself." }, { "start": 499, "end": 502, "text": " And then lastly, they say, of the countless AI-based technological services" }, { "start": 502, "end": 506, "text": " across the world, it is no exaggeration to say that few would have been possible" }, { "start": 506, "end": 509, "text": " without the results Dr. Hinton created." }, { "start": 509, "end": 510, "text": " I love this." }, { "start": 510, "end": 516, "text": " Name one that would not have been possible." }, { "start": 516, "end": 521, "text": " And he just gives a list of their own group" }, { "start": 521, "end": 525, "text": " that are basically possible without Hinton's contributions." }, { "start": 525, "end": 529, "text": " And this is just a bit of a cheap shot." }, { "start": 529, "end": 535, "text": " Clearly, Honda, they're not saying it would have been physically impossible." }, { "start": 535, "end": 538, "text": " Without his contributions." }, { "start": 538, "end": 547, "text": " But certainly Hinton has, even if he hadn't invented any of those things," }, { "start": 547, "end": 551, "text": " he certainly has created a spark." 
}, { "start": 551, "end": 555, "text": " And these things created a splash, got people excited," }, { "start": 555, "end": 560, "text": " people thinking about new ways of applying things, even if this is all true." }, { "start": 560, "end": 569, "text": " But I would like you to notice," }, { "start": 569, "end": 574, "text": " this is a critique of what Honda says about Hinton." }, { "start": 574, "end": 577, "text": " And if I read through the statements of Schmidhuber," }, { "start": 577, "end": 580, "text": " most of them are technically correct." }, { "start": 580, "end": 583, "text": " And you know, that was that." }, { "start": 583, "end": 585, "text": " And then I thought, OK, cool." }, { "start": 585, "end": 588, "text": " But then someone posted it on Reddit." }, { "start": 588, "end": 592, "text": " And then Hinton replies." }, { "start": 592, "end": 594, "text": " And this is..." }, { "start": 594, "end": 595, "text": " Don't you love this?" }, { "start": 595, "end": 598, "text": " So Hinton says," }, { "start": 598, "end": 602, "text": " Having a public debate with Schmidhuber about academic credit is not advisable" }, { "start": 602, "end": 604, "text": " because it just encourages him." }, { "start": 604, "end": 608, "text": " And there is no limit to the time and effort that he is willing to put" }, { "start": 608, "end": 613, "text": " into trying to discredit his perceived arrivals." }, { "start": 613, "end": 617, "text": " He is even escorted to tricks like having multiple aliases in Wikipedia" }, { "start": 617, "end": 620, "text": " to make it look as if other people agree." }, { "start": 620, "end": 625, "text": " The page on his website about Alan Turing is a nice example" }, { "start": 625, "end": 627, "text": " of how he goes on trying to..." }, { "start": 627, "end": 631, "text": " These are shots fired." }, { "start": 631, "end": 634, "text": " And he says, I'm going to respond once and only once." }, { "start": 634, "end": 638, "text": " I have never claimed that I invented backpropagation." }, { "start": 638, "end": 643, "text": " David Rumelhard invented it independently." }, { "start": 643, "end": 649, "text": " After other people in other fields had invented it." }, { "start": 649, "end": 652, "text": " It's true. When we first published, we did not know the history." }, { "start": 652, "end": 658, "text": " So he basically says, OK, we did forget to cite it when we first published" }, { "start": 658, "end": 664, "text": " about backprop, but he doesn't say he invented it." }, { "start": 664, "end": 666, "text": " What I've claimed is that I was the person to clearly demonstrate that" }, { "start": 666, "end": 669, "text": " backprop could learn interesting internal representations" }, { "start": 669, "end": 672, "text": " and that this is what made it popular." }, { "start": 672, "end": 674, "text": " So this goes into the direction." }, { "start": 674, "end": 677, "text": " Schmidhuber is very much on academic contributions." }, { "start": 677, "end": 678, "text": " Idea was there before." }, { "start": 678, "end": 682, "text": " And Hinton basically says, no, what we did is kind of we showed" }, { "start": 682, "end": 687, "text": " that it works in this particular way and we kind of got people excited about it." }, { "start": 687, "end": 694, "text": " I did this by forcing that, blah, blah, blah." 
}, { "start": 694, "end": 698, "text": " And he says, it is true that many people in the press have said" }, { "start": 698, "end": 701, "text": " I invented backprop and I've spent a lot of time correcting them." }, { "start": 701, "end": 708, "text": " Here is an excerpt from 2018 where this is, I guess, a quote from this book" }, { "start": 708, "end": 713, "text": " that quotes Hinton where he says, lots of people invented different versions" }, { "start": 713, "end": 715, "text": " of backprop before David Rumelhart." }, { "start": 715, "end": 721, "text": " They were mainly independent inventions, something I feel I've got too much credit for." }, { "start": 721, "end": 726, "text": " It's one of these rare cases where an academic feels he has gotten too much credit for something." }, { "start": 726, "end": 730, "text": " My main contribution was to show you can use it for learning distributor representations." }, { "start": 730, "end": 734, "text": " So I'd like to set the record straight on that." }, { "start": 734, "end": 740, "text": " Then he says, maybe Juergen would like to set the record straight on who invented LSTMs." }, { "start": 740, "end": 743, "text": " Boom, boom." }, { "start": 743, "end": 744, "text": " Crazy." }, { "start": 744, "end": 748, "text": " Shots fired by Hinton here." }, { "start": 748, "end": 751, "text": " This is, I mean, this is just great." }, { "start": 751, "end": 755, "text": " But again, look at what Hinton says." }, { "start": 755, "end": 759, "text": " Hinton basically says, yes, I have not invented that." }, { "start": 759, "end": 764, "text": " I have corrected this on public record in the past." }, { "start": 764, "end": 768, "text": " And yeah, so that's what Hinton says." }, { "start": 768, "end": 774, "text": " And I mean, the comments here are just gold." }, { "start": 774, "end": 776, "text": " I really invite you to read it." }, { "start": 776, "end": 781, "text": " And then Schmidhuber, of course, being Schmidhuber, replies again." }, { "start": 781, "end": 786, "text": " Down here, he has a response to the reply." }, { "start": 786, "end": 790, "text": " And I don't expect Hinton to reply again, so I waited for a bit." }, { "start": 790, "end": 793, "text": " But I believe Hinton when he says he does it only once." }, { "start": 793, "end": 797, "text": " So he goes into this." }, { "start": 797, "end": 805, "text": " It just says, summary, the facts presented in sections one, two, three, four, five are still valid." }, { "start": 805, "end": 808, "text": " So he goes kind of statement by statement." }, { "start": 808, "end": 811, "text": " Is this having a public debate? Blah, blah, blah." }, { "start": 811, "end": 815, "text": " And he just says, well, this is an ad hominem attack, which is true." }, { "start": 815, "end": 816, "text": " Right. This is true." }, { "start": 816, "end": 820, "text": " And he says he even has multiple aliases in Wikipedia." }, { "start": 820, "end": 825, "text": " And he just says another ad hominem attack." }, { "start": 825, "end": 830, "text": " And then he goes into that Schmidhuber tries to discredit Alan Turing." }, { "start": 830, "end": 841, "text": " And then Schmidhuber goes into this big, long, big, long, basically claim that Alan Turing wasn't as important as people made him out to be." }, { "start": 841, "end": 846, "text": " And people invented this kind of Turing machine equivalence before that." 
}, { "start": 846, "end": 853, "text": " Again, it's kind of Schmidhuber's take that the idea basically was already there and these people don't get the correct credit." }, { "start": 853, "end": 865, "text": " And also he's correct that this is a true it's an ad hominem attack." }, { "start": 865, "end": 869, "text": " Right. So, you know, be it as it may." }, { "start": 869, "end": 871, "text": " This is correct." }, { "start": 871, "end": 881, "text": " And then when Hinton goes that he didn't invent Backprop and Schmidhuber says this is finally a response related to my post, which is true." }, { "start": 881, "end": 885, "text": " Right. However, it does not at all contradict what I wrote." }, { "start": 885, "end": 896, "text": " And it is true that Hinton credited his co-author Rommelhardt with the invention, but neither cited Lin-Neymar and also the statement, lots of people." }, { "start": 896, "end": 901, "text": " He says it wasn't created by lots of different people, but exactly one person." }, { "start": 901, "end": 916, "text": " So this I find, like, can you really say now this is the exact time when Backprop was invented, even though it probably wasn't in the exact current formulation." }, { "start": 916, "end": 919, "text": " And it probably existed somewhat like this." }, { "start": 919, "end": 931, "text": " So, but again, and he his main claim is Dr. Hinton accepted the Honda price, although he apparently agrees that Honda's claims are false." }, { "start": 931, "end": 934, "text": " He should ask Honda to correct their statements." }, { "start": 934, "end": 938, "text": " And maybe you're going to would like to set the record straight." }, { "start": 938, "end": 951, "text": " We invented LSTMs and, you know, as we as you may know, Sepp Hochreiter kind of invented LSTMs under Jürgen Schmidhuber as a PhD advisor." }, { "start": 951, "end": 962, "text": " But the to summarize Dr. Hinton's comments and ad hominem arguments diverge from the contents of my post and do not challenge the facts and so on." }, { "start": 962, "end": 966, "text": " And I have to say after reading this, this is this is correct." }, { "start": 966, "end": 977, "text": " Right. Hinton basically replies to, hey, I, I never claimed I invented Backprop and other people have invented it." }, { "start": 977, "end": 981, "text": " And Schmidhuber doesn't criticize Hinton in this particular post." }, { "start": 981, "end": 991, "text": " He may otherwise. Schmidhuber doesn't criticize Hinton for claiming that he criticizes Honda for claiming that Hinton did." }, { "start": 991, "end": 994, "text": " And Hinton doesn't. Hinton basically agrees with him." }, { "start": 994, "end": 1000, "text": " And also Schmidhuber says Dr. Hinton accepted the Honda price, although he apparently agrees that the claims are false." }, { "start": 1000, "end": 1003, "text": " He should ask Honda to correct their statements." }, { "start": 1003, "end": 1008, "text": " And it is true that Hinton accepted this price under this release." }, { "start": 1008, "end": 1015, "text": " Right. Now, you might be able to say Hinton also says he's on the record basically saying he didn't do this." }, { "start": 1015, "end": 1021, "text": " And I guess if you're Hinton and you know, you've had this you've had the successful career and so on." }, { "start": 1021, "end": 1028, "text": " And you have previously really publicly stated that you didn't invent these things and, you know, made it clear." 
}, { "start": 1028, "end": 1031, "text": " And then you get a prize and they write this thing." }, { "start": 1031, "end": 1037, "text": " Maybe you just don't want to go after every single press statement and correcting that." }, { "start": 1037, "end": 1047, "text": " But, you know, in essence, basically Hinton understood this as an attack on himself that he claims he invented Backprop." }, { "start": 1047, "end": 1052, "text": " And Schmidhuber says Honda claims Hinton invented Backprop and Hinton accepted the price." }, { "start": 1052, "end": 1062, "text": " So he agrees with it and Hinton basically agrees with it, but doesn't say Honda should have corrected it, which I can understand." }, { "start": 1062, "end": 1066, "text": " So this is my take on this issue." }, { "start": 1066, "end": 1073, "text": " It's kind of both are correct and they just kind of talk past each other." }, { "start": 1073, "end": 1079, "text": " And Schmidhuber is always on the idea existed before." }, { "start": 1079, "end": 1085, "text": " And Hinton is correct when he says it's not always just about the idea." }, { "start": 1085, "end": 1097, "text": " Progress is also made by people being excited, people actually getting something to work, people, you know, doing something at the right time, the right place, which is also correct." }, { "start": 1097, "end": 1100, "text": " But it is fun. It is fun." }, { "start": 1100, "end": 1118, "text": " So I just I enjoyed I enjoy this honestly, like because ultimately this is the kind of discussions also need to happen in science because credit assignment is an important thing in science." }, { "start": 1118, "end": 1128, "text": " And even though sometimes it's over the top, like Schmidhuber always going after it, I think we need people like him just kind of to keep the field in check a bit." }, { "start": 1128, "end": 1131, "text": " And yeah, I will link to all of this." }, { "start": 1131, "end": 1135, "text": " I hope you enjoy this and I wish you a nice rest of the weekend." }, { "start": 1135, "end": 1140, "text": " If you're still here, consider subscribing and leave comment if you want." }, { "start": 1140, "end": 1159, "text": " I usually read them. Bye bye." } ]
D6osiiEoV0w
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning (w/ Author)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "metalearning", "meta learning", "neural network", "unsupervised learning", "few shot learning", "google", "google research", "google ai", "transformer", "meta transformer", "hypertransformer", "hyper transformer", "generate the weights of a neural network", "privacy", "personalization", "interview", "paper explained", "semi-supervised learning" ]
#hypertransformer #metalearning #deeplearning This video contains a paper explanation and an interview with author Andrey Zhmoginov! Few-shot learning is an interesting sub-field in meta-learning, with wide applications, such as creating personalized models based on just a handful of data points. Traditionally, approaches have followed the BERT approach where a large model is pre-trained and then fine-tuned. However, this couples the size of the final model to the size of the model that has been pre-trained. Similar problems exist with "true" meta-learners, such as MAML. HyperTransformer fundamentally decouples the meta-learner from the size of the final model by directly predicting the weights of the final model. The HyperTransformer takes the few-shot dataset as a whole into its context and predicts either one or multiple layers of a (small) ConvNet, meaning its outputs are the weights of the convolution filters. Interestingly, and with the correct engineering care, this actually appears to deliver promising results and can be extended in many ways. OUTLINE: 0:00 - Intro & Overview 3:05 - Weight-generation vs Fine-tuning for few-shot learning 10:10 - HyperTransformer model architecture overview 22:30 - Why the self-attention mechanism is useful here 34:45 - Start of Interview 39:45 - Can neural networks even produce weights of other networks? 47:00 - How complex does the computational graph get? 49:45 - Why are transformers particularly good here? 58:30 - What can the attention maps tell us about the algorithm? 1:07:00 - How could we produce larger weights? 1:09:30 - Diving into experimental results 1:14:30 - What questions remain open? Paper: https://arxiv.org/abs/2201.04182 ERRATA: I introduce Max Vladymyrov as Mark Vladymyrov Abstract: In this work we propose a HyperTransformer, a transformer-based model for few-shot learning that generates weights of a convolutional neural network (CNN) directly from support samples. Since the dependence of a small generated CNN model on a specific task is encoded by a high-capacity transformer model, we effectively decouple the complexity of the large task space from the complexity of individual tasks. Our method is particularly effective for small target CNN architectures where learning a fixed universal task-independent embedding is not optimal and better performance is attained when the information about the task can modulate all model parameters. For larger models we discover that generating the last layer alone allows us to produce competitive or better results than those obtained with state-of-the-art methods while being end-to-end differentiable. Finally, we extend our approach to a semi-supervised regime utilizing unlabeled samples in the support set and further improving few-shot performance.
Authors: Andrey Zhmoginov, Mark Sandler, Max Vladymyrov Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello, today we're going to look at HyperTransformer. This is a model for few shot learning, where you get new data that you haven't seen before, with potentially new class labels. So this model takes in a set of data points and corresponding class labels, and its output is the weights of a convolutional neural network that can then be used to classify those data points and corresponding test data points. This is very useful because it decouples the model that does the meta learning, or the few shot learning; it decouples the size of that model from the size of the model that then does the actual inference on the data, which means that I can have a big model doing all the meta learning things and end up with a very, very lean ConvNet that I can deploy anywhere. It's very useful if this needs to be deployed on mobile phones. It's very useful if there are privacy considerations, federated learning, anything like this. So the HyperTransformer doesn't classify data itself; it actually produces a model that classifies data, which is very cool in itself. And the models are quite performant by themselves. They're not super good, like, they're not the best, but they're good enough. And potentially they could even be used as a starting point to then refine and do some more training. So this is what we're going to look at today. This research is by Andrey Zhmoginov, Mark Sandler and Max Vladymyrov, and I'm going to interview Andrey in a bit here on the channel. He joined me and we had a nice conversation about the paper. So please let me know if you like this style. I feel it's a big boost to have the authors on with these paper reviews, but you need to tell me how to make the best use of their time, and how to make the best use of your time, the viewer's time, because I don't want to make these videos longer than they have to be. But I also want to give you the opportunity to sort of pick and choose. Some people prefer just my explanations, some people prefer the interviews, and I view it as a bit of a buffet. But please let me know in the comments how you would like a paper explanation with an author to be structured best, because ultimately it needs to be good for anyone watching. All right, let's dive in. The interview is going to be marked; there are chapter annotations down in the bar here, so if you want to skip to the interview, feel free. So the hyper transformer is a model, and it says it in the name: it's a hyper transformer, or, I mean, you could also have called it a meta transformer or something like this. It is a model that in itself produces weights. And what is it useful for? It's useful for few shot learning. And this is one of the things I appreciate about this paper, which I only really realized after I had done the interview: just the framing of the problem itself is very special, such that the model is quite good at it, which is maybe a lesson for all of us in research to already look for the good problem. So what we're going to end up with is a few shot learning setting. In few shot learning, you want to build a model, let's call it model M, or just some sort of an algorithm, it doesn't even have to be a model. And that model M will get just a few data points. Let's say these are images, in this case four, it might be some more than four, but, you know, a couple of dozen images or something like this. So not a giant amount of images, with their corresponding labels.
So let's give each one a label, a Y. And I want to take this data set, I want to input it into this box, and the box should come up with, ideally, a model. So it doesn't have to be a model, but let's call this a neural network over here, which should then be performant on the distribution that this small amount of data has come from. The challenges are obvious: you only have very little data to do this. The second challenge is that these labels might come from classes that you've never seen before, right? They might be new classes. So this is the general task of few shot learning. The advantage is that very often, the task isn't completely new. So the task isn't like a complete surprise, but the task itself, and this is what it's called, a task right here, comes from a distribution of tasks, which means that you have kind of like a data set that has many such tasks. So here is a task, right? This is a data set with some train and test samples, each one having their labels. And then there might be another task and another task and another task. So consider this sort of like a machine learning problem, except the data points are entire tasks. So you want to build a model that takes in such a task and gives you a good classifier for that particular task. Now, the question is obviously how you do that. What has been popular previously, and I've made a video, for example, on iMAML, so iMAML, I think it's written like this, with an L in there, this is a technique for meta learning. So what you would do is you would train one big model, and you train it with each of these tasks. And what you do is you want to end up with a model that is kind of like a common initialization for all the models. So when you get a new task, you want to take this model and fine tune it for a couple of steps for that particular task. And if you get another task, you want to take the common initialization and fine tune it for that particular task. So for each task, you'd end up with the same model, with this model right here, but fine tuned for that particular task. This is what we do; it's very popular. If you think of things like BERT or so, this is essentially what we do: we get to a common initialization, and then we fine tune that, except methods like iMAML explicitly train that initialization for the purpose of then being fine tuned to few shot learning tasks, so potentially having new labels, or potentially the same labels (a minimal sketch of this recipe follows below). The problem is obvious: the models are the same, right? This model and this model right here, they're the same architecture; it's just that one is a fine tuned version of the other. And there's the question, right: is that appropriate for the task? Like, is this model right here appropriate for this task? Maybe you can say, well, maybe not. It's just a few data points. In general, if I have a few data points, I might want a small, lean model, so that it doesn't blow up, doesn't overfit. Also, maybe, you know, where do I use few shot learning? Well, probably I use it when I need to have a model for every user. Like, you have your photos library, the photos library has a couple of dozen pictures in it. Now you want to train a classifier on it, right? And your classifier is going to be different from the next user's classifier, and so on.
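To make that baseline concrete, here is a minimal sketch of the fine-tune-from-a-common-initialization recipe in PyTorch. This is an illustration of the general idea, not code from MAML, iMAML or this paper; the function name, step count and learning rate are all made up for the example.

import copy
import torch
import torch.nn.functional as F

def adapt_to_task(meta_init, support_x, support_y, steps=5, lr=1e-2):
    # every task starts from a copy of the same meta-learned initialization...
    model = copy.deepcopy(meta_init)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    # ...and is fine-tuned for a few gradient steps on the support set
    for _ in range(steps):
        loss = F.cross_entropy(model(support_x), support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # the adapted model has exactly the same architecture as meta_init,
    # just with task-specific weights
    return model

Note how the adapted model is necessarily the same size as the meta-learned one; that is exactly the tension discussed next.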
So there's no common classifier, it can be personalized. And also, this then needs to run on your mobile phone, if that's the case, and then you don't want a giant model. So we want a lean model. However, if you look at the model in the middle right here, like this one, of course, this needs to be big. It needs to cover all of the different tasks that could be, and then some more, right? It needs to train on a distribution of tasks to be able to classify tasks that it hasn't even seen before. So that one needs to be giant, ideally as big as it can get, right, to absorb all the information. So there you have the dichotomy, and the weakness of the approach of having the same model being fine tuned down the road. And that's why the hyper transformer does a different thing. The hyper transformer says, well, I have a big model right here, and that model will produce the weights of the small model. So we won't fine tune anything; we will simply forward propagate the task through the model, and then that model will spit out the weights. And we're going to do it in a kind of a smart way, because I believe this has been tried before. I think even I have tried it before, and it usually doesn't work, and there are particular reasons why it doesn't work. Among other things, neural networks are quite bad at hitting exact numbers; they're good at classifying, but when it comes to regressing on numbers, they're quite bad. Also, there are errors that build up and so on. We'll get into that. However, what I said before: the framing of the task. Now, few shot learning can be characterized in a few different ways. Sometimes it is also said, well, we have a big data set available, right, a big data set like ImageNet or so on, and we use that to pre train the big model right here, and we use that to sort of prepare the model for few shot learning. Here, that's not particularly the case; I'm sure you could somehow get it in there, but in this particular thing, the model needs to be able, it's a transformer, it needs to be able to take all of these samples into its input, so into its context window. And therefore, it's almost like the model is limited to an upper bound on the number of data points that it can input. So the framing of the task itself, like, few shot learning means you have these tasks, and every task has few samples and so on, is differentiated from the framing where few shot or meta learning means that you want to get a big data set, and then you want to fine tune it on many small data sets. That distinction is a smart one if you write a research paper, right? It is, if you say, well, we're actually in this situation, and here, the model makes perfect sense, right? Here, it would be more difficult. I think just a lesson for people who write research papers is that the framing of the problem is like half the battle. So how does this model actually produce weights? This is a schematic overview of the hyper transformer method. The hyper transformer itself, you can see right here, or not even that: the hyper transformer itself is going to be this box right here, or this box right here, respectively, that produces the weights of neural networks. The weights of the neural networks that are produced are these things right here. So what's all this other stuff? Well, the hyper transformer needs some information to produce actual weights. Remember, what we're going to do is we're going to take a set of what they call support samples. So this is the data set.
This is the entire data set. In this schematic we have three data points; usually, as I said, it's maybe a couple of dozen. So these are the x's and their corresponding labels. They call them c, for class labels; we'll call them y. These are data points and labels, and remember, you might or might not have seen the classes before; that's up to the task at hand. So what we're going to do is feed the HyperTransformer with the data. We say: dear HyperTransformer, this is the entire data set, please give us weights. Now the question is, how do we feed a data set to a transformer? They have various ways of doing that, and they want to provide the most accurate information possible to the transformer. The first thing you see right here is a feature extractor. It takes in each data point individually and outputs features for it, which makes sense: the transformer can't, for example, read images out of the box, so we need some sort of feature extraction pipeline. This is going to be something like a convolutional neural network with a few layers that serves as a feature extractor; it can be trained end to end, or it can be pre-trained. What's important is that we end up with a vector for each data point, which can then be fed to the transformer the way you would feed a token embedding vector in NLP. The other thing is, and this is not super important in the first layer, we also feed the hidden activations of the current layer. I want to leave this aside for now, because in the first layer there's not much of a distinction, but it's going to be important in all the following layers. And then we also feed an embedding of the class label right here. They draw the class label directly, but it's actually an embedding of the class label that is fed to the transformer. With all of this information, the transformer sees the entire data set it's supposed to classify, and it outputs the weights of the convolutional neural network. Now, you see right here, it's more complicated than just outputting the weights of the entire ConvNet at once. What we could do is say: I have a ConvNet with a bunch of layers, I put my data into the transformer, and the transformer just outputs all the weights at the same time: bam, here are all the weights. This would be very bad. Well, I guess I don't know for sure, but in my experience it wouldn't work, because the errors would accumulate. The transformer would need to guess, from the initial embeddings alone, what all the weights are. Essentially, internally, it would have to model the generated network inside of itself, guess what the representations at each layer are going to be, and produce the weights for the following layers accordingly. If you make a small error right here, that error will accumulate through the layers. So it is quite a bad idea to produce all the weights at the same time.
Instead, the HyperTransformer produces the first layer's weights first, then takes the data points and propagates them through the weights it has just produced, observes the hidden activations after that layer, and then considers these hidden activations when producing the second layer's weights. This is all one big computational graph; you can actually model it in TensorFlow or PyTorch, and in the interview we go a little bit into whether that's feasible for larger models and whatnot. But that's what it does. So it first produces the weights of the first layer right here, then it forward-props the model. This F right here is the resulting ConvNet: you take the weights of the ConvNet, you fuse them with the architecture, and that's going to be generated layer number one. You take the data points, you feed them through the generated layer, and you get the activations right here. Those activations then go through what it calls an activation feature extractor. The hidden activations, if it's a ConvNet, are going to be some sort of width-by-height-by-channel tensor, so again you need a feature extractor. Essentially, you're going to feed the hidden activations to the transformer again, along with the original data. So you say: here's the original data, here are the hidden activations at the layer I'm trying to produce the weights for right now, and, again, here are the class labels. That is the totality of the information the transformer has available at every layer: the original data, the hidden embeddings of the current layer after the last layer, and the class labels. From that it's supposed to produce the next layer's weights. As I said, the computational graph is quite enormous, because you produce these weights right here and then you forward-prop through these weights, so any change you make to the weights changes everything after them. But Andrey told me that this is quite possible to do with current deep learning frameworks, which is a cool thing. Imagine you had to do this by hand, like old papers where they wrote down the gradients by hand. So this is the general model. What's also possible, and what they do, is to say: we don't technically need to produce all the weights of the CNN. We can use the HyperTransformer to produce just the last layer's weights, or the last two layers' weights, and still train, for example, the earlier layers right here with backprop. So what happens during training? This thing right here is one task, one data point essentially, if you think from a meta-learning perspective. So I feed this one task through the whole architecture. At the end, I feed the data, or these hidden activations, through, I get the labels of the data points, and then I use backpropagation to train all of this: the HyperTransformer's parameters, and possibly also the feature extractors' parameters here and here. That is one step.
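Here is a runnable toy version of this layer-by-layer loop, with the transformer replaced by a trivial stand-in. Everything here, names and all, is my own sketch, not the paper's actual API:

import torch

class TinyWeightGenerator(torch.nn.Module):
    # Stand-in for the HyperTransformer: maps a pooled task summary
    # to one layer's weight matrix.
    def __init__(self, summary_dim, in_dim, out_dim):
        super().__init__()
        self.net = torch.nn.Linear(summary_dim, in_dim * out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, summary):
        return self.net(summary).view(self.out_dim, self.in_dim)

def generate_layers(generators, support_x, support_y, num_classes):
    acts = support_x  # (n, d) support features, input to the first layer
    labels = torch.nn.functional.one_hot(support_y, num_classes).float()
    weights = []
    for gen in generators:  # one generator per target layer
        # the generator sees the current activations plus the labels
        summary = torch.cat([acts, labels], dim=-1).mean(dim=0)
        W = gen(summary)
        weights.append(W)
        # forward-prop the support set through the layer we just made,
        # so the next step can condition on the fresh activations
        acts = torch.relu(acts @ W.t())
    return weights

# usage: an 8-d input, one 16-unit hidden layer, 2 classes
gens = [TinyWeightGenerator(8 + 2, 8, 16), TinyWeightGenerator(16 + 2, 16, 2)]
ws = generate_layers(gens, torch.randn(5, 8), torch.tensor([0, 1, 0, 1, 1]), 2)

Training would then backprop a classification loss on the generated network through this entire loop, into the generators (and, in the real model, into the feature extractors as well).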
And if those things only produce, let's say, the last two layers' weights, I can still backpropagate, because the backpropagation path runs like this, and then like this, and so on. So I can also use backpropagation to train the first two layers. The first two layers will then essentially become a common feature extractor, like we talked about at the beginning when we spoke about iMAML: they become shared among tasks, and it is just the last layers that are produced task-specifically. They do find in the experiments that for small models, like if the CNN is small, it pays off to produce more of the layers, also the convolutional filters. If the CNN is large, however, they can get away with producing just the last layer, the classification layer. I don't know whether that's a limitation of the implementation, of the method itself (errors can accumulate and so on), or of the data sets. But also, as I said, the models you actually deploy should be small anyway, so you don't even want to build super-large generated models. So that is the overview of the model. There is this other graphic right here, where they show how exactly the HyperTransformer does the things it does. What it gets as input are these things: the class label embeddings concatenated with the sample embeddings. That is one token of input. They do praise the transformer here because it's invariant to positions: if you don't provide positional encodings, any permutation of the input generates the same output, just permuted accordingly. So one token is an embedding of a sample together with an embedding of its class label. The transformer can also take what they call no-label embeddings, which lets them do semi-supervised learning: sometimes you have a bunch of labeled data and then a bunch more data that is not labeled. So they provide a pseudo-embedding, like an additional class that essentially says "this one's unlabeled". They do find that they can incorporate unlabeled data, but only up to a point: if it's too much, it gets too noisy. And then these things right here are essentially requests to the transformer: embeddings for the weights that it should produce. So this one right here might say: I want to produce the layer-one weights of the convolutional filter, and of that convolutional filter I want to generate slice number one. And then this one right here will be slice number one of the convolutional filter of layer one. So these, what they call weight embeddings, aren't really embeddings of the weights themselves; they're more like weight address embeddings. If you had to name the variables in your code, these are essentially the variable names. It's like the CLS token: you request something from the transformer by saying, here is a token, and on the output of that token I expect you to give me a particular result. So that is how the HyperTransformer takes in data and outputs data. Here are the generated weight slices. They can directly be the weights, or they can be some sort of embedding for the weights if you have to produce a lot of them.
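A rough sketch of how such an input sequence might be assembled: sample tokens are feature-plus-label-embedding concatenations, a reserved label index marks unlabeled samples, and one learned address token is appended per requested weight slice. The exact dimensions and the scheme itself are my assumptions, not taken from the paper:

import torch

n, d, num_classes, num_slices, lab_dim = 4, 8, 2, 3, 4
feats = torch.randn(n, d)                      # extractor output per image
labels = torch.tensor([0, 1, 1, num_classes])  # last index means "no label"
label_emb = torch.nn.Embedding(num_classes + 1, lab_dim)
weight_emb = torch.nn.Embedding(num_slices, d + lab_dim)  # the address tokens

sample_tokens = torch.cat([feats, label_emb(labels)], dim=-1)  # (n, d+lab_dim)
weight_tokens = weight_emb(torch.arange(num_slices))           # (k, d+lab_dim)
tokens = torch.cat([sample_tokens, weight_tokens], dim=0)      # no positions

The transformer's outputs at the weight-token positions are then read off as the weight slices, or as embeddings of them.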
So you can have another model that scales up whatever is output here to the actual shape of the weights. Many things are possible right here. I don't want to go too much into the results, because one big result is, as I said: if the models produce all of the weights, the logits layer and the conv layers, this only really appears to help if the generated model is small. These here would be the smaller models, which do outperform the variants where you learn the conv layers jointly and only produce the logits layer with the HyperTransformer. For the bigger models, this doesn't seem to make that much of a difference anymore. Other than that, I don't want to go too far into the results. However, the last thing I want to explain right here is their chapter on the reasoning behind the self-attention mechanism. They argue that self-attention has special properties that make it very apt at producing weights for a classifier, and specifically they go into why it could be appropriate for producing the weights of a classification layer. So I want to make clear what's happening right here. They say that, in concept, a single layer of self-attention can produce a classifier over the data samples that we give it. This is what the transformer has to do: it takes in the data points and, at the last layer at least, has to produce a classifier for those data points. So the question is, how does it do that? There's no SGD involved, no training involved. You could fine-tune, but in the forward prop through the transformer, there's no training. So how can a self-attention mechanism conceivably produce a classifier over data? They show that even a one-layer self-attention mechanism can conceivably produce a simple classifier. How does it do that? Let's think of what a classifier is. A classifier is essentially a weight matrix. Let's make a coordinate system, and let's say this is the embedding space of the last layer. Say we have four different classes: one, two, three, four. That means the weight matrix is going to be d by four; it has one column for each of the classes. And how does it classify? It runs every data point x through the weight matrix: x multiplied by the weight matrix gives me four numbers, the inner products of x with each of the four columns. If x is, for example, here, the biggest number is going to be the one with the largest dot product, so this one right here, and that's going to be my class label. The numbers that come out are usually called logits, but they're essentially similarities to the columns of the weight matrix of the last layer. So, can we produce this weight matrix?
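Just to pin down that last-layer classification step, a toy example with random numbers:

import torch

d, num_classes = 8, 4
W = torch.randn(d, num_classes)  # one column per class
x = torch.randn(d)               # last-layer embedding of a data point
logits = x @ W                   # four inner products, one per column
pred = logits.argmax()           # the class whose column aligns best with x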
Can the self-attention mechanism produce the purple weight matrix, such that at least the training data points are classified correctly? In order to do that, note that the weight matrix can essentially be constructed like this: y_j here is a one-hot encoding of the class label, and e_j is some embedding of the data point. And you see, if we work this out, y_j is only one at the position of the data point's class, so this term addresses only the column of the weight matrix that the data point falls into, and the sum sorts all the data points into their respective columns and, within each column, sums them up. So if you apply this formula, the data points in class one are summed (or averaged) together and put into column one of the weight matrix, and the same for column two, and so on. Concretely, that would actually result in a decent classifier, because each column would just be the mean embedding of all the data points belonging to that class, which is a reasonable classifier in first approximation. The question is: can the self-attention mechanism produce something like this? So let's ask ourselves, and let's draw this again. We have x1, y1, x2, y2, x3, y3. If you remember, the self-attention mechanism calculates queries, keys, and values for each of the tokens, does a softmax over the outer product of the queries and keys, and then multiplies the result by the values. So this entire thing needs to turn out to be a W like the one above: it needs to address all the data points of the same class and then average them. And they make a case in this paragraph that this can be done. If we take the data points, and let's just say the data points themselves are already embedded, so x2 is the embedding of itself, then the data points are the values. The labels are the keys, which means that if two data points have the same label, they expose the same key. Now, all we need to do is make sure that the queries (over here we have the address of weight one and the address of weight two) that the weight tokens produce match the keys these data points expose. Then this all works out. So weight one says: I am the weight that is going to be the column for class one, so I expose as my query the embedding for class one, whereas these data points say: I expose as my key whatever the embedding of my class label is.
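Here is that construction as a small runnable sketch: samples expose their class embedding as keys and themselves as values, weight token c queries with the class-c embedding, and a sharp softmax then approximately recovers the per-class mean, which is exactly the W from the formula above. The orthogonal label embeddings and the temperature trick are my simplifications:

import torch

n, d, num_classes = 6, 8, 2
embs = torch.randn(n, d)               # values: the data points themselves
y = torch.tensor([0, 0, 0, 1, 1, 1])
class_emb = torch.eye(num_classes, d)  # orthogonal label embeddings

queries = class_emb                    # weight token c asks for class c
keys = class_emb[y]                    # each sample exposes its label's embedding
attn = (queries @ keys.t() * 50).softmax(dim=-1)  # sharp softmax: hard class match
W_attn = attn @ embs                   # roughly the per-class mean embedding

W_ref = torch.stack([embs[y == c].mean(0) for c in range(num_classes)])
print(torch.allclose(W_attn, W_ref, atol=1e-3))  # True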
And now you can see that weight one, given that it represents class one, will aggregate all of the data points, but only those that expose the key of class one. If y2 equals c1, the query and the key match, and they aggregate together; the values are the data points themselves. So this results, for each of the weights, in an average of all the data points that correspond to its particular class label. That's exactly how we built W above. Notice that it's not important what the queries of the data point tokens are, and it's also not important what the keys and values of the weight tokens are, as long as they don't conflict with these queries right here. It's just a proof of concept that this could happen. Another proof of concept they do in a similar vein concerns the unlabeled samples. Remember, we said we can also do semi-supervised learning right here: we have a data point with no label available for it. What can be done? They show that with a two-layer self-attention mechanism you can set things up such that in the first layer the labels are propagated, and then in the second layer you can apply the same construction as above. So how do we propagate labels? Again, think of data points x1, y1, x2, y2, and now x3 with an unknown label. Now we have to rethink a bit how to structure the self-attention mechanism such that the label is propagated to this data point in the next layer. Let's say every token right here exposes its embedding, its vector, as a query, and also as a key; and these two labeled ones additionally expose the embedding of their class as values. So now we match up keys and queries. Let's say these two data points here are very similar: their keys and queries are going to match. And specifically, since this here is the query, the value of that data point, its class embedding, is going to be aggregated into this token, whereas these others might not match as much, so their values aren't aggregated. So you can see that this is essentially a nearest-neighbor classifier: this token looks at which of the other data points are similar to itself, and from all of those that are similar, it averages the class label embedding into itself. All you then need is a residual connection to copy over the data, and some orthogonality, and you have aggregated class labels from all the nearest neighbors among the other data points. That's the first layer. And then in the second layer, every data point has a class embedding, and you can just use the construction from before to build a classifier. So this is a proof of concept that with two layers it is actually possible to label unlabeled data in a nearest-neighbor fashion, and then build a rudimentary classifier, an average-embedding classifier, over that data. I hope that made a little bit of sense.
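And the first of those two layers, the label-propagation step, could look like this sketch (again my own simplification, with the masking and the residual spelled out explicitly):

import torch

embs = torch.tensor([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9], [0.95, 0.05]])
class_embs = torch.tensor([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 0.0]])
is_labeled = torch.tensor([True, True, True, True, False])  # last sample: no label

scores = embs @ embs.t() / 0.1                # similarity between samples
scores[:, ~is_labeled] = float("-inf")        # only labeled samples offer keys
attn = scores.softmax(dim=-1)
borrowed = attn @ class_embs                  # soft nearest-neighbor label vote
tokens = torch.cat([embs, borrowed], dim=-1)  # residual keeps the data around
print(borrowed[-1])                           # close to [1, 0]: label propagated

A second layer can then run the per-class averaging construction from before over all samples, labeled or freshly pseudo-labeled.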
There are some supporting experiments in the appendix, which we also talk about in the interview, that actually show this: in the first layer, the unlabeled examples attend to the labeled examples a lot, and the weights attend only to the labeled examples; you can see they don't attend to the unlabeled examples at all. In layer two, however, the weights, having already attended to the labeled examples, now also attend to the unlabeled examples, which means that the unlabeled examples have gained some information in layer two. As I said, we talk about this more in the interview. What you're going to hear in the interview is also, again, a little bit of a different perspective on the model. We go through the experiments, and through some criticisms that I have about the model itself. And yeah, I realize this was a bit of a longer explanation than usual. I'm trying these things out. Again, let me know what you prefer: short introductions to the paper followed by an interview, or long explanations followed by a short or long interview. Do you want to pick and choose from the video, and so on? I need to know, so please tell me. And as always, if you like this, leave a like, comment, and have fun. Welcome, everyone. Today I have with me Andrey Zhmoginov. Is that approximately correct, Andrey? Approximately correct. Yeah, thanks for having me. Thank you. So you're one of the authors of the HyperTransformer paper, and this is a pretty cool paper, I found. I don't want to hype it up too much, but I have once tried to publish a paper using one model to produce the weights of another model, and it worked, like, barely. So when I saw a paper that actually does it in practice, I was stoked. I was like, yay, this is pretty cool. So, first of all, welcome, and congrats on this paper. I liked it. If we look at the high-level idea of the paper, it is: you essentially use one neural network to generate weights for another neural network, and there are many settings that can be applied to. Do you maybe want to convey the high-level idea of what the paper is about? Yeah, so we basically started exactly with that question: can we even train a model that generates all of the weights for another model? But unlike the hypernetwork paper, which we were inspired by, in this case we really wanted to modulate the model that we produce on the task that it's supposed to solve. So basically, we wanted to take a description of a task that the model is supposed to solve and, in a single forward pass, convert it into the weights of a fully trained model. And not even a subset of weights: we wanted to take a big bite and generate all of the weights of the model. And the question from the very beginning was: is it even going to work? Will we get results comparable to what you might get by training the model to start with? And in principle, the applications: we consider few-shot learning as an application, but it could really be, for example, personalization.
And I guess one of the main ideas of this paper, what we try to convey, is that in many cases when people discuss few-shot learning, or when they discuss personalization, they think of models as being as large as they need to be to serve all of the potential users, all of the potential needs. And here we ask the question: what if the computational budget is actually limited, and you want to produce a model that is very, very finely tuned to the specific needs of a specific user? So basically, we are trying to separate the complexity of the small model that is supposed to solve a task for each individual user from the complexity of the big model that's supposed to know everything about the world and everything about how to generate these small models. That was one of the main ideas, that we can separate them. And we were hoping that we would be able to capture the variety of the small models, and how they depend on the task, inside this big transformer-based model, essentially. The idea seems so clear when you think about it, but at least to me it was far away until I saw your paper, and then I was like, oh yeah, of course. Because what we've been doing in the past few years, and this maybe started with something like BERT, which made it really popular, is to pre-train a really big model and then just fine-tune it on your little data. And all of these meta-learning or few-shot learning papers would do the same thing: they would pre-train a big model, and then, for example, MAML would train that same model on the small data. Essentially, what they were trying to do was find a good initialization to then continue training from. But essentially the same model was tasked with two different things: ultimately solving all of these small tasks you throw at it, and at the same time finding a good compromise between all the models. And separating this makes total sense. You say: one network is responsible for integrating all of these tasks, and the other, smaller network that is produced is responsible for solving the individual tasks. This has lots of applications. I think you mention it in the paper: personalization is probably a big one. If I just have my 20, 30 photos in my photo library, I could have a small model that is just made for me, derived by this big model. So it seems obvious in hindsight, but to me it was not at the forefront of my mind. Now, there are legitimate concerns when you say we want one network to just output the weights of another network. Specifically, we know that neural networks are really good at classifying stuff, at outputting ones or zeros or sorting things into buckets, but they're not so good at outputting exact numbers. They're not, to the point where a lot of reinforcement learning papers, for example, would rather bucket the values they're trying to predict and then predict the class of the bucket rather than an actual number. So you must have had these concerns as well. How exactly does your model predict the weights of another model? Yeah, that was definitely a concern.
And actually, as it turned out, for convolutional models solving few-shot learning tasks, that doesn't end up being a huge issue, partly because, especially for very large models, you don't really need to fine-tune all of the weights very carefully. If your embedding model is already good enough, then in principle all you need to do is look at the final embeddings produced for different images and, based on that, figure out how you need to assign labels to these embeddings. So in practice, as we've seen, all that matters, especially for very large models that can have a very large embedding inside, is to just generate the final layer. But once you get into the land of smaller models, it's still important to generate all of the layers. And one of the approaches we use, what we have to do carefully, is, instead of generating all layers at once from the inputs... So the input in this case, just to clarify, is, in a few-shot learning scenario, a support set that basically tells you: these are the images that the final network has to classify as a cat, for example, and these are the images that the final network should classify as a dog. And then we hope that the generated model is able to classify both all cats as cats and all dogs as dogs. So our model sees a support set, a sufficiently small batch of images, and instead of immediately generating layers one, two, three, four, we decided that we needed to generate them layer by layer, starting from the lowest one. And the motivation for this is really: if you imagine that you modify a very early layer, then all of the activations throughout the network will be modified. So if you modify the first layer, you have to then adjust all of the rest, and the differences will propagate and potentially amplify through the network. So you have to be very aware of what the previous layer generates to actually generate the following layer. And I guess that was one of the ideas of how we could stabilize that layer-by-layer generation process. So is it fair to say that this, what you call the support set, is essentially the data set of the few-shot task? Like, here are ten images of dogs and cats with corresponding labels, which, in this diagram of your architecture in general, is the support set with the samples and the labels. And then you make use of lots of signals throughout the network, such that, as you said, you first build the first layer and then, based on that, build the second layer. So if we quickly walk through it: one core component is this image feature extractor, a trained ConvNet, let's say, that is applied to each image individually and just extracts a feature map. And this feature map is then given to every single computation layer in your set, right? So your main model is this transformer thing here, which takes in, as you can see, these embeddings of the support set. It takes in the labels, obviously; it needs to know what it should classify and how. And it takes in this thing right here, which, I think, in the first layer is kind of the same as the image embeddings. It's another embedding, right? Sort of a similar one. Yes, it's another embedding, a smaller one, but it's basically produced from the same image essentially.
I guess we'll come to that; in subsequent layers this will actually be different. So what we do is: the transformer here produces the weights of the first layer, and as you said, we don't just produce the first, second, and third layer in one batch. What seems to be really important is that we now actually forward-propagate (I need a different color here) the support set through the weights we've just generated, and that gives us the next layer's representation. That can then be used again by the transformer to generate the next layer's weights, along with the embeddings of the original images and the labels, and so on. So this building up to the end seems to be important, and also re-feeding the information through your own generation. Is it fair to say that it's a little bit like an autoregressive language model, where I feed in whatever I output, again and again? Yeah, exactly. In some version of the paper we even wrote it this way, basically. It's kind of a progressive process, in the sense that you generate the following layer's weights conditioned on the weights that you already generated, essentially. And again, the motivation for this is: if you imagine yourself having the original images, and you have to generate weights for convolutional layer number three, you may have trouble if you just look at the images themselves. But if you look at the activations that the previous layer gives you, with the corresponding labels, you can then look at small patches of those activations and figure out: oh look, there is this feature that is seen in all of the images labeled as one, so perhaps I can have a filter specifically looking for this in the activations, because that's what the layer is going to operate on. And that's basically why we have to do it this way. When we tried to do it all at once, the model was significantly less stable. Yeah, that is what one would expect. So I think the other trick here is that at every step where you generate the weights of a new layer, you have all the information: what's the data set I'm trying to classify, and how does that data set look at the input to that layer? And that helps tremendously to then produce the weights. This looks... it's two layers right here, and it already looks quite complicated, right? Here is an entire transformer that generates a set of weights, and then I forward-propagate a signal through the weights that were generated, using that signal as an input. Then there is another transformer, and then I backprop through all of this. I'm imagining the computation graph here gets pretty iffy quite fast. What are the concerns with stability here, and how big does the computational graph get? Is this a problem? So in practice, it was not a big problem. But you're right that it grows faster than a conventional CNN would. What you care about here, I assume, is the longest path in this graph, and I assume it will still be proportional to the number of layers. But it is true that when you generate the final layer, you essentially have to backpropagate through all of the transformers that you have: if you have multiple layers in each transformer, you have to backpropagate through all of them. But in practice, this thing was surprisingly stable to train, actually.
That was one of the things that surprised me. The only issue, I think, is that when we looked at this, we weren't really able to train it with anything other than SGD, not that we spent a lot of time on this. And one assumption for why that could at least partially be the case is how we train it: we train it basically the way you would train a usual model where you give input images and produce labels, except here we give tasks, which are support sets, and we produce weights. But since we have memory limitations, we basically do one task per batch. So it's kind of a single-sample batch, if you will, in the sense that it's just one support set. And so maybe that's why other optimization techniques weren't exactly super stable, but with SGD it trained absolutely fine. And we discovered, I think to some degree, that one of the advantages we claim this method might have is that it actually might be more stable than MAML-based methods, for example, because in MAML-like methods you really have to backpropagate through potentially many unrolls if you want to apply several SGD updates. Here we really propagate through a single model in that sense, although to some degree it's still a multi-layer model. And you make a particular case that transformers are a good choice of model for this particular task. Why are transformers so good? They have some nice properties. One of the trivial ones is that in the usual design, when you don't use any kind of masking or positional embeddings, the output of the transformer is equivariant to the inputs. In a sense, if you change the order of input tokens, the output tokens will change the same way. And that's what we want for a model like this, because the order of samples in the support set, the order in which you show kittens, doesn't really matter. All that matters is that you show them all. So that was one nice property: it can handle a potentially varying number of samples, and it doesn't matter what order they come in. But another consideration is that there are prior papers that looked at attention-based methods applied specifically to generating the last layer, the logits layer, of the model. And we make the claim that these attention-based mechanisms are useful specifically for generating the final logits layer. And I guess we make a distinction. We say that, first of all, when you are in a supervised regime and you have a label for every sample, you might naively want to generate the last layer by essentially averaging the embeddings for each class, and that average becomes a row in your final logits layer. Because what you do when a new embedding arrives, and you don't know its label yet, is take a dot product with all of the embeddings that you know correspond to certain classes, and basically, the higher this dot product is, the more aligned the vectors are, the more likely you'll say: oh yeah, that's probably that class. And so one of the approaches to generating the logits layer is basically to average the embeddings for each class: you take the embeddings for the images of one class, you average them, and that's your row in the logits weight matrix that you produce.
But if you want to just average embeddings, that can be done with a simple attention mechanism: you take the output that you want to produce, that row, and you make it attend to the embeddings of all of the images labeled as class one. And when you attend to only those, you only need, in the end, to average their corresponding values, which are the embeddings, and you end up calculating the average of the embeddings of all of the cats. And that's what you want. So that was the very simple mechanism you could use, which can also be implemented as a basic attention-based model. So you make specific arguments; this is the reasoning behind the self-attention mechanism here. You show a diagram that goes a little bit into how exactly you build this up. You have your support set as input tokens, along with their labels, or the class embeddings, let's say. You also have the opportunity to put in data without labels, which I guess is quite often available in these tasks. So let's again assume I have my photo library: I might label some of the photos, maybe with hashtags, or I share them in some album, but most of the photos will have no label. So you also have the opportunity here to just input them as well and say: here is some data. And I think a lot of models benefit from extra data just to know what the data manifold looks like. So that's the sense here. Although in your experiments you also show you have to be careful how many of those you introduce, in comparison. But in essence, you can take this in, and then for each weight that you want to output, you have a special token. This is equivalent to, let's say, the CLS token in a BERT model: when I want to classify something, I have one token per output that I want to produce. These have different embeddings, like addresses of the weights that I want to output. And this whole thing is then just a transformer. You already said, with respect to the last layer, that this is implementable, but you also make the case that a two-layer transformer can implement something like a nearest-neighbor algorithm. Do you want to briefly explain the idea behind how a two-layer transformer implements nearest neighbors? Full disclosure: we never really tried to implement it in code, but it's a simple construction that hopefully is correct. The idea was that when you have labeled and unlabeled samples, you can imagine you have a bunch of embeddings that you know the label of, like you know these are cats, but you also have a bunch of unlabeled embeddings everywhere. So naively, what you might want to do is look at all unlabeled embeddings and notice that some of them are really close to the embeddings that you already know are cats. So you say: okay, you know what, I will label them as cats, because they are suspiciously close. And when I have to compute the final clusters, basically, I will just average over both the labeled samples and those that I just labeled, because I'm pretty sure they are actually cats. So that's a reasonable way to do this. And if you have a self-attention-based mechanism, you can do it in two steps.
The first step is really when you propagate labels from labeled samples to these nearby unlabeled samples. And if you remember how the self-attention mechanism works, you can make the closeness be based on the dot product of the sample embeddings, and you can make unlabeled samples attend to nearby labeled samples. When I'm an unlabeled sample and I attend to all nearby labeled samples, I can basically look at them and pool their class information into my personal embedding. So even though my class embedding before said "I have no idea what I am", as soon as I see several neighbors in the embedding space, I can just borrow their class embeddings and this way be fairly certain that I actually belong to that cat category. So that's the idea of what the first layer should do. And then, after this is done, the second layer looks specifically at the traces of this label, whether it was originally given to the sample or propagated to the sample. And as soon as it observes that all these samples are marked as a cat, or carry kind of, you know, the smell of a cat, having borrowed that cat reference, it can take all of them, average their embeddings, and that will be the final centroid of the cluster I'm producing. And funnily enough, we didn't really look into what exactly the transformer does, because it's really difficult. But if you just look at the attention maps of the two layers, it turns out to be suspiciously close to this mechanism in the actual trained model. Because we see that, exactly as predicted, in the very first layer, unlabeled samples attend to labeled samples, and at the same time the weights get information from labeled samples; but at the second layer, the weights actually get something from these unlabeled samples that were just updated. So it does look like this mechanism, or at least a version of it, is actually what's happening. And you do, in the appendix, a lot of investigations into these various attention maps. Is there one you'd like to particularly highlight? Yeah, it's this one, basically. I don't remember exactly how it works, but I think in the first transformer layer, it's very awkward to describe, basically what happens is that the top rows are the ones that will generate weights. So if you look at, for example, the very top row, this row is telling you, when the weights are updated, what they are looking at. Yeah. In this case, you can see that they are looking at the columns corresponding to labeled samples, which means these weights borrow something from the labeled samples. But at the same time, if you look below, at the bottom of this plot, there are unlabeled samples, and they also attend to the labeled samples. So basically, after this first layer, both the weights are updated and the unlabeled samples are updated somehow with information from the labeled samples. And then at the second layer... It's interesting that the weights don't care at all about the unlabeled samples; they learn to ignore them. That's pretty interesting. Yeah, and that's exactly what you would want, because at this point these unlabeled samples really give you not that much information about what you need to generate.
And that's actually maybe one of the reasons why, when you have too many of these samples, the model becomes overwhelmed and you have to introduce them carefully. You can't just throw hundreds of unlabeled samples at this model. And then at the second layer, basically, at this point you don't care how the labeled or unlabeled samples are modified, because you don't take that information into account after the second layer. So all you care about in transformer layer two is the top rows; it's again the weights. And here you can see that the top rows, at the second layer, attend to the unlabeled samples but almost fully neglect the labeled samples. Which is also actually quite remarkable, that there is this divide. And in our opinion, that basically shows there is this flow of information from labeled samples to unlabeled ones, and then from the unlabeled ones, at the final layer, to the weights. Yeah. It looks like the weights don't even care about the labeled samples anymore, but that is probably because they've already gotten a lot of information out of these labeled samples in layer one, and now they're also aggregating across the unlabeled samples. Do you think there might be some sort of... in these autoregressive models, where they have causal attention and so on, do you think there might be some smart attention mask that you could implement that would encourage the algorithm to behave better? I'm not exactly sure what I'm looking for, but do you think there could be some smart biases built into the attention masks here, so that we actually make the model pay attention to the things we want it to pay attention to? Yeah, I think that's actually a wonderful idea. As a matter of fact, what we do right now is we say: oh, we think that's what's happening. And then we look at the attention masks and we see that, yes, that's mostly what's happening. But you're absolutely right that if we were certain we wanted to restrict the flow of information in a particular way, we could very well manipulate the masking of each self-attention layer and this way very carefully restrict how the computation should actually be performed. Yeah, you're right, that's actually a very interesting point. I imagine that could be applied to a bunch of other applications, like what you just said: if you know in advance how the information should flow, you can implement this by using proper attention masks. You also have a bunch of other visualizations right here. Do you want to maybe tell us a little bit about them, because I just thought they looked kind of funky. What do they represent? These are the weights of the actual CNN layers. Yeah. To be honest, it's really difficult to interpret them, and I would rather not go into too much detail, because we really have a hard time understanding what they mean. But to some degree, one thing to observe is that, first of all, we discussed several ways of generating weights, and it all ends up being about how you take the outputs produced by the transformer and combine them into single convolutional filters. If you think about this, there are multiple options. You can, for example, take the outputs and assume that they are the different channels of a kernel-by-kernel-by-input-channel thing. Or you can assume that they are k-squared different slices that you combine, each with a dimension of input channels by output channels.
And then you reshape them into k by k by input channels by output channels. And depending on how you choose to do that, the model will have different inductive biases, because a lazy transformer model, for example, probably wouldn't want to generate very different tokens as output. If it's maybe poorly trained, it would generate very similar outputs. And so if you assume that these outputs correspond to spatial dimensions, you will see much smoother produced weights, because essentially you treat every spatial coordinate as a different produced token, and they are all very, very similar. But if you do it channel-wise, then the k-by-k kernel can look completely random; there doesn't have to be any order. It can look like minus five, plus five, minus eleven, plus twelve, and so the kernels will look much more random visually. And so I think we kind of observe that. But we were also curious to see whether the generated kernels vary significantly for different support sets and tasks. And I guess we see that they vary, but we cannot interpret this. We hoped to get slightly more interpretable results. But in that regard, I think what matters is that when we generate small models, we can measure the difference in training and test accuracies between generating only the final layer and generating all of the layers, including the convolutional ones. And we see that for teeny-tiny models, for especially small ones, it really starts to matter that you generate all of the layers instead of only the final one. And so, in the future, if we really want to understand what this model does, we really have to look at the smaller models, and then the variation of kernels with respect to different support sets will probably be more telling about what's happening. So yeah, you find that with the small models you fare better generating all the weights, and with the larger models the strategy is essentially to only train the model to produce the last layer, and then use regular backprop through that generated layer to learn the lower layers. And that might also be an effect of the method not being quite figured out yet. It's a complicated method; it seems maybe a bit unstable, especially if you go to a larger model, and the errors in a larger model accumulate over the layers. You have many weights; if one is kind of off, what are you going to do? So yeah, it's an exciting future. Have you thought about... so you generate this output, essentially, this weight token at the end generates some sort of an embedding. I'm gonna scroll for a whole bunch of time right here. No, I think I copied the paper twice; I'm sorry. So for each of these weight tokens, you're going to generate some sort of an output which you can interpret directly. Is it also possible to interpret this output as, let's say, the embedding of a convolutional kernel? Such that there would be another model, like a GAN or a VQ-VAE or something like this, where you essentially generate into the embedding space of that model, and then that model can be really good at producing realistic filters; it just needs to know which filter to produce. Is that something you have tried, or have in mind, or have ruled out as a possibility?
No, it's definitely something that we have in mind, because really, when we try to scale these methods, it becomes difficult when you have to generate really humongous weights. And at that point, yes, the best thing you can probably do is have a separate model that receives embeddings of the weights it needs to generate and learns to generate those weights itself. So yeah, you got it exactly right. That's basically one of the paths to scaling this to significantly larger models. We can scale this model even to ResNet architectures, but maybe to speed up training, to improve it... like you said, we don't even know for sure whether not needing to train the lower conv layers is a result of the method having more trouble there. And I definitely have some evidence that if we pre-train certain parts of the model, then it trains slightly better. So there is definitely that complication of training this thing end to end. But it's also few-shot: if you train some model on five classes having all of the images, of course it will perform significantly better, because in a few-shot setting you have only a few images per class. So what can you do? That's another source of imperfection that may result in you not having to generate the foundational layers. But also, honestly, I think the classification problem is kind of simple, in the sense that you just need to find boundaries between classes. Generative models, for example, are much, much more challenging, because you have to understand the structure of the data manifold, not just how to separate it. So if you ask me where this can become important, I think that's where it could be. So you've made several experiments on benchmark data sets. Could you maybe summarize what, in your opinion, was most striking in the experiments? What stood out the most? What's the main conclusion you pulled out of there? Yes. So I think one of the conclusions was that, yes, when we generate small models, we can potentially perform better than MAML-based methods, or methods that train a small embedding and then just generate the final layer by using, again, that dot-product method, for example: averaging embeddings, finding clusters. Because we have such a large model generating a smaller model, we have a lot more capacity to learn about the world, and when we generate a small model, we are much more informed than, say, a MAML model would be. So we definitely think that for smaller models there is an advantage in doing what we do: a significant bump in accuracy, and especially in the training accuracy. And that might matter if what you care about is basically specializing a model, assuming that the classes are seen during training. Because generalization means I train on cats and dogs but generalize to new, unseen classes, and that can be complicated. But when you know for sure that you need to specialize a model for a user, to work on some of the classes you saw during training, then what you care about is the training accuracy. And because we have such a big model, we definitely get much higher training accuracy. So that's about this: again, for smaller models, there's definitely an advantage in doing this.
When it comes to very large models, we see that when we generate just the last logits layer, we get results competitive with a lot of different methods that try to carefully design those functions and the methods that they use. So, without doing anything special, we are basically comparable, which was, again, encouraging. And the final thing that, to be honest, I personally found very exciting is that I think this has the potential to move to very, very abstract task descriptions. So in few-shot learning, your task description is essentially: look, these are several images you should label as cat, these few images you should label as dog, et cetera. But in one of our examples, we add unlabeled samples, and that improves the accuracy quite a lot. So I was very excited to see that we can get a very significant bump in model accuracy by giving it unlabeled examples. Somehow, without us telling it how to use unlabeled examples, it learned to use them. But in the future, you could also imagine using a lot of other types of data. You could provide, like you mentioned, photo metadata, hashtags, which might be sparsely related to some images; or you could have textual descriptions, for example what people are interested in, and so on and so forth. And that would be a task description from which your model learns to generate a model very well aligned with the interests of that particular person, for example. So I am personally very excited about this, and I think that the performance on the semi-supervised task, and the fact that the model learned what to do in that case, is the most interesting. Yeah, and I didn't mention another thing, which we already covered: for smaller models, you don't only care about generating the last logits layer; you seem to benefit from generating all of the conv layers as well. And it still remains to be seen whether there is a big difference versus generating something like FiLM layers. But I'm hopeful that generating, as a matter of fact, all of the layers' full weights is important. Cool. Yeah, I've looked at the results, and I was positively surprised. I mean, it's not at the level yet where we can generate, like, state-of-the-art ImageNet models, but that's not necessary. I think it's important to keep in mind that these models are supposed to be deployed somewhere where I have very little data. I just want to produce a small model for that little data, maybe for personalization. The model doesn't even have to be big, because it may run on my phone or something like this. And there are definitely also, I think, opportunities in the future to combine this thing with optimization: it's not necessarily a binary choice between "I generate the weights" and, like MAML, "I optimize from some checkpoint". You can maybe find clever ways of combining them. But I really like the approach of the paper right here. Is there anything else you want to say about this general research direction? If people want to dive into this, where can they go? What can they do? What are big open questions that you're not considering researching, so, you know, people don't scoop you? That's okay. Well, I do think that we are still actually interested in this research direction.
And we think that this particular model could be scaled and could be applied to other problems as well, and that it could potentially, again, shine either in certain instances where you have a limited computational budget, or where you have complex tasks, like generative tasks. But overall, yeah, I would say that some of these ideas are not new. If somebody wants to know what people have been doing in that regard: for example, what you just mentioned, the LEO paper does something similar, where they also generate model layers, but at the same time, they also use a MAML approach, essentially. So they back propagate through the generator, essentially, in a way. So it's kind of similar to our approach joined with MAML. But there are other techniques that generate weights. And I think that the original HyperNetworks paper is really interesting, and it gave rise to a lot of interesting research. And there were recently papers on generative models that were inspired by hypernetworks. And honestly, I think that, yeah, in the future, we might see models that generate other models, and that it actually works in practice. Let's see. Yeah. So, to be honest, it's very difficult to say what else can be done. But one of the things where maybe people will scoop me, but which I'm interested in, I was just thinking about this, is that we can also generate not just weights of CNN models; we can generate policies as well, for example. And a very simple example, which is very toyish but could be interesting, is, for example: you have a robot that you build, you take a few photos of it, and you upload them to a service. And the service is basically tasked with, having several images of the robot and maybe images of the terrain that it's supposed to walk on, just generating a locomotion controller policy for it, just like that, just from images. And so I think that doing things like this might be interesting. Again, one thing to note is that model distillation, and combining these methods with training, might be very, very interesting as well, and probably very compatible with methods like this. But I think that's one direction where the future is: generating models from specifications of what needs to happen, instead of necessarily just training them from scratch. Cool. Well, in this case, Andrey, thank you so much for being with us here. This was awesome. Thank you for your insights. And I hope to see you again with a transformer that generates an even bigger transformer. Thank you very much. Yeah, thanks for inviting me. It was very interesting to discuss this paper.
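To make the "back propagate through the generator" idea concrete, here is a minimal sketch under stated assumptions: a generator network's output is used directly as the weights of a logits layer, and the downstream task loss trains the generator end to end. All sizes and names are illustrative stand-ins, not the actual architecture discussed above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical generator: maps a task embedding to the weights of a
# 5-way linear classifier over 64-dimensional features.
generator = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 5 * 64),
)

task_embedding = torch.randn(1, 64)           # stand-in for an encoded support set
W = generator(task_embedding).reshape(5, 64)  # generated logits-layer weights

query = torch.randn(8, 64)                    # query embeddings
targets = torch.randint(0, 5, (8,))           # their labels
loss = F.cross_entropy(query @ W.T, targets)
loss.backward()                               # gradients flow into the generator
```

Because the generated weights sit inside the computational graph, ordinary autograd differentiates the task loss with respect to the generator's own parameters, which is the mechanism both the hypernetwork-style and the MAML-joined approaches mentioned here rely on.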
[ { "start": 0, "end": 2.8000000000000003, "text": " Hello, today we're going to look at HyperTransformer." }, { "start": 2.8000000000000003, "end": 8.4, "text": " This is a model for few shot learning where you get new data that you haven't seen before with" }, { "start": 8.4, "end": 14.96, "text": " potentially new class labels. So this model takes in a set of data points and corresponding class" }, { "start": 14.96, "end": 20.080000000000002, "text": " labels and its output is the weights of a convolutional neural network that can then" }, { "start": 20.080000000000002, "end": 25.68, "text": " be used to classify those data points and corresponding test data points. This is very" }, { "start": 25.68, "end": 31.52, "text": " useful because it decouples the model that does the meta learning or the few shot learning. It" }, { "start": 31.52, "end": 37.76, "text": " decouples the size of that model from the size of the model that then does the actual inference on" }, { "start": 37.76, "end": 43.519999999999996, "text": " the data, which means that I can have a big model doing all the meta learning things and end up with" }, { "start": 43.519999999999996, "end": 48.879999999999995, "text": " a very, very lean ConvNet that I can deploy anywhere. It's very useful if this needs to be" }, { "start": 48.879999999999995, "end": 53.68, "text": " deployed on mobile phones. It's very useful if there are privacy considerations, federated" }, { "start": 53.68, "end": 58.32, "text": " learning, anything like this. So the HyperTransformer, it doesn't classify data itself." }, { "start": 58.32, "end": 65.44, "text": " It actually produces a model that classifies data, which is very cool in itself. So the models" }, { "start": 65.44, "end": 70.8, "text": " are quite performant by itself. They're not super good. Like they're not the best, but they're good" }, { "start": 70.8, "end": 76.32, "text": " enough. And potentially they could even be used as a starting point to then refine and do some more" }, { "start": 76.32, "end": 81.92, "text": " training. So this is what we're going to look at today. This research is by Andrei Shmoginov," }, { "start": 81.92, "end": 89.44, "text": " Mark Sandler and Mark Vladimirov. And I'm going to interview Andrei in a bit here on the channel." }, { "start": 89.44, "end": 95.68, "text": " He joined me and we had a nice conversation about the paper. So please let me know if you like" }, { "start": 95.68, "end": 100.88, "text": " styles like this. I feel it's a big boost to have the authors on with these paper reviews," }, { "start": 100.88, "end": 106.8, "text": " but you need to tell me how to make the best use of their time, how to need to make the best use" }, { "start": 106.8, "end": 111.12, "text": " of your time, the viewer's time, because I don't want to make these videos like more" }, { "start": 111.12, "end": 115.52000000000001, "text": " long than they have to be. But I also want to give you the opportunity to sort of pick and choose." }, { "start": 115.52000000000001, "end": 121.36, "text": " Some people prefer just my explanations. Some people prefer the interviews. And I view it as" }, { "start": 121.36, "end": 127.52000000000001, "text": " like a bit of a buffet. But please let me know in the comments how you would like a paper explanation" }, { "start": 127.52000000000001, "end": 132.8, "text": " with an author to be structured the best because it's, you know, ultimately, it needs to be good" }, { "start": 132.8, "end": 138.72, "text": " for anyone watching. 
All right, let's dive in. The interview is going to be a market. There's chapter" }, { "start": 138.72, "end": 145.2, "text": " annotations down in the bar here. You just look if you want to skip to the interview, feel free." }, { "start": 145.2, "end": 152.32, "text": " So the hyper transformer is a model and it says it in the name. It's a hyper transformer or I mean," }, { "start": 152.32, "end": 158.96, "text": " you could also have called it like meta transformer or something like this. It is a model that in" }, { "start": 158.96, "end": 166.24, "text": " itself produces weights. And what is it useful for? It's useful for few shot learning. And this is one" }, { "start": 166.24, "end": 171.12, "text": " of the things I appreciate about this paper, which I only really realized after I've done the" }, { "start": 171.12, "end": 176.72, "text": " interview is that in just the framing of the problem itself is very special, such that the" }, { "start": 176.72, "end": 183.92000000000002, "text": " model is quite good at it, which is maybe a lesson for all of us in research to to already look for" }, { "start": 183.92000000000002, "end": 189.36, "text": " the good problem. So what we're going to end up with is we're going to end up with a few shot" }, { "start": 189.36, "end": 195.36, "text": " learning setting in few shot learning, you want to build a model like let's call it model M, or" }, { "start": 195.36, "end": 200.88000000000002, "text": " just some sort of an algorithm doesn't even have to be a model. And that model M will get just a" }, { "start": 200.88000000000002, "end": 205.92000000000002, "text": " few data points. Let's call let's say these are images like, okay, I get in this case, four," }, { "start": 205.92000000000002, "end": 211.04000000000002, "text": " it might be some more than four, but you know, a couple of dozen images or something like this. So" }, { "start": 211.04000000000002, "end": 216.56, "text": " not a giant amount of images with their corresponding label. So let's call let's give each one a Y like" }, { "start": 216.56, "end": 222.64000000000001, "text": " each one a label. And I want to take this data set, I want to input in into this box, and the box" }, { "start": 222.64, "end": 228.23999999999998, "text": " should come up with ideally a model. So the box doesn't have to be a model. But let's call this" }, { "start": 228.23999999999998, "end": 235.2, "text": " like a neural network over here, which should then be performant on the data that on the distribution" }, { "start": 235.2, "end": 240.64, "text": " that this small amount of data has come from. The challenges are obvious, you only have very little" }, { "start": 240.64, "end": 246.72, "text": " data to do this. The second challenge is that these labels might come from classes that you've" }, { "start": 246.72, "end": 254.24, "text": " never seen before, right? They might be new classes. So this is the general task of few shot learning." }, { "start": 254.24, "end": 260.96, "text": " The advantage is that very often, the task isn't completely new. So the task isn't like a complete" }, { "start": 260.96, "end": 267.04, "text": " surprise. But the task itself, this is what it's called a task right here, the task itself comes" }, { "start": 267.04, "end": 274.96, "text": " from a distribution of tasks, which means that you have kind of like a data set that have many such" }, { "start": 274.96, "end": 281.91999999999996, "text": " tasks here. So here is a task, right? 
This is a data set with some train and test samples, each one" }, { "start": 281.91999999999996, "end": 287.28, "text": " having their labels. And then so this is a task, and then there might be another task and another" }, { "start": 287.28, "end": 293.67999999999995, "text": " task and another task. So consider this sort of like a machine learning problem, except the data" }, { "start": 293.67999999999995, "end": 300.4, "text": " points are entire tasks. So you want to build a model that takes in such a task and gives you" }, { "start": 300.4, "end": 307.03999999999996, "text": " a good classifier for that particular task. Now, the question is obviously how you do that, what" }, { "start": 307.03999999999996, "end": 312.32, "text": " most people do, or not most people, what has been popular previously, and I've made a video, for" }, { "start": 312.32, "end": 320.64, "text": " example, for iMAML. So iMAML, I think it's written like this, L, there's an L here." }, { "start": 321.84, "end": 327.76, "text": " This is a technique about meta learning. So what you would do is you would train one big model," }, { "start": 327.76, "end": 334.64, "text": " you train a big model, and you train it with each of these sort of train it with each of the tasks." }, { "start": 334.64, "end": 340.48, "text": " And what you do is you want to end up with a model that is kind of like a common initialization for" }, { "start": 340.48, "end": 344.88, "text": " all the models. So when you get a new task, you want to take this model and you want to fine tune" }, { "start": 344.88, "end": 350.8, "text": " it for a couple of steps for that particular task. And if you get another task, you want to take the" }, { "start": 350.8, "end": 355.52, "text": " common initialization, you want to fine tune it for that particular task. So for each task," }, { "start": 355.52, "end": 361.68, "text": " you'd end up with the same model with this model right here, but fine tuned for that particular" }, { "start": 361.68, "end": 366.64, "text": " task. This is what we do. It's very popular. If you think of things like BERT or so, this is" }, { "start": 366.64, "end": 372.08, "text": " essentially what we do, we get to a common initialization, and then we fine tune that," }, { "start": 372.08, "end": 378.47999999999996, "text": " except methods like iMAML explicitly train that initialization for the purpose of then being" }, { "start": 378.47999999999996, "end": 384.4, "text": " fine tuned to a few shot learning tasks. So potentially having new labels, or potentially" }, { "start": 384.4, "end": 390.56, "text": " the same labels. The problem is obvious, the models are the same, right? This model and this" }, { "start": 390.56, "end": 396.4, "text": " model right here, they're the same like architecture, it's just one is a fine tuned version of the other." }, { "start": 396.4, "end": 401.44, "text": " And there's the question, right? Is that appropriate for the task? Like is this model" }, { "start": 401.44, "end": 407.03999999999996, "text": " right here appropriate for this task? Maybe you can say, well, maybe not. It's just a few data" }, { "start": 407.03999999999996, "end": 412.4, "text": " points. In general, if I have a few data points, I might want a small lean model, so it doesn't" }, { "start": 412.4, "end": 417.84, "text": " like blow up, it doesn't overfit. Also, maybe, you know, where do I use few shot learning? 
Well," }, { "start": 417.84, "end": 423.35999999999996, "text": " probably, I use it when you know, I need to have a model for every user, like you have your photos" }, { "start": 423.35999999999996, "end": 428.71999999999997, "text": " library, the photos library has a couple of dozen pictures in it. Now you want to train a classifier" }, { "start": 428.71999999999997, "end": 433.67999999999995, "text": " on it, right? And your classifier is going to be different from the next user's classifier," }, { "start": 433.67999999999995, "end": 440.08, "text": " and so on. So there's no common classifier, it can be personalized. And also there, this needs to like" }, { "start": 440.08, "end": 446, "text": " run on your mobile phone, if that's the case. And then you don't want like this giant model. So we" }, { "start": 446, "end": 451.44, "text": " want a lean model. However, if you look at the model in the middle right here, like this one," }, { "start": 451.44, "end": 456.79999999999995, "text": " of course, this needs to be big, it needs to like cover all of the different tasks that could be" }, { "start": 456.79999999999995, "end": 463.52, "text": " and then some more, right? Like it needs to train on a distribution of tasks to be able to classify" }, { "start": 463.52, "end": 469.68, "text": " tasks that it hasn't even seen before. So that one needs to be giant, ideally, as big as it can get," }, { "start": 469.68, "end": 474.48, "text": " right? To absorb all the information. So there you have the dichotomy and the weakness with the" }, { "start": 474.48, "end": 482.64, "text": " approach of having the same model being fine tuned down the road. And that's why the hyper transformer" }, { "start": 482.64, "end": 486.88, "text": " does a different thing. The hyper transformer says, well, I have a big model right here," }, { "start": 486.88, "end": 493.2, "text": " and that model will produce the weights of the small model. So we won't fine tune anything," }, { "start": 493.2, "end": 498.08, "text": " we will simply forward propagate the task through the model. And then that model will spit out the" }, { "start": 498.08, "end": 502.64, "text": " weights. And we're gonna do it in a kind of a smart way, because I believe this has been tried" }, { "start": 502.64, "end": 509.52, "text": " before. I think even I have tried it before. And it usually doesn't work and has particular reasons" }, { "start": 509.52, "end": 514.56, "text": " why it doesn't work. Among other things, neural networks are quite bad at hitting exact numbers," }, { "start": 514.56, "end": 519.52, "text": " they're good at classifying. But when it comes to like regressing on numbers, they're quite bad." }, { "start": 519.52, "end": 524.56, "text": " Also, there are errors that build up and so on. We'll get into that. However, what I said before," }, { "start": 524.56, "end": 531.3599999999999, "text": " the framing of the task. Now, few shot learning can be characterized in a few different ways." }, { "start": 531.3599999999999, "end": 537.68, "text": " Sometimes, often, it is also said, well, we have like a big data set available, right, big data set," }, { "start": 537.68, "end": 545.04, "text": " like ImageNet, or so on. And we use that to pre train the big model right here. And we use that" }, { "start": 545.04, "end": 550.9599999999999, "text": " to sort of prepare the model for a few shot learning. 
If this is particularly not, I'm sure" }, { "start": 550.96, "end": 556.5600000000001, "text": " you could somehow get it in there. But in this particular thing, the model needs to be able," }, { "start": 556.5600000000001, "end": 562.48, "text": " it's a transformer, it needs to be able to take all of these samples into its input, so into its" }, { "start": 562.48, "end": 569.36, "text": " context window. And therefore, it's almost like the model is limited to an upper bound of number" }, { "start": 569.36, "end": 575.52, "text": " of data points that it can input. So the framing of the task itself, like few shot learning means" }, { "start": 575.52, "end": 580.88, "text": " you have these tasks, and every task has few samples and so on. You know, differentiated from" }, { "start": 580.88, "end": 585.92, "text": " the framing where few shot or meta learning means that you want to get a big data set," }, { "start": 585.92, "end": 591.6, "text": " and then you want to fine tune it on many small data sets. That distinction is a smart one if you" }, { "start": 591.6, "end": 598, "text": " write a research paper, right? It is, if you say, well, we're actually in this situation. And here," }, { "start": 598, "end": 603.68, "text": " the model makes perfect sense, right? Here, it would be more difficult. I think just a lesson" }, { "start": 603.68, "end": 608.56, "text": " for people who write research papers is the framing of the problem is like half the battle." }, { "start": 609.5999999999999, "end": 614.7199999999999, "text": " So how does this model actually produce weights? This is a schematic overview over the hyper" }, { "start": 614.7199999999999, "end": 621.76, "text": " transformer method. The hyper transformer itself, you can see right, right here, not even that. So" }, { "start": 621.76, "end": 627.3599999999999, "text": " the hyper transformer itself is going to be this box right here, or this box right here, respectively," }, { "start": 627.36, "end": 633.44, "text": " that produces weights of neural networks, the weights of the neural networks that are produced" }, { "start": 633.44, "end": 639.36, "text": " are these things right here. So what's all this other stuff? Well, the hyper transformer needs" }, { "start": 639.36, "end": 644.64, "text": " some information to produce actual weights. Remember, what we're going to do is we're going" }, { "start": 644.64, "end": 651.2, "text": " to take a set of what they call support samples. So this is the data set. This is the entire data" }, { "start": 651.2, "end": 655.36, "text": " set. In this case, we have three data points. Now, this is a schematic, usually, as I said," }, { "start": 655.36, "end": 659.76, "text": " it's maybe a couple of dozen data points. In this case, we have three data points. So these are the" }, { "start": 659.76, "end": 664.88, "text": " X's and their corresponding labels. In this case, they call them C for like class labels, we call" }, { "start": 664.88, "end": 672.08, "text": " them Y. So these are data points and labels. And remember, you might not have exactly seen" }, { "start": 672.08, "end": 680.48, "text": " the classes before, or you might. This is this is up to sort of the task at hand. 
So what we're" }, { "start": 680.48, "end": 686, "text": " going to do is we're going to feed the hyper transformer with the data, right, we say, you" }, { "start": 686, "end": 691.04, "text": " know, here is this is the entire data set, we say, dear hyper transformer, this is the entire data" }, { "start": 691.04, "end": 699.04, "text": " set, please give us weights. Now the question is, how do we feed a data set to the transformer? And" }, { "start": 699.04, "end": 705.2, "text": " they have various ways of how to do that. And what they do is they want to provide like the most" }, { "start": 705.2, "end": 710.96, "text": " accurate information to the transformer as possible. So the first thing you see right here is" }, { "start": 710.96, "end": 716.48, "text": " that there is a feature extractor, this thing right here, it takes in a data point, each one" }, { "start": 716.48, "end": 722.72, "text": " individually, and it outputs features for it, which makes sense. So the transformer can't, for" }, { "start": 722.72, "end": 728.5600000000001, "text": " example, read images by itself, it can't read them out of the box. So we need some sort of data" }, { "start": 728.5600000000001, "end": 734.1600000000001, "text": " extraction pipeline. This is a feature extractor, it's going to be like a convolutional neural" }, { "start": 734.16, "end": 739.6, "text": " network that has a few layers that serves as a feature extractor, this can be trained end to end," }, { "start": 739.6, "end": 745.6, "text": " this can also be pre trained. What's important that we end up with a vector for each data point," }, { "start": 745.6, "end": 751.1999999999999, "text": " so each data point here gets a vector, which can then be fed into the transformer as you would" }, { "start": 751.1999999999999, "end": 758.24, "text": " feed a token embedding vector, if you were to do NLP. The other thing is, and this is not super" }, { "start": 758.24, "end": 764.5600000000001, "text": " important in the first layer, we also need to feed the hidden activations of the current layer. Now" }, { "start": 764.5600000000001, "end": 768.8, "text": " I want to leave this away right here because in the first layer, there's not that much of a" }, { "start": 768.8, "end": 773.6, "text": " distinction, but it's going to be important in all the following layers. And then we also want to" }, { "start": 773.6, "end": 777.92, "text": " feed an embedding of the class label right here. They put the class label directly, but it's" }, { "start": 777.92, "end": 782.72, "text": " actually an embedding of the class label that is fed to the transformer. So with all of this" }, { "start": 782.72, "end": 788.8000000000001, "text": " information, the transformer sees the entire data set it's supposed to classify, and it will output" }, { "start": 788.8000000000001, "end": 795.6, "text": " the weights of the convolutional neural network. Now you see right here, it's more complicated than" }, { "start": 795.6, "end": 800.48, "text": " just outputting the weights of the entire ConvNet. So what we could do is we can say, well," }, { "start": 800.48, "end": 804.8000000000001, "text": " I have a ConvNet with a bunch of layers, right? I put my data into the transformer and the" }, { "start": 804.8000000000001, "end": 809.6800000000001, "text": " transformer just like boom outputs all the weights at the same time like bam, bam, bam, bam, bam," }, { "start": 809.68, "end": 814.4799999999999, "text": " bam, bam, bam, bam. Here's all the weights. 
This would be very bad. Well, I guess, I don't know," }, { "start": 814.4799999999999, "end": 819.76, "text": " but I guess it wouldn't work, at least in my experience, because these errors, they would" }, { "start": 819.76, "end": 825.92, "text": " kind of accumulate, the transformer would need to guess from the initial embeddings right here," }, { "start": 825.92, "end": 831.8399999999999, "text": " what all the weights are. So essentially, internally, it would sort of have to model this" }, { "start": 832.4799999999999, "end": 838.7199999999999, "text": " model in its like, in it like inside of it, and then sort of guess what the representations in" }, { "start": 838.72, "end": 844.88, "text": " here are going to be in order to create the weights for the layer here. If you make a mistake right" }, { "start": 844.88, "end": 850.96, "text": " here, then or a small error, then that error will kind of accumulate through the layers and so on." }, { "start": 850.96, "end": 857.0400000000001, "text": " So it is quite bad advice to produce all the weights at the same time. Instead of the" }, { "start": 857.0400000000001, "end": 863.2, "text": " hyper transformer produces the first layers weights first, then it takes the data points," }, { "start": 863.2, "end": 870.32, "text": " propagates them through the weights that it itself had just produced, it observes the hidden" }, { "start": 870.32, "end": 876.72, "text": " activations after that layer. And then it reconsiders these hidden activations for" }, { "start": 876.72, "end": 881.6800000000001, "text": " producing the second layer's weights. This is all one big computational graph, you can actually" }, { "start": 881.6800000000001, "end": 886.96, "text": " model it in like TensorFlow PyTorch. And in the interview, we're going into a little bit of whether" }, { "start": 886.96, "end": 893.2800000000001, "text": " that's, you know, feasible for larger models and whatnot. But that's what it does. So it first produces" }, { "start": 893.2800000000001, "end": 901.0400000000001, "text": " the weights of the first layer right here, then it forward props the model. So this this F right here," }, { "start": 901.0400000000001, "end": 906, "text": " that is the resulting confnet. So you take the weights of the confnet, you fuse it together with" }, { "start": 906, "end": 911.2800000000001, "text": " the architecture. And that's going to be the generated layer number one, you take the data" }, { "start": 911.28, "end": 918.48, "text": " points, you feed them through the generated layer, you get the activations right here. And that those" }, { "start": 918.48, "end": 925.92, "text": " activations will become sort of the feature, this it says activation feature extractor. So you got" }, { "start": 925.92, "end": 930.0799999999999, "text": " you're going to add some hidden activations, which are also going to be if it's a confnet," }, { "start": 930.0799999999999, "end": 936.0799999999999, "text": " they're going to be some sort of a a tensor, some sort of like a and with by height by channel" }, { "start": 936.0799999999999, "end": 940.24, "text": " tensor. So again, you need like a feature extractor. But essentially, what you're going to do is you're" }, { "start": 940.24, "end": 946.4, "text": " going to feed the hidden activations again, to the transformer, along with the original data. 
So" }, { "start": 946.4, "end": 950.96, "text": " you're going to say here's the original data, here is the hidden activation it has at the layer that" }, { "start": 950.96, "end": 955.92, "text": " I'm trying to produce the weights for right now. And also, again, you're going to feed the class" }, { "start": 955.92, "end": 961.52, "text": " labels. So this is the totality of the information that transformer has available at every layer," }, { "start": 961.52, "end": 967.84, "text": " it has the original data, the hidden embeddings of the current layer after the last layers," }, { "start": 967.84, "end": 974.24, "text": " and the class labels, and then it's supposed to produce the next layer right here. Yeah, this," }, { "start": 974.24, "end": 978.88, "text": " as I said, the computational graph is quite enormous right here. Because if you if you think" }, { "start": 978.88, "end": 983.2, "text": " about it, right, you produce these weights right here, and then you forward prop through these" }, { "start": 983.2, "end": 990.8000000000001, "text": " weights. So any change you do to the weights will sort of change everything that's after. But Andre" }, { "start": 990.8000000000001, "end": 996, "text": " told me that this is it is quite possible to do with current deep learning frameworks, which is" }, { "start": 996, "end": 1001.6, "text": " a cool thing. Like imagine you had to do this by hand, like old papers, they always wrote down" }, { "start": 1001.6, "end": 1008, "text": " the gradient by hand. So this is in general, the model, what's possible and what they do is they" }, { "start": 1008, "end": 1013.52, "text": " say, well, we don't technically need to produce all the weights of a CNN. What we can do is if we" }, { "start": 1013.52, "end": 1018.88, "text": " have like a CNN, we can just use the hyper transformer to produce like the last layers weights or the" }, { "start": 1018.88, "end": 1025.12, "text": " last two layers weights, we can still train, for example, these things right here with back prop." }, { "start": 1025.12, "end": 1031.4399999999998, "text": " So what happens during training during training, this thing right here is one task, right? This is" }, { "start": 1031.4399999999998, "end": 1036.3999999999999, "text": " one data point, essentially, if you think from a meta learning perspective. So this one task," }, { "start": 1036.3999999999999, "end": 1042, "text": " I'm going to feed through the whole architecture. At the end, right here, I'm going to feed the data" }, { "start": 1042, "end": 1046.9599999999998, "text": " or these hidden activations, I'm going to feed them through, I'm going to get the labels of the" }, { "start": 1046.9599999999998, "end": 1052.3999999999999, "text": " data point, then I'm going to use back propagation to train all of this. So I'm going to use back" }, { "start": 1052.4, "end": 1059.3600000000001, "text": " propagation to train the hyper transformers parameters, possibly also the feature extractors" }, { "start": 1059.3600000000001, "end": 1067.52, "text": " parameters here and here. And if I don't like this is one step. And if those things only produce," }, { "start": 1067.52, "end": 1072.5600000000002, "text": " let's say the only produce the last two layers weights, I can also back propagate because the" }, { "start": 1072.5600000000002, "end": 1079.2800000000002, "text": " back propagation path is like this and then like, you know, like this and then so on. 
I can also use" }, { "start": 1079.28, "end": 1084.8799999999999, "text": " back propagation to train these first two layers. So the first two layers will essentially become" }, { "start": 1084.8799999999999, "end": 1090.8799999999999, "text": " this this common feature extractor like we talked about at the beginning, when we spoke about iMAML" }, { "start": 1090.8799999999999, "end": 1096.08, "text": " or something like this, they will essentially become shared among tasks. And then it is just" }, { "start": 1096.08, "end": 1102.56, "text": " the last layers that are tasks specifically produced for that. They do find in the experiments that" }, { "start": 1102.56, "end": 1109.9199999999998, "text": " for small models, like if the CNN is small, it pays off to produce more of the layers like also" }, { "start": 1109.9199999999998, "end": 1115.28, "text": " the filters. If the CNN, however, is large, they say they can get away with just producing like the" }, { "start": 1115.28, "end": 1121.04, "text": " last layer, which is the classification layer. So, you know, I don't know whether that's a limitation" }, { "start": 1121.04, "end": 1126.32, "text": " of the implementation of the method itself, it seems you know, that there's errors can accumulate" }, { "start": 1126.32, "end": 1132.24, "text": " and so on, the data sets. But also, as I said, the models should be small. So you don't even" }, { "start": 1132.24, "end": 1138.64, "text": " want to build super large models from you don't want to build super large models right right here," }, { "start": 1138.64, "end": 1145.28, "text": " the ones that you actually deploy. So that is that is the overview over the model. There is this other" }, { "start": 1145.28, "end": 1153.28, "text": " graphic right here, where they show how exactly the hyper transformer does the things it does. So here," }, { "start": 1153.28, "end": 1159.52, "text": " what it gets as an input are these things. So that we have the class sorry, the class label embeddings" }, { "start": 1159.52, "end": 1166.16, "text": " concatenated with the sample embeddings. So that is like one token as an input, they do praise" }, { "start": 1166.16, "end": 1172.16, "text": " the transformer because it's invariant to positions, right. So if you don't provide positional" }, { "start": 1172.16, "end": 1178.16, "text": " encodings, any permutation of the input will generate the same, the same output essentially." }, { "start": 1178.16, "end": 1184.32, "text": " So they this is one token, one token is an embedding of a sample and an embedding of its class" }, { "start": 1184.32, "end": 1190.6399999999999, "text": " label, the transformer can also take what they call no label embeddings, which means they can" }, { "start": 1190.6399999999999, "end": 1195.2, "text": " go into semi supervised learning. So sometimes you have a bunch of data and then a bunch more data" }, { "start": 1195.2, "end": 1201.36, "text": " that is not labeled. So they can just provide a pseudo embedding like for an additional class that" }, { "start": 1201.36, "end": 1208.1599999999999, "text": " essentially says this one's unlabeled, they do find that they can incorporate unlabeled data," }, { "start": 1208.16, "end": 1216, "text": " but only to a point like if it's too much, it gets too noisy. And then these things right here," }, { "start": 1216, "end": 1223.92, "text": " essentially, these are these are kind of requests to the transformer. 
These are embeddings for the" }, { "start": 1223.92, "end": 1228.8000000000002, "text": " weights that I'd like to produce. So essentially, this one right here might say, I want to produce" }, { "start": 1229.3600000000001, "end": 1237.3600000000001, "text": " layer one weights for the convolutional filter. And of that convolutional filter, I want to" }, { "start": 1237.36, "end": 1244.9599999999998, "text": " to generate slice number one. Right. So and then this one right here will be slice number one" }, { "start": 1244.9599999999998, "end": 1251.12, "text": " of the convolutional filter of layer one. So that you essentially with the weight embeddings," }, { "start": 1251.12, "end": 1255.52, "text": " what they call right here, these aren't really weight embeddings themselves. They're like weight" }, { "start": 1256.08, "end": 1262.24, "text": " address embeddings, like like like, you know, if you if you had to name the variables in your code," }, { "start": 1262.24, "end": 1267.76, "text": " these are essentially the variable names. So these are the it's like the it's like the CLS token," }, { "start": 1267.76, "end": 1274.16, "text": " right? You request something from the transformer, say here is a token. And on the output of that" }, { "start": 1274.16, "end": 1280.64, "text": " token, I'm going to expect you to give me a particular result. So that is how the hyper" }, { "start": 1280.64, "end": 1286.8, "text": " transformer takes in data and outputs data. Here's the generated weight slices. Now they can be" }, { "start": 1286.8, "end": 1292.72, "text": " directly the weights or they can be some sort of an embedding for the weights if you have to produce" }, { "start": 1292.72, "end": 1299.52, "text": " a lot of weights. So you can have like another model that scales up whatever is output here to" }, { "start": 1299.52, "end": 1306.6399999999999, "text": " the actual format of the weights. Yeah, many things possible right here. I don't want to go too much" }, { "start": 1306.6399999999999, "end": 1315.12, "text": " into the results right here. Because, as I said, one one big result is that if they have models" }, { "start": 1315.12, "end": 1320.8, "text": " that produce all of the weights right here, and also this here, logits and conv, like if they" }, { "start": 1320.8, "end": 1327.6799999999998, "text": " produce the logit layer and the convolutional layers, this only appears to really help if the" }, { "start": 1327.6799999999998, "end": 1335.1999999999998, "text": " model is small. So these here would be the smaller models, which do outperform if you only if you sort" }, { "start": 1335.1999999999998, "end": 1340.56, "text": " of learn jointly the conv layers and then only produce the logit layers with the hyper transformer." }, { "start": 1340.56, "end": 1345.9199999999998, "text": " Whereas for the bigger models, this doesn't seem to make that much of a difference anymore." }, { "start": 1345.9199999999998, "end": 1349.84, "text": " Other than that, I don't want to go too much into the results. However, the last thing I want to" }, { "start": 1349.84, "end": 1357.2, "text": " explain right here is their sort of chapter on the reasoning behind the self attention mechanism. So" }, { "start": 1357.2, "end": 1364.6399999999999, "text": " they argue that the self attention mechanism has special properties that make it very, very apt" }, { "start": 1364.64, "end": 1374.24, "text": " at producing the at producing weights for like a for a classifier. 
And specifically, they go into" }, { "start": 1374.24, "end": 1381.2800000000002, "text": " why it could be ideal, not ideal, but appropriate for producing weights for a classification layer." }, { "start": 1381.2800000000002, "end": 1386.5600000000002, "text": " So I want to make clear what's happening right here. They say theoretically or in concept," }, { "start": 1386.56, "end": 1396.32, "text": " the self attention mechanism right here can in one single layer of self attention can produce a" }, { "start": 1396.32, "end": 1403.12, "text": " classifier over the data samples that we give it right. This is this is what the transformer has" }, { "start": 1403.12, "end": 1407.76, "text": " to do. The transformer has to take in the data points, right, it has to produce essentially," }, { "start": 1407.76, "end": 1413.84, "text": " let's think of the last layer has to produce a classifier for those data points. So the question" }, { "start": 1413.84, "end": 1419.9199999999998, "text": " is, how does it do that? There's no SGD involved, there's no training involved, right, you could" }, { "start": 1419.9199999999998, "end": 1424.6399999999999, "text": " fine tune but they're in the forward prop through the transform, there's no training involved." }, { "start": 1424.6399999999999, "end": 1434.3999999999999, "text": " So how conceivably can self attention mechanism produce the a classifier over data. And for that," }, { "start": 1434.3999999999999, "end": 1441.1999999999998, "text": " they show that even a one layer self attention mechanism can conceivably produce a simple" }, { "start": 1441.2, "end": 1449.44, "text": " classifier. How does it do that? So let's think of what a classifier is. A classifier is essentially" }, { "start": 1449.44, "end": 1456.24, "text": " a weight matrix. And the weight matrix in the, let's say in the, let's make a coordinate system," }, { "start": 1456.24, "end": 1464.56, "text": " let's say this is the embedding space of the last layer. So what the weight matrix looks like is," }, { "start": 1464.56, "end": 1470.32, "text": " let's say we have, let's say we have three different classes, or say we have four different," }, { "start": 1470.32, "end": 1477.76, "text": " oopsie, we have four different classes. So this is one, two, three, four, or four different classes," }, { "start": 1478.6399999999999, "end": 1486.96, "text": " which means that the weight matrix is going to be like D by four. So it has one slice, one column," }, { "start": 1486.96, "end": 1494.56, "text": " or row, one column for each of the one column for each of the classes. And how is it going to" }, { "start": 1494.56, "end": 1499.36, "text": " classify? Well, it's going to run every data point x through the weight matrix multiplied by" }, { "start": 1499.36, "end": 1505.12, "text": " the weight matrix. And that gives me four numbers. So it's an inner product which eat with each of" }, { "start": 1505.12, "end": 1509.84, "text": " the columns gives me four numbers, which is essentially the inner product with with each of" }, { "start": 1509.84, "end": 1516.8799999999999, "text": " the four vectors right here. If x is, for example, here, the biggest number is going to be the one" }, { "start": 1516.8799999999999, "end": 1522.08, "text": " with the largest dot product. So that's going to be this one right here. And that's going to be my" }, { "start": 1522.08, "end": 1526.7199999999998, "text": " class label. These are usually called logits, the numbers that turn out right here. 
But they're" }, { "start": 1526.72, "end": 1533.76, "text": " essentially similarities to the columns of the weight matrix of the last layer. So can we produce" }, { "start": 1533.76, "end": 1539.6000000000001, "text": " this weight matrix? Can the self attention mechanism produce the purple weight matrix," }, { "start": 1539.6000000000001, "end": 1546.24, "text": " such that at least the training data points are classified correctly? Now, in order to do that," }, { "start": 1546.24, "end": 1550.8, "text": " what it needs to do is it needs to do the following for each of the data points that we have," }, { "start": 1550.8, "end": 1558.48, "text": " it has to that the weight matrix can essentially be constructed like this. So why here, this is" }, { "start": 1558.96, "end": 1568.56, "text": " why is a one hot encoding over the class label, and ej is some embedding of the data point. And" }, { "start": 1568.56, "end": 1575.68, "text": " you see, if we calculate this up, why is only going to be one at the at the class where the data" }, { "start": 1575.68, "end": 1583.04, "text": " points label is. So the weight matrix, essentially, this is going to address only the column of the" }, { "start": 1583.04, "end": 1590.5600000000002, "text": " weight matrix, where that data point falls into. And by the sum, it essentially sorts all the data" }, { "start": 1590.5600000000002, "end": 1596.64, "text": " points into its their respective columns. And within each column, it sums all the data points up." }, { "start": 1596.64, "end": 1603.6000000000001, "text": " So if we do, if you apply this formula, then the data points in class one are going to be summed" }, { "start": 1603.6, "end": 1609.12, "text": " together or averaged together and put into the weight matrix at column one, and the same for" }, { "start": 1609.12, "end": 1613.6799999999998, "text": " column two, the same for concrete that would actually result in a good classifier because" }, { "start": 1614.48, "end": 1620.8799999999999, "text": " the classifier would just be the mean embedding of all of the data points that belong to this class," }, { "start": 1620.8799999999999, "end": 1628, "text": " which is, you know, a reasonable classifier in first approximation. The question is, can the" }, { "start": 1628, "end": 1632.7199999999998, "text": " self attention mechanism produce something like this? So let's ask ourselves right here," }, { "start": 1632.72, "end": 1645.2, "text": " let's say, let's say, let's draw this again. So we have x1, y1, x2, y2, x3, y3. If you remember," }, { "start": 1645.2, "end": 1651.76, "text": " the self attention mechanism will calculate queries, keys, and values for each of the data" }, { "start": 1651.76, "end": 1658.24, "text": " points, it will provide like it will do like a softmax over the queries and the keys of over an" }, { "start": 1658.24, "end": 1663.84, "text": " outer product of them, then multiply them by the values. So the question is, this entire thing" }, { "start": 1663.84, "end": 1670.48, "text": " needs to turn out to be a W like that. So this entire thing needs to address all the data points" }, { "start": 1670.48, "end": 1676.88, "text": " of the same class and then average them. We can say, well, that's pretty easy. Okay. And they say" }, { "start": 1676.88, "end": 1680.96, "text": " this, this is what they say in the paragraph right here, they try to make a case that this can be" }, { "start": 1680.96, "end": 1686.88, "text": " done. 
So if we take the data points, and we we just calculate, we calculate their embedding," }, { "start": 1686.88, "end": 1691.2, "text": " like they have some embedding function, actually, we don't even need, let's just say the data points" }, { "start": 1691.2, "end": 1699.1200000000001, "text": " themselves are already embedded. So x, x2, like is is the embedding of itself. So let's say," }, { "start": 1699.6000000000001, "end": 1705.7600000000002, "text": " these the data points themselves, they are, they're the values. Yeah, let's say they are the" }, { "start": 1705.7600000000002, "end": 1714, "text": " values, then the labels are the keys. So that means that if two data points have the same label," }, { "start": 1714, "end": 1720.16, "text": " they will expose the same key. Now, all we need to do essentially, is we need to make sure that" }, { "start": 1720.16, "end": 1727.52, "text": " the queries, so over here, we have the weight, the address of weight one and the address of weight" }, { "start": 1727.52, "end": 1733.52, "text": " two, we need to make sure that the queries that the weights produce, if those queries" }, { "start": 1735.04, "end": 1742.8, "text": " are matching with the with the keys that these expose, you can see that this all works fine." }, { "start": 1742.8, "end": 1749.44, "text": " That this all works out. So weight one would say, well, I am the weight that is going to be the" }, { "start": 1749.44, "end": 1756.32, "text": " column for class one, I'm going to expose as a query, the embedding, which they like Xi," }, { "start": 1756.32, "end": 1761.6, "text": " I don't know, I just write this letter, the embedding for class one, whereas these data" }, { "start": 1761.6, "end": 1768.6399999999999, "text": " points say, well, I'm going to expose as a key, whatever the embedding of my class label is." }, { "start": 1768.64, "end": 1775.3600000000001, "text": " And now you can see that weight one, given that it's class one will aggregate all of the different" }, { "start": 1775.3600000000001, "end": 1782.48, "text": " data points, but only if they expose the key of class one, right, if y two equals C one," }, { "start": 1782.96, "end": 1788.5600000000002, "text": " they will aggregate together the query and the keys will match, they will aggregate together," }, { "start": 1788.5600000000002, "end": 1793.92, "text": " the values are the data points themselves. So this will result for each of the weights in an" }, { "start": 1793.92, "end": 1799.44, "text": " average of all the data points that correspond to its particular class label. That's exactly how we" }, { "start": 1799.44, "end": 1806.3200000000002, "text": " build the W. Notice that it's not important what the queries of the data point tokens are. It's" }, { "start": 1806.3200000000002, "end": 1811.8400000000001, "text": " also not important what the keys and the values of the weights are, as long as they don't conflict" }, { "start": 1811.8400000000001, "end": 1819.2, "text": " with these queries right here. It's just a proof of concept that this could happen. 
Another proof" }, { "start": 1819.2, "end": 1826.16, "text": " of concept they do in a similar vein is that with respect to the unlabeled samples, remember," }, { "start": 1826.16, "end": 1830.24, "text": " we said we can also do semi supervised learning right here, we have a data point and we have no" }, { "start": 1830.24, "end": 1836.32, "text": " label available for it, what can be done and they show that with a two layer self attention" }, { "start": 1836.32, "end": 1841.92, "text": " mechanism, you can actually do it such that in the first layer, sort of the labels are propagated," }, { "start": 1841.92, "end": 1848.72, "text": " and then in the second layer, you can apply the same thing as right here. So how do we propagate" }, { "start": 1848.72, "end": 1858.8, "text": " labels? Again, let's think of data point x1, y1, x2, y2. And now let's think of x3 with unknown" }, { "start": 1858.8, "end": 1865.2, "text": " label. What can we do? What we can do is and now we have to rethink a bit, how do we structure the" }, { "start": 1865.2, "end": 1871.44, "text": " self attention mechanism such that label is propagated in the next layer to this data point" }, { "start": 1871.44, "end": 1880.3200000000002, "text": " right here. So let's say this data point here exposes as a query, it exposes its data point," }, { "start": 1880.3200000000002, "end": 1886.72, "text": " like its vector, its embedding, that is going to be the query. So every token right here as a query" }, { "start": 1886.72, "end": 1896.72, "text": " exposes its embedding, and also as a key, and specifically these two as a key, they expose" }, { "start": 1896.72, "end": 1905.3600000000001, "text": " their vector. And they also expose their embedding of the class as values. So now you can see that" }, { "start": 1905.3600000000001, "end": 1911.1200000000001, "text": " we're going to match up keys and queries. Now let's say these two data points here are very" }, { "start": 1911.1200000000001, "end": 1917.04, "text": " similar, their keys and their queries are going to match, right. And specifically since this here is" }, { "start": 1917.04, "end": 1925.28, "text": " the query, the value of that data point is going to be put is going to be aggregated in that token," }, { "start": 1925.28, "end": 1931.92, "text": " whereas these might not match as much. So this value isn't going to be aggregated. So here you" }, { "start": 1931.92, "end": 1939.2, "text": " can see that this is essentially a nearest neighbor classifier, this token is going to look which of" }, { "start": 1939.2, "end": 1944, "text": " the other data points are similar to myself. If this is really how it's, you know, how the" }, { "start": 1944, "end": 1949.2, "text": " mechanism is structured, is going to look which are similar to myself. And from all of those that" }, { "start": 1949.2, "end": 1954.72, "text": " are similar, I'm going to average the class label embedding for myself and all that, and then" }, { "start": 1954.72, "end": 1960.64, "text": " all I need is like a residual connection to copy over the data and some orthogonality. And I have" }, { "start": 1960.64, "end": 1967.04, "text": " essentially aggregated class labels from all the nearest neighbors of the other data points." }, { "start": 1967.04, "end": 1971.68, "text": " That's the first layer. And then the second layer. Now every data point has a class embedding," }, { "start": 1971.68, "end": 1978.48, "text": " and I can just use this one to build a classifier. 
So this is a proof of concept that with two layers," }, { "start": 1978.48, "end": 1985.28, "text": " it is actually possible to label unlabeled data in the nearest neighbor fashion, and then build" }, { "start": 1985.28, "end": 1993.2, "text": " a rudimentary classifier over like an average embedding classifier over that data. I hope that" }, { "start": 1993.2, "end": 1998.16, "text": " made a little bit of sense. We're going to talk about some supporting experiments that are in the" }, { "start": 1998.16, "end": 2003.6, "text": " appendix that actually show and we're going to talk about this in the interview that actually show" }, { "start": 2003.6, "end": 2010.6399999999999, "text": " that if these are these two layers, right, in the first layer, the unlabeled examples, they attend" }, { "start": 2010.6399999999999, "end": 2018.56, "text": " to the labeled examples a lot. And then in the transformer layer two, the weights actually attend," }, { "start": 2018.56, "end": 2024.32, "text": " sorry, in the layer one, the weights attend only to the labeled examples, you can see they don't" }, { "start": 2024.32, "end": 2030, "text": " attend to the unlabeled examples at all. In layer two, however, the weights, having already attended" }, { "start": 2030, "end": 2036.64, "text": " to the labeled examples now also attend to the unlabeled examples, which means that the unlabeled" }, { "start": 2036.64, "end": 2042.16, "text": " examples have gained some information in layer two. As I said, we're going to talk about this more" }, { "start": 2042.16, "end": 2046.96, "text": " in the interview. So what you're going to hear in the interview is also again, like a little bit of" }, { "start": 2046.96, "end": 2051.76, "text": " a different perspective on the model. We'll go through the experiments, we go through means," }, { "start": 2051.76, "end": 2057.36, "text": " some criticisms that I have about the model itself. And yeah, so I realized this was a bit" }, { "start": 2057.36, "end": 2062.48, "text": " of a longer explanation than usual. I'm trying these things out. Again, let me know what you" }, { "start": 2062.48, "end": 2069.1200000000003, "text": " prefer like short introductions to the paper, then an interview, or like long explanations followed" }, { "start": 2069.1200000000003, "end": 2074.8, "text": " by a short or long interview. Do you want to pick and choose from the video and so on? I need to" }, { "start": 2074.8, "end": 2081.92, "text": " know. So please tell me. And as always, if you like this, then leave a like, comments and yeah," }, { "start": 2081.92, "end": 2094.56, "text": " have fun. Welcome everyone. Today I have with me here, Andrei Smoginov. Is that approximately" }, { "start": 2094.56, "end": 2099.44, "text": " correct, Andrei? Approximately correct. Yeah, thank you. Thanks for having me. Thank you. So" }, { "start": 2099.44, "end": 2107.04, "text": " you're one of the authors of the Hyper Transformer paper. And this is a pretty cool paper I found." }, { "start": 2107.04, "end": 2114.96, "text": " Little like it, I do not hang it out big time, but I have once tried to publish a paper using" }, { "start": 2114.96, "end": 2122.88, "text": " one model to produce the weights of another model. It worked like barely. So when I saw a paper that" }, { "start": 2122.88, "end": 2128.8, "text": " actually does it in practice, I was like, I was stoked. I was like, yay, this is, you know," }, { "start": 2128.8, "end": 2137.44, "text": " it's pretty cool. 
So yeah, welcome, first of all, and congrats on this paper. I liked it. If we" }, { "start": 2137.44, "end": 2145.04, "text": " look at like the high level idea of the paper, it is, you generate, essentially use one neural" }, { "start": 2145.04, "end": 2149.44, "text": " network to generate weights for another neural network. There are many settings which that can" }, { "start": 2149.44, "end": 2154.7200000000003, "text": " be applied to. Do you want to maybe transmit like the high level idea of what the paper is about?" }, { "start": 2154.72, "end": 2160.72, "text": " Yeah, so we basically started exactly as a question, can we even train a model that generates" }, { "start": 2160.72, "end": 2166.64, "text": " all of the weights for the other model? But unlike hyper network paper, which we were inspired by," }, { "start": 2166.64, "end": 2172.64, "text": " in this case, we really wanted to modulate the model that we produce on the task that it's" }, { "start": 2172.64, "end": 2177.8399999999997, "text": " supposed to solve. So basically, what we wanted is we wanted to take a description of a task that" }, { "start": 2177.8399999999997, "end": 2184.64, "text": " the model is supposed to solve. And in a single model, we wanted to take a description of a task" }, { "start": 2184.64, "end": 2190.3199999999997, "text": " forward paths converted into the weights of a fully trained model, and not even a subset of weights," }, { "start": 2190.3199999999997, "end": 2194.64, "text": " but we wanted to take a big bite and generate all of the weights of the model. And the question," }, { "start": 2194.64, "end": 2200.3199999999997, "text": " you know, from the very beginning was, is it even going to work? Will we get results comparable" }, { "start": 2201.04, "end": 2207.6, "text": " to what you might get by training the model to start with? And the, in principle, the applications," }, { "start": 2207.6, "end": 2212.7999999999997, "text": " we consider the few short learning as an application, but it really kind of the field could be," }, { "start": 2212.8, "end": 2218.32, "text": " for example, personalization. And I guess like one of the main ideas of this paper, what we try to" }, { "start": 2218.32, "end": 2224.7200000000003, "text": " convey is that in many cases, when people discuss few short learning, or when they discuss" }, { "start": 2224.7200000000003, "end": 2230.7200000000003, "text": " personalization, they think of models as, you know, as large as they need to be to serve all of the" }, { "start": 2230.7200000000003, "end": 2236.32, "text": " potential users, all of the potential needs. And here we ask a question, well, what if the" }, { "start": 2236.32, "end": 2241.6000000000004, "text": " computational budget is actually limited? And you want to basically to produce a model that is" }, { "start": 2241.6, "end": 2247.68, "text": " very, very fine tuned to specific needs of a specific user. So basically, we are kind of trying" }, { "start": 2247.68, "end": 2253.04, "text": " to separate the complexity of a small model that is supposed to solve a task for each individual" }, { "start": 2253.04, "end": 2260.56, "text": " kind of user from the complexity of a big model that's supposed to know everything about the world" }, { "start": 2260.56, "end": 2265.12, "text": " and everything about how to generate these small models. And so that kind of was one of the main" }, { "start": 2265.12, "end": 2270.7999999999997, "text": " ideas that we can separate them. 
The idea seems so clear when you think about it, but it was so far away, at least for me; once I saw your paper, I was like, oh yeah, of course. Because what we have been doing in the past few years, and this started maybe with something like BERT, which made it really popular, is to pre-train a really big model and then just fine-tune it on your little data. And all of these meta-learning or few-shot learning papers would do the same thing: they would pre-train a big model, and then, for example, MAML would train that same model on the small data. Essentially, what they were trying to do was find a good initialization to then continue training from. But essentially, the same model was tasked with two different things: the same model was tasked with ultimately solving all of these small tasks that you throw at it, and at the same time with finding a good compromise between all the models. And separating this makes total sense: you say, well, one network is really responsible for integrating all of these tasks, and the other, the smaller network that is produced, is responsible for solving the individual tasks. This has lots of applications. I think you mention it in the paper; personalization is probably a big one, right? If I just have my 20 or 30 photos in my photo library, now I could have a small model that is just made for me, derived by this big model. It seems obvious in hindsight, but to me it was not at the forefront of my mind.

Now, there are legitimate concerns when you say we want one network to just output the weights of another network. Specifically, we know that neural networks are really good at classifying stuff, at outputting ones or zeros or sorting things into buckets, but they're not so good at outputting exact numbers, right?
They're not, to the point where a lot of reinforcement learning papers, for example, would rather bucket the values they're trying to predict and then predict the class of the bucket, rather than predicting an actual number. So you must have had these concerns as well. How exactly does your model predict the weights of another model?

Yeah, that was definitely a concern. And actually, as it turned out, for convolutional models solving few-shot learning tasks, that doesn't end up being a huge issue, partly because, especially for very large models, you don't really need to fine-tune all of the weights very carefully. If your embedding model is already good enough, then in principle all you need to do is look at the final embeddings produced for different images and, based on that, figure out how you need to assign labels to those embeddings. So in practice, as we've seen, all that matters, especially for very large models that can have a very large embedding inside, is to just generate the final layer. But once you get into the land of smaller models, it's still important to generate all of the layers. And one of the approaches we use, basically what we have to do carefully, is: instead of generating all layers at once from the inputs... So the input in this case, just to clarify, in a few-shot learning scenario, is a support set that basically tells you: these are the images that the final network has to classify as a cat, for example, and these are the images that the final network should classify as a dog. And then we hope that the generated model would be able to classify all cats as cats and all dogs as dogs. So our model would see a support set, a sufficiently small batch of images, and instead of immediately generating layers one, two, three, four, we decided that we needed to generate them layer by layer, starting from the lower one.
And the motivation for this is really: if you imagine that you modify a very early layer, then all of the activations throughout the network will be modified. So if you modify the first layer, you have to then adjust all of the rest, and the differences will propagate and potentially amplify through the network. So you have to be very aware of what the previous layer generates in order to generate the following layer. And I guess that was one of the ideas for how we could stabilize that layer-by-layer generation process.

So is it fair to say that this thing you call a support set is essentially the data set of the few-shot task? It's like: here are ten images of dogs and cats with corresponding labels. Which, in this diagram of your architecture in general, is the support set with the samples and the labels. And then you make use of lots of signals throughout the network, such that, as you said, you make sure you first build the first layer and then, based on that, build the second layer. So if we quickly walk through it: one core component is this image feature extractor, a trained ConvNet, let's say, that is applied to each image individually and extracts some sort of feature map. And this feature map is then given to every single computation layer in your set, right? So your main model is this transformer thing here, which takes in, as you can see, these embeddings of the support set. It takes in the labels, obviously; it needs to know what it has to classify, and how. And it takes in this thing right here, which I think in the first layer is kind of the same as these image embeddings.

It's another embedding, it's smaller, but it's basically produced from the same image, essentially.

I guess in subsequent layers this will actually be different. So what we do is: the transformer here will produce the weights of the first layer. And as you said, we don't just produce the first layer and the second and the third in one batch.
But what seems to be really important is that now we actually forward propagate (I need a different color here), we forward propagate the support set through the weights we've just generated, and that gives us the next layer's representation. And then that can be used again by the transformer to generate the next layer's weights, along with the embeddings of the original images and along with the labels, and so on. So this sort of building up to the end seems to be important, as is re-feeding the information through your own generation. Is it fair to say that it's a little bit like an autoregressive language model, where I feed in whatever I output, again and again?

Yeah, exactly. In some version of the paper, we even wrote it this way. It's kind of a progressive process, in the sense that you generate the following layer's weights conditioned on the weights you have already generated. And again, the motivation for this is: imagine you have the original images and you have to generate weights for layer number three, a convolutional layer. You may have trouble if you just look at the images themselves. But if you look at the activations that the previous layer gives you, with the corresponding labels, you can then look at small patches of those activations and figure out: oh look, there is this feature that shows up in all of the images labeled as one, so perhaps I can have a filter specifically looking for this in the activations, because that's what the layer is going to operate on. And that's basically why we have to do it this way. When we try to do it all at once, the model is significantly less stable.

Yeah, I mean, that is what one would expect. So I think the other trick here is that at every step where you generate the weights of a new layer, you have all the information available: what data set am I trying to classify, and how does that data set look at the input to that layer? And that helps tremendously to then produce the weights.
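As a schematic, the layer-by-layer loop being described might look like this. The `generators` modules are hypothetical stand-ins for the weight-generating transformer, so treat this as a sketch of the idea, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def generate_cnn(generators, sup_images, sup_labels, img_feats):
    """generators: one hypothetical weight-generator module per CNN layer.
    Returns the list of generated kernels for the small model."""
    acts = sup_images               # current support-set activations
    weights = []
    for gen in generators:
        # Condition on the activations this layer will actually operate on:
        w = gen(acts, sup_labels, img_feats)   # emit this layer's kernel
        weights.append(w)
        # Forward-propagate the support set through the freshly generated
        # layer, so the next generator sees realistic inputs.
        acts = F.relu(F.conv2d(acts, w, padding=1))
    return weights
```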
This looks... it's two layers right here, and it already looks quite complicated, right? Here is an entire transformer, which generates a set of weights, and then I forward propagate a signal through the weights that were generated, using that signal as an input. So I'm imagining the computation graph here gets pretty iffy, quite fast. And then there is another transformer, and then I backprop through all of this. What are the concerns with stability here, and how big does the computational graph get? Is this a problem?

In practice, it was not a big problem. But you're right that it grows faster than a conventional CNN would grow. What you care about here, I assume, is the longest path in this graph, and I assume it will still be proportional to the number of layers. It is true that when you generate the final layer, you essentially have to backpropagate through all of the transformers that you have; if you have multiple layers in each transformer, you have to backpropagate through all of them. But in practice, this thing was surprisingly stable to train. That was one of the things that surprised me. The only issue, I think, is that we weren't really able to train it with anything other than SGD, not that we really spent a lot of time on this. One assumption for why that could at least partially be the case is the way we train it: we train it basically like you would train a usual model, where you give input images and produce labels; here we give tasks, which are support sets, and we produce weights. But since we have memory limitations, we basically do one task per batch. So it's kind of a single-sample batch, in the sense that it's just one support batch. And maybe that's why the method wasn't exactly super stable when we applied other techniques, but with SGD it trained absolutely fine.
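Written out, the outer loop being described (one task per step, plain SGD, backprop flowing through the generated weights into the transformer) would look roughly like this; `hyper_model`, `small_cnn_forward`, and `task_loader` are hypothetical names, not the paper's code:

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the episodic training loop; only the hypothetical
# `hyper_model` (the weight-generating transformer) has parameters.
opt = torch.optim.SGD(hyper_model.parameters(), lr=0.1)  # plain SGD, as discussed

for support_x, support_y, query_x, query_y in task_loader:  # one task per step
    weights = hyper_model(support_x, support_y)     # generate the small CNN
    logits = small_cnn_forward(query_x, weights)    # evaluate it on the query set
    loss = F.cross_entropy(logits, query_y)
    opt.zero_grad()
    loss.backward()   # gradients flow through the generated weights
    opt.step()        # and update only the generator
```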
And we discovered, I think to some degree, that one of the advantages we claim this method might have is that it could actually be more stable than MAML-based methods. In MAML-like methods, you really have to backpropagate through potentially many unrolls if you want to apply several SGD updates, whereas here we only propagate through a single model, in that sense, although to some degree it's still a many-layer model.

And you make a particular case that transformers are a good choice of model for this particular task. Why are transformers so good?

They have some nice, almost trivial properties. One of them is that in the usual design, when you don't use any kind of masking and you don't use positional embeddings, the output of the transformer is equivariant to permutations of the inputs. In a sense, if you change the order of the input tokens, the output tokens change in the same way. And that's what we want for a model like this, because the order of samples in the support set, the order in which you show kittens, doesn't really matter. All that matters is that you show them all. So that was one nice property: it can handle a potentially varying number of samples, and it doesn't matter what order they come in.
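This permutation property is easy to verify numerically with PyTorch's stock attention layer (no positional embeddings, no mask), as in this small self-contained check:

```python
import torch

# Self-attention without positional embeddings or masking is permutation-
# equivariant: shuffling the input tokens shuffles the outputs the same way.
attn = torch.nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)
attn.eval()

x = torch.randn(1, 5, 16)            # five support-set tokens
perm = torch.randperm(5)

y, _ = attn(x, x, x)
y_perm, _ = attn(x[:, perm], x[:, perm], x[:, perm])

print(torch.allclose(y[:, perm], y_perm, atol=1e-6))  # True
```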
But another consideration was that there are prior papers that looked at attention-based methods applied specifically to generating the last layer, the logits layer of the model. And we make the claim that these attention-based mechanisms are useful specifically for generating the final logits layer. I guess we make a distinction: first of all, when you are in the supervised regime and you have a label for every sample, you might naively say: you know what, I will generate the last layer by essentially averaging the embeddings for each class, and that will be a row in my final logits layer. Because what you want to do, when a new embedding arrives that you don't yet know the label of, is take a dot product with all of the embeddings that you know correspond to certain classes. And the higher this dot product is, the more aligned the vectors are, the more likely you are to say: yeah, that's probably that class. So one of the approaches to generating the logits layer is basically to average the embeddings for each class. If you have a bunch of cat images, you take the embeddings for these images, you average them, and that's your row in the logits weight matrix that you produce. And if all you want is to average embeddings, that can be done with a simple attention mechanism: you take the output that you want to produce, that row, and you make it attend to the embeddings of all of the images labeled as label one. When you attend only to those, all you need to do in the end is average their corresponding values, which are the embeddings, and you end up calculating the average of the embeddings of all of the cats. And that's what you want. So that was the very simple mechanism that you could use, and it can be implemented as a basic attention-based model.
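Here is that mechanism written out: class-averaged embeddings computed as one attention-style step. A sketch of the idea as described in the conversation, not the paper's code:

```python
import torch
import torch.nn.functional as F

def logits_layer_by_attention(sup_emb, sup_labels, num_classes):
    """Each class token attends only to support samples of its own class;
    with uniform attention weights, the attention output is exactly the
    class-mean embedding, i.e. one row of the generated logits layer."""
    onehot = F.one_hot(sup_labels, num_classes).float()                 # (N, C)
    attn = onehot.T / onehot.T.sum(dim=-1, keepdim=True).clamp(min=1)   # (C, N)
    return attn @ sup_emb        # row c = average embedding of class c

sup_emb = torch.randn(10, 16)
sup_labels = torch.randint(0, 5, (10,))
W = logits_layer_by_attention(sup_emb, sup_labels, 5)   # (5, 16)
query_logits = torch.randn(3, 16) @ W.T                 # dot-product scoring
```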
So you make specific arguments here; this is the reasoning behind the self-attention mechanism. You show a diagram that goes a little bit into how exactly you build this up. You have your support set, its inputs as tokens, along with their labels, or the class embeddings, let's say. You also have the opportunity to put in data without labels, which I guess is quite often available in these tasks. So let's again assume I have my photo library: I might even label some of the photos, maybe with hashtags, or by sharing them in some album, but most of the photos will have no label. So you also have the opportunity here to just input them as well and say: here is some data. And I think a lot of models benefit from extra data just to know what the data manifold looks like. That's the sense here. But in your experiments, you also show that you have to be careful how many of those you introduce, in comparison. In essence, though, you can take this in, and then, for each weight that you want to output, you have a special token. So this will be equivalent to, let's say, the CLS token in a BERT model: when I want to classify something, I have one token per output that I want to produce. These have different embeddings, so they're like addresses of the weights that I want to output. And this whole thing is then just a transformer. You already said, with respect to the last layer, that this is implementable, but you also make the case that if I have a two-layer transformer, I can implement a nearest-neighbor algorithm. Do you want to maybe briefly explain the idea behind how a two-layer transformer can implement nearest neighbor?

Full disclosure: we never really tried to implement it in code; it's a simple construction that hopefully is correct. The idea was this: when you have labeled and unlabeled samples, you can imagine that you have a bunch of embeddings that you know the label of, like you know that these are cats, but you also have a bunch of unlabeled embeddings everywhere. So naively, what you might want to do is look at all the unlabeled embeddings, and you'll notice that some of them are really close to the embeddings that you already know are cats. So you say: okay, you know what, I will label them as cats, because they are suspiciously close. And when I have to compute the final clusters, I will just average over both the labeled samples and those that I just labeled, because I'm pretty sure that they are actually cats. So that's a reasonable way to do this. And if you have a self-attention-based mechanism, you can do it in two steps. The first step is when you try to propagate labels from the labeled samples to these nearby unlabeled samples. If you remember how the self-attention mechanism works, closeness is based on the dot product of the embeddings of the samples, and you can make the unlabeled samples attend to the nearby labeled samples.
And when I'm an unlabeled sample and I attend to all nearby labeled samples, I can basically look at them and pool their class information into my personal embedding. So even though my class embedding before said "I have no idea what I am", as soon as I see several neighbors in embedding space, I can just borrow their embeddings, and this way be fairly certain that I belong to that cat category. So that's the idea of what the first layer should do. After this is done, the second layer looks specifically for the traces of this label, whether it was originally given to the sample or propagated to the sample. As soon as I observe that all these samples are marked as a cat, or carry kind of a smell of a cat because they borrowed that cat reference, I can take all of them and average their embeddings, and that will be my final centroid of the cluster that I'm producing.
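A sketch of this two-step construction (the authors note they never implemented it directly, so the code below is purely illustrative): layer one propagates soft labels from labeled to unlabeled samples via dot-product attention; layer two averages the embeddings of everything carrying a given label trace.

```python
import torch
import torch.nn.functional as F

def two_layer_nn_classifier(lab_emb, lab_y, unl_emb, num_classes, tau=10.0):
    """lab_emb: (L, D) labeled embeddings, lab_y: (L,), unl_emb: (U, D).
    Returns (C, D): one centroid row per class, as described above."""
    lab_soft = F.one_hot(lab_y, num_classes).float()                  # (L, C)

    # "Layer 1": scores are scaled dot products of embeddings; the values
    # carry the label information that the unlabeled samples pool in.
    scores = tau * F.normalize(unl_emb, dim=-1) @ F.normalize(lab_emb, dim=-1).T
    unl_soft = scores.softmax(dim=-1) @ lab_soft                      # (U, C)

    # "Layer 2": average the embeddings of everything marked (or smelling)
    # like class c, implemented as attention with label-based weights.
    all_emb = torch.cat([lab_emb, unl_emb], 0)                        # (L+U, D)
    all_soft = torch.cat([lab_soft, unl_soft], 0)                     # (L+U, C)
    attn = all_soft.T / all_soft.T.sum(dim=-1, keepdim=True)          # (C, L+U)
    return attn @ all_emb

lab_emb, lab_y = torch.randn(6, 16), torch.randint(0, 3, (6,))
unl_emb = torch.randn(20, 16)
W = two_layer_nn_classifier(lab_emb, lab_y, unl_emb, 3)               # (3, 16)
```

With a large temperature `tau`, the first softmax approaches a hard nearest-neighbor assignment, which is the limit this construction is aiming for.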
And, funny enough, we didn't really look into what exactly the transformer does, because it's really difficult. But if you just look at the attention maps of the two layers, it turns out to be suspiciously close to this mechanism in the trained model. We see that exactly as in this construction, in the very first layer, the unlabeled samples attend to the labeled samples, and at the same time, the weights get information from the labeled samples. But at the second layer, the weights actually get something from these unlabeled samples that were just updated. So it does look like this mechanism, or at least a version of it, is actually what's happening.

And in the appendix you do a lot of investigations into these various attention maps and so on. Is there one you'd like to particularly highlight?

Yeah, it's this one, basically. I don't remember exactly how it works, but in the first transformer layer, it's a bit awkward to describe: the top rows are the ones that will generate weights. So if you look at, for example, the very top row, this row is telling you, when the weights are updated, what they are looking at.

So in this case, you can see that they are looking at the columns corresponding to labeled samples, which means that these weights borrow something from the labeled samples. But at the same time, if you look below, at the bottom of this plot, there are the unlabeled samples, and they also attend to the labeled samples. So basically, after this first layer, both the weights are updated and the unlabeled samples are updated somehow with information from the labeled samples. And then at the second layer...

It's interesting that the weights don't care at all about the unlabeled samples; they learn to ignore the unlabeled samples. That's pretty interesting.

Yeah, and that's exactly what you would want, because at this point these unlabeled samples really give you not that much information about what you need to generate. And that's maybe one of the reasons why, when you have too many of these samples, the model becomes overwhelmed and you have to introduce them carefully; you can't just throw hundreds of unlabeled samples at this model. Then at the second layer, what happens is that at this point you don't care how the labeled or unlabeled samples are modified, because you don't take that information into account after the second layer. So all you care about in transformer layer two is the top rows; it's again the weights. And here you can see that the top rows, at the second layer, attend to the unlabeled samples but almost fully neglect the labeled samples. Which is also quite remarkable, that there is this divide. In our opinion, that basically shows that there is this flow of information from the labeled samples to the unlabeled ones, and then from the unlabeled ones, at the final layer, to the weights.

It looks like the weights don't even care about the labeled samples anymore, but that is probably because they've already gotten a lot of information out of these labeled samples in layer one, and now they're also aggregating across the unlabeled samples.
Do you think there might be some sort of smart attention mask you could implement, like in these autoregressive models with causal attention, that would encourage the algorithm to behave better? I'm not exactly sure what I'm looking for, but do you think there could be some smart biases built into the attention masks here, so that we actually make the model pay attention to the things we want it to pay attention to?

Yeah, I think that's actually a wonderful idea. As a matter of fact, what we do right now is we say "oh, we think that's what's happening", and then we look at the attention masks and see that yes, that's mostly what's happening. But you're absolutely right that if we were certain we wanted to restrict the flow of information in a particular way, we could very well manipulate the masking of each self-attention layer, and in this way very carefully restrict how the computation should actually be performed. That's a very interesting point, and I imagine it could be applied to a bunch of other applications, like what you just said: if you know in advance how the information should flow, you can implement this by using proper attention masks.
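For concreteness, a hand-built mask of this kind might look as follows. This is my own assumption of what such a restriction could look like, not something the paper does; here, unlabeled samples and weight tokens may only attend to labeled samples in a given layer:

```python
import torch

L, U, K = 6, 20, 4   # labeled samples, unlabeled samples, weight tokens
N = L + U + K
mask = torch.zeros(N, N, dtype=torch.bool)  # True = attention is blocked

labeled = slice(0, L)
mask[L:, :] = True            # block everything for unlabeled + weight tokens...
mask[L:, labeled] = False     # ...except attending to the labeled samples
mask[labeled, :] = False      # labeled samples may attend anywhere

attn = torch.nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)
x = torch.randn(1, N, 16)
y, _ = attn(x, x, x, attn_mask=mask)  # information flow restricted as above
```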
And then you reshape" }, { "start": 3819.52, "end": 3826.3199999999997, "text": " them into k by k by input channels by output channels. And depending on how you choose to do" }, { "start": 3826.3199999999997, "end": 3832.08, "text": " that, the model will have different inductive biases, actually, because a very lazy transformer" }, { "start": 3832.08, "end": 3837.7599999999998, "text": " model, for example, wouldn't probably want to generate very different embeddings, very different" }, { "start": 3837.7599999999998, "end": 3843.6, "text": " tokens as output. It would more likely, if it's maybe poorly trained, would generate a very similar" }, { "start": 3843.6, "end": 3849.12, "text": " outputs. And so if you assume that these outputs correspond to spatial dimensions," }, { "start": 3849.8399999999997, "end": 3856.96, "text": " then you will see much more smooth produced weights. Because essentially, you treat every" }, { "start": 3856.96, "end": 3865.2, "text": " coordinate, every spatial coordinate as different produced tokens, and they are all very, very" }, { "start": 3865.2, "end": 3873.7599999999998, "text": " similar. But if you do that in channel, channel wise, then now kind of the k by k thing, k by k" }, { "start": 3873.7599999999998, "end": 3879.52, "text": " kernel can look completely random. It can't like there doesn't have to be any order. They can look" }, { "start": 3879.52, "end": 3886.72, "text": " like minus five plus five minus 11 plus 12. And so that's why they will look much more kind of" }, { "start": 3887.6, "end": 3893.8399999999997, "text": " random visually. And so I think we kind of observe that. But we were also curious to see if the" }, { "start": 3893.84, "end": 3901.6000000000004, "text": " generated kernels vary significantly for different supports and tasks. And I guess again, we see that" }, { "start": 3901.6000000000004, "end": 3907.28, "text": " they vary, but we cannot interpret this. We hope to get slightly better results, like more" }, { "start": 3907.28, "end": 3912.96, "text": " interpretable. But in that regard, I think what matters is that when we generate small models," }, { "start": 3912.96, "end": 3918.96, "text": " we can measure the difference of training and test accuracies. When you actually generate only" }, { "start": 3918.96, "end": 3924.16, "text": " the final layer, or you generate all of the layers, including computational layers. And we see that" }, { "start": 3924.16, "end": 3931.28, "text": " for teeny tiny models, for especially small ones, it really starts to matter that you generate all" }, { "start": 3931.28, "end": 3937.92, "text": " of the layers instead of only the final one. And so that in the future, if we really want to understand" }, { "start": 3937.92, "end": 3942.88, "text": " what this model does, we really have to look at the smaller models. And then the variation of kernels" }, { "start": 3942.88, "end": 3947.76, "text": " with respect to different support sets will be probably more telling on what's happening." }, { "start": 3947.76, "end": 3953.92, "text": " So yeah, you find that in the small models, you fare better generating all the weights than" }, { "start": 3954.5600000000004, "end": 3962.7200000000003, "text": " if you... 
But we were also curious to see whether the generated kernels vary significantly for different supports and tasks. And we see that they do vary, but we cannot interpret this; we hope to get slightly better, more interpretable results there. In that regard, I think what matters is that when we generate small models, we can measure the difference in training and test accuracy between generating only the final layer and generating all of the layers, including the convolutional ones. And we see that for teeny-tiny models, for especially small ones, it really starts to matter that you generate all of the layers instead of only the final one. So in the future, if we really want to understand what this model does, we have to look at the smaller models, and then the variation of kernels with respect to different support sets will probably be more telling about what's happening.

So yeah, you find that in the small models, you fare better generating all the weights. And in the larger models, the strategy is essentially to only train the model to produce the last layer, and then use regular backprop through that generated layer to essentially learn the lower layers. That might also be an effect of the method just not being figured out quite right yet; it's a complicated method, it seems maybe a bit unstable, and in a larger model the errors accumulate over the layers. You have many weights, and if one is kind of off, what are you going to do? So yeah, it's an exciting future. Have you thought about... so for each of these weight tokens at the end, you generate some sort of an output (I'm going to scroll for a whole bunch of time right here; no, I think I copied the paper twice, I'm sorry) which you can interpret directly as weights. Is it also possible to interpret this output as, let's say, the embedding of a convolutional kernel, such that there would be another model, like a GAN or a VQ-VAE or something like this, where you essentially generate into the embedding space of that model, and that model can be really good at producing realistic filters? It just sort of needs to know what filter to produce. Is that something that you have tried, or have in mind, or have ruled out as a possibility?

No, it's definitely something that we have in mind, because when we try to scale these methods, it becomes difficult when you have to generate really humongous weights. And at that point, yes, the best thing you can probably do is have a separate model that receives embeddings of the weights that it needs to generate, and that learns to generate those weights itself. So you got it exactly right; that's basically one of the paths to scaling this to significantly larger models. We can scale this model even to a ResNet architecture, but maybe to speed up training, to improve things... like you said, we don't even know for sure if the lack of a need to train the lower convolutional layers is a result of the method having more trouble.
And I definitely have some evidence that if we pre-train certain parts of the model, it trains slightly better, so there is definitely that complication of training this thing end-to-end. But it's also few-shot: if you train some model on five classes, having all of the images, of course it will perform significantly better, because in a few-shot setting you have only a few images per class, and so what can you do? So that's another possible source of imperfection that results in you not having to generate the foundational layers. But also, I honestly think the classification problem is kind of simple, in the sense that you need to find boundaries between classes. Generative models, for example, are much, much more challenging, because you have to understand the structure of the data manifold, not just how to separate the data manifolds. So if you ask me where this can become important, that might be it.

So you've made several experiments on benchmark data sets. Could you maybe summarize what, in your opinion, was most striking in the experiments? What stood out the most, and what's the main conclusion you pulled out of there?

Yes. I think one of the conclusions was that, yes, when we generate small models, we can potentially perform better than MAML-based methods, or methods where you train a small embedding and then just generate the final layer using, again, that dot-product approach, for example averaging embeddings or finding clusters. Because we have such a large model generating a smaller model, we have a lot more capacity to learn about the world, and when we generate a small model, we are much more informed than, say, a MAML model would be. So we definitely think that for smaller models there is an advantage to doing what we do: a significant bump in accuracy, and especially in the training accuracy, which might matter if what you care about is basically specializing a model, assuming that the classes are seen during training. Because generalization means: I train on cats and dogs, but I generalize to new, unseen classes, and that can be complicated.
But when you know for sure that you need to specialize a model for a user, for it to work on some of the classes that you saw during training, then what you care about is the training accuracy, and because we have such a big model, we definitely get much higher training accuracy. So again, for smaller models there's definitely an advantage to doing this. When it comes to very large models, we see that when we generate just the last logits layer, we get results competitive with a lot of different methods that carefully design those functions and the methods they use. So, without doing anything special, we are basically competitive, and that was, again, encouraging. And the final thing that, to be honest, I personally found very, very exciting is that I think of this as having the potential to move to very abstract task descriptions. In few-shot learning, your task description is essentially: look, these are several images you should label as cat, these few images you should label as dog, et cetera. But in one of our examples, we add unlabeled samples, and that improves the accuracy quite a lot. So I was very excited to see that we can get a very significant bump in model accuracy by giving the model unlabeled examples; somehow, without us telling it how to use unlabeled examples, it learned to use them. In the future, you could also imagine using a lot of other types of data. You could provide, like you mentioned, photo metadata, hashtags that might be sparsely related to some images; you could have textual descriptions, for example of what people are interested in, and so on and so forth. And that would be a task description from which your model learns to generate a model very well aligned with the interests of that particular person, for example. So I am personally very excited about this, and I think that the performance on the semi-supervised task, and the fact that the model learned what to do in that case, is the most interesting part.
Yeah, and another thing I didn't mention, which we basically already covered, is that for smaller models you don't only care about generating the last logits layer; you seem to benefit from generating all of the conv layers as well. It still remains to be seen whether there is a big difference versus generating something like FiLM layers, but I'm hopeful that generating, as a matter of fact, all of the layers, full of weights, is important.

Cool. I mean, I've looked at the results, and I was positively surprised. It's not at the level yet where we can generate, like, state-of-the-art ImageNet models, but that's not necessary. I think it's important to keep in mind that these models are supposed to be deployed somewhere where I have very little data; I just want to produce a small model for that little data, maybe for personalization. The model doesn't even have to be big, because it may run on my phone or something like this. And there are definitely also opportunities in the future to combine this thing with, how should I say, with optimization: it's not necessarily a binary choice between "I generate the weights" and, like MAML, "I optimize from some checkpoint"; maybe there are clever ways of combining the two. But I really like the approach of the paper right here. Is there anything else you want to say about this general research direction? If people want to dive into this, where can they go, what can they do, and what are big open questions that you're not considering researching, so that people don't scoop you?

That's okay. Well, I do think we are still actually interested in this research direction, and we think that this particular model could be scaled and applied to other problems as well, and that it could potentially shine either in instances where you have a limited computational budget, or where you have complex tasks, like generative tasks. But overall, I would say that some of these ideas are not new.
}, { "start": 4516.4, "end": 4521.28, "text": " If somebody wants to just know what people have been doing in that regard, like for example," }, { "start": 4521.28, "end": 4527.6, "text": " what you just mentioned, Leo paper does something similar where they also have a generation of" }, { "start": 4527.6, "end": 4532.48, "text": " model layers, but at the same time, they also use MAML approach, essentially. So they kind of" }, { "start": 4532.48, "end": 4539.839999999999, "text": " back propagate through the generator of, yeah, essentially through the generator, in a way." }, { "start": 4539.839999999999, "end": 4546.799999999999, "text": " So it's kind of similar to our approach joined with the MAML. But there are other techniques" }, { "start": 4546.799999999999, "end": 4553.2, "text": " that generate weights. And I think that hyper network, original paper is really interesting," }, { "start": 4553.2, "end": 4558.16, "text": " and it gave rise to a lot of interesting research. And there were recently papers that looked into" }, { "start": 4558.16, "end": 4565.36, "text": " generative models that also looked at hyper, that were inspired by hyper networks. And honestly," }, { "start": 4565.36, "end": 4571.68, "text": " I think that, yeah, in the future, we might see models that generate other models and that actually" }, { "start": 4571.68, "end": 4580.88, "text": " works in practice. Let's see. Yeah. So I, to be honest, it's very difficult to say what else can" }, { "start": 4580.88, "end": 4585.599999999999, "text": " be done. But one of the things that maybe people will scoop me, but what I'm interested in is," }, { "start": 4585.6, "end": 4590.96, "text": " I was just thinking about this, is we can also generate not just weights of the CNN models," }, { "start": 4590.96, "end": 4598.160000000001, "text": " we can generate policies as well, for example. And as a very simple example, which is very toyish," }, { "start": 4598.160000000001, "end": 4604.240000000001, "text": " but could be interesting, is for example, you have a robot that you build, you take a few photos of" }, { "start": 4604.240000000001, "end": 4611.120000000001, "text": " it, and you upload them to the service. And the service basically is tasked with having several" }, { "start": 4611.12, "end": 4615.84, "text": " images of the robot and having maybe images of the terrain that it's supposed to walk on," }, { "start": 4615.84, "end": 4624, "text": " just generate a locomotive controller policy for it, just like that, just from images. And so I think" }, { "start": 4624, "end": 4631.84, "text": " that doing things like this might be interesting. Again, one thing to note is that model distillation" }, { "start": 4631.84, "end": 4637.28, "text": " and training and combining these methods with training might be very, very interesting as well," }, { "start": 4637.28, "end": 4646.8, "text": " and probably can be very compatible with methods like this. But I think that's one direction what" }, { "start": 4646.8, "end": 4654, "text": " the future is, generating models from specifications of what needs to happen, instead of necessarily" }, { "start": 4654, "end": 4661.44, "text": " just training them from scratch. Cool. Well, in this case, Andrey, thank you so much for being" }, { "start": 4661.44, "end": 4667.599999999999, "text": " with us here. This was awesome. Thank you for your insights. 
And I hope to see you again with a" }, { "start": 4668.4, "end": 4671.759999999999, "text": " transformer that generates an even bigger transformer." }, { "start": 4671.76, "end": 4689.04, "text": " Thank you very much. Yeah, thanks for inviting me. It was very interesting to discuss this paper." } ]
vVRC-0VKPrg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Learning Rate Grafting: Transferability of Optimizer Tuning (Machine Learning Research Paper Review)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "grafting", "learning rate", "deep learning learning rate", "neural network learning rate", "adaptive learning rate", "adaptive optimizer", "learning rate grafting", "optimizer grafting", "adam", "sgd", "adagrad", "lars", "lamb", "openreview", "reviewer", "automatic learning rate", "learning rate decay", "learning rate warmup" ]
#grafting #adam #sgd 

The last years in deep learning research have given rise to a plethora of different optimization algorithms, such as SGD, AdaGrad, Adam, LARS, LAMB, etc., which all claim to have their special peculiarities and advantages. In general, all algorithms modify two major things: the (implicit) learning rate schedule, and a correction to the gradient direction. This paper introduces grafting, which allows transferring the induced learning rate schedule of one optimizer to another. In doing so, the paper shows that much of the benefit of adaptive methods (e.g. Adam) is actually due to this schedule, and not necessarily to the gradient direction correction. Grafting allows for more fundamental research into differences and commonalities between optimizers, and a derived version of it makes it possible to compute static learning rate corrections for SGD, which potentially allows for large savings of GPU memory. 

OUTLINE 
0:00 - Rant about Reviewer #2 
6:25 - Intro & Overview 
12:25 - Adaptive Optimization Methods 
20:15 - Grafting Algorithm 
26:45 - Experimental Results 
31:35 - Static Transfer of Learning Rate Ratios 
35:25 - Conclusion & Discussion 

Paper (OpenReview): https://openreview.net/forum?id=FpKgG31Z_i9 
Old Paper (Arxiv): https://arxiv.org/abs/2002.11803 
Our Discord: https://discord.gg/4H8xxDF 

Abstract: In the empirical science of training large neural networks, the learning rate schedule is a notoriously challenging-to-tune hyperparameter, which can depend on all other properties (architecture, optimizer, batch size, dataset, regularization, ...) of the problem. In this work, we probe the entanglements between the optimizer and the learning rate schedule. We propose the technique of optimizer grafting, which allows for the transfer of the overall implicit step size schedule from a tuned optimizer to a new optimizer, preserving empirical performance. This provides a robust plug-and-play baseline for optimizer comparisons, leading to reductions to the computational cost of optimizer hyperparameter search. Using grafting, we discover a non-adaptive learning rate correction to SGD which allows it to train a BERT model to state-of-the-art performance. Besides providing a resource-saving tool for practitioners, the invariances discovered via grafting shed light on the successes and failure modes of optimizers in deep learning. 

Authors: Anonymous (Under Review) 

Links: 
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick 
YouTube: https://www.youtube.com/c/yannickilcher 
Twitter: https://twitter.com/ykilcher 
Discord: https://discord.gg/4H8xxDF 
BitChute: https://www.bitchute.com/channel/yannic-kilcher 
LinkedIn: https://www.linkedin.com/in/ykilcher 
BiliBili: https://space.bilibili.com/2017636191 

If you want to support me, the best thing to do is to share out the content :) 

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): 
SubscribeStar: https://www.subscribestar.com/yannickilcher 
Patreon: https://www.patreon.com/yannickilcher 
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq 
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m 
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Alright, so I just got done making a video about this paper, and I was trying to upload it, so I looked at the OpenReview page and I read the first review, and I just thought I had to show you this. Now, you haven't seen the paper yet, but just look at this review. So the paper is about optimizer grafting; it's about transferring the learning rate of one optimizer to another optimizer. It has some experiments in it and proposes this algorithm to investigate sort of learning rate schedules. Main review. S1, which I guess is strength 1: A large amount of experiments is conducted and plenty of results shown in the appendix. S2: a novel optimizing mode of grafting to different optimizers is proposed. So you know a little bit about what's in the paper. Weakness 1: The paper structure is strange. I recommend to read some published proceedings to try to make this paper more clearly. What? Just to say, these are accomplished researchers, right, that are the authors of this paper; I'll actually show who the authors are. The structure is stra- I recommend reading, you know, read a bit. Maybe a book. Maybe, you know, you'll learn something. Weakness 2: Some form it may not be legal. Okay. Weakness 3: The theory is not reasonable. By the way, the paper proposes no theory. The theory is not reasonable. In other words, you just tell me you do it like this, but not why it's reasonable. Okay, I mean, that is a- even though the paper explains clearly why they do everything, that might be a criticism, like, you haven't really given a theoretical foundation for your reasons. But then: actually, I don't think Adam grafted onto SGD (so this is the new method they propose, it's SGD with the learning rate of Adam), actually, I don't think Adam grafted onto SGD will be better than Adam. Notice, this is what they show in the paper; they make experiments to show that this is the case. And it's not like this person has tried it out and has said, it doesn't work for me, or it doesn't work in this other paper. No, no, no, no, the entire thing that this person says is: I don't think this will happen. No reason- what? Why? What is this? This is the type of reviewer that people have to fight with. And then there's like some herbity, herbity, herbity, herbity. I'm sorry, if they show in the paper that this is the case, then either you claim they are lying, and/or you have conflicting evidence or anything like this, but simply sitting here saying, I don't think so? What? What? I mean, then we can- why? This is this. This is why I'm confused. In my view, this method is more like an SGD with multiplying a large constant to its gradient. I mean, at the end, that's what it is, but like, has this person actually read the paper? Weakness four: I have a question. That's a weakness. A weakness is: I have a question. How to compute the norms? How to compute these norms? It's norms. The paper clearly, like, they don't say it's L2 norms, but they clearly, you know, how do you compute the norm of a vector? Is this calculated with- this is answered in the paper. This is clearly answered throughout the paper. If not, figure one is a wrong example. Well, it is. So how is it a weakness if you have a question that is answered in the paper? And then weakness five: The results shown in tables are not strong enough. Right? A large amount of experiment is conducted and plenty of result is shown in the appendix, but the result shown is not strong enough. Well, what do you mean, not strong enough?
Like, not highly performant enough? Because that's not what the paper is about. Or not strong enough meaning not enough of them? Because, well, the other reviews, it's not like the other reviews are necessarily good reviews of the paper, but at least they have some criticism, like, hey, you know, you're not theoretically motivated or something like this, and they are a bit extensive. But this is what this is. You know, I guess if you're some company researcher and so on, your bonus might depend on a submission being accepted or not, which, you know, if you're at Google or so, I mean, you're doing well, right? But if you're a PhD student and you need to get papers accepted within a certain amount of years, and then you get an "I don't think that what you clearly show in the paper is the way it is, because I just pulled that out of somewhere"... Okay, enough of me ranting. Let's go into the paper. By the way, I make one mistake in the video, which is kind of similar to what this person does here. There is a diagram, I'm just gonna describe it right here, where I say that there's an arrow like this and an arrow like this, and I say, well, the combined update step would be something like in between the two, which is not the case. It would actually be one of the arrows, just rescaled. My error. Okay. Bye. Last thing, this is the best part, I almost forgot. Confidence: you are absolutely certain about your assessment. This is the highest score. This is the reviewer rating themselves. You are very familiar with the related work and checked the math and other details. Really? Because here it says "I'm confused" and "I have a question". The following is a community-inspired paper review, which means that we have talked about this paper in our Discord paper discussions. We do this regularly, and I can take a lot of good opinions from there and bring them into my videos. If you're interested in joining these paper discussions, join our Discord and watch the events channel. Hi there. Today, we're going to look at a paper by Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren and Cyril Zhang. But it is not the paper that you see right here. You see, this paper is called Disentangling Adaptive Gradient Methods from Learning Rates, and it's on arXiv with the authors. Allow me to present this paper right here, under review at ICLR with anonymous authors, that's called Learning Rate Grafting: Transferability of Optimizer Tuning. Now, suspiciously, the two papers have pretty much exactly the same content. So, you know, it's safe to assume that we might make an educated guess about who these authors might be. I'm going to review the obviously newer version, because newer is always better. So what is this paper about? This paper is about a technique called learning rate grafting. And grafting means that we transfer the learning rate from one optimizer to another optimizer. We have a bit of a graphic right here. So what we would do is we would take two different optimizers, think of things like SGD or Adam or something like this. These are fairly popular optimizers in deep learning. We would take one of them, and that one would give us the information of what the direction of our weight update is. So let's actually say SGD here is this purple one in this direction. You can see that we will follow, in general, the direction that SGD tells us to go. However, we don't do exactly what SGD tells us to do.
Instead, we take the learning step size, or the learning rate, from Adam, and we go that far. So one algorithm dictates where we go. The other algorithm dictates how far we go. And what this does is it implicitly transfers the learning rate schedule from one optimizer to another optimizer. And as a result of this, many, many things happen. So one simple thing that results from this is we're able to investigate some of the differences between the optimizers. Surprisingly, one of the things that this paper finds is that maybe, for the different optimizers, it's a bit, let's say, over-described, over-hyped what the differences really are between them. A lot of times it simply comes down to the learning rate schedule that the optimizers induce. And as soon as you transfer that to another optimizer, the other optimizer will perform just as well. So the differences between a lot of these optimizers might just come down to the learning rate schedule. Another thing that they can do is they can, for example, transfer these learning rate adaptations that one does to the other. And that gives you benefits in practice. For example, let's look at Adam. Adam maintains several buffers for every single parameter. Compare with SGD: for every parameter w, it has one thing; it essentially just updates that parameter. If you have SGD with momentum, then you also have the momentum buffer that it maintains. So for every parameter, there is a momentum entry, and as a gradient comes in, it updates the momentum buffer and uses that to update the weights. So essentially one buffer per parameter that we want to treat. Adam, on the other hand, maintains two extra buffers per parameter: an exponential moving average of the gradients (the first moment) and an exponential moving average of the squared gradients (the second moment). So it maintains multiple buffers per parameter, and that also means it has roughly double or triple the memory requirements of SGD, right? SGD, even with momentum, needs a lot less memory than Adam. And that's a big deal, because memory is one of the things that, especially on GPUs, is a limited commodity. So if you're able to reduce the amount of memory that your optimizers need, then that means that you can train bigger models, because now you have a bunch of free space. So what this grafting method allows you to do is it allows you to essentially run SGD just with the learning rate schedule of Adam, but without having to run Adam; you can simply transfer the learning rate schedule, or the adjustments to the learning rate, from Adam to SGD. And you know, that's a pretty cool thing. So we're going to look into how this paper does it and what it suggests. And it's a pretty straightforward paper. I think it's pretty short, pretty cool to read. So what exactly is grafting? They first do a little bit of an excursion into preliminaries, which essentially presents these adaptive methods.
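As an aside, to make that memory point concrete before the preliminaries, here is a rough sketch of the per-parameter optimizer state. This is my own illustration of the standard formulations (PyTorch is used purely for illustration), not anything from the paper.

import torch

# Rough sketch of per-parameter optimizer state (not from the paper):
# plain SGD keeps no extra state, SGD with momentum keeps one buffer,
# Adam keeps two (first- and second-moment estimates).

w = torch.randn(1000, 1000, requires_grad=True)  # one big parameter tensor

# SGD with momentum: one extra buffer, same shape as w
momentum_buf = torch.zeros_like(w)

# Adam: two extra buffers, each the same shape as w
exp_avg = torch.zeros_like(w)     # EMA of gradients (first moment)
exp_avg_sq = torch.zeros_like(w)  # EMA of squared gradients (second moment)

# So for N parameters, SGD with momentum stores about N extra floats,
# while Adam stores about 2N; that is the gap grafting lets you close.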
So if you look at SGD, what it does, pure plain SGD: its update rule, which they characterize as an algorithm A right here, takes in the current weights of the neural network, or whatever system you optimize, and the current gradient. So w are the weights, g is the gradient, both at time step t, and it will output the next weight; so A always gives you w t plus one. It will output the current weight minus a step size times the gradient: w_{t+1} = w_t - eta_t * g_t. This is classic gradient descent. Now this eta_t right here is a learning rate schedule. So even in gradient descent, people do learning rate schedules. Sometimes there is a bit of a warm-up, and then you might reduce it over time; maybe after some epochs it goes down, and so on. Or you might not, right? But these are usually handcrafted learning rate schedules. Now when you go to other things, such as Adam or AdaGrad or anything like this, of all of these, AdaGrad is probably the most simple. So the reasoning behind AdaGrad is the following. Say you have a loss landscape, which we are going to draw here as some sort of a topological plot, so every line is at sort of the same loss height, and this is the global optimum right here. You start out somewhere here, you calculate the gradient, and the gradient maybe goes in this direction, locally perpendicular to these iso-lines. That's pretty simple, right? You see, you go straight here. Even if you have some sort of a bit of a mistake at the beginning, because it's stochastic, you can see that in general you go downhill. However, what if the landscape doesn't look like this, but is actually really skewed in one of the dimensions? So it's really steep in one of the dimensions, and it's really flat in the other dimension. Now what happens here is that if you start off the same way, maybe with a little bit of noise, you're probably going to make a big step, and it's really steep, right? Since it's really steep in this direction, you're going to bounce over here, like really far. And then it's really steep in that direction, so you're going to bounce over here, really far. So because it's so steep in that direction, you're going to bounce around with way too big of a step size, just because one direction, this direction, is way steeper than this direction. So what do methods like AdaGrad do? AdaGrad flattens out this landscape by observing, I mean, the algorithm doesn't see the landscape, it only sees these points where you're at and the corresponding gradients. So what AdaGrad does is, it simply says, I'm going to look at these gradient steps, right, let's say I'm here, this is my gradient here, I'm going to look at what's the change in this direction, what's the change in that direction, and then I'm going to normalize by it. So the update rule for AdaGrad is something like: w_{t+1} = w_t - eta * g_t / sqrt(sum of squared gradients so far). The gradient gets scaled by the square root of the sum of squared gradients. So what this means is that I'll take all of the gradients that I've seen so far, I square them, and then I sum them all up. And this is element-wise, by the way, so these are vectors, and we are talking about diagonal AdaGrad. So in essence, what this says is that if I have my gradient vector here, I'll put a matrix in front of it, and every diagonal entry in this matrix is one divided by the square root of the sum of squared gradients I've seen so far in that coordinate. So it's a bit of a normalization.
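As a minimal sketch of diagonal AdaGrad as just described (my own illustration, not code from the paper; the small epsilon is a standard numerical-stability assumption):

import numpy as np

def adagrad_step(w, g, accum, lr=0.01, eps=1e-10):
    # accum is the running element-wise sum of squared gradients
    accum += g ** 2                       # only ever grows: implicit LR decay
    w -= lr * g / (np.sqrt(accum) + eps)  # per-coordinate rescaling
    return w, accum

The two comments mark exactly the two effects the paper wants to disentangle: the per-coordinate preconditioning and the implicit step size schedule.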
If my gradients in this particular direction were really large, I'll divide by a lot. If my gradients were really small, I'll divide by just a little bit. So you can see that it transforms a landscape like this to implicitly look much, much more well-conditioned. And you can even see, because we have a total sum right here that goes on with time, that there is even a little bit of a decreasing learning rate built in: because the square is always positive, we're simply going to add on to these buffers, and that means that we are going to decrease our learning rate implicitly over time. So here you can see two things. First, these preconditioners have their reasons for existing; second, and what's much more important, they introduce an implicit learning rate schedule. This thing right here is an implicit learning rate schedule. And all of these algorithms, like AdaGrad, Adam, and so on, introduce exactly that. So this part right here, that's the implicit learning rate schedule. And we're now wondering how much of the success of these optimizers comes from the fact that they do something like this right here, where they look at each of the coordinates and adapt with respect to how steep they are and so on, and how much simply comes from the fact that they say, well, now you need to go far, now you need to go not so far, now you need to make a big step, now you need to make a small step. So that's what we're wondering, and grafting allows us to answer these questions. So in grafting, what we do is we leave the optimizers as they are. So here we would leave SGD to do SGD. So again, we're at the start here, running out of colors to draw over top of one another, let's go with green. We're at the start right here. And let's say we've made a step, and now we want to go into this direction. SGD would make a big jump right here. And AdaGrad or Adam maybe would do two things: it would say, well, since this one direction is very steep, I'm not going to make that big of a step into that direction, I'll maybe make a smaller step, and I'll also adjust my direction. What grafting does is it says, okay, we're going to take your suggestion of how far we should go, but we're still going to go into the same direction that we originally went. So we're taking the step size that the one optimizer suggests, and we'll transfer it onto the direction of another optimizer. So this allows us to answer the question: what's really important here, the step size schedule, or the particular direction that these optimizers produce? And the answer is going to be the step size. So the grafting algorithm is detailed here. This is the simple version, which is, I believe, called global grafting. So you can see the notation we're going to take right here: M stands for the magnitude algorithm (I guess; I don't know, I've invented that reading), D stands for the direction algorithm, and M#D is the combined grafted algorithm. So what we're going to do is we're going to feed the same input, the current weight and the current gradient, to both of the algorithms. They will manage their internal states independently, but they will not yet update the weights; they will each simply suggest an update. What we'll then do is we'll look at two quantities: this right here, and this right here. So this here is W t plus one according to algorithm M, and this is W t plus one according to algorithm D.
And we're going to look at both of the steps that they would suggest, right? If we subtract the current weight, that is: what step do you suggest? And then what we do is we compute the norms of these steps, and we'll simply rescale the quantity of D right here by the ratio of these norms. If we rewrite this a little bit, you can see much more clearly what's going on. This is W_t plus the first norm, ||W_M - W_t||, times the second thing, (W_D - W_t), divided by its norm, ||W_D - W_t||. So the update is W_{t+1} = W_t + ||W_M - W_t|| * (W_D - W_t) / ||W_D - W_t||. There you can see that we'll take the direction of the D optimizer; and we take the direction because, by dividing by its norm, we normalize it, so this always has length one, right? So this is simply the direction of the step that the D optimizer would do. And we multiply it by the norm of the step that the M optimizer would do. Notice M only comes in here through this norm, so M has no influence on the direction that we go, while D has no influence on the magnitude of the step, because we always divide by its own magnitude. So that's the grafting algorithm. And they have some properties right here: you can graft an algorithm onto itself, it won't do anything; you can graft multiple algorithms, and so on; it's not commutative, yadda yadda yadda. It's not necessarily a descent method, which is interesting, but I guess irrelevant, because I consider that an edge case. And now they have one more trick up their sleeve, how they make it more interesting. Namely, what I described is global grafting, where it's just one global learning rate, right? These whole norms here are just one number at the end. They can also do this, for example, for each layer individually. So they divide up the parameters into layers and then do it for each layer individually. If they were to do it for each parameter individually, then it would not have any effect; it would just revert to being the M algorithm, right? That's what they say right here. If they do it for each parameter individually, they might as well just run M, because the magnitude of each parameter is dictated fully by M, and we don't really calculate a direction from D, because each of the entries is separately divided by itself, so D will just output a bunch of ones; the per-entry norms are just of size one. So yeah, that's the reason. In any case, that's a bit of pushing it to the limit. We can either do this globally, or we can do it for each layer individually. That's this partition parameter right here. So where does this go? Notice that we're still in the case where we need to run both algorithms simultaneously, right? So for each step, we have to consult SGD: what would you do? And then Adam: what would you do? And then we do the grafting between the two things, and then we maybe get this direction right here, we go on, we again ask both optimizers, we go on. In the experiments, they do a good job of controlling for the actual compute that they give to these experiments, and therefore you can make some assumptions. But one thing worrying me, just as a side note, is that Adam has, for example, this internal state, right? It accumulates the gradients into buffers and so on. And we make an update step that is not into the direction that these buffers would suggest.
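To make the bookkeeping concrete, here is a minimal sketch of one global grafting step, my own PyTorch-flavored reconstruction of the M#D rule above rather than the authors' code (the weight-cloning scheme and the epsilon are my assumptions):

import torch

def grafted_step(params, m_opt, d_opt, eps=1e-16):
    # One step of global grafting M#D (a sketch, not the authors' code).
    # m_opt supplies the step magnitude, d_opt supplies the step direction.
    # Assumes both optimizers were built over the same `params` and that
    # gradients have already been computed via backward().
    w0 = [p.detach().clone() for p in params]

    # Ask M for its proposed next iterate, then roll the weights back.
    m_opt.step()
    w_m = [p.detach().clone() for p in params]
    for p, w in zip(params, w0):
        p.data.copy_(w)

    # Ask D for its proposed next iterate.
    d_opt.step()
    w_d = [p.detach().clone() for p in params]

    # Global L2 norms of the two proposed steps, over all parameters.
    m_norm = torch.sqrt(sum(((a - b) ** 2).sum() for a, b in zip(w_m, w0)))
    d_norm = torch.sqrt(sum(((a - b) ** 2).sum() for a, b in zip(w_d, w0)))

    # w_{t+1} = w_t + ||step_M|| * step_D / ||step_D||
    scale = m_norm / (d_norm + eps)
    for p, w, wd in zip(params, w0, w_d):
        p.data.copy_(w + scale * (wd - w))

    # Note: m_opt and d_opt have each updated their internal state as if
    # their own proposed step had been taken, not the grafted one.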
So technically, these buffers are wrong for the path that we're taking; the buffers expect that we're going to take this path right here. And I'm not sure how much we actually miss due to that. I also don't know how we would easily correct it. I just wanted to say that the internal state is updated as if we were to actually take the step that the algorithm suggests; however, we're not going to take that step at the end. So this is a bit of a shady practice in this grafting algorithm. In any case, as we do run both at the same time, you can see right here, there's an experiment, experiments for implicit hyperparameter transfer, comparing hyperparameter search for SGD with momentum versus grafting with... and then M is SGD, sorry, so it's Adam grafted onto SGD. Is that true? Because it seems like D is SGD, right? It's always M#D, and SGD is at the end. Huh. Well, maybe that's wrong, I don't know. The way I understand it is that you have the trials with SGD, you have the trials with Adam, which is in blue right here, and then, if you take this grafting approach and you do Adam along with SGD, so you take the direction of SGD but the step size that Adam would use, you see that you almost get the same performance. In fact, in this particular case, SGD with the Adam step size even outperforms Adam by a tiny little bit. If you go to a higher batch size, that's no longer the case. But also here, you see that it seems to be that as soon as you get this step size right, not only can you not match it with any humanly chosen, let's say, step size of SGD, which would be all the gray stuff, but also immediately most of, or all of, the benefits of the Adam optimizer versus SGD vanish. So it really seems to be a thing of the step size. And as far as I understand it, that's the global grafting. Yeah, they do mention a bunch of times that this number right here, no, sorry, it's layer-wise grafting; they mention a bunch of times that this is higher than just using Adam. But I'm not sure how robust this is exactly, especially as you see here: if you go to the higher batch sizes, it is a different story. They also do some experiments with ResNets, which aren't as cool, like, they're not as performant. So here you see that a lot of the time they take SGD, which is a good algorithm for these types of problems. By the way, SGD was a bad algorithm for BERT; that's why they used it as the direction and grafted the learning rate onto it. In these particular cases, SGD is actually pretty good, and so is Adam, as you can see right here. And of the other algorithms, AdaGrad seems to be kind of bad. If they now graft SGD or Adam onto AdaGrad, which you can see here with the layer-wise or the global grafting, it helps a little bit, right? Compared to just AdaGrad. But it's not like it really gets into a highly performant region. So I guess the conclusion of this is that the step size schedule is an important parameter; it is part of why some of the optimization algorithms outperform others, though it might not be all of the reason. I guess that's the cautious thing you can say right here. They go into a little bit of analysis, for example, about this giving you a bit of new insight.
So for example, people have come up with this yellow learning rate schedule for SGD: there's a bit of a warm-up, and then there is just a decay after every few epochs, and so on. And if you transfer that to AdaGrad, so if you graft that onto AdaGrad, right, the trick is we don't transfer it directly; we don't simply say, well, these are the steps. We always ask both optimizers, and then the resulting learning rate schedule might be a different one from either of the two. And the cool thing is that here, the algorithm seems to really decide on this polynomial warm-up for AdaGrad, before then using this decay that comes from SGD. So it's pretty neat that it allows you to gain an insight into what these algorithms are doing. They do a last thing right here, where they say: can we get away with not running both algorithms at the same time? And that's what they do right here. So what is this? They take Adam (sorry, not AdaGrad) and they take SGD, and they run them for just 2000 steps. This is a very small number of steps, let's say, in the training of BERT; these are just the first few iterations. They run both, and what they do is they observe the norm ratio during grafting. So they do this grafting where they run both, and they observe the ratio of norms between what one and what the other would suggest. So essentially they do this grafting, and they observe how the step sizes of the two relate. And then they say, okay, we'll just take the median over these 2000 steps, and that is going to be our learning rate correction to SGD. So essentially we're asking: over these 2000 steps, how does the implicit step size of Adam compare to that of SGD? Maybe it's always 10 times higher for some layers, maybe it's 50 times higher for other layers; you can see they split this up into different layer types, like embeddings or self-attention and so on. And then they say, well, okay, from here on out let's just run SGD, only SGD, but always correct the step size by this ratio. And that actually works, apparently. So I don't think there's a plot necessarily right here, but you can see this is one of the results. So with Adam, you again get this 69.5; SGD is way worse, because this is BERT. But then the combination, as far as I understand it, that is this discovered per-layer learning rate correction, so that's one number per layer. Even then, SGD with this learning rate correction given by Adam is better than Adam itself. A little bit. Or is it? No, that one is grafted, sorry. I think this here is the one where they keep it constant, and that is not better, but at least it is the same. I hope the rounding was in their favor right here; otherwise they'd have added like one more digit and could claim that they're better. But in any case, it's pretty cool to see that the performance here jumps by quite a bit, and it's not that much worse than if you had executed Adam alongside, right? That's the 70.1. On the bottom here, they have different, even coarser quantizations of the corrections, which make the result worse most often. But it seems like if you get them exactly correct, then it can improve by a little bit. I'm not too big of a fan of these kinds of things: it shows that you can go simpler, but you have to hit it exactly right with this hyperparameter, and that defeats the purpose a little bit. In any case, I think there are two powerful things in this paper.
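A minimal sketch of that two-phase recipe, as my own reconstruction (the 2000-step window and the per-layer-group medians follow the description above; the function names and the shape of `grafted_run` are assumptions):

import torch
from collections import defaultdict

def discover_corrections(grafted_run, num_steps=2000):
    # Phase 1 (sketch): run grafting briefly and record, per layer group,
    # the ratio ||Adam step|| / ||SGD step||, then take the median.
    ratios = defaultdict(list)
    for step in range(num_steps):
        # grafted_run is assumed to yield {group_name: (m_norm, d_norm)} floats
        for group, (m_norm, d_norm) in grafted_run(step).items():
            ratios[group].append(m_norm / d_norm)
    return {g: torch.tensor(r).median().item() for g, r in ratios.items()}

def corrected_sgd_step(params_by_group, corrections, base_lr):
    # Phase 2 (sketch): from here on, plain SGD only; each layer group's
    # learning rate is scaled by its discovered constant. No Adam buffers.
    for group, params in params_by_group.items():
        lr = base_lr * corrections[group]
        for p in params:
            p.data.add_(p.grad, alpha=-lr)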
First of all, this can be used for investigating these optimizers, right? Because you can now see: aha, here is the exact effect that the step size schedule is having on one or the other optimizer. You can sort of mix the step size from one with the directional update rule of another one. The second one is that with something like this, where you simply, quickly observe how two optimizers stack up against each other, how they match each other in the step sizes they would suggest, maybe you need a little bit more memory at the beginning, because you execute both of them. However, you only need to do this for a small number of steps before you can go ahead, simply take what you learned, and save a whole bunch of memory. Because, as they do right here, from here on out they only execute SGD. No more Adam; the ratios are fixed, and they are per layer. So that's pretty cool and pretty powerful. I'm especially wondering how these things generalize. Can I take the ratios of one network and transfer them to another one with a slightly different architecture, maybe a bigger network, or a different problem, a different data set? This seems to be a pretty exciting future direction, because it makes everything a lot more efficient, if we simply know that, aha, embedding layer, okay, you know, let's just multiply that by 50, or something like this. And then lastly, a bit of a worry of mine: I don't know where we end up with what I said right here, that the internal state of the optimizer assumes we're taking a certain step, yet we take a different step. I don't know how that influences the entire grafting algorithm. They have a lengthy appendix with a lot of different results, if you want to go into that, but I don't want to go into it right here. In the conclusion, they say: we've introduced grafting, a binary operation which blends the behavior of two optimization algorithms, towards investigating the entanglements between widely used adaptive preconditioning rules and learning rate schedules, yada, yada, yada. Furthermore, we have shown that grafting can be used to extract standalone learning rate corrections, enabling us to train a transformer using SGD with momentum for the first time. Well, I guess people have been able to train them before, just not to satisfactory accuracies. We hope that this finding will stimulate further empirical research into the power of simple per-layer learning rate schedules. Okey-dokey. The empirical phenomena examined in this work seem to be unexplained by current theory. That is also an interesting point. We hope that the experiments enabled by grafting will aid in developing more robust beliefs about both adaptive methods and learning rate schedules, and guide future theoretical inquiry. Alright, theory people, here's something for you to explain. Alright, I hope you have enjoyed this overview of learning rate grafting. Sorry for de-anonymizing the paper right away, but yeah, that's a bit silly anyway. In any case, if you liked this, hit subscribe, smash like, get enough sleep, and I'll see you next time. Bye.
[ { "start": 0, "end": 5.64, "text": " Alright, so I just got done making a video about this paper and I was trying to upload" }, { "start": 5.64, "end": 12.040000000000001, "text": " it so I looked at the open review page and I read the first review and I just thought" }, { "start": 12.040000000000001, "end": 13.700000000000001, "text": " I had to show you this." }, { "start": 13.700000000000001, "end": 19.12, "text": " Now before you see the review of the paper, but just look at this review." }, { "start": 19.12, "end": 25.080000000000002, "text": " So the paper is about optimizer grafting, it's about transferring the learning rate" }, { "start": 25.08, "end": 30.279999999999998, "text": " of one optimizer to another optimizer that has some experiments in it and proposes this" }, { "start": 30.279999999999998, "end": 34.64, "text": " algorithm to investigate sort of learning rate schedules." }, { "start": 34.64, "end": 38.16, "text": " Main review, S1, which I guess is strength 1." }, { "start": 38.16, "end": 43.84, "text": " A large amount of experiments is conducted and plenty of results shown in the appendix." }, { "start": 43.84, "end": 49.36, "text": " As to a novel optimizing mode of grafting to different optimizers is proposed." }, { "start": 49.36, "end": 53.2, "text": " So you know a little bit about what's in the paper." }, { "start": 53.2, "end": 54.2, "text": " Weakness 1." }, { "start": 54.2, "end": 56.92, "text": " The paper structure is strange." }, { "start": 56.92, "end": 63.32000000000001, "text": " I recommend to read some published proceedings to try to make this paper more clearly." }, { "start": 63.32000000000001, "end": 65.28, "text": " What?" }, { "start": 65.28, "end": 71.4, "text": " Just to say these are accomplished researchers, right, that are the authors of this paper," }, { "start": 71.4, "end": 74.36, "text": " actually show who the authors are." }, { "start": 74.36, "end": 78.52000000000001, "text": " The structure is stra- I recommend reading, you know, read a bit." }, { "start": 78.52000000000001, "end": 79.9, "text": " Maybe a book." }, { "start": 79.9, "end": 83.48, "text": " Maybe you know, you'll learn something." }, { "start": 83.48, "end": 84.48, "text": " Weakness 2." }, { "start": 84.48, "end": 86.76, "text": " Some form it may not be legal." }, { "start": 86.76, "end": 88.16, "text": " Okay." }, { "start": 88.16, "end": 89.16, "text": " Weakness 3." }, { "start": 89.16, "end": 92.52000000000001, "text": " The theory is not reasonable." }, { "start": 92.52000000000001, "end": 95.56, "text": " By the way, the paper proposes no theory." }, { "start": 95.56, "end": 96.64, "text": " The theory is not reasonable." }, { "start": 96.64, "end": 104.16, "text": " In other words, you just tell me you do it like this, but not why it's reasonable." }, { "start": 104.16, "end": 109.32000000000001, "text": " Okay, I mean, that is a- even though the paper explains clearly why they do everything, that" }, { "start": 109.32, "end": 115.67999999999999, "text": " might be a criticism, like you haven't really given a theoretical foundation for your reasons," }, { "start": 115.67999999999999, "end": 122.88, "text": " but then actually, I don't think Adam grafted onto SGD, so this is the new method they propose," }, { "start": 122.88, "end": 125.08, "text": " it's SGD with the learning rate of Adam." }, { "start": 125.08, "end": 130.64, "text": " Actually, I don't think Adam grafted onto SGD will be better than Adam." 
}, { "start": 130.64, "end": 136.6, "text": " Notice, this is what they show in the paper, that they make experiments to show that this" }, { "start": 136.6, "end": 137.95999999999998, "text": " is the case." }, { "start": 137.96, "end": 144.04000000000002, "text": " And it's not like this person has tried it out and has said, it doesn't work for me," }, { "start": 144.04000000000002, "end": 146.12, "text": " or it doesn't work in this other paper." }, { "start": 146.12, "end": 152.16, "text": " No, no, no, no, the entire thing that this person says is, I don't think this will happen." }, { "start": 152.16, "end": 155.08, "text": " No reason- what?" }, { "start": 155.08, "end": 157.70000000000002, "text": " Why?" }, { "start": 157.70000000000002, "end": 158.70000000000002, "text": " What is this?" }, { "start": 158.70000000000002, "end": 163, "text": " This is a type of reviewers that people have to fight with." }, { "start": 163, "end": 165.76000000000002, "text": " And then there's like some herbity, herbity, herbity, herbity." }, { "start": 165.76, "end": 170.6, "text": " I'm sorry, if they show in the paper that this is the case, then either you claim they" }, { "start": 170.6, "end": 177.23999999999998, "text": " are lying, and or you have conflicting evidence or anything like this, but simply sitting" }, { "start": 177.23999999999998, "end": 180.32, "text": " here saying, I don't think so." }, { "start": 180.32, "end": 181.32, "text": " What?" }, { "start": 181.32, "end": 182.32, "text": " What?" }, { "start": 182.32, "end": 188.16, "text": " I mean, then we can- why?" }, { "start": 188.16, "end": 189.22, "text": " This is this." }, { "start": 189.22, "end": 191.28, "text": " This is why I'm confused." }, { "start": 191.28, "end": 197.08, "text": " In my view, this method is more like an SGD with multiplying a large constant to its gradient." }, { "start": 197.08, "end": 204.8, "text": " I mean, at the end, that's what it is, but like, has this person actually read the paper?" }, { "start": 204.8, "end": 205.8, "text": " Weakness four." }, { "start": 205.8, "end": 207.16, "text": " I have a question." }, { "start": 207.16, "end": 208.16, "text": " That's a weakness." }, { "start": 208.16, "end": 211.96, "text": " A weakness is I have a question." }, { "start": 211.96, "end": 214.52, "text": " How to compute the norms?" }, { "start": 214.52, "end": 217.56, "text": " How to compute these norms?" }, { "start": 217.56, "end": 218.56, "text": " It's norms." }, { "start": 218.56, "end": 223.76, "text": " The paper clearly, like they don't say it's L2 norms, but they clearly, you know, how" }, { "start": 223.76, "end": 228.64000000000001, "text": " do you compute the norm of a vector?" }, { "start": 228.64000000000001, "end": 231.52, "text": " Is this calculated with- this is answered in the paper." }, { "start": 231.52, "end": 234.96, "text": " This is clearly answered throughout the paper." }, { "start": 234.96, "end": 238.28, "text": " If not, figure one is a wrong example." }, { "start": 238.28, "end": 240.32, "text": " Well, it is." }, { "start": 240.32, "end": 245.68, "text": " So it's, how is it a weakness if you have a question that is answered in the paper?" }, { "start": 245.68, "end": 246.98000000000002, "text": " And then weakness five." }, { "start": 246.98, "end": 252.56, "text": " The results shown in tables are not strong enough, right?" }, { "start": 252.56, "end": 257.4, "text": " A large amount of experiment is conducted and plenty of result is shown in the appendix." 
}, { "start": 257.4, "end": 260.15999999999997, "text": " The result shown is not strong enough." }, { "start": 260.15999999999997, "end": 263.96, "text": " Well, what do you mean not strong enough?" }, { "start": 263.96, "end": 269.12, "text": " Like, not highly performant enough because that's not what the paper is about." }, { "start": 269.12, "end": 271.4, "text": " Not strong enough you mean not enough?" }, { "start": 271.4, "end": 276.96, "text": " Because, well, the other reviews, it's not like the other reviews are necessarily" }, { "start": 276.96, "end": 281.71999999999997, "text": " good reviews of the paper, but at least they have some criticism like, hey, you know, you're" }, { "start": 281.71999999999997, "end": 287.35999999999996, "text": " not theoretically motivated or something like this and they are a bit extensive." }, { "start": 287.35999999999996, "end": 292, "text": " But like, this is what this is." }, { "start": 292, "end": 297.71999999999997, "text": " You know, it's, it's, it's, I guess, you know, if you're some company researcher and so on," }, { "start": 297.71999999999997, "end": 302.52, "text": " you know, your bonus might depend on a submission being accepted or not, which, you know, if" }, { "start": 302.52, "end": 306.4, "text": " you're at Google or so, I mean, you're doing well, right?" }, { "start": 306.4, "end": 312.47999999999996, "text": " But if you're like a PhD student and you need to get papers accepted in within a certain" }, { "start": 312.47999999999996, "end": 319.08, "text": " amount of years, and then I don't think that what you clearly show in the paper is the" }, { "start": 319.08, "end": 323.59999999999997, "text": " way it is because I just pull it like somewhere out of here." }, { "start": 323.59999999999997, "end": 326.03999999999996, "text": " Okay, enough of me ranting." }, { "start": 326.03999999999996, "end": 327.28, "text": " Let's go into the paper." }, { "start": 327.28, "end": 328.84, "text": " By the way, I make one mistake." }, { "start": 328.84, "end": 333.64, "text": " I make one mistake in the paper, which is kind of similar to what the person is here." }, { "start": 333.64, "end": 339.4, "text": " So I, there is a diagram, I'm gonna just gonna describe it right here, where I where I say" }, { "start": 339.4, "end": 342.08, "text": " that there's an arrow like this and an arrow like this." }, { "start": 342.08, "end": 348, "text": " And I say, well, the combined update step would be something like in the in between," }, { "start": 348, "end": 350.24, "text": " which is is not the case." }, { "start": 350.24, "end": 353.91999999999996, "text": " It would be actually one of the arrows just rescaled." }, { "start": 353.91999999999996, "end": 355.68, "text": " Error top." }, { "start": 355.68, "end": 356.68, "text": " Okay." }, { "start": 356.68, "end": 357.68, "text": " Bye." }, { "start": 357.68, "end": 358.68, "text": " Last thing." }, { "start": 358.68, "end": 360.68, "text": " This is the best I forgot." }, { "start": 360.68, "end": 365.04, "text": " Confidence, you are absolutely certain about your assessment." }, { "start": 365.04, "end": 366.04, "text": " This is the highest score." }, { "start": 366.04, "end": 368.84000000000003, "text": " This is the reviewer rating themselves." }, { "start": 368.84000000000003, "end": 374.92, "text": " You are very familiar with the related work and checked the math and other details." }, { "start": 374.92, "end": 375.92, "text": " Really?" 
}, { "start": 375.92, "end": 383.74, "text": " Because here it says I'm confused and I have a question." }, { "start": 383.74, "end": 388.76, "text": " The following is a community inspired paper review, which means that we have talked about" }, { "start": 388.76, "end": 392.2, "text": " this paper in our discord paper discussions." }, { "start": 392.2, "end": 394.03999999999996, "text": " We do this regularly." }, { "start": 394.03999999999996, "end": 398.84, "text": " And I can take a lot of good opinions from there and bring them into my videos." }, { "start": 398.84, "end": 403.76, "text": " If you're interested in joining these paper discussions, join our discord and watch the" }, { "start": 403.76, "end": 404.76, "text": " events channel." }, { "start": 404.76, "end": 405.76, "text": " Hi there." }, { "start": 405.76, "end": 412.71999999999997, "text": " Today, we're going to look at a paper by Namon Agarwal, Rohan Anil, Elad Hassan, Tomer Corran" }, { "start": 412.71999999999997, "end": 414.15999999999997, "text": " and Cyril Chung." }, { "start": 414.15999999999997, "end": 417.2, "text": " But it is not the paper that you see right here." }, { "start": 417.2, "end": 421.84, "text": " You see this paper is called Disentangling Adaptive Gradient Methods from Learning Rates" }, { "start": 421.84, "end": 424.56, "text": " and it's on archive with the authors." }, { "start": 424.56, "end": 431.36, "text": " Allow me to present this paper right here under review at iClear with anonymous authors" }, { "start": 431.36, "end": 436.76, "text": " that's called Learning Rate Grafting Transferability of Optimizer Tuning." }, { "start": 436.76, "end": 441.44, "text": " Now, suspiciously the two papers have pretty much exactly the same content." }, { "start": 441.44, "end": 447.52, "text": " So you know, safe to assume that we might make an educated guess about who these authors" }, { "start": 447.52, "end": 448.52, "text": " might be." }, { "start": 448.52, "end": 453.8, "text": " I'm going to review the obviously newer version because newer is always better." }, { "start": 453.8, "end": 455.4, "text": " So what is this paper about?" }, { "start": 455.4, "end": 460.08, "text": " This paper is about a technique called learning rate grafting." }, { "start": 460.08, "end": 469.4, "text": " And grafting means that we transfer the learning rate from one optimizer to another optimizer." }, { "start": 469.4, "end": 472.4, "text": " We have a bit of a graphic right here." }, { "start": 472.4, "end": 481.08, "text": " So what we would do is we would take two different optimizers and think of things like SGD or" }, { "start": 481.08, "end": 483.62, "text": " Adam or something like this." }, { "start": 483.62, "end": 488.23999999999995, "text": " So these are fairly popular optimizers in deep learning." }, { "start": 488.23999999999995, "end": 495.08, "text": " We would take one of them and that one would give us the information of what the direction" }, { "start": 495.08, "end": 497.32, "text": " of updates of our weight is." }, { "start": 497.32, "end": 502.64, "text": " So let's actually say SGD here is this purple one in this direction." }, { "start": 502.64, "end": 509.15999999999997, "text": " You can see that will follow in general the direction that SGD tells us to go." }, { "start": 509.15999999999997, "end": 515.3199999999999, "text": " However, we don't go exactly what SGD, we don't do what SGD tells us to do." 
}, { "start": 515.3199999999999, "end": 522.12, "text": " Instead of we take the learning step size or the learning rate from Adam and we go that" }, { "start": 522.12, "end": 523.2, "text": " far." }, { "start": 523.2, "end": 526.24, "text": " So one algorithm dictates where we go." }, { "start": 526.24, "end": 530.16, "text": " The other algorithm dictates how far we go." }, { "start": 530.16, "end": 537.6, "text": " And what this does is it implicitly transfers the learning rate schedule from one optimizer" }, { "start": 537.6, "end": 540.2, "text": " to another optimizer." }, { "start": 540.2, "end": 544.76, "text": " And as a result of this, many, many things happen." }, { "start": 544.76, "end": 552.5600000000001, "text": " So one simple thing that results from this is we're able to investigate some of the differences" }, { "start": 552.5600000000001, "end": 554.6800000000001, "text": " between the optimizers." }, { "start": 554.68, "end": 563.2399999999999, "text": " Surprisingly, one of the things that this paper finds is that maybe the different optimizers," }, { "start": 563.2399999999999, "end": 568.8, "text": " it's a bit over, let's say over described over hyped what the differences really are" }, { "start": 568.8, "end": 570.4799999999999, "text": " between them." }, { "start": 570.4799999999999, "end": 575.76, "text": " A lot of times it simply comes down to the learning rate schedule that the optimizers" }, { "start": 575.76, "end": 577.02, "text": " induce." }, { "start": 577.02, "end": 581.8399999999999, "text": " And as soon as you transfer that to another optimizer, the other optimizer will perform" }, { "start": 581.8399999999999, "end": 583.3599999999999, "text": " just as well." }, { "start": 583.36, "end": 587.88, "text": " So the differences between a lot of these optimizers might just come down to the learning" }, { "start": 587.88, "end": 590.44, "text": " rate schedule." }, { "start": 590.44, "end": 595.88, "text": " Another thing that they can do is they can, for example, transfer these learning rate" }, { "start": 595.88, "end": 601.2, "text": " adaption adaption, sorry, adaptations that one does to the other." }, { "start": 601.2, "end": 605.22, "text": " And that makes it in practice." }, { "start": 605.22, "end": 606.76, "text": " That gives you benefits in practice." }, { "start": 606.76, "end": 609.52, "text": " For example, Adam, let's look at Adam." }, { "start": 609.52, "end": 616.1999999999999, "text": " Adam maintains three buffers for every single parameter." }, { "start": 616.1999999999999, "end": 625.76, "text": " So for let's, let's go SGD SGD for every parameter w, it has one." }, { "start": 625.76, "end": 628.62, "text": " It essentially just updates that parameter." }, { "start": 628.62, "end": 635.22, "text": " If you have SGD with momentum, then you also have the momentum parameter that it maintains." }, { "start": 635.22, "end": 640.74, "text": " So for every parameter, there is a momentum parameter, and then as a gradient comes in," }, { "start": 640.74, "end": 646.26, "text": " it updates the momentum parameter and that it uses that to update the weights." }, { "start": 646.26, "end": 652.5600000000001, "text": " So one buffer essentially per parameter that we want to treat." }, { "start": 652.5600000000001, "end": 655.12, "text": " Adam on the other hand maintains like three buffers." }, { "start": 655.12, "end": 663.32, "text": " I don't exactly remember what they all are, but they are like the squared sums of gradients." 
}, { "start": 663.32, "end": 670, "text": " And then they are somehow the current gradient squared, or some exponential moving average" }, { "start": 670, "end": 671.6800000000001, "text": " across that." }, { "start": 671.6800000000001, "end": 676.32, "text": " In any case, it maintains like three different buffers per parameter." }, { "start": 676.32, "end": 682.12, "text": " And that also means that it has like double at least double or three times the memory" }, { "start": 682.12, "end": 684.5600000000001, "text": " requirements of SGD, right?" }, { "start": 684.5600000000001, "end": 690.0400000000001, "text": " SGD even with momentum needs a lot less memory than Adam." }, { "start": 690.04, "end": 695.52, "text": " And that's a big deal because memory is one of the things that especially on GPUs is a" }, { "start": 695.52, "end": 697.48, "text": " limited commodity." }, { "start": 697.48, "end": 704.28, "text": " So if you're able to reduce the amount of memory that your optimizers need, then that" }, { "start": 704.28, "end": 709.68, "text": " means that you can train bigger models because now you have a bunch of free space." }, { "start": 709.68, "end": 717.52, "text": " So what this grafting method allows you to do is it allows you to essentially run SGD" }, { "start": 717.52, "end": 723.48, "text": " just for the learning rate schedule of Adam, but without having to run Adam, you can simply" }, { "start": 723.48, "end": 728.6, "text": " transfer the learning rate schedule or the adjustments to the learning rate from Adam" }, { "start": 728.6, "end": 730.04, "text": " to SGD." }, { "start": 730.04, "end": 732.74, "text": " And you know, that's a that's a pretty cool thing." }, { "start": 732.74, "end": 738.14, "text": " So we're going to look going to go look into how this paper does it what it suggests." }, { "start": 738.14, "end": 739.76, "text": " And it's pretty straightforward paper." }, { "start": 739.76, "end": 743.12, "text": " I think it's pretty, pretty short, pretty cool to read." }, { "start": 743.12, "end": 748.76, "text": " Yeah, so what is what exactly is grafting?" }, { "start": 748.76, "end": 754.5600000000001, "text": " They first do a little bit of an excursion into preliminaries." }, { "start": 754.5600000000001, "end": 760, "text": " And that essentially presents these adaptive optimizer these adaptive methods." }, { "start": 760, "end": 768.12, "text": " So if you look at SGD, what it does is it pure plain SGD, its update rule, which they" }, { "start": 768.12, "end": 774, "text": " characterize as an algorithm a right here that takes in the current weights of the neural" }, { "start": 774, "end": 779.72, "text": " network, or whatever system you optimize, and the current gradient, right, so w are" }, { "start": 779.72, "end": 786.36, "text": " the weights, g is the gradient, both at time step t, it will output for the next weight," }, { "start": 786.36, "end": 796.04, "text": " so a always gives you w t plus one, it will output the current weight minus a step size" }, { "start": 796.04, "end": 797.28, "text": " times the gradient." }, { "start": 797.28, "end": 799.92, "text": " This is classic gradient descent." }, { "start": 799.92, "end": 802.92, "text": " Now this right here is a learning rate schedule." }, { "start": 802.92, "end": 807.68, "text": " So even in gradient descent, people do learning rate schedules." 
}, { "start": 807.68, "end": 811.52, "text": " Sometimes there is a bit of a warm up and then you might reduce it over time, maybe" }, { "start": 811.52, "end": 814.92, "text": " after some epochs, I go down and so on." }, { "start": 814.92, "end": 817.0799999999999, "text": " Or you might not, right." }, { "start": 817.0799999999999, "end": 821.48, "text": " But these are usually handcrafted learning rate schedules." }, { "start": 821.48, "end": 827.96, "text": " Now when you go to other things such as Adam or AdaGrad or anything like this, of all of" }, { "start": 827.96, "end": 831.5600000000001, "text": " these AdaGrad is probably the most simple." }, { "start": 831.5600000000001, "end": 834.76, "text": " So the reasoning behind AdaGrad is the following." }, { "start": 834.76, "end": 840.12, "text": " If you have a loss landscape, which we are going to draw here as some sort of a topological" }, { "start": 840.12, "end": 847.84, "text": " plot, so every line is in sort of a same loss height, and this is the global optimum right" }, { "start": 847.84, "end": 848.84, "text": " here." }, { "start": 848.84, "end": 852.48, "text": " So you start out somewhere here, you calculate the gradient, the gradient maybe goes in this" }, { "start": 852.48, "end": 859.6, "text": " direction, so that's the local tangent to these ISO lines." }, { "start": 859.6, "end": 860.76, "text": " That's pretty simple, right." }, { "start": 860.76, "end": 863.2, "text": " You see you go straight here." }, { "start": 863.2, "end": 867.48, "text": " Even if you have some sort of a bit of a mistake at the beginning because it's stochastic," }, { "start": 867.48, "end": 871.1600000000001, "text": " you can see in general you go downhill." }, { "start": 871.1600000000001, "end": 878.2800000000001, "text": " However, what if the landscape doesn't look like this, but it actually looks like really" }, { "start": 878.28, "end": 882.56, "text": " skewed in one of the dimensions." }, { "start": 882.56, "end": 888.1999999999999, "text": " So it's really steep in one of the dimensions, it's really flat in the other dimension." }, { "start": 888.1999999999999, "end": 891.48, "text": " Now what happens here is that if you start off the same thing, maybe you have a little" }, { "start": 891.48, "end": 899.5799999999999, "text": " bit of noise, you tend to make, if you do this step, so if you look at this, what you're" }, { "start": 899.5799999999999, "end": 906, "text": " going to do is probably, you're going to make a big step into this, and then it's really" }, { "start": 906, "end": 907.1999999999999, "text": " steep, right." }, { "start": 907.2, "end": 910.6800000000001, "text": " Now it's really steep into this direction, so you're going to bounce over here, like" }, { "start": 910.6800000000001, "end": 912.24, "text": " really far." }, { "start": 912.24, "end": 916.32, "text": " And then it's really steep in that direction, so you're going to bounce over here really" }, { "start": 916.32, "end": 917.32, "text": " far." }, { "start": 917.32, "end": 923.6, "text": " So because it's so steep in that direction, you're going to bounce around with way too" }, { "start": 923.6, "end": 931.0400000000001, "text": " big of a step size, just because one direction, this direction, is way steeper than this direction." }, { "start": 931.0400000000001, "end": 934.2, "text": " So what do methods like AdaGrad do?" 
}, { "start": 934.2, "end": 941.2, "text": " AdaGrad flattens out this landscape by observing, I mean the algorithm doesn't see the landscape," }, { "start": 941.2, "end": 945.6800000000001, "text": " it only sees these points where you're at and the corresponding gradients." }, { "start": 945.6800000000001, "end": 951.6, "text": " So what AdaGrad does is, it simply says, I'm going to look at one of these gradient steps," }, { "start": 951.6, "end": 958, "text": " right, let's say I'm here, this is my gradient here, I'm going to look at what's the change" }, { "start": 958, "end": 962.1600000000001, "text": " in this direction, what's the change in this direction, and then I'm going to normalize" }, { "start": 962.1600000000001, "end": 963.2, "text": " by it." }, { "start": 963.2, "end": 970.5600000000001, "text": " So the update rule for AdaGrad is something like Wt plus one equals Wt minus some step" }, { "start": 970.5600000000001, "end": 979.1600000000001, "text": " size times the gradient, but now the gradient gets scaled by the sum of square gradients" }, { "start": 979.1600000000001, "end": 982.2800000000001, "text": " and the square root of that." }, { "start": 982.2800000000001, "end": 988.0400000000001, "text": " So what this means is that I'll take all of the gradients that I've seen so far, I square" }, { "start": 988.0400000000001, "end": 991.12, "text": " them and then I sum them all up." }, { "start": 991.12, "end": 997.48, "text": " And in essence, this is element-wise by the way, so these are vectors, and we are talking" }, { "start": 997.48, "end": 1002.28, "text": " about diagonal AdaGrad, so in essence what this says is that if I have my gradient vector" }, { "start": 1002.28, "end": 1012.16, "text": " here, I'll put a matrix in front of it and every entry in this matrix is one divided" }, { "start": 1012.16, "end": 1016, "text": " by the square of the gradients I've seen so far." }, { "start": 1016, "end": 1017.6, "text": " So it's a bit of a normalization." }, { "start": 1017.6, "end": 1023.62, "text": " If my gradients in this particular direction were really large, I'll divide by a lot." }, { "start": 1023.62, "end": 1027.98, "text": " If my gradients were really small, I'll divide by just a little bit." }, { "start": 1027.98, "end": 1035.48, "text": " So you can see that it transforms a landscape like this to implicitly look much, much more" }, { "start": 1035.48, "end": 1036.82, "text": " well-conditioned." }, { "start": 1036.82, "end": 1042.44, "text": " And you can even see, because we have a total sum right here that goes on with time, that" }, { "start": 1042.44, "end": 1047.92, "text": " there is a little bit of even a decreasing learning rate built in because the square" }, { "start": 1047.92, "end": 1052.6000000000001, "text": " is always positive, so we're simply going to add on to these buffers and that means" }, { "start": 1052.6000000000001, "end": 1058.2, "text": " that we are going to decrease our learning rate implicitly over time." }, { "start": 1058.2, "end": 1061.0800000000002, "text": " So here you can see two things." }, { "start": 1061.0800000000002, "end": 1068.92, "text": " You can see that these preconditioners, they have their reasons for existing first of all," }, { "start": 1068.92, "end": 1074.0800000000002, "text": " what's much more important, they introduce an implicit learning rate schedule." }, { "start": 1074.0800000000002, "end": 1081.28, "text": " This thing right here is an implicit learning rate schedule." 
}, { "start": 1081.28, "end": 1086.44, "text": " And all of these algorithms like AdaGrad, Adam, and so on, they introduce exactly that." }, { "start": 1086.44, "end": 1091.4, "text": " So this part right here, that's the implicit learning rate schedule." }, { "start": 1091.4, "end": 1100.88, "text": " And we're now wondering how much of the success of these optimizers comes from the fact that" }, { "start": 1100.88, "end": 1110, "text": " they do something like this right here, where they look at each of the coordinates and they" }, { "start": 1110, "end": 1113.42, "text": " adapt with respect to how steep they are and so on." }, { "start": 1113.42, "end": 1119.96, "text": " And how much simply comes from the fact that they say, well, now you need to go far, now" }, { "start": 1119.96, "end": 1124.64, "text": " you need to go not so far, now you need to make a big step, now you need to make a small" }, { "start": 1124.64, "end": 1125.64, "text": " step." }, { "start": 1125.64, "end": 1129.16, "text": " So that's what we're wondering." }, { "start": 1129.16, "end": 1132.32, "text": " And grafting allows us to answer these questions." }, { "start": 1132.32, "end": 1138.06, "text": " So in grafting what we do is we leave the optimizers as they are." }, { "start": 1138.06, "end": 1142.9, "text": " So here we would leave SGD to do SGD." }, { "start": 1142.9, "end": 1148.92, "text": " So again, we're at the start here, running out of colors to draw over top of one another." }, { "start": 1148.92, "end": 1150.92, "text": " Let's go with green." }, { "start": 1150.92, "end": 1153.64, "text": " We're at the start right here." }, { "start": 1153.64, "end": 1158.68, "text": " And we want to, let's say we've made the step, now we want to go into this direction, SGD" }, { "start": 1158.68, "end": 1162.6000000000001, "text": " would make a big jump right here." }, { "start": 1162.6000000000001, "end": 1167, "text": " And AdaGrad or Adam maybe would do two things." }, { "start": 1167, "end": 1173.96, "text": " It would say, well, since this one direction is very steep, I'm not going to make that" }, { "start": 1173.96, "end": 1176.16, "text": " big of a step into that direction." }, { "start": 1176.16, "end": 1179.88, "text": " I'll maybe make a smaller step and I also adjust my direction." }, { "start": 1179.88, "end": 1185.16, "text": " What grafting does is it says, okay, we're going to take your suggestion of how far we" }, { "start": 1185.16, "end": 1191.1200000000001, "text": " should go, but we're still going to go into the same direction that we originally went." }, { "start": 1191.1200000000001, "end": 1199.0400000000002, "text": " So we're taking the step size that the one optimizer suggests, and we'll transfer it" }, { "start": 1199.0400000000002, "end": 1202.1200000000001, "text": " onto the direction of another optimizer." }, { "start": 1202.1200000000001, "end": 1205.66, "text": " So this allows us to answer the question, what's really important here?" }, { "start": 1205.66, "end": 1211.44, "text": " The step size schedule or the direction, the particular direction that these optimizers" }, { "start": 1211.44, "end": 1213.5600000000002, "text": " produce." }, { "start": 1213.5600000000002, "end": 1216.72, "text": " And the answer is going to be the step size." }, { "start": 1216.72, "end": 1219.1200000000001, "text": " So the grafting algorithm is detailed here." 
}, { "start": 1219.1200000000001, "end": 1224.3200000000002, "text": " This is the simple version, which is, I believe called global grafting." }, { "start": 1224.3200000000002, "end": 1230.76, "text": " So you can see, we're going to note, we're going to take this right here, this notation." }, { "start": 1230.76, "end": 1238.08, "text": " So M stands for magnitude algorithm, I guess, I don't know, I've invented it." }, { "start": 1238.08, "end": 1246.32, "text": " D stands for direction algorithm, and M hash D is the combined grafted algorithm." }, { "start": 1246.32, "end": 1252.68, "text": " So what we're going to do is we're going to feed the same input, the current weight, and" }, { "start": 1252.68, "end": 1256.48, "text": " the current gradient to both of the algorithms." }, { "start": 1256.48, "end": 1262.08, "text": " They will manage their states, internal states independently, but yet they will not yet update" }, { "start": 1262.08, "end": 1266.64, "text": " the weights, they will simply suggest each an update." }, { "start": 1266.64, "end": 1271.92, "text": " What we'll then do is we'll look at two quantities, this right here, and this right here." }, { "start": 1271.92, "end": 1281.6, "text": " So this is the step that this here is Wt plus one, according to algorithm M. And this is" }, { "start": 1281.6, "end": 1285.92, "text": " Wt plus one, according to algorithm D." }, { "start": 1285.92, "end": 1289.48, "text": " And we're going to look at both of the steps that they would suggest, right?" }, { "start": 1289.48, "end": 1294.96, "text": " If we subtract this here, this is what step do you suggest?" }, { "start": 1294.96, "end": 1301.64, "text": " And then what we do is we compute the norms of these steps, and we'll simply normalize" }, { "start": 1301.64, "end": 1307.1200000000001, "text": " the quantity of D right here by the ratio of these norms." }, { "start": 1307.1200000000001, "end": 1310.88, "text": " If we rewrite this a little bit, you can see much more clearly what's going on." }, { "start": 1310.88, "end": 1325.94, "text": " This is Wt plus, and then I'll write the first norm here, Wm minus Wt, and then I'll write" }, { "start": 1325.94, "end": 1338.96, "text": " the second thing Wd minus Wt divided by the norm of Wd minus Wt." }, { "start": 1338.96, "end": 1350.6000000000001, "text": " So there you can see that we'll take the direction of the D optimizer, and we take the direction" }, { "start": 1350.6000000000001, "end": 1354.44, "text": " because by dividing by its norm, we normalize it." }, { "start": 1354.44, "end": 1357.48, "text": " So this always has length one, right?" }, { "start": 1357.48, "end": 1362.8400000000001, "text": " So this is simply the direction of the step that the D optimizer would do." }, { "start": 1362.84, "end": 1369.12, "text": " And we multiply it by the norm of the step that the M optimizer would do." }, { "start": 1369.12, "end": 1373.9599999999998, "text": " Notice M only comes in here through this norm, so M has no influence on the direction that" }, { "start": 1373.9599999999998, "end": 1380.12, "text": " we go, while D has no influence on the magnitude of the step, because we always divide by its" }, { "start": 1380.12, "end": 1382.48, "text": " own magnitude." }, { "start": 1382.48, "end": 1385.4399999999998, "text": " So that's the grafting algorithm." }, { "start": 1385.4399999999998, "end": 1388.34, "text": " And they have some properties right here." 
}, { "start": 1388.34, "end": 1394.32, "text": " You can graft an algorithm onto itself, it won't do anything, you can graft multiple" }, { "start": 1394.32, "end": 1397.9199999999998, "text": " algorithms and so on, it's not commutative, yadda yadda yadda." }, { "start": 1397.9199999999998, "end": 1403.3999999999999, "text": " It's not necessarily a descent method, which is interesting, but I guess irrelevant because" }, { "start": 1403.3999999999999, "end": 1405.9199999999998, "text": " I consider that an edge case." }, { "start": 1405.9199999999998, "end": 1412.32, "text": " And now they have one more trick up their sleeve, how they make it more interesting," }, { "start": 1412.32, "end": 1417.56, "text": " namely, this is what they call global grafting, where it's just one global learning rate," }, { "start": 1417.56, "end": 1418.56, "text": " right?" }, { "start": 1418.56, "end": 1425.84, "text": " These whole norms here, they are just one number at the end." }, { "start": 1425.84, "end": 1430.8, "text": " They can also do this, for example, for each layer individually." }, { "start": 1430.8, "end": 1437.46, "text": " So they divide up the parameters into layers and then do it for each layer individually." }, { "start": 1437.46, "end": 1446.62, "text": " If they were to do it for each parameter individually, then it would not have any effect." }, { "start": 1446.62, "end": 1451.8799999999999, "text": " So if they do it for each parameter individually, I think it would just revert to being the" }, { "start": 1451.8799999999999, "end": 1459.36, "text": " old, sorry, it would just revert to being the M algorithm, right?" }, { "start": 1459.36, "end": 1460.8799999999999, "text": " That's what they say right here." }, { "start": 1460.8799999999999, "end": 1465.6399999999999, "text": " If they do it for each parameter individually, they might as well just run M because the" }, { "start": 1465.6399999999999, "end": 1473.1999999999998, "text": " magnitude of each parameter is dictated by fully by M." }, { "start": 1473.2, "end": 1482.8400000000001, "text": " And we don't calculate the direction of D, because each of the entries is separately" }, { "start": 1482.8400000000001, "end": 1484.28, "text": " divided by itself." }, { "start": 1484.28, "end": 1487.1200000000001, "text": " So D will just output a bunch of ones." }, { "start": 1487.1200000000001, "end": 1490.6000000000001, "text": " So yeah, that's the reason." }, { "start": 1490.6000000000001, "end": 1493.1200000000001, "text": " And because the norms are just of size one." }, { "start": 1493.1200000000001, "end": 1497.3600000000001, "text": " In any case, that's a bit of pushing it to the limit." }, { "start": 1497.3600000000001, "end": 1502.8400000000001, "text": " We can either do this globally, or we can do it for each layer individually." }, { "start": 1502.84, "end": 1508.28, "text": " That's this partition parameter right here." }, { "start": 1508.28, "end": 1511.6799999999998, "text": " So what does this, where does this go?" }, { "start": 1511.6799999999998, "end": 1517.6799999999998, "text": " What they try is, notice that we're still in the case where we need to run both algorithms" }, { "start": 1517.6799999999998, "end": 1519.1999999999998, "text": " simultaneously, right?" }, { "start": 1519.1999999999998, "end": 1524.28, "text": " So for each step, we're here for each step, we have to consult SGD, what would you do?" }, { "start": 1524.28, "end": 1525.9199999999998, "text": " And then Adam, what would you do?" 
}, { "start": 1525.9199999999998, "end": 1528.72, "text": " And then we do the grafting between the two things." }, { "start": 1528.72, "end": 1531.3799999999999, "text": " And then we maybe get this direction right here." }, { "start": 1531.38, "end": 1535.6000000000001, "text": " We go on, we again ask both optimizers, we go on." }, { "start": 1535.6000000000001, "end": 1539.24, "text": " In the experiments, they do a good job of controlling for the actual compute that they" }, { "start": 1539.24, "end": 1543.0400000000002, "text": " give to these experiments." }, { "start": 1543.0400000000002, "end": 1545.7, "text": " And therefore, you can make some assumptions." }, { "start": 1545.7, "end": 1551.64, "text": " But one worrying thing about me just as a side note is that Adam has this, for example," }, { "start": 1551.64, "end": 1553.5200000000002, "text": " this internal state, right?" }, { "start": 1553.5200000000002, "end": 1558, "text": " So it has these, it accumulates the gradient into buffers and so on." }, { "start": 1558, "end": 1564.4, "text": " And we make an update step that is not into the direction that these buffers would suggest." }, { "start": 1564.4, "end": 1569.64, "text": " So technically, these buffers are wrong for the path that we're taking, the buffers expected" }, { "start": 1569.64, "end": 1572.36, "text": " that we're going to take this path right here." }, { "start": 1572.36, "end": 1580.56, "text": " And I'm not sure how much, how much, you know, we, how much we actually miss due to that." }, { "start": 1580.56, "end": 1583.48, "text": " I also don't know how we easily would correct it." }, { "start": 1583.48, "end": 1590.92, "text": " I would just wanted to say that the internal state is updated as if we were to actually" }, { "start": 1590.92, "end": 1593.64, "text": " take the step that the algorithm suggests." }, { "start": 1593.64, "end": 1596.6, "text": " However, we're not going to take that step at the end." }, { "start": 1596.6, "end": 1602.48, "text": " So this is a bit of a shady practice in this grafting algorithm." }, { "start": 1602.48, "end": 1607.56, "text": " In any case, as we do run both at the same time, you can see right here, so there's an" }, { "start": 1607.56, "end": 1615.52, "text": " experiment where experiments for implicit hyperparameter transfer comparing hyperparameter" }, { "start": 1615.52, "end": 1626.8799999999999, "text": " search for SGD with momentum versus grafting with, and then M is SGD, sorry, so it's Adam" }, { "start": 1626.8799999999999, "end": 1628.84, "text": " grafted onto SGD." }, { "start": 1628.84, "end": 1631.6, "text": " Is that, is that true?" }, { "start": 1631.6, "end": 1634.9199999999998, "text": " M, because it seems like D is SGD, right?" }, { "start": 1634.9199999999998, "end": 1637.28, "text": " It's always M hash D." }, { "start": 1637.28, "end": 1640.92, "text": " And then SGD is at the end." }, { "start": 1640.92, "end": 1642.76, "text": " Huh." }, { "start": 1642.76, "end": 1646.24, "text": " Well, maybe that's wrong." }, { "start": 1646.24, "end": 1648.24, "text": " I don't know." }, { "start": 1648.24, "end": 1656, "text": " As the way I understand it is that you have the trials with SGD, you have trial with Adam," }, { "start": 1656, "end": 1657.76, "text": " which is in blue right here." 
}, { "start": 1657.76, "end": 1664.44, "text": " And then if you take this grafting approach and you do Adam along with SGD, so you do" }, { "start": 1664.44, "end": 1671, "text": " the direction of SGD, but the step size that Adam would do, you see that you almost get" }, { "start": 1671, "end": 1673.48, "text": " the same performance." }, { "start": 1673.48, "end": 1680.44, "text": " In fact, in this particular case, SGD with the Adam step size even outperforms Adam like" }, { "start": 1680.44, "end": 1682.88, "text": " a tiny little bit." }, { "start": 1682.88, "end": 1685.6000000000001, "text": " If you go to a higher batch size, that's no longer the case." }, { "start": 1685.6000000000001, "end": 1693.88, "text": " But also here, you see that it seems to be that as soon as you get this step size right," }, { "start": 1693.88, "end": 1699.72, "text": " not only can you not match it with any humanly chosen, let's say step size of SGD, which" }, { "start": 1699.72, "end": 1707.0400000000002, "text": " would be all the gray stuff, but also immediately most of the, or all of the benefits of the" }, { "start": 1707.0400000000002, "end": 1710.1200000000001, "text": " Adam optimizer versus SGD vanish." }, { "start": 1710.1200000000001, "end": 1713.64, "text": " So it really seems to be a thing of the step size." }, { "start": 1713.64, "end": 1717.2800000000002, "text": " And as far as I understand it, that's the global grafting." }, { "start": 1717.2800000000002, "end": 1723.6000000000001, "text": " Yeah, they, they do make some, like they mentioned a bunch of times that this number right here," }, { "start": 1723.6, "end": 1727.4399999999998, "text": " no, it's layer wise, sorry, it's layer wise grafting." }, { "start": 1727.4399999999998, "end": 1733.32, "text": " They mentioned a bunch of times that this is higher than just using Adam." }, { "start": 1733.32, "end": 1739.24, "text": " But I'm not sure how exactly robust this is, especially as you see here, if you go to the" }, { "start": 1739.24, "end": 1745.4399999999998, "text": " higher batch sizes, it is a different, different story." }, { "start": 1745.44, "end": 1755, "text": " They also do some experiments with, with Resnets, which aren't as cool, like they're not as" }, { "start": 1755, "end": 1756, "text": " performant." }, { "start": 1756, "end": 1762.1200000000001, "text": " So here you see a lot of the times that they take SGD, which is a good algorithm for these" }, { "start": 1762.1200000000001, "end": 1764, "text": " types of problems." }, { "start": 1764, "end": 1766.76, "text": " By the way, SGD was a bad algorithm for Bert." }, { "start": 1766.76, "end": 1771.68, "text": " That's why they used it as the direction and grafted the learning rate onto it." }, { "start": 1771.68, "end": 1776, "text": " In these particular cases, SGD is actually pretty good and so is Adam, as you can see" }, { "start": 1776, "end": 1777.6000000000001, "text": " right here." }, { "start": 1777.6000000000001, "end": 1782.44, "text": " And the other algorithms, AdaGrad seems to be kind of bad." }, { "start": 1782.44, "end": 1788.8400000000001, "text": " If they now graft SGD or Adam onto AdaGrad, which you can see here with the layer wise" }, { "start": 1788.8400000000001, "end": 1793.4, "text": " or the global grafting, it helps a little bit, right?" }, { "start": 1793.4, "end": 1795.68, "text": " Compared to just AdaGrad." 
}, { "start": 1795.68, "end": 1802.68, "text": " But it's not like, it's not like that it really gets into a highly performant region." }, { "start": 1802.68, "end": 1812.3600000000001, "text": " So I guess the conclusions of this is that sometimes or is that the step size schedule" }, { "start": 1812.3600000000001, "end": 1814.76, "text": " is an important parameter." }, { "start": 1814.76, "end": 1823.04, "text": " It does, it is part of why some of the optimization algorithms outperform others." }, { "start": 1823.04, "end": 1826.76, "text": " It might not be all of the reason." }, { "start": 1826.76, "end": 1833.08, "text": " I guess that's a cautious thing you can say right here." }, { "start": 1833.08, "end": 1840.24, "text": " They go into a little bit of analysis, for example, about this giving you sort of new" }, { "start": 1840.24, "end": 1843.6, "text": " bit of new insights." }, { "start": 1843.6, "end": 1847.68, "text": " So for example, people have come up with this yellow learning rate schedule for SGD, there's" }, { "start": 1847.68, "end": 1854.76, "text": " a bit of a warm up, and then there is just a decay after every few epochs and so on." }, { "start": 1854.76, "end": 1859.8, "text": " And if you transfer that to AdaGrad, so if you graft that on AdaGrad, right, the trick" }, { "start": 1859.8, "end": 1864, "text": " is we don't transfer it, we don't simply say, well, these are the steps." }, { "start": 1864, "end": 1870.6200000000001, "text": " We always we ask both optimizers and then the resulting learning rate schedule might" }, { "start": 1870.6200000000001, "end": 1874.3600000000001, "text": " be a different one from either of the two." }, { "start": 1874.36, "end": 1881.6399999999999, "text": " And the cool thing is that here, the algorithm seems to really decide kind of on this polynomial" }, { "start": 1881.6399999999999, "end": 1889.26, "text": " warm up for AdaGrad before then using this decay that comes from SGD." }, { "start": 1889.26, "end": 1894.6799999999998, "text": " So it's pretty neat that it allows you to kind of gain an insight into what these algorithms" }, { "start": 1894.6799999999998, "end": 1896.08, "text": " are doing." }, { "start": 1896.08, "end": 1902.6799999999998, "text": " They do a last thing right here where they say, can we get away with not running both" }, { "start": 1902.68, "end": 1905.8400000000001, "text": " algorithms at the same time?" }, { "start": 1905.8400000000001, "end": 1911.02, "text": " And that's what they do right here." }, { "start": 1911.02, "end": 1912.8, "text": " So what is this?" }, { "start": 1912.8, "end": 1919.96, "text": " They take AdaGrad and they, no, they take Adam, sorry, they take Adam and they take" }, { "start": 1919.96, "end": 1924.66, "text": " SGD, and they run it for just 2000 steps." }, { "start": 1924.66, "end": 1930.64, "text": " This is very small number of steps, let's say, in training of BERT." }, { "start": 1930.64, "end": 1936.0800000000002, "text": " So these are just the first few iterations, they run both." }, { "start": 1936.0800000000002, "end": 1942.3400000000001, "text": " And what they do is they observe the norm ratio during grafting." }, { "start": 1942.3400000000001, "end": 1949.3600000000001, "text": " So they do this grafting where they run both and they observe the ratio of norms between" }, { "start": 1949.3600000000001, "end": 1954.5200000000002, "text": " what one and what the other one would suggest." 
}, { "start": 1954.52, "end": 1961.6399999999999, "text": " So essentially they do this grafting and they observe how the step sizes between the two" }, { "start": 1961.6399999999999, "end": 1963.6399999999999, "text": " relate." }, { "start": 1963.6399999999999, "end": 1970.04, "text": " And then they say, okay, we'll just take the median over these 2000 steps and that is going" }, { "start": 1970.04, "end": 1974.44, "text": " to be our learning rate correction to SGD." }, { "start": 1974.44, "end": 1982.2, "text": " So essentially we're saying we're going for 2000 steps, how does the learning rate of" }, { "start": 1982.2, "end": 1988.2, "text": " the implicit step size of Adam compare to SGD over these steps?" }, { "start": 1988.2, "end": 1992.76, "text": " Maybe it's always 10 times higher for some layers, maybe it's 50 times higher for other" }, { "start": 1992.76, "end": 1999.28, "text": " layers, you can see they split this up into different layer types like embeddings or self" }, { "start": 1999.28, "end": 2001.64, "text": " attention and so on." }, { "start": 2001.64, "end": 2007.8400000000001, "text": " And then they say, well, okay, so from here on out, let's just run SGD, only SGD, but" }, { "start": 2007.84, "end": 2013.08, "text": " always correct the step size by this ratio." }, { "start": 2013.08, "end": 2017.86, "text": " And that actually works apparently." }, { "start": 2017.86, "end": 2024.36, "text": " So I don't think there's a plot necessarily right here, but you can see this is one of" }, { "start": 2024.36, "end": 2026.5, "text": " the results." }, { "start": 2026.5, "end": 2033.4599999999998, "text": " So with Adam, you again get this 69.5 SGD is way worse because this is BERT." }, { "start": 2033.46, "end": 2041.42, "text": " But then the combination, as far as I understand it, that is this discovered per layer learning" }, { "start": 2041.42, "end": 2042.42, "text": " rate correction." }, { "start": 2042.42, "end": 2046.2, "text": " So that's one number per layer." }, { "start": 2046.2, "end": 2053.84, "text": " Even then SGD is better if you have this learning rate correction given by Adam than just Adam" }, { "start": 2053.84, "end": 2054.84, "text": " itself." }, { "start": 2054.84, "end": 2057.78, "text": " A little bit, but still it is." }, { "start": 2057.78, "end": 2058.78, "text": " Or is it not?" }, { "start": 2058.78, "end": 2060.64, "text": " No, this is grafted, sorry." }, { "start": 2060.64, "end": 2065.7599999999998, "text": " I think this is the one, this here is the one where they keep it constant." }, { "start": 2065.7599999999998, "end": 2069.7999999999997, "text": " And that is not better, but it is at least it is the same." }, { "start": 2069.7999999999997, "end": 2075.48, "text": " Like I hope the rounding was in their favor right here." }, { "start": 2075.48, "end": 2084.3599999999997, "text": " Otherwise they'd have like added like one digit and could claim that they're better." }, { "start": 2084.3599999999997, "end": 2090, "text": " But in any case, it's pretty cool to see that the performance here jumps by quite a bit." }, { "start": 2090, "end": 2095.52, "text": " And it's not that much worse as if you had executed Adam alongside, right?" }, { "start": 2095.52, "end": 2097.88, "text": " That's the 70.1." }, { "start": 2097.88, "end": 2105.08, "text": " On the bottom here they have different kind of even more quantizations, which make the" }, { "start": 2105.08, "end": 2107.94, "text": " result worse most often." 
}, { "start": 2107.94, "end": 2112.6, "text": " But it seems like if you get them exactly correct, then it can improve by a little bit." }, { "start": 2112.6, "end": 2115.7, "text": " Not too big of a fan of these kinds of things." }, { "start": 2115.7, "end": 2122.2, "text": " It shows that you can go simpler, but you have to kind of hit it exactly right with" }, { "start": 2122.2, "end": 2123.68, "text": " this hyperparameter." }, { "start": 2123.68, "end": 2126.72, "text": " And that defeats the purpose a little bit." }, { "start": 2126.72, "end": 2130.7999999999997, "text": " In any case, I think this is two powerful things from this paper." }, { "start": 2130.7999999999997, "end": 2136.12, "text": " First of all, this can be used for investigating these optimizers, right?" }, { "start": 2136.12, "end": 2142.8199999999997, "text": " Because you can now see, aha, here is the exact effect that the step size schedule is" }, { "start": 2142.8199999999997, "end": 2145.48, "text": " having on one or the other optimizer." }, { "start": 2145.48, "end": 2152, "text": " You can sort of mix the step size from one with the directional update rule of another" }, { "start": 2152, "end": 2153, "text": " one." }, { "start": 2153, "end": 2160.32, "text": " The second one is that something like this, where you simply quickly observe how two optimizers" }, { "start": 2160.32, "end": 2166.08, "text": " stack up against each other, match each other in the step sizes they would suggest, maybe" }, { "start": 2166.08, "end": 2171.96, "text": " you need a little bit more memory at the beginning because you execute both of them." }, { "start": 2171.96, "end": 2178.44, "text": " However, you only need to do this for a few number of steps before you can then go ahead" }, { "start": 2178.44, "end": 2183.16, "text": " and simply take what you learned and save a whole bunch of memory." }, { "start": 2183.16, "end": 2189.8, "text": " Because as they do right here, they only from here on out, they only execute SGD." }, { "start": 2189.8, "end": 2194, "text": " No more Adam, the ratios are fixed and they are per layer." }, { "start": 2194, "end": 2197.48, "text": " So that's pretty cool and pretty powerful." }, { "start": 2197.48, "end": 2200.7200000000003, "text": " Especially I'm wondering how these things generalize." }, { "start": 2200.72, "end": 2210.2, "text": " So can I take sort of these, can I take the ratios of one network and transfer them to" }, { "start": 2210.2, "end": 2215.56, "text": " another one with a slightly different architecture, maybe a bigger network or a different problem," }, { "start": 2215.56, "end": 2216.72, "text": " a different data set." }, { "start": 2216.72, "end": 2223.04, "text": " So this seems to be a pretty exciting future direction, because it makes everything a lot" }, { "start": 2223.04, "end": 2224.4399999999996, "text": " more efficient." }, { "start": 2224.4399999999996, "end": 2230.04, "text": " If we simply know that, aha, embedding layer, okay, you know, let's just multiply that by" }, { "start": 2230.04, "end": 2233.7599999999998, "text": " 50 or something like this." }, { "start": 2233.7599999999998, "end": 2239.6, "text": " And then lastly, this is a bit of my worry is that I don't know where we go if we if" }, { "start": 2239.6, "end": 2244.2799999999997, "text": " what I said right here, the internal state of the optimizer assumes we're taking a certain" }, { "start": 2244.2799999999997, "end": 2247.24, "text": " step yet we take a different step." 
}, { "start": 2247.24, "end": 2253, "text": " I don't know how that influences the entire grafting algorithm." }, { "start": 2253, "end": 2258.44, "text": " They have a lengthy appendix, if you want to go into that of a lot of a lot of different" }, { "start": 2258.44, "end": 2260.52, "text": " results right here." }, { "start": 2260.52, "end": 2265.2400000000002, "text": " And, but I don't want to go into that right here." }, { "start": 2265.2400000000002, "end": 2269, "text": " In the conclusion, they say we've introduced grafting binary operation, which blends the" }, { "start": 2269, "end": 2273.7200000000003, "text": " behavior of two optimization algorithms towards investigating the entanglements between widely" }, { "start": 2273.7200000000003, "end": 2281.8, "text": " used adaptive preconditioning rules and learning rate schedules, yada, yada, yada." }, { "start": 2281.8, "end": 2287, "text": " Furthermore, we have shown that grafting can be used to extract standalone learning rate" }, { "start": 2287, "end": 2293.24, "text": " corrections enabling us to train a transformer using SGD with momentum for the first time." }, { "start": 2293.24, "end": 2299.56, "text": " Well, I guess people have been able to train them before just not to a satisfactory to" }, { "start": 2299.56, "end": 2302.4, "text": " satisfactory accuracies." }, { "start": 2302.4, "end": 2306.72, "text": " We hope that this finding will simulate further empirical research power of simple per layer" }, { "start": 2306.72, "end": 2308.76, "text": " learning rate schedules." }, { "start": 2308.76, "end": 2310.9, "text": " Okey-dokey." }, { "start": 2310.9, "end": 2315.82, "text": " The empirical phenomena examined in this work seem to be unexplained by current theory." }, { "start": 2315.82, "end": 2317.88, "text": " That is also an interesting point." }, { "start": 2317.88, "end": 2321.96, "text": " We hope that the experiments enabled by grafting will aid in developing more robust beliefs" }, { "start": 2321.96, "end": 2327.4, "text": " about both adaptive methods and learning rate schedules and guide future theoretical inquiry." }, { "start": 2327.4, "end": 2331.32, "text": " Alright, theory people, here's something for you to explain." }, { "start": 2331.32, "end": 2337.6000000000004, "text": " Alright, I hope you have enjoyed this overview of learning rate grafting." }, { "start": 2337.6000000000004, "end": 2345.32, "text": " Sorry for de-anonymizing the paper right away, but yeah, that's a bit silly anyway." }, { "start": 2345.32, "end": 2353.48, "text": " In any case, if you liked this, hit subscribe, smash like, get enough sleep, and I'll see" }, { "start": 2353.48, "end": 2354.48, "text": " you next time." }, { "start": 2354.48, "end": 2376.84, "text": " Bye." } ]
Nq3auVtvd9Q
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Classic] ImageNet Classification with Deep Convolutional Neural Networks (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "classic", "alexnet", "hinton", "geoff hinton", "imagenet", "convolution", "convolutional neural network", "architecture", "dropout", "data augmentation", "cnns", "computer vision", "image classification", "object recognition", "classifier", "max pool", "pretraining", "deep neural networks" ]
#ai #research #alexnet AlexNet was the start of the deep learning revolution. Up until 2012, the best computer vision systems relied on hand-crafted features and highly specialized algorithms to perform object classification. This paper was the first to successfully train a deep convolutional neural network on not one, but two GPUs and managed to outperform the competition on ImageNet by an order of magnitude. OUTLINE: 0:00 - Intro & Overview 2:00 - The necessity of larger models 6:20 - Why CNNs? 11:05 - ImageNet 12:05 - Model Architecture Overview 14:35 - ReLU Nonlinearities 18:45 - Multi-GPU training 21:30 - Classification Results 24:30 - Local Response Normalization 28:05 - Overlapping Pooling 32:25 - Data Augmentation 38:30 - Dropout 40:30 - More Results 43:50 - Conclusion Paper: http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. Authors: Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar (preferred to Patreon): https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there! Today we'll look at ImageNet Classification with Deep Convolutional Neural Networks by Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton. This paper is another installment of our historical paper overview, where we go through kind of old papers that were or weren't very impactful and see what people knew at the time already, how this developed and so on. Of course this paper here, also known as AlexNet, was the one that started the deep learning revolution, so to say, or at least contributed in large part to it. It was the first paper that showed that you could train these very deep neural networks, and very deep here is a relative term, but the first one that showed that you could actually use CUDA, GPUs, to train those large networks efficiently, and it won the ImageNet competition that year, and it did so by a very very large margin. So it kind of shook the world, because previously computer vision was still doing hand-engineered features and then using some kind of classifiers on top of those. This paper basically changed everything. So we'll go through the paper and we'll see what was already known, and especially I always enjoy with these papers seeing how the choices that people made back then pulled through to today, sort of which arbitrary choices that Alex Krizhevsky made right here are we still doing today, and what have we learned since then. So the paper is written relatively straightforwardly, I have to say. It's a good read if you want to read it, and it sort of gives you a little bit of an intuition of how much work must have gone into this, which is I guess a lot. So they start off by saying that current approaches to object recognition make essential use of machine learning methods. This was also new, right? Object recognition wasn't always learned. You could even build object recognizers in a non-learned way, like matching templates and so on. Machine learning was still just one of the methods used, and of course today it's the method used. To improve their performance we can collect larger datasets, learn more powerful models, and use better techniques for preventing overfitting. Until recently, datasets of labeled images were relatively small, on the order of tens of thousands of images. So this is especially NORB, or here CIFAR-10 or CIFAR-100; these are relatively small datasets with relatively small images as well, like CIFAR-10 is 32 by 32 pixels. So they're saying that on these small datasets you can solve it with classical computer vision models, but if you have larger datasets, and especially more realistic datasets with bigger resolution and so on, you need bigger models. So they say: but objects in realistic settings exhibit considerable variability; to learn to recognize them, it is necessary to use much larger training sets. And they say that this ImageNet dataset is one of those larger datasets; it consists of 15 million labeled high-resolution images in over 22,000 categories. People keep forgetting this, and I am included in that group of people, that the ImageNet dataset is actually much larger than the one we think of when we talk of ImageNet. When we speak of ImageNet, we think of the ImageNet that has a thousand classes and about one or one and a half million images. However, that's only a subset of the much, much larger ImageNet dataset with many, many more categories.
It's just that the ImageNet competitions were performed on this subset, because I guess people thought, well, a thousand classes and a million images is already plenty, so we'll do that. So that's, I guess, how that came to be. So their argument right here is: to learn about thousands of objects from millions of images, we need a model with a large learning capacity. However, the immense complexity of the object recognition task means that this problem cannot be specified even by a dataset as large as ImageNet, so our model should also have lots of prior knowledge to compensate for all the data we don't have. So their main argument for using neural networks is that the size of the dataset is so large, therefore we need a large model. Granted, they already recognize the inherent connection between large models and a lot of complex data, but conversely they say, well, even if we have that much data, the task we are trying to solve, object recognition, is way more complicated than the amount of data we have. So our model should also have lots of prior knowledge to compensate for all the data we don't have. Remember, at this time convolutional neural networks weren't really known to do anything. I guess they were used for handwritten digit recognition and so on and were kind of on par with other methods. However, it wasn't obviously clear that you would use them for image recognition. So here they have to make an argument to convince people that, okay, we can use neural networks for this task because they have such a high capacity. However, neural networks, feed-forward neural networks, are already too powerful: they don't know anything about the data, everything's connected to everything, and they argue right here that our model should have lots of prior knowledge. So they allude to this: convolutional neural networks constitute one such class of models. Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images, namely stationarity of statistics and locality of pixel dependencies. So their argument here is that the convolutional operation is such a strong prior, one that is mostly consistent with what we know about images, that CNNs are very well suited to computer vision. Again, something that was not as abundantly clear at the time as it is right now. It's interesting to see how they get to this point where they say we need lots of capacity, but we also need a model with lots of prior knowledge, and of course CNNs fit that very well. So they go into the problems of CNNs: despite their attractive qualities and despite the relative efficiency of their local architecture, they are prohibitively expensive to apply at large scale to high-resolution images. Luckily, current GPUs paired with a highly optimized implementation of 2D convolution are powerful enough to facilitate the training of interestingly large CNNs, and recent datasets such as ImageNet contain enough labeled examples to train such models without severe overfitting. So overfitting was also still very much at the forefront of people's minds back then. Right now we don't really care about overfitting that much anymore.
Basically we figured out that if we just build large enough models, we don't overfit, which is strange in itself, like this double descent phenomenon and so on. But overfitting was still very much at the forefront of people's minds, and they do a lot of things here to prevent overfitting, which gives them kind of a boost in the test accuracy, though it might actually not have been the overfitting that they were combating. So they do, for example, data augmentation already in this paper, and they always allude to how this is to prevent overfitting. However, we know nowadays that it might not be the overfitting that's combated by data augmentation; it might actually have something to do with regularizing your function, making it more smooth, and so on. So you just see how, coming from a classical machine learning perspective, overfitting was like the number one, or one of the number one, problems in classical machine learning, in SVMs and things like this. So it's safe to say that they thought: if we build these large models, we're gonna have a huge overfitting problem. And yeah, so that's why this pulls through right here. Also, I guess one of the main contributions of this paper is to show how to combine this CNN training with GPUs. Also not at all clear at the time: it was known that you could do computation on GPUs, but the fact that these are very capable for training these CNNs, or generally neural networks, wasn't something that was known at the time. So this paper basically showed that if you use a GPU, you can get that much faster, and that makes it possible to train these big neural networks. Again, right here: the size of our network made overfitting a significant problem, even with 1.2 million labeled training examples, so we used several effective techniques for preventing overfitting, and we'll look at those. And at the end they say: the network's size is limited mainly by the amount of memory available on current GPUs and by the amount of training time that we are willing to tolerate. Our network takes between five and six days to train on two GTX 580 GPUs. All of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger datasets to become available. And I mean, that proved to be absolutely true. We don't necessarily have bigger datasets right now, though we do, but certainly with faster GPUs and bigger GPUs these networks became better simply by increasing their depth, and as you know, then ResNets came along, increasing the depth by an order of magnitude, and that gave another boost to computer vision. Alright, so they talk about the ImageNet dataset here, and the main point about the ImageNet dataset right here is the fact that the images are plenty. There are over a million training images in this subset with a thousand classes, which was a very big deal. CIFAR-10 had 10 classes, CIFAR-100 had a hundred classes, and that was already a lot; a thousand classes is like unheard of before this dataset. I guess not unheard of, but yeah, and a million training images, completely crazy. And not only was it a lot of images, the resolution was also really big, in the order of 256 by 256, whereas previous methods all worked on something like 32 by 32, so it's definitely a challenging dataset; even today it's a challenging dataset.
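Going back to the data augmentation mentioned above, here is a minimal sketch of the paper's crop-and-flip scheme (random 224 by 224 patches and their horizontal reflections taken from the 256 by 256 images); the function name and NumPy formulation are mine, not the authors' code:

```python
import numpy as np

def crop_and_flip(img, out=224):
    """Random crop plus horizontal flip of the kind AlexNet uses:
    a random out x out patch from an HxWxC image, mirrored
    left-to-right with probability 1/2."""
    h, w, _ = img.shape
    top = np.random.randint(0, h - out + 1)   # random crop origin
    left = np.random.randint(0, w - out + 1)
    patch = img[top:top + out, left:left + out]
    if np.random.rand() < 0.5:                # mirror half the time
        patch = patch[:, ::-1]
    return patch
```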
Alright, so the architecture. There's this famous graphic right here of the AlexNet architecture. So briefly, they describe these convolutional layers right here; as you can see, there's max pooling already, and they have dense layers at the end. They generally increase the number of feature maps while decreasing the resolution with max pooling. So all of this has sort of been kept until today; I guess they also took it from earlier work on convolutional neural networks that generally found this to be a good idea. And the important part here that is kind of special to AlexNet is, you can see, there are these two different pipelines. And Alex, for cutting off this part right here: I mean, you just know how it went, this has to fit eight pages, we have like three lines too much, how can we fit the three lines, we've already cropped everything, let's just cut off the top half here, it's essentially the same as the bottom. Yeah, so space constraints and PDFs for conference submissions ruining yet another paper. Alright, but you can see there is this two-column architecture right here. This network was so large that it didn't fit on one GPU, so they had to split it onto two GPUs with occasional intercommunication. You can see here there is intercommunication between the two GPUs, and there is also no intercommunication right here on this layer. This was very intricate; that was one thing that really didn't hold until today, I guess until now, with things like, I don't know, GShard or so, where you have different weights on different GPUs. Again, I guess the invention of bigger GPUs made that sort of superfluous. But just imagine the amount of code they had to write. There was no TensorFlow at this point; I don't think there was even Caffe around. There was just CUDA. And yeah, just this cross-GPU memory writing, I just imagine this to be so, so ugly, and big respect for writing all of this code. Alright, so they go through a number of important things, and most of the things here aren't their invention, let's say, but they cleverly combine things that were already known about neural networks and things that were maybe developed elsewhere that they found to work really well. So the first one is the ReLU non-linearity. Now, of course, ReLU is nowadays abundant, everyone uses ReLU non-linearities, but at that time it was still very much in fashion to use something like the sigmoid right here or the hyperbolic tangent. And why is that? Because the neural networks were still kind of inspired by the neurons, where you had the soma of the neuron and then the dendrites with the input axons, and then you would sum up all the incoming signals. So in the true neuron you have this kind of curve where, if the input rises above this border right here, the action potential, maybe, I don't know what the English term is, then if it rises above that, the neuron would start to spike, and if it's below that, it wouldn't. So people wanted to approximate this using some sort of differentiable function, something very similar to this step function, and that ultimately led to something like a sigmoid or a hyperbolic tangent. So people trying to stay close to biological neurons did this, but that gives you the problem that in this region and in this region right here you have almost no gradient to learn from. So you can see that they argue that, in terms of training time with gradient descent, the saturating non-linearities, so the hyperbolic tangent and the sigmoid, are much slower than the non-saturating non-linearity.
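As a quick numerical illustration of that saturation argument (just a sketch, not from the paper): the gradient of tanh dies off once inputs get large in magnitude, while the ReLU gradient stays at one wherever the unit is active:

```python
import numpy as np

x = np.array([-6.0, -2.0, 0.5, 6.0])

# derivative of tanh: 1 - tanh(x)^2 -- nearly zero once |x| is large
print(1 - np.tanh(x) ** 2)    # [~2.5e-05, 0.0707, 0.7864, ~2.5e-05]

# derivative of relu: 1 wherever the unit is active, 0 otherwise
print((x > 0).astype(float))  # [0., 0., 1., 1.]
```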
Following Nair and Hinton, they refer to neurons with this non-linearity as rectified linear units. So, taken from this other paper, they say, okay, we use these ReLUs, which are not exactly like real biological neurons, but they train much faster. And of course ReLUs are used until this day. So you can see right here, this is on CIFAR-10, and they measure the time to reach 25% training error; this here is with the ReLUs and this here is with the hyperbolic tangent, and it takes much longer with the hyperbolic tangent; in particular, it is six times faster with the ReLUs. And they say that's one of the main components that allows them to learn this fast, to even experiment with these big networks, because their entire training time is six days, right? But they probably didn't train it only once; they experimented with it and saw what works. So if you have a couple of months of time and it takes you a week to train one of these things, you can't afford a six-times slowdown, because that would mean you can only train like two models in the entire course of research, and that would severely hinder your progress. Now we are at the point where that becomes true again with these giant, giant transformer language models, where people can train it once, and then, you know, like GPT-3, they say, oh, we discovered a bug halfway through and we've kind of fixed it, but we're not sure; we couldn't restart because it was too expensive. Yeah, maybe we're waiting for a moment; I'm still saying we're waiting for the ResNet moment in transformers. But yeah, ReLUs, not introduced here but used here, have been prevailing until today. Training on multiple GPUs: something, as I said, that didn't really carry forward from here, at least not this kind of GPU training. So if we train on multiple GPUs today, what we mean is that we have our model, and then we replicate it on multiple GPUs like this, and then we take a mini-batch from the training data and we simply split it up, let each GPU do its thing on its subset of the mini-batch, and then at the end calculate the loss, back-propagate the gradients, and synchronize the gradients between the GPUs, so we have one model that is on both GPUs. Here, they distribute one model across two GPUs. And I'm also thinking that with frameworks like GShard this could potentially have a revival right here, this kind of distributing your model, especially within the same layer, across many GPUs, and then having cross-communication only at some points. So their argument is: this GPU only has three gigabytes of memory, which limits the maximum size of the networks that can be trained on it. It turns out that 1.2 million training samples are enough to train networks which are too big to fit on one GPU; therefore we spread the net across two GPUs. Current GPUs are particularly well suited to cross-GPU parallelization, as they are able to read from and write to one another's memory directly without going through the host machine. Okay, so this means that, sorry, here they say: the parallelization scheme that we employ essentially puts half the kernels, or neurons, on each GPU, with one additional trick: the GPUs communicate only in certain layers. That means that, for example, the kernels of layer 3 take input from all kernel maps in layer 2; however, the kernels in layer 4 take input only from the kernel maps in layer 3 which reside on the same GPU.
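This restricted connectivity pattern survives today as grouped convolution. Here is a sketch of what the layer-2-to-3 versus layer-3-to-4 wiring looks like in modern terms; this is my rendering with PyTorch's groups argument, not the original two-GPU CUDA code, with channel counts taken from the figure:

```python
import torch
import torch.nn as nn

# Layer 2 -> layer 3: every kernel sees all kernel maps of the
# previous layer, i.e. this is a layer with cross-GPU communication.
cross = nn.Conv2d(256, 384, kernel_size=3, padding=1)

# Layer 3 -> layer 4: groups=2 splits input and output channels into
# two halves that never mix -- each "GPU" keeps its own kernels, and
# each output map connects to only 192 inputs instead of 384.
split = nn.Conv2d(384, 384, kernel_size=3, padding=1, groups=2)

x = torch.randn(1, 256, 13, 13)   # 13x13 maps, as in the later layers
h = cross(x)                      # (1, 384, 13, 13)
print(split(h).shape)             # torch.Size([1, 384, 13, 13])
```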
So, a very, very interesting choice right here, and they justify it: the results say this scheme reduces our top-1 and top-5 error rates by 1.7% and 1.2% respectively, as compared with a net with half as many kernels in each convolutional layer trained on one GPU; the two-GPU net takes slightly less time to train than the one-GPU net. So first of all, I have to say big respect right here. Like, I can imagine they did this with the ReLUs and stuff and were already better than previous methods. Because, just to go to the results, they beat the error rates of previous models by a ginormous amount. So this is what they knew right here; this is on the 2010 ImageNet split. The previous best ones were at around 28, 25 percent, and here their best one is at 17 percent top-5 error rate. I'm gonna imagine that they trained it first and were already better than the 25 percent, and I guess lots of people would just call it a day, would be like, oh cool, we have this entirely new method; not only did we show that we can train it, we actually showed that it's better, and badaboom, I have a 0.1 percent better error rate, and everything else can be a separate paper. No, they stuck with it and they pushed it. So for each of these things right here they say, oh, this reduces the error rate by 1%, this reduces the error rate by 2%, and really, they went about it asking: how far can we push this with everything? I mean, just imagine you come and you train a network; I'm pretty sure they first trained on one GPU, and then they thought, oh, you know, maybe we can train an even bigger network by using two GPUs, and then they realized what it's gonna take: a crap ton of dumb code to cross-synchronize and keep them in lockstep and blah blah blah. It's not even easy to write multi-GPU code today with all the frameworks; just imagine that. And for them, having already observed that their network does better than everything that came before, to sit down and do the cross-GPU thing, to experiment with okay, when do we cross-communicate, and whatnot, that is very, very respectable right here. So maybe a lesson to be learned, or just the mentality of the people; maybe they just had more time, they were like, okay, it's still like two months until this competition deadline, I don't know. But this is not something that I see today very often, this kind of persistence and additional pushing and reporting of what works in these kinds of things. I mean, some papers do it, but most papers do it because only with all the tricks can they get that 0.1 percent improvement; this one already had the improvement and did it anyway. Okay, but multi-GPU training, like splitting the model across GPUs, didn't really stick around, mainly because, I guess, the GPUs got larger in memory pretty quickly, so it wasn't that necessary, but also, I guess, because the frameworks were just too clunky. And now maybe with GShard this is coming back, so worth another shot, I guess. Next one: local response normalization. This also didn't really stick around; it got kind of dumped in favor of things like batch normalization, but with the resurfacing of things like layer normalization, this comes back again a little bit. So what they say is that they want to kind of normalize the response of these ReLUs. So what they do is, each response, which is this a here, is normalized by the following quantity, and it's the responses of the other neurons around them, or of the other kernels around them, and you can see the sum is over this weird quantity right here. So what does it mean? They have a bunch of convolutional filters, and these are the activations, so these are the feature maps after the convolution. So if I have like 10 convolutional filters in my layer, this is going to be the output. The way they normalize is, they normalize each filter, sorry, each output channel, by, you see here, dividing by the summed squared responses of the channels around them. So let's say the five channels, so two channels in front of them and two channels behind them: they take the sum across these, and then for another channel right here, for this one, you would take the five around that. This isn't really something that stuck around, I guess mainly because of the really dynamic situation right here. What people do today is they have things like layer normalization, which simply averages across all of the channels, or they have group normalization, which predefines these groups, like, here are two groups, and we only normalize within each group, and the groups are also always the same. This kind of dynamic normalization across neighboring filters, as I said, didn't really stick around; not really sure why, but I guess it was just easier to implement it otherwise, or it just worked better. Again, here they say it was motivated, well, right: this scheme bears some resemblance to the local contrast normalization scheme of that other work, but ours would be more correctly termed brightness normalization, since we do not subtract the mean activity. And, oh, they make a connection to biological neurons, where is it: this sort of response normalization implements a form of lateral inhibition, inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels. Okay, so kind of inspired by real neurons, but also kind of inspired by other people doing some kind of normalization. So people already knew that normalization was helpful at times, and this is what they employed right here, again reducing the top error rates by 1.4 and 1.2 percent respectively; so not a big improvement, but still an improvement.
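Here is a minimal NumPy sketch of that scheme, with the constants k, n, alpha, beta set to the values reported in the paper; the function name and shapes are my choice (PyTorch also ships a ready-made torch.nn.LocalResponseNorm if you want the packaged version):

```python
import numpy as np

def local_response_norm(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """Normalize each channel by the summed squares of the n
    neighbouring channels, as described above. `a` has shape
    (channels, height, width)."""
    c = a.shape[0]
    out = np.empty_like(a)
    for i in range(c):
        # window of up to n channels centered on channel i,
        # clipped at the channel boundaries
        lo, hi = max(0, i - n // 2), min(c - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        out[i] = a[i] / denom
    return out
```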
Next up: local response normalization. This also didn't really stick around; it got kind of dumped in favor of things like batch normalization, but with the resurfacing of things like layer normalization it is coming back a little bit. What they want to do is normalize the responses of these ReLUs. Each response, this a_i here, is normalized by the quantity below it, which involves the responses of the other kernels around it; you can see the sum runs over this odd-looking index range. So what does it mean? They have a bunch of convolutional filters, and these activations are the feature maps after the convolution. If I have, say, ten convolutional filters in my layer, those feature maps are the output, and they normalize per output channel: each channel is divided by a term that grows with the summed squared responses of the channels around it. Take a neighborhood of five channels, say: for one channel, that's the two channels in front of it and the two behind it; for another channel, it's the five around that one, and so on. This isn't really something that stuck around, I guess mainly because of this dynamic, sliding neighborhood. What people do today is layer normalization, which simply normalizes across all of the channels, or group normalization, which predefines the groups, like: here are two groups, and we only normalize within a group, and the groups are always the same. This dynamic normalization across neighboring filters, as I said, didn't stick around; I'm not really sure why, but I guess the alternatives were just easier to implement, or simply worked better. Again, it was well motivated by prior work: this scheme bears some resemblance to the local contrast normalization scheme of Jarrett et al., but ours would be more correctly termed brightness normalization, since we do not subtract the mean activity. And they make a connection to biological neurons: this sort of response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels. So kind of inspired by real neurons, but also by other people already doing some kind of normalization; people knew that normalization was helpful at times, and this is the form they employed right here, again reducing the top-1 and top-5 error rates, by 1.4 and 1.2 percent respectively. Not a big improvement, but still an improvement.
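Since the formula reads awkwardly in prose, here is a minimal sketch of the scheme in PyTorch, following the paper's equation with its published constants (k = 2, n = 5, alpha = 1e-4, beta = 0.75); the per-channel loop is written for clarity, not speed.

import torch

def local_response_norm(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
    # a: (batch, channels, height, width) activations after the ReLU.
    # Each channel is divided by a term that grows with the summed squared
    # activations of the n neighboring channels (clipped at the edges).
    squared = a ** 2
    channels = a.shape[1]
    out = torch.empty_like(a)
    for c in range(channels):
        lo = max(0, c - n // 2)
        hi = min(channels - 1, c + n // 2)
        denom = (k + alpha * squared[:, lo:hi + 1].sum(dim=1)) ** beta
        out[:, c] = a[:, c] / denom
    return out

x = torch.randn(2, 96, 55, 55).relu()
print(local_response_norm(x).shape)  # torch.Size([2, 96, 55, 55])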
The last trick: overlapping pooling, again a thing that didn't really stick around much. Instead of a pooling layer that pools 2x2 with a stride of 2, like we do today, pooling the image down to a smaller size, you can pool with overlapping windows. In their case they pool with a 3x3 window but still use a stride of 2, so the windows overlap, resulting in the same output size, but each output pixel now carries some overlapping information from the pixels around it. Again, they report that it reduces the top-1 and top-5 error rates, by 0.4 percent and 0.3 percent. Maybe this one didn't stick around because, I'm not sure, maybe people found it doesn't help on other problems; who knows.
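A quick sketch of the difference, again in PyTorch; note that on a 55 by 55 map, a 3x3 window with stride 2 produces the same 27 by 27 output as the non-overlapping 2x2 version, the windows just share pixels.

import torch
import torch.nn as nn

x = torch.randn(1, 96, 55, 55)
plain = nn.MaxPool2d(kernel_size=2, stride=2)(x)    # non-overlapping windows
overlap = nn.MaxPool2d(kernel_size=3, stride=2)(x)  # AlexNet's overlapping windows
print(plain.shape, overlap.shape)
# torch.Size([1, 96, 27, 27]) torch.Size([1, 96, 27, 27])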
So the overall architecture, as we said, is described in this picture right here. You have the input image, which has three channels, and they use convolutional filters with a stride of four at the beginning to reduce the size: the input is 224 by 224, and after the first convolution it's 55 by 55 with 48 feature maps per GPU. You can already see, as we said before, that the number of feature maps keeps increasing while the resolution of the image keeps decreasing. The stride-4 convolution is already employed here to downsample the image at the same time as convolving it; nowadays a lot of architectures will not use max pooling at all, but always this kind of strided convolution, to downsample while convolving. What you also see is that they thought the filter size should be large at the beginning and then decrease, which is a reasonable assumption, because on higher-resolution inputs you'd expect to need larger filters. This didn't really carry through to today: most architectures now go with 3x3 kernels from the very start and don't bother shrinking their filters with depth. I don't really know why, whether it's just more convenient, or fewer parameters, or whether there's really something to small filters; I just know that large filters at the beginning didn't hold up over time. You can also see that they have multiple dense layers at the end, three of them; I believe most architectures today go with two instead, one hidden layer and then one classification layer. But it's very close to the architectures of today; it hasn't changed that much. The difference between this and the VGG-16 or VGG-19 networks is basically just depth, and the difference between those and a ResNet is just the skip connections, and that's where we are today. So, honestly, not that much has changed. They also allude to the fact that, even though the drawing doesn't suggest it, most parameters sit in these dense layers; a convolutional layer here is like one percent of the parameters, even though it takes up a lot of space in the figure. So maybe the later reduction in the number of classification layers also has to do with the fact that that's where most parameters are: if you get rid of one of those dense layers, you can afford many, many more convolutional layers.
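Two of these claims are easy to check numerically: the stride-4 stem really does take 224 by 224 down to 55 by 55 (with a padding of 2, which is how torchvision's later AlexNet reimplementation makes the sizes work out), and a single dense layer dwarfs any convolution in parameter count. A sketch, with sizes taken from the paper:

import torch
import torch.nn as nn

# The stem: an 11x11 convolution with stride 4 downsamples while convolving.
stem = nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2)
print(stem(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 96, 55, 55])

# The first dense layer of the paper maps 256*6*6 features to 4096 units.
first_dense = nn.Linear(256 * 6 * 6, 4096)
print(sum(p.numel() for p in stem.parameters()))         # 34,944
print(sum(p.numel() for p in first_dense.parameters()))  # 37,752,832

That one dense layer alone holds well over half of the network's 60 million parameters.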
All right, the last part here is on reducing overfitting. Again, they didn't really investigate whether their network was actually overfitting, like properly establishing it; or maybe they did, and maybe it really was overfitting, but we just don't care about overfitting that much anymore. Maybe that's because we already use these augmentations naturally, but also because we build such deep models that we somehow trust them to generalize. I'm not sure whether they worried about it so much because of the history of machine learning, or whether they genuinely saw everything overfitting constantly. They write: our neural network architecture has 60 million parameters; although the thousand classes make each training example impose 10 bits of constraint on the mapping from image to label, this turns out to be insufficient to learn so many parameters without considerable overfitting; below, we describe the two primary ways in which we combat overfitting. Nobody makes this argument anymore today, this bookkeeping of: we have 60 million parameters, we have 1.2 million images and a thousand classes, so how many parameters per sample is that, how many bits of constraint, and so on. We don't care; we're fine with having a billion times more parameters than training samples; we don't worry about it anymore. So the first thing they do is data augmentation. This was already known; again, lots of these things were already known, but the combination is just so cool in this paper. First, they note that the transformed images are generated in Python code on the CPU while the GPU is training on the previous batch of images, so these data augmentation schemes are, in effect, computationally free. Again, that code must have been ugly. The first form of data augmentation consists of generating image translations and horizontal reflections: they do this by extracting random 224 by 224 patches, and their horizontal reflections, from the 256 by 256 images. These are still the most valuable data augmentations we have today: random horizontal flipping is used in every computer vision pipeline, except if you want to read text, I guess, and random cropping is still the most powerful data augmentation technique for images. It's remarkable that this was already discovered here. I don't think they report how much this particular step improves things; they only quantify the next one, but I'm going to guess this was one of the vital pieces for pushing the performance, because we now know cropping is very important. I suspect they thought translation was the important part, so they focused on generating image translations, and to generate a translation from a single image you naturally have to crop it. Nowadays we focus much more on the cropping itself, on having different sub-images of the same image; especially in self-supervised learning, we know that cropping is the workhorse of these methods. Since they extract random patches, their network only ever operates on these sub-patches, and they compensate at test time: the network makes a prediction by extracting five patches, the four corner patches and the center patch, as well as their horizontal reflections, and averaging the predictions made by the network's softmax layer on the ten patches. I believe people don't do this much nowadays; most of the time, they simply rescale the test images, or fine-tune at the end on the training image scale; there are various techniques for this. But random cropping and horizontal flipping were already employed right here, and so was a form of color jittering, a very particular form: altering the intensities of the RGB channels in training images. Specifically, they perform PCA on the set of RGB pixel values throughout the ImageNet training set; to each training image, they add multiples of the found principal components, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with zero mean and standard deviation 0.1. This, I believe, has gone out of fashion: people do color jitter and brightness jitter and so on, but I don't think they do this particular PCA-based augmentation anymore. They say this scheme reduces the top-1 error rate by over one percent. I wonder why it didn't survive; maybe because you need these statistics over the entire data set, while the other techniques work about as well and can be applied without knowing your principal components.
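Here is a sketch of those augmentations; the crop and flip use standard torchvision transforms, while the PCA color jitter is a small hand-rolled function. The eigenvalues and eigenvectors are assumed to be precomputed once over the training set's RGB pixel covariance (the function takes them as arguments and assumes eigenvectors stored as columns).

import numpy as np
from torchvision import transforms

# Training-time crop and flip, as in the paper: random 224x224 patches of the
# 256x256 images, plus horizontal reflections.
train_transform = transforms.Compose([
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
])

def fancy_pca(image, eigvals, eigvecs, sigma=0.1):
    # image: float HxWx3 array in [0, 1]; eigvals (3,) and eigvecs (3, 3) from
    # a PCA of all RGB pixel values in the training set, computed offline.
    # Add multiples of the principal components, scaled by eigenvalue times a
    # Gaussian draw with standard deviation 0.1, to every pixel.
    alphas = np.random.normal(0.0, sigma, size=3)
    shift = eigvecs @ (alphas * eigvals)
    return np.clip(image + shift, 0.0, 1.0)

(For test-time evaluation, torchvision even ships the paper's ten-patch scheme as transforms.TenCrop.)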
Okay, next thing: dropout. Dropout was one of the most important techniques throughout the early stages of deep learning, and it isn't that important anymore; some people still use it, but most, I think, don't. It's very interesting to see, because it definitely was used a lot, from AlexNet until the last very few years. They write: combining the predictions of many different models is a very successful way to reduce test errors, but it appears to be too expensive for big neural networks that already take several days to train; there is, however, a very efficient version of model combination that only costs about a factor of two during training. That's this technique called dropout, which they then explain: set to zero the output of each hidden neuron with probability 0.5. People didn't know about dropout then as they do now, but they deployed it right here. They don't say by how much it reduces the error, but they do say: we use dropout in the first two fully connected layers; without dropout, our network exhibits substantial overfitting; dropout roughly doubles the number of iterations required to converge. So they did actually find evidence of overfitting and saw that dropout reduces it, and I wonder why this doesn't happen nowadays. Maybe because we have fewer of these fully connected layers, though I can't quite believe that; maybe because we do more augmentation, I don't know. Or maybe dropout is still used and I just don't see it.

They use momentum to train this, and then they do some qualitative analysis. First of all, they shatter all of the previous approaches; they also build ensembles, and they already do transfer learning: they pre-train on ImageNet 2011 and then fine-tune on ImageNet 2012 to push the error down even further. Pulling all the tricks, and all these things are still around; very cool. Then they look into what their network learned. They find a number of these filters, the 11 by 11 filters in the first layer, and it was kind of already known that neural networks extract filters like this, color gradients or edge detectors in various forms and directions, but it's cool to see that this one does too. This next one is also a very cool investigation, where they look at examples; the red bar is always the correct label, and the bars are what the model says are the top five classes. It's fun to look at: for example, here you have mite as the top prediction, but then also black widow, cockroach, tick, starfish; the top labels are usually also very plausible labels. You can see here a grille, for which the network says convertible, which by all means is correct; it's just not the class the annotators assigned to this particular image. Likewise, Dalmatian was the network's highest prediction where the label was actually cherry, which is quite debatable. So a lot of the mistakes the network makes are, you know, forgivable, let's say, and where the network makes no mistake, not only is the top label good, but a lot of the top five labels are also very adequate. Lastly, they take a handful of images, shown in the first column right here, look at the last layer's feature vector, and find the nearest neighbors in Euclidean space across the training data set, and here's what you come up with: for the elephant, the nearest neighbors are all other elephants, and mind you, in different poses; they don't always look the same way, and the same holds for these dogs right here. So it's pretty cool to see that the network actually learns some invariances within a class and puts images with the same label into the same region of the embedding space.

Yeah, so that's their paper. They already allude to the fact that depth is very important: it is notable that our network's performance degrades if a single convolutional layer is removed; for example, removing any of the middle layers results in a loss of about 2 percent for the top-1 performance of the network; so the depth really is important for achieving our results. As you know, this spurred an area of research trying to build deeper and deeper networks, until ResNets came along and built ultra-deep networks. They also say: we did not use any unsupervised pre-training, even though we expect that it will help, especially if we obtain enough computational power to significantly increase the size of the network without obtaining a corresponding increase in the amount of labeled data; thus far, our results have improved as we have made our network larger and trained it longer, but we still have many orders of magnitude to go in order to match the infero-temporal pathway of the human visual system; ultimately, we would like to use very large and deep convolutional nets on video sequences, where the temporal structure provides very helpful information that is missing or far less obvious in static images. So they are already previewing future research, with self-supervision, many more layers, and so on; astounding foresight, and of course all of these turned out to be very adequate predictions. And yeah, so this was the paper that kicked off deep learning. I enjoy reading these kinds of old papers, especially looking back at what was already known, what is still around, which turns out to be a lot, and the choices that people made back then, some of which defined our modern field. So that was it for AlexNet. Let me know what you think in the comments, and I'll see you next time. Bye.
[ { "start": 0, "end": 5.1000000000000005, "text": " Hi there! Today we'll look at ImageNet classification with deep convolutional" }, { "start": 5.1000000000000005, "end": 10.9, "text": " neural networks by Alex Kruschevsky, Ilya Sutskever and Jeffrey E. Hinton." }, { "start": 10.9, "end": 15.24, "text": " This paper is another one in the installment of our historical paper" }, { "start": 15.24, "end": 20.28, "text": " overview, where we go through kind of old papers that were or weren't very" }, { "start": 20.28, "end": 26.6, "text": " impactful and see what people knew at the time already, how this developed and" }, { "start": 26.6, "end": 31.76, "text": " so on. Of course this paper here, also known as AlexNet, was the one that" }, { "start": 31.76, "end": 37.64, "text": " started the deep learning revolution, so to say, or at least contributed in large" }, { "start": 37.64, "end": 42.56, "text": " part to it. It was the first paper that showed that you could train these very" }, { "start": 42.56, "end": 49.44, "text": " deep neural networks, and very deep in here is a relative term, but the first" }, { "start": 49.44, "end": 55.68000000000001, "text": " one that showed that you could actually use CUDA, GPUs, to train those large" }, { "start": 55.68, "end": 60.76, "text": " networks efficiently, and it won the ImageNet competition that year, and it" }, { "start": 60.76, "end": 67.92, "text": " did so by a very very large margin. So it kind of shook the world, because" }, { "start": 67.92, "end": 72.6, "text": " previously computer vision was still doing like hand engineered features and" }, { "start": 72.6, "end": 78.44, "text": " then using some kind of classifiers on top of those. This paper basically" }, { "start": 78.44, "end": 83.88, "text": " changed everything. So we'll go through the paper and we'll see what was already" }, { "start": 83.88, "end": 90.16, "text": " known, and especially I always enjoy with these papers how did the choices that" }, { "start": 90.16, "end": 95.8, "text": " people make back then, how did they pull through to today, sort of what arbitrary" }, { "start": 95.8, "end": 100.75999999999999, "text": " choices that Alex Kruschevsky made right here are we still doing today and what" }, { "start": 100.75999999999999, "end": 105.32, "text": " have we learned since then. So the paper is written relatively" }, { "start": 105.32, "end": 110.3, "text": " straightforward, I have to say. It's a good read if you want to read it, and you" }, { "start": 110.3, "end": 114.52, "text": " know straightforward, and sort of gives you a little bit of an intuition of how" }, { "start": 114.52, "end": 122.12, "text": " much work must have gone into this, which is I guess a lot. So they start off" }, { "start": 122.12, "end": 127.96, "text": " by saying that that current approaches to object recognition make essential use" }, { "start": 127.96, "end": 132, "text": " of machine learning methods. This was also new, right? Object recognition" }, { "start": 132, "end": 138.64, "text": " wasn't always learned. The object recognizers, you could even do it in" }, { "start": 138.64, "end": 143.44, "text": " the indifferent way, like matching templates and so on. Machine learning" }, { "start": 143.44, "end": 150.04, "text": " was still one of the methods used, and of course today it's the method used. 
To" }, { "start": 150.04, "end": 154.48, "text": " improve their performance we can collect larger datasets, learn more powerful" }, { "start": 154.48, "end": 159.44, "text": " models, and use better techniques for preventing overfitting. Until recently" }, { "start": 159.44, "end": 163.27999999999997, "text": " datasets of labeled images were relatively small, on the orders of tens" }, { "start": 163.28, "end": 169.32, "text": " of thousands of images. So this especially at NORP, or here the C410," }, { "start": 169.32, "end": 174.32, "text": " or C4100, these are relatively small datasets with relatively small" }, { "start": 174.32, "end": 181.64, "text": " images as well, like C410 is 32 by 32 pixels. So they're saying that" }, { "start": 181.64, "end": 186.88, "text": " in these small datasets you can solve it with classical computer" }, { "start": 186.88, "end": 191.92000000000002, "text": " vision models, but if you have larger datasets, and especially more realistic" }, { "start": 191.92, "end": 196.6, "text": " datasets like bigger resolution and so on, you need bigger models. So they say" }, { "start": 196.6, "end": 201.27999999999997, "text": " but objects in realistic settings exhibit considerable variability to" }, { "start": 201.27999999999997, "end": 208.32, "text": " learn to recognize them, it is necessary to use much larger training sets. So" }, { "start": 208.32, "end": 214.64, "text": " they say that this ImageNet dataset is one of those larger datasets, consists of" }, { "start": 214.64, "end": 219.64, "text": " 15 million labeled high resolution images in over 22,000 categories." }, { "start": 219.64, "end": 225.51999999999998, "text": " People keep forgetting this, and I am included in that group of people, that" }, { "start": 225.51999999999998, "end": 230.64, "text": " the ImageNet dataset is actually much larger than we know, than we when we" }, { "start": 230.64, "end": 235.67999999999998, "text": " talk of ImageNet. When we speak of ImageNet we think of the ImageNet that" }, { "start": 235.67999999999998, "end": 240.56, "text": " has a thousand classes and about one or one and a half million images. However" }, { "start": 240.56, "end": 246.92, "text": " that's only a subset of the much much larger ImageNet dataset within many" }, { "start": 246.92, "end": 252.72, "text": " many more categories. It's just that the ImageNet competitions were performed on" }, { "start": 252.72, "end": 256.47999999999996, "text": " this subset, because I guess people thought well a thousand classes and" }, { "start": 256.47999999999996, "end": 262.36, "text": " a million images is already plenty, so we'll do that. So that's I guess how that" }, { "start": 262.36, "end": 268.96, "text": " came to be. So their argument is right here, to learn about thousands of objects" }, { "start": 268.96, "end": 274, "text": " from millions of images we need a model with a large learning capacity. However" }, { "start": 274, "end": 277.04, "text": " the immense complexity of object recognition task means that this" }, { "start": 277.04, "end": 282.12, "text": " problem cannot be specified even by a dataset as large as ImageNet. So" }, { "start": 282.12, "end": 286.12, "text": " our model should also have lots of prior knowledge to compensate for all the" }, { "start": 286.12, "end": 291.32, "text": " data we don't have. 
So their main argument for using neural" }, { "start": 291.32, "end": 297.08, "text": " networks is that the size of the dataset is so large, therefore we need a large" }, { "start": 297.08, "end": 304.76, "text": " model. Granted they already recognize the inherent connection" }, { "start": 304.76, "end": 310.82, "text": " between large models and a lot of complex data, but in the opposite they" }, { "start": 310.82, "end": 316.46, "text": " say well even if we have that much data the task we are trying to solve, object" }, { "start": 316.46, "end": 322.82, "text": " recognition, is way more complicated than the amount of data we have. So our model" }, { "start": 322.82, "end": 328.28, "text": " should also have lots of prior knowledge to compensate for all the data we don't" }, { "start": 328.28, "end": 334.48, "text": " have. Remember at this time convolutional neural networks weren't really" }, { "start": 334.48, "end": 338.32, "text": " known to do anything. I guess they were used for handwritten" }, { "start": 338.32, "end": 342.03999999999996, "text": " digit recognition and so on and were kind of on par with other methods." }, { "start": 342.03999999999996, "end": 346.8, "text": " However it wasn't like obviously clear that you would use them for image" }, { "start": 346.8, "end": 352.03999999999996, "text": " recognition. So here they have to make like a argument to convince" }, { "start": 352.04, "end": 357.48, "text": " people that okay we can use neural networks for this task because they have" }, { "start": 357.48, "end": 363.72, "text": " such a high capacity. However neural networks, feed-forward neural" }, { "start": 363.72, "end": 367.76000000000005, "text": " networks, are already too powerful. They don't know anything about the data." }, { "start": 367.76000000000005, "end": 372.8, "text": " Everything's connected to everything and they argue right here our model should" }, { "start": 372.8, "end": 377.16, "text": " have lots of prior knowledge to compensate for all the data we don't have." }, { "start": 377.16, "end": 382.8, "text": " So they allude to the convolutional neural networks constitute one such class" }, { "start": 382.8, "end": 386.68, "text": " of models. Their capacity can be controlled by varying the depth and" }, { "start": 386.68, "end": 391.28000000000003, "text": " breadth and they also make strong and mostly correct assumptions about the" }, { "start": 391.28000000000003, "end": 396.40000000000003, "text": " nature of images, namely stationarity of statistics and locality of pixel" }, { "start": 396.40000000000003, "end": 401.72, "text": " dependencies. So their argument here is that the convolutional operation is such" }, { "start": 401.72, "end": 407.12, "text": " a strong prior that is mostly consistent with what we know about images that" }, { "start": 407.12, "end": 411.44, "text": " they are very well suited to computer vision. Again something that was not" }, { "start": 411.44, "end": 417.2, "text": " abundantly clear at the time as it is right now. It's interesting to see how" }, { "start": 417.2, "end": 420.92, "text": " they get to this point where they say we need lots of capacity but we also need" }, { "start": 420.92, "end": 429.36, "text": " a model with lots of prior knowledge and of course CNNs fit that very well." 
}, { "start": 429.36, "end": 435.6, "text": " So they go into the problems of CNN despite the attractive qualities" }, { "start": 435.6, "end": 439.68, "text": " and despite the relative efficiency of their local architecture they are" }, { "start": 439.68, "end": 444.44, "text": " prohibitively expensive to apply in large-scale high-resolution images." }, { "start": 444.44, "end": 448.64000000000004, "text": " Luckily current GPUs paired with a highly optimized implementation of 2D" }, { "start": 448.64000000000004, "end": 453.08000000000004, "text": " convolution are powerful enough to facilitate the training of interestingly" }, { "start": 453.08000000000004, "end": 457.92, "text": " large CNNs and recent data sets such as ImageNet contain enough labeled example" }, { "start": 457.92, "end": 462.64000000000004, "text": " to train such model without severe overfitting. So overfitting was also" }, { "start": 462.64, "end": 467.36, "text": " still like very much at the forefront of people's minds back then. Right now we" }, { "start": 467.36, "end": 471.52, "text": " don't really care about overfitting that much anymore. Basically we figured out" }, { "start": 471.52, "end": 477.76, "text": " that if we just build large enough models we don't overfit which is strange" }, { "start": 477.76, "end": 482.8, "text": " in itself like this double descent phenomenon and so on but overfitting was" }, { "start": 482.8, "end": 489.88, "text": " still very much at the forefront of people's minds and they do a lot of" }, { "start": 489.88, "end": 496.52, "text": " things here to prevent overfitting which gives them kind of a boost in the test" }, { "start": 496.52, "end": 501.44, "text": " accuracy which might actually not have been the overfitting that they're" }, { "start": 501.44, "end": 505.71999999999997, "text": " combating. So they do for example in data augmentation already in this paper" }, { "start": 505.71999999999997, "end": 511, "text": " and they always allude to how this is to prevent overfitting. However we know" }, { "start": 511, "end": 517.68, "text": " nowadays that it might not be the overfitting that's combated by data" }, { "start": 517.68, "end": 522.28, "text": " augmentation. It might actually have something to do with regularizing" }, { "start": 522.28, "end": 528.68, "text": " your function making it more smooth and so on. So you just see how" }, { "start": 528.68, "end": 533.8, "text": " coming from a classical machine learning perspective overfitting was like the" }, { "start": 533.8, "end": 537.3599999999999, "text": " number one or one of the number one problems in classical machine learning" }, { "start": 537.3599999999999, "end": 545.16, "text": " in SVMs and things like this. So it's safe to say that they thought" }, { "start": 545.16, "end": 549.24, "text": " if we built these large models we're gonna have a huge overfitting problem" }, { "start": 549.24, "end": 556.04, "text": " and yeah so that's why this pulls through right here. Also I guess" }, { "start": 556.04, "end": 561.4, "text": " one of the main contributions of this paper is to show to combine this CNN" }, { "start": 561.4, "end": 566.56, "text": " training with GPUs. 
Also not very non-clear at the time like it was known" }, { "start": 566.56, "end": 573.04, "text": " that you could do computation on GPUs but the fact that these are you know very" }, { "start": 573.04, "end": 578.24, "text": " capable for training these CNNs or generally neural networks wasn't" }, { "start": 578.24, "end": 584.48, "text": " something that was you know known at the time. So this paper basically showed that" }, { "start": 584.48, "end": 591.48, "text": " if you use a GPU you can get that much faster and that makes it" }, { "start": 591.48, "end": 597.92, "text": " possible to train these big neural networks. Again right here the size of" }, { "start": 597.92, "end": 602.28, "text": " our network made overfitting a significant problem even with 1.2" }, { "start": 602.28, "end": 606.56, "text": " million labeled training examples so we use several effective techniques for" }, { "start": 606.56, "end": 613.48, "text": " preventing overfitting and we'll look at those. And the end they say the" }, { "start": 613.48, "end": 618.36, "text": " network's size is limited mainly by the amount of memory available on current" }, { "start": 618.36, "end": 622.48, "text": " GPUs and by the amount of training time that we are willing to tolerate. Our" }, { "start": 622.48, "end": 629.52, "text": " network takes between five and six days to train on two GTX 580 GPUs. All of our" }, { "start": 629.52, "end": 633.6, "text": " experiments suggest that our results can be improved by simply waiting for faster" }, { "start": 633.6, "end": 638.24, "text": " GPUs and bigger data sets to become available. And I mean that proved to be" }, { "start": 638.24, "end": 642.56, "text": " absolutely true. We don't necessarily have bigger data sets right now though" }, { "start": 642.56, "end": 651.88, "text": " we do but certainly with faster GPUs and bigger GPUs this became a this became" }, { "start": 651.88, "end": 657.18, "text": " these networks became better simply by increasing their depth and as you know" }, { "start": 657.18, "end": 662.5999999999999, "text": " then ResNets came along increasing the depth by an order of magnitude and that" }, { "start": 662.5999999999999, "end": 668.8399999999999, "text": " gave another boost to computer vision. Alright so they talk about the ImageNet" }, { "start": 668.8399999999999, "end": 675.28, "text": " data set here and the main point in the ImageNet data set right here is the fact" }, { "start": 675.28, "end": 681.56, "text": " that the images are plenty so there are over a million training images in this" }, { "start": 681.56, "end": 688.28, "text": " subset with a thousand classes which was you know a very big that was that was on" }, { "start": 688.28, "end": 692.4399999999999, "text": " like CIFAR 10 had 10 classes, CIFAR 100 had a hundred classes that was already a" }, { "start": 692.4399999999999, "end": 700.16, "text": " lot. A thousand classes that is like unheard of before this data set. I guess" }, { "start": 700.16, "end": 706.28, "text": " not unheard of but yeah and a million training images. Completely crazy and" }, { "start": 706.28, "end": 713.24, "text": " also not only was it a lot of images they were resolution was really big so" }, { "start": 713.24, "end": 721.04, "text": " in the order of 256 by 256 whereas previous methods all were like 32 by 32" }, { "start": 721.04, "end": 728.24, "text": " so definitely challenging data set even today it's a challenging data set. 
Alright" }, { "start": 728.24, "end": 733.04, "text": " so the architecture. The architecture and there's this famous graphic right" }, { "start": 733.04, "end": 739.64, "text": " here of the AlexNet architecture so briefly they described these" }, { "start": 739.64, "end": 744.52, "text": " convolutional layers right here as you can see there's max pooling already" }, { "start": 744.52, "end": 750.68, "text": " here they have dense layers at the end they do generally increase the number" }, { "start": 750.68, "end": 755.9599999999999, "text": " of feature maps right here while decreasing the resolution with max" }, { "start": 755.9599999999999, "end": 761.78, "text": " pooling so all of this has sort of you know kept until today I guess they also" }, { "start": 761.78, "end": 765.56, "text": " took it from earlier work on convolutional neural networks that" }, { "start": 765.56, "end": 772.72, "text": " generally found this to be a good idea and the important part here that is kind" }, { "start": 772.72, "end": 776.3199999999999, "text": " of special to AlexNet is you can see there are these two different pipelines" }, { "start": 776.3199999999999, "end": 784, "text": " and Alex for cutting off this part right here I mean you just know like this has" }, { "start": 784, "end": 788.88, "text": " the eight pages we need to like we have like three lines too much how can we fit" }, { "start": 788.88, "end": 793.16, "text": " the three lines we've already cropped everything let's just cut off the top" }, { "start": 793.16, "end": 799.4399999999999, "text": " half here it's essentially the same as the bottom yeah so space constraints and" }, { "start": 799.4399999999999, "end": 806.24, "text": " PDFs for conference submissions ruining yet another paper alright but you can" }, { "start": 806.24, "end": 811.32, "text": " see there is this two this this two column architecture right here so this" }, { "start": 811.32, "end": 817.8, "text": " network was so large that it didn't fit on one GPU so they had to split it onto" }, { "start": 817.8, "end": 823.8, "text": " two GPUs with the occasional intercommunication right you can see" }, { "start": 823.8, "end": 828.7199999999999, "text": " here there is intercommunication between the two GPUs and there is also no" }, { "start": 828.7199999999999, "end": 833.8599999999999, "text": " intercommunication right here on this layer this was very intricate that was" }, { "start": 833.8599999999999, "end": 838.4799999999999, "text": " one thing that really didn't hold until today I guess until now with things like" }, { "start": 838.4799999999999, "end": 843.7199999999999, "text": " I don't know G shard or so where you have different weights on different GPUs" }, { "start": 843.72, "end": 849.5600000000001, "text": " again I guess the the invention of bigger GPUs made that sort of super" }, { "start": 849.5600000000001, "end": 854.1600000000001, "text": " fluid but just imagine the amount of code they had to write there was no" }, { "start": 854.1600000000001, "end": 858.72, "text": " tensor flow at this point there I don't think there was even cafe around there" }, { "start": 858.72, "end": 867.32, "text": " was just CUDA and yeah just this cross GPU memory writing I just imagined this" }, { "start": 867.32, "end": 873.8000000000001, "text": " to be so so ugly and big respect for writing all of this all of this code" }, { "start": 873.8000000000001, "end": 879.84, "text": " alright so they they go through a number of important things and most of the" }, { 
"start": 879.84, "end": 886.72, "text": " things here aren't their invention let's say but they cleverly combine things" }, { "start": 886.72, "end": 890.3000000000001, "text": " that were already known about neural networks and things that were maybe" }, { "start": 890.3000000000001, "end": 894.44, "text": " developed somewhere that they have found to work really well so the first one is" }, { "start": 894.44, "end": 899.48, "text": " the relu non-linearity now of course relu is nowadays all like abundant" }, { "start": 899.48, "end": 904.72, "text": " everyone uses relu's non-linearities but at that time it was still very much in" }, { "start": 904.72, "end": 908.96, "text": " fashion to use something like the sigmoid right here or the hyperbolic" }, { "start": 908.96, "end": 912.7600000000001, "text": " tangent and why is that because the neural networks were still kind of" }, { "start": 912.7600000000001, "end": 917.6400000000001, "text": " inspired by the neurons where you had the soma of the neuron and then the" }, { "start": 917.6400000000001, "end": 924.08, "text": " input dendrites sorry the dendrites with the input axons and then you would sum" }, { "start": 924.08, "end": 930.2, "text": " up all the incoming signals and then that would go over so in the true neuron" }, { "start": 930.2, "end": 937.32, "text": " you have this this this kind of curve where if the input rises above this" }, { "start": 937.32, "end": 943.96, "text": " border right here the action potential maybe I don't know what the the English" }, { "start": 943.96, "end": 950, "text": " term is then if it rise above that then the neuron would start to spike right" }, { "start": 950, "end": 955.88, "text": " and if it's below that it wouldn't so people wanted to approximate this using" }, { "start": 955.88, "end": 961.52, "text": " some sort of a a kind of differentiable but something that's very similar to" }, { "start": 961.52, "end": 967.32, "text": " this step function and that ultimately led to something like a sigmoid or an" }, { "start": 967.32, "end": 974.68, "text": " hyperbolic tangent so people trying to stay close to biological neurons did" }, { "start": 974.68, "end": 979.84, "text": " this but that gives you the problem that in this region and in this region right" }, { "start": 979.84, "end": 985.76, "text": " here you have almost no gradient to learn from so you can see that they" }, { "start": 985.76, "end": 993.9200000000001, "text": " argue that in terms of training time with gradient descent the saturating" }, { "start": 993.9200000000001, "end": 998.48, "text": " non-linearity so the hyperbolic tangent and the sigmoid are much slower than the" }, { "start": 998.48, "end": 1003.76, "text": " non saturating lean non-linearity this one following Narendt Hinton we refer to" }, { "start": 1003.76, "end": 1009.1600000000001, "text": " neurons with this non-linearity as rectified linear units so taken from" }, { "start": 1009.16, "end": 1015.52, "text": " this this other paper they say okay we use these relu's these rectified linear" }, { "start": 1015.52, "end": 1021.36, "text": " units which are not exactly like real biological neurons but they train much" }, { "start": 1021.36, "end": 1029.36, "text": " faster right and of course relu's are used until this day so you can see right" }, { "start": 1029.36, "end": 1035.56, "text": " here that a this is on a C for 10 and they measure the time to reach 25% of" }, { "start": 1035.56, "end": 1041.36, "text": " the training error and this here is with the 
relu's and this here is with the" }, { "start": 1041.36, "end": 1046.48, "text": " hyperbolic tangent and it takes much longer to reach the hyperbolic tangent" }, { "start": 1046.48, "end": 1054.72, "text": " especially it takes six times faster to with the relu's and they say that's one" }, { "start": 1054.72, "end": 1060, "text": " of the main components that allows them to learn this fast to even experiment" }, { "start": 1060, "end": 1064.8799999999999, "text": " with these big networks because their entire training time is six days right" }, { "start": 1064.88, "end": 1069.48, "text": " but they probably didn't train it only once they experimented with it and saw" }, { "start": 1069.48, "end": 1074.72, "text": " what works so if you have a couple of months of time and he takes you a week" }, { "start": 1074.72, "end": 1080.0800000000002, "text": " to train one of these things you know you don't you can't afford a six times" }, { "start": 1080.0800000000002, "end": 1085.8000000000002, "text": " slowdown because that would mean you can only train like two models in the entire" }, { "start": 1085.8000000000002, "end": 1092.0800000000002, "text": " course of research and that would severely hinder your progress now we are" }, { "start": 1092.08, "end": 1097.12, "text": " at the point where that becomes true again with these giant giant transformer" }, { "start": 1097.12, "end": 1103.1999999999998, "text": " language models where people can train it once and then you know like GPT-3" }, { "start": 1103.1999999999998, "end": 1107.36, "text": " they say oh we made we discovered a bug halfway through and we've kind of fixed" }, { "start": 1107.36, "end": 1111.96, "text": " it but we're not sure we couldn't restart because it was too expensive" }, { "start": 1111.96, "end": 1115.8999999999999, "text": " yeah maybe we're waiting for a moment I'm still saying we're waiting for the" }, { "start": 1115.9, "end": 1123.1200000000001, "text": " resonant moment in the transformers but yeah relu's in you know here in not" }, { "start": 1123.1200000000001, "end": 1129.5600000000002, "text": " introduced here but used here and have been prevailing until today training on" }, { "start": 1129.5600000000002, "end": 1135.3600000000001, "text": " multiple GPUs something as I said that didn't didn't really get forward from" }, { "start": 1135.3600000000001, "end": 1140.76, "text": " here especially the kind of GPU training so if we train on multiple GPUs today" }, { "start": 1140.76, "end": 1147.08, "text": " what we mean is that we have our model right and then we distribute that to" }, { "start": 1147.08, "end": 1153.44, "text": " multiple GPUs like this and then we take a mini batch from the training data and" }, { "start": 1153.44, "end": 1159.16, "text": " we simply split it up let each GPU do its thing on its subset of the mini batch" }, { "start": 1159.16, "end": 1164.32, "text": " and then at the end kind of calculate the loss and then back propagate the" }, { "start": 1164.32, "end": 1169.12, "text": " gradients and synchronize the gradients between that so we have one model that" }, { "start": 1169.12, "end": 1175.9599999999998, "text": " is on both GPUs here they distribute a model to two GPUs and I'm also thinking" }, { "start": 1175.9599999999998, "end": 1182.56, "text": " that with frameworks like G shard this could potentially have a revival right" }, { "start": 1182.56, "end": 1187.6799999999998, "text": " here this kind of distributing your models especially within the same layer" }, { 
"start": 1187.6799999999998, "end": 1194.32, "text": " across many GPUs and then having cross communication only at some points so" }, { "start": 1194.32, "end": 1199.3999999999999, "text": " their argument is this only has three gigabytes of memory which limits the" }, { "start": 1199.3999999999999, "end": 1204.48, "text": " maximum size of networks can be trained on it turns out that 1.2 train million" }, { "start": 1204.48, "end": 1208.04, "text": " training samples are enough to train networks which are too big to fit on one" }, { "start": 1208.04, "end": 1213.72, "text": " GPU therefore we spread the net across two GPUs current GPUs are particularly" }, { "start": 1213.72, "end": 1218.48, "text": " well suited to cross GPU parallelization as they're able to read from and write" }, { "start": 1218.48, "end": 1223.84, "text": " to one another's memory directly without going through the host machine okay so" }, { "start": 1223.84, "end": 1232.12, "text": " this means that for so sorry here they say the parallelization scheme that we" }, { "start": 1232.12, "end": 1237.48, "text": " employ essentially puts half the kernels or neurons on each GPU with one" }, { "start": 1237.48, "end": 1242.6799999999998, "text": " additional trick the GPUs communicate only in certain layers that means that" }, { "start": 1242.6799999999998, "end": 1246.8, "text": " for example the kernels of layer 3 take input from all kernel maps in layer 2" }, { "start": 1246.8, "end": 1250.84, "text": " however the kernels in layer 4 take input only from the kernel maps in layer" }, { "start": 1250.84, "end": 1256.72, "text": " 3 which reside on the same GPU so very very interesting choice right here and" }, { "start": 1256.72, "end": 1264.6, "text": " they they justify this here or they say the results this scheme reduces our top" }, { "start": 1264.6, "end": 1269.56, "text": " one top five error rates by 1.7 and 1.2 percent respectively as compared with a" }, { "start": 1269.56, "end": 1273.84, "text": " net with half as many kernels in each computational layer in each" }, { "start": 1273.84, "end": 1279.06, "text": " convolutional layer on one GPU the two GPU net takes slightly less time to" }, { "start": 1279.06, "end": 1284.52, "text": " train than the one a GPU net so first of all I have to say big respect right here" }, { "start": 1284.52, "end": 1289.96, "text": " like like I can imagine they did this you know with the relu's and stuff and" }, { "start": 1289.96, "end": 1293.8799999999999, "text": " they were already better than previous because they're so just to go to the" }, { "start": 1293.8799999999999, "end": 1301.52, "text": " results the pre they beat the error rates of previous models by ginormous" }, { "start": 1301.52, "end": 1307.52, "text": " amount so this is what they knew right here this is on the 2010 image net split" }, { "start": 1307.52, "end": 1314.24, "text": " so the previous best ones were like at around 28 25 percent and here their best" }, { "start": 1314.24, "end": 1320.12, "text": " one is at 17 percent top five error rate I'm gonna imagine that they trained it" }, { "start": 1320.12, "end": 1324.8, "text": " first and we're already better than the 25 percent and I guess lots of people" }, { "start": 1324.8, "end": 1328.6399999999999, "text": " would just call it a day would be like oh cool we have this entirely new method" }, { "start": 1328.6399999999999, "end": 1332.6, "text": " not only did we show that we can train it we actually showed that it's better" }, { "start": 1332.6, 
"end": 1338.1599999999999, "text": " and bad a boom I have point one percent better error rate and everything else" }, { "start": 1338.1599999999999, "end": 1342.9599999999998, "text": " can be a separate paper no they stuck with it and they pushed it each so each" }, { "start": 1342.9599999999998, "end": 1347.48, "text": " of these things right here they say oh this reduces the error rate by 1% this" }, { "start": 1347.48, "end": 1354.04, "text": " reduces the error rate by 2% and you know really they they went about it how" }, { "start": 1354.04, "end": 1359.1599999999999, "text": " far can we push this with everything I mean just imagine you come and you train" }, { "start": 1359.16, "end": 1365.28, "text": " a network I'm pretty sure they first trained on one GPU right and and then" }, { "start": 1365.28, "end": 1370.2, "text": " they thought oh you know maybe we can train an even bigger network by using" }, { "start": 1370.2, "end": 1375.6000000000001, "text": " two GPUs and then they realized what it's gonna take like a crap ton amount" }, { "start": 1375.6000000000001, "end": 1380.8000000000002, "text": " of dumb code to cross synchronize and keep them in lockstep and blah blah blah" }, { "start": 1380.8000000000002, "end": 1385.2, "text": " like it's not even easy to write multi GPU code today with all the frameworks" }, { "start": 1385.2, "end": 1391.1200000000001, "text": " just imagine that and for them to having already observed that their network does" }, { "start": 1391.1200000000001, "end": 1396.4, "text": " better than everything that was previously to sit down and do the cross" }, { "start": 1396.4, "end": 1402.64, "text": " GPU thing experiment with okay when do we cross communicate and whatnot that is" }, { "start": 1402.64, "end": 1411.48, "text": " very very respectable right here so maybe a lesson to be learned or or just" }, { "start": 1411.48, "end": 1415.6, "text": " the mentality of the people maybe they just had more time they were like okay" }, { "start": 1415.6, "end": 1420.32, "text": " it's still like two months out this competition deadline I don't know but" }, { "start": 1420.32, "end": 1426.88, "text": " you know I'm this this is not something that I see today very often this this" }, { "start": 1426.88, "end": 1432.2, "text": " kind of persistence and additional pushing and reporting of what works in" }, { "start": 1432.2, "end": 1436.46, "text": " these kinds of things I mean some some papers do it but most papers do it" }, { "start": 1436.46, "end": 1441.1200000000001, "text": " because only with all the tricks they can get that point one percent improvement" }, { "start": 1441.1200000000001, "end": 1446.72, "text": " and this one already had the improvement and did it anyway okay but multi GPU" }, { "start": 1446.72, "end": 1451.2, "text": " training didn't really it's like splitting the models across GPUs didn't" }, { "start": 1451.2, "end": 1457.68, "text": " really didn't really stick around mainly because I guess the GPUs got larger in" }, { "start": 1457.68, "end": 1463.08, "text": " memory pretty quickly so it wasn't that necessary but also I guess because the" }, { "start": 1463.08, "end": 1467.48, "text": " frameworks were just too clunky and now maybe with G-shard this is coming back" }, { "start": 1467.48, "end": 1473.76, "text": " so worth another shot I guess next one local response normalization this also" }, { "start": 1473.76, "end": 1479.1599999999999, "text": " didn't really stick around I cut kind of dumped in favor of things like 
batch" }, { "start": 1479.1599999999999, "end": 1484.96, "text": " normalization but with the resurfacing of things like layer normalization this" }, { "start": 1484.96, "end": 1493.4, "text": " it comes back to this thing here again a little bit so what they say is that what" }, { "start": 1493.4, "end": 1498.56, "text": " they want to do is they want to kind of normalize the response of these of these" }, { "start": 1498.56, "end": 1504.16, "text": " relu's so what they do is each response which is this alpha they are these a" }, { "start": 1504.16, "end": 1511.92, "text": " here is normalized by the following quantity and it's the all the responses" }, { "start": 1511.92, "end": 1517.24, "text": " of the other neurons around them or of the other kernels around them and you can" }, { "start": 1517.24, "end": 1523.3600000000001, "text": " see the sum is over this weird quantity right here so what does it mean if they" }, { "start": 1523.3600000000001, "end": 1528.44, "text": " have a bunch of convolutional filters and these are the activation so these are" }, { "start": 1528.44, "end": 1534.3600000000001, "text": " the feature maps after the convolution and yeah so if I have like 10" }, { "start": 1534.3600000000001, "end": 1539.16, "text": " convolutional filters in my layer this is going to be the output the way they" }, { "start": 1539.16, "end": 1547.92, "text": " normalizes they normalize each filter sorry each output channel by averaging" }, { "start": 1547.92, "end": 1556.88, "text": " by you see here dividing by the average response of the channels around them" }, { "start": 1556.88, "end": 1561.1200000000001, "text": " right so let's maybe say the five channels though two channels in front of" }, { "start": 1561.1200000000001, "end": 1565.2, "text": " them and two channels behind them this is going to be they take the average" }, { "start": 1565.2, "end": 1570.6000000000001, "text": " across this one and then for another channel right here for this one you" }, { "start": 1570.6000000000001, "end": 1575.04, "text": " would take the average of the five around that this isn't really something" }, { "start": 1575.04, "end": 1580.8, "text": " that stuck around I guess mainly because of the really dynamic situation right" }, { "start": 1580.8, "end": 1587.48, "text": " here what people do today is they have things like layer normalization that" }, { "start": 1587.48, "end": 1592.24, "text": " simply averages across all of the channels or they have group normalization" }, { "start": 1592.24, "end": 1598.8, "text": " that pre defines these groups like here is there's two groups and we only" }, { "start": 1598.8, "end": 1603.84, "text": " normalize within this group and within this group also always the same this" }, { "start": 1603.84, "end": 1610.36, "text": " kind of dynamic normalization on across neighboring filters as I said didn't" }, { "start": 1610.36, "end": 1617.72, "text": " really stick around not really sure why but I guess it was just easier to" }, { "start": 1617.72, "end": 1624.84, "text": " implement it otherwise or it just worked better again here they say this this it" }, { "start": 1624.84, "end": 1629, "text": " was motivated well right this scheme bears some resemblance to the local" }, { "start": 1629, "end": 1632.92, "text": " contrast normalization scheme of that but ours would be more correctly termed" }, { "start": 1632.92, "end": 1638.52, "text": " brightness normalization since we do not subtract the mean activity and oh they" }, { "start": 1638.52, "end": 
1646.16, "text": " make it connection to biological neurons where is it this sort of response" }, { "start": 1646.16, "end": 1650.64, "text": " normalization implements a form of lateral inhibition inspired by type" }, { "start": 1650.64, "end": 1655, "text": " found in real neurons creating competition for big activities amongst" }, { "start": 1655, "end": 1661.52, "text": " neuron outputs computed using different kernels okay so kind of inspired by real" }, { "start": 1661.52, "end": 1666.72, "text": " neurons but also kind of inspired by other people doing also some kind of" }, { "start": 1666.72, "end": 1670.76, "text": " normalization so people already knew that normalization was helpful at some" }, { "start": 1670.76, "end": 1676.16, "text": " times and this is what they employed right here again reducing the top error" }, { "start": 1676.16, "end": 1683.44, "text": " rates by 1.4 and 1.2 percent respectively so not a big improvement but still an" }, { "start": 1683.44, "end": 1688, "text": " improvement the last thing overlapping pooling again a thing that didn't really" }, { "start": 1688, "end": 1694.32, "text": " stick around that much where they say okay instead of having a pooling layer" }, { "start": 1694.32, "end": 1701.6799999999998, "text": " so if this is your image and instead of pooling 2x2 in the stride of 2 like" }, { "start": 1701.6799999999998, "end": 1706.84, "text": " we do today and you know pull it down to a smaller image what we can do instead" }, { "start": 1706.84, "end": 1713.76, "text": " is we can pool with overlapping windows so in that case they pool with a 3x3" }, { "start": 1713.76, "end": 1719.48, "text": " window but they do always do stride of 2 so they have like these overlaps right" }, { "start": 1719.48, "end": 1725.56, "text": " here resulting in the same size but then each pixel right here has some sort of" }, { "start": 1725.56, "end": 1731.8, "text": " overlapping information from the pixels around it again they say it reduces the" }, { "start": 1731.8, "end": 1737.96, "text": " top one and top five error rates by 0.4 percent and 0.3 percent maybe this this" }, { "start": 1737.96, "end": 1743.92, "text": " didn't stick around because I'm not sure maybe because people found it doesn't" }, { "start": 1743.92, "end": 1750.24, "text": " work in other problems who knows so the overall architecture as we said is" }, { "start": 1750.24, "end": 1755.64, "text": " described in this picture right here so you have the input image which you can" }, { "start": 1755.64, "end": 1761.96, "text": " see has three channels and they use convolutional filters with a here with a" }, { "start": 1761.96, "end": 1766.1200000000001, "text": " stride of four at the beginning to reduce the size so at the beginning it's" }, { "start": 1766.12, "end": 1776.1599999999999, "text": " 224 by 224 and then it's 48 by sorry it's 55 by 55 that thing here 55 by 55" }, { "start": 1776.1599999999999, "end": 1781.2399999999998, "text": " 48 feature maps you can already see as we said before the feature maps keep" }, { "start": 1781.2399999999998, "end": 1787.6, "text": " increasing while the number of the dimension the resolution of the image" }, { "start": 1787.6, "end": 1794.6799999999998, "text": " keeps decreasing the stride of four convolution here already employed in" }, { "start": 1794.68, "end": 1799.8, "text": " order to down sample the image at the same time as convolving it nowadays a" }, { "start": 1799.8, "end": 1805.3200000000002, "text": " lot of architectures will 
simply not do max pooling at all but always use the" }, { "start": 1805.3200000000002, "end": 1811.2, "text": " kind of strided convolution to down sample image while convolving it what" }, { "start": 1811.2, "end": 1818.76, "text": " you also see here is that they thought that the feature map size should be" }, { "start": 1818.76, "end": 1823.5600000000002, "text": " should also be large at the beginning and then decrease which is a reasonable" }, { "start": 1823.56, "end": 1827.56, "text": " assumption right because if you have higher resolution images you're" }, { "start": 1827.56, "end": 1832.44, "text": " probably going to need higher resolution feature maps this didn't really come" }, { "start": 1832.44, "end": 1838.2, "text": " through until today as you know most architectures today they just go with" }, { "start": 1838.2, "end": 1844.36, "text": " like three by three kernels from the very start and don't really care about" }, { "start": 1844.36, "end": 1851.6399999999999, "text": " you know also downsizing their their filters I don't really know why whether" }, { "start": 1851.64, "end": 1857.2800000000002, "text": " it's just more convenient or less parameters or whether there's really" }, { "start": 1857.2800000000002, "end": 1862.68, "text": " something to having small filters but I just know you know this is something the" }, { "start": 1862.68, "end": 1867.44, "text": " large filters at the beginning is something that didn't didn't hold over" }, { "start": 1867.44, "end": 1875.5600000000002, "text": " time also you can see right here they have multiple dense layers at the end I" }, { "start": 1875.5600000000002, "end": 1880.88, "text": " believe most architectures today simply go with two of those instead of three" }, { "start": 1880.88, "end": 1886.2800000000002, "text": " so one like hidden layer and then one classification layer but it's you know" }, { "start": 1886.2800000000002, "end": 1891.2, "text": " it's it's very close to the architectures today right there hasn't" }, { "start": 1891.2, "end": 1896.6000000000001, "text": " changed that much like the difference between this and the VGG 16 VGG 19" }, { "start": 1896.6000000000001, "end": 1899.7600000000002, "text": " network is just depth and then the difference between those and the" }, { "start": 1899.7600000000002, "end": 1904.8400000000001, "text": " ResNet is just the whatever these skip connections right here and that's where" }, { "start": 1904.84, "end": 1911.24, "text": " we are today so so there hasn't hasn't changed that much honestly they also" }, { "start": 1911.24, "end": 1914.76, "text": " allude to the fact that actually even though it doesn't look like it most" }, { "start": 1914.76, "end": 1919.36, "text": " parameters are here in these dense layers those are most parameters of the" }, { "start": 1919.36, "end": 1924.32, "text": " network this right here a convolution layer is like 1% of the parameters even" }, { "start": 1924.32, "end": 1929.6799999999998, "text": " though it takes up a lot of space in the in the drawing so maybe the reduction in" }, { "start": 1929.6799999999998, "end": 1933.8799999999999, "text": " the number of classification layers at the end also has something to do with" }, { "start": 1933.88, "end": 1938.68, "text": " the fact that that's where most parameters are so if you get rid of one" }, { "start": 1938.68, "end": 1944.7600000000002, "text": " of those dense layers you can like get many many more convolutional layers" }, { "start": 1944.7600000000002, "end": 1953.88, 
"text": " all right so the last part here is on reducing overfitting again they didn't" }, { "start": 1953.88, "end": 1959.1200000000001, "text": " really investigate whether or not really their network was overfitting like" }, { "start": 1959.1200000000001, "end": 1963.24, "text": " really establishing the overfitting it was I think maybe they did and maybe it" }, { "start": 1963.24, "end": 1969.16, "text": " was actually overfitting but we now we we don't care about overfitting too much" }, { "start": 1969.16, "end": 1974.24, "text": " anymore maybe because we already use these augmentations naturally but also" }, { "start": 1974.24, "end": 1979.64, "text": " because we built these deep models so we somehow have an idea that they" }, { "start": 1979.64, "end": 1984.36, "text": " generalize naturally I'm not sure whether they actually were only worried" }, { "start": 1984.36, "end": 1988.04, "text": " about it that much because of the history of machine learning or whether" }, { "start": 1988.04, "end": 1995.36, "text": " they actually did see that everything was overfitting constantly okay they say" }, { "start": 1995.36, "end": 1999.56, "text": " our neural network architecture has 60 million parameters although the thousand" }, { "start": 1999.56, "end": 2003.62, "text": " classes make each training example impose 10 bits of constraints on the" }, { "start": 2003.62, "end": 2007.44, "text": " mapping from image to label this turns out to be insufficient to learn many" }, { "start": 2007.44, "end": 2011.2, "text": " parameters without considerable overfitting below we describe two" }, { "start": 2011.2, "end": 2015.24, "text": " primary ways in which we combat overfitting again there's no one today" }, { "start": 2015.24, "end": 2020.76, "text": " no one today makes this argument anymore this oh we have this many parameters and" }, { "start": 2020.76, "end": 2026.52, "text": " there are that many images right we have 60 million parameters we have 1.2 million" }, { "start": 2026.52, "end": 2033.24, "text": " images a thousand classes how you know when how many parameters per sample is" }, { "start": 2033.24, "end": 2040, "text": " that and so on how many bits of constraint we don't care about we're fine" }, { "start": 2040, "end": 2047.68, "text": " with having like a billion times more parameters than training samples we we" }, { "start": 2047.68, "end": 2052.24, "text": " don't worry about it anymore so the first thing they do is data" }, { "start": 2052.24, "end": 2058.48, "text": " augmentation already I mean this was already known again like lots of these" }, { "start": 2058.48, "end": 2063.04, "text": " things here were already known but the combination is just so cool in this" }, { "start": 2063.04, "end": 2071.04, "text": " paper where so first of all again they say the transformed images are generating" }, { "start": 2071.04, "end": 2076.68, "text": " Python code on the CPU while the GPU is training on the previous batch of images" }, { "start": 2076.68, "end": 2080.52, "text": " so these data augmentation schemes are in effect computationally free again" }, { "start": 2080.52, "end": 2086.84, "text": " this code must have been ugly the first form of data augmentation consists of" }, { "start": 2086.84, "end": 2092.2, "text": " generating image translations and horizontal reflections we do this by" }, { "start": 2092.2, "end": 2097.96, "text": " extracting random 224 by 224 patches and their horizontal reflections from the" }, { "start": 2097.96, "end": 2106.2799999999997, 
"text": " 256 by 256 images okay so random so this was already this these are the most" }, { "start": 2106.2799999999997, "end": 2111.7999999999997, "text": " valuable data augmentations that still we have today random horizontal flipping" }, { "start": 2111.7999999999997, "end": 2116.3199999999997, "text": " is still used in every pipeline of computer vision except if you want to" }, { "start": 2116.32, "end": 2123.76, "text": " read text I guess and random cropping is still the most powerful data" }, { "start": 2123.76, "end": 2131.6800000000003, "text": " augmentation technique for images today and the it's crazy that this was already" }, { "start": 2131.6800000000003, "end": 2137.76, "text": " discovered and I I don't know whether they say right here how much this" }, { "start": 2137.76, "end": 2142.6400000000003, "text": " particular thing improves I don't think they have a stat on how much this" }, { "start": 2142.64, "end": 2147.2, "text": " improves they just say how much this this next thing improves but I'm going" }, { "start": 2147.2, "end": 2151.7599999999998, "text": " to guess this was one of the vital things for pushing the performance" }, { "start": 2151.7599999999998, "end": 2157.44, "text": " because now we know cropping is very important I guess they thought that they" }, { "start": 2157.44, "end": 2163.56, "text": " they would you know translation was the important part and so they focused on" }, { "start": 2163.56, "end": 2168.96, "text": " generating image translations and to generate an image translation from a" }, { "start": 2168.96, "end": 2175.6, "text": " single image naturally you have to crop it however we we we now focus much more" }, { "start": 2175.6, "end": 2180.52, "text": " on the fact that we crop it and kind of have different sub images of the same" }, { "start": 2180.52, "end": 2184.32, "text": " image especially in you know self-supervised learning and things like" }, { "start": 2184.32, "end": 2189.32, "text": " this we know that cropping is what is like the the power horse of these" }, { "start": 2189.32, "end": 2195.7200000000003, "text": " methods so the fact that they extract random patches right here means that" }, { "start": 2195.72, "end": 2200.12, "text": " their network only operates on these sub patches and then they compensate by a" }, { "start": 2200.12, "end": 2203.68, "text": " test time the networks makes a prediction by extracting five patches" }, { "start": 2203.68, "end": 2207.3999999999996, "text": " the four corner patches and the center patch as well as their horizontal" }, { "start": 2207.3999999999996, "end": 2212.04, "text": " reflections and averaging the prediction made by the networks softmax layer on" }, { "start": 2212.04, "end": 2217.08, "text": " the ten patches I also believe that people don't do this too much nowadays" }, { "start": 2217.08, "end": 2224.08, "text": " they most of the time they simply rescale the test images or something" }, { "start": 2224.08, "end": 2228.16, "text": " like this or a fine-tune at the end on the kind of scale training images there" }, { "start": 2228.16, "end": 2234.44, "text": " are various techniques for doing this but random cropping and horizontal" }, { "start": 2234.44, "end": 2240.08, "text": " flipping already employed right here also color kind of color jittering a" }, { "start": 2240.08, "end": 2245.24, "text": " form of color jittering a very special form altering the intensities of RGB" }, { "start": 2245.24, "end": 2250.7599999999998, "text": " channels in training images 
specifically we perform PCA on the set of RGB pixel" }, { "start": 2250.76, "end": 2254.44, "text": " values throughout the image in a training set to each training image we" }, { "start": 2254.44, "end": 2259.0800000000004, "text": " add multiples of the found principal components with magnitudes proportional" }, { "start": 2259.0800000000004, "end": 2263.92, "text": " to the corresponding eigenvalues times a random variable drawn from a gauss with" }, { "start": 2263.92, "end": 2270.4, "text": " zero mean and standard deviation point one this is I believe this has gone out" }, { "start": 2270.4, "end": 2276.0800000000004, "text": " of fashion so people do color jitter and kind of brightness jitter and so on but" }, { "start": 2276.08, "end": 2283.04, "text": " I don't think they particularly do this kind of PCA based image image" }, { "start": 2283.04, "end": 2288.56, "text": " augmentation right here anymore they say this scheme reduces the top one error" }, { "start": 2288.56, "end": 2298, "text": " rate by over 1% I wonder why why this isn't or maybe because you need these" }, { "start": 2298, "end": 2302.08, "text": " stats over the entire data set and the other things may be working equivalently" }, { "start": 2302.08, "end": 2306.58, "text": " well but you you can simply apply them without knowing kind of your your" }, { "start": 2306.58, "end": 2314.36, "text": " principal components okay next thing dropout dropout has been you know one of" }, { "start": 2314.36, "end": 2319.4, "text": " the things that was very important throughout the early stages of deep" }, { "start": 2319.4, "end": 2324.56, "text": " learning isn't that important anymore now dropout some people still use it but" }, { "start": 2324.56, "end": 2330.7999999999997, "text": " most people I think don't use dropout anymore and it's very interesting to see" }, { "start": 2330.8, "end": 2336.92, "text": " but it definitely was a technique that was used a lot during like from Alex net" }, { "start": 2336.92, "end": 2345.44, "text": " to basically like now or like the last very few years so they say combining the" }, { "start": 2345.44, "end": 2349, "text": " predictions of many different models is a very successful way to reduce test" }, { "start": 2349, "end": 2352.6400000000003, "text": " errors but it appears to be too expensive for big neural networks that" }, { "start": 2352.6400000000003, "end": 2357, "text": " already take several days to train there is however a very efficient version of" }, { "start": 2357, "end": 2361.72, "text": " model combination that only costs about a factor of two during training so" }, { "start": 2361.72, "end": 2365.88, "text": " there's this take this technique called dropout then they explain it to set to" }, { "start": 2365.88, "end": 2371.68, "text": " zero the output of each hidden neuron with probability 0.5 again people" }, { "start": 2371.68, "end": 2379.08, "text": " didn't know about dropout as they do now but they introduced this right here and" }, { "start": 2379.08, "end": 2386.28, "text": " they say it reduces their not sure that they also don't say how they how much" }, { "start": 2386.28, "end": 2390.76, "text": " they by how much this reduces the training error but they say we use drop" }, { "start": 2390.76, "end": 2394.2400000000002, "text": " out in the first two fully connected layers without dropout our network" }, { "start": 2394.2400000000002, "end": 2398.8, "text": " exhibits substantial overfitting dropout roughly doubles the number of iterations" }, { 
"start": 2398.8, "end": 2404.7200000000003, "text": " required to converge so okay so they did actually make sure or they did find the" }, { "start": 2404.7200000000003, "end": 2411, "text": " actual evidence of overfitting and saw that dropout reduces that and I wonder" }, { "start": 2411, "end": 2416.0800000000004, "text": " why this doesn't happen nowadays maybe because we have the we have less of" }, { "start": 2416.08, "end": 2420.92, "text": " these fully connected layers but I can't really imagine maybe because we do more" }, { "start": 2420.92, "end": 2425.2, "text": " augmentation I don't I don't know or maybe dropout is still used and I'm just" }, { "start": 2425.2, "end": 2431.88, "text": " I just don't know it and don't see it yeah so here they use momentum to train" }, { "start": 2431.88, "end": 2440.44, "text": " this and they do some qualitative analysis they do some qualitative" }, { "start": 2440.44, "end": 2444, "text": " analysis so first of all they say okay they shatter all of the previous" }, { "start": 2444, "end": 2449.56, "text": " approaches especially also then they build kind of ensemble methods and they" }, { "start": 2449.56, "end": 2454.6, "text": " pre-trained they already do transfer learning they already pre-trained on" }, { "start": 2454.6, "end": 2462.08, "text": " image net 2011 and fine-tune then on the image net 2012 right here the image net" }, { "start": 2462.08, "end": 2468.92, "text": " 2011 and then fine-tuning on the image net 2012 to reduce that error even" }, { "start": 2468.92, "end": 2477.2000000000003, "text": " further like pulling all the tricks all these things are around still very cool" }, { "start": 2477.2000000000003, "end": 2482.2400000000002, "text": " and then they look into what their network learned so they find that there" }, { "start": 2482.2400000000002, "end": 2488.84, "text": " are a number of these kind of filters you see these 11 by 11 filters in the" }, { "start": 2488.84, "end": 2493.32, "text": " first layer where they show okay this really and this was kind of already" }, { "start": 2493.32, "end": 2499.56, "text": " known that these neural networks extract filters like this like color gradients" }, { "start": 2499.56, "end": 2504.6000000000004, "text": " or edge detectors in various forms and directions and it's cool to see that" }, { "start": 2504.6000000000004, "end": 2511.7200000000003, "text": " this one also does so this one here is also a very cool investigation where" }, { "start": 2511.7200000000003, "end": 2517.04, "text": " they look at examples and the red bar the red one is always the correct label" }, { "start": 2517.04, "end": 2522.04, "text": " and the bars are basically what their model says are the top five things and" }, { "start": 2522.04, "end": 2527.6, "text": " it's cool to look at so for example here you have might as the top one but then" }, { "start": 2527.6, "end": 2535.72, "text": " also black widow cockroach tick starfish but the top labels are usually also very" }, { "start": 2535.72, "end": 2541.8, "text": " very good labels you can see here grill and it assigns convertible which you" }, { "start": 2541.8, "end": 2545.68, "text": " know by all means is correct it's just not the class that the annotators" }, { "start": 2545.68, "end": 2552.72, "text": " assigned to this particular image as well as here Dalmatian was the highest" }, { "start": 2552.72, "end": 2557.52, "text": " prediction of the network where the label was actually cherry and this is" }, { "start": 2557.52, "end": 
2561.9199999999996, "text": " this is quite debatable right so you can see that a lot of the mistakes the" }, { "start": 2561.9199999999996, "end": 2568.44, "text": " network does is are are you know forgivable let's say and you can see" }, { "start": 2568.44, "end": 2573.96, "text": " that for when the network doesn't do mistakes the not only the top label is" }, { "start": 2573.96, "end": 2583, "text": " good but a lot of the top five labels are also very very adequate lastly they" }, { "start": 2583, "end": 2587.84, "text": " look at a given training set image which these are the training set images right" }, { "start": 2587.84, "end": 2594.36, "text": " here and they look at the last layers feature vector and the five nearest or" }, { "start": 2594.36, "end": 2598.92, "text": " the nearest neighbors in Euclidean space of the entire training data set and" }, { "start": 2598.92, "end": 2603.92, "text": " here's what you come up with so you can see for the elephant the nearest" }, { "start": 2603.92, "end": 2608.96, "text": " neighbors are all other elephants and regard that they are in different poses" }, { "start": 2608.96, "end": 2613.96, "text": " right they don't always look the same way these elephants also these dogs" }, { "start": 2613.96, "end": 2619.32, "text": " right here so it's pretty cool to see that the network actually learns some" }, { "start": 2619.32, "end": 2625.16, "text": " invariances across the class and puts images with the same label into the same" }, { "start": 2625.16, "end": 2636.64, "text": " area in the embedding space yeah so that's their that's their paper that they" }, { "start": 2636.64, "end": 2642.16, "text": " they already allude to the fact that depth is very important it is notable" }, { "start": 2642.16, "end": 2646.68, "text": " that our networks performance degrades if a single convolutional layer is" }, { "start": 2646.68, "end": 2651.7599999999998, "text": " removed for example removing any of the middle layers results in a loss of about" }, { "start": 2651.76, "end": 2657.32, "text": " 2% for the top one performance of the network so the depth really is important" }, { "start": 2657.32, "end": 2664.28, "text": " for achieving our results and as you know this spurred an area of this burden" }, { "start": 2664.28, "end": 2671, "text": " area of trying to build deeper and deeper networks until Resnets came along" }, { "start": 2671, "end": 2676.6800000000003, "text": " and built ultra deep networks they also say we did not use any unsupervised" }, { "start": 2676.6800000000003, "end": 2680.76, "text": " pre-training even though we expect that it will help especially if we obtain" }, { "start": 2680.76, "end": 2684.76, "text": " enough computational power to significantly increase the size of the" }, { "start": 2684.76, "end": 2688.6000000000004, "text": " network without obtaining a corresponding increase of the amount of" }, { "start": 2688.6000000000004, "end": 2693.5600000000004, "text": " labeled data thus far our results have improved as we have made our network" }, { "start": 2693.5600000000004, "end": 2697.1600000000003, "text": " larger and trained it longer but we still have many orders of magnitude to" }, { "start": 2697.1600000000003, "end": 2701.2400000000002, "text": " go in order to match the infrared temporal pathway of the human visual" }, { "start": 2701.2400000000002, "end": 2706.1200000000003, "text": " system ultimately with ultimately we would like to use very large and deep" }, { "start": 2706.1200000000003, "end": 2710.36, 
"text": " convolutional nets on video sequences where the temporal structure provides" }, { "start": 2710.36, "end": 2715.48, "text": " very helpful information that is missing of our far less obvious in static images" }, { "start": 2715.48, "end": 2720.2000000000003, "text": " so already the previewing of future research here with the self supervised" }, { "start": 2720.2000000000003, "end": 2726.04, "text": " with the many more layers and so on astounding that this kind of foresight" }, { "start": 2726.04, "end": 2732.1600000000003, "text": " and of course all of this proved to be you know very very adequate predictions" }, { "start": 2732.1600000000003, "end": 2738.2000000000003, "text": " right here and yeah so this was the paper right here the paper that kicked" }, { "start": 2738.2, "end": 2745.2, "text": " off deep learning I enjoy reading kind of these old papers especially looking" }, { "start": 2745.2, "end": 2750.08, "text": " back at what was already known what still is around which turns out to be a" }, { "start": 2750.08, "end": 2756.9199999999996, "text": " lot a lot is still around and the choices that people made back then some" }, { "start": 2756.9199999999996, "end": 2763, "text": " of them defined our modern field so that was it for Alex net let me know what you" }, { "start": 2763, "end": 2768.2, "text": " think in the comments and I'll see you next time bye" } ]
P38FZrbNHV4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "what is deep learning", "deep learning tutorial", "introduction to deep learning", "reinforcement learning", "imitation learning", "uc berkeley", "sergey levine", "sergey levine reinforcement learning", "pieter abbeel", "pieter abbeel reinforcement learning", "walk and punch", "learning from demonstration", "amp", "adversarial motion priors", "physics based reinforcement learning", "3d reinforcement learning" ]
#reinforcementlearning #gan #imitationlearning Learning from demonstrations is a fascinating topic, but what if the demonstrations are not exactly the behaviors we want to learn? Can we adhere to a dataset of demonstrations and still achieve a specified goal? This paper uses GANs to combine goal-achieving reinforcement learning with imitation learning and learns to perform well at a given task while doing so in the style of a given presented dataset. The resulting behaviors include many realistic-looking transitions between the demonstrated movements. OUTLINE: 0:00 - Intro & Overview 1:25 - Problem Statement 6:10 - Reward Signals 8:15 - Motion Prior from GAN 14:10 - Algorithm Overview 20:15 - Reward Engineering & Experimental Results 30:40 - Conclusion & Comments Paper: https://arxiv.org/abs/2104.02180 Main Video: https://www.youtube.com/watch?v=wySUxZN_KbM Supplementary Video: https://www.youtube.com/watch?v=O6fBSMxThR4 Abstract: Synthesizing graceful and life-like behaviors for physically simulated characters has been a fundamental challenge in computer animation. Data-driven methods that leverage motion tracking are a prominent class of techniques for producing high fidelity motions for a wide range of behaviors. However, the effectiveness of these tracking-based methods often hinges on carefully designed objective functions, and when applied to large and diverse motion datasets, these methods require significant additional machinery to select the appropriate motion for the character to track in a given scenario. In this work, we propose to obviate the need to manually design imitation objectives and mechanisms for motion selection by utilizing a fully automated approach based on adversarial imitation learning. High-level task objectives that the character should perform can be specified by relatively simple reward functions, while the low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips, without any explicit clip selection or sequencing. These motion clips are used to train an adversarial motion prior, which specifies style-rewards for training the character through reinforcement learning (RL). The adversarial RL procedure automatically selects which motion to perform, dynamically interpolating and generalizing from the dataset. Our system produces high-quality motions that are comparable to those achieved by state-of-the-art tracking-based techniques, while also being able to easily accommodate large datasets of unstructured motion clips. Composition of disparate skills emerges automatically from the motion prior, without requiring a high-level motion planner or other task-specific annotations of the motion clips. We demonstrate the effectiveness of our framework on a diverse cast of complex simulated characters and a challenging suite of motor control tasks.
Authors: Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hey, yo, where's my money? Well, get me my money. Alright, we're gonna get into this video in a second. Today we're going to look at AMP, Adversarial Motion Priors for Stylized Physics-Based Character Control by Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine and Angjoo Kanazawa. This paper is in the domain of control and reinforcement learning, but with a little bit of a twist. So on a high level, this paper trains an agent, a physical agent, as you can see here, to achieve some sort of goal; in the case on the right, it's walking up to a target and punching the target. But it does so in a certain style, and the style is provided by an expert data set, or a demonstration data set. So the technique that the paper presents mixes two things: it mixes goal-achieving reinforcement learning, and it mixes adherence to a given style. And the adherence to a given style, that's going to be the adversarial part right here, because that's learned in an adversarial way. The mixture of the two at the end looks pretty cool. So the setup right here is a setup of goal achieving and imitation learning, as we have already outlined. And the way it works is the following: there is going to be a task, and the task can be: you have to reach a goal; the task can be: you have to punch something; you have to overcome some obstacles, and then reach a goal. Anything like this is a task. So the goals are fairly high level, and they are given, obviously, by a reward function. By the way, the agent here, as we already said, is this sort of physical agent that has some sort of a 3D structure. There are joints that it can move; there's a joint here and one here, usually, and there's a head. The agent is this physical thing, it's in a physics simulation, and each one of these joints can move kind of independently, sometimes freely as a ball joint, sometimes restricted. It's modeled very much like a human. There are, I believe, other models, such as a T-Rex, which of course work differently. But you have this agent, and the agent is supposed to reach a goal, like somewhere over here there's a little flag, there's a goal. And the way the agent can interact with the world is by putting force on any of these joints, so it can move these joints in pretty specific ways, and that constitutes the actions. So the agent will observe the state, and the state here is given mostly by how all the joints currently are, and the velocity of the joints, or of the individual parts of itself in relation to itself, so it can sort of feel itself. And it also knows in which direction, and generally how far away, the target is that it needs to reach. So that's the observation space. The action space is that it can put force on these joints. And the reward function is often modeled in accordance with the goal. So the reward function for walking to some goal might simply be: you get reward if you are closer to the goal. Okay, so this encourages the agent to go over there. So we work with quite dense rewards right here, because I guess the fundamental problems of reinforcement learning aren't exactly the point here. The point here is: can you teach these things to achieve a goal while maintaining a certain style? Now, this is the task and the environment. In addition to that, you do get a data set, and the data set is demonstrations of a certain nature.
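To make this concrete, here is a minimal sketch of such a dense, goal-directed reward in Python. The exponential shaping and the scale constant are placeholder choices of mine, not the paper's actual task rewards (those live in its appendix).

```python
import numpy as np

def task_reward(agent_pos: np.ndarray, goal_pos: np.ndarray) -> float:
    """Dense goal-reaching reward: higher when the agent is closer to the goal.

    A hypothetical stand-in for the task reward described above; the paper
    uses more elaborate, task-specific shaping terms.
    """
    dist = float(np.linalg.norm(goal_pos - agent_pos))
    return float(np.exp(-0.5 * dist))  # in (0, 1], maximal at the goal
```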
So this is not necessarily demonstrations of how to reach the goal; it can be any sort of demonstrations. Usually, when people do imitation learning or learning from demonstrations, there are some requirements. If you want to do pure learning from demonstration, of course, the demonstrations need to show how to achieve the goal, and we don't have that here. In other cases, you do need the policy, or the actions, of whoever produced the data set; we don't need that here either. Our goal is simply going to be: we have to solve the task while sort of adhering to the data set, in a way that we're going to define in a second. So the data set, you can imagine, I think there is a good demonstration down here, gives you sort of the style of movement. So in one data set, you can have running movements and walking movements, and in another data set, you could have movements where the actors walk like zombies. And the goal here is to combine the style of the data set with reaching the goal. Okay, so the combination would look like a zombie walking to the goal, which adheres to the zombie walk in the data set, and to the goal as specified by the task. Okay, naturally, you're going to model this as two different reward signals. So there's the reward signal of how well you reach the goal, and there is the reward signal of how well you adhere to the style in the data set. The goal reward right here is modeled by classic reinforcement learning. So this is very, very classic. Where do we have it? It says here: update G and D, and so on. So this is policy gradient reinforcement learning, which means that you do have a policy function, which takes in a state, and maybe a history, and it will give you an action. And with that, you also train a value function that takes a state and will give you a value for that state. Now, the value function is purely for training the agent, because you do advantage estimation with this value function, but essentially this is a standard policy gradient method. You actually train the whole thing on this reward; the bottom part, you can imagine, is a reward that comes from reaching a goal, and the top part also gives you a reward. Okay. And yes, I want to reiterate: both of these rewards are used to train the policy and the value function in a policy gradient fashion. So both rewards ultimately are in this standard advantage estimation reinforcement learning setting. However, the top reward is calculated differently than simply asking, do you reach the goal? The top reward is a measure of how close you are in style to the data set, and that's given by this motion prior. And the motion prior is given by a GAN, a generative adversarial network. And I'm trying to find the formula here; I think this here is the best description of it, though it's just a formula. So in a generative adversarial model, I'm pretty sure you're all aware, there is a data set right here, and there is a generator right here. The generator gets some random noise as an input and outputs a sample x; from the data set, you get a sample x prime, or a mini-batch. And then either of these goes into the discriminator model, and the discriminator has to decide for any sample: is it real, or is it fake?
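Since the two-reward, advantage-based setup is described but never written out, here is a hedged sketch of how the mixed reward could enter standard advantage estimation. The weights, the GAE constants, and the function names are illustrative assumptions of mine, not the paper's code.

```python
import numpy as np

def combined_rewards(task_r, style_r, w_task: float = 0.5, w_style: float = 0.5):
    """Weighted mixture of goal reward and style reward, per time step."""
    return w_task * np.asarray(task_r) + w_style * np.asarray(style_r)

def advantages(rewards, values, gamma: float = 0.99, lam: float = 0.95):
    """Generalized advantage estimation over one trajectory.

    `values` has one extra entry: the bootstrap value of the final state.
    """
    adv = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv
```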
So the way this generative adversarial network approaches the problem of specifying which motions are real and which ones are not is by looking at transitions. So the data set here is not images, like you're used to in a regular GAN; the data set is transitions. What does that mean? In every situation, your humanoid or whatnot is here, and the goal is over here, and this is one state, this is s. And then the agent takes an action, okay; the action could be: please lift one leg. And how does that evolve? The new agent would be kind of here, shifting the weight a little bit and lifting one leg. Okay, so this would be one action, which would lead to a new state s prime. So you have three quantities: you have the state, you have the action that the agent took, and you have the new state s prime. Now you could parameterize the transition either using state and action, or state and next state. The paper here does state and next state, for the reason that in the data set that you get right here, you do not have the action available; you can probably guess it, but you do have the state and the next state. This data set can come from anywhere: it can come from human demonstration, it can come from key frames made by a 3D artist, or maybe from another agent that has already solved the problem. Therefore, you don't always have the actions available. So a transition is going to be specified by a state and a next state. And the transitions from the data set are transitions that you observe in the real world; these are state-next-state pairs that you observe in the real world. And the generator essentially outputs state-next-state pairs. Now, this generator isn't a generator like in a classic adversarial network; rather, this here is generated by your policy interacting with the environment, right? So here's your policy, it interacts with the environment, and the environment gives you the state, and in the next step it gives you the next state, right? So by interacting with your environment, you do get state-next-state pairs; these are essentially your generated pairs. And the discriminator is trained to discriminate whether a transition is from the real data set, or whether it has been generated by your agent. Now, of course, this whole system isn't backpropagatable, and that's why you train it using reinforcement learning. The usual backpropagation signal that you would have into a generator right here, you can't do that. That's why you simply take the output of the discriminator as a reward for the policy right here. So in this case, the policy, using policy gradient, is trying to fool the discriminator into thinking that the transitions it generates come from the real data set, while the discriminator, at the same time, is always trained to differentiate between the true data set and the transitions that the policy generates. Alright, so that gives you a reward signal for the policy, and the other reward signal comes simply from the environment, as we've already stated. These two rewards are then combined with each other and used to train the policy. The discriminator itself, as we've already seen, is trained separately: so this thing here is actually the discriminator, and this motion prior is trained on one hand from the data set, and on the other hand from the policy generating actions, and thereby transitions, through the environment. Alright, I hope that is a bit clear right here.
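A sketch of what such a transition discriminator could look like as a network. I am assuming PyTorch and a pre-computed feature map phi(s) for each state; the 1024/512 hidden sizes mirror the MLP sizes quoted later in the video, but treat the rest as an illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn

class TransitionDiscriminator(nn.Module):
    """Scores a (state, next_state) transition; real data should score near +1.

    `feat_dim` is the size of the observation features phi(s). The output is
    a single real-valued score with no sigmoid, which fits the least-squares
    objective discussed below.
    """
    def __init__(self, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        # Concatenate the features of the state and the next state.
        return self.net(torch.cat([s, s_next], dim=-1)).squeeze(-1)
```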
So there are many components to this, but two are important: the policy, which tries to reach a goal and fool the discriminator at the same time (those are two rewards, and the two rewards are combined), and on the other hand the discriminator itself, which simply gets transitions from the data set and transitions from the policy-environment interaction, and trains itself to pull the two apart. So it's a classic two-player game, and yeah, that is what you're used to from a GAN. Alright, and that's essentially it for this thing. Here is the algorithm: we generally initialize everything; there is a replay buffer, like in classic reinforcement learning, which stabilizes training quite a bit; and I also mentioned the value function, which is used for the advantage estimates of policy gradient. So for M steps, you collect trajectories using the policy you already have, then you feed the transitions to the discriminator right here. Now, this here is a feature function of the state. They have special feature functions, which make this problem easier; there's a lot of expert knowledge going into how you build the features, how you represent the environment, and so on. So it's not quite trivial, but I don't want to go too much into that. You calculate the style reward according to equation seven. Equation seven is simply the discriminator output; it's not the discriminator loss. The discriminator loss is actually this thing right here: they use a square loss for the discriminator instead of a classic GAN loss. The classic GAN loss would be this thing up here, where it's log D plus log of one minus D, yet they use this square loss, or least-squares loss, that they found to work a lot better. You can see the discriminator is trained to be close to one if the data comes from the real data set, which is capital M here, and it's trained to be negative one when it comes from the policy. So nothing stops the discriminator from spitting out any number, like 15 or 3; it's just trained in a least-squares fashion to go towards these target numbers, which gives you a better gradient. For continuous control problems, you often have to go to least-squares objectives, because which number is being output is often quite important, rather than just a classification; and they use it even here, where it actually is a classification, which is surprising, but cool. And then the reward, you know, given a transition, is calculated as follows: it is clipped at zero, so it is also between zero and one, as you can see here. If the discriminator says one, the reward is the highest; the reward is actually one. And when is the discriminator one? The discriminator is one if it thinks that the transition comes from the real data set. So if the policy manages to produce a transition that the discriminator thinks comes from the real data set, it gets maximum reward. Okay. And if it also reaches the goal, it gets maximum reward from that part of the reward signal, too. So the general encouragement that we give the policy is: you should reach the goal in a manner that's consistent with the data set.
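Written out, the least-squares objective and the clipped style reward could look like the following. The exact reward expression is my reading of the paper's equation, roughly r = max(0, 1 - 0.25 (d - 1)^2); take it as a sketch rather than a verbatim reproduction.

```python
import torch

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Least-squares GAN objective: push real transitions toward +1
    and policy-generated transitions toward -1."""
    return ((d_real - 1.0) ** 2).mean() + ((d_fake + 1.0) ** 2).mean()

def style_reward(d_out: torch.Tensor) -> torch.Tensor:
    """Clipped style reward in [0, 1]; maximal when the discriminator
    outputs 1, i.e. when it believes the transition came from the data set."""
    return torch.clamp(1.0 - 0.25 * (d_out - 1.0) ** 2, min=0.0)
```

With targets at +1 and -1, the quadratic terms give a useful gradient everywhere, which is the point made above about least-squares objectives.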
So it should probably pick out things that do both, right? It could try to switch between the two modes, like, okay, let's do a little bit of data set, let's do a little bit of goal reaching, but it's probably better if it actually picks behaviors from the data set that also reach the goal in a manner consistent with the task reward. So the algorithm, just to finish it, goes on and says: okay, so this is the style reward; the true reward is given by a weighted mixture between the style and the task reward, and the weights you have to specify. And then we simply store this trajectory in our replay buffer. We then use the replay buffer to update the discriminator, and we also use the replay buffer to update the value function and the policy according to policy gradient. They point out a few things that are important to their algorithm right here. One thing they find very important is this gradient penalty. GAN training can be a bit unstable, and these gradient penalties are a way to stabilize this training; they found that simply penalizing the norm of the gradient as it comes out of the discriminator stabilizes the training right here. This is one thing that they claim is helping them a lot to actually converge, and it tells you a little bit that this is still quite finicky. They talk a lot about the representation of the actions right here, and about the network architecture. The policy, value and discriminator functions are very simple multi-layer perceptrons. So you can see, the mean of the policy function is specified by a fully connected network with two hidden layers consisting of 1024 and 512 ReLU units; I guess that's fully connected layers with ReLU non-linearities, followed by a linear output. So the networks aren't super complicated right here. What's more complicated is the training procedure, the loss, the regularization constants, and the reward engineering. There is a lot of reward engineering happening right here, and that's what you find in the appendix. So the reward, for example, for going and punching something is threefold: if you are far away, it's one reward; if you're close, it's a different reward; and if the target has been hit, it's yet another reward, right? I guess the top line makes sense, but the others are sort of reward shaping of the behavior. So you encourage the agent to kind of approach the target fast, but then slow down. And also, you know, if you look at something like dribbling, where there's a ball involved, there is a lot of reward shaping going on; even in target location, there is a lot of reward shaping going on, where you sort of encourage the agent to have certain velocities and so on. This is important because of the experimental results that they show, and that's where we go back to the video. Where's the video? Right here. So keep in mind, their point is that you're able to reach a goal in the style of the data set. This is the simplest task they have; it's called target heading, and the goal is simply to walk, or to go, in a given direction at a certain speed. The example clips they have are displayed on the right. So the example clips are of someone walking and of someone running, yet there is not really a transition in the data set from walking to running, and the agent learns this transition by itself.
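Here is how the emphasized gradient penalty might be combined with the least-squares loss in one discriminator update. The penalty coefficient, the optimizer handling, and the choice to penalize only the real samples are assumptions on my part, loosely following how such penalties are commonly implemented.

```python
import torch

def discriminator_step(disc, opt, real_s, real_s_next, fake_s, fake_s_next,
                       gp_coef: float = 10.0) -> float:
    """One least-squares discriminator update with a gradient penalty
    on the real transitions, as a stabilizer for GAN training."""
    real_s = real_s.requires_grad_(True)
    real_s_next = real_s_next.requires_grad_(True)

    d_real = disc(real_s, real_s_next)
    d_fake = disc(fake_s, fake_s_next)
    # Least-squares objective, as in the sketch above.
    loss = ((d_real - 1.0) ** 2).mean() + ((d_fake + 1.0) ** 2).mean()

    # Gradient penalty: keep the discriminator's gradient on real data small.
    grads = torch.autograd.grad(d_real.sum(), [real_s, real_s_next],
                                create_graph=True)
    gp = sum((g ** 2).sum(dim=-1) for g in grads).mean()
    loss = loss + gp_coef * gp

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```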
So their point is always: look, we have kind of simple things in the data set; we have the individual parts in the data set that the agent should do, but we never have the combination of all the things. And being able to stitch these parts together, that's the powerful thing about this method, which is pretty cool. So here, you can see at the top right there is a target speed, and all of these three agents are trained agents, trained in the same manner, right, and they're all told to reach that given target speed. However, the agent on the left has only been provided with a data set of people just walking. The agent in the middle, the same, but it has only received a data set of agents just running, so no walking. And on the right, this agent has received a data set of agents walking and running. So you can see what happens as the target speed changes: if it's fast, the walker is not able to keep up; when it's slow, the runner is not able to slow down. However, the agent that has the full data set available can not only match the speed and change its style according to the speed, it also learns the transitions from one to the other, and these transitions are not in the data set itself. Okay, so the cool part about this method is that it can sort of stitch together the appropriate behaviors from the data set, even if you don't provide these specifically to solve the task. Yeah, this is the T-Rex. I think this is just to show that you don't have to use motion capture, though you can; you can also learn from a provided data set of keyframe animation. And you can also see that there is nothing in the data set about reaching a goal; there are just kind of demonstrations of the T-Rex walking, and the method is able to adapt this walking style in concordance with reaching a goal. So you can see that the turning is much like the turning in the example clips, whereas if you've ever seen things like this without the examples, the policies that these things come up with are quite weird. So here's a failure case. The difference between this method and other methods is that other methods, such as this motion tracking in the middle, try to match a given behavior from the data set as closely as possible; it's called motion tracking. Now, there is some sophistication to it, more than I'm saying right here, but essentially, you have a front flip on the left, and then the motion tracking algorithm tries to learn a policy such that the behavior is followed as closely as possible. Now, again, this is really good when you have the exact demonstration available of what you want to do. It's not so good if what you have available as demonstrations isn't really what you want to do, but just sort of some demonstrations. But there are failure cases, of course, if you want to copy exactly. So if you want to do a front flip; and by the way, the reward function here is how closely you match the motion from the reference motion, so that's the reward function. However, motion tracking does more than that: motion tracking really tries to track the motion itself, while this method here would only get the reward for tracking the motion. And you can see it doesn't manage to actually learn it; it more like doesn't try, it tries to not fail. So it reaches the same end position, and that's sort of good enough for it. So yeah, there is a trade-off right here, and it's probably also given by how much you weigh the different components.
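For contrast with the AMP-style reward above, a tracking-style imitation reward needs an explicit, time-aligned reference clip. A hypothetical per-frame version, simplified to a single pose-error term (real trackers add joint-velocity and end-effector terms on top):

```python
import numpy as np

def tracking_reward(pose: np.ndarray, reference_clip: np.ndarray, t: int) -> float:
    """Per-frame motion-tracking reward: decays exponentially in the pose
    error against frame t of a reference clip. Unlike the AMP style reward,
    this requires knowing which reference frame to track at each time step."""
    err = float(np.linalg.norm(pose - reference_clip[t]))
    return float(np.exp(-2.0 * err ** 2))
```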
So here you have a data set of agents walking and agents waving, and then what you want is an agent that walks in a direction while it waves the arm, or lifts the arm, or something. On the left, you can see that if you only have a data set of the waving agents, it really struggles with moving forward; it has no demonstration of walking, so that's a struggle. If you only have the walking demonstrations, in the middle, then it doesn't really track the arm movement where it should, even though there is a reward for it, right? Only on the right, I mean, it's somewhat there, but it is kind of able to interpolate. So if you want to check out this video, there is another one that actually explains the paper in short form; this is from SIGGRAPH, go check it out. They do have more sophisticated behaviors. On the bottom here, you can, for example, see the obstacle run, leap and roll. The data set contains demonstrations of all of those things, but not of the things in conjunction with each other. In this one right here, at least as they describe it in the text, what they have in the data set is demonstrations of walking and demonstrations of getting up from the ground. And the agent learns that whenever it falls over, right here, it can get up faster if it kind of does this rolling motion right here. This was nowhere in the data set, but because the agent wants to get to a standing-up state, both because that will make it go towards the goal and because that matches behavior in the data set, it will learn this rolling motion as it falls down, in order to get up again. So that's pretty cool. Also, in this strike-and-punch example, the data set apparently only contains agents walking or agents punching; it never contains agents walking and then punching. So the transition that you saw at the beginning is a learned behavior that wasn't in the data set. So I think it's a pretty cool application and combination of two things: of adversarial learning, and of learning to reach a goal. And yeah, it's a good demonstration of how you can combine the two. They have a lot of ablations where they sort of show the impact of the data set, that it makes a big difference. I mean, you've seen this in the demonstrations, but here you can see it again in graphical form. So the locomotion data set contains both demonstrations of walking and running, while the walk or the run data set only contains demonstrations of either, and here is the target speed versus the average speed that the agent achieves. Now, if you only have a walking data set, then no matter the target speed, the agent will always kind of stick to walking. And if you have the running data set, it can run faster, up here, but if you want it to slow down, it can't really run as slowly as you require. Only when the data set contains both things can it transition between the two and actually match the running or walking. So what do we think of this? My opinion is that it's very cool. It's a good way of bringing demonstrations into the picture without manually tracking the demonstrations or copying them exactly. So you just give some suggestions to the algorithm of what it could do.
And you do that in the form of a data set, which is something that I like, because it's not as invasive as telling the agent, you know, you need to match the joint movements and so on of the demonstration. This enables demonstrations to come in that are of a much broader range: they don't necessarily reach the goal, they don't necessarily even have a goal in mind. So that's cool. On the other hand, I think it's pretty finicky, because you have to strike the trade-off parameter between the two rewards quite clearly for your goal. Because, as we've already seen, at some point the agent won't reach the goal anymore if the reward for the style is too high. We already saw this: if you have a data set of just running, the agent will simply neglect the goal. It won't go slower than, you know, the slowest running demonstration, or a little bit slower than that; it just won't change its policy, because it needs to match the data set. And this balance seems to be quite an important hyperparameter. That also makes the provided data set quite an important thing to have available; so which data set you provide is also quite important. And lastly, the tasks themselves, or the rewards of the goal-directed tasks, are, in this paper, extremely engineered, and that's what I want to come back to lastly. What they tout, for example, in this walk-and-punch thing, is: oh, when the agent is far away, it runs towards the target, but if it's close, it slows down, and then when it's really close, it punches the target, and it sort of learns to combine these different skills. Which is cool, right, because the transition wasn't in the data set. But a big part of it combining these skills is that, in the reward, you make the reward different depending on whether the agent is far away or near; you can see that right here. So these things are reward-shaped to a high degree to encourage these kinds of transitions to happen, which I think is not really practical in a lot of settings. So it's still to be seen how much practical value this has in other reinforcement learning tasks where you don't have that available, and also in other reinforcement learning tasks where maybe the reward is more sparse, and how that affects this thing. Because essentially, if the reward is much more sparse and irregular, now you have a problem, because now the style signal is much more prominent, and that's not necessarily solved by simply reweighting the style signal. So I'm excited to see what comes out of this line of work next. It's a pretty cool line, and as I already said, it's a good application of GANs in a different field than images. And with that, let me know what you think in the comments. I'll see you next time. Bye bye.
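To make the reward engineering criticized here concrete, a distance-phased task reward for walk-and-punch could be sketched like this. All thresholds and magnitudes are invented for illustration; the actual terms live in the paper's appendix.

```python
import numpy as np

def walk_and_punch_reward(agent_pos, target_pos, target_hit: bool) -> float:
    """Hypothetical phase-dependent task reward: encourage approaching when
    far, staying in range when near, and a bonus once the target is hit."""
    if target_hit:
        return 1.0  # success bonus once the punch lands
    dist = float(np.linalg.norm(np.asarray(target_pos) - np.asarray(agent_pos)))
    if dist > 1.5:  # far phase: dense reward for closing the distance
        return 0.3 * float(np.exp(-0.5 * dist))
    return 0.5      # near phase: flat reward for being in punching range
```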
[ { "start": 0, "end": 4.84, "text": " Hey, yo, where's my money?" }, { "start": 4.84, "end": 7.4, "text": " Well get me my money." }, { "start": 7.4, "end": 12, "text": " Alright we're gonna get into this video in a second." }, { "start": 12, "end": 18.04, "text": " Today we're going to look at AMP, Adversarial Motion Priors for Stylized Physics-Based Character" }, { "start": 18.04, "end": 25.72, "text": " Control by Xuebin Peng, Tsema, Pieter Abil, Sergei Levine and Angchu Kanazawa." }, { "start": 25.72, "end": 32.82, "text": " And this paper is in the domain of control and reinforcement learning, but it's with" }, { "start": 32.82, "end": 34.9, "text": " a little bit of a twist." }, { "start": 34.9, "end": 41.72, "text": " So on the high level, this paper trains an agent, a physical agent, as you can see here," }, { "start": 41.72, "end": 47.16, "text": " to perform some sort of goal in the case on the right, it's walking up to a target and" }, { "start": 47.16, "end": 49.099999999999994, "text": " punching the target." }, { "start": 49.1, "end": 57.96, "text": " But to do so in a certain style, and the style is provided by an expert data set or a demonstration" }, { "start": 57.96, "end": 59.64, "text": " data set." }, { "start": 59.64, "end": 65.96000000000001, "text": " So the technique that the paper presents mixes two things, it mixes goal achieving reinforcement" }, { "start": 65.96000000000001, "end": 70.76, "text": " learning, and it also mixes adherence to a given style." }, { "start": 70.76, "end": 75.04, "text": " And the adherence to a given style, that's going to be the adversarial part right here" }, { "start": 75.04, "end": 78.76, "text": " because that's learned in an adversarial way." }, { "start": 78.76, "end": 84.36, "text": " The mixture of the two at the end looks pretty, pretty cool." }, { "start": 84.36, "end": 91.96000000000001, "text": " So the setup right here is a setup of goal achieving and imitation learning as we have" }, { "start": 91.96000000000001, "end": 95.4, "text": " already outlined." }, { "start": 95.4, "end": 101.44, "text": " And the way it works is the following, there is going to be a task and the task can be," }, { "start": 101.44, "end": 106.78, "text": " you have to reach a goal, the task can be you have to punch something, you have to overcome" }, { "start": 106.78, "end": 110.32000000000001, "text": " some obstacles, and then reach a goal." }, { "start": 110.32000000000001, "end": 112.94, "text": " Any anything like this is a task." }, { "start": 112.94, "end": 119.32000000000001, "text": " So the goals are fairly high level and they are given, obviously by a reward function." }, { "start": 119.32000000000001, "end": 123.44, "text": " So you place the agent in an environment and there is a reward function." }, { "start": 123.44, "end": 129.6, "text": " By the way, the agent here is as we already also said, is this sort of physical agent" }, { "start": 129.6, "end": 136.36, "text": " that is going to have some sort of a 3d structure." }, { "start": 136.36, "end": 140.52, "text": " There is going to be joints that it can move." }, { "start": 140.52, "end": 143, "text": " There's a joint here and one here usually." }, { "start": 143, "end": 145.60000000000002, "text": " So and there's a head." 
}, { "start": 145.60000000000002, "end": 150.56, "text": " The agent is this physical thing and it's in a physics simulation and each one of these" }, { "start": 150.56, "end": 158.28000000000003, "text": " joints, it can move kind of independently, sometimes free as a as a ball, sometimes it's" }, { "start": 158.28000000000003, "end": 159.32000000000002, "text": " restricted." }, { "start": 159.32000000000002, "end": 161.60000000000002, "text": " It's modeled very much like a human." }, { "start": 161.6, "end": 167.4, "text": " There are other I believe other models such as a T Rex, which of course work differently." }, { "start": 167.4, "end": 173.84, "text": " But you have this agent and the agent is supposed to reach a goal like somewhere over here," }, { "start": 173.84, "end": 176.2, "text": " there's a little flag, there's a goal." }, { "start": 176.2, "end": 181.94, "text": " And the way the agent can interact with the world is by putting force on any of these" }, { "start": 181.94, "end": 182.94, "text": " joints." }, { "start": 182.94, "end": 186.24, "text": " So it can move these joints in pretty specified ways." }, { "start": 186.24, "end": 188.24, "text": " And that constitutes the actions." }, { "start": 188.24, "end": 194.64000000000001, "text": " So the agent will observe the state and the state here is given mostly by it can observe" }, { "start": 194.64000000000001, "end": 201.88, "text": " how all the joints are currently the velocity of the of the joints or of the of the individual" }, { "start": 201.88, "end": 205.44, "text": " parts of itself in relation to itself." }, { "start": 205.44, "end": 207.58, "text": " So it can sort of feel itself." }, { "start": 207.58, "end": 214.56, "text": " And it also knows in which direction and generally how far away the target that it needs to reach" }, { "start": 214.56, "end": 216.08, "text": " is." }, { "start": 216.08, "end": 221.72, "text": " So that's the observation space, the action spaces, it can affect these joints." }, { "start": 221.72, "end": 226.72000000000003, "text": " And the reward function is often modeled in accordance with the goal." }, { "start": 226.72000000000003, "end": 232.92000000000002, "text": " So the reward function for walking to some goal might simply be you get reward if you" }, { "start": 232.92000000000002, "end": 234.56, "text": " are closer to the goal." }, { "start": 234.56, "end": 238.32000000000002, "text": " Okay, so this encourages the agent to go over there." }, { "start": 238.32000000000002, "end": 242.8, "text": " So we work with quite dense rewards right here." }, { "start": 242.8, "end": 246.8, "text": " Because I guess the fundamental problems of reinforcement learning aren't exactly the" }, { "start": 246.8, "end": 247.8, "text": " point here." }, { "start": 247.8, "end": 252.08, "text": " The point here is, can you teach these things to achieve a goal while maintaining a certain" }, { "start": 252.08, "end": 254.36, "text": " style?" }, { "start": 254.36, "end": 258.04, "text": " Now, this is the the task and the environment." }, { "start": 258.04, "end": 261.24, "text": " In addition to that, you do get a data set." }, { "start": 261.24, "end": 266.90000000000003, "text": " And the data set is demonstrations of a certain nature." }, { "start": 266.90000000000003, "end": 271.14, "text": " So this is not necessarily demonstrations of how to reach the goal." }, { "start": 271.14, "end": 274.24, "text": " It can be any sort of demonstrations." 
}, { "start": 274.24, "end": 279.26, "text": " So usually when people do sort of imitation learning or learning from demonstrations," }, { "start": 279.26, "end": 281.58, "text": " there is a bit there are some requirements." }, { "start": 281.58, "end": 286.84, "text": " If you want to do pure learning from demonstration, of course, the demonstrations need to be how" }, { "start": 286.84, "end": 289.44, "text": " to achieve the goal." }, { "start": 289.44, "end": 292.08, "text": " And that we don't we don't have that here." }, { "start": 292.08, "end": 299.2, "text": " In other cases, you do need the sort of policy or the action of whoever performed the data" }, { "start": 299.2, "end": 300.2, "text": " set." }, { "start": 300.2, "end": 301.96, "text": " So don't need that here." }, { "start": 301.96, "end": 309.2, "text": " Our goal is simply going to be we have to reach the task while while sort of adhering" }, { "start": 309.2, "end": 311.8, "text": " to the data set in a way." }, { "start": 311.8, "end": 314.32, "text": " And this way, we're going to define in a second." }, { "start": 314.32, "end": 321.44, "text": " So the data set you can imagine, I think there is a good demonstration down here, you can" }, { "start": 321.44, "end": 326.84, "text": " imagine the data set to give you sort of the style of movement." }, { "start": 326.84, "end": 332.2, "text": " So in one data set, you can have running movements and walking movements." }, { "start": 332.2, "end": 337.91999999999996, "text": " And in another data set, you could have these movements that were just the these actors" }, { "start": 337.91999999999996, "end": 340.28, "text": " walk like zombies." }, { "start": 340.28, "end": 347.35999999999996, "text": " And the goal here is to combine the style of the data set with reaching the goal." }, { "start": 347.35999999999996, "end": 354.64, "text": " Okay, so the combination would look like a zombie walking to the goal, which adheres" }, { "start": 354.64, "end": 361.91999999999996, "text": " to the zombie walk in the data set, and the goal in specified by the task." }, { "start": 361.91999999999996, "end": 368.5, "text": " Okay, naturally, you're, you're going to model this as two different reward signals." }, { "start": 368.5, "end": 372.9, "text": " So there's the reward signals of how much you reach the goal." }, { "start": 372.9, "end": 378.46, "text": " And there is the reward signal of how well you adhere to the style in the data set." }, { "start": 378.46, "end": 383.8, "text": " The reward goal right here is modeled by classic reinforcement learning." }, { "start": 383.8, "end": 390.02000000000004, "text": " So this is very much very, very classic." }, { "start": 390.02000000000004, "end": 391.3, "text": " Where do we have it?" }, { "start": 391.3, "end": 398.2, "text": " So you would simply train, I don't even think it's it says here, it's update G and D, yada," }, { "start": 398.2, "end": 399.2, "text": " yada, yada." }, { "start": 399.2, "end": 407.46000000000004, "text": " So this is a policy gradient method reinforcement learning, which means that you do have a policy" }, { "start": 407.46, "end": 413.7, "text": " function, which takes in a state and maybe a history, and it will give you an it will" }, { "start": 413.7, "end": 415.88, "text": " give you an action." }, { "start": 415.88, "end": 422.9, "text": " And with that, you also train a value function that takes a state and will give you a value" }, { "start": 422.9, "end": 424.5, "text": " for that state." 
}, { "start": 424.5, "end": 433.09999999999997, "text": " Now, the value function is purely for training the agent, because you do you do advantage" }, { "start": 433.1, "end": 439.3, "text": " estimation with this value function, but essentially, this is a standard policy gradient method" }, { "start": 439.3, "end": 446.82000000000005, "text": " that you train this part is lower part of the this lower part of the thing on sorry," }, { "start": 446.82000000000005, "end": 451.34000000000003, "text": " you actually train the whole thing on this reward." }, { "start": 451.34000000000003, "end": 457.18, "text": " But the bottom part you can imagine is it a reward comes from reaching a goal." }, { "start": 457.18, "end": 460.42, "text": " The top part gives also gives you a reward." }, { "start": 460.42, "end": 461.42, "text": " Okay." }, { "start": 461.42, "end": 467.06, "text": " And yes, I want to reiterate, both of these rewards are used to train the policy and the" }, { "start": 467.06, "end": 470.26, "text": " value in a policy gradient fashion." }, { "start": 470.26, "end": 476.82, "text": " So both rewards ultimately are in this standard advantage estimation reinforcement learning" }, { "start": 476.82, "end": 477.82, "text": " setting." }, { "start": 477.82, "end": 484.22, "text": " However, the top reward is calculated differently than simply do you reach the goal, the top" }, { "start": 484.22, "end": 488.62, "text": " reward is a measure of how close you are in style to the data set." }, { "start": 488.62, "end": 491.3, "text": " And that's given by this motion prior." }, { "start": 491.3, "end": 498.38, "text": " And the motion prior is given by a GAN by a generative adversarial network." }, { "start": 498.38, "end": 505.34000000000003, "text": " And I'm trying to, to find the formula here." }, { "start": 505.34000000000003, "end": 511.26, "text": " I think this here is the the best description of it, though it's just a formula." }, { "start": 511.26, "end": 519.3, "text": " So a generative adversarial model, I'm pretty sure you're you're all aware, there is a data" }, { "start": 519.3, "end": 525.8599999999999, "text": " set right here, there is a generator right here, the generator gets some random noise" }, { "start": 525.8599999999999, "end": 532.9799999999999, "text": " as an input, it outputs a sample x from the data set, you get a sample x prime or a mini" }, { "start": 532.9799999999999, "end": 533.9799999999999, "text": " batch." }, { "start": 533.9799999999999, "end": 540.4599999999999, "text": " And then both of these, or these either of these goes into the discriminator model." }, { "start": 540.4599999999999, "end": 544.74, "text": " And the discriminator has to decide for any sample, is it real?" }, { "start": 544.74, "end": 546.5999999999999, "text": " Or is it fake?" }, { "start": 546.6, "end": 553.78, "text": " So the way this generative adversarial network approaches the problem of specifying which" }, { "start": 553.78, "end": 558.88, "text": " motions are real and which ones are not, is by looking at transitions." }, { "start": 558.88, "end": 563.76, "text": " So the data set here is not images or so like you're used to in a regular GAN, but the data" }, { "start": 563.76, "end": 565.44, "text": " set is transitions." }, { "start": 565.44, "end": 566.44, "text": " What does that mean?" }, { "start": 566.44, "end": 575.5, "text": " So in every situation, your humanoid or whatnot is here, and the goal is over here." 
}, { "start": 575.5, "end": 578.72, "text": " And this is one state, this is s." }, { "start": 578.72, "end": 585.6, "text": " And then the agent takes an action, okay, the action could be please lift one leg." }, { "start": 585.6, "end": 587.5, "text": " And how does that evolve?" }, { "start": 587.5, "end": 594.3, "text": " So the new agent would be kind of here, shifting the weight a little bit and lifting one leg." }, { "start": 594.3, "end": 599.94, "text": " Okay, so this would be one action, which would lead to a new state s prime." }, { "start": 599.94, "end": 604.34, "text": " So you have three quantities, you have the state, you have the action that the agent" }, { "start": 604.34, "end": 608.84, "text": " took, and you have the new state s prime." }, { "start": 608.84, "end": 615.46, "text": " Now you could parameterize the transition either using state and action, or state and" }, { "start": 615.46, "end": 616.82, "text": " next state." }, { "start": 616.82, "end": 623.62, "text": " The paper here does state and next state for the reason that in the data set, in the data" }, { "start": 623.62, "end": 630.62, "text": " set that you get right here, you do not have the action available, you can probably guess" }, { "start": 630.62, "end": 634.9, "text": " it, but you do have the state and the next state." }, { "start": 634.9, "end": 639.64, "text": " This data set can come from anywhere it can come from human demonstration, it can come" }, { "start": 639.64, "end": 645.48, "text": " from key frames made by a 3d artist, or maybe another agent that has already solved the" }, { "start": 645.48, "end": 646.48, "text": " problem." }, { "start": 646.48, "end": 648.96, "text": " Therefore, you don't always have the actions available." }, { "start": 648.96, "end": 656.0600000000001, "text": " So a transition is going to be specified by a state and a next state." }, { "start": 656.06, "end": 661.54, "text": " And the transitions from the data set are transitions that you observe in the real world." }, { "start": 661.54, "end": 666.8199999999999, "text": " So these are state next state pairs that you observe in the real world." }, { "start": 666.8199999999999, "end": 675.3399999999999, "text": " And the generator, the generator essentially outputs state next state pairs." }, { "start": 675.3399999999999, "end": 681.3399999999999, "text": " Now this generator isn't a generator in a like in a classic adversarial network." }, { "start": 681.34, "end": 687.94, "text": " But this here is generated by your policy interacting with the environment, right?" }, { "start": 687.94, "end": 693.82, "text": " So here's your policy, it interacts with the environment." }, { "start": 693.82, "end": 698.1800000000001, "text": " And the environment gives you the state and in the next step, it gives you the next state," }, { "start": 698.1800000000001, "end": 699.1800000000001, "text": " right?" }, { "start": 699.1800000000001, "end": 706.58, "text": " So by interacting with your environment, you do get state next state pairs, these are essentially" }, { "start": 706.58, "end": 708.58, "text": " your generated pairs." }, { "start": 708.58, "end": 715.86, "text": " And the discriminator is trained to discriminate between whether or not a transition is from" }, { "start": 715.86, "end": 722.6600000000001, "text": " the real data set, or whether it has been generated by your agent." }, { "start": 722.6600000000001, "end": 726.14, "text": " Now of course, this whole system isn't backpropagatable." 
}, { "start": 726.14, "end": 729.46, "text": " And that's why you do train it using reinforcement learning." }, { "start": 729.46, "end": 735.4200000000001, "text": " So the reward, the usual backpropagation signal that you would have in a generator right here," }, { "start": 735.4200000000001, "end": 736.82, "text": " you can't do that." }, { "start": 736.82, "end": 743.34, "text": " That's why you simply take the output here, the loss of the discriminator as a reward" }, { "start": 743.34, "end": 747.6, "text": " for the for the policy right here." }, { "start": 747.6, "end": 755.1800000000001, "text": " So in this case, the policy using policy gradient is trying to fool the discriminator into thinking" }, { "start": 755.1800000000001, "end": 762.6, "text": " it into it thinking that the transitions that it generates come from a real data set." }, { "start": 762.6, "end": 767.36, "text": " While the discriminator at the same time is always trained to differentiate between the" }, { "start": 767.36, "end": 771.82, "text": " true data set and the transitions that the policy generates." }, { "start": 771.82, "end": 776.62, "text": " Alright, so that gives you a reward signal for the policy." }, { "start": 776.62, "end": 781.26, "text": " And the other reward signal comes simply from the environment as we've already stated." }, { "start": 781.26, "end": 787.6600000000001, "text": " So these two rewards are then combined with each other and used to train the policy, the" }, { "start": 787.6600000000001, "end": 792.5400000000001, "text": " discriminator itself, as we already seen is trained." }, { "start": 792.54, "end": 798.4, "text": " So this thing here is actually the discriminator, this more motion prior is trained one hand" }, { "start": 798.4, "end": 799.74, "text": " from the data set." }, { "start": 799.74, "end": 808.1999999999999, "text": " And on the other hand, from the from the policy generating actions and generating transitions" }, { "start": 808.1999999999999, "end": 809.6999999999999, "text": " through the environment." }, { "start": 809.6999999999999, "end": 814.54, "text": " Alright, I hope that is a bit clear right here." }, { "start": 814.54, "end": 820.3399999999999, "text": " So there are many components to this, but two are important, the policy, which tries" }, { "start": 820.34, "end": 824.46, "text": " to at the same time reach a goal and fool the discriminator." }, { "start": 824.46, "end": 827.1800000000001, "text": " Those are two rewards, there are two rewards are combined." }, { "start": 827.1800000000001, "end": 832.5, "text": " And on the other hand, the discriminator itself simply gets transitions from the data set" }, { "start": 832.5, "end": 839.36, "text": " and gets transitions from the policy environment interaction and tries to train itself to pull" }, { "start": 839.36, "end": 841.38, "text": " the two apart." }, { "start": 841.38, "end": 844.62, "text": " So it's a it's a classic two player game." }, { "start": 844.62, "end": 850.94, "text": " And yeah, that that is what you're used to from a GAN." }, { "start": 850.94, "end": 855.62, "text": " Alright, and that's essentially it for this thing." }, { "start": 855.62, "end": 861.88, "text": " Here is the algorithm we generally initialize everything there is a replay buffer like in" }, { "start": 861.88, "end": 866.34, "text": " a classic reinforcement learning which stabilizes training quite a bit." 
}, { "start": 866.34, "end": 871.94, "text": " I also mentioned the value function which is used for the advantage estimates of policy" }, { "start": 871.94, "end": 873.04, "text": " gradient." }, { "start": 873.04, "end": 883.4599999999999, "text": " So you for M steps, you collect trajectories using the policy you already have, then you" }, { "start": 883.4599999999999, "end": 887.62, "text": " feed the transitions to the discriminator right here." }, { "start": 887.62, "end": 891.06, "text": " Now this here is a feature function of the state." }, { "start": 891.06, "end": 897.06, "text": " So you only they have special feature functions, which make the this problem easier." }, { "start": 897.06, "end": 901.3399999999999, "text": " There's a lot of expert knowledge going into how you build the features, how you represent" }, { "start": 901.34, "end": 903.6600000000001, "text": " the environment and so on." }, { "start": 903.6600000000001, "end": 908.7800000000001, "text": " So it's not quite trivial, but I don't I don't want to go too much into that." }, { "start": 908.7800000000001, "end": 914.74, "text": " You do calculate the style reward according to equation seven, equation seven is simply" }, { "start": 914.74, "end": 917.34, "text": " the discriminator." }, { "start": 917.34, "end": 919.0400000000001, "text": " It's not the discriminator loss." }, { "start": 919.0400000000001, "end": 922.82, "text": " So the discriminator loss is actually is this thing right here." }, { "start": 922.82, "end": 931.4200000000001, "text": " They do use a square loss for the discriminator instead of a classic GAN loss." }, { "start": 931.4200000000001, "end": 937.0600000000001, "text": " So the classic GAN loss would be this thing up here, where it's log D minus log one minus" }, { "start": 937.0600000000001, "end": 943.34, "text": " D. Yet they use this square loss that they found to work a lot better or least square" }, { "start": 943.34, "end": 944.34, "text": " loss." }, { "start": 944.34, "end": 950.6600000000001, "text": " You can see the discriminator is trained to be close to one if the data comes from the" }, { "start": 950.66, "end": 954.42, "text": " real data set, which is capital M here." }, { "start": 954.42, "end": 959.78, "text": " And it's trained to be negative one when it comes from the policy." }, { "start": 959.78, "end": 966.7199999999999, "text": " So nothing stops the discriminator from spitting out any number like 15 or three." }, { "start": 966.7199999999999, "end": 971.14, "text": " It's just trained in a least squares fashion to go to these numbers, which gives you a" }, { "start": 971.14, "end": 973.38, "text": " better gradient." }, { "start": 973.38, "end": 982.14, "text": " So for these continuous control problems, often you have to go to least squares objectives," }, { "start": 982.14, "end": 987.98, "text": " because which number is being output is often quite important rather than just a classification." }, { "start": 987.98, "end": 995.46, "text": " And even here where it is actually a classification loss, right, which is surprising, but cool." }, { "start": 995.46, "end": 1003.26, "text": " And then the reward, you know, given a transition is calculated as so this is clipped at zero." }, { "start": 1003.26, "end": 1010.42, "text": " So this is also between zero and one, as you can see here, if the discriminator says one," }, { "start": 1010.42, "end": 1014.3, "text": " the reward is the highest, the reward is actually one." 
}, { "start": 1014.3, "end": 1020.54, "text": " And when is the discriminator one, the discriminator is one if it thinks that the reward, sorry," }, { "start": 1020.54, "end": 1023.22, "text": " that the transition comes from the real data set." }, { "start": 1023.22, "end": 1031.18, "text": " So if the policy manages to produce a transition that the discriminator things comes from the" }, { "start": 1031.18, "end": 1033.9, "text": " real data set, it gets maximum reward." }, { "start": 1033.9, "end": 1034.9, "text": " Okay." }, { "start": 1034.9, "end": 1040.8200000000002, "text": " And if it also reaches the goal, it gets maximum reward from that part of the reward signal" }, { "start": 1040.8200000000002, "end": 1041.8200000000002, "text": " too." }, { "start": 1041.8200000000002, "end": 1048.9, "text": " So the general encouragement that we give the policy is you should reach the goal in" }, { "start": 1048.9, "end": 1051.78, "text": " a matter that's consistent with the data set." }, { "start": 1051.78, "end": 1058.66, "text": " So it should probably pick out things that do both, right, it could try to, it could" }, { "start": 1058.66, "end": 1063.94, "text": " try to switch between the two modes like, okay, let's do a little bit of data set, let's" }, { "start": 1063.94, "end": 1068.42, "text": " do a little bit of goal reaching, but it's probably better if it actually picks things" }, { "start": 1068.42, "end": 1076, "text": " from the data set or behaviors from the data set that also reach the goal in a matter consistent" }, { "start": 1076, "end": 1080.38, "text": " with the reward with the task reward." }, { "start": 1080.38, "end": 1083.02, "text": " So the algorithm just to finish it goes on." }, { "start": 1083.02, "end": 1087.0600000000002, "text": " And it says, okay, so this is the style reward." }, { "start": 1087.06, "end": 1093.22, "text": " The true reward is given by a mixture, a weighted mixture between the style and the task reward" }, { "start": 1093.22, "end": 1097.22, "text": " and the weights you have to specify." }, { "start": 1097.22, "end": 1103.1799999999998, "text": " And then we simply store these, this trajectory in our replay buffer." }, { "start": 1103.1799999999998, "end": 1108.7, "text": " And then we use the replay buffer to update the discriminator." }, { "start": 1108.7, "end": 1115.1399999999999, "text": " And we also use the replay buffer to update the value function and the trajectory according" }, { "start": 1115.14, "end": 1117.4, "text": " to policy gradient." }, { "start": 1117.4, "end": 1122.8600000000001, "text": " They point out a few things that are important right here to their algorithm." }, { "start": 1122.8600000000001, "end": 1126.4, "text": " One of them they find very important is this gradient penalty." }, { "start": 1126.4, "end": 1129.8200000000002, "text": " So GAN training can be a bit unstable." }, { "start": 1129.8200000000002, "end": 1136.22, "text": " And these gradient penalties, they are a way to stabilize this training." }, { "start": 1136.22, "end": 1143.46, "text": " And they found that simply penalizing the norm of the gradient as it comes out of the" }, { "start": 1143.46, "end": 1151.18, "text": " discriminator is stabilizing the training right here." }, { "start": 1151.18, "end": 1155.1000000000001, "text": " So this is one thing they've helped." }, { "start": 1155.1000000000001, "end": 1161.24, "text": " This is one thing that they claim is helping them a lot to actually converge." 
}, { "start": 1161.24, "end": 1164.78, "text": " And this tells you a little bit that it's still quite finicky." }, { "start": 1164.78, "end": 1171.26, "text": " They talk a lot about the representation of the actions right here, the policy here in" }, { "start": 1171.26, "end": 1176.3799999999999, "text": " network architecture, the policy and value and discriminator functions." }, { "start": 1176.3799999999999, "end": 1181.52, "text": " They are very simple multi-layer perceptron." }, { "start": 1181.52, "end": 1188.06, "text": " So you can see like the mean of the policy function is specified by a fully connected" }, { "start": 1188.06, "end": 1195.06, "text": " network with two hidden layers consisting of 1024 and 512." }, { "start": 1195.06, "end": 1199.82, "text": " Relu, Relu, consisting of Relu." }, { "start": 1199.82, "end": 1205.78, "text": " Okay, I guess that's a fully connected layer with a Relu non-linearity followed by linear" }, { "start": 1205.78, "end": 1206.78, "text": " output." }, { "start": 1206.78, "end": 1209.6599999999999, "text": " So the networks aren't super complicated right here." }, { "start": 1209.6599999999999, "end": 1216.32, "text": " What's more complicated is the training procedure, the loss, the regularization constants and" }, { "start": 1216.32, "end": 1218.58, "text": " the reward engineering." }, { "start": 1218.58, "end": 1221.8999999999999, "text": " So there is a lot of reward engineering happening right here." }, { "start": 1221.8999999999999, "end": 1224.74, "text": " And that's what you find in the appendix." }, { "start": 1224.74, "end": 1233.18, "text": " So the reward, for example, for going and punching something is threefold." }, { "start": 1233.18, "end": 1236.42, "text": " So if you are far away, it's one reward." }, { "start": 1236.42, "end": 1238.98, "text": " If you're close, it's a different reward." }, { "start": 1238.98, "end": 1242.58, "text": " And if that target has been hit, it's a different reward, right?" }, { "start": 1242.58, "end": 1248.94, "text": " I guess the top line makes sense, but the others are sort of reward shaping the behavioral" }, { "start": 1248.94, "end": 1249.94, "text": " one." }, { "start": 1249.94, "end": 1256.66, "text": " So you can see the agent to kind of approach the target fast, but then kind of slow down." }, { "start": 1256.66, "end": 1262.0800000000002, "text": " And also, you know, if you look at something like dribbling, where there's a ball involved," }, { "start": 1262.0800000000002, "end": 1265.04, "text": " there is a lot of reward shaping going on." }, { "start": 1265.04, "end": 1272.42, "text": " Even in in target location, there is a lot of reward shaping going on, where you sort" }, { "start": 1272.42, "end": 1276.22, "text": " of encourage the agent to have certain velocities and so on." }, { "start": 1276.22, "end": 1284.14, "text": " So this is important because of the experimental results that they show." }, { "start": 1284.14, "end": 1288.78, "text": " And that's where we go back to the video." }, { "start": 1288.78, "end": 1290.7, "text": " Where's the video?" }, { "start": 1290.7, "end": 1291.7, "text": " Right here." }, { "start": 1291.7, "end": 1298.54, "text": " So keep in mind, their point is you're able to reach a goal in the style of the data set." }, { "start": 1298.54, "end": 1301.3, "text": " So this is the simplest task they have." 
}, { "start": 1301.3, "end": 1307.32, "text": " It's called target heading, and the goal is simply to walk or to go in a given direction" }, { "start": 1307.32, "end": 1309.8999999999999, "text": " at a certain speed." }, { "start": 1309.8999999999999, "end": 1315.86, "text": " And the example clips they have are displayed on the right." }, { "start": 1315.86, "end": 1322.6599999999999, "text": " So the example clips are of someone walking and of someone running." }, { "start": 1322.6599999999999, "end": 1328.8, "text": " Yet there is not really a transition in the data set from walking to running." }, { "start": 1328.8, "end": 1334.56, "text": " And the agent learns to this transition by itself." }, { "start": 1334.56, "end": 1339.78, "text": " So their point is always, look, we have kind of simple things in the data set, we have" }, { "start": 1339.78, "end": 1343.4199999999998, "text": " the individual parts in the data set that the agent should do." }, { "start": 1343.4199999999998, "end": 1346.82, "text": " But we never have the combination of all the things." }, { "start": 1346.82, "end": 1352.54, "text": " And to kind of stitch these parts together, that's the powerful thing about this method," }, { "start": 1352.54, "end": 1353.96, "text": " which is pretty cool." }, { "start": 1353.96, "end": 1359.6200000000001, "text": " So here, you can see at the top right, there is a target speed." }, { "start": 1359.6200000000001, "end": 1363.26, "text": " And all of these three agents are trained agents." }, { "start": 1363.26, "end": 1369.8600000000001, "text": " And in the same manner, right, and they're all told to reach that given target speed." }, { "start": 1369.8600000000001, "end": 1377.66, "text": " However, the agent on the left only has been provided with a data set of people just walking." }, { "start": 1377.66, "end": 1383.8400000000001, "text": " The agent in the middle, the same, but it has only received a data set of just agents" }, { "start": 1383.84, "end": 1384.84, "text": " running." }, { "start": 1384.84, "end": 1386.26, "text": " So no walking." }, { "start": 1386.26, "end": 1392.6599999999999, "text": " And on the right, this agent has received a data set of agents walking and running." }, { "start": 1392.6599999999999, "end": 1401.1399999999999, "text": " So you can see that as the target speed changes, the like if it's fast, the walker is not able" }, { "start": 1401.1399999999999, "end": 1405.26, "text": " to keep up when it's slow, the runner is not able to slow down." }, { "start": 1405.26, "end": 1411.06, "text": " However, the agent that has the full data set available can not only match the speed" }, { "start": 1411.06, "end": 1417.02, "text": " and change its style according to the speed, it can it also learns the transitions from" }, { "start": 1417.02, "end": 1418.6799999999998, "text": " one to the other." }, { "start": 1418.6799999999998, "end": 1421.98, "text": " And this these transitions are not in the data set itself." }, { "start": 1421.98, "end": 1429.5, "text": " Okay, so the cool part about this method is it can sort of stitch together the appropriate" }, { "start": 1429.5, "end": 1432.82, "text": " behaviors from the data set." }, { "start": 1432.82, "end": 1438.3, "text": " Even if you don't provide these specifically to solve the task." }, { "start": 1438.3, "end": 1441.5, "text": " The Yeah, this is the t rex." 
}, { "start": 1441.5, "end": 1447.02, "text": " I think this is just to show that you don't have use motion capture, but you can use it." }, { "start": 1447.02, "end": 1452.7, "text": " You can learn from a provided data set of keyframe animation." }, { "start": 1452.7, "end": 1457.4199999999998, "text": " And you can also see the there is nothing in the data set about reaching a goal." }, { "start": 1457.4199999999998, "end": 1460.82, "text": " There's just kind of demonstrations of the t rex walking." }, { "start": 1460.82, "end": 1468.02, "text": " And the method is able to adapt this walking style in concordance with reaching a goal." }, { "start": 1468.02, "end": 1473.5, "text": " So you can see that the turning is much like the turning in the example clips." }, { "start": 1473.5, "end": 1482.2, "text": " Whereas if you've ever seen things like this without without the the examples, these policies" }, { "start": 1482.2, "end": 1486.22, "text": " that these things come up with are quite weird." }, { "start": 1486.22, "end": 1488.42, "text": " So here's a failure case." }, { "start": 1488.42, "end": 1494.34, "text": " And so the difference between this method and other methods is other methods, such as" }, { "start": 1494.34, "end": 1500.4599999999998, "text": " this motion tracking in the middle, what they try to do is they try to match a given behavior" }, { "start": 1500.4599999999998, "end": 1503.8999999999999, "text": " from the data set as closely as possible." }, { "start": 1503.8999999999999, "end": 1506.06, "text": " So this it's called motion tracking." }, { "start": 1506.06, "end": 1510.6999999999998, "text": " Now there is some sophistication to it more than I'm saying right here." }, { "start": 1510.6999999999998, "end": 1513.78, "text": " But essentially, you have a front flip on the left." }, { "start": 1513.78, "end": 1520.8999999999999, "text": " And then the motion tracking algorithm tries to learn a policy such that the behavior is" }, { "start": 1520.8999999999999, "end": 1522.78, "text": " followed as closely as possible." }, { "start": 1522.78, "end": 1528.98, "text": " Now, again, this is really good when you have the exact demonstration available from what" }, { "start": 1528.98, "end": 1530.16, "text": " you want to do." }, { "start": 1530.16, "end": 1537.34, "text": " It's not so good if you if what you have available as demonstrations is not isn't really what" }, { "start": 1537.34, "end": 1541.56, "text": " you want to do is just sort of some demonstrations." }, { "start": 1541.56, "end": 1545.08, "text": " But there are failure cases, of course, if you want to copy exactly." }, { "start": 1545.08, "end": 1551.86, "text": " So if you want to do a front flip, and by the way, the reward function here is how closely" }, { "start": 1551.86, "end": 1557.2199999999998, "text": " you match the motion from the reference motion." }, { "start": 1557.2199999999998, "end": 1558.74, "text": " So that's the reward function." }, { "start": 1558.74, "end": 1562.78, "text": " However, motion tracking does more than that motion tracking really tries to track the" }, { "start": 1562.78, "end": 1564.1, "text": " motion itself." }, { "start": 1564.1, "end": 1568.78, "text": " While this method here would only get the reward of tracking the motion." }, { "start": 1568.78, "end": 1577.9799999999998, "text": " And you can see it doesn't manage to to actually learn it more like doesn't try it tries to" }, { "start": 1577.9799999999998, "end": 1579.4599999999998, "text": " not fail." 
}, { "start": 1579.46, "end": 1584.8600000000001, "text": " So it reaches the same end position and that's sort of good enough for it." }, { "start": 1584.8600000000001, "end": 1592.5, "text": " So there is a Yeah, there is a trade off right here." }, { "start": 1592.5, "end": 1596.78, "text": " It's probably also given by how much you weigh the different components." }, { "start": 1596.78, "end": 1602.38, "text": " So here you have a data set of agents walking and agents waving." }, { "start": 1602.38, "end": 1609.46, "text": " And then what you want to do is you want to have a agent that walks in a direction while" }, { "start": 1609.46, "end": 1614.3400000000001, "text": " they wave the arm or why they they lift the arm or something." }, { "start": 1614.3400000000001, "end": 1621.18, "text": " So at the left, you can see if you only have a data set, if you only have a data set of" }, { "start": 1621.18, "end": 1627.3400000000001, "text": " the waving agents, it's really struggling moving forward, right that the walking it" }, { "start": 1627.3400000000001, "end": 1629.5800000000002, "text": " learns it has no demonstration of walking." }, { "start": 1629.5800000000002, "end": 1631.24, "text": " So that's a struggle." }, { "start": 1631.24, "end": 1638.18, "text": " If you only have the walking demonstration in the middle, then it doesn't really track" }, { "start": 1638.18, "end": 1643.14, "text": " the arm movement where it should even though there is a reward for it, right?" }, { "start": 1643.14, "end": 1652.94, "text": " Only Yeah, on the right, I mean, this is somewhat somewhat, but it is kind of able to to interpolate." }, { "start": 1652.94, "end": 1657.34, "text": " So if you if you want to check out this video, there is another one that actually explains" }, { "start": 1657.34, "end": 1659.6200000000001, "text": " the paper in a short form." }, { "start": 1659.62, "end": 1662.2199999999998, "text": " This is from from SIGGRAPH." }, { "start": 1662.2199999999998, "end": 1663.4599999999998, "text": " Go check it out." }, { "start": 1663.4599999999998, "end": 1666.6399999999999, "text": " They do have more sophisticated behaviors." }, { "start": 1666.6399999999999, "end": 1674.04, "text": " So on the bottom here, you can, for example, see the obstacle run, leap and roll." }, { "start": 1674.04, "end": 1679.6599999999999, "text": " So the data set contains demonstrations from all of those things, but not the things in" }, { "start": 1679.6599999999999, "end": 1683.54, "text": " conjunction with each other." }, { "start": 1683.54, "end": 1690.74, "text": " In this here, at least what they describe in the text in this, this right here, what" }, { "start": 1690.74, "end": 1696.1, "text": " they have in the data set is demonstrations of walking and demonstrations of getting up" }, { "start": 1696.1, "end": 1697.94, "text": " from the ground." }, { "start": 1697.94, "end": 1705.42, "text": " And whenever so the agent learns that whenever it falls over right here, that it can get" }, { "start": 1705.42, "end": 1709.06, "text": " up faster if it kind of does this rolling motion right here." }, { "start": 1709.06, "end": 1717.54, "text": " So this was nowhere in the data set, but because the agent wants to go to a get up state, both" }, { "start": 1717.54, "end": 1722.32, "text": " because that will go it that will make it go towards a goal." 
}, { "start": 1722.32, "end": 1727.22, "text": " And also because that matches behavior in the data set, it will learn this rolling motion" }, { "start": 1727.22, "end": 1730.54, "text": " as it falls down in order to get up again." }, { "start": 1730.54, "end": 1733.46, "text": " So that is that's pretty cool." }, { "start": 1733.46, "end": 1741.38, "text": " Also in this strike and punch example, the data set apparently only contains agents walking" }, { "start": 1741.38, "end": 1747.66, "text": " or agents punching, it never contains agents walking, and then punching." }, { "start": 1747.66, "end": 1755.1000000000001, "text": " So the transition that you saw at the beginning is a learned behavior that wasn't in the data" }, { "start": 1755.1000000000001, "end": 1756.46, "text": " set." }, { "start": 1756.46, "end": 1762.8600000000001, "text": " So that's, I think it's a it's a pretty cool application of and a combination of two things" }, { "start": 1762.86, "end": 1771.1799999999998, "text": " of adversarial learning and of of learning sorry, not from demonstration because that's" }, { "start": 1771.1799999999998, "end": 1774.86, "text": " adversarial learning of learning to reach a goal." }, { "start": 1774.86, "end": 1778.4599999999998, "text": " And it's a good Yeah, it's a good demonstration of how you can combine the two they have a" }, { "start": 1778.4599999999998, "end": 1786.1, "text": " lot of ablations where they sort of show that the impact of the data set makes a big difference." }, { "start": 1786.1, "end": 1788.82, "text": " I mean, you've seen this in the demonstrations." }, { "start": 1788.82, "end": 1792.54, "text": " But also here you can see that again in a graphical form." }, { "start": 1792.54, "end": 1798.42, "text": " So the locomotion data set contains both demonstrations of walking and running, while the walk or" }, { "start": 1798.42, "end": 1804.78, "text": " the run data set only contains demonstrations of either and the here is the target speed" }, { "start": 1804.78, "end": 1808.7, "text": " versus the average speed that the agent does." }, { "start": 1808.7, "end": 1814.42, "text": " Now if you only have a walking data set, the agent no matter the target speeds, the agent" }, { "start": 1814.42, "end": 1817.78, "text": " will always kind of stick to walking." }, { "start": 1817.78, "end": 1823.06, "text": " And if you have the running data set, it can run faster up here." }, { "start": 1823.06, "end": 1829.02, "text": " But if you want it to slow down, it can't really run slower than you require." }, { "start": 1829.02, "end": 1835.26, "text": " Only when the data set contains both things, can it transition between the two and actually" }, { "start": 1835.26, "end": 1839.8, "text": " match the running or walking." }, { "start": 1839.8, "end": 1842.94, "text": " So what do we think of this?" }, { "start": 1842.94, "end": 1848.6200000000001, "text": " My opinion is it's probably it's very cool." }, { "start": 1848.6200000000001, "end": 1856.02, "text": " It's a good way of sort of bringing demonstrations into the picture without manually tracking" }, { "start": 1856.02, "end": 1859.3400000000001, "text": " the demonstrations or copying exactly." }, { "start": 1859.3400000000001, "end": 1865.0800000000002, "text": " So you just give some suggestions to the algorithm of what it could do." 
}, { "start": 1865.0800000000002, "end": 1871.94, "text": " And you do that in form of a data set, which is something that I like, because it's not" }, { "start": 1871.94, "end": 1878.22, "text": " as invasive as telling the agent, you know, you need to match the joint movements and" }, { "start": 1878.22, "end": 1881.5, "text": " so on of the of the demonstration." }, { "start": 1881.5, "end": 1888.02, "text": " This enables demonstrations to come in that are of a much broader range, not necessarily" }, { "start": 1888.02, "end": 1891.5800000000002, "text": " reach the goal, not necessarily even have a goal in mind." }, { "start": 1891.5800000000002, "end": 1892.7, "text": " So that's cool." }, { "start": 1892.7, "end": 1899.66, "text": " On the other hand, I think it's pretty finicky because you have to strike the trade off parameter" }, { "start": 1899.66, "end": 1906.18, "text": " between the two rewards quite cleanly, or clearly for your goal." }, { "start": 1906.18, "end": 1912.5, "text": " Because we've already seen right at some point, the agent won't reach the goal anymore." }, { "start": 1912.5, "end": 1920.98, "text": " If if this reward here, if the reward of the style is too high, we already saw this if" }, { "start": 1920.98, "end": 1926.6200000000001, "text": " you have a data set of just running, the agent will simply neglect the goal, it won't go" }, { "start": 1926.62, "end": 1932.86, "text": " slower than, you know, the kind of the slowest run or demonstration or a little bit slower" }, { "start": 1932.86, "end": 1940.02, "text": " than that, it just won't change its policy because it needs to match the data set." }, { "start": 1940.02, "end": 1947.84, "text": " And the this balance seems to be quite, quite a important hyper parameter." }, { "start": 1947.84, "end": 1955.7399999999998, "text": " And that also makes the provided data set here quite an important thing to to have available." }, { "start": 1955.74, "end": 1960.38, "text": " So which data set you provide is also quite important." }, { "start": 1960.38, "end": 1968.96, "text": " And lastly, the tasks themselves or the reward of the goal directed task nature, or in this" }, { "start": 1968.96, "end": 1972.06, "text": " paper, extremely engineered." }, { "start": 1972.06, "end": 1978.66, "text": " And that's what I want to come back here lastly to so what they tout, for example, in this" }, { "start": 1978.66, "end": 1985.34, "text": " walk and punch thing, they say, oh, when the agent is far away, it runs towards the" }, { "start": 1985.34, "end": 1986.3799999999999, "text": " target." }, { "start": 1986.3799999999999, "end": 1989.6, "text": " But if it's close, it only it slows down." }, { "start": 1989.6, "end": 1992.8999999999999, "text": " And then when it's really close, it punches the target." }, { "start": 1992.8999999999999, "end": 1996.4199999999998, "text": " And it sort of learns to combine these different skills." }, { "start": 1996.4199999999998, "end": 2000.4199999999998, "text": " But and which is cool, right, because the transition wasn't in the data set." }, { "start": 2000.4199999999998, "end": 2007.8999999999999, "text": " But a big part of it combining these skills is because in the reward, you make the reward" }, { "start": 2007.8999999999999, "end": 2013.8999999999999, "text": " different, whether the agent is far away, or whether it's near, you can see that right" }, { "start": 2013.8999999999999, "end": 2014.8999999999999, "text": " here." 
}, { "start": 2014.9, "end": 2022.0600000000002, "text": " So these things are reward shaped to a high degree to encourage these kinds of transitions" }, { "start": 2022.0600000000002, "end": 2029.5400000000002, "text": " to happen, which I think is not really practical in a lot of settings." }, { "start": 2029.5400000000002, "end": 2037.3400000000001, "text": " So it's still to be seen how much this is of practical value in other reinforcement" }, { "start": 2037.3400000000001, "end": 2040.5400000000002, "text": " learning tasks where you don't have that available." }, { "start": 2040.54, "end": 2046.6599999999999, "text": " And also in other reinforcement learning tasks, where maybe the reward is more sparse, and" }, { "start": 2046.6599999999999, "end": 2054.7599999999998, "text": " how that affects this thing, because essentially, if the reward is much more sparse and irregular," }, { "start": 2054.7599999999998, "end": 2059.54, "text": " now you have a problem because now the style signal is much more prominent." }, { "start": 2059.54, "end": 2065.1, "text": " And that's not necessarily solved by simply reweighing the style signal." }, { "start": 2065.1, "end": 2069.5, "text": " So I'm excited to see what comes out of this line of work." }, { "start": 2069.5, "end": 2075.94, "text": " Next, it's a pretty cool line, as I already said, it's a good application of GANs in a" }, { "start": 2075.94, "end": 2078.5, "text": " different field than images." }, { "start": 2078.5, "end": 2081.94, "text": " And with that, let me know what you think in the comments." }, { "start": 2081.94, "end": 2083.14, "text": " I'll see you next time." }, { "start": 2083.14, "end": 2100.18, "text": " Bye bye." } ]
hkw-WDBipgo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Talking to companies at ICML19
[ "Science & Technology" ]
[ "machine learning", "conference", "ai", "artificial intelligence", "industry", "academia", "deep learning", "hardware", "lidar", "graphcore" ]
A short rant on sponsor companies at ICML and how to talk to them.
All right, I quickly want to talk about interacting with company reps at these conferences, because to me it's still a bit of a mystery, a bit unclear what to do. There are very different kinds of companies at these conferences. Some companies, I feel, are there to basically show off their technology, wanting you to use it. One example is Graphcore, the new kid on the block for AI hardware, in that they claim to have a chip specifically designed for the types of operations that machine learning applications do. So even more specialized than a GPU, and they also claim to be faster for equivalent spending than an Nvidia GPU, a classic GPU, so basically you get much more bang for the buck. For now they just offer a cloud solution, I believe, and they're going to sell their cards through Dell. The way it works is they have a low-level compiler that will compile your model to these cards, and for now you can interact with it through C++, and then TensorFlow support will come later, something like this. The thing about their card is that they have an extremely large memory right next to the compute unit; this would correspond to your traditional level-one cache. That means you technically get much faster access to your local variables, but they don't have any off-chip RAM, which means the entire card only has something like 300 megabytes of memory. They claim, though, that if you have a large model, you can just distribute it over many cards, and then you get the speedup of the cards without having to sacrifice model size. Another company that shows off really cool technology is one that does LIDAR. I forget the name right now and will try to look it up, but they make a LIDAR sensor that is super tiny and costs a fraction of a traditional LIDAR sensor. I think they said theirs costs about $12,000, and it's really tiny and has a couple of advantages compared to traditional sensors. As far as I understand, their lasers are mounted on the same chip, so they always point in the same direction, which reduces a lot of inaccuracies. I guess people would be interested in that for self-driving cars and so on. These are the hardware demonstrations that I've seen. Then there are other things, like a wellness center where you can get a massage, sponsored by the big companies, which is pretty nice, but it's probably not for me. I don't like these kinds of things too much. Maybe I'm just socially too awkward. For some companies, I feel that they're just there to recruit, and they don't really want to talk about what they do too much. An indication of this would be a company where basically all of the reps at the booth are recruiters, non-technical recruiters, who basically just tell you what you can do career-wise and not really what the company does as a whole. I never really know what to talk about then, because I feel like most people are interested in and drawn towards interesting work, and if that comes with good working conditions, that's a plus, but I don't feel that for many people that is the most important thing. I could be wrong, and it's probably good that for some people it is, because otherwise everyone would take the jobs I like. At these companies, even if there is an engineer, they usually won't talk much about what they do: oh, it's a company secret, and so on.
The funniest one was actually the NSA. Talking to the NSA was kind of painful, because you ask them, so what do you do? And they're like, yeah, machine learning. What I want to know as a researcher is: is there anything I could do there that I couldn't do anywhere else? Are there any unique problems the NSA faces that actually demand new research, new machine learning methods, or some kind of change? So I ask this, and they're like, yes, there are problems like this. And you ask, which problems? And they're like, yeah, there are problems; we can't tell you. So everything's basically off limits. I made it a game to ask them ever more specific questions and watch them go, oh, this is classified. So yeah, if you're here, definitely check them out; it's just fun to talk to them. I feel most companies here are really interesting. I don't know more than half of them, so just going up and asking them what they do gives you an overview of the landscape of what's currently needed in machine learning research. I think that's really useful, because as an academic I tend to be very disconnected from the industry side of things and from what people actually need or want in practice. Talking to all these companies is really helpful for getting an overview of that. That said, I know some people are much more successful than me at talking to companies at conferences; I'm definitely not the best at this. So if you have a better strategy, let me know. I'm pretty happy so far. All right, that was that. See ya.
[ { "start": 0, "end": 11.76, "text": " All right, I quickly want to talk about kind of interaction with corporation company reps" }, { "start": 11.76, "end": 18.04, "text": " at these conferences, because to me it's still a bit of a secret or a bit of a not really" }, { "start": 18.04, "end": 20.64, "text": " clear of what to do." }, { "start": 20.64, "end": 26.92, "text": " There's very different kinds of companies at these conferences, so some companies I" }, { "start": 26.92, "end": 35, "text": " feel are there to basically show off their technology, kind of wanting to use it." }, { "start": 35, "end": 44.88, "text": " One example is for example Graphcore, the kind of new kid on the block for AI hardware" }, { "start": 44.88, "end": 51.2, "text": " in that they claim they have a chip specifically designed for the types of operations that" }, { "start": 51.2, "end": 54.760000000000005, "text": " machine learning applications do." }, { "start": 54.76, "end": 64.16, "text": " So even more specialized than a GPU, and also they claim they are faster for equivalent" }, { "start": 64.16, "end": 70, "text": " kind of money spending than an Nvidia GPU, like a classic GPU." }, { "start": 70, "end": 74.44, "text": " So basically you get much more bang for the buck." }, { "start": 74.44, "end": 80.72, "text": " For now they just offer a cloud solution, I believe, and they're going to sell their" }, { "start": 80.72, "end": 84.2, "text": " cards through Dell." }, { "start": 84.2, "end": 90.2, "text": " The way it works is they have kind of a low level compiler that will compile your model" }, { "start": 90.2, "end": 98.2, "text": " to these cards, and for now you can interact with it through C++, and then TensorFlow will" }, { "start": 98.2, "end": 100.32000000000001, "text": " come later, something like this." }, { "start": 100.32000000000001, "end": 108.96000000000001, "text": " The thing about their card is that they have an extremely large memory right next to the" }, { "start": 108.96, "end": 120.11999999999999, "text": " compute unit, this would be kind of your traditional level one cache." }, { "start": 120.11999999999999, "end": 125.08, "text": " That means that you get much faster access technically to your local variables, but then" }, { "start": 125.08, "end": 132.51999999999998, "text": " they don't have any kind of RAM, which means their entire card only has somewhat like 300" }, { "start": 132.51999999999998, "end": 137.72, "text": " megabytes of memory, but they claim they can just basically distribute, if you have a large" }, { "start": 137.72, "end": 145.6, "text": " model you can distribute that over many cards, and then you'll get basically the speed up" }, { "start": 145.6, "end": 152.2, "text": " of the cards without having to sacrifice a model size." }, { "start": 152.2, "end": 161.24, "text": " Another company that shows off really cool technology is a company that does LIDAR, and" }, { "start": 161.24, "end": 170.60000000000002, "text": " I forget the name right now, but when I try to look it up, they do a LIDAR sensor basically" }, { "start": 170.60000000000002, "end": 179.56, "text": " that is super tiny, and it costs a fraction of like a traditional LIDAR sensor." }, { "start": 179.56, "end": 188.20000000000002, "text": " So I think they said theirs cost about $12,000, and it's really tiny, and has a couple of" }, { "start": 188.2, "end": 192.16, "text": " advantages compared to traditional sensors." 
}, { "start": 192.16, "end": 197.95999999999998, "text": " As far as I understand, their lasers are mounted on the same chip, so they always point in" }, { "start": 197.95999999999998, "end": 205.6, "text": " the same direction, which reduces a lot of inaccuracies." }, { "start": 205.6, "end": 210.83999999999997, "text": " I guess people would be interested in that, for self-driving cars and so on." }, { "start": 210.83999999999997, "end": 215.67999999999998, "text": " These are kind of the hardware demonstrations that I've seen." }, { "start": 215.68, "end": 223.76000000000002, "text": " Then there's other things, like there is a wellness center where you can get a massage," }, { "start": 223.76000000000002, "end": 232.32, "text": " which is sponsored by the big companies, which is pretty nice, but I'm probably too much." }, { "start": 232.32, "end": 236.88, "text": " I don't like these kinds of things too much." }, { "start": 236.88, "end": 241.24, "text": " Maybe I'm just socially too awkward." }, { "start": 241.24, "end": 247.76000000000002, "text": " For some companies, I feel that they're just there to recruit, and they don't really want" }, { "start": 247.76000000000002, "end": 250.84, "text": " to talk about what they do too much." }, { "start": 250.84, "end": 259.76, "text": " So an indication of this would be a company where basically all of the reps at the booth" }, { "start": 259.76, "end": 267.76, "text": " are recruiters, so non-technical recruiters, that basically just kind of tell you what" }, { "start": 267.76, "end": 276.12, "text": " you can do as a career and not really what the company does as a whole." }, { "start": 276.12, "end": 284.24, "text": " I never really know what to talk about then, because I feel like most people are interested" }, { "start": 284.24, "end": 290, "text": " and drawn towards interesting work, and if that comes with good working conditions, then" }, { "start": 290, "end": 296.24, "text": " that's a plus, but I don't feel for many people that that is the most important thing." }, { "start": 296.24, "end": 302.40000000000003, "text": " So I could be wrong, and probably it's good that for some people it is, because otherwise" }, { "start": 302.40000000000003, "end": 307.8, "text": " everyone would take my jobs, the ones that I like." }, { "start": 307.8, "end": 312.32, "text": " These companies will usually, if there is an engineer, they will not talk about too" }, { "start": 312.32, "end": 315.48, "text": " much what they do, like, oh, it's company secret and so on." }, { "start": 315.48, "end": 319.32, "text": " So the funniest one was actually the NSA." }, { "start": 319.32, "end": 327.08, "text": " Talking to the NSA was kind of painful because you kind of ask them, so what do you do?" }, { "start": 327.08, "end": 331.84, "text": " And they're like, yeah, machine learning." }, { "start": 331.84, "end": 337.8, "text": " Because what I want to know as a researcher is, is there anything I could do there that" }, { "start": 337.8, "end": 339.88, "text": " I couldn't do anywhere else?" }, { "start": 339.88, "end": 348.44, "text": " So is there any unique problems that the NSA faces that actually demand new research, like" }, { "start": 348.44, "end": 354.24, "text": " demand new machine learning methods or some kind of change?" }, { "start": 354.24, "end": 358.88, "text": " So I ask this, and they're like, yes, there are problems like this." }, { "start": 358.88, "end": 360.88, "text": " And you ask, like, which problems?" 
}, { "start": 360.88, "end": 363.8, "text": " And they're like, yeah, there are problems." }, { "start": 363.8, "end": 364.8, "text": " We can't tell you." }, { "start": 364.8, "end": 366.8, "text": " So everything's basically whatever." }, { "start": 366.8, "end": 373.8, "text": " So I made it a game to ask them more specific questions and watch them, like, oh, this is" }, { "start": 373.8, "end": 374.8, "text": " classified." }, { "start": 374.8, "end": 379.16, "text": " So yeah, if you're here, definitely check them out." }, { "start": 379.16, "end": 380.16, "text": " It's fun." }, { "start": 380.16, "end": 384.08, "text": " It's just fun to talk to them." }, { "start": 384.08, "end": 389.84000000000003, "text": " Yeah, I feel to most companies, they're really interesting." }, { "start": 389.84000000000003, "end": 391.92, "text": " I don't know more than half of them." }, { "start": 391.92, "end": 398.68, "text": " So just going up, ask them what they do, kind of just get an overview over the landscape" }, { "start": 398.68, "end": 401.64, "text": " of what's needed currently in machine learning research." }, { "start": 401.64, "end": 409.88, "text": " I think that's really useful, because as an academic, I tend to be very disconnected from" }, { "start": 409.88, "end": 417.28, "text": " the industry side of things and from what people actually need or want in practice." }, { "start": 417.28, "end": 422.03999999999996, "text": " So talking to all these companies is really helpful to get an overview over that." }, { "start": 422.03999999999996, "end": 428.76, "text": " Yeah, so but if you know a better way, I know some people are much more successful than" }, { "start": 428.76, "end": 433.08, "text": " me talking to companies at conferences." }, { "start": 433.08, "end": 435.08, "text": " I'm definitely not the best at this." }, { "start": 435.08, "end": 439.28, "text": " And yeah, if you have a better strategy, let me know." }, { "start": 439.28, "end": 442.03999999999996, "text": " So I'm pretty happy so far." }, { "start": 442.03999999999996, "end": 443.03999999999996, "text": " All right." }, { "start": 443.03999999999996, "end": 444.03999999999996, "text": " That was that." }, { "start": 444.04, "end": 459.04, "text": " See ya." } ]
Jqvb7jp4Nm8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Addendum for Supermasks in Superposition: A Closer Look (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "supsup", "supermasks", "lottery ticket", "lottery ticket hypothesis", "gradient", "entropy", "surplus", "superfluous neurons", "lifelong learning", "multitask learning", "catastrophic forgetting", "continuous learning", "binary mask", "random network", "optimization", "hopfield network", "gradient descent", "superposition" ]
I take a closer look at "Supermasks in Superposition" after I've already done a video on it. Specifically, I look at: 1. The intuition and theoretical justification behind the G objective, 2. Whether Supermasks and Superposition can be viewed as two distinct ideas and 3. The Paper's Broader Impact Statement. OUTLINE: 0:00 - Intro & Overview 2:00 - SupSup Recap 4:00 - In-Depth Analysis of the G Objective 20:30 - Superposition without Supermasks 25:40 - Broader Impact Statement 36:40 - Conclusion 37:20 - Live Coding Part 1 on SupSup: https://youtu.be/3jT1qJ8ETzk My Code: https://colab.research.google.com/drive/1bEcppdN6qZRpEFplIiv41ZI3vDwDjcvC?usp=sharing Paper: https://arxiv.org/abs/2006.14769 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher
Hi there! Today we'll look at Supermasks in Superposition again. So this is part two on this paper by Mitchell Wortsman, Vivek Ramanujan and colleagues, and here's the reason why there's a part two. After yesterday's video on this paper I couldn't sleep, because I really felt that I had left out some important aspects that I wanted to touch on during the video. Sometimes during videos I look at the clock and realize, oh crap, the video is already like an hour long, and I know people are watching on 2x speed anyway, but still, it's too long and I need to wrap it up really soon. And what I felt were pretty important messages about this paper got lost. So specifically I want to address three different things. First of all, they have a kind of more rigorous analysis of what their modified G objective does, and I also want to give some intuition there, because I felt I really hadn't done a good job at that. The second part is the two different ideas right here, the supermasks and the superposition: my opinion is that these are two separate things that really have nothing to do with each other, and I think that didn't come through in the last video. And the third one is the broader impact statement of this paper, which I usually kind of gloss over, but here there is an important point to it, so we'll get to that. Alright, so again, not a new paper today, I realize this, but I think it's worth diving deeper into this paper. It's a very cool paper, so don't get me wrong right here; I just feel I mostly hadn't done a good job of explaining it. Like, literally lying awake over it. Okay, so let's go to the first point. If you haven't seen the first video: Supermasks in Superposition does lifelong learning, the task where you get a bunch of tasks in sequence and learn them one after the other, and the goal is to not forget old tasks once you learn new ones. This model achieves that by building one of these supermasks for each task, applied to the same randomly initialized base neural network each time; by keeping the supermask around, you won't forget the task. At inference time, if you're given the task ID, you just retrieve the mask. If you're not given it, you can do this superposition trick where you apply all the masks in superposition and then look at the gradient of an entropy function in order to decide which task reduces the entropy the most, so which task is the most certain about a particular data point, and you infer that that's the task you're going to go with. Now, instead of the entropy, which would be well reasoned, they have this other objective they call G, and G, it's really strange, looks at the superfluous neurons. They add these superfluous output neurons, the s-neurons, and the G objective only looks at the s-neurons in order to decide whether or not it has the correct task; it's basically just the log-sum-exp of the s-neurons.
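Just to pin that down, here is a minimal sketch of what such an objective could look like; I'm assuming the output layer has been widened by some number of superfluous logits, and the names here are mine, not the paper's code:

import torch

def g_objective(logits: torch.Tensor, num_superfluous: int) -> torch.Tensor:
    # Assumption: the last `num_superfluous` entries of each logit vector
    # are the superfluous s-neurons; G is their log-sum-exp, averaged
    # over the batch so we get a single scalar to differentiate.
    s_logits = logits[..., -num_superfluous:]
    return torch.logsumexp(s_logits, dim=-1).mean()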
We had some intuition about them all being small, about them acting like outlier detectors, but there is an entire chapter in the appendix where the authors do a more in-depth theoretical analysis of that, which they didn't have to do, so I really enjoyed reading it, and it gave me a better intuition of what this G objective does. They say the aim is not to formally prove properties of the algorithm; rather, they hope that a more mathematical language may prove useful in extending intuition. Again, that's pretty cool. So they start off by saying your neural network is basically this Phi right here, the feature extractor, together with W, the last layer's weights, which compute your logits. So y is not going to be your class; y is going to be your logits, and p is going to be the probability vector over your classes, which, if you calculate it via a softmax, is p = softmax(y) with y = W Phi(x). If you have a mask, then, at least in the last layer, you can write the logits as y = (M ⊙ W) Phi(x): you multiply the mask element-wise with the last layer's weights, and that gives you your logits. They initialize the weights with no bias term and set each weight to plus or minus a constant; it's not really necessary to do that, but it makes the analysis a bit easier. If you have these masks in superposition, you add all of the masks, each with its respective alpha weighting factor, then multiply by the weights, so y = ((Σ_i α_i M_i) ⊙ W) Phi(x). Note that this doesn't only have to be the last layer's weights; you can view it as any weights of the neural network if you formulate the Phi correctly, so don't think they only apply the mask to the last layer. They apply the mask to the entire network. Alright, now the important part is what happens if we look at the derivative of G with respect to one of the alphas and take the maximum negative derivative, G being that mysterious function that only looks at the superfluous neurons. They construct this G by principle. What they say is: we want a function G that mimics the supervised loss, a function that is essentially equal to the supervised loss if we had the task ID. And that's pretty cool, because for the supervised loss you need all the information, you need the label, you need the task ID, so the supervised loss is unavailable; but we can build a function G that in its gradient mimics the supervised loss. So they go about constructing it. The first lemma: it's possible to construct a function G such that its gradient matches the gradient of the supervised loss for all s-neurons, the superfluous neurons. Specifically, we want the gradient with respect to the logits to equal the gradient of the supervised loss with respect to the logits for all the superfluous neurons, and zero otherwise; and if the gradients with respect to the logits are equal, then the gradients to all the rest of the network are equal too, because the gradient to the rest of the network flows through the logits. The "zero otherwise" part is easy: in math you simply set it to zero, and in the actual code you can achieve it with a detach, where m indicates the superfluous neurons. The superfluous logits are kept as they are, and all the other logits are detached, so no gradient flows through them. This is the property that we only look at the superfluous neurons.
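In code, that construction could look roughly like this; this is my sketch of the idea, not the paper's literal snippet, and m is assumed to be a boolean mask that is True exactly on the superfluous neurons:

import torch

def g_masked_gradient(logits: torch.Tensor, m: torch.Tensor) -> torch.Tensor:
    # The non-superfluous logits still contribute to the value, but they
    # are detached, so their gradient is exactly zero; gradient flows
    # only through the superfluous logits.
    gated = torch.where(m, logits, logits.detach())
    return torch.logsumexp(gated, dim=-1).mean()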
Now we're going to show that the gradient is indeed equal. If you had the supervised loss, which means if you had the label, this would be your cross-entropy loss, and it divides into one part where you need the label and one part where you don't need the label. Now you can pretty much say: look, the label is certainly not going to be one of the superfluous neurons, because the superfluous neurons are superfluous, they are never the correct neuron. So the label term is always zero for the neurons we look at; wherever that part of the gradient flows, it never flows to a superfluous neuron. That means the gradient of any superfluous neuron's logit is just the softmax term right here, and that's exactly why they build the function G the way they do: the function G, if you differentiate it, has that same gradient as the supervised loss for the superfluous neurons. So it's sort of magic, but it's not magic.
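In formulas, the point is the following identity (my rendering of the argument, with $p = \operatorname{softmax}(y)$ and $\mathcal{S}$ the set of superfluous neurons):

\[
\frac{\partial \mathcal{L}_{\text{sup}}}{\partial y_s} \;=\; p_s - \mathbf{1}[s = \text{label}] \;=\; p_s
\qquad \text{for } s \in \mathcal{S},
\]

since the label is never a superfluous neuron, while for the log-sum-exp

\[
\frac{\partial}{\partial y_s} \log \sum_{v} e^{y_v} \;=\; p_s ,
\]

so the partially detached log-sum-exp has exactly the supervised gradient on the s-neurons and zero gradient everywhere else.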
Then they need two more assumptions to get the following properties, because now we want G to identify the correct task. We've already constructed G; now we want to show that if we take the gradient with respect to the alpha of a wrong task, a task that is not the task of the particular data point that goes into computing G, then the negative gradient is probably lower than zero; however, if we differentiate with respect to the alpha of the correct task, then we get a negative gradient that's higher than zero. So we're now going to prove that this, with high probability, really allows us to distinguish the correct task from the wrong tasks, and for that we need two assumptions. Assumption one: we assume that the mask learned on task i is independent of the data from task j; if the data is from task j, then these are independent random variables. It sort of means that the tasks themselves are kind of independent. It's not exactly the same requirement, but in the case of permuted MNIST you can think of it as given, except maybe if you consider things like the overall frequency of brightness; for genuinely independent tasks I think it holds. It means that if the data is from task j, then the features and the mask from task i are independent variables. The second assumption is that a negative weight and a positive weight are equally likely to be masked out. Again, with some regularity, this is certainly going to be given in a randomly initialized neural network. Note also that the expected features are zero, which will be the case for zero-mean random features: before, I said this is your randomly initialized neural network that you then mask, and if it is randomly initialized, you can make a case that the expected features will be zero. It doesn't need to be the case, but you can construct it such that it is. So if you have those two things, you can prove the following: if the data x comes from task j, then when you differentiate by an alpha that's not of task j, you get a number that's smaller than zero in expectation. The crucial part is that you reframe this gradient, reframe, reframe, reframe, and what you'll see is that it comes out as a sum in which each element is greater than or equal to zero, which means the whole thing is greater than or equal to zero, which means the negative of it is smaller than or equal to zero, by lemma H1. So now we're going to look at lemma H1 to get an intuition of what's going on. Lemma H1 says: if j is the true task and i is not equal to j, then this quantity here is greater than zero. Alright, I restarted my tablet and we are back. So what's the intuition behind why this quantity would be greater than or equal to zero? Honestly, to make it a bit easier, I first want to look at the case where i equals j, so where j is the true task and i equals j, and think about the opposite question: why the quantity should then be smaller than or equal to zero. So consider this: this is a feature of the network at unit u, and the entry E_uv connects it to the mask at position v, and the mask at that position uv is either zero or one, depending on the training. The ξ right here comes from the initialization, but the mask is zero or one depending on whether this whole term contributes positively to the task or not. The secret of why we can make a claim about the sign at all is that the mask can only be zero or one; it cannot be negative one. If the mask is zero, then obviously the term is zero. However, if the mask is one, what does that mean? It means that this entire feature, let's call it f, is positively contributing to this particular neuron: adding more of that feature makes that logit go up. So if the mask is one after training, the feature positively contributes to the task. Now look at the gradient of the function with respect to the logit, where the function basically just measures how high these superfluous logits are. Why do we find a negative interaction there? Because if you forward-pass the network and this particular feature is important, and you then look at the loss G and backward-pass through the logits, a gradient smaller than zero means there is a negative interaction: making this feature higher makes this G function go lower. And that is the case for the correct task, because if this is the correct task and the mask is learned adequately, the mask should assign a low weight to the superfluous neurons whenever the input features are of that task. So it makes sense that this would be a negative number: if the mask deems the feature important in a positive sense, we want that when the feature goes up, G goes down, and that is exactly the negative interaction; the negative sign comes from this factor being negative. I hope this sort of makes sense. If the mask is one, the mask basically says: if that feature goes up, the loss goes down. Now G is a measure of the superfluous neurons, and the superfluous neurons should be small if the loss is small. So if the data is really from the task and this feature is really useful, then increasing the feature should make the G function go down, and therefore this product is most likely negative.
And the contrary case is analogous: if this data is not of this task, the mask can again be either 0 or 1. If it's 0, the quantity is 0. If it's 1, then, because i is not the correct task, this feature is good for a different task: the mask of that different task says the feature is good, and we have no reason to believe that more of it would decrease the loss of this particular data point under this task. So it's the inverse reasoning. If you look at the actual derivation, it's fairly long and goes over the cases of the interactions between the initialization and the mask, since the initialization can be positive or negative. I think the intuition is that the superfluous neurons react differently to a data point of the trained task, because they have been made to decrease for that task and that particular mask, than they react when the data point doesn't match the mask: when the data point doesn't match the mask, there is no reason for the logits of the superfluous neurons to be low, and when it does match, there is ample reason for them to be low. I hope that makes sense; it's a bit more of an intuition, but if you really want to dig into it, look at the derivation. Okay, the second point is that the masks and the superposition don't really have anything to do with each other. I've hinted at this throughout the video, remember, these tasks are super easy, yada yada yada, so let me make it clear. The supermasks are simply a way to train a neural network in a crude way. I don't like the distinction between mask and network that much, because ultimately what you're doing is simply training a neural network in a kind of weird way. The fact that you always use the same underlying random network doesn't really matter; what you do in supermask training is provide a severely overparameterized network, and the mask simply gets to choose which weights to keep, rather than you getting to adjust the weights. If you adjust the weights, you usually get more accurate results than with the mask; it's sort of like a quantized neural network that you train. That's the supermask part, and again, I don't think it's important that the underlying network is always the same; the only advantage is that it saves space, because these masks are very small. The superposition, on the other hand, this idea that you overlay all of the masks together and then look at which of the mixing factors the gradient of the entropy pulls on the most, that's a different idea. And the question is: isn't that independent? Does it really depend on the masks or not? The hypothesis would be: if I simply train, say, three different neural networks for three different tasks, could I not do the same superposition trick? Could I not just add all of them, each with its respective alpha, look at the entropy, calculate the gradient of the entropy with respect to each of the alphas, and then decide which task it is? You don't need masks; simply mix neural networks in superposition.
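Since this is exactly what I try in the live coding at the end, here is the gist as a sketch. All names are mine, a single weight matrix per task stands in for a whole network, and in the real experiment every layer gets mixed the same way:

import torch
import torch.nn.functional as F

def infer_task(x, task_weights):
    # task_weights: one (num_classes, d) matrix per separately trained model
    stacked = torch.stack(task_weights)                 # (num_tasks, c, d)
    alphas = torch.full((len(task_weights),), 1.0 / len(task_weights),
                        requires_grad=True)
    mixed = torch.einsum("t,tcd->cd", alphas, stacked)  # superimposed weights
    probs = F.softmax(x @ mixed.t(), dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
    entropy.backward()
    # Increasing the true task's alpha should reduce the entropy the most,
    # so the most negative gradient marks the predicted task.
    return int(torch.argmin(alphas.grad))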
So I did it. I actually tried it, and their code is available, so big props for that; it takes very few changes. I'm going to append my live coding of this at the end of this video, so if you're interested in watching that, you can do so. The outcome: I did this super quickly and probably initialized the networks wrongly and all, but if I train the masks, you get to about 92 percent accuracy on each of the tasks and also on average; if I train the actual neural networks, I get a higher accuracy, 93-something. The exact number doesn't matter, it's just higher. So that's hypothesis one: training masks is just a way of training neural networks. The fact that mask training and network training land that close together is, I think, a testament to how easy these tasks are, to how easy MNIST is. I'll also hypothesize that as the tasks get harder, and I don't mean 10-class ImageNet, I mean thousand-class ImageNet, these masks are going to degrade severely versus training the actual neural network. I might be wrong; you can overparameterize really heavily and they may still work. In any case, I trained these neural networks, they reached higher accuracy, and then I did the exact same thing: I laid them in superposition to determine what task it is, and I achieved the exact same result. In their example they have one hundred percent task classification accuracy, and I reached exactly that; the code worked. I'm not going to try to scale this up to 250 or 2500 tasks, but I'm going to assume that with tuning and so on it would work about equally well. You could make an argument that the masks, being sparser, might be differentiated from each other more accurately; I'm not sure, maybe, but it's not a qualitative difference. So these two things really are two separate ideas that find their way together in this paper but ultimately have not much to do with each other. At least that's what I can tell; I might be wrong here, and I might be wrong with respect to their G objective and whatnot, but I think these are two cool ideas that can be applied independently. The last thing I want to look at is their broader impact statement, and there is a reason I want to, even though I usually only kind of skim these. This is fundamental machine learning research, we work on architectures, and the particular multitask learning problem isn't really important, as long as we have tasks that are uncorrelated and of the same hardness. I've also made the point that it's really important for this method that the tasks are equally hard; that plays a role right here. And they do describe some of this in their conclusion: a limitation they observed has to do with task identity inference when models are not well calibrated, models that are overly confident for the wrong task. So to infer the correct task, you look at the entropy of the models for the tasks, which means you select the model that is the most sure about the data point. This only works if the tasks are equally hard: if one task is much, much harder than another task, the model of the easier task is always going to say, well, I'm really confident about this one,
because its task is just easier; a neural network trained on an easier task is generally more confident, and you're going to misclassify a lot of the tasks. So what does this have to do with the broader impact statement? In the broader impact statement they say: a goal of continual learning is to solve many tasks with a single model; however, it is not exactly clear what qualifies as a single model, so a concrete objective has become to learn many tasks as efficiently as possible; we believe SupSup is a useful step in this direction; however, there are consequences to more efficient models, both positive and negative. This is sort of what the community does. There are three patterns I've seen so far in broader impact statements. First, some people say this is not applicable to us, which I agree with for most fundamental research: the broader impact statement is supposed to describe how this particular method will influence broader society, and "not applicable" is completely valid for most of these research papers, because you can use almost any method to do good or to do bad. The second pattern is to exchange generic statements about how the work can be used for good and for bad. Usually you can't relate that to the particular method in the paper; your method is, I don't know, a faster convergence rate for SGD, so what you do is go one level up: optimization can be used for good and for bad. That's still a bit vague, so you go up further: optimization enables machine learning, and machine learning can be used for good and for bad, for example face recognition and things like this. You just go up the levels, and that's essentially what they do here; it's what most people have defaulted to. Our model means one can train more efficient models, and then they simply highlight what more efficient models can do: efficient models require less compute, a model might run on an end device, and if models are more efficient, then large-scale research is not limited to wealthier institutions. By the way, I believe the broader impact statement should be about the impact on society and not really about the research community itself, so this part is a bit shaky with respect to what a broader impact statement should be; and this is not just my opinion, I'm trying to reflect everything I've read in the guidance on what these statements should be. There is also pattern three, which is to simply tell me more about your paper in the broader impact statement, which I guess is the smart method, because the broader impact statement can come before the references: it's in the main part, and reviewers are required to read it, unlike the appendix. So I guess smart authors will cloak more information about their model in the broader impact statement; whether that's smart is a different discussion. But here it's already the default: people simply go a level up, and up, and up, until they can say something generic. The statement continues: we will also highlight and discuss the negative consequences of models which can
efficiently learn many tasks, and of efficient models in general. When models are more efficient, they're also more available and less subject to regulation and study. For instance, a high-impact model released by an institution will hopefully be accompanied by a model card analyzing the bias and intended use of the model; by contrast, if anyone is able to train a powerful model, this may no longer be the case, resulting in a proliferation of models with harmful biases or intended uses. Taking the United States for instance, bias can be harmful as models show disproportionately more errors for already marginalized groups, furthering existing, deeply rooted structural racism. Well, this is basically a statement about technology in general. So why do I pick on this particular broader impact statement? They even reference the Gender Shades paper here, where people looked at commercial APIs for face recognition; I think that's the paper, yes, Gender Shades. They took face recognizers and divided people up by, I think, gender and race, so they built groups along those two axes; I've only skimmed the paper, and I find the choice of exactly these two axes somewhat arbitrary, but okay, you can do that. And they discovered that these commercial APIs have different accuracy for the different groups; the point being that if these commercial APIs are offered for all humans, they should work equally well for all humans. Now you maybe see what this has to do with this paper: this paper is in the business of multitask learning, and it is very viable to frame exactly this problem as multitask learning. If you frame face recognition on different groups of people as a multitask learning problem, you have group one, group two, group three; you can build a good model for each of the groups, and at inference time you're given an image and you first infer which group it is from, and then take the appropriate classifier. That would be a hypothetical classifier built on this method. Now, what do we know about this method? It fails if the tasks aren't equally hard. And specifically, if for one group, let's say group three, the task is way harder, and one of the main reasons for that in practice is that the data sets are not equally balanced: if you have less data for a group, the task becomes de facto harder, and the model is less sure about the task. Which means it's a double whammy: not only is that model itself less accurate, but an input data point from a person of group three is also less likely to be routed to the correct model to begin with. I've had my share of comments on the video I made, and I still maintain that societal bias can come about through data sets; but for all the people saying there are models that exaggerate existing biases, this would be the paper where that applies. If there is ever any applicability of these broader impact statement guidelines, this is it: this right here is an actual system where, if I have different classifiers and I combine them with this method, it will double punish the
classifier that is less sure and less accurate, because that is also going to be the one with the higher entropy and therefore not selected as often when I feed in a data point of that particular task. And look, I'm not criticizing the method here; by all means, this is a cool method, and you can recognize that this happens and try to calibrate accordingly. But if there was ever a straight ball for a broader impact statement, this is it. And I'm not saying these authors left it out for some bad reason. I believe that, look, it's been not even half a year since we started with these general broader impact statements, and everybody is already defaulting to simply saying technology good, technology bad; people aren't even thinking. And this is one of the reasons why I find these broader impact statements to be not that good an idea: there is a default answer, and people just put it there, even when there is an actually, immensely obvious thing to say, one where they even cited the basis for it. So that's sort of my take on this. Again, I enjoyed this paper. The code is available, everything is good about this paper, and the fact that these are, in my view, two separate ideas is fine: they're combined in a cool way, they're analyzed formally in theory, there's intuition given, all good. So don't get me wrong, this is not trashing the paper; I just felt I had something more to say. And I think that was it, so yeah, I'll see you next time with the new paper. Okay, so our goal here is going to be to change this code to not use masks as mixtures, but to actually use neural networks with real weights as mixtures, in superposition with each other. So what we're going to do is train the different neural networks and then use this superposition trick to figure out which task a data point came from. Let's have a look at the code. There's a bunch of helper code, and if we go down through everything, you'll see this is the permuted MNIST data set, so each task is basically a random permutation of MNIST. You execute this here, and then you train the model; right now it's for five tasks, but I guess that's going to be enough for now. If we get a good signal here, I guess it's a matter of engineering and plumbing and tuning until you get it up to whatever, 200 or 2000 tasks, though I might be wrong there. So this is training; I had a short look at the code before, but I haven't actually tried this yet. The model is built here: you see, this is a multitask fully connected network, which has these different layers, and it's built from these MultitaskMaskLinear modules. The MultitaskMaskLinear modules are defined right here: it's basically a linear model, as you can see, derived from a linear module, and it has a parameter called num_tasks, and then it has a parameter scores, which I guess are these masks, and the scores, I'm going to guess, are always multiplied by the weights in the forward pass. You can see, in forward you get the weights from the alphas; this is the superimposed case. So if we know the task ID, down here, we get this subnet and multiply it with the weights; if we don't know the task ID, we want to use these alphas.
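To keep the narration straight, here is a rough paraphrase of the structure being described; this is emphatically my reconstruction for illustration, not the repository's actual code:

import torch
import torch.nn as nn

class MultitaskMaskLinear(nn.Linear):
    """A linear layer with a fixed random weight and one trainable
    score tensor (mask logits) per task."""

    def __init__(self, in_features, out_features, num_tasks):
        super().__init__(in_features, out_features, bias=False)
        self.weight.requires_grad_(False)  # the backbone stays random
        self.scores = nn.ParameterList([
            nn.Parameter(0.01 * torch.randn(out_features, in_features))
            for _ in range(num_tasks)
        ])
        self.task = -1      # -1 means "task unknown": mix with the alphas
        self.alphas = None  # set from outside before no-task-ID inference

    def forward(self, x):
        if self.task >= 0:
            # Task ID known: binarize that task's scores into a supermask.
            # (The real code routes this through a straight-through
            # autograd function; see the sketch further down.)
            subnet = (self.scores[self.task] >= 0).float()
        else:
            stacked = torch.stack([(s >= 0).float() for s in self.scores])
            subnet = torch.einsum("t,toi->oi", self.alphas, stacked)
        return x @ (self.weight * subnet).t()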
The alphas are going to be one over the number of tasks at the beginning; we then multiply each of the alphas with the stacked subnets, and with that we get this mixed subnet mask. So we need to know what this self.stacked is: self.stacked is produced right here in this cache_masks step, which simply stacks this get_subnet for all of the tasks. Our plan is going to be that this subnet is the actual weights of the neural network, not just the mask, and then we don't need to multiply it with the weight; we can honestly just forget about the weight and train the subnet itself. For the subnet, as you can see here, you have this get_subnet thing, and that's an autograd function, which basically means that in the forward pass you discretize the scores, and in the backward pass it acts as a straight-through estimator. So our first task is training, and this here should be done... my laptop has stopped breathing, so we've trained five tasks and can now run inference. This is when the task is given: real quick, you can see task one 92 percent, 92 percent, 92 percent, 92 percent, so we have an overall performance of 92.44 percent. When the task is not given, we have two things to evaluate: how good we are overall, and whether we get the tasks correct, the tasks being a prerequisite for the predictions, and we have a hundred percent task inference accuracy. We could evaluate the rest here, but you can already see from the output from last time that there is no difference to the performance when the task is given, since it's always able to infer the task. We want to check out the same thing with real weights. So first of all we want to change this get_subnet; this is where the scores are discretized, and given that these scores are going to end up being our actual weights, we don't do that anymore, we simply return the scores. Keeping the function is pretty pointless now, but we'll keep it to stay as close as possible to the original (I'll put a small code sketch of this change below). Now, mask_init, this is where we initialize the mask; right now this is Kaiming uniform with some extras, but since we want to train an actual neural network, we probably want it initialized as we know it. So let's see what other initialization functions there are, nn.init dot... what do we have here, what's usual? I don't even know; Xavier normal, that sounds about right, that sounds about right. Alright, so the scores, yeah, let's try this; this could break everything, because if you initialize wrongly, you get dumb results. Okay, signed constant, yada yada yada, where is that used? Okay, that's also initializing something; we calculate the gain and then... okay, this doesn't seem important, we'll just keep it, why not. Oh, and this is for the weight anyway; we won't use the weight of this layer at all, we'll just use our own weights. So here we have the stacked scores; that's all good, I'm pretty happy with that, I'm pretty happy with this mask_init that makes our parameters, and these are going to be our different neural networks that we train. This all looks good, the alphas look good. Now the only thing we want to do, honestly, is to have not the weight times the subnet here, but the subnet as such. Is this it? Do we now train actual neural networks? I have my doubts, honestly, but this should be it. Hmm. Yeah, let's just try it.
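For reference, the change amounts to something like this; again a sketch of the idea rather than the repo verbatim:

import torch

class GetSubnet(torch.autograd.Function):
    """Original behavior: binarize the scores in the forward pass and pass
    the gradient straight through in the backward pass, so a binary mask
    is what gets applied to the fixed random weights."""

    @staticmethod
    def forward(ctx, scores):
        return (scores >= 0).float()  # discretize: the supermask

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output            # straight-through estimator

class GetSubnetIdentity(torch.autograd.Function):
    """My modification: skip the discretization and return the raw scores,
    so the 'masks' become real-valued weights trained directly."""

    @staticmethod
    def forward(ctx, scores):
        return scores

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

# And in the layer's forward, instead of
#     w = self.weight * subnet   # binary mask times fixed random weight
# we use
#     w = subnet                 # the scores themselves are the weights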
We're going to get a mistake somewhere, like a crash... nope, nope, okay, alright, it's actually training. So, for real: these scores right here, what made them a mask is that we discretized them; we're not doing that anymore, so we're just training floats, and we're also not multiplying them by the weight, we're just using those floats directly, which means we're training, basically, a plain neural network. And then the bias: I was worried about the bias, but the bias is always zero, as you can see here, bias is always set to false. So we're training five different neural networks for five different tasks, and according to my hypothesis, these masked things are just a kind of crude, quantized way of training neural networks, and if my hypothesis is correct, this here is going to turn out probably even better than the masked thing. Okay, last task training right here. I'm starting to breathe again. Good laptop, fast laptop, very nice. Come on, come on... and we're done. So again, we have an average top-1 performance of 92-point... wait, is this even... oh no, I ran the old cell right here; that's the exact same number as last time, so we need to run inference again. If we're given the task ID, then we are at 93.9 percent, so we increased slightly, which might just be due to the fact that we initialized terribly. Okay, and what does it say about our task inference accuracy? Maybe there's some mask still set... set model task... the alphas... nope, no, we're good: task inference accuracy 100 percent. And with the task inference accuracy being 100 percent, I'm going to guess this here will give us the exact same number, the 93-point-something percent, so yeah, 93.9 percent. So I'm going to say right here that the supermasks and the superposition really are two separate ideas. The paper sounds cool with supermasks in superposition, but this inference using the superposition, and then the entropy to decide, is really one idea, and training different supermasks is another. The advantage of using supermasks is of course that the model is way smaller, so you can store it much more easily, but it's really a different thing that has nothing to do with the superposition. Alright, I'm going to guess this also works for 200 tasks and whatnot, the higher numbers of tasks. So I think that's it, and we're done here. Yeah.
[ { "start": 0, "end": 5.46, "text": " Hi there! Today we'll look at super masks in superposition again. So this is part" }, { "start": 5.46, "end": 10.14, "text": " two of this paper by Mitchell Wurtzman and Vivek Ramanujan and here's the" }, { "start": 10.14, "end": 16.54, "text": " reason why there's a part two. So after yesterday's video on this paper I" }, { "start": 16.54, "end": 21.98, "text": " couldn't sleep because I really felt that I had left out some important" }, { "start": 21.98, "end": 26.26, "text": " aspects that I wanted to touch on during the video. Now sometimes during videos I" }, { "start": 26.26, "end": 30.92, "text": " look at the clock and I realize like oh crap the video is already like an hour" }, { "start": 30.92, "end": 36.52, "text": " long and I know people are watching on 2x speed anyway but still it's like too" }, { "start": 36.52, "end": 40.760000000000005, "text": " long and I need to wrap it up really soon. And what I felt were pretty" }, { "start": 40.760000000000005, "end": 44.92, "text": " important messages about this paper got lost. So specifically I want to address" }, { "start": 44.92, "end": 50.760000000000005, "text": " three different things. First of all they have like a formal analysis, not a formal" }, { "start": 50.760000000000005, "end": 56.14, "text": " but a kind of more rigorous analysis of what their modified G objective does." }, { "start": 56.14, "end": 60.72, "text": " And I also want to give some intuition in that because I felt I really" }, { "start": 60.72, "end": 69.04, "text": " had done a good job at that. The second part is that the two different ideas" }, { "start": 69.04, "end": 76, "text": " right here being the super masks and the superposition and I think my opinion is" }, { "start": 76, "end": 80.52, "text": " sort of that these are two separate things and they really have nothing to" }, { "start": 80.52, "end": 84.64, "text": " do with each other and I think that didn't really come through last video." }, { "start": 84.64, "end": 89.72, "text": " And the third one being the broader impact statement of this paper which I" }, { "start": 89.72, "end": 96.04, "text": " you know I usually kind of gloss over it and go like haha but I hear there is an" }, { "start": 96.04, "end": 103.24000000000001, "text": " important point to it so yeah we'll get to that. Alright so again not a new paper" }, { "start": 103.24000000000001, "end": 107.32, "text": " today I realized this but I think it's worth kind of diving deeper into this" }, { "start": 107.32, "end": 113.48, "text": " paper which is a very cool paper you know so don't don't get me wrong right" }, { "start": 113.48, "end": 117.96000000000001, "text": " here and I feel mostly I haven't done a good part at explaining it." }, { "start": 117.96000000000001, "end": 126.48, "text": " Like literally lying awake. Okay so let's go to the first point. 
We had this so if" }, { "start": 126.48, "end": 130.68, "text": " you hadn't seen the first video super masks and superposition basically says" }, { "start": 130.68, "end": 135.64000000000001, "text": " that we want to do lifelong learning and we want to do lifelong learning by" }, { "start": 135.64000000000001, "end": 140.68, "text": " lifelong learning is the task where you have a bunch of tasks in sequence and" }, { "start": 140.68, "end": 145, "text": " you learn them in sequence so one after the other and basically the goal is to" }, { "start": 145, "end": 150.8, "text": " not forget tasks once you learn new tasks and this model does it by" }, { "start": 150.8, "end": 156.12, "text": " always building one of these super masks for each task that is applied to the" }, { "start": 156.12, "end": 162.64000000000001, "text": " same randomly initialized base neural network each time and you know by" }, { "start": 162.64000000000001, "end": 166.88, "text": " keeping the super mask around you won't forget the task and then at inference" }, { "start": 166.88, "end": 170.76, "text": " time if you're given the task and just retrieve the mask if you're not given" }, { "start": 170.76, "end": 175.44, "text": " the tasks you can do this superposition trick where you apply all the masks in a" }, { "start": 175.44, "end": 180.76, "text": " superposition and then you look at sort of the gradient of an entropy function" }, { "start": 180.76, "end": 185.84, "text": " in order to decide which task reduces the entropy the most so which task is" }, { "start": 185.84, "end": 192.2, "text": " the most certain about a particular data point and that you you kind of infer" }, { "start": 192.2, "end": 198.56, "text": " that that's the task you're gonna go with so instead of the entropy which is" }, { "start": 198.56, "end": 203.95999999999998, "text": " you know well reasoned they had this other objective they call a G and G" }, { "start": 203.95999999999998, "end": 210.72, "text": " basically looks at the it's really strange it looks at the superfluous" }, { "start": 210.72, "end": 214.48, "text": " neurons so they also add these superfluous neurons these S neurons" }, { "start": 214.48, "end": 223.44, "text": " right here and they they the G objective will only look at the S neurons in order" }, { "start": 223.44, "end": 228.07999999999998, "text": " to decide whether or not that's the correct task and it's basically just the" }, { "start": 228.07999999999998, "end": 232.83999999999997, "text": " log some X of the S neurons and we had some intuition about them being you know" }, { "start": 232.83999999999997, "end": 237.32, "text": " all small and so on them being like outlier detectors but there is an entire" }, { "start": 237.32, "end": 241.88, "text": " chapter in the appendix where the authors do a sort of more in-depth" }, { "start": 241.88, "end": 249.44, "text": " theoretical analysis of that which you know I it's not not necessary to do this" }, { "start": 249.44, "end": 255.92, "text": " for them so I really enjoy I enjoyed reading that and that gave me sort of" }, { "start": 255.92, "end": 263.8, "text": " the better intuition of what this G objective does so here they say the aim" }, { "start": 263.8, "end": 268.88, "text": " is not to formally prove properties of the algorithm rather we hope that a more" }, { "start": 268.88, "end": 275.56, "text": " mathematical language may prove useful in extending intuition okay so again" }, { "start": 275.56, "end": 279.4, "text": " that's that's pretty cool so 
they start off by saying you have your neural" }, { "start": 279.4, "end": 287.12, "text": " network is basically W and the the sorry the it's it's this Phi right here and" }, { "start": 287.12, "end": 293.6, "text": " the W are the last layers weights which compute your log it's so Y is going not" }, { "start": 293.6, "end": 297.6, "text": " to be your class but Y is going to be your log it's and P is going to be the" }, { "start": 297.6, "end": 303.84000000000003, "text": " probability vector over your class which if the you calculate this via a softmax" }, { "start": 303.84000000000003, "end": 311.88, "text": " is going to be the following expression right here if you have a mask right then" }, { "start": 311.88, "end": 318.24, "text": " at least in the last layer you can in you can infer it as this right here so" }, { "start": 318.24, "end": 324.16, "text": " you multiply the mask by the last these weights and then that gives you your" }, { "start": 324.16, "end": 331.04, "text": " log it's so they say here with they initialize the weights right here" }, { "start": 331.04, "end": 335.14000000000004, "text": " actually they initialize the they have no bias term and they initialize the" }, { "start": 335.14000000000004, "end": 340.32000000000005, "text": " weights by this constant so plus minus this constant it's not really necessary" }, { "start": 340.32000000000005, "end": 344.88, "text": " to do that but they do it right here it makes the analysis also a bit easier I" }, { "start": 344.88, "end": 349.84000000000003, "text": " guess it just works more well if you have these masks in superposition of" }, { "start": 349.84, "end": 354.2, "text": " course you want to add all of these masks with their respective alpha weighting" }, { "start": 354.2, "end": 363.76, "text": " factor then multiply by the weights and that gives you your log it's so note" }, { "start": 363.76, "end": 367.79999999999995, "text": " that this this doesn't necessarily only have to be the last layers weights" }, { "start": 367.79999999999995, "end": 373.67999999999995, "text": " right here you can view that as any sort of weights of the neural network if you" }, { "start": 373.67999999999995, "end": 378.76, "text": " formulate this Phi correctly so you don't think that they only apply the" }, { "start": 378.76, "end": 383.56, "text": " mask to the last layer they do apply the mask to the entire thing all right now" }, { "start": 383.56, "end": 390.15999999999997, "text": " the the important part here is what happens if we look at the derivative of" }, { "start": 390.15999999999997, "end": 396, "text": " G with respect to one of the alphas and take the maximum negative derivative of" }, { "start": 396, "end": 401.59999999999997, "text": " that G which is that mysterious function that only looks at the at the at the" }, { "start": 401.59999999999997, "end": 407.64, "text": " superfluous neurons so what they want they kind of construct this G by" }, { "start": 407.64, "end": 416.12, "text": " principle what they say is we want a function G that mimics the supervised" }, { "start": 416.12, "end": 422, "text": " loss right we want a function G that is kind of equal like the supervised loss" }, { "start": 422, "end": 429.15999999999997, "text": " if we had the task ID right and that's that's pretty cool because you know the" }, { "start": 429.15999999999997, "end": 435.4, "text": " the supervised loss you sort of need all the information you need the label you" }, { "start": 435.4, "end": 441.79999999999995, "text": " 
need you need all the all you need the task ID so the supervised loss is" }, { "start": 441.79999999999995, "end": 449.32, "text": " unavailable but we want a function G that in its gradient mimics the supervised" }, { "start": 449.32, "end": 455.56, "text": " loss so they go about constructing this right here they say lemma first lemma" }, { "start": 455.56, "end": 459.2, "text": " it's possible to construct a function G such that the gradient matches the" }, { "start": 459.2, "end": 463.47999999999996, "text": " gradient from the supervised loss for all s neurons so for all these" }, { "start": 463.48, "end": 469.20000000000005, "text": " superfluous neurons specifically we want that the gradient with respect to the" }, { "start": 469.20000000000005, "end": 473.12, "text": " log it's if the gradient to the log it's is equal that means the gradient to all" }, { "start": 473.12, "end": 477.42, "text": " the rest of the network is equal because the rest of the network goes through the" }, { "start": 477.42, "end": 481, "text": " log it's right the gradient through the log it's is equal to the gradient of the" }, { "start": 481, "end": 486.68, "text": " supervised loss to the log it's for all the superfluous neurons and zero" }, { "start": 486.68, "end": 492.44, "text": " otherwise so they say the zero otherwise is pretty easily done in math you know" }, { "start": 492.44, "end": 498.88, "text": " simply set it to zero and in the actual code which you can achieve like this" }, { "start": 498.88, "end": 504.8, "text": " where M indicates the superfluous neurons so this is just they said just" }, { "start": 504.8, "end": 510.28, "text": " multiplied here and the other ones are detached so there is no gradient" }, { "start": 510.28, "end": 516.04, "text": " flowing this is the property that we only look at the superfluous neurons and" }, { "start": 516.04, "end": 523.92, "text": " now we are going to show that the gradient is going to be equal so they" }, { "start": 523.92, "end": 530.76, "text": " say if you had the supervised loss which means if you had the label then this" }, { "start": 530.76, "end": 537.0799999999999, "text": " would be your cross entropy loss okay so you cross it divides into this part" }, { "start": 537.0799999999999, "end": 540.9599999999999, "text": " where you need the label and then this part here you don't need the label now" }, { "start": 540.96, "end": 548.2800000000001, "text": " you can pretty much say look the label is certainly going to be one of" }, { "start": 548.2800000000001, "end": 553.36, "text": " not the superfluous neurons because the superfluous neurons are superfluous they" }, { "start": 553.36, "end": 559.32, "text": " are never the correct neuron so this is always going to be you know not the not" }, { "start": 559.32, "end": 563.44, "text": " the neurons we look at so the gradient certainly this is always going to be" }, { "start": 563.44, "end": 570, "text": " zero because we never we wherever the gradient is flowing that's not where the" }, { "start": 570, "end": 579.92, "text": " where this is one so the gradient of any superfluous neuron is just this thing" }, { "start": 579.92, "end": 586.36, "text": " right here and that's exactly why they build the function G so the function G" }, { "start": 586.36, "end": 591.8, "text": " has this exact gradient the function G if you derive it has that same gradient" }, { "start": 591.8, "end": 600.3599999999999, "text": " as the supervised loss for the superfluous neurons okay so it's sort of" }, { 
"start": 600.3599999999999, "end": 606.24, "text": " magic but it's not you know it's not magic so they need two more assumptions" }, { "start": 606.24, "end": 611.24, "text": " here to have to get the following properties so the for the first" }, { "start": 611.24, "end": 619.56, "text": " property now because now we want to have G be identifying the correct task so" }, { "start": 619.56, "end": 624.04, "text": " we've already constructed G now we want to show that if we really do this the" }, { "start": 624.04, "end": 631.0799999999999, "text": " gradient with respect to the alphas then if we do it for a wrong task for the" }, { "start": 631.0799999999999, "end": 636.68, "text": " tasks that it's not the task of that particular data point that goes into" }, { "start": 636.68, "end": 642.5999999999999, "text": " computing G then we'll get a value that's probably lower than zero however" }, { "start": 642.5999999999999, "end": 648.8399999999999, "text": " if we plug in if we derive but with respect to the alpha of the correct task" }, { "start": 648.84, "end": 655.6800000000001, "text": " then we get a gradient a negative gradient that's higher than zero okay so" }, { "start": 655.6800000000001, "end": 660.12, "text": " we're now going to prove that this with high probability really allows us to" }, { "start": 660.12, "end": 666.8000000000001, "text": " distinguish the correct task from the wrong task and we need two assumptions" }, { "start": 666.8000000000001, "end": 670.88, "text": " right here the assumption one is we assume that the mask learn on task I" }, { "start": 670.88, "end": 677, "text": " will be independent from the data from task J if the task data is from task J" }, { "start": 677, "end": 683.88, "text": " then this are independent random variables okay so it sort of means that" }, { "start": 683.88, "end": 691.4, "text": " the the tasks themselves are kind of independent but it's not it's it's not" }, { "start": 691.4, "end": 696.02, "text": " the same requirement but you can think of in in the case of permuted M nest or" }, { "start": 696.02, "end": 703.2, "text": " so this is some it's given except if you consider this kind of frequency of" }, { "start": 703.2, "end": 708.24, "text": " brightness and so on but if you have independent task I think that this is" }, { "start": 708.24, "end": 714.6400000000001, "text": " given that means that the features right here and the masks are independent" }, { "start": 714.6400000000001, "end": 720.9200000000001, "text": " variable if if the data is from tax J then the features and the mask from task" }, { "start": 720.9200000000001, "end": 725.6400000000001, "text": " I are independent variable sorry the second assumption you need is that we" }, { "start": 725.6400000000001, "end": 729.44, "text": " assume that a negative weight and a positive weight are equally likely to be" }, { "start": 729.44, "end": 736.24, "text": " masked out okay so this again you can think of with some regularity this is" }, { "start": 736.24, "end": 743.08, "text": " certainly going to be to be given in a randomly initialized neural network note" }, { "start": 743.08, "end": 749.48, "text": " that when the features are 0 which will be the case for 0 mean random features" }, { "start": 749.48, "end": 755.32, "text": " yeah so um yeah before I said this was your neural network this is your random" }, { "start": 755.32, "end": 761.8000000000001, "text": " neural network right and then you mask that and so on if this is a randomly" }, { "start": 
761.8000000000001, "end": 766.84, "text": " initialized neural network then you can make a case that the expected features" }, { "start": 766.84, "end": 775.12, "text": " of those will be 0 it doesn't need to be the case but you can you can construct" }, { "start": 775.12, "end": 779.48, "text": " it such that it is so if you have the two things right if you have those two" }, { "start": 779.48, "end": 786.8000000000001, "text": " things then you can prove the following if the data X comes from task J then" }, { "start": 786.8000000000001, "end": 792.32, "text": " when you derive by an alpha that's not of task J you get a number that's" }, { "start": 792.32, "end": 799.72, "text": " smaller than zero in expectation and here the crucial part is you reframe" }, { "start": 799.72, "end": 807.12, "text": " this gradient you reframe reframe reframe and what you'll see is that this" }, { "start": 807.12, "end": 815.6, "text": " here comes out so this is a sum and each element of the sum is going to be" }, { "start": 815.6, "end": 819.36, "text": " greater or equal to zero which means that this thing is greater or equal to" }, { "start": 819.36, "end": 824.72, "text": " zero which means the negative thing is smaller than zero in lemma H1 now we're" }, { "start": 824.72, "end": 829.76, "text": " going to look at lemma H1 to get an intuition of what's going on right here" }, { "start": 829.76, "end": 837.8, "text": " so lemma H1 says if J is the true task and I is not equal to J then this" }, { "start": 837.8, "end": 843.86, "text": " quantity here is greater than zero all right I restarted my tablet and we are" }, { "start": 843.86, "end": 851.08, "text": " back so what's kind of the the intuition behind why this quantity here would be" }, { "start": 851.08, "end": 857.3199999999999, "text": " greater or equal to zero and honestly in order to make it a bit easier I first" }, { "start": 857.32, "end": 865.08, "text": " want to look at whenever I equals J so whenever J is the true task and then I" }, { "start": 865.08, "end": 871.6, "text": " equals J then we can sort of think of the opposite like why why this should be" }, { "start": 871.6, "end": 878.6800000000001, "text": " smaller or equal to zero so consider this this is the run the feature of the" }, { "start": 878.68, "end": 887.7199999999999, "text": " network of you right and then the EUV connects that to the to the mask at point" }, { "start": 887.7199999999999, "end": 896.8, "text": " V and the mask at point at that point UV is either zero or one depending on the" }, { "start": 896.8, "end": 902.3599999999999, "text": " training so this this Xi right here that's going to be the from the" }, { "start": 902.3599999999999, "end": 907.76, "text": " initialization but the mask is going to be zero or one depending on whether that" }, { "start": 907.76, "end": 912.96, "text": " feature contributes sorry whether this entire thing here contributes" }, { "start": 912.96, "end": 919.08, "text": " positively to the task or not so the secret right here why we can make a" }, { "start": 919.08, "end": 924.76, "text": " claim that this is greater or lower than zero is going to be that the mask can" }, { "start": 924.76, "end": 932.76, "text": " only be zero or one it cannot be negative one right so if the mask is" }, { "start": 932.76, "end": 938.2, "text": " zero then obviously this thing is going to be zero however if the mask is one" }, { "start": 938.2, "end": 943.72, "text": " what does it mean if the mask is one that means that this this entire 
feature" }, { "start": 943.72, "end": 953.72, "text": " right here let's call it F is positively impacting is positively contributing to" }, { "start": 953.72, "end": 961.72, "text": " this particular neuron right here so if the mask is one this is this it means" }, { "start": 961.72, "end": 967.44, "text": " the addition of that feature more of that feature makes that log it go up okay" }, { "start": 967.44, "end": 977.2, "text": " so if the mask is one during training it means that the feature positively" }, { "start": 977.2, "end": 981.4, "text": " contributes to the task so if we look at the gradient with respect to this" }, { "start": 981.4, "end": 985.88, "text": " function with respect to the the log it and the function basically means it's" }, { "start": 985.88, "end": 995.4399999999999, "text": " just measure measures how high these superfluous log it's are then what why" }, { "start": 995.4399999999999, "end": 1001.12, "text": " do we find a negative interaction there because if you look at the neural" }, { "start": 1001.12, "end": 1007.64, "text": " network and you forward pass and this particular feature is important and you" }, { "start": 1007.64, "end": 1013.8, "text": " look at the loss G and you backward pass through the log it's if it is smaller" }, { "start": 1013.8, "end": 1020.12, "text": " than zero that means there there is a negative interaction right here so that" }, { "start": 1020.12, "end": 1028.3999999999999, "text": " basically means that if we make this feature higher then in this case we make" }, { "start": 1028.3999999999999, "end": 1036.96, "text": " this G function go lower okay and that is the case for the correct task because" }, { "start": 1036.96, "end": 1044.16, "text": " if this is the correct task and the mask is learned adequately that means it" }, { "start": 1044.16, "end": 1050.8400000000001, "text": " should assign a low weight to the superfluous neuron whenever the input" }, { "start": 1050.8400000000001, "end": 1057.68, "text": " features you know are of that task and so it makes sense that this here would" }, { "start": 1057.68, "end": 1063.72, "text": " be a negative number because what we want if the mask deems the feature" }, { "start": 1063.72, "end": 1068.92, "text": " important in a positive sense we want that if the feature goes up G goes down" }, { "start": 1068.92, "end": 1076.08, "text": " and that is exactly why we have the negative interaction right here right so" }, { "start": 1076.08, "end": 1081.48, "text": " the negative comes from this being negative I hope this sort of makes sense" }, { "start": 1081.48, "end": 1086.56, "text": " so if the mask is one the mask says basically if that feature goes up the" }, { "start": 1086.56, "end": 1091.84, "text": " loss goes down now G is a measure of the superfluous neurons the superfluous" }, { "start": 1091.84, "end": 1098.08, "text": " neurons should be small if the loss is small so if this is really from the task" }, { "start": 1098.08, "end": 1102.8799999999999, "text": " and this feature is really useful that means if we increase the feature the G" }, { "start": 1102.8799999999999, "end": 1107.9599999999998, "text": " function should go down and therefore this product here is going to be most" }, { "start": 1107.9599999999998, "end": 1116.56, "text": " likely negative okay and the contrary is you know analogous right here if this is" }, { "start": 1116.56, "end": 1123.44, "text": " not of this task and the mass can either be 0 or 1 right if it's 0 then this" }, { "start": 1123.44, 
"end": 1130.36, "text": " quantity is 0 however if it's 1 it's more likely that the that there the" }, { "start": 1130.36, "end": 1137.12, "text": " feature here because it's I is not the correct task which basically means that" }, { "start": 1137.12, "end": 1143.8799999999999, "text": " this feature it is for a different task it is good for a different task so the" }, { "start": 1143.88, "end": 1147.96, "text": " mask of that different task says it's good right here and we have no reason to" }, { "start": 1147.96, "end": 1152.7600000000002, "text": " believe that this would decrease the loss of the loss of this particular data" }, { "start": 1152.7600000000002, "end": 1160.16, "text": " point in this task so it's kind of the inverse reasoning if you look at the" }, { "start": 1160.16, "end": 1167.4, "text": " actual derivation here it's fairly long and it goes over the cases of the" }, { "start": 1167.4, "end": 1171.5600000000002, "text": " interactions between actually this initialization and the mask so the" }, { "start": 1171.56, "end": 1178.48, "text": " initialization can be positive or negative as you can see right here and I" }, { "start": 1178.48, "end": 1186.08, "text": " think I just think that the the intuition here is that the superfluous" }, { "start": 1186.08, "end": 1192.46, "text": " neurons react differently to a data point of the trained task because they" }, { "start": 1192.46, "end": 1199.96, "text": " have been kind of made to decrease for that task and for that particular mask" }, { "start": 1199.96, "end": 1204.72, "text": " as they do for when the data point doesn't match the mask when the data" }, { "start": 1204.72, "end": 1210.4, "text": " point doesn't match the mask there is no reason for the logits of the superfluous" }, { "start": 1210.4, "end": 1216.7, "text": " neurons to be low and if the data point task does match the mask there is ample" }, { "start": 1216.7, "end": 1222.8400000000001, "text": " reasons for those to be low I hope that sort of makes sense it is sort of it's a" }, { "start": 1222.8400000000001, "end": 1227.14, "text": " bit more of an intuition but if you really want to dig into it look at the" }, { "start": 1227.14, "end": 1234.72, "text": " derivation right here okay second point is the fact that the masks and the super" }, { "start": 1234.72, "end": 1238.88, "text": " positions don't really have to do anything with each other and that's you" }, { "start": 1238.88, "end": 1242.8400000000001, "text": " know I've said throughout the video like remember these tasks are super easy yada" }, { "start": 1242.8400000000001, "end": 1249.16, "text": " yada yada so let me make it clear in this in this diagram right here the" }, { "start": 1249.16, "end": 1255.5600000000002, "text": " super masks these are simply a way to train a neural network in a crude way" }, { "start": 1255.56, "end": 1260.12, "text": " right I don't think there is you know this distinction between mask and" }, { "start": 1260.12, "end": 1265.76, "text": " network I don't really like that much because ultimately what you're doing is" }, { "start": 1265.76, "end": 1271.6399999999999, "text": " simply you're training a neural network in a kind of weird way okay the fact" }, { "start": 1271.6399999999999, "end": 1276.52, "text": " that you always use the same underlying you know great neural network doesn't" }, { "start": 1276.52, "end": 1281.9199999999998, "text": " really matter right here it's still what you do in this super mask training is" }, { "start": 1281.9199999999998, 
"end": 1285.12, "text": " you provide a severely over parameterized network and then the mask" }, { "start": 1285.12, "end": 1289.36, "text": " simply gets to choose which weights to mix rather than you get to adjust the" }, { "start": 1289.36, "end": 1294.08, "text": " weights if you adjust the weights you usually get more accurate than with the" }, { "start": 1294.08, "end": 1299.1599999999999, "text": " mask but it's sort of like a quantized neural network that you train right here" }, { "start": 1299.1599999999999, "end": 1303.12, "text": " so that's the super mask thing again I don't think it's important that the" }, { "start": 1303.12, "end": 1306.84, "text": " underlying network is always the same the only advantage you have is it saves" }, { "start": 1306.84, "end": 1314.36, "text": " space because these masks are very small the super masks on the other hand this" }, { "start": 1314.36, "end": 1321.3999999999999, "text": " idea that you overlay all of the masks together and then you look at where this" }, { "start": 1321.3999999999999, "end": 1327.8, "text": " at the gradient of the entropy and you look at which of the of the mixing" }, { "start": 1327.8, "end": 1333.36, "text": " factors the gradient poles the most that's a different idea and the question" }, { "start": 1333.36, "end": 1337.52, "text": " here is wouldn't that isn't that independent does really depend on the" }, { "start": 1337.52, "end": 1343.8799999999999, "text": " masks or doesn't it and the you know the hypothesis would be that if I simply" }, { "start": 1343.88, "end": 1348.72, "text": " train you know three different neural networks for three different tasks could" }, { "start": 1348.72, "end": 1352.8400000000001, "text": " I not do the same superposition trick like could I not just add all of them" }, { "start": 1352.8400000000001, "end": 1358.1200000000001, "text": " with a respective alpha look at the entropy calculate the gradient with" }, { "start": 1358.1200000000001, "end": 1362.2800000000002, "text": " respect to each of the alphas of the entropy and then decide which task it is" }, { "start": 1362.2800000000002, "end": 1368, "text": " you know don't need masks simply mix neural networks in superposition so I" }, { "start": 1368, "end": 1372.96, "text": " did it and I actually tried their code is available so big props for their" }, { "start": 1372.96, "end": 1378.16, "text": " code being available I tried their code it's actually very few changes and I'm" }, { "start": 1378.16, "end": 1384.72, "text": " going to append my live coding of this at the end of this video so if you want" }, { "start": 1384.72, "end": 1388.64, "text": " to if you are interested in watching that you can do so but you know the" }, { "start": 1388.64, "end": 1393.1200000000001, "text": " outcome is if I train neural networks and I have I've you know done super quick" }, { "start": 1393.1200000000001, "end": 1398, "text": " and initialize them wrongly probably and all but if I train these neural net if I" }, { "start": 1398, "end": 1403.48, "text": " train the masks you get to like 92 percent accuracy in their tasks in each" }, { "start": 1403.48, "end": 1407.24, "text": " of the tasks and then also in the average if I train the actual neural" }, { "start": 1407.24, "end": 1412, "text": " networks I get to a higher accuracy like 93 something it doesn't matter it's just" }, { "start": 1412, "end": 1418.36, "text": " higher okay so that's hypothesis one is the training masks is just a way of" }, { "start": 1418.36, "end": 
1422.7, "text": " training neural networks the fact that the masks and the network training" }, { "start": 1422.7, "end": 1428.2, "text": " itself are that close I think is a testament to how easy these tasks are" }, { "start": 1428.2, "end": 1434.0800000000002, "text": " like how easy eminent amnest is I'm going to also hypothesize that if the" }, { "start": 1434.0800000000002, "end": 1438.04, "text": " task gets harder and harder and I don't mean 10 class image net I mean a" }, { "start": 1438.04, "end": 1444.68, "text": " thousand class image net then these masks are going to degrade severely" }, { "start": 1444.68, "end": 1448.2, "text": " versus training the actual neural network I might be wrong I mean you can" }, { "start": 1448.2, "end": 1453.16, "text": " over parameter eyes really heavily and they will still work okay but in any" }, { "start": 1453.16, "end": 1455.92, "text": " case I trade the train these neural networks and they reached higher" }, { "start": 1455.92, "end": 1461, "text": " accuracy and then I did the exact same thing I laid them in superposition to" }, { "start": 1461, "end": 1465.56, "text": " determine what task it is and I could achieve the exact same result so here in" }, { "start": 1465.56, "end": 1469.6000000000001, "text": " their example they have a hundred percent task classification accuracy and" }, { "start": 1469.6000000000001, "end": 1475.16, "text": " I reached the exact same thing code worked I'm not going to try to scale" }, { "start": 1475.16, "end": 1482.3200000000002, "text": " this up to 250 or 2500 in tasks but I'm going to assume that with you know" }, { "start": 1482.3200000000002, "end": 1488.0400000000002, "text": " tuning and stuff that it's going to work about equally well you could make an" }, { "start": 1488.0400000000002, "end": 1492.8400000000001, "text": " argument that the masks being sparser they might be differentiated from each" }, { "start": 1492.8400000000001, "end": 1500.48, "text": " other more accurately but I'm not sure maybe but it's it's not a cool it's not" }, { "start": 1500.48, "end": 1507.04, "text": " a qualitative difference right so these two things are really two separate ideas" }, { "start": 1507.04, "end": 1513.2, "text": " that find their way together in this paper but ultimately have not much to do" }, { "start": 1513.2, "end": 1523.34, "text": " with each other okay at least that's from what I can tell I might I might be" }, { "start": 1523.34, "end": 1528.3600000000001, "text": " wrong here and I might be wrong with respect to their G objective and whatnot" }, { "start": 1528.36, "end": 1536.12, "text": " and you know but I think that that these are two cool ideas but they can be" }, { "start": 1536.12, "end": 1543.28, "text": " applied independently so the last thing I want to look at is their broader impact" }, { "start": 1543.28, "end": 1549.28, "text": " statement right here now there is a reason so usually I kind of track these" }, { "start": 1549.28, "end": 1552.8799999999999, "text": " broader impact statement because I think this this is this here is sort of" }, { "start": 1552.8799999999999, "end": 1556.52, "text": " fundamental research right this is fundamental machine learning research" }, { "start": 1556.52, "end": 1560.8, "text": " we do architecture the multitask learning task isn't really important as" }, { "start": 1560.8, "end": 1565.4, "text": " long as we have kind of the same tasks right here uncorrelated and so on the" }, { "start": 1565.4, "end": 1568.8799999999999, "text": " same 
hardness and I've also made the point that it's really important for" }, { "start": 1568.8799999999999, "end": 1574.24, "text": " these tasks to be the same hard for this to work in this place a role right here" }, { "start": 1574.24, "end": 1580.92, "text": " so um and they do they do describe some of this in this conclusion with you know" }, { "start": 1580.92, "end": 1586.44, "text": " limitation that we observed has to do with task identity inference when model" }, { "start": 1586.44, "end": 1591.3600000000001, "text": " are not well calibrated models that are overly confident for the wrong task okay" }, { "start": 1591.3600000000001, "end": 1600.4, "text": " so in order for them to infer the correct task they the sort of so if you" }, { "start": 1600.4, "end": 1605.92, "text": " look at your entropy of the models for the tasks that means you're gonna" }, { "start": 1605.92, "end": 1611.98, "text": " select the model that is the most sure about the task this only works if the" }, { "start": 1611.98, "end": 1617.84, "text": " tasks are equally hard okay if one task is much much harder than the other task" }, { "start": 1617.84, "end": 1622.04, "text": " this other task is always going to say well I'm really confident about this one" }, { "start": 1622.04, "end": 1625.84, "text": " because the task is just easier it's going to be it's going to train in neural" }, { "start": 1625.84, "end": 1630.88, "text": " networks is generally more confident and you're going to misclassify a lot of the" }, { "start": 1630.88, "end": 1637.2, "text": " tasks so so here what does this have to do with the broader impact statement if" }, { "start": 1637.2, "end": 1645.48, "text": " you look at the broader impact statement what they say right here so they say a" }, { "start": 1645.48, "end": 1649.76, "text": " goal of continue learning self-manage tasks with a single model however it is" }, { "start": 1649.76, "end": 1653.24, "text": " not exactly clear what qualifies as a single model therefore a concrete" }, { "start": 1653.24, "end": 1658.52, "text": " objective has become to learn many tasks as efficiently as possible we believe" }, { "start": 1658.52, "end": 1662.64, "text": " that subs up is a useful step in this direction however there are consequences" }, { "start": 1662.64, "end": 1667.44, "text": " to more efficient models both positive and negative so this is sort of what the" }, { "start": 1667.44, "end": 1672.1200000000001, "text": " community does so there are three things that I've seen so far in broader impact" }, { "start": 1672.1200000000001, "end": 1676.64, "text": " statement first you some people say this is not applicable to us which I agree" }, { "start": 1676.64, "end": 1682.0400000000002, "text": " for most fundamental research broader it like the broader impact statement is" }, { "start": 1682.0400000000002, "end": 1687.1200000000001, "text": " supposed to be what does this particular method how will this influence broader" }, { "start": 1687.12, "end": 1694.36, "text": " society so not applicable completely valid for most of these research papers" }, { "start": 1694.36, "end": 1702.2399999999998, "text": " because guess what you can use any method to do good or to do bad and that's" }, { "start": 1702.2399999999998, "end": 1708.6399999999999, "text": " that's the second second part second method is basically you you just change" }, { "start": 1708.6399999999999, "end": 1713.76, "text": " a generic statements how you can do good and bad and usually you can't relate it" }, { 
"start": 1713.76, "end": 1718.96, "text": " to your particular method in the paper right because your method is I don't" }, { "start": 1718.96, "end": 1725.96, "text": " know like my faster convergence rate of SGD but and and so what you do is you" }, { "start": 1725.96, "end": 1730.3, "text": " just go one level up you go up the levels it's always like optimization can" }, { "start": 1730.3, "end": 1733.28, "text": " be used for good and for bad I mean that's still kind of a bit vague and" }, { "start": 1733.28, "end": 1737.8799999999999, "text": " then you go up further well optimization can do more machine learning and machine" }, { "start": 1737.8799999999999, "end": 1742.24, "text": " learning can be used to do good and bad for example face recognition and things" }, { "start": 1742.24, "end": 1745.76, "text": " like this so you just go up the levels and that's what they essentially do here" }, { "start": 1745.76, "end": 1750.84, "text": " and that's what you know most people have defaulted to it's like okay so you" }, { "start": 1750.84, "end": 1756.04, "text": " know our model here is you know we it basically one can train more efficient" }, { "start": 1756.04, "end": 1760.16, "text": " models and then they simply highlight what more efficient models can do" }, { "start": 1760.16, "end": 1764.18, "text": " efficient models require less compute if there's a model by we run on the end" }, { "start": 1764.18, "end": 1769.24, "text": " device if models are more efficient than large-scale research is not limited to" }, { "start": 1769.24, "end": 1774.36, "text": " wealthier institutions by the way I also the broader impact statement I believe" }, { "start": 1774.36, "end": 1779.32, "text": " should be the impact on society and not really on the research community itself" }, { "start": 1779.32, "end": 1787.6, "text": " so I also this this is a bit shaky with respect to because I'm really regarding" }, { "start": 1787.6, "end": 1791.92, "text": " what the broader impact statement should be this is not my opinion I'm I'm trying" }, { "start": 1791.92, "end": 1797.1200000000001, "text": " to reflect everything I've read of guidance about what the broader impact" }, { "start": 1797.12, "end": 1803.2399999999998, "text": " statement should be by the way there is also method method three which is to" }, { "start": 1803.2399999999998, "end": 1806.4399999999998, "text": " simply tell me more about your paper in the broader impact statement which I" }, { "start": 1806.4399999999998, "end": 1810.6399999999999, "text": " guess is the smart method because the broader impact statement can be before" }, { "start": 1810.6399999999999, "end": 1815.2399999999998, "text": " before the references so it's in the main part and people are required to" }, { "start": 1815.2399999999998, "end": 1819.36, "text": " read it not like the appendix reviewers are not required to read the appendix" }, { "start": 1819.36, "end": 1822.76, "text": " reviewers are required to read the broader impact statement so I guess the" }, { "start": 1822.76, "end": 1827.68, "text": " smart authors will just try to cloak more information about their model in" }, { "start": 1827.68, "end": 1832.08, "text": " terms of a broader impact statement I guess well whether that's smart is a" }, { "start": 1832.08, "end": 1837.96, "text": " different discussion but here they just it's it's already defaulting right these" }, { "start": 1837.96, "end": 1843.64, "text": " it's already the default people simply go level up level up level up until we" 
}, { "start": 1843.64, "end": 1848.8799999999999, "text": " can you know say something generic and we will also highlight and discuss the" }, { "start": 1848.88, "end": 1852.72, "text": " negative consequences of models which can efficiently learn many tasks and" }, { "start": 1852.72, "end": 1857.0400000000002, "text": " efficient models in general when models are more efficient they're also more" }, { "start": 1857.0400000000002, "end": 1861.2800000000002, "text": " available and less subject to regularization as a study and study of" }, { "start": 1861.2800000000002, "end": 1866.0400000000002, "text": " result for instance when a high-impact model is released an institution will" }, { "start": 1866.0400000000002, "end": 1871.44, "text": " hopefully be accompanied by a model card analyzing the bias and intended use of" }, { "start": 1871.44, "end": 1877.16, "text": " the model by contrast if anyone is able to train a powerful model this may no" }, { "start": 1877.16, "end": 1881, "text": " longer be the case resulting in a proliferation of model with harmful" }, { "start": 1881, "end": 1885.8000000000002, "text": " biases or intended use taking the United States for instance bias can be harmful" }, { "start": 1885.8000000000002, "end": 1890.72, "text": " as models show disproportionately more errors for already marginalized groups" }, { "start": 1890.72, "end": 1896.8000000000002, "text": " furthering existing deeply rooted structural racism this this is like well" }, { "start": 1896.8000000000002, "end": 1904.88, "text": " technology this is basically a statement about technology and so why why do I" }, { "start": 1904.88, "end": 1911.88, "text": " have a particular not issue but why do I pick this broader impact statement they" }, { "start": 1911.88, "end": 1917.96, "text": " even Rick this here this is this gender shades paper right where people went and" }, { "start": 1917.96, "end": 1922.8000000000002, "text": " they looked at these commercial API's for face recognition I I think that's" }, { "start": 1922.8000000000002, "end": 1929.0800000000002, "text": " the paper yeah gender shades so if you have a face" }, { "start": 1929.08, "end": 1937.6, "text": " recognizer they realized they divided people up by I think gender and race so" }, { "start": 1937.6, "end": 1943.78, "text": " you know like they built four groups or I haven't I haven't I've just looked at" }, { "start": 1943.78, "end": 1947.8, "text": " the paper but in my understanding that they divided people up into groups which" }, { "start": 1947.8, "end": 1952.9199999999998, "text": " I find arbitrary to have the these two axes race and gender but okay you can do" }, { "start": 1952.9199999999998, "end": 1958.6399999999999, "text": " that and they discovered that these commercial API's have different accuracy" }, { "start": 1958.64, "end": 1964.16, "text": " for the different groups right and that basically our point is that you know" }, { "start": 1964.16, "end": 1967.2800000000002, "text": " these commercial API's if they're offered for all humans they should work" }, { "start": 1967.2800000000002, "end": 1974.68, "text": " equally well for all humans now now you may be see what it has to do with this" }, { "start": 1974.68, "end": 1982.4, "text": " paper well this paper is in the business of doing multitask learning so it is" }, { "start": 1982.4, "end": 1989, "text": " very viable to actually frame the the task for example like this is an example" }, { "start": 1989, "end": 1994.96, "text": " if you frame the task of 
multitask learning like face recognition on" }, { "start": 1994.96, "end": 1999.2800000000002, "text": " different groups of people as a multitask learning problem you have you" }, { "start": 1999.2800000000002, "end": 2005.3600000000001, "text": " know group group one right here group two group three and then if at inference" }, { "start": 2005.3600000000001, "end": 2009.5600000000002, "text": " time so you can build you know good models for each of the group at" }, { "start": 2009.56, "end": 2012.72, "text": " inference time you're given an image and you're trying to a fur first which" }, { "start": 2012.72, "end": 2017.4199999999998, "text": " group is that from and then take the appropriate classifier that would be you" }, { "start": 2017.4199999999998, "end": 2022.72, "text": " know that would be a good a hypothetical classifier for this thing now what do we" }, { "start": 2022.72, "end": 2030.3999999999999, "text": " know about this thing this thing is fails if the tasks aren't equally hard" }, { "start": 2030.3999999999999, "end": 2038.6799999999998, "text": " also in in specifically if if for one group let's say for group three the the" }, { "start": 2038.68, "end": 2043.24, "text": " task is way harder because you have less data I guess the one of the main" }, { "start": 2043.24, "end": 2048.36, "text": " problems there is that the data sets are not equally balanced if you have less" }, { "start": 2048.36, "end": 2054.2400000000002, "text": " data for that then the task becomes de facto harder and the model is less sure" }, { "start": 2054.2400000000002, "end": 2062.2400000000002, "text": " about the task which means that it's a double whammy so not only is the model" }, { "start": 2062.2400000000002, "end": 2068.56, "text": " itself less accurate but these the input data point if the person is actually" }, { "start": 2068.56, "end": 2074.32, "text": " of group three is less likely to be classified correctly into the correct" }, { "start": 2074.32, "end": 2081.16, "text": " model at to begin with so you know for all the for all I I've had my my share" }, { "start": 2081.16, "end": 2086.16, "text": " of of comments on the video I made and I still maintain that societal bias can" }, { "start": 2086.16, "end": 2091.2799999999997, "text": " comes about by data set but for all the people saying there are models that" }, { "start": 2091.2799999999997, "end": 2098.32, "text": " exaggerate existing biases in models this would be like if there is any ever" }, { "start": 2098.32, "end": 2103.04, "text": " any applicability of these broader impact statement guidelines this would" }, { "start": 2103.04, "end": 2108.36, "text": " be the paper right it's this right here is an actual system that if I have" }, { "start": 2108.36, "end": 2113.48, "text": " different classifiers and I combine them with this method it will double punish" }, { "start": 2113.48, "end": 2120.04, "text": " the classifier that is less sure that is less accurate because that is also going" }, { "start": 2120.04, "end": 2124.88, "text": " to be the one with the higher entropy therefore not as much selected if I give" }, { "start": 2124.88, "end": 2130.76, "text": " a data point of that particular task and so this is like a I'm not criticizing" }, { "start": 2130.76, "end": 2134.76, "text": " the method here like by all means like this is a cool method where you can" }, { "start": 2134.76, "end": 2139.32, "text": " recognize that this happens and try to calibrate accordingly but if there was" }, { "start": 2139.32, "end": 
2145.2000000000003, "text": " ever any straight ball for a broader impact statement I would you know this" }, { "start": 2145.2000000000003, "end": 2151.32, "text": " is it and this I'm not I'm not trying I'm not saying that these these authors" }, { "start": 2151.32, "end": 2157.2400000000002, "text": " didn't do that for a reason I believe that look it's been whatever not even" }, { "start": 2157.2400000000002, "end": 2161.36, "text": " half a year since we've started with these general broader impact statements" }, { "start": 2161.36, "end": 2168.1200000000003, "text": " and everybody is already defaulting to simply say technology good technology" }, { "start": 2168.1200000000003, "end": 2177.8, "text": " bad that's that's the people aren't even thinking and so this right this is one" }, { "start": 2177.8, "end": 2183.6800000000003, "text": " of the reasons why I simply find these broader impact statements to be not that" }, { "start": 2183.6800000000003, "end": 2188.2400000000002, "text": " like not a good idea because there is a default answer and people are just" }, { "start": 2188.2400000000002, "end": 2194.2400000000002, "text": " putting it here even in like when there is an actual obvious immensely obvious" }, { "start": 2194.2400000000002, "end": 2202.36, "text": " thing that they even they even cited like the basis for that so you know" }, { "start": 2202.36, "end": 2210.8, "text": " that's sort of my take on this I again I enjoyed this paper the code is is" }, { "start": 2210.8, "end": 2215.96, "text": " available everything is good about this this paper I'm not even the fact that" }, { "start": 2215.96, "end": 2218.48, "text": " these are you know I think these are kind of two separate ideas they're" }, { "start": 2218.48, "end": 2225.4, "text": " combined cool they're analyzed formally in theory there's intuition given all" }, { "start": 2225.4, "end": 2232.34, "text": " good so don't get me wrong this is not like trashing this paper it's just" }, { "start": 2232.34, "end": 2240.44, "text": " I felt I had something more to say and I think that was it so yeah I'll see you" }, { "start": 2240.44, "end": 2247.2000000000003, "text": " next time with the new paper okay so our goal here is going to be to change this" }, { "start": 2247.2000000000003, "end": 2254.36, "text": " code to not use masks as mixtures but actually use neural networks with real" }, { "start": 2254.36, "end": 2260.04, "text": " weights as as mixtures and in superposition with each other okay so" }, { "start": 2260.04, "end": 2264.2, "text": " what we're going to do is we're going to train the different neural networks and" }, { "start": 2264.2, "end": 2270, "text": " then use this kind of superposition trick to figure out which task a data" }, { "start": 2270, "end": 2277.04, "text": " point came from so let's have a look at the code right here and there's a bunch" }, { "start": 2277.04, "end": 2283.4, "text": " of helper code and if we go down through everything you'll see that this is the" }, { "start": 2283.4, "end": 2288.96, "text": " MNIST permuted data set so each each task is basically a random permutation" }, { "start": 2288.96, "end": 2297.2, "text": " of MNIST and if you execute I believe this here and then you train the model" }, { "start": 2297.2, "end": 2303.32, "text": " and right now it's for five tasks but I guess that's going to be enough for now" }, { "start": 2303.32, "end": 2311.32, "text": " yeah so if we get some good signal here I guess it's a matter of of doing kind" }, { "start": 
2311.32, "end": 2316.32, "text": " of engineering and plumbing and tuning if until you get it up to whatever 200" }, { "start": 2316.32, "end": 2323, "text": " or 2000 tasks though I might be wrong there so this is training and I" }, { "start": 2323, "end": 2329.44, "text": " shortly sort of had a look at the code but I haven't actually tried this yet so" }, { "start": 2329.44, "end": 2336.2000000000003, "text": " the thing the model is built here you see this is multi task fully connected" }, { "start": 2336.2000000000003, "end": 2341.6400000000003, "text": " which has these different layers right here and it's built by these multi task" }, { "start": 2341.64, "end": 2349.56, "text": " mask linear models now the multi task mask linear models are defined right" }, { "start": 2349.56, "end": 2354.6, "text": " here so it's basically a linear model as you can see it's derived from a linear" }, { "start": 2354.6, "end": 2361.2799999999997, "text": " from a linear module and it has a parameter called num tasks and then it" }, { "start": 2361.2799999999997, "end": 2369.7599999999998, "text": " has a parameter scores which I guess is are these these masks right here and the" }, { "start": 2369.76, "end": 2375.44, "text": " scores I'm going to guess are always going to be multiplied by the weights" }, { "start": 2375.44, "end": 2381.44, "text": " here in the forward so you can see they're in forward you get the weights" }, { "start": 2381.44, "end": 2389.2000000000003, "text": " from the alphas yeah yeah this is the superimposed alright so if we know the" }, { "start": 2389.2000000000003, "end": 2396.0400000000004, "text": " task ID down here we get this subnet and we are going to multiply it with the" }, { "start": 2396.04, "end": 2401.4, "text": " weights if we don't know the task ID we want to get these alphas so the alphas" }, { "start": 2401.4, "end": 2408.2799999999997, "text": " are going to be one over the number of tasks at the beginning we're then going" }, { "start": 2408.2799999999997, "end": 2417.4, "text": " to multiply each of the alphas with the weights and with that we're going to get" }, { "start": 2417.4, "end": 2423.8, "text": " this subnet mask right here so we need to know what this self dot stacked is so" }, { "start": 2423.8, "end": 2429.0800000000004, "text": " this self dot stacked is getting right here in this cache mask or simply" }, { "start": 2429.0800000000004, "end": 2435.0800000000004, "text": " stacking this this get subnet for all of the things so our plan is going to be" }, { "start": 2435.0800000000004, "end": 2440.32, "text": " that this subnet right here is going to be the actual weights of the neural" }, { "start": 2440.32, "end": 2447.2000000000003, "text": " network okay and not just the not just the mask and then we don't need to" }, { "start": 2447.2000000000003, "end": 2451.6400000000003, "text": " actually multiply it with the weight we can just just forget about the weight" }, { "start": 2451.64, "end": 2458.8799999999997, "text": " honestly and just train the subnet so for the subnet as you can see here you" }, { "start": 2458.8799999999997, "end": 2464.7599999999998, "text": " have this get subnet thing and that's an autograd function which basically means" }, { "start": 2464.7599999999998, "end": 2468.64, "text": " in the forward pass you want to discretize it and in the backward pass" }, { "start": 2468.64, "end": 2473.56, "text": " this is a straight through estimator so our first task is going and this here" }, { "start": 2473.56, "end": 
2478.8399999999997, "text": " should be done now my laptop has stopped breathing so we've trained five tasks" }, { "start": 2478.84, "end": 2485, "text": " and now we can run inference on that so this is when the task is given real" }, { "start": 2485, "end": 2493.84, "text": " quick you can see task one 92 percent 92 percent 92 percent 92 percent so we have" }, { "start": 2493.84, "end": 2501.04, "text": " a an overall performance of 92.44 percent then when the task is not given" }, { "start": 2501.04, "end": 2507.32, "text": " we have two things to evaluate whether or not basically how good we are overall" }, { "start": 2507.32, "end": 2511.92, "text": " and whether or not we get the tasks correct of course the tasks are at this" }, { "start": 2511.92, "end": 2518.2000000000003, "text": " pre requirement so we have a hundred percent task inference accuracy okay so" }, { "start": 2518.2000000000003, "end": 2522.76, "text": " we don't we don't okay we can we could evaluate this here but you can already" }, { "start": 2522.76, "end": 2527.52, "text": " see the output from last time there is like no difference from the performance" }, { "start": 2527.52, "end": 2532.7200000000003, "text": " of the when the task is given it's always being able to infer the task we" }, { "start": 2532.72, "end": 2537.3599999999997, "text": " want to check out the same thing so we want to change first of all this get" }, { "start": 2537.3599999999997, "end": 2542.3199999999997, "text": " subnet this is where it's these scores are discretized now given that these" }, { "start": 2542.3199999999997, "end": 2547.2799999999997, "text": " scores are going to be and to end up being our actual weights we want we" }, { "start": 2547.2799999999997, "end": 2550.9199999999996, "text": " don't do that we simply return the scores now this is a this is pretty" }, { "start": 2550.9199999999996, "end": 2558.3999999999996, "text": " pointless right now but we'll keep it just to be as close as possible to the" }, { "start": 2558.4, "end": 2569.12, "text": " to that now mask in it this is where we initialize the mask now this is climbing" }, { "start": 2569.12, "end": 2575.2400000000002, "text": " uniform and it has some thing but we want probably we want to train the" }, { "start": 2575.2400000000002, "end": 2583.78, "text": " neural network to be initialized you know as we know it so let's try what what" }, { "start": 2583.78, "end": 2590.0800000000004, "text": " our other initialize function so in it dot what do we have here do we have" }, { "start": 2590.0800000000004, "end": 2599.6800000000003, "text": " what's usual I don't even know normal Savi that that sounds about right that" }, { "start": 2599.6800000000003, "end": 2609.28, "text": " sounds about right all right all right so scores and yeah let's try this this" }, { "start": 2609.28, "end": 2612.88, "text": " could this could I break everything right if you initialize wrongly you get" }, { "start": 2612.88, "end": 2623.96, "text": " like dumb results so okay signed constant yada yada yada where is that" }, { "start": 2623.96, "end": 2632.6, "text": " used huh okay that's also initializing something so we calculate the gain and" }, { "start": 2632.6, "end": 2641.2400000000002, "text": " then okay this doesn't seem good we'll just keep it hey why not" }, { "start": 2641.24, "end": 2651.04, "text": " why not why not just keep it at that all right so cool oh yeah this is for the" }, { "start": 2651.04, "end": 2655.4799999999996, "text": " weight anyway we won't use the 
weight at all of this layer we'll just use our own" }, { "start": 2655.4799999999996, "end": 2661.52, "text": " weights so here we have these stacked okay we get the scores that's all good" }, { "start": 2661.52, "end": 2667.9599999999996, "text": " like I'm pretty happy with that I'm pretty happy with this mask in it that" }, { "start": 2667.9599999999996, "end": 2670.8799999999997, "text": " we make our parameters so these are going to be our different neural" }, { "start": 2670.88, "end": 2678.6, "text": " networks that we train this all looks good the alphas look good now the only" }, { "start": 2678.6, "end": 2686.52, "text": " thing we want to do honestly is just to have not the weight times the subnet" }, { "start": 2686.52, "end": 2695.84, "text": " here but the subnet as such like this is this it do we now train actual neural" }, { "start": 2695.84, "end": 2704.56, "text": " networks I I have my doubts honestly like there should be no this should be" }, { "start": 2704.56, "end": 2716.2000000000003, "text": " it hmm yeah yeah let's just try it like we're gonna get a mistake somewhere like" }, { "start": 2716.2, "end": 2726.8399999999997, "text": " crash nope nope okay all right actually training so for real like these scores" }, { "start": 2726.8399999999997, "end": 2733.4399999999996, "text": " right here the fact what made them a mask is that we discretize them right" }, { "start": 2733.4399999999996, "end": 2737.3599999999997, "text": " here so we made them into a mask right here we're not doing that anymore so" }, { "start": 2737.3599999999997, "end": 2740.96, "text": " we're just training floats and then we're also not multiplying it by the" }, { "start": 2740.96, "end": 2745.9199999999996, "text": " weight we are just using those floats which means that we are using the" }, { "start": 2745.92, "end": 2751.8, "text": " basically a neural network and then here the bias I was worried about the bias" }, { "start": 2751.8, "end": 2757.96, "text": " but the bias is always zero as you can see here so the bias is always false" }, { "start": 2757.96, "end": 2763.44, "text": " yeah so we're training five different neural networks for five different tasks" }, { "start": 2763.44, "end": 2770.48, "text": " and you know according to my hypothesis these masked things are just kind of" }, { "start": 2770.48, "end": 2778.4, "text": " crude quantized ways are of training neural networks and if if my hypothesis" }, { "start": 2778.4, "end": 2783.8, "text": " is correct this here is going to turn out probably even better than this" }, { "start": 2783.8, "end": 2789.44, "text": " masked thing okay so last task training right here" }, { "start": 2789.44, "end": 2798.2, "text": " I'm starting to breathe good laptop fast laptop very nice come on come on come" }, { "start": 2798.2, "end": 2808.6, "text": " on and we're done so again we have an average top one performance of 92 point" }, { "start": 2808.6, "end": 2815.08, "text": " is this even did I even oh no I ran this right here okay like that's the exact" }, { "start": 2815.08, "end": 2820.08, "text": " same number it was last time so we need to run inference again and if we're" }, { "start": 2820.08, "end": 2829.2, "text": " given the task ID then we are at 93.9% so we increase slightly which might just be" }, { "start": 2829.2, "end": 2835.72, "text": " due to the fact that we initialize terribly terribly okay so what does it" }, { "start": 2835.72, "end": 2840.18, "text": " say about our task inference accuracy maybe there's some mask here 
set model" }, { "start": 2840.18, "end": 2849.7599999999998, "text": " task the alphas are to one nope no we're good we're good task inference accuracy" }, { "start": 2849.76, "end": 2855.88, "text": " 100% and I'm going to guess well with the task inference accuracy being 100%" }, { "start": 2855.88, "end": 2861.48, "text": " I'm going to guess this here will give us the exact same number I like the 93" }, { "start": 2861.48, "end": 2870.0800000000004, "text": " point some percent so yeah 93.9% so I'm you know I'm going to say right here" }, { "start": 2870.0800000000004, "end": 2876.6000000000004, "text": " that the on the super masks and the superposition really are two separate" }, { "start": 2876.6, "end": 2885.56, "text": " ideas right you it's it's because the paper is like it sounds cool and all" }, { "start": 2885.56, "end": 2890.88, "text": " with the supermask and superposition but this inference using the superposition" }, { "start": 2890.88, "end": 2897.04, "text": " and then the entropy to decide is really one idea and training different super" }, { "start": 2897.04, "end": 2901.2799999999997, "text": " math the the advantage in using supermask is of course that the model is" }, { "start": 2901.28, "end": 2910.5600000000004, "text": " way smaller so you can remember it much more easily but also you know that it's" }, { "start": 2910.5600000000004, "end": 2913.76, "text": " really different if there's there's nothing to do with the superposition" }, { "start": 2913.76, "end": 2920.28, "text": " yeah all right so I'm going I'm going to guess this also works for you know 200" }, { "start": 2920.28, "end": 2927.88, "text": " tasks and whatnot the higher order of tasks so I think that's it and we're" }, { "start": 2927.88, "end": 2931.56, "text": " done here yeah" } ]
-buULmf7dec
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Decision Transformer: Reinforcement Learning via Sequence Modeling (Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "decisiontransformer", "decision transformer", "berkeley", "uc berkeley", "facebook ai language", "fair", "deep learning tutorial", "what is deep learning", "introduction to deep learning", "transformers for reinforcement learning", "transformers for rl", "transformer reinforcement learning", "sequence modeling", "sequence modelling", "sequence modeling reinforcement learning", "reinforcement learning with transformers" ]
#decisiontransformer #reinforcementlearning #transformer Proper credit assignment over long timespans is a fundamental problem in reinforcement learning. Even methods designed to combat this problem, such as TD-learning, quickly reach their limits when rewards are sparse or noisy. This paper reframes offline reinforcement learning as a pure sequence modeling problem, with the actions being sampled conditioned on the given history and desired future rewards. This allows the authors to use recent advances in sequence modeling using Transformers and achieve competitive results in Offline RL benchmarks. OUTLINE: 0:00 - Intro & Overview 4:15 - Offline Reinforcement Learning 10:10 - Transformers in RL 14:25 - Value Functions and Temporal Difference Learning 20:25 - Sequence Modeling and Reward-to-go 27:20 - Why this is ideal for offline RL 31:30 - The context length problem 34:35 - Toy example: Shortest path from random walks 41:00 - Discount factors 45:50 - Experimental Results 49:25 - Do you need to know the best possible reward? 52:15 - Key-to-door toy experiment 56:00 - Comments & Conclusion Paper: https://arxiv.org/abs/2106.01345 Website: https://sites.google.com/berkeley.edu/decision-transformer Code: https://github.com/kzl/decision-transformer Trajectory Transformer: https://trajectory-transformer.github.io/ Upside-Down RL: https://arxiv.org/abs/1912.02875 Abstract: We present a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks. Authors: Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there! Today we're going to look at Decision Transformer: Reinforcement Learning via Sequence Modeling by Lili Chen, Kevin Lu and others of UC Berkeley, Facebook AI Research and Google Brain. On a high level, this paper ditches pretty much anything and everything of reinforcement learning in an offline RL setting and substitutes it with simple sequence modeling, using transformers of course. Through that they're able to achieve some pretty compelling results in the things they test. At least they're able to keep up and be on par with the current best frameworks for doing offline reinforcement learning. So we're going to look at this paper, at what it does in terms of sequence modeling, and at how this looks. The key ingredient here, besides the transformer, is going to be the fact that instead of maximizing the reward, we're going to condition on the desired reward, and through that we can influence what the model is going to do in the future. This allows more effective offline reinforcement learning and turns the offline RL problem pretty straightforwardly into a sequence modeling problem. I do have a little bit of trouble with the paper in various aspects, but I'm sure we'll come to that. I'm just warning you, this might be a bit of a rant mixed with explaining the paper, though the paper is pretty cool, so don't get me wrong on that. That being said, there is concurrent work, also out of Berkeley as I understand it, called the Trajectory Transformer ("Reinforcement Learning as One Big Sequence Modeling Problem"), which uses sequence modeling in a bit of a different way: they use the sequence model as sort of a world model and then run beam search in order to find good trajectories in it. So it's a little bit of a different approach, and just from skimming that paper, I think it might be a bit more of an approach that I would subscribe to, but I guess we'll see what happens going forward. And, oh wait, why did this show up? "Reinforcement Learning Upside Down" by Schmidhuber. This must just have gotten in here by accident. Sorry. Let's go back to this paper. They say: we introduce a framework that abstracts reinforcement learning as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the transformer architecture and associated advances in language modeling such as the GPT line and BERT. In particular, we present the Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches that fit value functions or compute policy gradients, the Decision Transformer simply outputs the optimal actions by leveraging a causally masked transformer. So, as I said, they ditch things like policy gradients and value functions; none of that, we're simply going to do sequence modeling right here. By conditioning an autoregressive model on the desired return, past states and actions, their Decision Transformer model can generate future actions that achieve the desired return. So a key concept here is going to be this desired return. There are multiple ingredients to this paper, and there's a lot to unpack right here. Lastly, they say it matches or exceeds the performance of state-of-the-art model-free offline RL baselines. Again, this is zooming down into a problem: we are in the world of model-free and offline reinforcement learning algorithms. As I said, there's a lot to unpack here.
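To make that "condition on the return" idea concrete before we unpack the setting: the model consumes an interleaved token sequence of returns-to-go, states and actions, and is trained to predict the next action autoregressively. Here is a minimal sketch of such a model in PyTorch; all module names, shapes and hyperparameters are my own assumptions for illustration, not the authors' actual implementation.

import torch
import torch.nn as nn

# Minimal decision-transformer-style model (a sketch, not the paper's code).
# Each timestep t contributes three tokens: return-to-go R_t, state s_t, action a_t.
class DecisionTransformerSketch(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128, n_layers=3, n_heads=4, max_tokens=60):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)            # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)  # state token
        self.embed_action = nn.Linear(act_dim, d_model)   # action token
        self.pos = nn.Parameter(torch.zeros(1, max_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        # interleave per timestep as (R_t, s_t, a_t)
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1) + self.pos[:, : 3 * T]
        # causal mask: each token may only attend to earlier tokens
        causal = torch.triu(torch.ones(3 * T, 3 * T, dtype=torch.bool), diagonal=1)
        h = self.backbone(tokens, mask=causal)
        # read the action prediction off each state token (positions 1, 4, 7, ...)
        return self.predict_action(h[:, 1::3])

Training is then just a regression or cross-entropy loss of these predictions against the logged actions. At test time you would seed the sequence with whatever return you desire and subtract the rewards you actually receive as you go; that's the conditioning trick the video gets to later.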
So first of all, what is offline reinforcement learning? This is contrasted with online reinforcement learning. In online reinforcement learning you have an agent and an environment; the agent gets to perform actions in the environment, and the environment responds with a reward and an observation (sometimes it is the actual state, if the environment is not partially observable). So the agent actively gets to interact with the environment and try out things, and its goal is to maximize that reward. In offline reinforcement learning it's a different situation: what your agent gets is not an environment but a data set, and this data set contains lots of experience from other agents. You simply get to observe what a different agent has done, so there are going to be a lot of episodes in here, what happened in the past to this other agent, and purely by observing that other agent you somehow have to learn a good policy that achieves a good reward. This is different because you cannot go out and test your hypotheses in this world. You can't have a good idea and say, well, I'm gonna try that; you can't do targeted exploration and so on. You simply get to look at a bunch of trajectories and then decide what you want to do. So we need a bunch of different approaches here, and there are two main ones that they compare to. One they call BC, which is behavior cloning, where you simply try to mimic the agent that you observe, in the episodes where it got good rewards. That's how you maximize the reward: you say, well, that agent there got a good reward, so I'm just gonna try to clone that behavior, hence the name "behavior cloning". I'm butchering the explanation, but roughly that's what it's supposed to do. The other approach is to view this as a more traditional reinforcement learning problem and do Q learning. In Q learning you are in a state and you have maybe three actions at your disposal, and after each of those you again have three actions at your disposal, so you get this sort of tree that you could traverse. You're in the first state, and you ask your Q function: how much is this action worth? Maybe the Q function says five. How much is this one worth? Six. And how much is this one worth? Four. The Q function is supposed to tell you: if you take this action, and after that action you follow the policy (i.e. after that action you again ask the Q function for the Q value), what's the total reward you're going to get? Q learning is a very classic reinforcement learning algorithm, and you can actually do Q learning from a data set like this; it doesn't need to be you yourself who makes the experience. The thing about Q learning is that it can be done from offline data, unlike policy gradients: with policy gradients you need a correction, and they usually don't work with completely offline data. They might, I'm not super informed on this, but Q learning is possible from offline data, and apparently a currently good baseline is conservative Q learning, which you're going to see in this paper, and which fixes the bug, let's say the tendency, of these Q functions in the offline setting to overestimate the Q value.
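To pin down what such a Q learning update looks like on logged data: for every stored transition (s, a, r, s', done) you regress Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a'). Here is a generic deep-Q sketch with naming of my own choosing; this is plain Q learning, not the CQL variant the paper compares against (CQL additionally penalizes Q values on actions that don't appear in the data set, which is the "pessimism" just mentioned).

import torch
import torch.nn.functional as F

def q_learning_step(q_net, target_net, batch, optimizer, gamma=0.99):
    # One update from a batch of logged transitions; note that nothing here
    # requires the data to come from our own policy, which is why Q learning
    # is applicable to the offline setting.
    s, a, r, s_next, done = batch  # a: LongTensor of action indices, done: 0/1 floats
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) for the taken actions
    with torch.no_grad():
        # bootstrapped target: reward plus discounted best next-state value
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    loss = F.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Behavior cloning, by contrast, is just supervised learning: a cross-entropy loss between the network's output and the logged action, with no bootstrapping at all.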
So these are the two baselines we're going to compare to. You'll notice behavior cloning has some kind of relation to inverse reinforcement learning; not really, but it's one approach, and Q-learning is the other. Here, we're just going to do sequence modeling. What does this mean? The key concept, as I said, is going to be conditioning on the reward. Sorry, so that was offline RL. Now, people have pointed out problems with the approach here, and some of those problems are simply problems of offline reinforcement learning in general. For example: which data set do you use? It turns out that in their experiments they use a benchmark data set generated by a DQN learner, an active reinforcement learner, so naturally you're going to get some good episodes out of that. It's more like learning from expert demonstrations than from random demonstrations. So it's crucially important which data set you use, but that's a fault of the offline RL setting itself rather than of this particular algorithm. I just want to point that out; keep in mind that the data set they use for their main experiments comes from a rather high-performing agent in this world. So that's that. The second thing is their use of a transformer. Is the transformer crucial to this algorithm? The answer is no. Whenever the transformer comes to mind here, this could be any sequence modeling algorithm. Transformers are trendy, but this could be an LSTM that does autoregressive sequence modeling; anything that does autoregressive sequence modeling is going to be good for this task. The core point: this is a sequence model, not an RL model. In fact, transformers for RL have been a thing. Usually people use LSTMs as a backbone for reinforcement learning algorithms, and using transformers has several advantages in offline and/or online RL. So usually you have some sort of state, your history of states, actions and rewards: state, action, reward; state, action, reward; state, action, reward; whatever you did in the past. An LSTM takes that in and propagates its hidden state through time. (I realize some of you youngsters might not actually know what an LSTM is: it's a recurrent neural network that processes one time step at a time.) At the end, you're supposed to output whatever the next action is going to be: you have your history of actions, you output the next action, you get back a state and a reward along with it, and you incorporate that into choosing the following action. You can train this thing in any way you like: Q-learning, policy gradients, whatnot. If it's Q-learning you don't output an action directly, you output Q-values, which is a minor modification to the architecture. What you have to do, and that's the difficulty in reinforcement learning in general, is somehow make a connection between the rewards you get and the things you predicted.
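As a minimal PyTorch sketch of that classic LSTM setup (illustrative only; the dimensions and names are mine, and a real agent would add a frame encoder, exploration, and so on):

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """An LSTM that consumes the (state, action, reward) history one step
    at a time and outputs logits (or Q-values) for the next action."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(obs_dim + act_dim + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, states, actions, rewards):
        # states: (B, T, obs_dim), actions: (B, T, act_dim) one-hot,
        # rewards: (B, T, 1); concatenated per time step.
        x = torch.cat([states, actions, rewards], dim=-1)
        h, _ = self.rnn(x)
        # Read out at the last time step: the next action's logits.
        return self.head(h[:, -1])
```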
Let's say this action gets you a reward. You predicted several actions: an action here and an action here. Now, just because you got a reward from this action doesn't mean this action was the smart one. If you're in a chess game, it's not the actual last move that is the good move, even though that move gets you all the reward; the crucial move might have happened twenty moves before. So the underlying reinforcement learning problem is to assign that reward to whichever action was actually the smart action, so that in the future you can take it more often. Maybe this action right here was the smart one; you need a way to figure that out. Backpropagation through time will do this, but in an LSTM you need to backpropagate through one, two, maybe three computation steps to reach it. That's three steps here, but think of the good action being 50 steps ago, or 500 steps ago; this quickly gets tricky. Normally we can't unroll LSTMs like this for more than a couple of dozen steps. So what people do is use what's called dynamic programming, and that is the thing that the sequence modeling approach here is going to ditch; this is one of the fundamental points. Instead of just learning from the reward and assigning it to an action, along with the actions you also output a value, and the value tells you roughly how well you're doing. (The Q-function is in a way already a value, so if you're doing Q-learning you get this automatically.) The way you learn this is called temporal difference learning. Say this here is the final stage of the game, where you always get a reward: maybe plus one here, minus five there. Instead of backpropagating only that reward back, at every step you predict a value. Obviously the last value is going to equal the reward itself, but earlier on, your value is your expected future reward, assuming you take the good actions you're going to take.
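Written out, these are the standard textbook objects; nothing here is specific to this paper (gamma is the discount factor, which will come up again later):

```latex
V^{\pi}(s_t) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{k \ge 0} \gamma^{k}\, r_{t+k}\right],
\qquad
\underbrace{V(s_t) \;\leftarrow\; r_t + \gamma\, V(s_{t+1})}_{\text{one-step TD target}}
```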
So here your value might be, maybe... actually no: you're probably going to take the action that gives you the good reward, so it's maybe plus 0.9, because you're fairly sure you'll take that good action. And down here, maybe you get five reward from going there... no wait, that's the Q-value; so here your value is maybe plus 0.7. It doesn't really matter what the numbers are. What matters is that your learning signal no longer comes only from the reward itself: you're trying to predict the reward, but you're also trying to predict the output of your own function one, two or three steps into the future. If you've done an episode and got a reward at the end, your value function could try to just output that reward, but that's really noisy. So instead you say: well, I have predicted a value here, and here, and here, so why don't I train my value function to also predict those? And by "predict" I mean: if I was at this value and this transition got me some reward, then this value should equal that one minus the reward, because that's how the value is supposed to work. You're trying to predict the output of your own value function. This also works with the Q-function; that's the famous Bellman recurrence relation, where the Q-value of a state is the reward you get from performing an action according to the policy in that state, plus the Q-value at the state you reach, again under the same policy, with the reward being the result of performing that action. This fundamental relation is the basis of Q-learning, and learning it this way is called temporal difference learning, what they call TD. All of this is based on concepts of dynamic programming, and all of it gets ditched here, which is exactly why it's important to go through it: so you understand what we're not doing. Why do we need all of this, the Q-functions and the temporal difference learning? Because it's really hard to do credit assignment over long stretches of time, and we saw that's the case with an LSTM, especially if we can't backpropagate all the way through it. What does a transformer do? It uses attention to look at a sequence as a whole: through the attention mechanism it can route information from any sequence element to any other sequence element in a single step. So technically it could do this credit assignment in a single step, if (and that's a big if) everything fits into its context. That, I think, is one of the crucial criticisms of this paper: there's a trade-off. You're able to do the credit assignment in one step, but as soon as you'd like to capture correlations and do credit assignment across spans longer than the context, you need to resort back to something like the dynamic programming approaches they say they can ditch. Now, they don't claim this only because their context is long.
It's their argument for how the transformer benefits over something like an LSTM here: you can do the credit assignment in one step across the context. But that statement always carries an "if": if the credit assignment needs to happen over more than one context, if the relevant action for the reward is further away, the transformer is out of luck, because it doesn't fit into the context, and we'd need to go back to something like dynamic programming. There is a second reason, though, and that's the sequence modeling approach itself; that's what I see at the core of this. The causal transformer: cool, it's a transformer, but we could use any other sequence model. Viewing RL as a sequence modeling problem is the genuinely different thing. So what does this model do? Here's the input: the history, meaning the returns of the past (disregard the little hat on the R for now), the states of the past, and the actions of the past, extending backwards in time. You would get that in any other reinforcement learning algorithm too, together with the current state. The current state goes through a little encoder; they use the DQN encoder, a small convolutional neural network, so the model can technically handle complex states by encoding them into a latent space. There's no attention within the state itself; the attention happens over the sequence. From this, classic RL algorithms would try to predict an action that maximizes future reward. What this paper does differently: instead of "give me an action that maximizes future reward", you tell the system what reward you would like, and it's supposed to give you an action that achieves exactly the reward you asked for. I ask it for a reward, and it gives me the action that corresponds to achieving that reward in the future. That's different. And I can still do reward maximization by simply putting a high number there: 21 is the maximum in Pong, which is the game here, so I say "I want to achieve 21 reward, please give me an action that achieves 21 reward", and that corresponds to getting as much reward as possible. Notice that you do need to know the maximum reward: it doesn't actually work if you just put in a billion billion billion, as their experiments kind of indicate. That's a drawback of this approach.
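As a rough sketch of what I just described (my own simplification, not the authors' code; the real model also embeds timesteps, uses a convolutional encoder for pixel states, and a GPT backbone rather than a vanilla encoder stack):

```python
import torch
import torch.nn as nn

class DecisionTransformerSketch(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128, n_layers=3, n_heads=4):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)            # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)  # conv net for pixels
        self.embed_action = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim).
        B, T, _ = states.shape
        # Interleave into one sequence: R_1, s_1, a_1, R_2, s_2, a_2, ...
        tokens = torch.stack([self.embed_rtg(rtg),
                              self.embed_state(states),
                              self.embed_action(actions)], dim=2)
        tokens = tokens.reshape(B, 3 * T, -1)
        # Causal mask: each token may only attend to the past.
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T)
        h = self.backbone(tokens, mask=mask)
        # Predict the action a_t from the output at the state token s_t.
        return self.predict_action(h[:, 1::3])
```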
Now, back to that paper that slipped in here by accident: "Reinforcement Learning Upside Down" by Schmidhuber. Don't predict rewards, it says, just map them to actions. They say: we transform reinforcement learning into a form of supervised learning (which sounds like, you know, offline RL) by turning RL on its head. And did you look at this? The memes are strong in this one. Upside-down RL; I've actually made a video on upside-down RL. They say standard RL predicts rewards, while UDRL instead uses rewards as task-defining inputs, together with representations of time horizons and other computable functions of historic and desired future data. UDRL learns to interpret these input observations as commands, mapping them to actions through supervised learning on past, possibly accidental, experience. So of course this wasn't in here by accident: I knew this paper, and when I read the Decision Transformer it immediately sprang to mind. And as I see it, Schmidhuber wasn't entirely the first to do anything like this either; we've known about goal-conditioned reinforcement learning for a while, so this is not necessarily a new idea. They do reference Schmidhuber's paper, very briefly, stating that it's kind of a Markovian approach: here you have Markovian interfaces, there you have non-Markovian, partially observable interfaces. But the advantages Schmidhuber names are very much the same. For example, they repeatedly say they don't need discount factors, and here too you have no problems with discount factors. So I wanted to point out that the paper is referenced, and that essentially you have three components here: offline RL, plus a transformer, plus viewing the problem as a sequence modeling problem by conditioning on the reward. Why does it make sense to condition on the desired future reward? Well, first of all: why don't we do that in classic reinforcement learning? Why don't we say "I want this reward, please give me the action for it"? Because it's a lot more work. If I just want to maximize my reward, I need a function: here's my state, here's my neural network (maybe it's a policy gradient method), give me an action, and that action is supposed to maximize the reward. Now I need an additional input, the desired reward, and still get an action out. The network doesn't only need to remember what to do to perform well; it needs to distinguish what to do to perform well, what to do to perform a little worse, and what to do to perform terribly. That's a lot more stuff to remember. The hope, of course, is that with all the advances we've seen in sequence modeling, these transformers are capable of memorizing or learning all of those different behaviors. We know transformers are almost unlimited in their capacity to absorb data and learn stuff, so the hope is that these models can handle it. The neat thing, though, is that this technique naturally maps to offline reinforcement learning. Offline RL in general is a harder task than online RL, for the reasons I outlined, but this particular technique lends itself extremely well to it. What do I mean? Take one history from the data set: I was in this state, performed this action, got this reward; came to this state, performed this action, got this reward; and so on. What Q-learning tries to do with that is learn the Q-function (take state and action, conditioned on the history) and predict future rewards, trying to figure out what the agent should have done instead of what it did, in order to achieve higher rewards. It looks at the logged agent critically, like "mmm, you probably didn't do that part well", but it has no way to act in the world, no way to go out and try things itself.
This approach, instead, simply accepts the history. It says: oh well, you did these things and you got this reward. Okay, cool. And if you know anything about these sequence models and transformers, they can memorize stuff quite well, so going forward, think of what these transformers do as simply memorizing the training data set. I know that's not actually the case, but suppose you memorize the training data set, and you're in this situation: you see a history, you see a state, and the human tells you "I would like to get 21 reward". What the transformer can do is say: let me go into my training data, let me find some sequence where the agent was in the same kind of history, was in this same kind of state, and ended up getting about 21 reward out of the future actions. What did that agent do? It did this action. And it's reasonable to assume that if you're in the same kind of history and want the same reward that agent got, you should probably act the same way that agent did. It is a lot like behavior cloning, though behavior cloning, as I understand it, still focuses on getting high reward by taking what comes in as expert demonstrations, whereas here you just accept the history as it is. If you're in a new situation, the question to the sequence model is essentially: how would a sequence that evolves like this continue in the training data set? And it gives you the action of agents who were in a similar situation and ended up getting the reward you want. Just do the same thing, and you'll probably end up in the same place they did. That's the approach.
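As a caricature of that intuition (the real model generalizes and interpolates rather than literally looking things up, and `moments()` and `similarity` are hypothetical helpers of mine), the sequence model behaves a bit like this:

```python
def act_like_the_dataset(history, desired_return, dataset, similarity):
    """Find the logged moment whose context looks most like ours and whose
    future return best matches what we're asking for; copy its next action."""
    best_action, best_score = None, float("-inf")
    for trajectory in dataset:
        # moments() yields (context, return_to_go, next_action) per step.
        for context, rtg, next_action in trajectory.moments():
            score = similarity(context, history) - abs(rtg - desired_return)
            if score > best_score:
                best_action, best_score = next_action, score
    return best_action
```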
You can see how this is useful, though again only given that we ditch all of the RL mechanics, which they claim as a positive, and certainly it is one: you don't need to parse out what should have been done; you simply accept the history and say "I'm going to do the same kind of thing as agents that had the same kind of history and were in the same kind of situation". But now think back to the context-length problem. What if the future reward crucially depends on an action done way back here? You could have two agents with the exact same history as far back as the context reaches, but who did different actions before that. The sequence model has no chance of differentiating between the two; they look the same. Yet one agent ended up with a really nice reward and the other with a really bad one. Even worse, the data set might not contain an agent that ended up with the bad reward from that situation at all; but had you done Q-learning, you could maybe have figured it out from other trajectories. So as much as they tout the ability to ditch the whole machinery of reinforcement learning, you run into the same problem: none of this alleviates the issue that, if you want to go beyond how far you can backprop or attend, you need the dynamic programming approaches. I don't see a way around it; maybe I'm terribly wrong. Transformers are better at credit assignment over longer distances than LSTMs, yes, certainly, but that's true for online or offline RL regardless of whether you do sequence modeling; it doesn't alleviate the problem these approaches were trying to solve in the first place. That said, the sequence modeling approach is different and does bring a different view on the problem, and you can take it because there is hope that these transformers can actually absorb that much data and learn from it. And that, really, is already the entire technique; we're not even past the first page, and that's the thing. You get this data, and you can deterministically transform it into the format they want: desired future return (return-to-go), state, action. You can compute the future return because it's a data set: from each time step, you simply look into the future and sum up the rewards that come after it. So you can easily generate that training data, and then you use classic sequence modeling on it.
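That relabeling is a few lines of code. A sketch (mine, not theirs) of the undiscounted returns-to-go computation:

```python
def returns_to_go(rewards):
    """Return-to-go at step t = undiscounted sum of rewards from t onwards.
    This is all the 'RL' preprocessing the method needs: it turns logged
    episodes into plain supervised sequence-modeling targets."""
    rtg, running = [], 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return rtg[::-1]

# A failed four-step random walk with -1 reward per step:
assert returns_to_go([-1, -1, -1, -1]) == [-4.0, -3.0, -2.0, -1.0]
```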
Their idea is encapsulated in a toy example. They consider the task of finding the shortest path on a directed graph, which can be posed as an RL problem: the reward is zero when the agent is at the goal node and negative one otherwise. They train a GPT model to predict the next token in a sequence of returns-to-go (the sum of future rewards), states and actions. Training only on random-walk data, with no expert demonstrations, they can generate optimal trajectories at test time by adding a prior to generate the highest possible returns. They also say "see more details and empirical results in the appendix". I've looked at the appendix: nothing there. I've looked at the code: nothing there. Just saying. It is a toy example to illustrate, but there's nothing of it in the released material. So: there's a graph, there's a goal, you're supposed to find the shortest path, and you generate data by doing random walks. Some of these random walks fail, like this one, where the return is negative infinity; some succeed, and from those you can generate training data. From here, the future return is negative four for this particular random walk; here you start at a different location, also negative four, because you take four steps. Now, with the sequence modeling approach, you say: I want to start from this node, but I would like to get a return of negative three, a better return than those walks got. (By the way, I'm pretty sure this should say negative two to make their example compelling; I think there's a flaw in this toy example, but I hope you can still see what they're doing.) So you're asking for a high return, or a low-magnitude negative one, from here, which corresponds to finding a really short path. The model looks at its training data: was I ever in a similar situation, wanting exactly that return? Yes, here I was in a very similar situation. The history is a bit different, but who cares; now I'm here as well, and what did the agent do that then went on to reach exactly the return I want? It did this action. So I'll just do that same action. This just comes out of the sequence model: it simply tells you how a sequence that started like this would continue, and that gives you the action. And then comes the bit where the example creaks. Each step costs negative one, so at inference time, after you get the negative one from this step, you put negative two here: at the beginning you specify the return you want, and from there on you can calculate the next desired return at every step. They need this to be negative one right here; so imagine that for some reason you had negative two here. The sequence model says: was I ever in this situation with a desired return of negative one? Yes, I was here, and I went there. Okay, I'll go there, and now I'm at the goal, and technically you've found somewhat the shortest path. But again, the example doesn't quite work: if you start with negative three, you end up with negative two right here, which wouldn't match the blue walk; it would actually match this other one, and you would not get the shortest path. You should actually start out, as an oracle, knowing that the shortest path costs negative two. That wouldn't exactly match any example in your training data, but the sequence model could say: well, this is kind of close to this, so the most likely action is still the one right here; then you take it, you're in the negative-one regime, and you match this one. I hope you can see how that works out. This also handles not getting the expected reward, which can of course happen (not everything is deterministic), because you reassess after every step: you ask your training data again. And this is very much how we think of these big transformer language models: they sort of interpolate the training data set, they stitch together different pieces of it, which you can see happening right here. Of course, you already saw the flaw: you need to know what reward you would like to achieve.
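The evaluation loop, as I understand it, looks roughly like this. A sketch against the classic gym-style API, where `model.next_action` is a hypothetical helper of mine that wraps tokenization and the forward pass:

```python
def evaluate(model, env, target_return, context_len):
    """Condition on a desired return, then after every step subtract the
    reward actually received and re-query the sequence model."""
    state = env.reset()
    rtgs, states, actions = [target_return], [state], []
    total_reward, done = 0.0, False
    while not done:
        action = model.next_action(rtgs[-context_len:],
                                   states[-context_len:],
                                   actions[-context_len:])
        state, reward, done, _ = env.step(action)
        actions.append(action)
        states.append(state)
        rtgs.append(rtgs[-1] - reward)  # decrement the desired return
        total_reward += reward
    return total_reward
```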
By the way, the LaTeX is beautiful, isn't it? Maybe that's just my thing; I don't recall it looking like this. Also, the code is available, and so is pseudocode: big props. You can see that the Decision Transformer, in blue, lags a bit behind what they call TD learning in Atari, that TD learning being conservative Q-learning (CQL), and behavior cloning, which they term BC. In the OpenAI Gym it outperforms them a little bit, and then there's this key-to-door task that we're going to get into in just a bit. I just want to quickly mention that their primary comparison is CQL, and they make a big deal about not needing discount factors. I'm not really sure what they mean, because there are usually two different discount factors in these algorithms. One of them is usually found in the objective formulation. Here they say: what we want to do is maximize the expected return, this quantity right here; you maximize your expected future return over the episode. This is usually formulated differently: as the expected future return discounted by a discount factor raised to the power of the time step, so that future rewards are less valuable than current rewards. That gives you some sort of stability, but it also gets you short-sightedness. However, this is a choice, a choice of problem formulation. People often train with the discounted version for stability reasons and then still test and report the undiscounted return at the end. I'm just saying it's a choice, and their choice here differs from CQL: CQL explicitly maximizes the discounted future return, while they maximize the undiscounted one. I just want to point out that there is an actual difference here. (By the way, if you don't discount your returns, you can get cycling: if certain transitions give zero reward and an agent is about to lose, say losing is minus one, it can just circle forever, because circling costs nothing and moving on would lose. Chess has built-in protection against this, but in other settings the agent will just go in circles. Actually, that's not quite why you discount; it's a bad example, and you'd usually implement a small per-step penalty like minus 0.1 instead. But there are good reasons to discount future rewards; even with a win available, an undiscounted agent could still go in circles, because it can still win later.) In any case, that's one discount factor. The other discount factor is in the TD learning, and that's a different one. You say: I'm going to predict this next step; that's probably a pretty accurate target, and that reward is quite a good signal given that I'm in this step. The one after that is a bit noisier, because it's two steps ahead: I could be doing different actions, or the transition might be stochastic. So when you learn your value function from all of these targets (you have that recurrence relation), you weight the one-step target the highest and the two-step target a bit less: the value should match this reward plus that value, and it should also match these two rewards plus that further value, but the second target is less certain. In TD learning you classically have another discount factor, lambda, that discounts these further-out targets. They say "we don't need the discount factor", and I don't know which of the two they're referring to. What I want to point out here is that the objective is genuinely different.
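Side by side, the two objective choices look like this (standard formulations; T is the episode horizon):

```latex
J_{\text{undiscounted}}(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} r_t\right]
\qquad\text{vs.}\qquad
J_{\text{discounted}}(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} \gamma^{t}\, r_t\right],
\quad \gamma \in (0,1)
```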
So maybe they're saying they can get by with the undiscounted objective. Fine, but I see that as a choice of the modeler, and you run into problems with some environments if you don't have a discount factor. In any case, you can see in the experiments (this is Atari) that the Decision Transformer outperforms CQL in some respects and trails it in others, and these standard deviations are quite high. In the OpenAI Gym it looks a bit better: it does outperform CQL on quite a number of tasks, and with smaller standard deviations. They also compare against a behavior cloning variant where you retroactively train only on the best such-and-such percent of the experience, and they find that if you hit the correct percentage (which is not necessarily only the very best trajectories), behavior cloning can sometimes actually give you better performance. However, hitting that percentage requires another hyperparameter search: as an oracle, you'd have to go and filter, try things out, have some sort of validation set, whereas the Decision Transformer is just one run. Now, throughout all of this they tout not needing as many searches; here you need to choose that percentage, you need to figure it out. But look at their actual hyperparameter configurations: they have one architecture for these Atari games but a different one for Pong; one context length for these Atari games but a longer one for Pong, because Pong is a rather sparse-reward-ish game compared to the others, so they make the context length bigger to capture a longer history, because otherwise the model couldn't differentiate the agents and they would need TD or some kind of dynamic programming after all. And then there's the return-to-go conditioning, how much reward you ask for. Here, again, they do something that bugs me: they look at the baseline, at what CQL achieved, and then choose to condition on a multiple of that. You look at your competitor, at what you're being compared against, and base your decisions on its results. And the multiplier itself is very much informed by them knowing the games: in Pong you can reach at most 21, so they condition on a return of 20; in Seaquest, where I think the return is essentially unbounded, they use 1.5 times the baseline's performance. I'm not saying these are invalid experiments, but this looking at your competitor and basing crucial hyperparameters on its performance... I'm sure it would work otherwise too, but just know that you need a good idea of what reward you can even achieve, and of what's possible given your data set. (CQL, to be fair, also learns from the same data set, and that's sort of how they know what's possible.) So, is it a problem that you need to know the reward? Can't you just put in a hundred billion billion billion? The answer is no, as you see right here: this orange line is the highest return observed in the data set. It's gamer-normalized, which is why it's not 21.
But here is an actually pretty cool experiment: since you're not only maximizing reward, you can ask the model for any return you want. The green line is what you asked for, and if the blue line, what you achieved, matches the green line exactly, the model always gives you the actions that make the requested return happen. You can see the green and blue lines match pretty accurately for a long stretch, which means the sequence modeling approach can give you not only the maximum return but more or less any return, because it remembers all the sequences; though probably not the lowest ones, since you're learning from a DQN learner that mostly produced good trajectories. But as soon as you go past the highest observed return, the achieved return doesn't just stay flat, it actually drops down again, and you can see that pattern pretty much anywhere there's an orange line like this: here it maybe stays flat, here it drops. Only in Seaquest does it look a bit better, though that's a gamer-normalized score of 3, where an actual gamer would be at 100, and you can still see the drop relative to the green line. So you can't just put in a hundred billion; you need to know the return you're going for. Sometimes that's no problem, sometimes it's an actual problem. And that return is dependent not only on the game but also on how the data set you learn from is structured: you need to know what your agent can achieve. They do some other ablations with respect to context length, and they actually find that a larger context helps: if you don't provide a long context, performance drops. That makes sense in that the transformer can match the history to observed trajectories better. On the other hand, these Atari environments are technically fully observable if you do frame stacking, so technically an RL agent shouldn't care about more of the past. But RL algorithms do; they're not perfect. The last thing is the key-to-door experiment. This is a toy setting, and by the way, again, I did not find it in the appendix and did not find code for it, so we actually don't know too much about this experiment. But as far as I understand, there are three rooms: in the first room there's a key, in the last room there's a door. You're thrown into the first room and get to walk around a bit; then you're thrown into the second room, where you walk for a variable length of time; then you're thrown into the last room. If you've taken the key and you reach the door, you get a good reward; otherwise you fail. The middle room is called a distractor, because if you have something like an LSTM, or something like Q-learning, the problem with Q = R + Q is that it only looks one step ahead. That recurrence relation means that if you have a learning signal somewhere way down the line, you need to propagate it (not backprop, but learning-step by learning-step propagate the fact that there's a signal back here) all the way through the time steps in between, whereas a transformer can just attend straight across. This experiment is designed to show that this really helps.
You can see right here that they analyze what the system says about the expected reward in the future: you can always ask it how probable a given return is. Whenever the agent doesn't pick up the key, then as soon as it gets into the second room it already knows it's lost, no matter what happens in the last room. If it does pick up the key, in these two situations it estimates a future return of about 0.5, and crucially, that estimate does not degrade across the distractor room, no matter how long the distractor room is. That's the key difference between this and, say, TD or Q-learning approaches: it doesn't forget, because there's no dynamic programming involved. And in the last room, if it reaches the door it obviously assigns a high value; if it doesn't, it changes its mind. Now, I would have liked to see (and this is why I was keen on seeing the parameters of this experiment) whether this whole episode is inside or outside the context length of the transformer they used. I'm going to guess it's still inside, because as soon as it's outside the context length, the sequence model has no way of knowing whether this particular agent picked up the key, so it cannot predict anything. I think what they want to show right here (sorry, that's an alarm) is that the attention weighs heavily on those frames where the agent picks up the key or reaches the door, which is fine; we can accept that transformers learn that. But I'd really like to see what happens if you go outside of that, because if you do, you're going to revert back to the old methods. Ultimately, the transformer gives you a longer window in which you can do one-step assignment of credit, but as soon as you exceed it, just as with the LSTM, you need the classic approaches. I feel the paper is a little bit shady about the fact that they get a constant-factor longer context with what they're doing, but that it doesn't really solve the problem. That's how I see it; I might be wrong, please tell me if I am, and read the paper for yourself. It is a good paper. I hope we can cover the Trajectory Transformer in the future, and with that, I wish you all the best. Bye bye!
[ { "start": 0, "end": 5.46, "text": " Hello there! Today we're going to look at Decision Transformer reinforcement" }, { "start": 5.46, "end": 11.68, "text": " learning via sequence modeling by Lily Chen, Kevin Lu and others of UC Berkeley," }, { "start": 11.68, "end": 17.88, "text": " Facebook AI Research and Google Brain. On a high level this paper ditches pretty" }, { "start": 17.88, "end": 22.62, "text": " much anything and everything of reinforcement learning in an offline RL" }, { "start": 22.62, "end": 28.86, "text": " setting and substitutes it for simple sequence modeling using transformers of" }, { "start": 28.86, "end": 34.48, "text": " course. And through that they're able to achieve some pretty compelling results" }, { "start": 34.48, "end": 40.84, "text": " in the things they test. At least they're able to keep up and be on par with the" }, { "start": 40.84, "end": 46.1, "text": " current best frameworks for doing offline reinforcement learning. So we're" }, { "start": 46.1, "end": 51.8, "text": " going to look at this paper and at what it does in terms of" }, { "start": 51.8, "end": 56.94, "text": " sequence modeling and how this looks. The key ingredient here besides the" }, { "start": 56.94, "end": 61.559999999999995, "text": " transformer is going to be the fact that we are instead of maximizing the reward" }, { "start": 61.559999999999995, "end": 68.72, "text": " we're going to condition on the desired reward and through that we can sort" }, { "start": 68.72, "end": 72.16, "text": " of influence what the model is going to do in the future. This allows more" }, { "start": 72.16, "end": 77.2, "text": " effective offline reinforcement learning and makes the offline RL problem pretty" }, { "start": 77.2, "end": 82.56, "text": " straightforward into a sequence modeling problem. I do have a little bit of" }, { "start": 82.56, "end": 87.44, "text": " troubles with the paper in various aspects but I'm sure we'll come to that." }, { "start": 87.44, "end": 93.04, "text": " But I'm just warning you this might be a bit of a rant mixed with explaining the" }, { "start": 93.04, "end": 97.8, "text": " paper. Though the paper is pretty cool so don't get me wrong on that. That" }, { "start": 97.8, "end": 104.80000000000001, "text": " being said there is concurrent work also out of Berkeley as I understand it, where" }, { "start": 104.80000000000001, "end": 110.2, "text": " this is called the trajectory transformer. Reinforcement learning is" }, { "start": 110.2, "end": 115.2, "text": " one big sequence modeling problem that uses the sequence modeling in a bit of a" }, { "start": 115.2, "end": 119.48, "text": " different way. So what they do is they use it as sort of a world model and then" }, { "start": 119.48, "end": 125.48, "text": " they use beam search in order to find good trajectories in that." }, { "start": 125.48, "end": 131.04, "text": " So it's a little bit of a different approach and I just from skimming this" }, { "start": 131.04, "end": 136.96, "text": " paper right here I think this one might be a bit more of an approach" }, { "start": 136.96, "end": 142.88, "text": " that I would subscribe to but I guess we'll see what happens going forward." }, { "start": 142.88, "end": 149.08, "text": " And oh wait why did this show up? Reinforcement learning upside down by" }, { "start": 149.08, "end": 154.84, "text": " Schmidt Huber. This must just have gotten in here by accident. Sorry. Let's" }, { "start": 154.84, "end": 161, "text": " go back to this paper. 
They say we introduce a framework that abstracts" }, { "start": 161, "end": 166.64000000000001, "text": " reinforcement learning as a sequence modeling problem. This allows us to draw" }, { "start": 166.64, "end": 170.95999999999998, "text": " upon the simplicity and scalability of the transformer architecture and" }, { "start": 170.95999999999998, "end": 175.88, "text": " associated advances in language modeling such as the GPT line and BERT." }, { "start": 175.88, "end": 180.64, "text": " In particular we present the decision transformer, an architecture that casts" }, { "start": 180.64, "end": 185.73999999999998, "text": " the problem of RL as conditional sequence modeling. Unlike prior approaches" }, { "start": 185.73999999999998, "end": 190.64, "text": " that fit value functions or compute policy gradients, decision" }, { "start": 190.64, "end": 195.83999999999997, "text": " transformers simply outputs the optimal actions by leveraging a causally masked" }, { "start": 195.84, "end": 203.68, "text": " transformer. So as I said they ditch things like policy gradients or value" }, { "start": 203.68, "end": 209.88, "text": " functions, none of that. We're simply going to do sequence modeling right here." }, { "start": 209.88, "end": 216.64000000000001, "text": " By conditioning on an autoregressive model on the desired return, past states" }, { "start": 216.64000000000001, "end": 220.28, "text": " and actions, our decision transformer model can generate future" }, { "start": 220.28, "end": 223.72, "text": " actions that achieve the desired return. So a key concept here is going to be" }, { "start": 223.72, "end": 230.44, "text": " this desired return thing and here as well. So there are multiple ingredients" }, { "start": 230.44, "end": 237.36, "text": " to this paper. There's a lot to unpack right here. And lastly they say it" }, { "start": 237.36, "end": 241.84, "text": " achieves, it matches or exceeds the performance of state-of-the-art model" }, { "start": 241.84, "end": 248.2, "text": " free offline RL baselines. Again this is sort of zooming down into a problem. So" }, { "start": 248.2, "end": 254.64, "text": " we are in the world of model free and offline reinforcement learning algorithms." }, { "start": 254.64, "end": 259.92, "text": " As I said there's a lot to unpack here. So first of all what is" }, { "start": 259.92, "end": 264.32, "text": " offline reinforcement learning? This is contrasted to online reinforcement" }, { "start": 264.32, "end": 268.48, "text": " learning. Online reinforcement learning is where you have an agent and an" }, { "start": 268.48, "end": 272.59999999999997, "text": " environment and the agent sort of gets to perform actions in the environment" }, { "start": 272.6, "end": 278.20000000000005, "text": " and the environment responds with a reward and a state or the not really a" }, { "start": 278.20000000000005, "end": 284.84000000000003, "text": " state but an observation. But sometimes it is the state if it's not a" }, { "start": 284.84000000000003, "end": 290.32000000000005, "text": " partially observable environment. So the agent actively gets to interact with the" }, { "start": 290.32000000000005, "end": 295.12, "text": " environment to try out things and its goal is going to be to maximize that" }, { "start": 295.12, "end": 302.32000000000005, "text": " reward. In offline reinforcement learning it's a different situation. 
So in offline" }, { "start": 302.32, "end": 308.56, "text": " reinforcement learning your agent is here and what you get is not an" }, { "start": 308.56, "end": 314.2, "text": " environment but what you get is a data set and this data set will contain" }, { "start": 314.2, "end": 322.92, "text": " lots of experience from other agents. So you would simply get to" }, { "start": 322.92, "end": 328, "text": " observe what a different agent has done. So there's going to be a lot of" }, { "start": 328, "end": 333.92, "text": " episodes in here. So what happened in the past to this other agent and purely by" }, { "start": 333.92, "end": 339.04, "text": " observing that other agent you somehow have to learn a good policy to achieve" }, { "start": 339.04, "end": 343.84, "text": " a good reward. This is different because you cannot go out and sort of test your" }, { "start": 343.84, "end": 349.28, "text": " hypotheses in this world. You cannot have a good idea and say well I'm gonna try" }, { "start": 349.28, "end": 354.88, "text": " that. You can't do sort of targeted exploration and so on. You simply get to" }, { "start": 354.88, "end": 361.68, "text": " look at a bunch of trajectories and then decide what you want to do. So we need a" }, { "start": 361.68, "end": 369.08, "text": " bunch of different approaches here and one that they compare to is..." }, { "start": 369.08, "end": 373.08, "text": " There are two that mainly that they compare to. One is called, they call it BC" }, { "start": 373.08, "end": 377.32, "text": " which is behavior cloning, where what you're trying to do is you simply try to" }, { "start": 377.32, "end": 384.4, "text": " mimic the agent that you observe in the events where it has led to two good" }, { "start": 384.4, "end": 388.79999999999995, "text": " rewards. So that's how you maximize the reward. You simply say well that agent" }, { "start": 388.79999999999995, "end": 393.2, "text": " there got a good reward so I'm just gonna try to sort of clone that" }, { "start": 393.2, "end": 397.59999999999997, "text": " behavior as behavior cloning from the name. I'm butchering the explanation but" }, { "start": 397.59999999999997, "end": 403.08, "text": " roughly that's what it's supposed to do. The other approach is you view this as a" }, { "start": 403.08, "end": 406.47999999999996, "text": " let's say more a traditional reinforcement learning problem where you" }, { "start": 406.47999999999996, "end": 413.84, "text": " do Q learning. So in Q learning what you do is you are in a state and you have" }, { "start": 413.84, "end": 419.15999999999997, "text": " maybe like three actions at your disposal and every time you again have" }, { "start": 419.15999999999997, "end": 425.03999999999996, "text": " three actions at your disposal so you get this sort of tree that you could do." }, { "start": 425.03999999999996, "end": 429.56, "text": " So you're in the first state and what you want is you want to ask your Q" }, { "start": 429.56, "end": 435.4, "text": " function how much is this worth? Maybe the Q function says five," }, { "start": 435.4, "end": 440, "text": " how much is this worth? Six and how much is this worth? Four. So the Q function is" }, { "start": 440, "end": 445.48, "text": " supposed to tell you if you take this action and after that action you follow" }, { "start": 445.48, "end": 453.76, "text": " the the policy like after that action you again do ask the Q function for the" }, { "start": 453.76, "end": 460.44, "text": " Q value. 
What's the total reward you're going to get? Q learning is very" }, { "start": 460.44, "end": 464.64, "text": " very classic reinforcement learning algorithm and you can actually do Q" }, { "start": 464.64, "end": 470.47999999999996, "text": " learning from a data set like this. It doesn't need to be you yourself that" }, { "start": 470.47999999999996, "end": 475.68, "text": " makes the experience. The thing about Q learning is that it can be done" }, { "start": 475.68, "end": 482, "text": " from offline data other than policy gradients. You need sort of a you need a" }, { "start": 482, "end": 486.36, "text": " correction if you do policy gradients and it usually doesn't work if it's" }, { "start": 486.36, "end": 491.4, "text": " complete offline data. It might work I'm not super informed like this but Q" }, { "start": 491.4, "end": 495.84, "text": " learning is possible from offline data and apparently the current a currently" }, { "start": 495.84, "end": 500.08, "text": " good baseline is conservative Q learning which you're going to see in this paper" }, { "start": 500.08, "end": 508.59999999999997, "text": " which fixes the the the bug let's say that the tendency for these Q" }, { "start": 508.59999999999997, "end": 514.4399999999999, "text": " functions in the offline setting to overestimate the Q value. So apparently" }, { "start": 514.4399999999999, "end": 519.88, "text": " they they tend to overestimate the value that you get from certain actions" }, { "start": 519.88, "end": 525.4, "text": " conservative Q learning is a more like a pessimistic approach. So these are the" }, { "start": 525.4, "end": 529.4399999999999, "text": " two baselines that we're going to compare to. You'll notice behavior cloning some" }, { "start": 529.4399999999999, "end": 535.4399999999999, "text": " kind of relation to inverse reinforcement learning not really or yeah" }, { "start": 535.4399999999999, "end": 540.92, "text": " so that's one approach Q learning is also an approach. Here we're just going" }, { "start": 540.92, "end": 546.28, "text": " to do sequence modeling. So what does this mean? And the key concept as I said" }, { "start": 546.28, "end": 551.6, "text": " is going to be the condition on that reward. Sorry so this was offline RL." }, { "start": 551.6, "end": 557.52, "text": " Now there are people have pointed out problems with the approach here which" }, { "start": 557.52, "end": 560.9599999999999, "text": " some of those problems are simply problems of offline reinforcement" }, { "start": 560.9599999999999, "end": 566.4, "text": " learning. So for example which data set do you use right here? Turns out in their" }, { "start": 566.4, "end": 571.8399999999999, "text": " experiments they use a benchmark data set which is the the data set where this" }, { "start": 571.84, "end": 577.2800000000001, "text": " agent right here is a DQN learner so an active reinforcement learner. So" }, { "start": 577.2800000000001, "end": 582.2, "text": " naturally you're going to get out like some some good episodes out of that so" }, { "start": 582.2, "end": 586.6, "text": " it's more like learning from expert demonstration rather than from random" }, { "start": 586.6, "end": 592.5600000000001, "text": " random demonstrations okay. So it's crucially important which data set you" }, { "start": 592.5600000000001, "end": 598.88, "text": " use but that's that's a fault of offline RL of the setting itself rather than of" }, { "start": 598.88, "end": 603.2, "text": " this particular algorithm. 
So I just want to point that out but keep in mind" }, { "start": 603.2, "end": 608, "text": " the data set they're using for their main experiments is one of let's say a" }, { "start": 608, "end": 615.36, "text": " rather high performing agent in this world. So that's that. So the second" }, { "start": 615.36, "end": 622.76, "text": " thing right here is their use of a transformer. Now is the use of a" }, { "start": 622.76, "end": 628.48, "text": " transformer crucial to this algorithm? And the answer is no. So whenever" }, { "start": 628.48, "end": 634.32, "text": " the transformer comes to mind this can be any sequence modeling algorithm right" }, { "start": 634.32, "end": 639.48, "text": " here. Transformers are trendy okay but this can be an LSTM that does" }, { "start": 639.48, "end": 643.88, "text": " autoregressive sequence modeling. Anything that does sort of autoregressive" }, { "start": 643.88, "end": 648.36, "text": " sequence modeling is going to be good for this task right here. The core" }, { "start": 648.36, "end": 654.4, "text": " here is going to be this is a sequence model it's not an RL model. In fact" }, { "start": 654.4, "end": 659.1999999999999, "text": " transformers for RL have been a thing you know. Usually what people do is they" }, { "start": 659.1999999999999, "end": 663.4, "text": " use LSTMs as a backbone for reinforcement learning algorithms. Using" }, { "start": 663.4, "end": 668.76, "text": " transformers has several advantages in offline and or online reinforcement" }, { "start": 668.76, "end": 673.16, "text": " learning algorithms. So usually you have some sort of a state right here. So you" }, { "start": 673.16, "end": 679.16, "text": " have your history with states and actions and rewards and so on and an" }, { "start": 679.16, "end": 686.68, "text": " LSTM will take in that state and and action. Well let's just let's do it" }, { "start": 686.68, "end": 693.3199999999999, "text": " something like this. So you have state action reward, state action reward, state" }, { "start": 693.3199999999999, "end": 698.4, "text": " action reward. Whatever you did in the past right. So an LSTM will take that in" }, { "start": 698.4, "end": 703.9399999999999, "text": " and it will propagate its hidden state through times. I realize some of you" }, { "start": 703.9399999999999, "end": 707.8, "text": " youngsters might not actually know what an LSTM is. This is a recurrent neural" }, { "start": 707.8, "end": 713.4, "text": " network that processes one time step at a time and then here at the end you're" }, { "start": 713.4, "end": 716.9599999999999, "text": " supposed to output whatever the next action is going to be right. You have" }, { "start": 716.9599999999999, "end": 720.56, "text": " your history of actions you're supposed to output whatever the next action is" }, { "start": 720.56, "end": 725.7199999999999, "text": " going to be and you're gonna get back a state and a reward along with it and then" }, { "start": 725.7199999999999, "end": 730.8, "text": " you incorporate that right here into the next action. So if you train this thing" }, { "start": 730.8, "end": 735.4799999999999, "text": " in any way let's say Q learning, policy gradient, whatnot. If it's a Q learning" }, { "start": 735.48, "end": 738.8000000000001, "text": " you're not going to output an action directly. You're going to output Q" }, { "start": 738.8000000000001, "end": 745.24, "text": " values. That's a minor modification to the A. 
What you have to do, and that's the difficulty in reinforcement learning in general, is somehow make a connection between the rewards you get and something that you predicted. Say you predicted an action here and an action there. Just because you got a reward after this action doesn't mean this action was the smart action, the good action. If you're in a chess game, it's not the actual last move that is the good move, even though that move gets you all the reward; the crucial move might have happened 20 moves before. So the underlying reinforcement learning problem is to assign that reward to whichever action was actually the smart one, such that in the future you take it more often. Maybe this action back here was the smart action, and you need a way to figure that out. Backpropagation through time will do this, but with an LSTM you need to backpropagate through one, two, maybe three different computation steps to reach it. That's three steps here; think of what happens if the good action was 50 steps ago, or 500 steps ago. This quickly gets tricky: normally we can't unroll LSTMs like this for more than a couple dozen steps. So what people do is use what's called dynamic programming, and that is the thing the sequence modeling approach is going to ditch; this is one of the fundamental points. Instead of having to learn only from the reward and assign it to an action, along with the actions you also output a value, and the value tells you roughly how well you are doing. The Q function is in a way already a value, so if you're doing Q-learning you get this automatically. The way you learn this is called temporal difference learning.
Let's say this here is the final stage of the game, so you always get a reward there: maybe plus one here, minus five there. Now, instead of backpropagating only that reward, at every step you want to predict a value. Obviously the last value is going to equal the reward itself, but earlier on, your value is your expected future reward if you take the good actions you're going to take. So here your value might be something like plus 0.9, because you're fairly sure you're going to take the good action, and down there it might be something like plus 0.7. The exact numbers don't really matter; what matters is that your learning signal no longer comes from the reward alone. You're trying to predict the reward, but you're also trying to predict the output of your own value function one, two, or three steps into the future. If you've done an episode and got a reward at the end, your value function could try to just output that reward directly, but that's really noisy. Instead you say: well, I have predicted a value here and here and here and here, so why not train my value function to also predict those? And by predict I mean: if I was at this value, and this transition got me some reward, then this value here should equal that value minus the reward, because that's how the value is supposed to function. So you're trying to predict the output of your own value function.
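In its simplest tabular form, that bootstrapping idea is the classic TD(0) update. A toy sketch, not anything from the paper:

```python
def td0_update(V, s, r, s_next, lr=0.1, gamma=1.0):
    """One temporal-difference step for a tabular value function V.
    Instead of waiting for the episode's final return, we bootstrap:
    the target for V[s] is the observed one-step reward plus our own
    current estimate of the next state's value."""
    target = r + gamma * V[s_next]
    V[s] += lr * (target - V[s])
```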
This also works with the Q function. That is the famous Bellman recurrence relation: the Q-value of a state and action equals the reward you get from performing that action in that state, plus the Q-value at the state you're reaching, again with the same policy. Roughly, Q(s, a) = r + Q(s', a'), where r is the result of performing the action and a' is the action the policy gives you in s'. This fundamental relation is the basis of Q-learning, and as I said, learning it this way is called temporal difference learning, TD. All of it is based on concepts of dynamic programming, which we ditch here, so it's important to go through it so you understand what we're not doing. Why do we need all of this, the Q functions and the temporal difference learning and so on? Because it's really hard to do that credit assignment over long stretches of time, and we saw that this is the case with an LSTM, especially if we can't backpropagate all the way through it. What does a transformer do? You have a sequence, and the transformer uses attention to look at the sequence as a whole: through the attention mechanism it can route information from any sequence element to any other sequence element in a single step. So it technically could do this credit assignment in a single step, if, and that's a big if, everything fits into its context. And that's, I think, one of the crucial criticisms of this paper: I don't think everything fits into the context. You can see there's a trade-off: you're able to do the assignment in one step, but as soon as you'd like to find correlations and do credit assignment across spans longer than the context, you need to resort back to something like the dynamic programming approaches, which they say they can ditch. Now, they don't say that only because their context is long, but it is their argument for how the transformer benefits this, compared to an LSTM or something like it.
This is the reason you can do this credit assignment in one step across the context. However, always remember that statement has an if: if the credit assignment needs to happen across more than one context, if the relevant action for the reward is further away, the transformer is out of luck, because it doesn't fit into the context, and we'd need to go back to something like the classic machinery. But there is a second reason, of course, and that is the sequence modeling approach itself, which I see as a little bit of the core of this. The causal transformer: fine, it's a transformer, and we could use any other sequence modeling approach. Viewing RL as a sequence modeling problem is the different thing. So what does this model do? As input you get the history: the rewards of the past (disregard the little hat on the R for now), the states of the past, and the actions of the past; the input actually extends into the past. You would get that in any other reinforcement learning algorithm too, together with the current state. The current state goes through a little encoder; they use the DQN encoder, a small convolutional neural network that encodes the state, so the model can technically handle very complex states by encoding them into a latent space. There's no attention within the state space; the attention really happens over the sequence.
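Concretely, the input layout looks roughly like the sketch below: per time step you embed a return, a state, and an action, interleave them into one token sequence, and run a causal transformer over it. This is my own minimal reconstruction; the module names, the sizes, the linear state embedding (standing in for the conv encoder), and the omission of positional embeddings are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class ReturnStateActionTransformer(nn.Module):
    """Sketch: embed (return, state, action) per step, interleave as
    (R_1, s_1, a_1, R_2, s_2, a_2, ...), run a causal transformer,
    and read the next-action prediction off each state token."""
    def __init__(self, state_dim, act_dim, d_model=128):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)  # stand-in for the conv encoder
        self.embed_act = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        self.predict_act = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_act(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1)  # interleaved: R, s, a, R, s, a, ...
        # Causal mask: each token may only attend to earlier tokens.
        causal = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.trunk(tokens, mask=causal)
        return self.predict_act(h[:, 1::3])  # prediction from every state token
```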
Now, from this, the classic RL algorithms would try to predict an action that maximizes the future reward. What this does differently is: instead of giving me an action that maximizes the future reward, I tell the system what reward I would like, and it is supposed to give me an action that achieves exactly the reward I presented. I ask it for a reward, and it gives me the action that corresponds to achieving that reward in the future. That is different. And I can still do reward maximization by simply putting a high number there: 21 is the maximum in Pong, which is the game here, so you can say 'I want to achieve 21 reward, please give me an action that achieves 21 reward,' and that corresponds to getting as much reward as possible. Notice that you do need to know the maximum reward; it doesn't actually work if you just put in a billion billion billion, as their experiments kind of indicate. That's a drawback of this. Now, this paper that 'slipped in just by accident', which I have open right here, by Schmidhuber: don't predict rewards, it says, just map them to actions. They say: we transform reinforcement learning into a form of supervised learning, which sounds like, you know, offline RL, by turning RL on its head. And did you look at this? The memes are strong in this one: upside-down RL. I've actually made a video on upside-down RL. They say: standard RL predicts rewards, while this instead uses rewards as task-defining inputs, together with representations of time horizon and other computable functions of historic and desired future data; the upside-down RL learner learns to interpret these input observations as commands, mapping them to actions through supervised learning on past, possibly accidental, experience. Of course this wasn't by accident: I knew this paper, and when I read the decision transformer paper, it immediately sprang to mind. Schmidhuber also, as I see it, wasn't entirely the first to do anything like this; we've known about goal-conditioned reinforcement learning for a while, so this is not necessarily a new idea. They do reference Schmidhuber's paper very briefly in this paper, stating that it's kind of a Markovian approach, even though here you have Markovian interfaces and there you have non-Markovian, partially observable interfaces.
And the advantages that Schmidhuber names are very much the same: for example, they continuously say they don't need discount factors, and here, too, you supposedly have no problems with discount factors. So I wanted to point that out, and to point out that the paper is referenced in this one. Essentially you have the three components: offline RL, plus a transformer, plus viewing the problem as a sequence modeling problem by conditioning on the reward. So why does it make sense to condition on the desired future reward? Well, first ask why we don't do that in classic reinforcement learning; why don't we say 'I want to get this reward, please give me the action for it'? Because it's a lot more work. If I just want to maximize my reward, I need a function: here is my state, here is my neural network, maybe it's a policy gradient method, give me an action, and that action is supposed to maximize the reward. Now I need an additional input, the desired reward, and it should still give me an action. The network doesn't only need to remember what to do to perform well; it needs to be able to distinguish what to do to perform well, what to do to perform a little bit worse, and what to do to perform terribly. That's a lot more stuff to remember for the network. The hope, of course, is that with all the advances we've seen in sequence modeling, these transformers are capable of memorizing or learning all of those different things; we know transformers are almost unlimited in their capacity to absorb data and learn, so the hope is that these models will be capable of learning that. The neat thing about doing it this way, though, is that it's a technique that naturally maps to offline reinforcement learning. Offline reinforcement learning in general is a harder task than online reinforcement learning, for the reasons I outlined, but this particular approach lends itself extremely well to it. What do I mean?
If you have a history, you take one trajectory from the data set, and it says: well, I was in this state, I performed this action, I got this reward; then I came to this state, I performed this action, I got this reward, and so on. What Q-learning tries to do is somehow learn the Q function that takes state and action, conditioned on the history, and predicts the future rewards; it tries to figure out what it would have needed to do, instead of what this agent did, in order to achieve higher rewards. It looks at the agent it sees critically, like 'hmm, you probably didn't do something well there,' but it has no way to act in the world, no way to go out and try things itself. This approach, instead, simply accepts the history: 'oh well, you did these things and you got this reward, okay, cool.' And if you know anything about these sequence models and transformers, it's that they can memorize stuff quite well. So going forward, maybe think of what these transformers do as simply memorizing the training data set. I know that's not the case, but say you memorize the training data set. Now, if you've memorized the training data set and you're in this situation, you see a history, you see a state, and the human tells you 'I would like to get 21 reward', what the transformer can do is simply say: okay, let me go into my training data set and find some sequence where the agent had the same kind of history, was also in this state, and also ended up getting about 21 reward out of the future actions. Now, what did that agent do? Well, it did this action. And it's reasonable to assume that if you're in the same kind of history, and you want the same reward as that agent got, you should probably act the same as that agent did.
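As a deliberately crude caricature of that 'memorize and match' intuition (not what the model literally does), you could imagine a nearest-neighbor policy over stored (state, action, return-to-go) triples; everything here is made up for illustration.

```python
import numpy as np

def retrieval_policy(dataset, state, desired_return):
    """Find the stored transition whose state is closest to ours and
    whose achieved return-to-go is closest to the return we asked
    for, and copy its action. `dataset` is a list of
    (state, action, return_to_go) tuples."""
    best_action, best_score = None, float("inf")
    for s, a, rtg in dataset:
        score = np.linalg.norm(s - state) + abs(rtg - desired_return)
        if score < best_score:
            best_action, best_score = a, score
    return best_action
```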
It is a lot like behavior cloning, though behavior cloning, as I understand it, still focuses on getting high reward: it simply takes what comes in as expert demonstrations. Whereas here, you just accept the history as it is, and if you're in a new situation, the question to the sequence model is essentially: how would a sequence that evolves like this continue in the training data set? What it gives you is the action of agents who were in a similar situation and ended up getting the kind of reward you want to get. What did those agents do? Just do the same thing, and you're probably going to end up in the same place as they did. That's the approach. You can see how this is useful, though again, only given that we ditch all of the RL mechanics, which they claim as a positive, and certainly it is a positive: you don't need to parse out what you would have needed to do and so on; you simply accept the history and say, okay, I'm going to do the same kind of things as agents that had the same kind of history and were in the same kind of situation. Now, think back to the problem of context length. What if the future reward crucially depends on an action you did way back here? You could have two agents with the exact same history as far back as the context reaches, but who took a different action before that, and the sequence model would have no chance of differentiating between the two: they look the same. One agent ended up with a really nice reward; the other
"start": 1977.88, "end": 1984.16, "text": " if you want to go beyond how far you can back prop you need to you need to use" }, { "start": 1984.16, "end": 1989.2, "text": " the dynamic programming approaches okay like I don't see a way around it maybe" }, { "start": 1989.2, "end": 1994.76, "text": " I'm terribly wrong but you know so that the transformers are good for doing the" }, { "start": 1994.76, "end": 2001.8000000000002, "text": " credit assignment over the longer distances than the LSTM's yes certainly" }, { "start": 2001.8, "end": 2006.02, "text": " but that's valid for online offline RL and so on whether you do sequence" }, { "start": 2006.02, "end": 2010.6, "text": " modeling or not it doesn't alleviate the problem that these approaches were" }, { "start": 2010.6, "end": 2015.84, "text": " trying to solve in the first place though the sequence modeling approach is" }, { "start": 2015.84, "end": 2021.04, "text": " different and does bring like a different view on the problem and again" }, { "start": 2021.04, "end": 2025.8799999999999, "text": " you can do the sequence modeling approach because it there is hope that" }, { "start": 2025.8799999999999, "end": 2029.68, "text": " with these transformers you can actually absorb that much data and learn from" }, { "start": 2029.68, "end": 2036.4, "text": " that so that is sort of the thing we're in that that was actually already the" }, { "start": 2036.4, "end": 2042.76, "text": " the technique right here we were not even past the the first page and that is" }, { "start": 2042.76, "end": 2048.08, "text": " that's already the thing you get this data and there like you can" }, { "start": 2048.08, "end": 2051.6800000000003, "text": " deterministically you can see that right you can deterministically transform" }, { "start": 2051.6800000000003, "end": 2057.56, "text": " this into the format they want so this state action and desired future return" }, { "start": 2057.56, "end": 2062.08, "text": " or future return you simply look into the future which you can do because it's" }, { "start": 2062.08, "end": 2067.64, "text": " a data set and you sort of calculate what the the future reward is at this" }, { "start": 2067.64, "end": 2072, "text": " particular time step so you can easily generate that training data then you can" }, { "start": 2072, "end": 2077.96, "text": " use classic sequence modeling in order to do that their idea of what happens" }, { "start": 2077.96, "end": 2086.68, "text": " is encapsulated again in this in this thing right here so this is a very very" }, { "start": 2086.68, "end": 2094.8399999999997, "text": " example problem that they come up with so they consider a task up here of" }, { "start": 2094.8399999999997, "end": 2100.6, "text": " finding the shortest path in a on a directed graph which can be posed as an" }, { "start": 2100.6, "end": 2108.48, "text": " RL problem okay the reward is zero when the agent is at the goal node and" }, { "start": 2108.48, "end": 2113.3999999999996, "text": " negative one otherwise we train GPT model to predict the next token in a" }, { "start": 2113.4, "end": 2118.12, "text": " sequence of returns to go which is the sum of future reward state and actions" }, { "start": 2118.12, "end": 2123.44, "text": " training only on random walk data with no expert demonstrations we can generate" }, { "start": 2123.44, "end": 2128.76, "text": " optimal trajectories at test time by adding a prior to generate the highest" }, { "start": 2128.76, "end": 2134.08, "text": " possible returns they also say 
Their idea of what happens is encapsulated in a toy example problem they come up with: finding the shortest path on a directed graph, which can be posed as an RL problem. The reward is zero when the agent is at the goal node and negative one otherwise. They train a GPT model to predict the next token in a sequence of returns-to-go (the sum of future rewards), states, and actions; training only on random-walk data, with no expert demonstrations, they can generate optimal trajectories at test time by adding a prior to generate the highest possible returns. They also say: see more details and empirical results in the appendix. I've looked at the appendix: nothing there. I've looked at the code: nothing there. Just saying. I mean, it is a toy example to illustrate, but there's nothing more on this example. So what they do is: they have a graph, there is a goal, and you're supposed to find the shortest path. What you do is just random walks. Some of these random walks will actually fail, like this one here, so all their rewards are negative infinity; some of them will succeed, and then you can generate the training data. So from here, the future reward is negative four for this particular random walk; here you start at a different location, and it's also negative four, because you're going to take four steps. Now, what you do with this sequence modeling approach is you say: I want to start from this node; however, I would like
to get a reward of negative three, which is a lesser reward than you got all the way over here. (By the way, I'm pretty sure this should say negative two to make their example compelling, so I think there's kind of a flaw in this toy example, but I hope you can still see what they're doing.) So you're saying: I would like to get a very high reward, or a low-magnitude negative reward I guess, going from here, which corresponds to finding a really short path. What the model is going to do is look at its training data: was I in a similar situation at some point in the training data set? And it's going to find: yes, actually, here I was in a very similar situation, and I wanted to get exactly that reward. The history is a bit different, but who cares; now I'm here as well, and what did the agent that went on and reached exactly the reward I want do? Well, it did this action. Okay, I'll just do that same action. This just comes out of the sequence model: the sequence model simply tells you how a sequence that started like this would continue, and it tells you the action. Then it looks at the next step, and here is a bit where the example fails. They say each step gets you negative one reward, so technically, at inference time, you get negative one from this step, so here you would put negative two: at the beginning you have to specify the reward you want to get, and from there on you can calculate the next step's desired reward. They actually need this to be negative one here, because that's what makes their example work; so let's just imagine that for some reason you got a negative two here. The sequence model says: was I in this situation at some point, and did I get a negative one? Yes, I was here; and what did I do to achieve that? I went there. Okay, I'm going to go there. Ah, now I'm at the goal, and technically you've found somewhat the shortest path. Again, the example doesn't quite work: if you start with negative three, you're going to end up with negative two right here, which wouldn't match the blue trajectory; it would actually match this one, so you would not get the shortest path. You should actually start out with an oracle knowing that the shortest path is negative two. That, of course, would not match any example in your training data, but the sequence model could say: well, this is kind of close to that, so the most likely action is still going to be this one. Then you take the one right here, you're in the negative-one regime, and you match this example. I hope you can see how that works out. This can also handle the case where you don't get the expected reward, which of course can happen, since not everything is always deterministic, because you reassess after every step: you ask your training data set again. And this is very much how we think of these big transformer language models: what they do is interpolate the training data set, stitching together different pieces of the training data set.
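At evaluation time, that reassessment is just a decrement of the conditioning return after each step. A hedged sketch of the loop, assuming the classic gym step API and a hypothetical `model.act(state, rtg)` interface (the context-window bookkeeping is omitted):

```python
def rollout(model, env, target_return):
    """Condition on the return you want; after every step, subtract
    the reward actually received, so the model is always asked for
    the remaining return-to-go."""
    state = env.reset()
    rtg = target_return
    done, total = False, 0.0
    while not done:
        action = model.act(state, rtg)             # hypothetical interface
        state, reward, done, _ = env.step(action)  # classic gym API
        rtg -= reward                              # reassess after every step
        total += reward
    return total
```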
You can see that happening right here. Of course, you already saw the flaw: you need to know what reward you would like to achieve. (By the way, the LaTeX is beautiful, isn't it? Maybe that's just my thing; I don't recall it looking like this. Also, the code is available, and so is pseudocode. Big props.) Here you can see that the decision transformer, in blue, lags a bit behind what they call TD learning on Atari; that TD learning is the conservative Q-learning, and the behavior cloning is what they term BC. In the OpenAI Gym, it outperforms them a little bit, and then there are these key-to-door tasks that we're going to get into in just a bit. I just want to quickly mention that their primary comparison here is CQL, and they make a big deal about not needing discount factors. I'm not really sure what they mean, because there are usually two different discount factors in these algorithms. One of them is usually found in the objective formulation. Here they say what they want to do is maximize the expected return, which is this quantity right here: you maximize your expected future returns in the episode. Now, this is often formulated differently: some people formulate it as the expected future return discounted by a discount factor that you raise to the power of the time step, essentially saying that future rewards are less valuable than current rewards. That gives you some sort of stability, but it also gives you short-sightedness. However, this is a choice, a choice of the problem formulation. I get that people train with the discounted version, maybe for stability reasons, and then still test and report the undiscounted reward at the end. But I'm just saying this is a choice, and their choice right here is different from what CQL does: CQL explicitly maximizes the discounted future returns, while they maximize the plain future returns. I just want to point out that there is an actual difference here.
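The difference in the objective is literally one parameter. A toy illustration:

```python
def episode_return(rewards, gamma=1.0):
    """gamma = 1.0 gives the plain sum of rewards (the undiscounted
    objective used here); gamma < 1.0 gives the discounted return
    (the objective CQL maximizes)."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# episode_return([1, 1, 1])             -> 3     (undiscounted)
# episode_return([1, 1, 1], gamma=0.9)  -> 2.71  (discounted)
```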
The other difference is in the TD learning. By the way, if you don't discount your returns, you can get the situation that the agent cycles. If you get positive or zero rewards for certain transitions, then, say, an agent that is losing a game, where the only two options are to lose (negative one) or to go back here, will just circle forever, because circling doesn't cost anything, and going forward would mean losing. Chess has a built-in protection against this, but in other settings the agent will just go in circles. Actually, sorry, that's not quite why you discount; that is a bad example, and there you would actually implement some sort of penalty, like minus 0.1 for any step you take. And even with discounting, if the agent could win, it could still go in circles, because it can still win later. In any case, there are good reasons to discount future rewards, and that's one discount factor. The other discount factor is in the TD learning, and that's a different one. You say: I'm going to predict this next step right here, and that's probably a pretty accurate prediction, and that reward is quite a good signal, given that I am in this step. The next one is maybe a bit more noisy, because it's two steps ahead: I could be doing different actions, and maybe the transition is stochastic. So when I learn my value function from all of these different targets (you have that recurrence relation), I'm going to value this nearest target the highest as a learning objective, and this one a little bit less: I try harder to match this one, given that one reward, than that one, given the two rewards. Maybe both should be accurate: the value should match this reward plus this value, and it should also match these two rewards plus that value,
"end": 2721.2400000000002, "text": " but the second one is more unsure so the TD learning usually you have" }, { "start": 2721.24, "end": 2727.7999999999997, "text": " classically called another discount factor lambda where you discount sort of" }, { "start": 2727.7999999999997, "end": 2733.3599999999997, "text": " future losses and they say we don't need the discount factor right here I don't" }, { "start": 2733.3599999999997, "end": 2737.9199999999996, "text": " know which one which one they're referring to but what I want to point" }, { "start": 2737.9199999999996, "end": 2742.56, "text": " out here is that yeah the objective is different so maybe they say we can get" }, { "start": 2742.56, "end": 2747.7, "text": " by with this objective I don't see that that's a choice of the modeler and you" }, { "start": 2747.7, "end": 2751.7599999999998, "text": " run into problems with some environments if you don't have a discount factor in" }, { "start": 2751.7599999999998, "end": 2756.08, "text": " any case you can see right here in the experiments for example this is Atari" }, { "start": 2756.08, "end": 2765.4399999999996, "text": " the decision transformer outperforms CQL in some respects it it trails it in" }, { "start": 2765.4399999999996, "end": 2770.52, "text": " other ones I mean they also look at like these standard deviations are are quite" }, { "start": 2770.52, "end": 2778.8, "text": " high in the open AI gym it is a bit it looks a bit better in that it sorry it" }, { "start": 2778.8, "end": 2785.28, "text": " does outperform CQL in quite a number of things and also with less standard" }, { "start": 2785.28, "end": 2793.2, "text": " deviation right here yeah also they they compare against sort of behavior cloning" }, { "start": 2793.2, "end": 2800.7599999999998, "text": " where you retroactively only train on the best such-and-such percent of the" }, { "start": 2800.7599999999998, "end": 2805.72, "text": " experience and they find that if you hit the correct percentage which is not" }, { "start": 2805.72, "end": 2808.7999999999997, "text": " necessarily the only the best trajectories if you hit the correct" }, { "start": 2808.7999999999997, "end": 2811.96, "text": " percentage sometimes behavior cloning can actually give you a better" }, { "start": 2811.96, "end": 2816.4399999999996, "text": " performance however hitting that percentage of course requires another" }, { "start": 2816.4399999999996, "end": 2821, "text": " hyper parameter search and you as an oracle you kind of have to you know you" }, { "start": 2821, "end": 2825.6, "text": " have to go and filter and you have to try out and you don't know you have to" }, { "start": 2825.6, "end": 2829.68, "text": " have some sort of a validation set whereas the decision transformer is just" }, { "start": 2829.68, "end": 2834.68, "text": " one run now throughout all of this they're sort of touting that they don't" }, { "start": 2834.68, "end": 2839.6, "text": " need as many like searches and as many you know like here you need to choose" }, { "start": 2839.6, "end": 2843.84, "text": " that percentage you need to figure it out but if you look at their actual" }, { "start": 2843.84, "end": 2849, "text": " configuration of hyper parameters down here they do things like well we have" }, { "start": 2849, "end": 2853.48, "text": " one architecture for these Atari games but then we have a different one for" }, { "start": 2853.48, "end": 2858.36, "text": " pong right we have a context length for these Atari games but then a different" }, { 
"start": 2858.36, "end": 2862.64, "text": " one for pong because pong is actually quite a sparse reward ish game okay" }, { "start": 2862.64, "end": 2867.32, "text": " compared these other ones so they make the context length bigger in order to" }, { "start": 2867.32, "end": 2871.32, "text": " capture a longer history because otherwise it couldn't differentiate the" }, { "start": 2871.32, "end": 2876.48, "text": " agents and they would need to use TD or some kind of dynamic programming right" }, { "start": 2876.48, "end": 2881.28, "text": " there and then there's also this this how the return to go conditioning like" }, { "start": 2881.28, "end": 2886.28, "text": " how much reward you want to get and that's a problem like so here again they" }, { "start": 2886.28, "end": 2891.52, "text": " do something and this is like they look at the baseline they look at CQL how" }, { "start": 2891.52, "end": 2897.16, "text": " much did that achieve and then they just choose to achieve a multiple of that one" }, { "start": 2897.16, "end": 2902.4, "text": " they say it's like you look at your competitor at what you're compared to and" }, { "start": 2902.4, "end": 2909.32, "text": " then you base your decisions off of the result of that so you know I kind of get" }, { "start": 2909.32, "end": 2913.88, "text": " it and also this multiplier they take it is very informed by them knowing the" }, { "start": 2913.88, "end": 2921.2400000000002, "text": " games right in pong you know you can reach at max 21 so that's they condition" }, { "start": 2921.2400000000002, "end": 2928.76, "text": " on the reward of 20 in in sequence it's I think it's unbounded so they they do it" }, { "start": 2928.76, "end": 2937.6400000000003, "text": " 1.5 times the performance of that and yeah so I'm not I'm like I'm not saying" }, { "start": 2937.6400000000003, "end": 2942.48, "text": " this is invalid experiments but like this this looking at your competitor and" }, { "start": 2942.48, "end": 2950.0800000000004, "text": " then basing crucial hyper parameters off of their performance but I'm sure it I'm" }, { "start": 2950.0800000000004, "end": 2953.76, "text": " sure it will work otherwise but just know that you need to have a good idea" }, { "start": 2953.76, "end": 2959, "text": " of what reward you can even achieve and what's possible given your data set right" }, { "start": 2959, "end": 2964.0800000000004, "text": " so CQL also takes into account like it also learns from the same data set and" }, { "start": 2964.0800000000004, "end": 2969.48, "text": " that's sort of how they know what's possible from that data set yeah so is" }, { "start": 2969.48, "end": 2972.5200000000004, "text": " this a problem that you need to know the reward can't you just put a hundred" }, { "start": 2972.5200000000004, "end": 2978.3, "text": " billion billion billion and the answer is no you see right here this orange" }, { "start": 2978.3, "end": 2984.7200000000003, "text": " line is the highest reward that was observed in the data set now this is is" }, { "start": 2984.7200000000003, "end": 2990.0800000000004, "text": " gamer normalized that's why it's not like 21 but here the experiment it's" }, { "start": 2990.0800000000004, "end": 2993.96, "text": " actually a pretty cool experiment is since you're not only maximizing the" }, { "start": 2993.96, "end": 2999.4, "text": " word you can you can ask the model to give you any reward you want so the" }, { "start": 2999.4, "end": 3003.52, "text": " green line is what you want it and if the blue line is 
reward, you can ask the model to give you any reward you want. The green line is what you asked for, and if the blue line, what you achieved, matches the green line exactly, the model always gives you the actions that make the reward you requested happen. You can see that the green line and the blue line match pretty accurately over a long stretch, which means that this sequence modeling approach really can give you not only the max reward but sort of any reward, because it remembers all the sequences (though probably not the lowest ones, because you're learning from a DQN learner, which has probably mostly good trajectories). But you can see that as soon as you go past the highest observed reward, the achieved reward not only stays flat, it actually drops down again, and you can see that pattern pretty much anywhere you have an orange line like this. Here maybe it stays, here maybe it drops down; it's only here in Seaquest that it's a bit better, but that's a gamer-normalized score of three, where a gamer would achieve 100, and you can still see the drop compared to the green line. So you can't just put in a hundred billion: you need to know the reward you're going for. Sometimes that's no problem, sometimes it's an actual problem, and that reward is not only dependent on the game but also on how the data set you learn from is structured; you need to know what your agent can achieve. They do some other ablations with respect to context length and actually find that a larger context length helps: if you don't provide a long context, the performance drops. It makes sense, in that the transformer is able to match the history to observed trajectories better. On the other hand, technically, since these Atari games are fully observable if you do frame stacking, an RL agent shouldn't care about more of the past; but, you know, RL algorithms do, they're not perfect. The last thing is that key-to-door task, where they show that,
well, this is a toy experimental setting. (By the way, again, I did not find this in the appendix, and I did not find code for it, so we actually don't know too much about this experiment.) As far as I understand, there are three rooms. In the first room there's a key; in the last room there's a door. You're thrown into the first room and get to walk around a bit; then you're thrown into the second room, where you get to walk for a variable length of time; and then you're thrown into the last room. If you have taken the key and you reach the door, you get a good reward; otherwise you fail. The middle room is called a distractor, because if you have something like an LSTM, or something like Q-learning, the problem with Q = r + Q is that it only looks one step ahead. That recurrence relation means that if you have a learning signal somewhere way down the line, you need to propagate the fact that there is a signal back here (it's not backprop; you need to learning-step-propagate it) all the way through these time steps in the past, whereas a transformer can just attend straight across. So this is an experiment designed to show that this really helps. You can see right here that they can analyze what their system says about the expected reward in the future: you can always ask it how probable a given reward is in the future. And you can see that whenever the agent doesn't pick up the key, then as soon as it gets into that second room, it immediately knows it's lost, no matter what happens in the last room. If it does pick up the key, then in these two situations it estimates a future reward of about 0.5, and you can see that the estimate does not degrade across the distractor room, no matter how long the distractor room is. That's the key difference between this and, let's say, TD-learning or Q-learning approaches: it doesn't forget, because there is no dynamic programming involved. And then, in the last room, if it reaches the
door obviously it says well that's a" }, { "start": 3300.64, "end": 3304.88, "text": " high value if it doesn't reach the door it changes its mind now I would have" }, { "start": 3304.88, "end": 3310.92, "text": " liked to see whether or not and this is why I was keen on seeing the parameters" }, { "start": 3310.92, "end": 3317.4, "text": " of this whether or not this right here is inside or outside the context length" }, { "start": 3317.4, "end": 3323.12, "text": " of the transformer they used and I'm going to guess it's still inside because" }, { "start": 3323.12, "end": 3328, "text": " as soon as that's outside or like let's say more like this as soon as that's" }, { "start": 3328, "end": 3333.28, "text": " outside the context length the sequence model has no" }, { "start": 3333.28, "end": 3339.44, "text": " way of knowing whether that particular agent picked up the key so it cannot" }, { "start": 3339.44, "end": 3343.6000000000004, "text": " predict anything I think what they want to show right here sorry" }, { "start": 3343.6000000000004, "end": 3347.28, "text": " that's an alarm what they want to show right here is the fact that the" }, { "start": 3347.28, "end": 3351.92, "text": " attention weighs heavily on those frames where it picks up the key or reaches the" }, { "start": 3351.92, "end": 3356.42, "text": " door which is fine right we can get that transformers learn that however" }, { "start": 3356.42, "end": 3360.84, "text": " here I'd really you know like to see what happens if you go outside of that" }, { "start": 3360.84, "end": 3365.6000000000004, "text": " and again if you go outside of that you're going to revert back to the old" }, { "start": 3365.6000000000004, "end": 3370.52, "text": " method so ultimately the transformer gives you a longer context where you" }, { "start": 3370.52, "end": 3375.84, "text": " can do one-step assignment of credit but again as soon as you exceed that as with" }, { "start": 3375.84, "end": 3381.4, "text": " the LSTM as soon as you exceed these you need the classic approaches and I feel" }, { "start": 3381.4, "end": 3387.28, "text": " the paper is a little bit shady on the fact that they get like" }, { "start": 3387.28, "end": 3392.52, "text": " a constant factor longer context with what they're doing but it doesn't really" }, { "start": 3392.52, "end": 3397.6400000000003, "text": " solve the problem okay in my mind I might be wrong please tell me if I'm" }, { "start": 3397.6400000000003, "end": 3402.0600000000004, "text": " wrong read the paper for yourself it is a good paper I hope we can cover the" }, { "start": 3402.0600000000004, "end": 3407.92, "text": " trajectory transformer in the future and with that I wish you all the best bye" }, { "start": 3407.92, "end": 3417.48, "text": " bye" } ]
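The one-step recurrence the transcript calls "Q equals R plus Q" is, in TD terms, a backup of the form Q(s) = r + gamma * Q(s'), and it moves credit back one timestep per learning step. A minimal toy sketch of that effect on a chain of states (the setup and all names here are illustrative, not from the paper):

```python
# Tabular TD-style value sweeps on a chain: reward only on the last transition.
# Each synchronous sweep moves the reward signal exactly one state further back,
# which is why a distant signal needs many updates to reach the early states.
gamma = 1.0
n_states = 10
V = [0.0] * (n_states + 1)          # V[n_states] is the terminal value, fixed 0

def sweep():
    # One backup per state: V(s) <- r + gamma * V(s+1), using last sweep's V(s+1)
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0  # reward on the final transition
        V[s] = r + gamma * V[s + 1]

for k in range(1, 4):
    sweep()
    print(k, ["%.0f" % v for v in V])
# sweep 1: only the last state "knows" about the reward; sweep 2: the last two;
# a sequence model with the whole episode inside its context window can instead
# attend directly from the reward back to the key pickup, in one shot.
```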
WknN4E-y44E
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Research Conference ICML drops their acceptance rate | Area Chairs instructed to be more picky
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "icml", "peer review", "machine learning conference", "icml conference", "icml submission", "icml paper accepted", "how to write machine learning papers", "how to publish a paper", "how to publish in machine learning", "how to do a phd in machine learning", "deep learning conference", "machine learning research conference", "icml acceptance rate", "icml submissions", "icml area chairs", "machine learning news" ]
#icml #machinelearning #conference In a controversial move, ICML Area Chairs were instructed to raise the bar on acceptance to drop the acceptance rate by 10% from the previous trajectory. This raises a lot of questions about the pains of an academic peer review system under the load of an exponentially increasing field of study. Who draws the short stick? Usually not the big corporations. References: https://www.reddit.com/r/MachineLearning/comments/n243qw/d_icml_conference_we_plan_to_reduce_the_number_of/ https://twitter.com/tomgoldsteincs/status/1388156022112624644 https://twitter.com/ryan_p_adams/status/1388164670410866692 https://github.com/lixin4ever/Conference-Acceptance-Rate Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Good morning, I hope you had a good night's sleep. It's just another day where the review system in machine learning is completely and utterly broken this time courtesy of the ICML chairs, apparently notifying the senior area chairs to reduce the number of accepted submissions by about 10%. According to current meta review statistics, we need to raise the acceptance bar. Also saying we plan to reduce the number of accepted papers, please work with your senior area chair to raise the bar area chairs and senior area chairs do not have to accept a paper only because there is nothing wrong with it. So the ICML conference is trying to raise the bar on scientific publication in their venue by just accepting a little bit less papers than they would do according to current trajectory of the review process. ICML currently is in the post review post rebuttal process where the actual acceptance decisions are made. Now, why is this important? This is important because there are only about three or four large conferences in machine learning each year depending on your subfield bit more or even a bit less. For many places, if you want to get a PhD, if you want to get tenure, if you want to achieve anything in academia, you need to publish papers at those venues. And given that the field is exploding currently getting a paper there is quite difficult acceptance rates have been dropping steadily in the past few years, though you can see the number of accepted papers has actually risen. This is a consequence of the exponential growth of the machine learning field. Now there's a growing concern that the review process isn't really good. And what gets published and what doesn't get published is just kind of a wash in the noisy process, which is true. I've made quite a number of videos about the really flawed review process in machine learning. Essentially, here is what we know, if your paper is really good, then it's going to get accepted very probably, you might get unlucky, but with high probability, it's going to get there. If your paper is really bad, also with a high probability, it's going to get rejected. However, for most papers, which aren't extremely good, which aren't extremely bad, there's just this middle area, most papers fall into this middle area. And it's really a roll of a dice, you get some reviewers, they might know what they're talking about, they might not know what they're talking about, they have their favorite data set, you didn't evaluate on it, they reject or they weak accept because they just don't want to deal with your rebuttal. It's an all around fun process, but it can ruin your life. And for a conference such as ICML, it is important that it keeps up its reputation for only publishing the best papers and really good scientific results. So by reducing the acceptance rate, what they'll do is they'll put more focus on the really good papers that stand out, which can be interpreted as a good thing, because ultimately, the really good papers will still stay while some of the borderline papers will drop out. That gives you a stronger signal that whatever comes from this conference is a valuable scientific publication. On the other hand, you can say given how noisy that review process is, you simply compress a little bit the amount of people that draw a lucky lottery ticket. 
And given that the field is growing, and there is huge pressure on people to publish, and also the fact that large corporations throw extreme amounts of money at getting papers published at these conferences, weeding out the academics that don't have as many resources, it is a bit of a controversial decision. Essentially, reviewers and area chairs are even more incentivized to just find anything wrong with a paper and reject it because of it. And the downside of that is that if you don't have as many resources to train on every data set, you're probably going to be out much more likely. And also if you have some really cool idea that just doesn't quite work well yet, doesn't beat state of the art yet, but is quite interesting, also very probably you're not going to get there. So while the optimist might see a stronger signal for an acceptance rating at that conference, and just higher quality output, and the pessimist might see the noisy process and say, well, what is it all worth? It doesn't mean anything to get accepted anyway. And now it's just less papers that do. And also large companies are going to dominate the field. And also academics are going to draw the short stick. The optimist and the pessimist are no match for the PhD student. See, what they seem to be doing right here is specify their acceptance target in percent, which means number of accepted papers divided by number of submitted papers. I hope you see where this is going. The target acceptance rate in the eyes of the conference means that the numerator should be smaller. However, you can reach that same acceptance rate by just making the denominator larger. Now hypothetically, if just everyone would submit more papers, we could drop the acceptance rate, but also raise the chances that our actual papers are going to get in. Now, in this hypothetical scenario, I would not be advocating for submitting fake papers or just empty PDFs. But you might have some papers in the drawer, like this beauty right here that I wrote back in I don't know when, where I designed a method to defend against black box model theft attacks, which I thought was pretty smart. But honestly, it needs a lot of work to actually make it work. And I just did not bother. It's on arXiv right now. But even though I am not happy with it as it is, it is certainly better than a lot of stuff that I've seen submitted to ICML that I've read as a reviewer, and even some stuff that actually got accepted at the end. So compared to that, I don't see a reason why this should not be worthy. So you, my friend, are going to ICML next year. How about that? Of course, all just a hypothetical. I'm not advocating for you to mess with a system that's clearly broken and needs to be renewed. And we should reinvent the whole thing. However, it's fun to think about. If you have some thoughts on hypothetical scenarios or stories about how your papers got rejected that we all love to tell, tell me in the comments and see you next time.
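The arithmetic behind the denominator trick is simple enough to sketch; the numbers below are made up for illustration and are not actual ICML statistics:

```python
# Acceptance rate = accepted / submitted; two ways to lower it.
accepted, submitted = 1000, 4500
rate = accepted / submitted                  # ~0.222

target_rate = 0.9 * rate                     # "raise the bar by ~10%"

# The conference's way: shrink the numerator.
fewer_accepted = target_rate * submitted     # ~900 papers get in

# The hypothetical way from the transcript: grow the denominator.
more_submitted = accepted / target_rate      # ~5000 submissions, same 1000 get in

print(f"{rate:.3f} -> {target_rate:.3f}: "
      f"{fewer_accepted:.0f} accepted, or {more_submitted:.0f} submitted")
```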
[ { "start": 0, "end": 4.96, "text": " Good morning, I hope you had a good night's sleep. It's just another day where the review system" }, { "start": 4.96, "end": 12.48, "text": " in machine learning is completely and utterly broken this time courtesy of the ICML chairs," }, { "start": 12.48, "end": 21.12, "text": " apparently notifying the senior area chairs to reduce the number of accepted submissions" }, { "start": 21.12, "end": 28.16, "text": " by about 10%. According to current meta review statistics, we need to raise the acceptance bar." }, { "start": 28.16, "end": 33.52, "text": " Also saying we plan to reduce the number of accepted papers, please work with your senior" }, { "start": 33.52, "end": 40.16, "text": " area chair to raise the bar area chairs and senior area chairs do not have to accept a paper only" }, { "start": 40.16, "end": 46.16, "text": " because there is nothing wrong with it. So the ICML conference is trying to raise the bar on" }, { "start": 46.16, "end": 53.36, "text": " scientific publication in their venue by just accepting a little bit less papers than they" }, { "start": 53.36, "end": 60.56, "text": " would do according to current trajectory of the review process. ICML currently is in the post" }, { "start": 60.56, "end": 66.32, "text": " review post rebuttal process where the actual acceptance decisions are made. Now, why is this" }, { "start": 66.32, "end": 71.6, "text": " important? This is important because there are only about three or four large conferences in" }, { "start": 71.6, "end": 77.68, "text": " machine learning each year depending on your subfield bit more or even a bit less. For many" }, { "start": 77.68, "end": 82.48, "text": " places, if you want to get a PhD, if you want to get tenure, if you want to achieve anything in" }, { "start": 82.48, "end": 88.72, "text": " academia, you need to publish papers at those venues. And given that the field is exploding" }, { "start": 88.72, "end": 96.08, "text": " currently getting a paper there is quite difficult acceptance rates have been dropping steadily in" }, { "start": 96.08, "end": 101.52000000000001, "text": " the past few years, though you can see the number of accepted papers has actually risen. This is a" }, { "start": 101.52000000000001, "end": 107.92, "text": " consequence of the exponential growth of the machine learning field. Now there's a growing concern" }, { "start": 107.92, "end": 113.84, "text": " that the review process isn't really good. And what gets published and what doesn't get published is" }, { "start": 113.84, "end": 118.88, "text": " just kind of a wash in the noisy process, which is true. I've made quite a number of videos about" }, { "start": 118.88, "end": 125.04, "text": " the really flawed review process in machine learning. Essentially, here is what we know," }, { "start": 125.04, "end": 130.8, "text": " if your paper is really good, then it's going to get accepted very probably, you might get unlucky," }, { "start": 130.8, "end": 135.84, "text": " but with high probability, it's going to get there. If your paper is really bad, also with" }, { "start": 135.84, "end": 141.84, "text": " a high probability, it's going to get rejected. However, for most papers, which aren't extremely" }, { "start": 141.84, "end": 148.16, "text": " good, which aren't extremely bad, there's just this middle area, most papers fall into this middle" }, { "start": 148.16, "end": 154.24, "text": " area. 
And it's really a roll of a dice, you get some reviewers, they might know what they're" }, { "start": 154.24, "end": 157.84, "text": " talking about, they might not know what they're talking about, they have their favorite data set," }, { "start": 157.84, "end": 162.72, "text": " you didn't evaluate on it, they reject or they weak accept because they just don't want to deal" }, { "start": 162.72, "end": 168.32, "text": " with your rebuttal. It's an all around fun process, but it can ruin your life. And for a conference" }, { "start": 168.32, "end": 176.07999999999998, "text": " such as ICML, it is important that it keeps up its reputation for only publishing the best papers" }, { "start": 176.07999999999998, "end": 182.16, "text": " and really good scientific results. So by reducing the acceptance rate, what they'll do is they'll" }, { "start": 182.16, "end": 188.16, "text": " put more focus on the really good papers that stand out, which can be interpreted as a good thing," }, { "start": 188.16, "end": 193.6, "text": " because ultimately, the really good papers will still stay while some of the borderline papers" }, { "start": 193.6, "end": 198.07999999999998, "text": " will drop out. That gives you a stronger signal that whatever comes from this conference is a" }, { "start": 198.07999999999998, "end": 203.12, "text": " valuable scientific publication. On the other hand, you can say given how noisy that review" }, { "start": 203.12, "end": 207.92, "text": " process is, you simply compress a little bit the amount of people that draw a lucky lottery ticket." }, { "start": 207.92, "end": 213.35999999999999, "text": " And given that the field is growing, and there is huge pressure on people to publish, and also the" }, { "start": 213.36, "end": 219.04000000000002, "text": " fact that large corporations throw extreme amounts of money at getting papers published at these" }, { "start": 219.04000000000002, "end": 225.12, "text": " conferences, weeding out the academics that don't have as many resources, it is a bit of a" }, { "start": 225.12, "end": 230.4, "text": " controversial decision. Essentially, reviewers and area chairs are even more incentivized to just" }, { "start": 230.4, "end": 236.16000000000003, "text": " find anything wrong with a paper and reject it because of it. And the downside of that is that" }, { "start": 236.16000000000003, "end": 240.72000000000003, "text": " if you don't have as many resources to train on every data set, you're probably going to be" }, { "start": 240.72, "end": 246.08, "text": " out much more likely. And also if you have some really cool idea that just doesn't quite work" }, { "start": 246.08, "end": 251.52, "text": " well yet, doesn't beat state of the art yet, but is quite interesting, also very probably you're not" }, { "start": 251.52, "end": 257.76, "text": " going to get there. So while the optimist might see a stronger signal for an acceptance rating at" }, { "start": 257.76, "end": 264.56, "text": " that conference, and just higher quality output, and the pessimist might see the noisy process and" }, { "start": 264.56, "end": 270.08, "text": " say, well, what is it all worth? It doesn't mean anything to get accepted anyway. And now it's just" }, { "start": 270.08, "end": 275.52, "text": " less papers that do. And also large companies are going to dominate the field. And also academics" }, { "start": 275.52, "end": 281.68, "text": " are going to draw the short stick. The optimist and the pessimist are no match for the PhD student."
}, { "start": 281.68, "end": 288.24, "text": " See, what they seem to be doing right here is specify the acceptance their target in percent," }, { "start": 288.24, "end": 292.96, "text": " which means number of accepted papers divided by number of submitted papers." }, { "start": 292.96, "end": 299.76, "text": " I hope you see where this is going. The target acceptance rate in the eyes of the conference" }, { "start": 299.76, "end": 305.35999999999996, "text": " means that the numerator should be smaller. However, you can reach that same acceptance rate" }, { "start": 305.35999999999996, "end": 311.52, "text": " by just making the denominator larger. Now hypothetically, if just everyone would submit" }, { "start": 311.52, "end": 318.08, "text": " more papers, we could drop the acceptance rate, but also raise the chances that our actual papers" }, { "start": 318.08, "end": 324.32, "text": " are going to get in. Now, in this hypothetical scenario, I would not be advocating for submitting" }, { "start": 324.32, "end": 333.44, "text": " fake papers or just empty PDFs. But you might have some papers in the drawer, like this beauty" }, { "start": 333.44, "end": 338.79999999999995, "text": " right here that I wrote back in I don't know when, where I designed a method to defend against" }, { "start": 338.79999999999995, "end": 345.68, "text": " black box model theft attacks, which I thought was pretty smart. But honestly, it needs a lot of work" }, { "start": 345.68, "end": 351.76, "text": " to actually make it work. And I just did not bother. It's an archive right now. But even" }, { "start": 351.76, "end": 357.12, "text": " though I am not happy with it as it is, it is certainly better than a lot of stuff that I've" }, { "start": 357.12, "end": 363.2, "text": " seen submitted to ICML that I've read as a reviewer, and even some stuff that actually got accepted at" }, { "start": 363.2, "end": 370, "text": " the end. So compared to that, I don't see a reason why this should not be worthy. So you, my friend," }, { "start": 370, "end": 380.88, "text": " are going to ICML next year. How about that? Of course, all just a hypothetical. I'm not advocating" }, { "start": 380.88, "end": 387.44, "text": " for you to mess with a system that's clearly broken and needs to be renewed. And we should" }, { "start": 387.44, "end": 394.88, "text": " reinvent the whole thing. However, it's fun to think about. If you have some thoughts on hypothetical" }, { "start": 394.88, "end": 401.44, "text": " scenarios or stories about how your papers got rejected that we all love to tell, tell me in" }, { "start": 401.44, "end": 428.88, "text": " the comments and see you next time." } ]
P_xeshTnPZg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Perceiver: General Perception with Iterative Attention (Google DeepMind Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "deep learning tutorial", "what is deep learning", "introduction to deep learning", "deepmind", "perceiver", "cross attention", "attention mechanism", "attention is all you need", "google deepmind", "deepmind perceiver", "perceiver model", "perciever model", "perciever", "self attention", "rnn", "recurrent neural network", "weight sharing", "computer vision", "natural language processing", "fourier features" ]
#perceiver #deepmind #transformer Inspired by the fact that biological creatures attend to multiple modalities at the same time, DeepMind releases its new Perceiver model. Based on the Transformer architecture, the Perceiver makes no assumptions on the modality of the input data and also solves the long-standing quadratic bottleneck problem. This is achieved by having a latent low-dimensional Transformer, where the input data is fed multiple times via cross-attention. The Perceiver's weights can also be shared across layers, making it very similar to an RNN. Perceivers achieve competitive performance on ImageNet and state-of-the-art on other modalities, all while making no architectural adjustments to input data. OUTLINE: 0:00 - Intro & Overview 2:20 - Built-In assumptions of Computer Vision Models 5:10 - The Quadratic Bottleneck of Transformers 8:00 - Cross-Attention in Transformers 10:45 - The Perceiver Model Architecture & Learned Queries 20:05 - Positional Encodings via Fourier Features 23:25 - Experimental Results & Attention Maps 29:05 - Comments & Conclusion Paper: https://arxiv.org/abs/2103.03206 My Video on Transformers (Attention is All You Need): https://youtu.be/iDulhoQ2pro Abstract: Biological systems understand the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver - a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. We show that this architecture performs competitively or beyond strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video and video+audio. The Perceiver obtains performance comparable to ResNet-50 on ImageNet without convolutions and by directly attending to 50,000 pixels. It also surpasses state-of-the-art results for all modalities in AudioSet. 
Authors: Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, Joao Carreira Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there, how is everyone doing? Today we'll look at the Perceiver general perception with iterative attention by Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals and Joao Carreira of DeepMind. This paper on a high level describes a model called the Perceiver and what this model does is it interleaves latent self-attention mechanism with cross-attention mechanism and so it is a transformer and the secret is that the data only enters the transformer through this cross-attention mechanism that allows the model to have the latent array be of significantly lower size than the data array and this solves in part the transformer's quadratic memory and compute bottleneck. The image comes in or the data rather comes in multiple times through this stack and the weights can be shared making it essentially a recurrent neural network. This model here works for any modality so the paper not only does images but videos and audio and point clouds and you almost have to change pretty much nothing about the input in order for the model to work. This is a pretty big step towards first of all making transformers more deep and second of all applying the same models to very very different modalities of data. We'll dive into the paper, we'll look at how it's done, it's actually a fairly simple idea so shouldn't take us too long I always say that but maybe today we'll achieve it. If you like content like this yeah tell me how you feel in the comments, leave a like, tell your friends about it and let's go. So they motivate the name, the name Perceiver it's not really tied to anything they motivate it by saying biological systems understand the world by simultaneously processing high dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities often rely on those domain specific assumptions such as the local grid structures exploited by virtually all existing vision models. So what do they mean? They mean if we have an image and the image is of a not a cat a house what did you think? So the image is of a house and if we have an image processing pipeline usually what it will do is it will assume that the image is some sort of grid and that you can localize any pixel by its XY coordinate and also that the pixel is in some kind of relation to the pixel around it. We usually build models according to that so a convolutional neural network very explicitly will slide over a filter over the image with all shared weights and therefore it directly says that what matters to a pixel is the pixels around it and only in the upper layers and after some pooling do these receptive fields grow such that more and more information across larger distances is incorporated. On the other hand something like a visual transformer like the VIT what it will do is it will do transformer like attention but because it can't because the images are so large because whatever 224 by 224 pixels are just too much to put into one transformer it will simply subdivide the image into these patches and therefore it also essentially says it will take each patch and make a vector out of it so it also essentially says that whatever pixels are close together they go into this one vector so they're treated as a group. So this paper says that all the current architectures that deal with computer vision somehow have this built in.
However the the so other models have that too other modalities like audio video and so on and the perceiver here is supposed to alleviate that so they say it induces helpful inductive biases but also lock models to individual modalities. In this paper we introduce the perceiver a model that builds upon transformers and hence makes few architectural assumptions about the relationship between its inputs but also scales to hundreds of thousands of inputs like conv nets. So transformers notably are models that transform sequences to sequences or let's say sets to sets so you have an input set and what we've usually come to know as transformers are stacks of self attention layers and in the self attention layer what you would do is you would simply transform the input into an equal length output sequence and in the middle you'd have this attention mechanism and the attention mechanism essentially needs to compute the weight between every one of the inputs and every one of the outputs giving rise to an O of let's call that M I think they call it M squared so here you have M sequence length so an O of M squared compute and memory requirements. Now if M is small that's not a problem but if we go into the range of NLP usually so in in NLP we usually deal with M's in the order of I don't know 2000 1000 let's say 1000 so in the order of 1000 though we would want more ideally but in the in the computer vision our M is easily something like 50k which is about 224 squared so the M squared would be 50,000 squared and that just blows the memory of our computers maybe not the ones in the future but certainly the ones now. Alright so the problem here is that these transformer architectures take too much memory what this paper does is it goes ahead and it says couldn't we do a better job so usually in a transformer layer I'm gonna draw this again here as two layers what you'll do is you'll compute queries keys and values from the same input so you have your input right here and what you'll do is you'll compute queries keys and values from that input and those get mingled together in the attention and that gives you the next layer and you'll produce queries keys and values again queries especially are of size m by D keys are also of size m by D now if you multiply those two together and you transpose this you can clearly see that gives you a matrix of size M by M what this paper does is it says okay we can draw back actually on what the very initial transformers proposed the very initial transformers if you remember and if you don't you can go watch my video on it the very initial transformers were something like generative models that had an input sequence and they had an output sequence so the output sequence and maybe that wasn't fully completed yet right so you want to predict the next thing but there was a clear distinction between sequence a and sequence B now sequence B would do self-attention so they would have these stacks of self-attention layers with the quadratic thing and ultimately you'd want some kind of output here such that you know what the next word would be this is an it's sort of an autoregressive model however the input did not use self-attention it used cross attention so it was also a stack but it used cross attention so it went like sort of like this over and the way that works is so by the way think of machine translation right so here is the German sentence and here is the half finished English sentence that you would want to complete so if you want to know what's
here you need to attend to the English sentence so every part of the English sentence needs to attend to the English sentence but also every part of the English sentence needs to attend to the German sentence that's why you have these paths going over but none of the German sentence necessarily needs to attend to the English sentence so it could make sense, but it's, you know, it's a restriction where you say, okay, the information flows from the German sentence to the English sentence. So, and that results in this cross attention where the keys and the values are produced from, like, sequence A, but the queries to do the cross attention. So the queries for this particular flow of information are produced by the target sentence. And you'll notice something these now can be of different lengths, notably if the sentence B right now is much shorter than the sentence A, that would result in a shorter Q. And that would result not in an M by M here, but that would result in like an M by something smaller, right? And let's call this N and if N is much smaller than M, then you don't have this quadratic bottleneck. So that's exactly what this model does. Essentially, let me just get rid of all of this stuff again. This is akin to a few things. So it's akin to the original transformers. It's also akin to, if you remember the model D E T R, which is a detection model. And what we call the things there are learned queries. So what do we do here? We start with our goal, which is to have a latent array that is not huge. So N here is a size that we can handle in a regular transformer. And this stack, the top row here is just a regular self-attention transformer with all the drawbacks. But because we only have a Q of, we only have sequences of length N, the self-attention modules right here. So this is latent transformer. This is classic self-attention that we do here and here. And, you know, in all the stacks, in all the layers to follow, but we can handle it because N is relatively small. So in this paper, I think N is something like 500 or a 1000. It's something you can handle with current hardware. The problem is when you, when you know, you want to bring in an image, but this is quite smart. What do they do? They take the image and they just unroll it into a byte array. So now we have the M here and the M is huge. The M is 50,000. However, because we produce the queries from the latent array and not from the image itself, we won't get the quadratic blowup. So this is M and this is N and you can see that results in an N by M attention matrix and not an M by M attention matrix. So in this cross attention module, the data of the image comes in to the latent into the transformer. However, it is not transformed into an equally long sequence. It is transformed into a much shorter sequence, namely this latent state. On this latent state, we have a transformer transforming it into a new latent state. From that queries are generated to do cross attention again to the same image. So the same image will come in every single layer. The same image will come into the, into the architecture and so on. So if this reminds you of a recurrent neural network, that it is sort of a recurrent neural network, especially because they say you can also share these weights between repeats. If you share these weights, it is definitely a recurrent neural network where this here is the initial state, which you either learn or randomly initialize. In this case, I'm pretty sure this is learned though.
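To make the N by M shapes above concrete, here is a minimal numpy sketch of one such cross-attention step; the toy sizes, the single head, and the random projections are assumptions for illustration, not the paper's actual configuration:

```python
import numpy as np

# Cross-attention with queries from the latent array, keys/values from the data.
M, N, D = 5000, 256, 64             # data length, latent length, channels (toy)

rng = np.random.default_rng(0)
latent = rng.normal(size=(N, D))    # the small latent array ("sequence B")
data = rng.normal(size=(M, D))      # the huge unrolled input ("sequence A")

Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
Q, K, V = latent @ Wq, data @ Wk, data @ Wv

A = Q @ K.T / np.sqrt(D)            # attention logits: shape (N, M), not (M, M)
A = np.exp(A - A.max(axis=-1, keepdims=True))
A /= A.sum(axis=-1, keepdims=True)  # softmax over the M data positions
new_latent = A @ V                  # back to shape (N, D)
print(A.shape, new_latent.shape)    # (256, 5000) (256, 64)

# The point of the asymmetry, at image scale (float32 bytes):
M_img, N_lat = 224 * 224, 1024
print("self-attention: ", 4 * M_img**2 / 1e9, "GB")       # ~10 GB per map
print("cross-attention:", 4 * N_lat * M_img / 1e9, "GB")  # ~0.2 GB
```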
I might have misread. So this concept, again, it relates to RNNs. In fact, it is an RNN if you share the weights. And it also relates to learned queries, as in DETR: so here we have learned queries, as opposed to generated queries. The learned queries, they have no clue about the incoming data. So what you generate here is just kind of a generic set of queries. Like what would you know, what would you like to know about this incoming data point? And you have a thousand things that you can want to know and you have, I don't know, 50,000 things to attend to. So you're going to choose a thousand criteria, right, to gather from that input data. Now, the way attention works, right, is the queries, you have a set of queries, Q, and you have a set of keys down here, a bunch of keys, more than queries, and every query exposes sort of a vector and every key exposes a vector. And the information is routed by means of highest or high inner product. So you would route things that have a high inner product together like these two. Yeah, those are the ones that you would route. So every key potentially has a, no, not potentially, every key has a vector associated with it. So the queries essentially say, what kind of things I would like to know of the incoming data. And the keys say, for each pixel in the data, what kind of things that particular pixel offers to the model. If you just do this once, you might get some generic information, but then you get to do it again. And you will notice that the queries here, the later queries, are a result of that processing. So the data comes through here, right, and influences these next queries. Therefore, these next queries here can be dependent on the earlier data. So you can pretty easily see that, you know, now, the next time you're going to attend to this data, you do this in an informed fashion, you already kind of know what's in there. So you refine what you would like to know about the data and so on, you can refine and refine, you can ask for more and more specific things, the more you learn about the data. So this is really a process of learning more and more about the data in a dynamic way where you can say what you would like to know. And, you know, this, I think it's a great idea. It might be refined in the future, but it certainly does also, you know, make sense. And it also solves the kind of quadratic bottleneck. Oh, wait, I almost forgot I had a visual demonstration of how the quadratic bottleneck here is solved. Bear with me. Here's a matrix, it's M by M. Now watch. Problem solved. All right. So by the way, the lower one is supposed to represent N by M. I did not write that down. Okay. So this not only allows you to overcome this quadratic bottleneck, it also allows you to build much more of a dynamic model of the data. It also allows you to build much deeper transformers. So I believe their best architecture here had 40, sorry, 48 layers of transformer, which, you know, we can do in kind of NLP, but it takes a lot of hardware. And when they also share the weights, their number of parameters in these things stays comparable to a standard ResNet.
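Putting these pieces together (a learned latent array, repeated cross-attention to the same input, latent self-attention in between, and the same modules reused across repeats), the control flow can be sketched roughly like this in PyTorch; the module choices and hyperparameters are assumptions for illustration, not the official DeepMind implementation:

```python
import torch
import torch.nn as nn

class PerceiverSketch(nn.Module):
    # Structural sketch only: the latent array is LEARNED (as opposed to
    # generated from the data), the same input is cross-attended on every
    # repeat, and because the same modules are reused inside the loop, the
    # weights are shared across repeats, making this effectively an RNN whose
    # state is the latent array and whose "input" is the image each time.
    def __init__(self, dim=256, n_latents=512, repeats=8):
        super().__init__()
        self.latent = nn.Parameter(torch.randn(n_latents, dim))  # learned queries
        self.cross = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.latent_tf = nn.TransformerEncoder(layer, num_layers=6)
        self.repeats = repeats

    def forward(self, data):              # data: (batch, M, dim), M can be huge
        z = self.latent.expand(data.shape[0], -1, -1)   # (batch, N, dim)
        for _ in range(self.repeats):
            # queries from the latent, keys/values from the data: cost O(N*M)
            z = self.cross(z, data, data, need_weights=False)[0] + z
            z = self.latent_tf(z)         # O(N^2) latent self-attention
        return z.mean(dim=1)              # e.g. pooled for classification

with torch.no_grad():                     # demo only; M kept modest for speed
    x = torch.randn(2, 10000, 256)        # an unrolled input, think 224*224 pixels
    print(PerceiverSketch()(x).shape)     # torch.Size([2, 256])
```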
So yeah, pretty cool. So they apply this to pictures, they apply this to videos, they apply this to audio, they apply it to video and audio together, they apply it to 3D point clouds. Though one has to say for video, they don't actually put the entire video in, so this here isn't the entire video. I think they rather put kind of little time-space chunks of the video in it. So it doesn't solve yet all the problems with transformers. It's still, if a data point is huge, you won't get it in there, simply by the fact that it is linearly huge. What you will solve is the fact that things are quadratically huge. The last thing to do is to pay attention to this thing, positional encodings. Now, the way they do positional encodings is, so now we have like a fully independent, like a data modality independent architecture, right? It's important to realize this. This thing here has nothing to do with an image, like is it an image? Who knows, right? We don't care. We simply, this is the array of pixels. This is simply the unrolled image. There is no convolutional filter, there's no patching or batching or anything. There's just the image or it's the audio data, right? It's like sample after sample of audio data and so on. You can even think of a situation where you would feed in different parts of the data from time step to time step, in which case it really becomes like a recurrent neural network. But the point is the transformers, they are invariant to position. So if I feed one, two, three, four, five into a transformer, it will do exactly the same thing as if I feed three, one, two, four, five. That is not much of a permutation, but it is one. So it is invariant. Now that stifles it because we, you know, there is something to something being in a certain location, right? Especially if you think of text, word order matters and so on. But there's a clear distinction. We don't want to build these things into the architecture, but we want to give the model the possibility to exploit that information because clearly it's there, like a piece of text is not just a set. It is an actual string of ordered words. So what do we do? We give positional encodings with the input and positional encodings, you know, have been used all over the place. Transformers specifically need them. The way this paper does positional encodings is much like they do it in the first transformer paper, and that is by Fourier features. So if you have five inputs right here, you build up kind of a Fourier bank of frequencies. So this is the lowest frequency, something like this, like a sine wave, and then a higher frequency. Well, five probably wasn't the optimal thing to demonstrate this. So by kind of indexing, so here, if we look at the position number two right here, it has like, if we just consider this, no, not binary, like 0.9, 0.9, minus one. That's kind of the encoding. That's the positional encoding of that location. And if we look at three, it's 0.9, minus one, one. So you can see that, with this kind of positional encoding, as opposed to a learned positional encoding, what you can do is you can always detect when two things are close together. That means that in the lower frequencies, they will share the same number. But you can also do very high resolution: you go to the highest frequencies, and if they're different there but match all of the frequencies above them, that means they're like right next to each other.
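A minimal sketch of such Fourier positional features along one dimension; the particular frequency bands here are an assumption for illustration, the paper parameterizes its bands concretely:

```python
import numpy as np

def fourier_features(positions, n_bands=4, max_pos=64):
    # One sine/cosine pair per frequency band, lowest frequency first.
    freqs = 2.0 ** np.arange(n_bands)                  # 1, 2, 4, 8 cycles
    angles = 2 * np.pi * positions[:, None] * freqs / max_pos
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

enc = fourier_features(np.arange(64, dtype=float))
# Nearby positions nearly agree in the low-frequency entries and differ most
# in the high-frequency ones, which is the "close together" property above:
print(np.round(enc[2], 2))
print(np.round(enc[3], 2))
```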
So that's how you do positional encoding with Fourier features. Again, I discuss this at length in my Attention Is All You Need video. The Fourier features also have the additional benefit that you don't rely on learned encodings, which means you don't rely on the fact that you have kind of an exact or maximum sequence length. So yeah, I mean, you still have kind of a maximum here. But I like this more because it's sort of independent, it's one less thing to learn. And the learning happens in the processing itself. So in terms of experiments, it's pretty simple. In vision, they are on par with something like a ResNet 50. And they are, you know, they're doing pretty well in vision without any sort of assumption that the input data is an image, right? That's the, that's the crazy part. So other than the position encodings, which are the Fourier features in two dimensions, there is nothing here saying this is an image, it's simply an array of pixels. I think that's crazy. And, sorry, this is a visualization of the attention maps. So in this model, specifically, what they do is layer one has a set of weights, then layers two to, I think, seven have a different set of weights, and then layer eight has another set of weights. So layer one is the blue here, layers two to seven share the weights, they're green. And the last layer, I don't have, do I have orange here? Okay. And you can see that these are the attention maps of different channels. And they stress that they don't overlay it on the image. So the attention map in the first layer actually really attends to the image pixels. You can see the dog clearly in many, many of these attention maps right here; it clearly attends to parts of the dog. And it seems that it can do sort of edge, no, it kind of attends to the intensity of the pixels in the first layer. Then in this second to seventh layer, attention maps look like this. So they look like sort of a grid. So they heavily rely on these positional encodings in order to build up this grid. However, this grid is not always the same; it's sort of different for different things. And then in the last layer, again, my question would actually be, well, I see that these things are different from channel to channel. So these are the different channels right here. But how different are they from input to input? Like, has the model just kind of learned a general sequence of attention maps that works well for all possible images? Because it's kind of suspicious, right? So my question would be how much these attention maps really depend on the input versus how much they are just general attention maps. And so I can totally see that this model might just do all the work in the latent transformer by simply having so many layers, and that the attention isn't too important, like it would always do the same sort of attention, no matter what the input is, and I can see a model like that totally performing well. So in order for me to believe that this idea really works as advertised, namely that, you know, the model itself selects what it wants to attend to, iteratively informed by the data and so on, it would be cool to see that these things somehow depend on the data, because this grid pattern right now tells me that maybe they don't.
Okay, so the last thing: they also apply this, as I said, to audio, video, 3D point clouds, and I think they outperform other methods in these. So they reach state of the art in a bunch of them, which is, you know, pretty cool. Of course, image computer vision has been sort of the prime or one of the prime disciplines of deep learning research, so that's maybe a bit more competitive. Last thing I want to show here is the ablations. So they find specifically that, you know, the number of latent variables, which is, you know, the size of the Q, the N, so this is what we need to keep small in order to avoid this quadratic bottleneck, you can pretty clearly see that as this goes up, performance goes up. So this at least validates, you know, our intuition that if we could do bigger transformers, it probably would be a good idea. Number of attends, I think that is how many times the image goes into the structure. Also here, the more the better. And number of transformers per attend, that's, you know, how many in-between self-attention layers you have per time you attend the image. So that gives your model time to process and time to decide what to attend to next time. Also here we see a rise, though it would be interesting to see like an interaction term between these two things that would tell us if it's just about making the model deeper or not. Okay, so that was all I had to say. You can kind of check out the attention maps they have here themselves, they have them for audio, they have them here, I think, for the video. And also there are a bunch of experimental details that are also pretty cool. However, I just think it's a cool idea. And I'm excited to see where people take this. All right, that was it from me. I'll see you next time. Bye bye.
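The three ablation knobs just discussed fit together roughly as follows; the config names and values below are hypothetical, chosen only to reproduce the 48-layer depth mentioned earlier:

```python
# Hypothetical hyperparameter block tying the ablation axes together.
config = {
    "num_latents": 512,            # N: bigger helps, but latent self-attention
                                   # still costs O(N^2)
    "num_attends": 8,              # how often the input is cross-attended
    "self_attends_per_attend": 6,  # latent transformer depth between attends
}
# The last two knobs interact: both make the model deeper, which is exactly
# the interaction term the transcript asks about.
depth = config["num_attends"] * config["self_attends_per_attend"]
print(depth)  # 48
```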
[ { "start": 0, "end": 5.8, "text": " Hi there, how is everyone doing? Today we'll look at the Perceiver general" }, { "start": 5.8, "end": 11.64, "text": " perception with iterative attention by Andrew Yegel, Felix Gimino, Andrew Brock," }, { "start": 11.64, "end": 18.48, "text": " Andrew Sizzerman, Oriol Vinyls and Jao Carrera of DeepMind. This paper on a" }, { "start": 18.48, "end": 25.32, "text": " high level describes a model called the Perceiver and what this model does is it" }, { "start": 25.32, "end": 32.4, "text": " interleaves latent self-attention mechanism with cross-attention" }, { "start": 32.4, "end": 38.88, "text": " mechanism and so it is a transformer and the secret is that the data only enters" }, { "start": 38.88, "end": 43.28, "text": " the transformer through this cross-attention mechanism that allows the" }, { "start": 43.28, "end": 49.08, "text": " model to have the latent array be of significantly lower size than the data" }, { "start": 49.08, "end": 55.68, "text": " array and this solves in part the transformer's quadratic memory and compute bottleneck." }, { "start": 55.68, "end": 63.519999999999996, "text": " The image comes in or the data rather comes in multiple times" }, { "start": 63.519999999999996, "end": 69.32, "text": " through this stack and the weights can be shared making it essentially a" }, { "start": 69.32, "end": 76.52, "text": " recurrent neural network. This model here works for any modality so the paper not" }, { "start": 76.52, "end": 82.96, "text": " only does images but videos and audio and point clouds and you almost have to" }, { "start": 82.96, "end": 87.8, "text": " change pretty much nothing about the input in order for the model to" }, { "start": 87.8, "end": 93.6, "text": " work. This is a pretty big step towards first of all making transformers" }, { "start": 93.6, "end": 100.12, "text": " more deep and second of all applying the same models to very very different" }, { "start": 100.12, "end": 106.24, "text": " modalities of data. We'll dive into the paper, we'll look at how it's done, it's" }, { "start": 106.24, "end": 112.03999999999999, "text": " actually a fairly simple idea so shouldn't take us too long I always say" }, { "start": 112.03999999999999, "end": 118.56, "text": " that but maybe today we'll achieve it. If you like content like this yeah tell me" }, { "start": 118.56, "end": 123.11999999999999, "text": " how you feel in the comments, leave a like, tell your friends about it and let's" }, { "start": 123.11999999999999, "end": 130.51999999999998, "text": " go. So they motivate the name, the name Perceiver it's not really tied to" }, { "start": 130.51999999999998, "end": 135.32, "text": " anything they motivate it by saying biological systems understand the" }, { "start": 135.32, "end": 140.44, "text": " world by simultaneously processing high dimensional inputs from modalities as" }, { "start": 140.44, "end": 147.4, "text": " diverse as vision, audition, touch, proprioception, etc. The perception" }, { "start": 147.4, "end": 151.68, "text": " models used in deep learning on the other hand are designed for individual" }, { "start": 151.68, "end": 156.07999999999998, "text": " modalities often rely on those domain specific assumptions such as the local" }, { "start": 156.07999999999998, "end": 161.18, "text": " grid structures exploited by virtually all existing vision models. So what do" }, { "start": 161.18, "end": 167.6, "text": " they mean? 
They mean if we have an image and the image is of a not a cat a house" }, { "start": 167.6, "end": 175.92000000000002, "text": " what did you think? So the image is of a house and if we have an image processing" }, { "start": 175.92000000000002, "end": 181, "text": " pipeline usually what it will do is it will assume that the image is some sort" }, { "start": 181, "end": 186.92000000000002, "text": " of grid and that you can localize any pixel by its XY coordinate and also that" }, { "start": 186.92, "end": 192.44, "text": " the pixel is in some kind of relation to the pixel around it. We usually build" }, { "start": 192.44, "end": 197.07999999999998, "text": " models according to that so a convolutional neural network very" }, { "start": 197.07999999999998, "end": 203.79999999999998, "text": " explicitly will slide over a filter over the image with all shared weights and" }, { "start": 203.79999999999998, "end": 209.56, "text": " therefore it directly says that what matters to a pixel is the pixels around" }, { "start": 209.56, "end": 214.04, "text": " it and only in the upper layers and after some pooling do these receptive" }, { "start": 214.04, "end": 220.56, "text": " fields grow such that more and more information across larger distances is" }, { "start": 220.56, "end": 227.23999999999998, "text": " incorporated. On the other hand something like a visual transformer like the VIT" }, { "start": 227.23999999999998, "end": 232.48, "text": " what it will do is it will do transformer like attention but because" }, { "start": 232.48, "end": 239.2, "text": " it can't because the images are so large because whatever 224 by 224 pixels are" }, { "start": 239.2, "end": 244.79999999999998, "text": " just too much to put into one transformer it will simply subdivide the" }, { "start": 244.79999999999998, "end": 250.83999999999997, "text": " image into these patches and therefore it also essentially says it will take" }, { "start": 250.83999999999997, "end": 257.52, "text": " each patch and make a vector out of it so it also essentially says that whatever" }, { "start": 257.52, "end": 263.52, "text": " pixels are close together they go into this one vector so they're treated as a" }, { "start": 263.52, "end": 268.96, "text": " group. So this paper says that all the current architectures that deal" }, { "start": 268.96, "end": 278.15999999999997, "text": " with computer vision somehow have this built in. However the the so other" }, { "start": 278.15999999999997, "end": 282.44, "text": " models have that too other modalities like audio video and so on and the" }, { "start": 282.44, "end": 290.08, "text": " perceiver here is supposed to alleviate that so they say it induces helpful" }, { "start": 290.08, "end": 294.71999999999997, "text": " inductive biases but also lock models to individual modalities. In this paper we" }, { "start": 294.72, "end": 298.96000000000004, "text": " introduce the perceiver a model that builds upon transformers and hence makes" }, { "start": 298.96000000000004, "end": 304.32000000000005, "text": " few architectural assumptions about the" }, { "start": 304.32000000000005, "end": 308.32000000000005, "text": " relationship between its inputs but also scales to hundreds of thousands of" }, { "start": 308.32000000000005, "end": 316.52000000000004, "text": " inputs like conv nets. 
So transformers notably are models that transform" }, { "start": 316.52000000000004, "end": 321.56, "text": " sequences to sequences or let's say sets to sets so you have an input set and" }, { "start": 321.56, "end": 326.84, "text": " what we've usually come to know as transformers are stacks of self" }, { "start": 326.84, "end": 331.16, "text": " attention layers and in the self attention layer what you would do is you" }, { "start": 331.16, "end": 337.36, "text": " would simply transform the input into an equal length output sequence and in" }, { "start": 337.36, "end": 342.04, "text": " the middle you'd have this attention mechanism and the attention mechanism" }, { "start": 342.04, "end": 346.76, "text": " essentially needs to compute the weight between every one of the inputs and" }, { "start": 346.76, "end": 354.32, "text": " every one of the outputs giving rise to an O of let's call that M I think they" }, { "start": 354.32, "end": 360.32, "text": " call it M squared so here you have M sequence length so an O of M squared" }, { "start": 360.32, "end": 368.4, "text": " compute and memory requirements. Now if M is small that's not a problem but if we" }, { "start": 368.4, "end": 375.64, "text": " go into the range of NLP usually so in in NLP we usually deal with M's in the" }, { "start": 375.64, "end": 384.91999999999996, "text": " order of I don't know 2000 1000 let's say 1000 so in the order of 1000 though we" }, { "start": 384.91999999999996, "end": 391.68, "text": " would want more ideally but in the in the computer vision our M is easily" }, { "start": 391.68, "end": 399.15999999999997, "text": " something like 50k which is about 224 squared so the M squared would be" }, { "start": 399.15999999999997, "end": 405.47999999999996, "text": " 50,000 squared and that just blows the memory of our computers maybe not the" }, { "start": 405.48, "end": 411.40000000000003, "text": " ones in the future but certainly the ones now.
Alright, so the problem here is" }, { "start": 411.40000000000003, "end": 417.68, "text": " that these transformer architectures take too much memory. What this paper does" }, { "start": 417.68, "end": 424.76, "text": " is it goes ahead and says, couldn't we do a better job? So usually in a" }, { "start": 424.76, "end": 430.36, "text": " transformer layer, I'm gonna draw this again here as two layers, what you'll do" }, { "start": 430.36, "end": 437.6, "text": " is you'll compute queries, keys and values from the same input. So you have" }, { "start": 437.6, "end": 443.40000000000003, "text": " your input right here and what you'll do is you'll compute queries, keys and" }, { "start": 443.40000000000003, "end": 449.2, "text": " values from that input, and those get mingled together in the attention, and" }, { "start": 449.2, "end": 455.24, "text": " that gives you the next layer, and you'll produce queries, keys and values again." }, { "start": 455.24, "end": 464.96000000000004, "text": " Queries especially are of size M by D, keys are also of size M by D. Now if you" }, { "start": 464.96000000000004, "end": 470, "text": " multiply those two together and you transpose this, you can clearly see" }, { "start": 470, "end": 480.12, "text": " that gives you a matrix of size M by M. What this paper does is it says," }, { "start": 480.12, "end": 487.56, "text": " okay, we can actually draw back on what the very initial transformers proposed." }, { "start": 487.56, "end": 492.2, "text": " The very initial transformers, if you remember, and if you don't you can go" }, { "start": 492.2, "end": 496.88, "text": " watch my video on it, the very initial transformers were something like" }, { "start": 496.88, "end": 504.32, "text": " generative models that had an input sequence and they had an output sequence." }, { "start": 504.32, "end": 508.96, "text": " So the output sequence maybe wasn't fully completed yet, right, so you" }, { "start": 508.96, "end": 512.28, "text": " want to predict the next thing, but there was a clear distinction between" }, { "start": 512.28, "end": 520.76, "text": " sequence A and sequence B. Now sequence B would do self-attention, so they would" }, { "start": 520.76, "end": 525.1999999999999, "text": " have these stacks of self-attention layers with the quadratic thing, and" }, { "start": 525.1999999999999, "end": 529.92, "text": " ultimately you'd want some kind of output here such that you know what the" }, { "start": 529.92, "end": 534.88, "text": " next word would be. It's sort of an autoregressive model. However the" }, { "start": 534.88, "end": 542.04, "text": " input did not use self-attention, it used cross attention. So it was also a stack," }, { "start": 542.04, "end": 550.48, "text": " but it used cross attention, so it went sort of like this, over. And the way" }, { "start": 550.48, "end": 555.24, "text": " that works is, so by the way, think of machine translation, right? So here is the" }, { "start": 555.24, "end": 559.56, "text": " German sentence and here is the half finished English sentence that you would" }, { "start": 559.56, "end": 565.02, "text": " want to complete. So if you want to know what's here you need to attend to the" }, { "start": 565.02, "end": 570.2399999999999, "text": " English sentence, so every part of the English sentence needs to attend to the" }, { "start": 570.2399999999999, "end": 575.3599999999999, "text": " English sentence, but also every part of the English sentence needs to attend to" }, { "start": 575.3599999999999, "end": 581.28,
"text": " the German sentence that's why you have these paths going over but none of the" }, { "start": 581.28, "end": 585.6199999999999, "text": " German sentence necessarily needs to attend to the English sentence so it" }, { "start": 585.62, "end": 590.04, "text": " It could make sense, but it's, you know, it's a restriction where you say, okay," }, { "start": 590.04, "end": 593.94, "text": " the information flows from the German sentence to the English sentence." }, { "start": 594.3, "end": 600.3, "text": " So, and that results in this cross attention where the keys and the values are" }, { "start": 600.3, "end": 605.84, "text": " produced from send like sequence a, but the queries to do the cross attention." }, { "start": 605.84, "end": 612.28, "text": " So the queries for this particular flow of information are produced by the target" }, { "start": 612.28, "end": 612.66, "text": " sentence." }, { "start": 612.66, "end": 617.06, "text": " And you'll notice something these now can be of different lengths," }, { "start": 617.06, "end": 621.18, "text": " notably if the sentence B right now is much shorter than the sentence," }, { "start": 621.2199999999999, "end": 624.38, "text": " a that would result in a shorter queue." }, { "start": 624.5799999999999, "end": 630.74, "text": " And that would result not in an M by M here, but that would result in like an M" }, { "start": 631.14, "end": 633.74, "text": " by something smaller, right?" }, { "start": 634.18, "end": 639.78, "text": " And let's call this N and if N is much smaller than M, then you don't have this" }, { "start": 639.78, "end": 641.86, "text": " quadratic bottleneck." }, { "start": 641.86, "end": 644.54, "text": " So that's exactly what this model does." }, { "start": 644.54, "end": 648.5, "text": " Essentially, let me just get rid of all of this stuff again." }, { "start": 649.98, "end": 652.54, "text": " This is akin to a few things." }, { "start": 652.54, "end": 654.34, "text": " So it's akin to the original transformers." }, { "start": 654.34, "end": 663.02, "text": " It's also akin to, if you remember the model D E T R, which is a detection model." }, { "start": 663.34, "end": 668.34, "text": " And what we call the things there are learned queries." }, { "start": 668.58, "end": 670.38, "text": " So what do we do here?" }, { "start": 670.38, "end": 676.9, "text": " We start with our goal is to be to have a latent array that is not huge." }, { "start": 676.9, "end": 681.66, "text": " So N here is a size that we can handle in a regular transformer." }, { "start": 682.7, "end": 690.1, "text": " And this stack, the top row here is just a regular self-attention transformer" }, { "start": 690.1, "end": 691.7, "text": " with all the drawbacks." }, { "start": 692.7, "end": 698.7, "text": " But because we only have a queue of, we only have sequences of length N, the" }, { "start": 698.7, "end": 701.1, "text": " self-attention modules right here." }, { "start": 701.1, "end": 702.6600000000001, "text": " So this is latent transformer." }, { "start": 702.7, "end": 707.34, "text": " This is classic self-attention that we do here and here." }, { "start": 708.3000000000001, "end": 713.5, "text": " And, you know, in all the stacks, in all the layers to follow, but we can handle" }, { "start": 713.5, "end": 716.0600000000001, "text": " it because N is relatively small." }, { "start": 716.3000000000001, "end": 721.38, "text": " So in this paper, I think N is something like 500 or a 1000." 
}, { "start": 721.98, "end": 724.46, "text": " It's something you can handle with current hardware." }, { "start": 724.46, "end": 730.58, "text": " The problem is when you, when you know, you want to bring in an image, but" }, { "start": 730.58, "end": 731.86, "text": " this is quite smart." }, { "start": 732.0600000000001, "end": 732.9000000000001, "text": " What do they do?" }, { "start": 732.9000000000001, "end": 737.7, "text": " They take the image and they just unroll it into a byte array." }, { "start": 737.98, "end": 740.82, "text": " So now we have the M here and the M is huge." }, { "start": 740.82, "end": 742.3000000000001, "text": " The M is 50,000." }, { "start": 742.5400000000001, "end": 748.0600000000001, "text": " However, because we produce the queries from the latent array and not from the" }, { "start": 748.0600000000001, "end": 752.62, "text": " image itself, we won't get the quadratic blowup." }, { "start": 752.62, "end": 758.14, "text": " So this is M and this is N and you can see that results in an N by M attention" }, { "start": 758.14, "end": 761.34, "text": " matrix and not an M by M attention matrix." }, { "start": 761.74, "end": 769.22, "text": " So in this cross attention module, the data of the image comes in to the" }, { "start": 769.26, "end": 771.46, "text": " latent into the transformer." }, { "start": 772.1, "end": 776.26, "text": " However, it is not transformed into an equally long sequence." }, { "start": 776.26, "end": 780.1, "text": " It is transformed into a much shorter sequence, namely this latent state." }, { "start": 780.1, "end": 784.0600000000001, "text": " On this latent state, we have a transformer transforming it into a new latent state." }, { "start": 784.5400000000001, "end": 789.26, "text": " From that queries are generated to do cross attention again to the same image." }, { "start": 789.26, "end": 792.5, "text": " So the same image will come in every single layer." }, { "start": 792.5400000000001, "end": 799.38, "text": " The same image will come into the, into the architecture and so on." }, { "start": 799.78, "end": 804.38, "text": " So if this reminds you of a recurrent neural network, that it is sort of a" }, { "start": 804.38, "end": 807.98, "text": " recurrent neural network, especially because they say you can also shape" }, { "start": 807.98, "end": 810.14, "text": " these weights between repeats." }, { "start": 810.38, "end": 814.34, "text": " If you share these weights, it is definitely a recurrent neural network" }, { "start": 814.58, "end": 820.58, "text": " where this here is the initial state, which you either learn or randomly initialize." }, { "start": 821.0600000000001, "end": 825.0600000000001, "text": " In this case, I'm pretty sure this is learned though." }, { "start": 825.46, "end": 826.94, "text": " I might have misread." }, { "start": 827.82, "end": 831.46, "text": " So this concept, again, it relates to RNNs." }, { "start": 831.5, "end": 833.26, "text": " In fact, it is an RNN." }, { "start": 833.26, "end": 837.46, "text": " If you share the weights, it relates to learn, which is a recurrent neural" }, { "start": 837.46, "end": 841.26, "text": " network, or aogendos that is part of this corollary, where you can" }, { "start": 841.26, "end": 846.7, "text": " distinguish by different stock parts from the occasional ANDs." }, { "start": 847.1, "end": 851.82, "text": " So here, we have, there's two learning Understands then we have" }, { "start": 851.82, "end": 853.7800000000001, "text": " two learning queries." 
}, { "start": 853.86, "end": 859.0600000000001, "text": " That'll just get you through, will show you basically how many queries" }, { "start": 859.0600000000001, "end": 864.0600000000001, "text": " in two learned queries, as opposed to generated queries." }, { "start": 864.06, "end": 869.66, "text": " queries, they have no clue about the incoming data. So what you generate here is just kind of a" }, { "start": 869.66, "end": 875.3399999999999, "text": " generic set of queries. Like what would you know, what would you like to know about this incoming" }, { "start": 875.3399999999999, "end": 881.26, "text": " data point? And you have a thousand things that you can want to know and you have, I don't know," }, { "start": 881.26, "end": 890.38, "text": " 50,000 things to attend to. So you're going to choose a thousand criteria, right, to gather from" }, { "start": 890.38, "end": 897.1, "text": " that input data. Now, the way attention works, right, is the queries, you have a set of queries," }, { "start": 897.9, "end": 905.18, "text": " queue, and you have a set of keys down here, a bunch of keys, more than queries, and every" }, { "start": 905.18, "end": 913.18, "text": " query exposes sort of a vector and every key exposes a vector. And the information is routed" }, { "start": 913.18, "end": 919.74, "text": " by means of highest or high inner product. So you would route things that have a high inner product" }, { "start": 919.74, "end": 927.34, "text": " together like these two. Yeah, those are the ones that you would route. So every key potentially" }, { "start": 927.34, "end": 934.94, "text": " has a, not potentially every key has a vector associated with it. So the queries essentially say," }, { "start": 934.94, "end": 943.34, "text": " what kind of things I would like to know of the incoming data. And the keys, say for each pixel" }, { "start": 943.34, "end": 952.3000000000001, "text": " in the data, say what kind of things that particular pixel offers to the to the to the model." }, { "start": 953.1, "end": 957.98, "text": " If you just do this once, you might get some generic information, but then you get to do it" }, { "start": 957.98, "end": 966.7, "text": " again. And you will notice that the queries here, the later queries are a result of that processing." }, { "start": 966.7, "end": 974.86, "text": " So the data comes through through here, right, and influences these next queries. Therefore," }, { "start": 974.86, "end": 983.1800000000001, "text": " these next queries here can be dependent on the earlier data. So you can pretty easily see that," }, { "start": 983.1800000000001, "end": 988.1400000000001, "text": " you know, now, the next time you're going to attend to this data, you do this in an informed" }, { "start": 988.1400000000001, "end": 993.1800000000001, "text": " fashion, you already kind of know what's in there. So you refine what you would like to know" }, { "start": 993.18, "end": 999.26, "text": " about the data and so on, you can refine and refine, you can ask for more and more specific" }, { "start": 999.26, "end": 1006.54, "text": " things, the more you learn about the data. So this is really a process of learning more and" }, { "start": 1006.54, "end": 1012.8599999999999, "text": " more about the data in a dynamic way where you can say what you would like to know. And, you know," }, { "start": 1012.8599999999999, "end": 1020.14, "text": " this, I think it's a great idea. It might be refined in the future, but it certainly does." 
}, { "start": 1020.14, "end": 1026.62, "text": " Also, you know, it makes sense. And it also solves the kind of quadratic bottleneck. Oh," }, { "start": 1026.62, "end": 1033.18, "text": " wait, I almost forgot I had a visual demonstration of how the quadratic bottleneck here is solved." }, { "start": 1033.18, "end": 1039.9, "text": " Bear with me. Here's a matrix, it's M by M. Now watch." }, { "start": 1039.9, "end": 1052.0600000000002, "text": " Problem solved. All right. So by the way, the lower is supposed to represent N by M. I did not" }, { "start": 1052.0600000000002, "end": 1058.7, "text": " write that down. Okay. So this not only allows you to overcome this quadratic bottleneck, it also" }, { "start": 1058.7, "end": 1065.66, "text": " allows you to build much more of a dynamic model of the data. So you can see that the data is" }, { "start": 1065.66, "end": 1070.94, "text": " not only a dynamic model, but it also allows you to overcome this quadratic bottleneck. It also" }, { "start": 1070.94, "end": 1077.98, "text": " allows you to build much deeper transformers. So I believe their best architecture here had 40," }, { "start": 1078.5400000000002, "end": 1086.3000000000002, "text": " sorry, 48 layers of transformer, which, you know, we can do in kind of NLP, but it takes a lot of" }, { "start": 1086.3000000000002, "end": 1093.18, "text": " hardware. And when they also share the weights, their number of parameters in these things is not" }, { "start": 1093.18, "end": 1105.1000000000001, "text": " a standard ResNet. So yeah, pretty cool. So they apply this to pictures, they apply this to videos," }, { "start": 1105.1000000000001, "end": 1109.8200000000002, "text": " they apply this to audio, they apply it to video and audio together, they apply it to 3D point" }, { "start": 1109.8200000000002, "end": 1116.7, "text": " clouds. Though one has to say for video, they don't actually put the entire video into so that" }, { "start": 1116.7, "end": 1124.8600000000001, "text": " this here isn't the entire video. But they, I think they also put kind of little time space chunks" }, { "start": 1124.8600000000001, "end": 1131.42, "text": " of the video in it. So it doesn't solve yet all the problems with transformers. It's still," }, { "start": 1131.42, "end": 1136.94, "text": " if a data point is huge, you won't get it in there. Simply by the fact that is linearly huge." }, { "start": 1136.94, "end": 1147.18, "text": " What you will solve is the fact that things are quadratically huge. The last thing to do is to" }, { "start": 1147.18, "end": 1154.46, "text": " pay attention to this thing, positional encodings. Now, the way they do positional encodings is," }, { "start": 1155.26, "end": 1160.78, "text": " so now we have like a fully independent, like a data modality independent architecture, right?" }, { "start": 1160.78, "end": 1166.6200000000001, "text": " It's important to realize this. This thing here has nothing to do with an image, like is it" }, { "start": 1166.62, "end": 1173.02, "text": " an image? Who knows, right? We don't care. We simply, this is the array of pixels. This is" }, { "start": 1173.02, "end": 1182.06, "text": " simply the unrolled image. There is no convolutional filter, there's no patching or batching or" }, { "start": 1182.06, "end": 1187.9799999999998, "text": " anything. There's just the image or it's the audio data, right? It's like sample after sample of" }, { "start": 1187.9799999999998, "end": 1195.1799999999998, "text": " audio data and so on. 
This, you can even think of a situation where you would feed in different" }, { "start": 1195.18, "end": 1201.02, "text": " parts of the data from time step to time step, in which case it really becomes like" }, { "start": 1201.02, "end": 1214.0600000000002, "text": " a recurrent neural network. But the point is the transformers, they are invariant to position." }, { "start": 1214.0600000000002, "end": 1221.18, "text": " So if I feed one, two, three, four, five into a transformer, it will do exactly the same thing" }, { "start": 1221.18, "end": 1230.38, "text": " as if I feed three, one, two, four, five. That is not much of a permutation, but it is. So it is" }, { "start": 1230.38, "end": 1238.54, "text": " invariant. Now that is a problem, because, you know, there is something to something being in" }, { "start": 1238.54, "end": 1244.0600000000002, "text": " a certain location, right? Especially if you think of text, word order matters and so on." }, { "start": 1245.98, "end": 1250.38, "text": " But there's a clear distinction. We don't want to build these things into the architecture," }, { "start": 1250.38, "end": 1256.8600000000001, "text": " but we want to give the model the possibility to exploit that information, because clearly it's there;" }, { "start": 1256.8600000000001, "end": 1265.42, "text": " a piece of text is not just a set. It is an actual string of ordered words. So what do we do?" }, { "start": 1265.42, "end": 1271.5800000000002, "text": " We give positional encodings with the input, and positional encodings, you know, have been used all" }, { "start": 1271.5800000000002, "end": 1279.66, "text": " over the place. Transformers specifically need them. The way this paper does positional encodings" }, { "start": 1279.66, "end": 1285.42, "text": " is much like they do it in the first transformer paper, and that is by Fourier" }, { "start": 1285.42, "end": 1292.78, "text": " features. So if you have five inputs right here, you build up kind of a Fourier bank of frequencies." }, { "start": 1293.8200000000002, "end": 1299.74, "text": " So this is the lowest frequency, something like this, like a sine wave, and then a higher frequency." }, { "start": 1299.74, "end": 1308.78, "text": " Well, five probably wasn't the optimal thing to demonstrate this. So by kind of indexing, so here," }, { "start": 1308.78, "end": 1317.1, "text": " if we look at position number two right here, it has, if we just consider this binary," }, { "start": 1317.1, "end": 1326.22, "text": " it has, no, not binary, something like 0.9, 0.9, minus one. That's kind of the encoding. That's the" }, { "start": 1326.22, "end": 1334.86, "text": " positional encoding of that location. And if we look at three, it's 0.9, minus one, one." }, { "start": 1334.86, "end": 1341.1799999999998, "text": " So you can see that, with this kind of positional encoding, as opposed to a learned" }, { "start": 1341.1799999999998, "end": 1347.9799999999998, "text": " positional encoding, what you can do is you can always detect when two things are close together." }, { "start": 1347.9799999999998, "end": 1354.86, "text": " That means that in the lower frequencies, they will share the same number.
But you" }, { "start": 1354.86, "end": 1359.34, "text": " can also do very high resolution: you go to the highest frequencies, and if they're different" }, { "start": 1359.34, "end": 1365.1, "text": " there but they match in all of the lower frequencies, that means they're like right next" }, { "start": 1365.1, "end": 1370.22, "text": " to each other. So that's how you do positional encoding with Fourier features. Again, I discuss" }, { "start": 1370.22, "end": 1378.3799999999999, "text": " this at length in my Attention Is All You Need video. The Fourier features also have the additional" }, { "start": 1378.3799999999999, "end": 1384.9399999999998, "text": " benefit that you don't rely on learned encodings, which means you don't rely on the fact" }, { "start": 1384.94, "end": 1393.3400000000001, "text": " that you have kind of an exact or a maximum sequence length. So, yeah, I mean," }, { "start": 1393.3400000000001, "end": 1400.46, "text": " you still have kind of a maximum here. But I like this more because it's sort of independent," }, { "start": 1400.46, "end": 1406.8600000000001, "text": " it's one less thing to learn. And the learning happens in the processing itself. So in terms" }, { "start": 1406.8600000000001, "end": 1413.5800000000002, "text": " of experiments, it's pretty simple. In vision, they are on par with something like" }, { "start": 1413.58, "end": 1422.1399999999999, "text": " a ResNet-50. And, you know, they're doing pretty well in vision without any sort of" }, { "start": 1422.1399999999999, "end": 1430.22, "text": " assumption that the input data is an image, right? That's the crazy part. So other than" }, { "start": 1430.22, "end": 1436.46, "text": " the position encodings, which are the Fourier features in two dimensions, there is nothing here" }, { "start": 1436.46, "end": 1445.58, "text": " saying this is an image, it's simply an array of pixels. I think that's crazy. And sorry," }, { "start": 1449.58, "end": 1455.82, "text": " this is a visualization of the attention maps. So in this model, specifically, what they do is" }, { "start": 1455.82, "end": 1463.98, "text": " layer one has a set of weights, then layers two to, I think, seven have a different set of weights," }, { "start": 1463.98, "end": 1471.26, "text": " and then layer eight has another set of weights. So layer one is the blue here, layers two to seven" }, { "start": 1471.26, "end": 1479.98, "text": " share the weights, they're green. And the last layer, I don't have, do I have orange here? Okay." }, { "start": 1481.9, "end": 1488.06, "text": " And you can see that these are the attention maps of different channels. And they stress that they" }, { "start": 1488.06, "end": 1495.4199999999998, "text": " don't overlay it on the image. So the attention map in the first layer actually really attends to" }, { "start": 1495.4199999999998, "end": 1502.94, "text": " the image pixels, you can see the dog clearly in many, many of these attention maps right here," }, { "start": 1502.94, "end": 1510.46, "text": " like it clearly attends to parts of the dog. And it seems that it can do" }, { "start": 1510.46, "end": 1518.7, "text": " sort of edge detection, no, it kind of attends to the intensity of the pixels, right, in the first layer," }, { "start": 1518.7, "end": 1525.1000000000001, "text": " then in these second to seventh layers, the attention maps look like this.
So they look like sort of a" }, { "start": 1525.1000000000001, "end": 1532.78, "text": " grid. So they heavily rely on these positional encodings in order to build up this grid. However," }, { "start": 1532.78, "end": 1537.98, "text": " this grid is not always the same. It's sort of different from the image, and it's" }, { "start": 1537.98, "end": 1544.94, "text": " sort of different for different things. And then in the last layer, again, my" }, { "start": 1544.94, "end": 1550.22, "text": " question would actually be, I see that these things are different from channel to channel. So" }, { "start": 1550.22, "end": 1557.02, "text": " these are the different channels right here. But how different are they from input to input? Like," }, { "start": 1557.02, "end": 1563.74, "text": " has the model just kind of learned a general sequence of attention maps for all possible" }, { "start": 1563.74, "end": 1569.02, "text": " images that just works well? Because it's kind of suspicious, right, that" }, { "start": 1570.06, "end": 1576.06, "text": " these maps seem so regular. So my question would be, how much do these attention maps really depend" }, { "start": 1576.6200000000001, "end": 1587.02, "text": " on the input, versus how much are they just general attention maps? And so I can totally" }, { "start": 1587.02, "end": 1594.22, "text": " see that this model might just do all the work in the latent transformer by simply having so many" }, { "start": 1594.22, "end": 1600.78, "text": " layers, and that the attention isn't too important, like it would always do the same sort of attention," }, { "start": 1601.58, "end": 1609.42, "text": " no matter what the input is, and I can see a model like that totally performing well. So in order for" }, { "start": 1609.42, "end": 1614.22, "text": " me to be convinced that this idea really works as advertised, namely that, you know, the model" }, { "start": 1614.22, "end": 1618.54, "text": " selects itself what it wants to attend to, iteratively informed by the data and so on," }, { "start": 1619.66, "end": 1625.82, "text": " it would be cool to see that these things somehow depend on the data, because this grid pattern" }, { "start": 1625.82, "end": 1635.74, "text": " right now tells me that maybe they don't. Okay, so lastly, they also apply this, as I said," }, { "start": 1635.74, "end": 1642.8600000000001, "text": " to audio, video, 3D point clouds, and I think they outperform other methods in these. So they reach" }, { "start": 1642.86, "end": 1648.78, "text": " state of the art in a bunch of them, which is, you know, pretty cool. Of course," }, { "start": 1648.78, "end": 1657.74, "text": " computer vision has been sort of the prime, or one of the prime, disciplines of deep learning" }, { "start": 1657.74, "end": 1664.4599999999998, "text": " research. So that's maybe a bit more competitive. Last thing I want to show here is the ablations." }, { "start": 1664.4599999999998, "end": 1670.2199999999998, "text": " So they find specifically that, you know, the number of latent variables, which is," }, { "start": 1670.22, "end": 1677.82, "text": " you know, the size of the Q, the N, so this is what we need to keep small" }, { "start": 1677.82, "end": 1685.26, "text": " in order to avoid the quadratic bottleneck: you can pretty clearly see that as this goes up," }, { "start": 1685.26, "end": 1692.38, "text": " performance goes up.
So this at least validates, you know, our intuition that if we could do bigger" }, { "start": 1692.38, "end": 1701.42, "text": " transformers, it probably would be a good idea. Number of attends, I think, is how many times" }, { "start": 1701.42, "end": 1710.7, "text": " the image goes into the structure. Also here, the more the better," }, { "start": 1710.7, "end": 1717.18, "text": " and number of transformers per attend, that's, you know, how many in-between self attention layers" }, { "start": 1717.18, "end": 1723.42, "text": " do you have per time you attend the image. So that gives your model time to process and time" }, { "start": 1723.42, "end": 1732.14, "text": " to decide what to attend to next time. Also here we see a rise, though it would be" }, { "start": 1732.14, "end": 1739.9, "text": " interesting to see an interaction term between these two things, which would tell us if" }, { "start": 1739.9, "end": 1749.66, "text": " it's just about making the model deeper or not. Okay, so that was all I had to say. You can kind" }, { "start": 1749.66, "end": 1755.26, "text": " of check out the attention maps they have here yourselves; they have them for audio, they have" }, { "start": 1755.26, "end": 1761.98, "text": " them here, I think, for the video. And also there are a bunch of experimental details that are also" }, { "start": 1761.98, "end": 1770.06, "text": " pretty cool. However, I just think it's a cool idea. And I'm excited to see where people take this." }, { "start": 1770.06, "end": 1792.62, "text": " All right, that was it from me. I'll see you next time. Bye bye." } ]
MpdbFLXOOIw
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Supervised Contrastive Learning
[ "Science & Technology" ]
[ "deep learning", "machine learning", "supervised learning", "classification", "classifier", "labels", "pretraining", "unsupervised", "self-supervised", "representation learning", "representations", "hidden space", "loss function", "google", "mit", "imagenet" ]
The cross-entropy loss has been the default in deep learning for the last few years for supervised learning. This paper proposes a new loss, the supervised contrastive loss, and uses it to pre-train the network in a supervised fashion. The resulting model, when fine-tuned to ImageNet, achieves new state-of-the-art. https://arxiv.org/abs/2004.11362 Abstract: Cross entropy is the most widely used loss function for supervised training of image classification models. In this paper, we propose a novel training methodology that consistently outperforms cross entropy on supervised learning tasks across different architectures and data augmentations. We modify the batch contrastive loss, which has recently been shown to be very effective at learning powerful representations in the self-supervised setting. We are thus able to leverage label information more effectively than cross entropy. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. In addition to this, we leverage key ingredients such as large batch sizes and normalized embeddings, which have been shown to benefit self-supervised learning. On both ResNet-50 and ResNet-200, we outperform cross entropy by over 1%, setting a new state of the art number of 78.8% among methods that use AutoAugment data augmentation. The loss also shows clear benefits for robustness to natural corruptions on standard benchmarks on both calibration and accuracy. Compared to cross entropy, our supervised contrastive loss is more stable to hyperparameter settings such as optimizers or data augmentations. Authors: Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Supervised Contrastive Learning by people from Google Research and MIT. Now this paper proposes a new loss for supervised learning. And you might recognize that this is a big claim. So forever now we've basically used this cross entropy loss in order to do supervised training of neural networks. This paper proposes to replace that with the supervised contrastive loss. And let's jump straight into the results here. They say our supervised contrastive loss outperforms the cross entropy loss with standard data augmentations such as AutoAugment and RandAugment. So these are some of the previous state of the art data augmentation techniques used together with the cross entropy loss. And they say their supervised contrastive loss outperforms them. You can see here on ImageNet, which is the biggest vision benchmark or the most famous one, this new loss, the supervised contrastive loss, outperforms these other methods by something like a percent. One percent is a big improvement on ImageNet right now. So that's what they claim, and it is a big claim, right? We recognize, if this is true, this could be a game changer basically for all of supervised learning. And supervised learning is really the only thing right now in deep learning that works. So it could revolutionize the field. So here's the but. It is actually not a new loss to replace the cross entropy loss. And they do get to this pretty quickly. I don't think they're dishonest or lying or anything here. But if you start reading, you're like, wait, this is a new loss? It is not. It is a new way of pre-training the network for a classification task. And so let's look into this. So if you look at what it means to build a classifier, this is what you usually do. You do supervised cross entropy training: you have an image, and the image here is of a dog, you put it through your network, and you obtain a representation. So the representation here is this last layer, or the second-to-last layer. And you put that through a classification layer and then a softmax. And what you get as an output is basically a probability distribution. And let's say you have three classes here. There's dog, there's cat, and there's horse. And let's say the network isn't yet trained very well. So the probability for dog here is fairly low. So this is basically what the network thinks of that image, like which class does it belong to with what probability. You also have this label right here. So the label dog for that image, what you do with that is you make a one-hot vector. So that would look like this. So the one is at the position where the correct class is. And then the cross entropy loss takes all of this and does the following. There's a sum over all your classes. In this case, you have three classes. And let's call these the labels L. And you want to always take the label of the class times the log probability that the network assigns to this class. So you can quickly see that if the label is zero, so for all the incorrect classes, this entire term drops away. And only if the label is one, so only for the correct class, will this result in the log probability of the class where the label is the correct label. So in order to make this a loss, you actually have to put a negative sign in front of here, because this entire thing reduces to the log probability of the correct class. This is what you want to maximize. Therefore, since you want to minimize something, you need the negative, as written out below.
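To write that out, in standard notation rather than the paper's exact symbols (here $y_c$ is the one-hot label, $s_c$ the logit and $p_c$ the softmax probability for class $c$):

$$p_c = \frac{\exp(s_c)}{\sum_{c'} \exp(s_{c'})}, \qquad \mathcal{L}_{\mathrm{CE}} = -\sum_{c} y_c \log p_c = -\log p_{\text{correct class}}$$

And as a minimal runnable sketch of the same computation (the class scores are made up for illustration):

```python
import numpy as np

def cross_entropy(logits, label):
    # softmax: the shared normalization couples all classes together
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs = exp / exp.sum()
    # the loss is the negative log probability of the correct class;
    # lowering it raises probs[label], and the normalization
    # implicitly pushes every other class down
    return -np.log(probs[label])

scores = np.array([0.5, 2.0, -1.0])    # network outputs for [dog, cat, horse]
print(cross_entropy(scores, label=0))  # high loss: "dog" is not yet favored
```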
So you minimize the negative log probability of the correct class, which means you maximize the probability. If you've never looked at the cross entropy loss like this, it is important to notice that you might say, hey, all this does is pull this here up, right? And it doesn't do anything to the other ones. But you have to realize that this softmax operation, since this is a probability distribution, all of this is normalized to sum up to one. So implicitly, you will push these down through the normalization, right? So what this does is it pushes the correct class up, and it pushes the other classes down. Looking at it like this is going to be important later. Because if you look at what this representation here does, so again, you have the network produce a representation here. This is 2000 dimensional, and then it adds on top this classification layer, which is simply a linear layer, and then a softmax on top. So how you have to imagine this is that there is a representation space, this 2000 dimensional space, and the representations are made in such a way that, let's have three classes here, a linear classifier can separate them correctly. So here, this would be like a boundary. And then this would be another boundary. And this maybe would be another decision boundary. So you can see that a linear classifier can separate the classes well. That is the goal. If you use this softmax cross entropy loss, that is implicitly what will happen in the representation space W. All it cares about is that the classes are on one side of the decision boundary and everything else is on the other side of a decision boundary. So if the network isn't trained very well at the beginning, and you maybe have a sample of the green class here, it will push the network such that the representation of that sample will go onto the other side of this decision boundary, and it will push the decision boundary at the same time to make that happen more easily. Right, so it will optimize all of this at the same time. That's what you do, that's how you optimize the representations. So this work here, and another work, has said, wouldn't it be great if the representations and decision boundaries weren't just trained at the same time, but we learned good representations first, such that classifying them becomes very simple. And in essence, what this paper says is, if we have a representation space W, shouldn't images of the same class just be close together? So without caring about decision boundaries, we just want them to be close to each other, and we want them to be far apart from other classes. If that happens, you can see that a linear classifier is going to have a very easy time separating these classes later. So that's exactly what this paper does. It has a pre-training stage and a training stage. So in the pre-training stage, this is over here, supervised contrastive. In the pre-training stage, it simply tries to learn these representations, like down here, such that, without any decision boundary business, images of the same class are close together and images of different classes are far apart. Notice the subtle difference to the cross entropy loss, where you just care about them being on one or the other side of a decision boundary.
So that is stage one, and then in stage two, and this is where the cross entropy comes back in, you basically freeze the network. So you freeze these weights down here, these are frozen, you don't train them anymore. All you train is this one classification layer. So you actually freeze the representation layer here as well; you only train the classifier on top in stage two, but you train it using softmax and using the cross entropy loss. So you train the classifier in the old cross entropy way, using just normal supervised learning. So what we see here is that the stage one pre-training is what's training the network, and the cross entropy loss only trains the classifier. Right, so let's look at how this pre-training actually works. What it's using is a method called contrastive pre-training. Now for contrastive pre-training, and they have a little diagram up here, if you look at the classic way of doing it, you have to go to the unsupervised pre-training literature. People have kind of discovered that they can improve a neural network by pre-training it first in an unsupervised way. Some of these methods are also called self-supervised. So the advantage of self-supervised or unsupervised pre-training is that you don't need labels. What you want to do is simply make the representation space somewhat meaningful, right? So you simply want the network to learn representations of images that are somehow meaningful. And here's how you do it. So you want to take an image, like this dog here, and then you want to randomly augment this image, which just means you want to produce different versions of the same image. In this case down here, this is a random crop; it's cropped about here, it's still the same image, but it's kind of a different version of it. In the case here, you can see that it's flipped left-right and the brightness is slightly increased. So these are just different versions of the same image. And what you also want are what's called negatives. Negatives are simply different images from your data set, right? For example, this or this or this. You don't care, as long as they're different, you just sample a bunch. And what you want, so this is your embedding space, and they make a big deal here that the embeddings are normalized, and that seems to work better, but this is not necessary for the idea to work. The big idea is that if you have an image right here, let's say this is the dog, and the blue dots here are the augmented versions of the same dog, and the green dots are all the other images in the data set, then what you want is that all the images that come from the original same image are pulled close together and everything else is pushed apart. Right? So that's why these are called positives and these are called negatives. So contrastive training basically means that you always have a set that you pull together in representation space and a set, called the negatives, that you push apart. So the network basically learns about these random transformations that you have here. The network learns what it means to come from the same image. It learns to be robust to these kinds of transformations. It learns about the data in general and how to spread the data in embedding space with these transformations. So this usually ends up in a pretty good representation space, and people have been using this in recent years in order to gain significant improvements.
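A minimal sketch of such a contrastive objective for a single anchor, assuming L2-normalized embeddings and a temperature, in the style of InfoNCE-type losses (the names and numbers are illustrative; individual self-supervised methods differ in the details):

```python
import numpy as np

def contrastive_loss(anchor, positives, negatives, tau=0.1):
    """One-anchor contrastive loss sketch.

    anchor: (d,), positives: (P, d), negatives: (N, d);
    all rows assumed L2-normalized, so dot products are cosine similarities.
    """
    pos_sim = positives @ anchor / tau         # similarities to pull up
    neg_sim = negatives @ anchor / tau         # similarities to push down
    all_sim = np.concatenate([pos_sim, neg_sim])
    log_denom = np.log(np.exp(all_sim).sum())  # normalizer over everything
    # average negative log-softmax weight of each positive:
    # minimizing pulls positives toward the anchor, pushes negatives away
    return -(pos_sim - log_denom).mean()

rng = np.random.default_rng(0)
normalize = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
anchor = normalize(rng.normal(size=8))
positives = normalize(rng.normal(size=(2, 8)))   # augmented versions
negatives = normalize(rng.normal(size=(16, 8)))  # other images
print(contrastive_loss(anchor, positives, negatives))
```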
Now the problem here, if you specifically do this to pre-train a classifier, is the thing they show on the right. So on the left here you have a picture of a dog. But if you just do this self-supervised, you do it without the labels. So it can happen that this image here shows up in the negatives, but it is also of a dog. And now this image here is going to end up maybe being this image here. And you see what happens to it. It's a green one. So it's going to get pushed apart. And this is going to make the entire task for the later classifier much harder, because if they are pushed apart from each other, how is a linear classifier going to have them on the same side of the decision boundary while having everything else on a different side? So the self-supervised objective is implicitly making the task for the later classifier harder by pushing apart samples that should be of the same class. And this does not happen if you introduce labels into the pre-training objective. That's what they do with the supervised contrastive objective. So again, we're going to draw the same embedding space, and we're going to draw this original dog image, and we're going to draw the augmented versions of the original dog image. But now we also have the following: we also have these images, which are images of the same class. So we're going to put them in black here, and let's say the augmented versions of those around them in smaller black dots; you can augment them as well. And then you have the negative samples. And the negative samples are not just any images, but just images of different classes. So you just go over your mini-batch, and everything that's of the same class becomes a positive, including the augmentations, and everything that is not in the same class becomes a negative, and you can augment those as well. So now we have a bunch of things in our embedding space. And our objective is simply going to be, again: we want to push away all the images that are not of the same class as our original red image, which is called the anchor. So all of this needs to be pushed away. But now we want to pull together all the augmented versions of the original image, but also we want to pull together all of the other images of the same class, including their augmented versions. So all of this is going to be pulled together. So not only does the network learn about these augmentations, which again, for this idea, the augmentations aren't even necessary; the network learns a representation space where images of the same class are close together, which again is going to make the task of a later linear classifier that needs to separate this class from other classes very, very easy. And again, the other images aren't just going to be pushed away, but, if they're from the same class, let's say this and this image are from the same class, all of those are going to be pushed apart from our red dot while by themselves being pulled together into their own cluster here of their own class. I hope this makes sense. And I hope the difference to the cross entropy objective is sort of clear. The cross entropy objective, from the beginning, just cares about which side of the decision boundary you're on, while this pre-training objective first cares to put things close together that are in the same class, and then the classifier will have a much easier time.
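A sketch of this supervised variant over a whole batch: the only change is who counts as a positive, namely every sample in the batch with the same label. Again assuming normalized embeddings; the paper's exact formulation differs in details such as where the average over positives is taken:

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss sketch over one batch.

    embeddings: (B, d) with L2-normalized rows; labels: (B,) integer classes.
    """
    labels = np.asarray(labels)
    sims = embeddings @ embeddings.T / tau  # (B, B) pairwise similarities
    B = len(labels)
    total = 0.0
    for i in range(B):
        positives = labels == labels[i]     # same label -> positive
        positives[i] = False                # exclude the anchor itself
        if not positives.any():             # class occurs only once: skip
            continue
        others = np.arange(B) != i          # denominator: everyone but the anchor
        log_denom = np.log(np.exp(sims[i][others]).sum())
        # pull same-class samples toward the anchor, push everything else apart
        total += -(sims[i][positives] - log_denom).mean()
    return total / B
```

Note that every sample in the batch takes a turn as the anchor, so each same-class pair gets pulled together from both sides, which is exactly the clustering effect described above.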
Now, why this works better is not entirely clear from the beginning, because it's working with the same information. It's just that people have generally found that these contrastive pre-training objectives are somewhat better at exploiting the information in the data set than if you just hammer on it with the contrastive, sorry, with the cross entropy loss from the beginning. But it is not fully explained yet why this works better, because it's working with the same data. Again, the difference here is that the previous methods of contrastive pre-training, the self-supervised ones, did not have access to the labels. And the advantage of that is you can have a giant database of unlabeled additional data that you do the pre-training on. Whereas here, we do the pre-training including the labels. So here, the label dog is an intrinsic part, because we need to know which of the samples we need to pull together. But that also means we cannot leverage additional unlabeled data, and unlabeled data is pretty cheap to obtain. So those are the advantages and disadvantages here. So, this new loss: they do compare this here. And usually in these contrastive objectives, you have something like two encoders, one to encode the anchor and one to encode the augmented versions, and this one is like a momentum encoder with shared weights and so on. All of this isn't really important. If you want to look into that, look into papers like Momentum Contrast, or I did a video on CURL for reinforcement learning. I think the general gist of it is clear. So they compare the formulation of their loss to the self-supervised one. Usually it takes the form of things like this: z_i is the anchor here, and then z_j(i) would be the positive example. And you see here that the inner product between the anchor and the positive example, sorry about that, should be high, because here the loss is the negative of whatever is here. So if you minimize the loss, you say: I want the inner product between my anchor and whatever is the positive sample to be high, and everything else here, which includes the thing on top but also includes everything else, I want the inner product to be low. Which is exactly the thing where you pull together the positives and you push apart everything else. That is the standard objective that you had before. They extend this, but it looks almost the same. So compared to the unsupervised objective, first of all, they extend this such that you can have more than one positive sample. Now this is also possible in the unsupervised way; so they just augment it by this. And, now this is the crucial part, they include the labels into the pre-training objective. So they say: everywhere where i and j have the same label, the inner product should be maximized, so those should be pulled together, while everything else is being pushed apart. Yeah, so they say we generalize to an arbitrary number of positives. And they also say contrastive power increases with more negatives. I think that's just a finding that they have, that when they add more negatives, so when they increase the batch size, the contrastive power increases. The two formulations are sketched below. They do analyze their gradient, which I find is pretty neat.
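In symbols, following the notation just discussed (normalized embeddings $z$, temperature $\tau$, anchor index $i$ with its augmentation positive $j(i)$, and $P(i)$ the set of all other same-label samples in the batch), the two objectives are roughly:

$$\mathcal{L}^{\mathrm{self}} = -\sum_{i} \log \frac{\exp(z_i \cdot z_{j(i)} / \tau)}{\sum_{a \neq i} \exp(z_i \cdot z_a / \tau)}$$

$$\mathcal{L}^{\mathrm{sup}} = \sum_{i} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \neq i} \exp(z_i \cdot z_a / \tau)}$$

The supervised version reduces to something like the self-supervised one when the augmentation is the only positive; the exact placement of the sum over positives has variants, so take this as the broad-strokes form.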
You can already see that if you formulate a loss, of course the gradient is going to go in the negative direction, but they make it clear that if you look at the gradient for the positive cases, what appears is this one minus P_ij quantity, and the P_ij quantity is essentially the inner product between i and j, normalized, of course. So the gradient is going to point into the negative direction of that for the positives, which means you're going to pull them together, and it's going to push into this direction for the negative classes, which means you push them apart. And they also analyze what happens in relation to hardness. If you just look at the positive samples, there are two kinds. There are easy positives, where the network has already learned to match them closely, where the inner product is almost one. If you look at them, that means the P_ij quantity is large, right? Because that is basically the inner product. And this term is exactly what we saw in the gradient: since P_ij is close to one, the one minus P_ij factor is almost zero, so the gradient contribution is almost zero. But if you have a hard positive, where the network hasn't yet learned to align the inner product properly, or align the representations properly, then the angle between the things, again these are normalized, is such that they're approximately orthogonal; the inner product is approximately zero, so P_ij is small, and the one minus P_ij factor is close to one. So the gradient is going to be larger than zero, which means that their loss focuses on the examples that the network cannot yet represent well according to their objective. Which makes sense. But second of all, that is exactly the same thing as in the cross entropy loss. So if you look at the cross entropy loss, and you have a situation where the network is already really good for a given sample, so it already puts a dog into the dog class, then the gradient will not be pulling much for that sample. It mainly focuses on where you're still wrong. So I appreciate the analysis, but it is not a notable difference. I think what they want to show is that their loss, if you do gradient descent, really does what it is supposed to do: namely, first of all, it does this pulling together and pushing apart of inner products for the positive and negative samples, and it mainly focuses on samples where you have not yet found a good representation to align them with others. It focuses on pairs that are not yet correctly close together or far apart. They also connect this to the triplet loss, where they can show, after some approximation, that if their loss only has one positive and one negative sample, it is going to be proportional to the triplet loss. The triplet loss is basically where you have an image and you find one positive, I think that's going to be of the same class right here, and you find one negative of a different class, and you try to push those apart while pulling those together. The problem here, they say, is the problem of hard negative sampling. In order for this to make sense, you need the negative sample to be what's called a hard negative sample. So this is called hard negative mining: because you only have one negative sample, you'd better make it something the network can learn from.
And if it's too easy, the network can't learn anything. And thereby you have the problem of hard negative mining, where you often have to filter through your mini-batch or even through your data set to find a good negative sample to go along with this pair of positive samples. But I don't really see the difference to their method, except that it has a bunch of positive and negative samples, and except for that, which I guess you could also apply to the triplet loss, there's not really a difference here. Again, if your method is a contrastive method, you do have the problem that if you simply sample at random, your negative samples are going to become easier and easier over the course of training, and at some point you're going to have the problem of having to actively sample hard negatives. I think this paper just gets around it by having huge batch sizes. So, yeah. But again, they do get state of the art on ImageNet for these types of networks and augmentation strategies. And they do look at how their loss appears to be more hyperparameter-stable. So if they change out the augmentation, if they change the optimizer or the learning rate, you can see here that the spread in accuracy is much smaller than for the cross entropy loss, except here. But it is hard to compare variances of things that don't have the same means in terms of accuracy, so take this on the right here with a grain of salt. They also evaluate this on corrupted ImageNet. So there's an ImageNet data set that has several levels of corruption of the data set. And you can see your accuracy goes down, but the accuracy for the cross entropy loss goes down faster than for the supervised contrastive loss. You see, they start together like this, and then they go further apart. Now it is not clear to me whether that's just an effect, like if you just trained a supervised contrastive loss also to this level, whether it would fall off at the same speed, or whether, because it is the supervised contrastive loss, it would kind of match that curve. It's not clear whether that's really an effect of the difference of the losses, or it's just an effect of the fact that they aren't at the same accuracy to begin with. Again, this kind of shifting, you can't really compare things that have different means in the first place. But it is an interesting finding that their method is more stable to these corruptions. I just want to point out, at the end, their training details highlight that they train for up to 700 epochs during the pre-training stage, which is, I think, standard but mad. And they train models with batch sizes up to 8192. So you need like a super TPU cluster to run these kinds of things. And I am never exactly trusting of numbers like this. Even though it's kind of a good improvement, it is still like a 1% improvement. And with these small numbers, I just feel there might be a big effect from things like batch sizes, how much compute you put into it, and what else you're doing. There might be so much influence of that, that I first want to see this replicated multiple times across the entire field before I'm going to really trust that this is a good thing to do. Alright, so I hope you liked this. If you're still here, thank you. Consider subscribing. If you have a comment, please leave it. I usually read them. And with that, bye bye.
[ { "start": 0, "end": 5.96, "text": " Hi there, today we're looking at supervised contrastive learning by people from Google" }, { "start": 5.96, "end": 8.56, "text": " Research and MIT." }, { "start": 8.56, "end": 14.120000000000001, "text": " Now this paper proposes a new loss for supervised learning." }, { "start": 14.120000000000001, "end": 19.12, "text": " And you might recognize that this is a big claim." }, { "start": 19.12, "end": 25.48, "text": " So forever now we've basically used this cross entropy loss in order to do supervised training" }, { "start": 25.48, "end": 27.68, "text": " of neural networks." }, { "start": 27.68, "end": 33.8, "text": " This paper proposes to replace that with the supervised contrastive loss." }, { "start": 33.8, "end": 36.4, "text": " And let's jump straight into the results here." }, { "start": 36.4, "end": 42.68, "text": " They say our supervised contrastive loss outperforms the cross entropy loss with standard data" }, { "start": 42.68, "end": 47.36, "text": " augmentations such as auto augment and rand augment." }, { "start": 47.36, "end": 52.56, "text": " So these are some of the previous state of the art data augmentation techniques used" }, { "start": 52.56, "end": 55.6, "text": " together with the cross entropy loss." }, { "start": 55.6, "end": 59, "text": " And they say their supervised contrastive loss outperforms them." }, { "start": 59, "end": 65.24000000000001, "text": " You can see here on ImageNet, which is the biggest vision benchmark or the most famous" }, { "start": 65.24000000000001, "end": 71.68, "text": " one, this new loss, the supervised contrastive loss, outperforms these other methods by" }, { "start": 71.68, "end": 74.68, "text": " something like a percent." }, { "start": 74.68, "end": 78.1, "text": " One percent is a big improvement on ImageNet right now." }, { "start": 78.1, "end": 82.56, "text": " So they claim it is a big claim, right?" }, { "start": 82.56, "end": 89.2, "text": " We recognize if this is true, this could be a game changer basically for all of supervised" }, { "start": 89.2, "end": 90.2, "text": " learning." }, { "start": 90.2, "end": 95.18, "text": " And supervised learning is really the only thing right now in deep learning that works." }, { "start": 95.18, "end": 99.12, "text": " So it could revolutionize the field." }, { "start": 99.12, "end": 100.44, "text": " So here's the but." }, { "start": 100.44, "end": 105.76, "text": " It is actually not a new loss to replace the cross entropy loss." }, { "start": 105.76, "end": 110.76, "text": " And they do come about this pretty quickly." }, { "start": 110.76, "end": 114.84, "text": " I don't think they're dishonest or lying or anything here." }, { "start": 114.84, "end": 119.96000000000001, "text": " But it is sort of if you start reading you like what this is a new loss." }, { "start": 119.96000000000001, "end": 120.96000000000001, "text": " It is not." }, { "start": 120.96000000000001, "end": 127.60000000000001, "text": " It is a new way of pre training the network for a classification task." }, { "start": 127.60000000000001, "end": 131.04000000000002, "text": " And so let's look into this." }, { "start": 131.04000000000002, "end": 137.8, "text": " So if you look at what does what does it mean to build a classifier, this is what you usually" }, { "start": 137.8, "end": 138.8, "text": " do." 
}, { "start": 138.8, "end": 142.76000000000002, "text": " You do supervised cross entropy training, you have an image and the image here is of" }, { "start": 142.76000000000002, "end": 148, "text": " a dog, you put it through your network, and you obtain a representation." }, { "start": 148, "end": 154.72, "text": " So the representation here are is this last layer, or the second to last layer." }, { "start": 154.72, "end": 159.76000000000002, "text": " And you put that through a classification layer and then a softmax." }, { "start": 159.76000000000002, "end": 164.24, "text": " And what you get as an output is basically a probability distribution." }, { "start": 164.24, "end": 167.08, "text": " And let's say you have three classes here." }, { "start": 167.08, "end": 171.32000000000002, "text": " There's dog, there's cat, and there's horse." }, { "start": 171.32000000000002, "end": 175.36, "text": " And let's say the network doesn't yet isn't yet trained very well." }, { "start": 175.36, "end": 179.8, "text": " So the probability for dog here is fairly low." }, { "start": 179.8, "end": 184.64000000000001, "text": " So this is basically what the network thinks of that image, like which class does it belong" }, { "start": 184.64000000000001, "end": 186.04000000000002, "text": " to with what probability." }, { "start": 186.04000000000002, "end": 189.3, "text": " They also have this label right here." }, { "start": 189.3, "end": 195.34, "text": " So the label dog for that image, what you do with that is you do a one hot vector." }, { "start": 195.34, "end": 197.68, "text": " So that would look like this." }, { "start": 197.68, "end": 202.44, "text": " So the one is at the position where the correct class is." }, { "start": 202.44, "end": 206.5, "text": " And then the cross entropy loss takes all of this and does the following." }, { "start": 206.5, "end": 208.28, "text": " There's a sum over all your classes." }, { "start": 208.28, "end": 210.6, "text": " In this case, you have three classes." }, { "start": 210.6, "end": 218.6, "text": " And let's call these the labels L. And you want to always take the label of the class" }, { "start": 218.6, "end": 225.96, "text": " times the log probability that the network thinks belongs to this class." }, { "start": 225.96, "end": 234.1, "text": " So you can quickly see that this if the label is zero, so for all the incorrect classes," }, { "start": 234.1, "end": 236.84, "text": " that means this entire term drops away." }, { "start": 236.84, "end": 244.95999999999998, "text": " And only if the label is one, so only the correct class that will result in the log" }, { "start": 244.96, "end": 253.48000000000002, "text": " probability of the class where the label is the correct label." }, { "start": 253.48000000000002, "end": 258.24, "text": " So in order to make this a loss, you actually have to put a negative sign in front of here" }, { "start": 258.24, "end": 265.12, "text": " because you want to this so this entire thing reduces to the log probability of the correct" }, { "start": 265.12, "end": 266.12, "text": " class." }, { "start": 266.12, "end": 267.72, "text": " This is what you want to maximize." }, { "start": 267.72, "end": 273.28000000000003, "text": " Therefore, you if you want to minimize something you need." }, { "start": 273.28, "end": 278.96, "text": " So you minimize the negative log probability of the correct class, which means you maximize" }, { "start": 278.96, "end": 283.46, "text": " the probability." 
}, { "start": 283.46, "end": 287.79999999999995, "text": " If you've never looked at the cross entropy loss like this, it is important to notice" }, { "start": 287.79999999999995, "end": 293.52, "text": " that you're going to say, hey, all this does is pull this here up, right?" }, { "start": 293.52, "end": 296.55999999999995, "text": " And it doesn't do anything to the other ones." }, { "start": 296.55999999999995, "end": 301.79999999999995, "text": " But you have to realize that this softmax operation, since this is a probability distribution," }, { "start": 301.8, "end": 304.52000000000004, "text": " all of this is normalized to sum up to one." }, { "start": 304.52000000000004, "end": 309.88, "text": " So implicitly, you will push these down through the normalization, right?" }, { "start": 309.88, "end": 314, "text": " So what this does is it pushes the correct class up, and it pushes the other classes" }, { "start": 314, "end": 315.04, "text": " down." }, { "start": 315.04, "end": 320.28000000000003, "text": " So this, to look at this is going to be important later." }, { "start": 320.28000000000003, "end": 326.44, "text": " Because if you look at what this representation here does, so again, you have the network" }, { "start": 326.44, "end": 328.52, "text": " produces a representation here." }, { "start": 328.52, "end": 334.44, "text": " This is 2000 dimensional, and then it does, it adds on top this classification layer," }, { "start": 334.44, "end": 340.4, "text": " this classification layer is simply a linear layer, and then a softmax on top." }, { "start": 340.4, "end": 346.44, "text": " So how you have to imagine this is that there is a representation space, this 2000 dimensional" }, { "start": 346.44, "end": 357.64, "text": " space, and the representations are made in such a way that the labels such that sorry," }, { "start": 357.64, "end": 360.12, "text": " let's have three classes here." }, { "start": 360.12, "end": 365.52, "text": " The representations are made in such a way that a linear classifier can separate them" }, { "start": 365.52, "end": 367.64, "text": " correctly, right?" }, { "start": 367.64, "end": 370.56, "text": " So here, this would be like a boundary." }, { "start": 370.56, "end": 374.28, "text": " And then this would be another boundary." }, { "start": 374.28, "end": 377, "text": " And this maybe would be another decision boundary." }, { "start": 377, "end": 382.41999999999996, "text": " So you can see that linear classifier can separate the classes well." }, { "start": 382.41999999999996, "end": 383.41999999999996, "text": " That is the goal." }, { "start": 383.42, "end": 389.08000000000004, "text": " If you use this softmax cross entropy loss, that is implicitly what will happen in the" }, { "start": 389.08000000000004, "end": 394.88, "text": " representation space W. All it cares about is that the classes are on one side of the" }, { "start": 394.88, "end": 401.24, "text": " decision boundary and everything else is on the other side of a decision boundary." 
}, { "start": 401.24, "end": 406.08000000000004, "text": " So if you have the network isn't trained very well at the beginning, and you maybe have" }, { "start": 406.08000000000004, "end": 412.44, "text": " a sample of the green class here, it will push the network such that the representation" }, { "start": 412.44, "end": 418.6, "text": " of that sample will go onto the other side of this decision boundary and it will push" }, { "start": 418.6, "end": 423.24, "text": " the decision boundary at the same time to make that happen more easily." }, { "start": 423.24, "end": 426.56, "text": " Right, so it will optimize all of this at the same time." }, { "start": 426.56, "end": 427.92, "text": " That's what you do." }, { "start": 427.92, "end": 429.96, "text": " That's how you optimize the representations." }, { "start": 429.96, "end": 439.28, "text": " So this work here, and another work has said, wouldn't it be great if the representation" }, { "start": 439.28, "end": 445.23999999999995, "text": " and decision boundaries weren't just trained at the same time for this, but we learn good" }, { "start": 445.23999999999995, "end": 451.08, "text": " representations first, such that classifying them becomes very simple." }, { "start": 451.08, "end": 459.03999999999996, "text": " And in essence, what this paper says is, if we have a representation space W, shouldn't" }, { "start": 459.03999999999996, "end": 465.47999999999996, "text": " images of the same class, shouldn't we just make them close together?" }, { "start": 465.48, "end": 472.02000000000004, "text": " So without caring about decision boundaries, we just want them to be close to each other." }, { "start": 472.02000000000004, "end": 476.12, "text": " And we want them to be far apart from other classes." }, { "start": 476.12, "end": 481.20000000000005, "text": " If that happens, you can see that a linear classifier is going to have a very easy time" }, { "start": 481.20000000000005, "end": 485.16, "text": " separating these classes later." }, { "start": 485.16, "end": 488.48, "text": " So that's exactly what this paper does." }, { "start": 488.48, "end": 491.98, "text": " It has a pre training stage and a training stage." }, { "start": 491.98, "end": 496.8, "text": " So in the pre training stage, this is over here, supervised contrastive." }, { "start": 496.8, "end": 503.36, "text": " In the pre training stage, it simply tries to learn these representations, like over" }, { "start": 503.36, "end": 511.76, "text": " like down here, such that without the decision boundaries, class thing, images of the same" }, { "start": 511.76, "end": 519.12, "text": " class are close together, and images of different classes are far apart, which notice the subtle" }, { "start": 519.12, "end": 523.72, "text": " difference right to the cross entropy loss where you just care about them being on one" }, { "start": 523.72, "end": 527.24, "text": " or the other side of a decision boundary." }, { "start": 527.24, "end": 537.08, "text": " And in stage this, so this stage one, and then in stage two, and there is where where" }, { "start": 537.08, "end": 540.5600000000001, "text": " it comes in, you basically freeze the network." }, { "start": 540.5600000000001, "end": 545.44, "text": " So you freeze these weights down here, these are frozen, you don't train them anymore." }, { "start": 545.44, "end": 550, "text": " All you train is this one classification layer." 
}, { "start": 550, "end": 556, "text": " So the represent you actually freeze also the representation layer here, you only train" }, { "start": 556, "end": 562.6800000000001, "text": " the classifier on top in stage two, but you train it using softmax and using the cross" }, { "start": 562.6800000000001, "end": 563.82, "text": " entropy loss." }, { "start": 563.82, "end": 571.5200000000001, "text": " So you you train the classifier in the old cross entropy way, using just normal supervised" }, { "start": 571.5200000000001, "end": 572.5200000000001, "text": " learning." }, { "start": 572.52, "end": 579.48, "text": " So what we see here is that the stage one pre training is is what's training the network" }, { "start": 579.48, "end": 582.4, "text": " and the cross entropy loss only trains the classifier." }, { "start": 582.4, "end": 588.64, "text": " Right, so let's look at how this pre training actually work what is using what it's using" }, { "start": 588.64, "end": 592.52, "text": " is a method called contrastive pre training." }, { "start": 592.52, "end": 597.36, "text": " Now in contrastive pre training, and they have a little diagram up here, what this does" }, { "start": 597.36, "end": 603.6, "text": " is if you look at the classic way of doing contrastive pre training, you have to go to" }, { "start": 603.6, "end": 610.48, "text": " the unsupervised pre training literature, people have kind of discovered that they can" }, { "start": 610.48, "end": 616.46, "text": " improve a neural network by pre training it first in an unsupervised way." }, { "start": 616.46, "end": 620.66, "text": " This is also called some of these methods are called self supervised." }, { "start": 620.66, "end": 626.6, "text": " So the advantage here of self supervised or unsupervised pre training is that you don't" }, { "start": 626.6, "end": 628.12, "text": " need labels." }, { "start": 628.12, "end": 634.44, "text": " What you want to do is simply to make the representation space somewhat meaningful," }, { "start": 634.44, "end": 635.44, "text": " right?" }, { "start": 635.44, "end": 642.72, "text": " So you simply want the network to learn representations of images that are somehow meaningful, right?" }, { "start": 642.72, "end": 644.48, "text": " That are there." }, { "start": 644.48, "end": 646.84, "text": " And here's how you do it." }, { "start": 646.84, "end": 653.62, "text": " So you want to take an image like this dog here." }, { "start": 653.62, "end": 659.4, "text": " And then you want to randomly augment this image, which just means you want to produce" }, { "start": 659.4, "end": 661.78, "text": " different versions of the same image." }, { "start": 661.78, "end": 666.92, "text": " In this case down here, this is a random crop, it's cropped about here, it's still the same" }, { "start": 666.92, "end": 669.44, "text": " image but it's kind of a different version of it." }, { "start": 669.44, "end": 674.52, "text": " In the case here, you can see that it's flipped left right and the brightness is slightly" }, { "start": 674.52, "end": 676.5600000000001, "text": " increased." }, { "start": 676.5600000000001, "end": 679.8, "text": " So these are just different versions of the same image." }, { "start": 679.8, "end": 683.5600000000001, "text": " And what you also want are what's called negatives." }, { "start": 683.56, "end": 687.76, "text": " Natives are simply different images from your data set, right?" 
}, { "start": 687.76, "end": 692, "text": " For example, this or this or this, you don't care as long as they're different, right?" }, { "start": 692, "end": 694, "text": " You just sample a bunch." }, { "start": 694, "end": 700.2399999999999, "text": " And what you want, so your embedding space and they make a big deal here that they are" }, { "start": 700.2399999999999, "end": 702.7199999999999, "text": " normalized and that seems to work better." }, { "start": 702.7199999999999, "end": 707.64, "text": " But this is not necessary for the idea to work." }, { "start": 707.64, "end": 717.48, "text": " The big idea is here that if you have an image right here, let's say this is the dog, and" }, { "start": 717.48, "end": 724.04, "text": " the blue dots here are the augmented versions of the same dog, and the green dots are all" }, { "start": 724.04, "end": 726.3199999999999, "text": " the other images in the data set." }, { "start": 726.3199999999999, "end": 734.96, "text": " What you want is that all the images that come from the original same image are pulled" }, { "start": 734.96, "end": 739.76, "text": " close together and everything else is pushed apart." }, { "start": 739.76, "end": 741.1600000000001, "text": " Right?" }, { "start": 741.1600000000001, "end": 746.9200000000001, "text": " So that's why these are called positives and these are called negatives." }, { "start": 746.9200000000001, "end": 751.5600000000001, "text": " So the contrastive training basically means that you always want to have a set that you" }, { "start": 751.5600000000001, "end": 757.64, "text": " pull together in representation space and a set called the negatives that you push apart." }, { "start": 757.64, "end": 763.9200000000001, "text": " So the network basically learns about these random transformations that you have here." }, { "start": 763.92, "end": 768.28, "text": " The network learns what it means to come from the same image." }, { "start": 768.28, "end": 771.8, "text": " It learns to be robust to these kind of transformations." }, { "start": 771.8, "end": 777.76, "text": " It learns about the data in general and how to spread the data and embedding space with" }, { "start": 777.76, "end": 779.16, "text": " these transformations." }, { "start": 779.16, "end": 785.8399999999999, "text": " So this usually ends up in a pretty good representation space and people have been using this in recent" }, { "start": 785.8399999999999, "end": 790.28, "text": " years in order to gain significant improvements." }, { "start": 790.28, "end": 797.56, "text": " Now the problem here, if you specifically do this to pre-train a classifier is the thing" }, { "start": 797.56, "end": 799.12, "text": " they show on the right." }, { "start": 799.12, "end": 804.6, "text": " So on the left here you have a picture of a dog." }, { "start": 804.6, "end": 809.3199999999999, "text": " But if you just do this self-supervised, you do it without the labels." }, { "start": 809.3199999999999, "end": 817.64, "text": " So it can happen that this image here shows up in the negatives, but it is also of a dog." }, { "start": 817.64, "end": 823.08, "text": " And now this image here is going to end up maybe being this image here." }, { "start": 823.08, "end": 824.4399999999999, "text": " And you see what happens to it." }, { "start": 824.4399999999999, "end": 825.4399999999999, "text": " It's a green one." }, { "start": 825.4399999999999, "end": 827.24, "text": " So it's going to get pushed apart." 
}, { "start": 827.24, "end": 832.16, "text": " And this is going to make the entire task for the later classifier much harder because" }, { "start": 832.16, "end": 838.8, "text": " if they are pushed apart from each other, how is a linear classifier going to have them" }, { "start": 838.8, "end": 843.4, "text": " on the same side of the decision boundary while having everything else on a different" }, { "start": 843.4, "end": 844.76, "text": " side?" }, { "start": 844.76, "end": 852.42, "text": " So the task here is implicitly making the task for the later classifier harder by pushing" }, { "start": 852.42, "end": 857.2, "text": " apart samples that should be of the same class." }, { "start": 857.2, "end": 864, "text": " And so this is not happening if you introduce labels to the pre-training objective." }, { "start": 864, "end": 867.56, "text": " That's what they do, the supervised contrastive objective." }, { "start": 867.56, "end": 874.4, "text": " Now you still, all you want to do is here, we're going to draw the same embedding space" }, { "start": 874.4, "end": 877.1999999999999, "text": " and we're going to draw this original dog image." }, { "start": 877.1999999999999, "end": 881.52, "text": " And we're going to draw the augmented version of the original dog image." }, { "start": 881.52, "end": 884.52, "text": " But now we also have the following." }, { "start": 884.52, "end": 889.04, "text": " We also have these images, which are images of the same class." }, { "start": 889.04, "end": 892.16, "text": " So we're going to put them in black here." }, { "start": 892.16, "end": 898.1999999999999, "text": " And let's say the augmented versions around them in smaller black dots, augmented versions" }, { "start": 898.1999999999999, "end": 899.1999999999999, "text": " of those, right?" }, { "start": 899.1999999999999, "end": 901.1999999999999, "text": " You can augment them as well." }, { "start": 901.2, "end": 904.6800000000001, "text": " And then you have the negative samples." }, { "start": 904.6800000000001, "end": 910.12, "text": " And the negative samples are not just any images, but just images of different classes." }, { "start": 910.12, "end": 915.32, "text": " So you just go over your mini batch and all everything that's of the same class becomes" }, { "start": 915.32, "end": 920.9200000000001, "text": " positives, including their augmentations, and everything that is not in the same class" }, { "start": 920.9200000000001, "end": 922.0400000000001, "text": " becomes negatives." }, { "start": 922.0400000000001, "end": 924.84, "text": " And also you can augment them as well." }, { "start": 924.84, "end": 928.1600000000001, "text": " So now we have a bunch of things in our embedding space." }, { "start": 928.16, "end": 934.64, "text": " And our objective is simply going to be, again, we want to push away all the images that are" }, { "start": 934.64, "end": 939.68, "text": " not of the same class as our original, as our red original image, which is called the" }, { "start": 939.68, "end": 940.68, "text": " anchor." }, { "start": 940.68, "end": 944.0799999999999, "text": " So all of this needs to be pushed away." }, { "start": 944.0799999999999, "end": 950.28, "text": " But now we want to pull together all the augmented versions of the original image, but also we" }, { "start": 950.28, "end": 957.72, "text": " want to pull together all of the other images of the same class, including also their augmented" }, { "start": 957.72, "end": 958.72, "text": " versions." 
}, { "start": 958.72, "end": 961.2, "text": " So all of this is going to be pulled together." }, { "start": 961.2, "end": 965.84, "text": " So not only does the network learn about these augmentations, which again, for this idea," }, { "start": 965.84, "end": 968.6, "text": " the augmentations aren't even necessary." }, { "start": 968.6, "end": 975.32, "text": " The network learns a representation space where images of the same class are close together," }, { "start": 975.32, "end": 980.6800000000001, "text": " which again is going to make the task of later linear classifiers that needs to separate" }, { "start": 980.6800000000001, "end": 984.3000000000001, "text": " this class from other classes very, very easy." }, { "start": 984.3, "end": 987.92, "text": " And again, the other images aren't just going to be pushed away, but if they're from the" }, { "start": 987.92, "end": 992.4399999999999, "text": " same class, let's say this and this image are from the same class, all of those are" }, { "start": 992.4399999999999, "end": 999.16, "text": " going to be pushed apart from our red dot, but by themselves being pushed together to" }, { "start": 999.16, "end": 1003.4399999999999, "text": " their own cluster here of their own class." }, { "start": 1003.4399999999999, "end": 1004.88, "text": " I hope this makes sense." }, { "start": 1004.88, "end": 1011.1999999999999, "text": " And I hope the difference to the cross entropy objective is sort of clear." }, { "start": 1011.2, "end": 1015.76, "text": " The cross entropy objective simply from the beginning just cares about which side of the" }, { "start": 1015.76, "end": 1017.5600000000001, "text": " decision boundary you're on." }, { "start": 1017.5600000000001, "end": 1023.6400000000001, "text": " While this pre training objective first cares to put things close together that are in the" }, { "start": 1023.6400000000001, "end": 1031.24, "text": " same class, and then the decision classifier will have a much easier time." }, { "start": 1031.24, "end": 1037.46, "text": " The reason why this works better than the because because it's not entirely clear from" }, { "start": 1037.46, "end": 1042.2, "text": " the beginning that why this should work better because it's working with the same information." }, { "start": 1042.2, "end": 1047.72, "text": " It's just because people have generally found that these pre training contrastive pre training" }, { "start": 1047.72, "end": 1053.6000000000001, "text": " objectives, they just are somewhat better at exploiting the information in the data" }, { "start": 1053.6000000000001, "end": 1061, "text": " set than if you just hammer on hammer with the contrastive sorry with the cross entropy" }, { "start": 1061, "end": 1064.06, "text": " loss from the beginning." }, { "start": 1064.06, "end": 1068.86, "text": " So but it is not fully explained yet why this works better because it's working with the" }, { "start": 1068.86, "end": 1070.04, "text": " same data." }, { "start": 1070.04, "end": 1076.62, "text": " Again, the difference here is that the previous methods of contrastive pre training the self" }, { "start": 1076.62, "end": 1081.04, "text": " supervised ones, they did not have access to the labels." }, { "start": 1081.04, "end": 1088.04, "text": " And the advantage of that is you can have a giant database of unlabeled additional data" }, { "start": 1088.04, "end": 1091.8, "text": " that you do the pre training on." 
}, { "start": 1091.8, "end": 1095.44, "text": " Whereas here we do the pre training, including the labels." }, { "start": 1095.44, "end": 1100.48, "text": " So here, the label dog is an intrinsic part because we need to know which of the samples" }, { "start": 1100.48, "end": 1102.36, "text": " we need to pull together." }, { "start": 1102.36, "end": 1108.44, "text": " But that also means we cannot leverage the maybe that we have more unlabeled data and" }, { "start": 1108.44, "end": 1111.32, "text": " unlabeled data is pretty cheap to obtain." }, { "start": 1111.32, "end": 1115.12, "text": " So that's the advantages and disadvantages here." }, { "start": 1115.12, "end": 1121.4399999999998, "text": " So this new loss, so they they do compare this here." }, { "start": 1121.4399999999998, "end": 1127.3999999999999, "text": " And usually in these contrastive objectives, you have somewhat like two encoders, one to" }, { "start": 1127.3999999999999, "end": 1132.6399999999999, "text": " encode the the anchor and one to encode the augmented versions." }, { "start": 1132.6399999999999, "end": 1137.2399999999998, "text": " And this one is like a momentum with shared weights and so on." }, { "start": 1137.2399999999998, "end": 1139.28, "text": " All of this isn't really important." }, { "start": 1139.28, "end": 1144.6, "text": " If you want to look into that look into papers like momentum contrast, or I did one on curl" }, { "start": 1144.6, "end": 1147.48, "text": " for reinforcement learning." }, { "start": 1147.48, "end": 1154.04, "text": " I think the the general gist of it is clear." }, { "start": 1154.04, "end": 1159.52, "text": " So they compare the formulation of their loss to the self supervised one, usually it takes" }, { "start": 1159.52, "end": 1162, "text": " the form of things like this." }, { "start": 1162, "end": 1170.3999999999999, "text": " So one is the the anchor here, and then the zji would be the positive example." }, { "start": 1170.4, "end": 1175.0800000000002, "text": " And you see here that the inner product between the anchor and the positive example, sorry" }, { "start": 1175.0800000000002, "end": 1185.24, "text": " about that, the inner product should be high, because here the loss is the negative of whatever" }, { "start": 1185.24, "end": 1186.46, "text": " is here." }, { "start": 1186.46, "end": 1192.96, "text": " So if you minimize the loss, you say I want the inner product between my anchor, and whatever" }, { "start": 1192.96, "end": 1199.16, "text": " is the positive sample to be high, and everything else here, which includes the thing on the" }, { "start": 1199.16, "end": 1204.8400000000001, "text": " top, but it also includes everything else, I want the inner product to be low, and which" }, { "start": 1204.8400000000001, "end": 1211.8400000000001, "text": " is exactly the thing where you push you pull together the positives, and you push apart" }, { "start": 1211.8400000000001, "end": 1213.88, "text": " everything else." }, { "start": 1213.88, "end": 1221.0400000000002, "text": " That, that is the standard objective that you had before they, they extend this, but" }, { "start": 1221.0400000000002, "end": 1223.1000000000001, "text": " it looks almost the same." }, { "start": 1223.1, "end": 1229.28, "text": " So compared to the unsupervised objective now, first of all, they extend this such that" }, { "start": 1229.28, "end": 1231.56, "text": " you can have more than one positive sample." 
}, { "start": 1231.56, "end": 1236.1799999999998, "text": " Now this is also possible in the unsupervised way." }, { "start": 1236.1799999999998, "end": 1240.1999999999998, "text": " So they just augmented by this." }, { "start": 1240.1999999999998, "end": 1245.12, "text": " And they also now this is the crucial part, they include the labels into the pre turning" }, { "start": 1245.12, "end": 1246.12, "text": " objective." }, { "start": 1246.12, "end": 1253.9599999999998, "text": " So they say everywhere where I and J have the same label should be maximized in the" }, { "start": 1253.9599999999998, "end": 1261.28, "text": " inner product, so should be pulled together, while everything else is being pushed apart." }, { "start": 1261.28, "end": 1271.7199999999998, "text": " Yeah, so they say we generalize to an arbitrary number of positives." }, { "start": 1271.7199999999998, "end": 1274.9199999999998, "text": " And they also say contrastive power increases with more negatives." }, { "start": 1274.92, "end": 1279.1200000000001, "text": " I think that's just a finding that they have that when they add more negatives, so when" }, { "start": 1279.1200000000001, "end": 1286.3600000000001, "text": " they increase the batch size, that contrastive power increases." }, { "start": 1286.3600000000001, "end": 1292.28, "text": " They do analyze their gradient, which I find is pretty neat." }, { "start": 1292.28, "end": 1295.44, "text": " You can already see that if you formulate a loss, of course, the gradient is going to" }, { "start": 1295.44, "end": 1301.1200000000001, "text": " go in the negative direction, but they make it clear that if you look at the gradient" }, { "start": 1301.12, "end": 1307.9199999999998, "text": " for the positive cases, what appears is this one minus p ij quantity and the p ij quantity" }, { "start": 1307.9199999999998, "end": 1313.2199999999998, "text": " is exactly the inner product between i and j normalized, of course." }, { "start": 1313.2199999999998, "end": 1320.6399999999999, "text": " So if you minimum, so the gradient is going to point into the negative direction of that" }, { "start": 1320.6399999999999, "end": 1324.6799999999998, "text": " for the positives, which means you're going to pull them together." }, { "start": 1324.68, "end": 1334.04, "text": " And it's going to push into this direction for the negative classes, which means you" }, { "start": 1334.04, "end": 1335.8400000000001, "text": " push them apart." }, { "start": 1335.8400000000001, "end": 1341.1200000000001, "text": " And they also analyze what happens with relation to hardness." }, { "start": 1341.1200000000001, "end": 1345.76, "text": " So they say there are two kinds of, if you just look at the positive samples, there are" }, { "start": 1345.76, "end": 1346.76, "text": " two kinds." }, { "start": 1346.76, "end": 1351.44, "text": " There are easy positives where the network has already learned to match them closely," }, { "start": 1351.44, "end": 1353.3600000000001, "text": " where the inner product is almost one." }, { "start": 1353.36, "end": 1359.04, "text": " If you look at them, that means the p ij quantity is large, right?" }, { "start": 1359.04, "end": 1362.74, "text": " Because that is basically the inner product." }, { "start": 1362.74, "end": 1368.52, "text": " And you look at this term, this term is exactly what we saw in the gradient." }, { "start": 1368.52, "end": 1373.9599999999998, "text": " Then you see that this here, since this is one, this entire thing is zero." 
}, { "start": 1373.9599999999998, "end": 1375.3999999999999, "text": " This is also high." }, { "start": 1375.3999999999999, "end": 1376.3999999999999, "text": " This is close to one." }, { "start": 1376.3999999999999, "end": 1378.1999999999998, "text": " So this entire thing is zero." }, { "start": 1378.1999999999998, "end": 1380.24, "text": " This is almost zero." }, { "start": 1380.24, "end": 1385.58, "text": " But if you have a hard positive where the network hasn't learned yet to align the inner" }, { "start": 1385.58, "end": 1392.32, "text": " product properly or align the representation properly, then the angle between the things" }, { "start": 1392.32, "end": 1400.06, "text": " again, these are normalized, the angle is they're approximately orthogonal." }, { "start": 1400.06, "end": 1408.76, "text": " So the gradient magnitude is going to be this here is going to be approximately zero." }, { "start": 1408.76, "end": 1414.78, "text": " So this is close to one and this here, since this is also zero is also close to one." }, { "start": 1414.78, "end": 1423.24, "text": " So this is going to be larger than zero, which means that their loss focuses on the examples" }, { "start": 1423.24, "end": 1430.56, "text": " that are that the network cannot yet represent well, according to their objective, which" }, { "start": 1430.56, "end": 1432.2, "text": " makes sense, right?" }, { "start": 1432.2, "end": 1437.32, "text": " First of all, but second of all, it that is exactly the same thing as in the cross entropy" }, { "start": 1437.32, "end": 1438.32, "text": " loss." }, { "start": 1438.32, "end": 1443.12, "text": " So if you look at the cross entropy loss and you have a situation where the network is" }, { "start": 1443.12, "end": 1449, "text": " really good already for a given sample, so it already puts a dog into the dog class," }, { "start": 1449, "end": 1454.48, "text": " then the gradient will not be pulling much for that sample." }, { "start": 1454.48, "end": 1458.24, "text": " It might mainly focuses on where you're still wrong." }, { "start": 1458.24, "end": 1463.8999999999999, "text": " So it is like I appreciate the analysis, but it is not a notable difference." }, { "start": 1463.9, "end": 1471.16, "text": " I think what they want to show is that their loss, if you do gradient descent really does" }, { "start": 1471.16, "end": 1476.98, "text": " what it is supposed to do, namely, first of all, it does this pulling together pushing" }, { "start": 1476.98, "end": 1481, "text": " apart of inner products for the positive and negative samples." }, { "start": 1481, "end": 1488.0800000000002, "text": " And it mainly focuses on samples where you not yet have found a good representation to" }, { "start": 1488.0800000000002, "end": 1489.52, "text": " align them with others." }, { "start": 1489.52, "end": 1496.56, "text": " It focuses on pairs that are not yet correctly close or together or far apart." }, { "start": 1496.56, "end": 1505.08, "text": " They also connect this to the triplet loss, where they can show after some approximation," }, { "start": 1505.08, "end": 1512.12, "text": " that if their loss only has one positive and one negative sample, it is going to be proportional" }, { "start": 1512.12, "end": 1513.28, "text": " to the triplet loss." 
}, { "start": 1513.28, "end": 1519.08, "text": " The triplet loss is basically where you have an image and you find one positive, I think" }, { "start": 1519.08, "end": 1524.3999999999999, "text": " that's going to be of the same class right here, and you find one negative of a different" }, { "start": 1524.3999999999999, "end": 1531.12, "text": " class and you try to push those apart while pulling those together." }, { "start": 1531.12, "end": 1535.6, "text": " The problem here, they say, is the problem of hard negative sampling." }, { "start": 1535.6, "end": 1540.6799999999998, "text": " In order for this to make sense, you need the negative sample to be what's called a" }, { "start": 1540.6799999999998, "end": 1542.32, "text": " hard negative sample." }, { "start": 1542.32, "end": 1547.28, "text": " So this is called this hard negative mining, because you only have one negative sample," }, { "start": 1547.28, "end": 1551.6399999999999, "text": " you better make this something where the network can learn from." }, { "start": 1551.6399999999999, "end": 1555.36, "text": " And if it's too easy, the network can't learn anything." }, { "start": 1555.36, "end": 1560.36, "text": " And thereby you have the problem of hard negative mining, where you often have to filter through" }, { "start": 1560.36, "end": 1565.24, "text": " your mini batch or even through your data set to find a good negative sample to go along" }, { "start": 1565.24, "end": 1568.24, "text": " with this pair of positive samples." }, { "start": 1568.24, "end": 1574.84, "text": " But I don't really see how their method, except that it has a bunch of positives and negative" }, { "start": 1574.84, "end": 1581.08, "text": " samples, except for that, which I guess you could also apply to the triplet loss." }, { "start": 1581.08, "end": 1583.36, "text": " There's not really a difference here." }, { "start": 1583.36, "end": 1590.22, "text": " Again, if your method is a contrastive method, you do have the problem that if you simply" }, { "start": 1590.22, "end": 1596.8, "text": " sample at random, your negative samples are going to become easier and easier over the" }, { "start": 1596.8, "end": 1599.3999999999999, "text": " training over the course of training." }, { "start": 1599.4, "end": 1606.44, "text": " And you get the problem of at some point, you're going to have to do actively sample" }, { "start": 1606.44, "end": 1607.5600000000002, "text": " hard negatives." }, { "start": 1607.5600000000002, "end": 1611.24, "text": " I think this paper just gets around it by having huge batch sizes." }, { "start": 1611.24, "end": 1618.6000000000001, "text": " So yeah, but again, they do get state of the art on ImageNet for these types of networks" }, { "start": 1618.6000000000001, "end": 1621.44, "text": " and augmentation strategies." }, { "start": 1621.44, "end": 1627.6200000000001, "text": " And they do look at how their loss appears to be more hyperparameter stable." }, { "start": 1627.62, "end": 1632.08, "text": " So if they change out the augmentation, if they change the optimizer or the learning" }, { "start": 1632.08, "end": 1638.52, "text": " rate, you can see here that the spread in accuracy is much smaller than for the cross" }, { "start": 1638.52, "end": 1645, "text": " entropy loss except here, but it is hard to compare variances of things that don't have" }, { "start": 1645, "end": 1648.36, "text": " the same means in terms of accuracy." 
}, { "start": 1648.36, "end": 1654.1999999999998, "text": " So take this on the right here with a grain of salt." }, { "start": 1654.1999999999998, "end": 1657.28, "text": " They also evaluate this on corrupted ImageNet." }, { "start": 1657.28, "end": 1664.56, "text": " So there's an ImageNet data set where it has several levels of corruptedness of the data" }, { "start": 1664.56, "end": 1665.56, "text": " set." }, { "start": 1665.56, "end": 1671.5, "text": " And you can see your accuracy goes down, but the accuracy for the cross entropy loss goes" }, { "start": 1671.5, "end": 1676.6399999999999, "text": " down faster than for the supervised contrastive loss." }, { "start": 1676.6399999999999, "end": 1680.84, "text": " You see they start together like this and they go further apart." }, { "start": 1680.84, "end": 1684.12, "text": " Now it is not clear to me whether that's just an effect." }, { "start": 1684.12, "end": 1688.36, "text": " Like if you just trained a supervised contrastive loss also to this level, whether it would" }, { "start": 1688.36, "end": 1694.8, "text": " fall off at the same speed or whether because it is the supervised contrastive loss, it" }, { "start": 1694.8, "end": 1697.08, "text": " would kind of match that curve." }, { "start": 1697.08, "end": 1701.4399999999998, "text": " It's not clear whether that's really an effect of the difference of the losses or it's just" }, { "start": 1701.4399999999998, "end": 1708.3999999999999, "text": " an effect of the fact that they aren't the same accuracy to begin with." }, { "start": 1708.4, "end": 1714.2800000000002, "text": " Again, this kind of shifting, you can't really compare things that have different means in" }, { "start": 1714.2800000000002, "end": 1715.3200000000002, "text": " the first place." }, { "start": 1715.3200000000002, "end": 1722.52, "text": " But it is an interesting finding that their method is more stable to these corruptions." }, { "start": 1722.52, "end": 1729.72, "text": " I just want to point out at the end, their training details just highlight they train" }, { "start": 1729.72, "end": 1736.42, "text": " for up to 700 epochs during the pre-training stage, which is, I think, standard but mad." }, { "start": 1736.42, "end": 1741.44, "text": " And they trained up models with batch sizes up to 8192." }, { "start": 1741.44, "end": 1746.88, "text": " So you need like a super TPU cluster to run these kind of things." }, { "start": 1746.88, "end": 1753.52, "text": " And I am never exactly trusting of numbers like this." }, { "start": 1753.52, "end": 1758.64, "text": " Even though it's kind of a good improvement, it is still like a 1% improvement." }, { "start": 1758.64, "end": 1770, "text": " And in these small numbers, I just feel there might be a big effect that things like batch" }, { "start": 1770, "end": 1778.14, "text": " sizes and how much you put into computing, how much compute you put into it and what" }, { "start": 1778.14, "end": 1779.4, "text": " else you're doing." }, { "start": 1779.4, "end": 1785.64, "text": " There might be so much influence of that, that I first want to see this replicated multiple" }, { "start": 1785.64, "end": 1793.64, "text": " times across the entire field before I'm going to really trust that this is a good thing" }, { "start": 1793.64, "end": 1794.64, "text": " to do." }, { "start": 1794.64, "end": 1797.3600000000001, "text": " Alright, so I hope you like this." }, { "start": 1797.3600000000001, "end": 1799.6000000000001, "text": " If you're still here, thank you." 
}, { "start": 1799.6000000000001, "end": 1801.2, "text": " Consider subscribing." }, { "start": 1801.2, "end": 1802.76, "text": " If you have a comment, please leave it." }, { "start": 1802.76, "end": 1805.0400000000002, "text": " I usually read them." }, { "start": 1805.04, "end": 1816.08, "text": " And with that, bye bye." } ]
dWGjoInRaAs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] DeepMind fails to get independence from Google
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "ml news", "machine learning news", "tech news", "technology news", "deep learning news", "google deepmind", "does google own deepmind", "deepmind offices", "does deepmind make profit", "who pays for deepmind", "when did google buy deepmind", "how much did google pay for deepmind", "alphago", "alphafold" ]
#deepmind #google #mlnews DeepMind has reportedly failed to negotiate for greater independence from Google/Alphabet. While DeepMind wanted to set up a non-profit-like structure, Google seems to go for the opposite approach and seek tight integration. How is AI best served? Original Article: https://www.wsj.com/articles/google-unit-deepmind-triedand-failedto-win-ai-autonomy-from-parent-11621592951 Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello, everyone. Today we're going to look at some news in the machine learning world. The Wall Street Journal here writes: Google unit DeepMind tried and failed to win AI autonomy from parent. So apparently, DeepMind has sought to become more independent of Google in the past. And here they write that it was founded in 2010 and bought by Google in 2014. And starting in 2015, there were already talks along the lines of: we want to be more independent. Now apparently, DeepMind told staff late last month that Google has called off those talks. Here it says DeepMind's founders had sought, among other ideas, a legal structure used by nonprofit groups, reasoning that the powerful artificial intelligence they were researching shouldn't be controlled by a single corporate entity. On the other hand, from Google's point of view, the proposed structure didn't make financial sense for Alphabet, given its total investment in the unit and its willingness to bankroll DeepMind. So DeepMind sold itself to Google because of money needs. Their research consumes ginormous quantities of energy and of researchers, and that costs a lot of money. So they cashed in 500 million as a price. It's said Google bought the startup for 500 million, and the losses of the company were about $660 million. This company makes giant losses because what they do is essentially PR. So the position of Google here is that they want to bring the teams closer together and have a stronger impact rather than separating the teams. This is an asset to Google, a tech asset. So for DeepMind, it's pretty easy to push for a nonprofit structure given that, you know, they will never make a profit ever. And as for their claims of wanting to be open and not in the hands of a single entity: I could take them more seriously if they were ever to publish in open access journals, which they don't. They publish in Nature. Oh, you've got to pay 20 bucks for that article. Thanks, DeepMind. Surely you don't want the technology to fall into the hands of a select few. If they were to actually open source their code, and not just some crappy pseudo code that has lots of mistakes in it, I'm sure you'd want to just distribute that stuff out there. Because if it's just in the hands of a single minority, that would be terrible. Right? Right? No, I think what they want is this: they recognize they've got something good going there. They've got someone paying their bills, and they don't want someone from the top down telling them, hey, make it more into a product. Hey, give it to us. We need it to make money. What are you talking about? Google wants this technology in their products as fast as possible, as best as possible. And DeepMind researchers are just really, really smart people that output these things. Lastly, I want to show you this rendering of the proposed new DeepMind offices here. Like, if that is not the most dystopian future picture I've ever seen. I mean, it does look cool, but it is a bit on the elitist side, I would feel. It's a cool office, like, sure, I'd take it. Absolutely great. What I'm saying is, you want this on the one hand, but then you also want giant loss-making and independence on the other hand. Maybe that's not possible at the same time. I'm just not really sure that that is the reason DeepMind seeks independence. All right, that was it for me. This is already too long. Tell me what you think in the comments. What should DeepMind do? What should Google do? Who's the good guy? Who's the bad guy? How should AI benefit all of humanity? Or are we all doomed? Peace out.
[ { "start": 0, "end": 8.98, "text": " Hello, everyone. Today we're going to look at some news in the machine learning world." }, { "start": 8.98, "end": 15.84, "text": " The Wall Street Journal here writes Google unit DeepMind tried and failed to win AI autonomy" }, { "start": 15.84, "end": 22.72, "text": " from parents. So apparently, DeepMind has sought to become more independent of Google" }, { "start": 22.72, "end": 30.439999999999998, "text": " in the past. And here they write that it's been founded in 2010 and bought by Google" }, { "start": 30.439999999999998, "end": 35.4, "text": " in 2014. And starting in 2015, there were already talks as far as we want to be more" }, { "start": 35.4, "end": 41.76, "text": " independent. Now apparently, DeepMind told staff late last month that Google has called" }, { "start": 41.76, "end": 47.96, "text": " off those talks. Here it says DeepMind's founders had sought among other ideas, a legal structure" }, { "start": 47.96, "end": 52.84, "text": " used by nonprofit groups reasoning that the powerful artificial intelligence they were" }, { "start": 52.84, "end": 58.08, "text": " researching shouldn't be controlled by a single corporate entity. On the other hand, from" }, { "start": 58.08, "end": 62.96, "text": " Google's point of view, the proposed structure didn't make financial sense for Alphabet," }, { "start": 62.96, "end": 67.52, "text": " given its total investment in the unit and its willingness to bankroll DeepMind. So DeepMind" }, { "start": 67.52, "end": 74.32, "text": " sold itself to Google because of money needs. Their research consumes ginormous quantities" }, { "start": 74.32, "end": 81.24, "text": " of energy and of researchers and that costs a lot of money. So they cashed in 500 billion" }, { "start": 81.24, "end": 87.6, "text": " as a price. Said it bought the startup for 500 million and the losses of the company" }, { "start": 87.6, "end": 95.53999999999999, "text": " were about $660 million. This company makes giant losses because what they do is essentially" }, { "start": 95.53999999999999, "end": 101.08, "text": " PR. So the position of Google here is that they want to bring the teams closer together" }, { "start": 101.08, "end": 107.64, "text": " and have a stronger impact rather than separating the teams. This is an asset to Google, a tech" }, { "start": 107.64, "end": 113.28, "text": " asset. So for DeepMind, it's pretty easy to push for a nonprofit structure given that," }, { "start": 113.28, "end": 119.4, "text": " you know, they will never make profit ever. And their claims to wanting to be open and" }, { "start": 119.4, "end": 125.5, "text": " not in the hands of a single thing. I could take it more seriously if they were ever to" }, { "start": 125.5, "end": 130.68, "text": " publish in open access journals, which they don't. They publish in nature. Oh, you got" }, { "start": 130.68, "end": 135.28, "text": " to pay 20 bucks for that article. Thanks, DeepMind. Surely you don't want the technology" }, { "start": 135.28, "end": 140.8, "text": " to fall into the hands of a select few. If they were to actually open source their code" }, { "start": 140.8, "end": 145.04000000000002, "text": " and not just some crappy pseudo code that has lots of mistakes in it. I'm sure you want" }, { "start": 145.04000000000002, "end": 149.68, "text": " to just distribute that stuff out of there. Because if it's just in the hand of a single" }, { "start": 149.68, "end": 156.38, "text": " minority, that would be terrible. Right? Right? 
No, I think what they want is they recognize" }, { "start": 156.38, "end": 159.96, "text": " they got something good going there. They got someone paying for their bills and they" }, { "start": 159.96, "end": 165.8, "text": " don't want someone from top down telling them, hey, make it more into a product. Hey, give" }, { "start": 165.8, "end": 172.94, "text": " it to us. We need it to make money. What are you talking about? Google wants this technology" }, { "start": 172.94, "end": 178.12, "text": " in their products as fast as possible, as best as possible. And DeepMind researchers" }, { "start": 178.12, "end": 182.84, "text": " are just really, really smart people that output these things. Lastly, I want to show" }, { "start": 182.84, "end": 189.84, "text": " you this rendering of the proposed new DeepMind offices in here. Like if that is not the most" }, { "start": 189.84, "end": 196.72, "text": " dystopian future picture I've ever seen. I mean, it does look cool, but it is a bit on" }, { "start": 196.72, "end": 202.16, "text": " the elitist side, I would feel it's a cool office, like, sure, I take it. Absolutely" }, { "start": 202.16, "end": 207.44, "text": " great. What I'm saying is, you want this on one hand, but then also you want giant loss" }, { "start": 207.44, "end": 212.76, "text": " making and independence. On the other hand, maybe that's not possible at the same time." }, { "start": 212.76, "end": 217.6, "text": " I'm just not really sure that that is the reason DeepMind seeks independence. All right," }, { "start": 217.6, "end": 222.48, "text": " that was it for me. This is already too long. Tell me what you think in the comments. What" }, { "start": 222.48, "end": 226.32, "text": " should DeepMind do? What should Google do? Who's the good guy? Who's the bad guy? How" }, { "start": 226.32, "end": 248.32, "text": " should AI benefit all of humanity? Or are we all doomed? Peace out." } ]
rHQPBqMULXo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Machine Learning PhD Survival Guide 2021 | Advice on Topic Selection, Papers, Conferences & more!
[ "Science & Technology" ]
[ "machine learning phd", "how to do a phd in machine learning", "phd advice", "machine learning phd thesis topics", "machine learning phd topics", "how to machine learning phd", "how to select a thesis topic", "how to machine learning conferences", "how to write a machine learning paper", "advice for phd students", "advice for new phd students", "how to survive a phd", "what to do in a machine learning phd", "deep learning phd advice", "machine learning phd thesis", "machine learning phd thesis topic" ]
#machinelearning #phd #howto This video is advice for new PhD students in the field of Machine Learning in 2021 and after. The field has shifted dramatically in the last few years and navigating grad school can be very hard, especially when you're as clueless as I was when I started. The video is a personal recount of my mistakes and what I've learned from them. If you already have several published papers and know what to do, this video is not for you. However, if you are not even sure where to start, how to select a topic, or what goes in a paper, you might benefit from this video, because that's exactly how I felt. Main Takeaways: - Select niche topics rather than hype topics - Write papers that can't be rejected - Don't be discouraged by bad reviews - Take reviewing & teaching seriously - Keep up your focus - Conferences are for networking - Internships are great opportunities - Team up with complementary skills - Don't work too hard OUTLINE: 0:00 - Intro & Overview 1:25 - Thesis Topic Selection 4:25 - How To Publish Papers 5:35 - Dealing With Reviewers 6:30 - How To Be A Reviewer 7:40 - Take Teaching Seriously 8:30 - Maintain Focus 10:20 - Navigating Conferences 12:40 - Internships 13:40 - Collaborations 14:55 - Don't Forget To Enjoy Transcript: https://www.notion.so/Yannic-Kilcher-s-PhD-Survival-Guide-Transcript-c507ab8e963e496fbb185cdfdb8d65ae Credits to Lanz for editing Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
on how to do a PhD. So mainly that you don't repeat my mistakes. Train. So you've made it into a PhD program. Congratulations, you made it. So today we're going to have a look at what to do during a PhD: how to succeed at publishing papers, how to deal with reviews, what to do at conferences, and many other things. So I hope you enjoy this little guide on how to survive a machine learning PhD in 2021. So first of all, let me say, I'm not good at this. I'm not an expert. I'm at the end of my PhD, and I've done many things wrong, and by no means am I a successful academic. However, if you're like myself and, at the beginning of your PhD, you don't really have a clue what to do, you don't know how to select topics, you don't know how to write papers, or even what a paper really is, then there might be something in here that could help you. I'm not super successful myself. But what I can tell you is that I've seen many people who are good at it. So I can tell you what those people did right, what I did wrong, and generally what I think you should do. Alright, that being said, let's dive right in. When it comes down to choosing a topic, make sure you look for something that your advisor or the senior people around you have lots of experience in. They can help you much better that way. You also want to choose something that matches your particular interests, because you're going to be stuck with it for a while. Lastly, you want to choose something that fits your expertise, something you're already reasonably good at or can get good at very quickly. At the intersection of those three things, you're going to find something that is unique to you and is going to be a very good topic for your PhD. But there are a few more things to consider when selecting a topic. First of all, resources: how much access to resources you have will determine what kind of topics are even accessible to you as a researcher. So I'm going to assume that you do not have a giant compute cluster or heaps of money around, and therefore my recommendations are going to be for, let's say, the rather average PhD student who is not at a giant tech company. However, if you do happen to have thousands of TPUs in your backyard, ignore my advice and just train big language models. Alright, there are two fundamental ways you can choose a topic. Way one is to choose the biggest, most hyped topic in the area right now. Now that is not necessarily a bad strategy, but it has some drawbacks. And the reason is that in a hype topic, there are many papers, but there is also a giant amount of competition, not only from other researchers, but from large corporations with lots and lots of resources behind them. And the bigger reason why it's a bad idea is the fact that hype topics wane. If you pick transformers to research today, it's very likely that three, four years down the road, you'll still be stuck with transformers while the field has moved on. And now all of these people that have made the same choice, namely to invest in the biggest topic right now, are trying to finish their PhD, are trying to get papers published in a topic that is no longer of such big interest at that particular point in time, and are therefore already on the declining side of the hype cycle. So what's the alternative to hype topics? The alternative is niche topics. And that's what I would recommend for most people. The advantage of finding a niche is that there isn't as much competition around, and you can actually become an expert and the best at whatever you do.
Some examples of niche topics are things like bandits, optimization, biologically plausible neural networks, text-based games. I'm not suggesting you go into these exact topics, but look for smaller communities that nevertheless publish year after year after year. Alright, so now the important stuff: how do you get papers published? If I had to summarize the style of writing papers that get published in one sentence, it would be: write papers that cannot be rejected. And that is not as obvious as it sounds. The review process in machine learning is heavily incentivized to reject your paper as quickly and easily as possible. Do not give reviewers any reason to reject your paper. And the easiest way to learn how to write papers is to literally read papers. Go into your niche, gather the papers that are there, read them, try to emulate their writing style, try to emulate the type of experiments they do and the way they present them, try to emulate the way they write up the theoretical foundations for their ideas. Your goal is going to be to write a paper where there is no obvious criticism to be had by reviewers. Reviews are the single biggest obstacle to achieving your goals. And let me tell you right now, getting reviews is one of the most cruel experiences you're going to have in your PhD. Reviewers are nasty, they don't have time, they don't read the paper correctly, they misunderstand, they criticize that you didn't evaluate on some obscure data set. And in general, you're going to feel quite misunderstood by reviewers. This happens to all of us. What I can tell you is: don't get discouraged by bad reviews. Don't take individual reviews too seriously, and just resubmit the paper to the next conference. So keep your sanity, don't take it personally. There are many famous papers that were rejected on the first try, not because the paper was bad, but just because the reviewers were crappy. Now there are going to be things during your PhD that you'll have to do that are not writing papers. One of those things is, especially as you get more senior, you're going to be asked to review yourself. Now it is an easy option to take all that frustration that you have against reviewing, and, seeing all these other people do such a crappy job, just think: whatever, I'm going to do a crappy job myself. And it's tempting. It's very tempting, especially because you gain nothing from doing good reviews. Other than a "hey, thanks for the review", you'll get nothing. And it is really, really hard to write a good review. Do it nevertheless, please. Not only are you helping the field by not being one of the crappy reviewers, but writing a good review also helps you really dig into a paper and really see the weaknesses in other papers. And it makes you a better author, researcher, and community member. So for your own sake, and for the community's, take reviewing seriously, even though you don't have time, even though other people do a crappy job. Another thing that you're very probably going to be asked to do is teaching. Now again, you're going to have very little incentive to do a good job at teaching. After all, students are nuisances; the faster you can get it over with, the earlier you can go back to writing papers. However, I urge you to take teaching seriously, not only because the world relies on the next generation of researchers being competent, but also think about the fact that some of the people you teach will probably be working with you in the future.
They might be researchers in other labs you collaborate with, they might even be joining your own lab, and you will profit from them being more competent. So take teaching seriously, for your benefit and for the benefit of your students. So besides the things you have to do, like reviewing and teaching, what should you work on all day? And here's my answer: start working on your thing, go pee, and then continue working on your thing. A PhD is first and foremost an exercise in long-term focus. You're going to be tempted to do all kinds of things during your PhD: here's a reading group, and here's a seminar, and here's a lecture. Now unless it is on your specific thing, on your specific niche, it's probably not going to be a productive use of your time. I'm not saying you shouldn't go there. What I'm saying is: be aware that what ultimately gets you your papers is long-term laser focus on your topic, and other topics will creep up on you. They're going to seem so interesting, because you're stuck here with your thing, the thing you know, and that is boring, and there's going to be this other cool topic. Wow. Here we are, this is the NeurIPS 2019 poster session, one of the poster sessions. There are about 250 posters in this room and there are so many people. It is crazy, every single poster has a crowd of people around it, presenters trying to explain their work to the bystanders. And you're going to be tempted: oh, this is interesting, this is interesting, this is interesting, and my topic is so lame, I'm going to just look into this, and that's also cool. Yeah, you know who did that? Me. It did not turn out well. Focus, focus, focus. Focus your research on your thing and you'll be successful. So now you've written your paper, you've submitted it to peer review, and with a little bit of luck you've actually managed to get it published and you get to go to a conference. Now the conference itself and the conference website and everyone on Twitter might give you the impression that conferences are there for people giving talks about their research and you listening and learning. That's crap. Conferences, especially the talking part of conferences, have become more and more irrelevant over the years. Especially now that everything is recorded and streamed, just watch that stuff from the comfort of your couch at 2x speed. You're missing nothing. These talks are often very short, very rehearsed, and most importantly they are about research that is at least six months old. The interesting part of conferences is the people there. The interesting talking happens in workshops, in panels, in tutorials. Try to find places where current research is discussed. Workshops are a great place to go for this, because the research there is often much more recent and not done yet. Go to conferences to interact with people. This whole "oh, we come together for the research" thing is a charade. The best researchers I know do nothing else but meet and talk to people all day at conferences. And I don't mean this in a mean way. I don't mean go out and deliberately engineer contact with people for your own benefit. No, a conference is a place where you can find other people that are interested in the same things as you, and you can talk to them and get to know things that you could never get to know through writing or in a paper. A lot of paper authors will tell you things face to face that they would never write down in a paper, such as which experiments don't work, problems in research, weaknesses of papers.
You'll get a lot of knowledge by being there and talking to people. But you have to go out of your way and do it actively. I know this is hard for a lot of us, but it pays off and it's going to make your life a lot more enjoyable. Alright, the next thing I want to talk about is internships. Should you do an internship at a company or at a different university? This depends entirely on your preference. Now I myself have had pretty good experiences with internships, and people I know have as well. Generally, if you do an internship, it gives you a bit of a different perspective because you do it at a different place. And if you do an internship with a large company, it can be quite a switch of environment. You'll have access to many more resources, you can maybe do a little bit of a different type of research, and most importantly, you'll meet people that are not academics, or not academics anymore. And that is very, very valuable. Once you've been stuck in academia for a while, meeting someone who just cares about building a cool product is so refreshing, and it gets you a bit down to earth about what's really important. Lastly, I want to talk about the topic of collaborations. Now academia is a bit tricky in that the system tries to alienate and isolate you as a person. You need those first-author papers, you need to provide a personal contribution to the knowledge of humankind. Look for people who have the same interests in terms of topic, but who have somewhat different skills or experiences, such that your papers and your research can become more well-rounded. That could be a difference in theoretical versus experimental knowledge, that could be a difference in your academic backgrounds. So if you can find someone that has complementary skills to yours and is interested in the same niche, it definitely pays off to work together and produce research together. However, only do this if they really work in the same field. It is very tempting to start all kinds of collaborations with people all over the place. If you can handle that, good for you, but again, it pays to have a little bit of focus on your particular field, and to really view collaborations as a joint effort to get research done more quickly and with more rigor. Right, so the way I've discussed it right now, it seems like doing a PhD is gruesome and lots of work and you never get to do anything fun. And while there is an aspect to that, and it definitely can happen to people, especially if they want to finish real quickly, I urge you to also make some time to enjoy this time. A PhD is a cool time. You'll get to meet so many interesting people, get to learn so many interesting topics and ideas, and you'll hopefully get to go to many interesting places, and that is an invaluable experience. So my advice is: if you can, take it a bit easier, enjoy your time, take as much out of it as you can, and don't work all the time. Maybe you'll take half a year longer, who cares? You only get to do a PhD once, so enjoy the time at university while you still can. You can get a job any day. So I hope you've gained at least something from this video, and you should be on a path to a successful machine learning PhD. Cheers!
[ { "start": 0, "end": 3.92, "text": " on how to do a PhD. So mainly that you don't repeat my mistakes." }, { "start": 7.28, "end": 7.6000000000000005, "text": " Train." }, { "start": 12.72, "end": 18.240000000000002, "text": " So you've made it into a PhD program. Congratulations, you made it. So today we're" }, { "start": 18.240000000000002, "end": 25.04, "text": " going to have a look at what to do during a PhD, how to succeed at publishing papers," }, { "start": 25.04, "end": 30.4, "text": " how to deal with reviews, what to do at conferences and many other things. So I hope" }, { "start": 30.4, "end": 36.48, "text": " you enjoy this little guide of how to survive a machine learning PhD in 2021." }, { "start": 44.96, "end": 51.44, "text": " So first of all, let me say, I'm not good at this. I'm not an expert. I'm at the end of my PhD and" }, { "start": 51.44, "end": 57.92, "text": " I've done many things wrong and by no means am I a successful academic. However, if you're like" }, { "start": 57.92, "end": 64.16, "text": " myself, and at the beginning of your PhD, you don't really have a clue what to do, you don't know how" }, { "start": 64.16, "end": 69.75999999999999, "text": " to select topics, you don't know how to write papers, or even what a paper is really, then there" }, { "start": 69.75999999999999, "end": 75.03999999999999, "text": " might be something in here that could help you. I'm not super successful myself. But what I can" }, { "start": 75.03999999999999, "end": 80.88, "text": " tell you is that I've seen many people who are good at it. So I can tell you what those people" }, { "start": 80.88, "end": 87.44, "text": " did right, what I did wrong, and generally what I think you should do. Alright, that being said," }, { "start": 87.44, "end": 94, "text": " let's dive right in. When it comes down to choosing a topic, make sure you look for something that" }, { "start": 94, "end": 98.96, "text": " your advisor or the senior people around you have lots of experience in. They can help you much" }, { "start": 98.96, "end": 104.47999999999999, "text": " better like this. You also want to choose something that matches your particular interests, because" }, { "start": 104.47999999999999, "end": 109.28, "text": " you're going to be stuck with it for a while. Lastly, you want to choose something that fits" }, { "start": 109.28, "end": 115.68, "text": " your expertise, where you're already reasonably good at or can get good at very quickly. At the" }, { "start": 115.68, "end": 121.68, "text": " intersection of those three things, you're going to find something that is unique to you, and is" }, { "start": 121.68, "end": 126.96000000000001, "text": " going to be a very good topic for your PhD. But there are a few more things to consider when" }, { "start": 126.96000000000001, "end": 134.8, "text": " selecting a topic. First of all, resources, how much access to resources you have will determine" }, { "start": 134.8, "end": 141.28, "text": " what kind of topics are even accessible to you as a researcher. So I'm going to assume that you do" }, { "start": 141.28, "end": 147.92000000000002, "text": " not have a giant compute cluster or heaps of money around. And therefore, my recommendations are going" }, { "start": 147.92000000000002, "end": 155.84, "text": " to be for, let's say the rather average PhD student who is not a giant tech company. 
However, if you" }, { "start": 155.84, "end": 161.44, "text": " do happen to have 1000s of TPUs in your backyard, ignore my advice and just train big language" }, { "start": 161.44, "end": 169.12, "text": " models. Alright, there are two fundamental ways how you can choose a topic. Way one is to choose" }, { "start": 169.12, "end": 176, "text": " the biggest most hype topic in the area right now. Now that is not necessarily a bad strategy," }, { "start": 176, "end": 182.56, "text": " but it has some drawbacks. And the reason is that in a hype topic, there are many papers," }, { "start": 182.56, "end": 189.6, "text": " but there is also a giant amount of competition, not only from other researchers, but from large" }, { "start": 189.6, "end": 196, "text": " corporations with lots and lots of resources behind them. And the bigger reason why it's a bad" }, { "start": 196, "end": 203.04, "text": " idea is the fact that they wane. If you pick transformers to research today, it's very likely" }, { "start": 203.04, "end": 209.2, "text": " that three, four years down the road, you'll still be stuck with transformers, the field has moved on." }, { "start": 209.2, "end": 214.4, "text": " And now all of these people that have made the same choice, namely to invest in the biggest topic" }, { "start": 214.4, "end": 220.56, "text": " right now, are trying to finish their PhD, are trying to get papers published in that topic that" }, { "start": 220.56, "end": 226.88, "text": " is no longer of such a big interest at that particular point in time, and therefore already" }, { "start": 226.88, "end": 232.56, "text": " be on the declining side of the hype cycle. So what's the alternative to hype topics? The" }, { "start": 232.56, "end": 238.24, "text": " alternative is niche topics. And that's what I would recommend for most people. The advantages" }, { "start": 238.24, "end": 244.56, "text": " of finding niches is there isn't as much competition around and you can actually become" }, { "start": 244.56, "end": 253.84, "text": " an expert and the best at whatever you do. Some examples of niche topics are things like bandits," }, { "start": 253.84, "end": 259.76, "text": " optimization, biologically plausible neural network, text based games, I'm not suggesting" }, { "start": 259.76, "end": 265.92, "text": " you go into these topics, but look for smaller communities that nevertheless publish year after" }, { "start": 265.92, "end": 272.24, "text": " year after year. Alright, so now the important stuff, how do you get papers published? Now if" }, { "start": 272.24, "end": 279.6, "text": " I had to summarize the style of writing papers that get published in one sentence is that" }, { "start": 280.16, "end": 286.56, "text": " write papers that cannot be rejected. And that is not as obvious as it sounds. The review process" }, { "start": 286.56, "end": 297.28000000000003, "text": " in machine learning is heavily incentivized to reject your paper as quickly and easily as possible." }, { "start": 297.28000000000003, "end": 304.8, "text": " Do not give reviewers any reason to reject your paper. And the easiest way to learn how to write" }, { "start": 304.8, "end": 312.88, "text": " papers is to literally read papers. 
Go into your niche, gather the papers that are there," }, { "start": 312.88, "end": 321.2, "text": " read them, try to emulate their writing style, try to emulate the type and way they do and present" }, { "start": 321.2, "end": 328.71999999999997, "text": " experiments, try to emulate the way they write up theoretical foundations for their ideas. Your goal" }, { "start": 328.71999999999997, "end": 336.08, "text": " is going to be to write a paper where there is no obvious criticism to be had by reviewers. Reviews" }, { "start": 336.08, "end": 341.6, "text": " are the single biggest obstacle to achieving your goals. And let me tell you right now," }, { "start": 341.6, "end": 348.56, "text": " getting reviews is one of the most cruel experiences you're going to have in your PhD." }, { "start": 348.56, "end": 355.92, "text": " Reviewers are nasty, they don't have time, they don't read the paper correctly, they misunderstand," }, { "start": 355.92, "end": 360.88, "text": " they criticize that you didn't evaluate on some obscure data set. And in general, you're going to" }, { "start": 360.88, "end": 367.04, "text": " feel quite misunderstood by reviewers. This happens to all of us. What I can tell you is" }, { "start": 367.04, "end": 374.08000000000004, "text": " don't get discouraged by bad reviews. Don't take individual reviews too seriously, and just" }, { "start": 374.08000000000004, "end": 379.36, "text": " resubmit the paper to the next conference. So keep your sanity, don't take it personally." }, { "start": 379.92, "end": 386.24, "text": " There are many famous papers that have been rejected at first try. And not because the paper" }, { "start": 386.24, "end": 394, "text": " was bad, but just because the reviewers were crappy. Now there are going to be things during" }, { "start": 394, "end": 400.48, "text": " your PhD that you'll have to do that are not writing papers. And one of those things is," }, { "start": 400.48, "end": 406.72, "text": " especially as you get more senior, you're going to be asked to review yourself. Now it is an easy" }, { "start": 406.72, "end": 413.36, "text": " option to take all that frustration that you have against reviewing, and you see all these other" }, { "start": 413.36, "end": 420.24, "text": " people doing such a crappy job that you just think, whatever, I'm going to do a crappy job myself." }, { "start": 420.24, "end": 426.56, "text": " And it's tempting. It's very tempting, especially because you gain nothing from doing good reviews." }, { "start": 426.56, "end": 432.48, "text": " But other than a you, hey, thanks for the review. You'll get nothing. And it is really," }, { "start": 432.48, "end": 438.16, "text": " really hard to write a good review. Do it. Nevertheless, please, not only are you helping" }, { "start": 438.16, "end": 444.24, "text": " the field by being not one of the crappy reviewers, but writing a good review also helps you really" }, { "start": 444.24, "end": 450.8, "text": " dig into a paper, really see the weaknesses in other papers. And it makes you a better author," }, { "start": 450.8, "end": 455.28000000000003, "text": " researcher, and community member. So for your own sake, and for the community," }, { "start": 455.84000000000003, "end": 461.12, "text": " take the review seriously, even though you don't have time, even though other people do a crappy" }, { "start": 461.12, "end": 468.88, "text": " job. Another thing that you're going to be asked to do very probably is teaching. 
Now again," }, { "start": 468.88, "end": 474.88, "text": " you're going to have very little incentive to do a good job at teaching. After all, students are" }, { "start": 474.88, "end": 480.96, "text": " nuisances, the faster you can get it over with, the better the earlier you can go back to writing" }, { "start": 480.96, "end": 486.64, "text": " papers. However, I urge you to take teaching seriously, not only because the world relies" }, { "start": 486.64, "end": 491.28, "text": " on the next generation of researchers being competent, but also think about the fact that" }, { "start": 491.28, "end": 497.36, "text": " the people you teach will be probably some of them working with you in the future. They might be" }, { "start": 497.36, "end": 503.28000000000003, "text": " researchers in other labs you collaborate with, they might even be joining your own lab, and you" }, { "start": 503.28000000000003, "end": 509.2, "text": " will profit from them being more competent. So take teaching seriously for your benefit and for" }, { "start": 509.2, "end": 514.5600000000001, "text": " the benefit of your students. So besides the things you have to do, like reviewing and teaching," }, { "start": 515.28, "end": 521.84, "text": " what should you work on all day? And here's my answer. Start working on your thing, go pee," }, { "start": 521.84, "end": 529.0400000000001, "text": " and then continue working on your thing. A PhD is first and foremost an exercise in long term" }, { "start": 529.0400000000001, "end": 535.6, "text": " focus, you're going to be tempted to do all kinds of things during your PhD, you're going to look" }, { "start": 535.6, "end": 542.1600000000001, "text": " and here's a reading group and here's a seminar and here's a lecture. Now unless it is on your" }, { "start": 542.1600000000001, "end": 547.52, "text": " specific thing on your specific niche, it's probably going to be not a productive use of" }, { "start": 547.52, "end": 552.56, "text": " your time. I'm not saying you shouldn't go there. What I'm saying is that be aware that what" }, { "start": 552.56, "end": 561.76, "text": " ultimately gets you to get your papers is a long term laser focus on your topic and other topics" }, { "start": 561.76, "end": 568.16, "text": " will creep up on you. It's going to be so interesting because you're stuck here with your" }, { "start": 568.16, "end": 573.84, "text": " thing that you know and that is boring and there's going to be this other cool topic. Wow." }, { "start": 573.84, "end": 580.32, "text": " Here we are, this is the NURBS 2019 poster session, one of the poster sessions. There are" }, { "start": 580.32, "end": 588.72, "text": " about 250 posters in this room and there are so many people. It is crazy, every single poster" }, { "start": 588.72, "end": 596.88, "text": " has a ball of people around it, presenters trying to explain to the bystanders their work." }, { "start": 599.12, "end": 602.72, "text": " And you're going to be tempted, oh this is interesting, this is interesting, this is" }, { "start": 602.72, "end": 610.08, "text": " interesting and my topic is so lame. I'm going to just look into this and that's also cool." }, { "start": 611.12, "end": 621.44, "text": " Yeah, you know who did that? Me. It did not turn out well. Focus, focus, focus, focus your research" }, { "start": 621.44, "end": 628.4, "text": " on your thing and you'll be successful. 
So now you've written your paper, you've submitted it" }, { "start": 628.4, "end": 633.28, "text": " to peer review and with a little bit of luck you've actually managed to get it published" }, { "start": 633.28, "end": 639.12, "text": " and you get to go to a conference. Now the conference itself and the conference website" }, { "start": 639.12, "end": 644.9599999999999, "text": " and everyone on Twitter might give you the impression that conferences are there for" }, { "start": 644.9599999999999, "end": 651.28, "text": " people giving talks about their research and you listening and learning. That's crap. Conferences," }, { "start": 651.28, "end": 656.8, "text": " especially the talking part of conferences, have become more and more irrelevant with the years." }, { "start": 656.8, "end": 662.64, "text": " Specifically now that everything is recorded and streamed, just look at that stuff from the comfort" }, { "start": 662.64, "end": 669.3599999999999, "text": " of your couch at 2x speed. You're missing nothing. These talks are often very short, very rehearsed" }, { "start": 669.3599999999999, "end": 675.68, "text": " and most importantly they are about research that is at least six months old. The interesting part" }, { "start": 675.68, "end": 682.88, "text": " about conferences are the people there. The interesting talking happens in workshops, in panels," }, { "start": 682.88, "end": 691.04, "text": " in tutorials, try to find places where current research is discussed. Workshops are a great" }, { "start": 691.04, "end": 697.84, "text": " place to go for this because the research is often much more recent and not done yet. Go to" }, { "start": 697.84, "end": 705.04, "text": " conferences to interact with people. This whole oh we come together for research, that's a charade." }, { "start": 705.04, "end": 712.32, "text": " The best researchers I know do nothing else but meet and talk to people all day at conferences." }, { "start": 712.32, "end": 719.0400000000001, "text": " And I don't mean this in a mean way. I don't mean go out and deliberately engineer contact with people" }, { "start": 719.0400000000001, "end": 725.44, "text": " for your own benefit. No, a conference is a place where you can find other people that are interested" }, { "start": 725.44, "end": 731.12, "text": " in the same things as you are and you can talk to them, get to know things that you could never get" }, { "start": 731.12, "end": 737.6, "text": " to know through a writing or in a paper. A lot of paper authors will tell you things face to face" }, { "start": 737.6, "end": 743.52, "text": " that they would never write down. A paper such as which experiments that don't work, problems in" }, { "start": 743.52, "end": 750.8000000000001, "text": " research, weaknesses of papers. You'll get a lot of knowledge by being there and talking to people." }, { "start": 750.8000000000001, "end": 757.0400000000001, "text": " But you have to go out of your way and do it actively. I know this is hard for a lot of us" }, { "start": 757.0400000000001, "end": 762.1600000000001, "text": " but it pays off and it's going to make your life a lot more enjoyable. All right the next thing I" }, { "start": 762.16, "end": 768.0799999999999, "text": " want to talk about is internships. Should you go to an internship at a company at a different university" }, { "start": 768.0799999999999, "end": 775.1999999999999, "text": " and this depends entirely on your preference. 
Now I myself have had pretty good experiences with" }, { "start": 775.1999999999999, "end": 781.4399999999999, "text": " internships and people I know have done so as well. Generally if you do an internship it gives you a" }, { "start": 781.4399999999999, "end": 786.8, "text": " bit of a different perspective because you do it at a different place. And if you do an internship" }, { "start": 786.8, "end": 792.64, "text": " with a large company it can be quite a switch of environment. You'll have access to many more" }, { "start": 792.64, "end": 797.76, "text": " resources and you can do maybe a little bit of a different type of research and most importantly" }, { "start": 798.3199999999999, "end": 807.12, "text": " you'll meet people that are not academics or not academics anymore. And that is very very valuable." }, { "start": 807.12, "end": 813.04, "text": " Once you've been stuck in academia for a while meeting someone who just cares to build a cool" }, { "start": 813.04, "end": 818.9599999999999, "text": " product is so refreshing and gets you a bit down to earth with what's really important. Lastly I" }, { "start": 818.9599999999999, "end": 825.92, "text": " want to talk about the topic of collaborations. Now academia is a bit tricky in that the system" }, { "start": 825.92, "end": 833.36, "text": " tries to alienate and isolate you as a person. You need those first author papers, you need to provide" }, { "start": 833.36, "end": 839.92, "text": " a personal contribution to the knowledge of humankind. Look for people who have the same" }, { "start": 839.92, "end": 847.04, "text": " interests in terms of topic but who have a little bit different skills or experiences such that your" }, { "start": 847.04, "end": 853.52, "text": " papers and your research can become more well rounded. That could be a difference in theoretical" }, { "start": 853.52, "end": 858.64, "text": " versus experimental knowledge, that could be a difference in your academic background. So if" }, { "start": 858.64, "end": 865.52, "text": " you can find someone that has complementary skills to yours and is interested in the same niche it" }, { "start": 865.52, "end": 872.8, "text": " definitely pays off to work together and produce research together. However only do this if they" }, { "start": 872.8, "end": 879.04, "text": " really work in the same field. It is very tempting to start all kinds of collaborations with people" }, { "start": 879.04, "end": 885.1999999999999, "text": " all over the place. If you can handle that good for you but again it pays to have a little bit" }, { "start": 885.1999999999999, "end": 891.1999999999999, "text": " of focus on your particular field and really view collaborations as a joint effort to get" }, { "start": 891.2, "end": 899.36, "text": " research done more quickly and with more rigor. Right so the way I discussed it right now it" }, { "start": 899.36, "end": 906.8000000000001, "text": " seems like doing a PhD is gruesome and lots of work and you never get to do anything fun and" }, { "start": 906.8000000000001, "end": 912.6400000000001, "text": " while there is an aspect to that and it definitely can happen to people especially if they want to" }, { "start": 912.64, "end": 921.76, "text": " finish real quickly. I urge you to also make some time to enjoy this time. A PhD is a cool time." 
}, { "start": 921.76, "end": 927.4399999999999, "text": " You'll get to meet so many interesting people, get to learn so many interesting topics and ideas" }, { "start": 927.4399999999999, "end": 935.04, "text": " and you'll hopefully get to go to many interesting places and that is an invaluable experience. So my" }, { "start": 935.04, "end": 943.1999999999999, "text": " advice is if you can take it a bit easier, enjoy your time, take as much out of it as you can and" }, { "start": 943.1999999999999, "end": 949.28, "text": " don't work all the time. Maybe you'll have half a year longer, who cares? You only get to do a PhD" }, { "start": 949.28, "end": 955.76, "text": " once and enjoy the time at university while you still can. You can get a job any day. So I hope" }, { "start": 955.76, "end": 962.0799999999999, "text": " you've gained at least something from this video and you should be on a path to a successful" }, { "start": 962.08, "end": 966.24, "text": " machine learning PhD. Cheers!" } ]
_8KNb5iqblE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Longformer: The Long-Document Transformer
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "attention mechanism", "attention", "transformer", "bert", "roberta", "mlm", "convolution", "memory", "linear", "sliding", "dilated", "sparse" ]
The Longformer extends the Transformer by introducing sliding window attention and sparse global attention. This allows for the processing of much longer documents than classic models like BERT. Paper: https://arxiv.org/abs/2004.05150 Code: https://github.com/allenai/longformer Abstract: Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. Authors: Iz Beltagy, Matthew E. Peters, Arman Cohan Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we're looking at Longformer, the long-document transformer, by Iz Beltagy, Matthew Peters and Arman Cohan of Allen AI. So the Longformer is a variant of the transformer, as you might have guessed. The Longformer is a transformer that can deal with long documents, so it's aptly named. So I am going to discuss what differentiates the Longformer from the transformer. If you don't know what a transformer is, watch the video on Attention Is All You Need. I have a video on that. And I would also suggest you watch the video on BERT, because a lot of the architecture and training here is based on BERT or variants of BERT. So I'll basically explain what makes the Longformer different, such that it handles long documents, right, so it can be applied to long documents. So what is the problem with the original transformer? If you have a transformer model and, let's say, you're doing an NLP task, which usually is where transformers are used, and you want to process a paragraph like this one right here, the abstract of the paper, and maybe you want to predict whether the paper gets accepted at a conference or not. Now the classic transformers have a limit, a very harsh limit, on the number of tokens that they can look at at the same time. So in a classic transformer you couldn't process this entire thing, let's say; you would divide it into chunks. You'd say, okay, here's my first chunk from here to here, my second chunk from here to here, and then here to here, and so on. So you go through the document, split it up into chunks, process each of the chunks individually, and then maybe aggregate the predictions. But of course the drawback is that the model cannot make specific connections between, let's say, some word here, like "operation", and somewhere down here, like "language". It cannot connect the two on a neural level, at least not in the classic transformer architectures. Now there are ways to try to alleviate this, but classically, if you split up your documents into individual samples, they become independent; the attention mechanism cannot operate across the boundaries of these chunks. So with the Longformer, the goal is to actually be able to put this entire document here into the model at the same time. So let's look a bit closer into this. In a classic transformer model, what you'll have is layers of what is called the attention mechanism. I'm going to draw six units here, and the units are actually the input sequence. So in a transformer, unlike a classic neural network, you don't actually have fixed numbers of units in the layers; you can input sequences as long as you want, until your memory limit is reached, basically. So these units expose something called keys on the lower layer, and these are vectors that point somewhere, and the upper layer will produce what are called queries. And again, I invite you to look at the Attention Is All You Need video if you want more explanation. And basically the keys and queries decide where information gets routed to. So the routing of information is what makes the transformer the transformer. So for example, this here is probably going to be routed to this here. So the information is routed like this, and then this here is going to be routed like this. You see, the routing is according to the dot product of the keys and queries.
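To make that routing concrete, here is a minimal NumPy sketch of plain dot-product self-attention (my own illustration, not code from the paper); the n-by-n score matrix it builds is exactly the quadratic cost discussed next:

```python
import numpy as np

def full_self_attention(x, Wq, Wk, Wv):
    """Plain dot-product self-attention over a sequence x of shape (n, d).

    The scores matrix has shape (n, n): every token attends to every
    other token, which is where the O(n^2) memory of the classic
    transformer comes from.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])     # (n, n) dot products
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)   # softmax over the keys
    return weights @ v                          # route values by weight

n, d = 8, 16
rng = np.random.default_rng(0)
x = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = full_self_attention(x, Wq, Wk, Wv)        # shape (n, d)
```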
So in essence, if you have an input sequence of tokens, in a transformer you usually transform it into a sequence of the same length. That also has a lot to do with how you want to pre-train things and so on, so we're not really going to change that part. If you have n input tokens and n tokens on the next layer, and everything can attend to everything, so all the inner products are computed, right? Everything is connected to everything. That means that you're going to end up with an O of n squared memory requirement, because you have n squared connections. The way to alleviate this is much like you would alleviate it in a classic neural network. So in a classic neural network, imagine you have an MLP, a multi-layer perceptron, or what is usually known as a fully connected layer, right? So here I have the same thing, but it's not a transformer, it's a classic neural network, fully connected. So I have d units right here, and d units in this first hidden layer, and I'll have a weight matrix in here, right? And the weight matrix means everything is connected to everything, right? Everything connects to everything else. Again, my memory requirement here is d squared. Now how do we deal with this in a classic neural network? We go to what is called a convolutional neural network; at least that's one of the methods. So let's make this again, but let's now make it a convolutional neural network. What we'll have is a convolutional kernel. In this case, it's just of length 3, right? So we just have 3 units here, and they will do the same fully connected pattern, but only over these 3 units right here. And then we slide the kernel over, right? Now it's in this position. It's still the same 3 units, but now these 3 things are connected to the 3 things that they're now over, right? And so you keep sliding this over across the lower layer until you're finally at the end here. And now you've reduced the memory consumption from d squared to just d times k, where k is the kernel size. And k can be kept pretty much constant, so that's O of d, right? The same goes for the Longformer. So in the Longformer, the idea is that you have a so-called sliding window attention. It's exactly the same as it is in the convolution, except that you don't have these hidden units here; these are actually parts of the input sequence, and instead of the weight matrix here, you have the attention mechanism over the keys, queries, and values. But the idea is similar.
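As a rough sketch of the difference (again my own toy code, not the authors' CUDA kernel), sliding window attention just restricts each query to a band of keys around it. Note that this version still materializes the full matrix and masks it, so it only illustrates the pattern, not the memory savings:

```python
import numpy as np

def sliding_window_attention(x, Wq, Wk, Wv, w=1):
    """Each token i attends only to tokens i-w .. i+w.

    Conceptually this costs O(n * (2w + 1)) instead of O(n^2); a real
    implementation would compute only the banded scores.
    """
    n = x.shape[0]
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    idx = np.arange(n)
    band = np.abs(idx[:, None] - idx[None, :]) <= w   # local band mask
    scores = np.where(band, scores, -np.inf)          # block the rest
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ v
```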
So you can basically say this is a sort of convolution, and we've already had this in the video about axial attention a bit. Now of course you trade off memory for performance, because before, right, before, I'm going to draw, let's draw it on top of this fully connected layer, before, all the units could attend to all the units, right? And now a unit can only attend to its immediate neighborhood, right? This green unit here can only attend to itself in the lower layer and its immediate neighbors, if the kernel size is 3. But consider what happens in the next layer. So in the next layer I have, for example, this unit right here. This is the same unit, right, on the next layer. It can attend to these two and itself in the lower layer, but these two themselves can attend to all of these, right, so the one on the right can attend to one more. So in the first layer this particular unit had information from these three units, but in the second layer the same unit now has information across these five, right, and this is this kind of cone of attention. It gets bigger and bigger as you go through the layers. So you lose the ability to incorporate wide ranges of information in a single layer, but you regain it through depth, right? The deeper you go, the more information a single unit gets; this unit gets information from this unit over here through the layers. It can't attend to that unit directly within this layer, that's not possible, but it gets the information through the layers. Of course there's still a trade-off: a fully connected layer could just do this in one step, and then in the next layer it could do it again, right, so it can do much more complex computation. But if you believe that the most important information is actually in the neighborhoods of the individual tokens, which is conceivable in something like a convolutional neural network, you know that in an image you usually have localized information, right: if there's a cat here, then the nose and the eyes of the cat are pretty close together. So in order to recognize that it's a cat, you mostly want local information, more and more local information. So in an image that makes sense, and in text it also makes sense to a degree, in that words close together in a sentence are usually important for each other, right. But the power of the transformer was initially that it could attend to everything in a sentence, right. So for example, if you have again the paragraph here, the power of the transformer, at least that was said, is the fact that this piece of text here could make a connection to this piece of text here, and therefore the understanding of the entire paragraph could rely on this connection being made, which a local model can't do. But if you go through depth, you might be able to recover that. So the Longformer basically does for transformers what the convolutional neural network does for MLPs. So instead of n by n, giving you n squared, you now go to O of n times w, with w being your window size. They have an illustration of this right here. So in the original transformer, this is an attention matrix. So here you have your n units in a sequence, and drawn in is which unit can attend to which other unit in a given layer. So you'll see this particular unit i here can attend, of course, to itself, right, it can attend to unit i. But it can also attend to this unit, or to this unit, or to any unit, right. And that's what gives you this n squared attention, because any unit can attend to any unit. Now in the sliding window attention pattern, and this is one of the core components of the Longformer, you see that the i-th unit right here can attend to itself, right, but also to this and to this, but no more. It can only attend to units from i minus w to i plus w, right. And this here is a window of size w. This is the sliding window. So a given unit can only attend to itself or its neighbors in one layer, right. And this is exactly what a convolution is: if you see this pattern, this is a convolutional pattern.
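Just to put numbers on this "cone of attention": with plain sliding windows, the receptive field grows linearly with depth. A back-of-the-envelope illustration (my own numbers, using the transcript's convention that a unit sees w neighbors on each side):

```python
def receptive_field(num_layers, w):
    """Tokens visible to one top-layer unit after stacking layers."""
    return 2 * num_layers * w + 1

for layers in (1, 6, 12):
    print(layers, receptive_field(layers, w=256))
# 1 -> 513, 6 -> 3073, 12 -> 6145 tokens
```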
Now the second core component is that they expand on this idea and create dilated sliding windows. You already know what a sliding window is. Now they're saying: if you have this sliding window, it might take quite a number of layers for the attention to incorporate the entire sequence. We saw before it took like three layers to get halfway through a sequence of, what was it, six tokens. Basically, if you go one layer up, right, you gain one more context window in each direction. So you'd have to go very, very deep in order to incorporate the information from these very long sequences, and the dilated sliding window helps with this. So again, if we have this sequence, and this is the next layer, actually, let's just draw: this unit right here will be able to attend to this and this, but not this and not this; but it will also be able to attend to this and this, but not this and not this. So it will skip one; these attention patterns will always kind of skip one. And the idea is that now you have a vastly greater window of attention, right; your window size is now way bigger, and that means you can incorporate information much faster across the layers, like global information. But of course now they're kind of arguing against themselves: when they introduced the sliding window, they said, well, we posit that mostly local information is important for NLP, right, the words right around a word are important. And now, with this, they're basically saying, oh well, it's not so important that we miss this word right here, which is right next to the word that we're attending from, which is counter to what they just said, that the most important information is probably around the word. They do get around this by saying: well, we have different layers in a transformer, and in the lower layers we'll use the sliding window, fully local, and in the higher layers we'll use the dilated window. Therefore, in the lower layers we postulate that local information is actually what's needed to understand local features, and then in the higher layers we want more global information, because it will incorporate features from the local information of the lower layers. Alright, I can follow the argumentation, but I feel that's just something they've thrown in there to make it work better after they tried it out. The last idea here in the Longformer is what they call global attention, and this global attention is sparse. What it means is that there are some special units here, so this, this, this and this unit, and these special units, as you can see from the attention pattern, can actually attend to everything. So this unit can attend, for example, to this one, or to this one, or to anything; these can attend to anything, and any unit can attend to those, right; any unit can attend to the first unit right here. So these are your special tokens, your special units, and they have global attention. And the reason for this, particularly, is that sometimes this is needed, and this is an engineering choice, right.
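A hedged sketch of what these patterns look like as a single boolean mask: a possibly dilated local band plus a few globally attending positions. This is my own illustration; the paper additionally varies the dilation across heads and layers:

```python
import numpy as np

def longformer_mask(n, w, dilation=1, global_positions=()):
    """Boolean (n, n) mask: True where attention is allowed.

    dilation=1 gives the plain sliding window; dilation=2 attends to
    every other token, doubling the span at the same cost. Positions
    in global_positions attend everywhere and are attended to by all.
    """
    idx = np.arange(n)
    diff = idx[None, :] - idx[:, None]
    mask = (np.abs(diff) <= w * dilation) & (diff % dilation == 0)
    for g in global_positions:
        mask[g, :] = True   # global token attends to everything
        mask[:, g] = True   # everything attends to the global token
    return mask

# e.g. a CLS-style token at position 0 made global:
print(longformer_mask(n=10, w=1, dilation=2, global_positions=[0]).astype(int))
```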
The example I can give is: let's say you have a question answering task. In a question answering task, what you usually have is a question and a paragraph, and let's say the task here is to answer yes or no. So the question might be a statement, right, I don't know, "King James was King of England from 1120 to 1140", and then the paragraph will be the Wikipedia entry for King James, and the question is: yes or no, is the statement true or not? How you would feed this to a BERT model, to a transformer, is you concatenate these two things, statement and paragraph, right; these are the tokens right here. And then you would separate them using a special token called the separator token. This is just to inform the model that here is where the first thing stops and the next thing starts. And then at the beginning you would put a special token called the CLS token. Now usually what you do is you send these things through your transformer, and in the last layer, right, you end up, as we've seen before, because you always transform a sequence into a sequence, you end up with a sequence again. But you just want a single thing, you just want yes or no. So you designate, you say: this particular unit here, the one that corresponds to the CLS token, that's what I'm going to throw into a logistic regression, and that's what will give me my yes or no answer, and that's how you train it, right. So you don't want to single out any of the input tokens as special, so you simply include a special token at the beginning, and that's the one you then take the classification from, right? It's pretty smart. But you also say: ah, this is such a special token, I want it to be able to attend to anything, right. Even though, for example, this unit right here can only attend to its neighbors, right, it has this cone thing, and this unit right here has this cone thing, this unit right here can always attend to anything at each of the layers, right. It can attend to anything, and anything can attend to it. So it can get information from anywhere routed to it in each of the layers, and it can send information to any of the other units. This is an engineering choice. So at the beginning, you as an engineer have to say which of these tokens are special tokens. For these tokens, you'll actually do full attention: they can attend to and from anything. So what are our new memory requirements? What this will give us is, first of all, we have n tokens, and here w is our window size, so we have n times w memory. But then we also add the global attention: on top of that, each of the special tokens adds an n times 2 memory requirement, because it can attend from and to everything. And this entire sum is multiplied by the number of layers. So this is your new attention memory requirement. And as you can see, this is going to be of order n, which is much smaller than the order n squared we had for the original transformer. So this is what the Longformer basically does. Now they have written custom CUDA kernels for doing this dilated attention and so on, which is pretty cool, and they have code available for the model. They test this on a number of language tasks. And what I find interesting is that they actually start from the RoBERTa checkpoint. RoBERTa, where is it said? Somewhere, oh yeah, this RoBERTa model right here is a variant of BERT, right, you can see the name in here. It's a variant of BERT, and that's their baseline. And they start from these checkpoints, as far as I understand, and they kind of copy over the position embeddings and so on, and therefore they only need to train not very much past RoBERTa. Now the reason why they can copy these over, actually, and this I find very interesting, is that they use a window size of 512.
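To spell out that memory calculation with concrete numbers (an illustration with made-up values; s is the number of global tokens):

```python
def attention_pairs_per_layer(n, w, s):
    """Sliding window pairs plus 2n pairs per global token."""
    return n * w + s * 2 * n

n, w, s = 4096, 512, 1
print(attention_pairs_per_layer(n, w, s))   # ~2.1 million, linear in n
print(n * n)                                # ~16.8 million for full attention
```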
So until I read that, I had come away from the paper thinking that this window size here might be fairly small, right. So this window size, it might be, you know, maybe 10, 20, 30 tokens or something, right? But actually, this window size is 512 in their formulation, which basically means that this is as much as one of the classic models could take as an entire document, right. So let's go over this: this here is 512. So this is what a classic model could take as an entire document, and in the classic model you simply split up the document, feed the chunks, right, and then aggregate over them. Now the Longformer basically has this. So for now I said it has lower memory requirements; actually, it has the same memory requirements as a classic model, but it is also able, because of the global attention, to incorporate information from the surrounding things. So that's the new part. Because if you think about it, if this w here is 512, then 512 was the original n. So 512 was the n zero, whatever the old models had as an n. So right now, if I replace this, and let's not take care of this part, if I replace this, it's actually n times n zero, and that regresses to the classic model if you plug in n zero here, right? So the new part really is the fact that you have this sliding window, and that the global attention is able to incorporate information from these special tokens as well, because the sliding window itself, that was done before. So I just don't want to give you the wrong impression that now we can run transformers on very small-memory machines. We can't. But we can run them on the same memory machines, because this is the same length, right, and also feed in longer documents, and have some information of the entire document be propagated to these blocks, which before we couldn't; before we could just simply feed these blocks one by one and not have global information. So that's the new thing. At least they haven't tested it with smaller windows, which is cool from an engineering point of view, right? If you want to show that you're better, you basically want to be as powerful as the old model, but then be more powerful still. And that's what they do. Alright. So if you want to check out the experiments and the ablations, it's very interesting, because they turn a lot of things in their model on and off and check where things come from, what helps and what doesn't. I'll leave this to you, and I'll link it. And with that, thanks for listening, watching, and bye-bye.
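A quick check of the "regresses to the classic model" point (my own arithmetic): with n equal to the window size, the sliding window costs exactly what full attention cost, and the savings only appear for documents longer than the window:

```python
w = 512
for n in (512, 4096):
    print(n, n * n, n * w)   # full attention pairs vs sliding window pairs
# 512  -> 262144 vs 262144 (identical)
# 4096 -> 16777216 vs 2097152 (8x fewer)
```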
[ { "start": 0, "end": 5.54, "text": " Hi there, today we're looking at Longformer, the long document transformer by" }, { "start": 5.54, "end": 13.48, "text": " Is Beltaji, Matthew Peters and Armin Cohen of Allen AI. So the longformer is a" }, { "start": 13.48, "end": 19.240000000000002, "text": " variant of the transformer as you might have guessed. The longformer is a" }, { "start": 19.240000000000002, "end": 26.72, "text": " transformer that can deal with long documents, so it's aptly named. So I am" }, { "start": 26.72, "end": 32.72, "text": " going to discuss what differentiates the longformer from the transformer. If you" }, { "start": 32.72, "end": 37.92, "text": " don't know what a transformer is, watch the video on attention is all you need. I" }, { "start": 37.92, "end": 43.96, "text": " have a video on that. And I would also suggest you watch the video on BERT," }, { "start": 43.96, "end": 50.120000000000005, "text": " because a lot of the architecture and training here is based on the BERT" }, { "start": 50.12, "end": 56.599999999999994, "text": " or variants of BERT. So I'll basically explain what makes the longformer" }, { "start": 56.599999999999994, "end": 61.12, "text": " different such that it gets long documents, right, so she can be applied" }, { "start": 61.12, "end": 66.2, "text": " to long documents. So what is the problem with the original transformer?" }, { "start": 66.2, "end": 72.44, "text": " If you have a transformer model and let's say you're doing an NLP task, which" }, { "start": 72.44, "end": 78.68, "text": " usually is where transformers are used, and you want to have a paragraph like" }, { "start": 78.68, "end": 83.76, "text": " this one right here, the abstract of the paper, and maybe you want to predict" }, { "start": 83.76, "end": 89.12, "text": " whether the paper gets accepted at a conference or not. Now the classic" }, { "start": 89.12, "end": 96.84, "text": " transformers, they have a limit, a very harsh limit, on the amount of tokens that" }, { "start": 96.84, "end": 100.80000000000001, "text": " they can look at at the same time. So what you would do in a classic" }, { "start": 100.80000000000001, "end": 107.32000000000001, "text": " transformer is you couldn't process this entire thing, let's say, you would divide" }, { "start": 107.32, "end": 112.24, "text": " it in chunks, you'd say, okay, here's my first chunk from here to here, my second" }, { "start": 112.24, "end": 118, "text": " chunk from here to here, and then here to here, and so on. So you go through the" }, { "start": 118, "end": 123.24, "text": " documents, split it up in chunks, process each of the chunks individually, and then" }, { "start": 123.24, "end": 128.92, "text": " maybe aggregate the predictions. But of course the drawback is that the model" }, { "start": 128.92, "end": 135.4, "text": " cannot make specific connections between, let's say, some word here, like operation," }, { "start": 135.4, "end": 140.92000000000002, "text": " and somewhere down here, like language. It cannot connect the two on a neural" }, { "start": 140.92000000000002, "end": 145.20000000000002, "text": " level, at least not in the classic transformer architectures. 
Now there are" }, { "start": 145.20000000000002, "end": 152.52, "text": " ways to try to alleviate this, but classically, if you split up your" }, { "start": 152.52, "end": 157.72, "text": " documents into individual samples, they become independent, you cannot do" }, { "start": 157.72, "end": 162.6, "text": " attention, this attention mechanism cannot operate over across the boundaries" }, { "start": 162.6, "end": 169.96, "text": " of these chunks. So the long former, the goal is to actually just be able" }, { "start": 169.96, "end": 177.51999999999998, "text": " to put this entire document here into the model at the same time. So let's look" }, { "start": 177.51999999999998, "end": 182.64, "text": " a bit closer into this. In a classic transformer model, what you'll have is" }, { "start": 182.64, "end": 189.2, "text": " you'll have layers of what is called attention mechanism. I'm gonna draw six" }, { "start": 189.2, "end": 195.44, "text": " units here, and the units are actually the input sequence. So in a transformer," }, { "start": 195.44, "end": 200.64, "text": " other than like a classic neural network, you don't actually have numbers of units" }, { "start": 200.64, "end": 208.72, "text": " in the layers, but you can input as many as long sequences as you" }, { "start": 208.72, "end": 216.2, "text": " want, until your memory limit is reached basically. So these units, they expose" }, { "start": 216.2, "end": 221.67999999999998, "text": " something called keys on the lower layer, and these are vectors that" }, { "start": 221.67999999999998, "end": 228.56, "text": " point somewhere, and the upper layer will produce what are called queries. And" }, { "start": 228.56, "end": 234.23999999999998, "text": " again, I invite you to look at the attention is all you need video if you" }, { "start": 234.23999999999998, "end": 238.88, "text": " want more explanation. And basically the keys and queries, they decide where" }, { "start": 238.88, "end": 245.44, "text": " information gets routed to. So the routing of information is what makes" }, { "start": 245.44, "end": 250.92, "text": " the transformer the transformer. So for example, this here is probably going to" }, { "start": 250.92, "end": 255.68, "text": " be routed to this here. So the information is routed like this, and then" }, { "start": 255.68, "end": 258.88, "text": " this here is going to be routed like this. You see the routing is according to" }, { "start": 258.88, "end": 267.15999999999997, "text": " the dot product of the keys and queries. So in essence, if you have" }, { "start": 267.15999999999997, "end": 274.64, "text": " an input sequence tokens, and you usually transform in a transformer, you transform" }, { "start": 274.64, "end": 281.71999999999997, "text": " the things into same length sequences. That has to do a lot also with how you" }, { "start": 281.71999999999997, "end": 286.96, "text": " want to pre-train things and so on. So we're not really going to change that" }, { "start": 286.96, "end": 294.08, "text": " part. If you have n input sequence and n tokens on the next layer, and everything" }, { "start": 294.08, "end": 298.4, "text": " can attend to everything, so all the inner products are computed, right?" }, { "start": 298.4, "end": 302.96, "text": " Everything is connected to everything. 
That means that you're going to end up" }, { "start": 302.96, "end": 308.23999999999995, "text": " with an O of n squared memory requirement, because you have n squared" }, { "start": 308.23999999999995, "end": 316.84, "text": " connections. The way to alleviate this is much much like you would alleviate this" }, { "start": 316.84, "end": 321.76, "text": " in a classic neural network. So in a classic neural network, imagine you have" }, { "start": 321.76, "end": 327, "text": " this MLP, a multi-layer perceptron, or what usually known as a fully connected" }, { "start": 327, "end": 331.88, "text": " layer, right? So here I have the same thing, but it's not a transformer. It's a" }, { "start": 331.88, "end": 338.12, "text": " classic neural network, fully connected. So I have D units right here, and D units" }, { "start": 338.12, "end": 342.88, "text": " in this first hidden layer. And I'll have a weight matrix in here, right? And the" }, { "start": 342.88, "end": 346.92, "text": " weight matrix means everything is connected to everything, right? Everything" }, { "start": 346.92, "end": 354.28, "text": " connects to everything else. Again, my memory requirement here is D squared. Now" }, { "start": 354.28, "end": 358.6, "text": " how do we deal with this in a classic neural network? We go to what is called a" }, { "start": 358.6, "end": 363.52000000000004, "text": " convolutional neural network. At least that's one of the methods. So let's" }, { "start": 363.52000000000004, "end": 369.88, "text": " make this again, but let's now make this a convolutional neural network. What we'll" }, { "start": 369.88, "end": 375.52000000000004, "text": " have is we'll have a convolutional kernel. In this case, it's just of length 3, right?" }, { "start": 375.52000000000004, "end": 382.8, "text": " So we just have 3 units here, and they will do the same fully connected pattern," }, { "start": 382.8, "end": 389.92, "text": " but only over these 3 units right here. And then we slide the kernel over, right?" }, { "start": 389.92, "end": 395.32, "text": " Now it's in this position. It's still the same 3 units, but now these 3 things" }, { "start": 395.32, "end": 401.08000000000004, "text": " are connected to these 3 things that they're now over, right? And so you keep" }, { "start": 401.08000000000004, "end": 408.48, "text": " sliding this over across the lower layer until you're finally at the end here. And" }, { "start": 408.48, "end": 413.76, "text": " now you've reduced the memory consumption from D squared to just D" }, { "start": 413.76, "end": 420.84000000000003, "text": " times, and if this is usually the kernel size, it's called K, to D times K. And K" }, { "start": 420.84000000000003, "end": 428.28000000000003, "text": " you can keep pretty much constant, so that's O of D, right? The same goes for" }, { "start": 428.28000000000003, "end": 433.44, "text": " the long former. So in the long former, the idea is that you have a so-called" }, { "start": 433.44, "end": 438.48, "text": " sliding window attention. It's exactly the same as it is in the convolution," }, { "start": 438.48, "end": 444, "text": " except that you don't have these hidden units here, but these are actually" }, { "start": 444, "end": 449.28, "text": " parts of the input sequence, and instead of the weight matrix here, you have the" }, { "start": 449.28, "end": 454.76, "text": " attention mechanism over the keys, queries, and values. But the idea is" }, { "start": 454.76, "end": 460.24, "text": " similar. 
So you can basically say this is a sort of a convolution, and we've" }, { "start": 460.24, "end": 465.64, "text": " already had this in the video about axial attention a bit. Now of course this" }, { "start": 465.64, "end": 474.48, "text": " is your trade-off memory for performance, because before, right, before, I'm gonna" }, { "start": 474.48, "end": 481.52, "text": " draw, let's draw it on top of this fully connected layer, before all the units" }, { "start": 481.52, "end": 487.44, "text": " could attend to all the units, right? And now the unit can only attend to its" }, { "start": 487.44, "end": 493.92, "text": " immediate neighborhood, right? This green unit here can only attend to itself in" }, { "start": 493.92, "end": 500.32, "text": " the lower layer and its immediate neighbors if the kernel size is 3. But" }, { "start": 500.32, "end": 505.24, "text": " consider what happens in the next layer. So in the next layer I have, for example," }, { "start": 505.24, "end": 511.96, "text": " this unit right here. This is same unit, right, on the next layer. It can attend to" }, { "start": 511.96, "end": 518.8, "text": " these two and itself in the lower layer, but these two themselves can attend to" }, { "start": 518.8, "end": 524.92, "text": " all of these, right, so that the one on the right can attend to one more. So in" }, { "start": 524.92, "end": 532.8, "text": " the first layer this particular unit had information from these three units, but" }, { "start": 532.8, "end": 537.56, "text": " in the second layer the same unit has now information across these five, right," }, { "start": 537.56, "end": 544.16, "text": " and this is kind of this cone of attention. It gets bigger and bigger as" }, { "start": 544.16, "end": 549.1999999999999, "text": " you go through the layers. So you lose the information to incorporate wide" }, { "start": 549.1999999999999, "end": 554.4399999999999, "text": " ranges of information in a single layer, but you regain it through depth, right?" }, { "start": 554.4399999999999, "end": 561, "text": " The deeper you go the more a single unit gets information, right, this unit gets" }, { "start": 561, "end": 566.0799999999999, "text": " information from this unit over here through the layers, through the layers." }, { "start": 566.08, "end": 571.88, "text": " It can't watch the unit right here in this layer. That's not possible, but it" }, { "start": 571.88, "end": 575.72, "text": " gets the information through the layers. Of course there's still a trade-off, like" }, { "start": 575.72, "end": 579.84, "text": " a fully connected layer could just do this in one step and then in the next" }, { "start": 579.84, "end": 584.88, "text": " layer it could do it again, right, it can do much more complex computation. But if" }, { "start": 584.88, "end": 588.72, "text": " you believe that the most important information is actually in the" }, { "start": 588.72, "end": 593.76, "text": " neighborhoods of the individual tokens, which is conceivable in something like a" }, { "start": 593.76, "end": 599.8, "text": " convolutional neural network, you know that, you know, in an image usually you" }, { "start": 599.8, "end": 606.48, "text": " have localized information, right, if there's a cat here then the nose and the" }, { "start": 606.48, "end": 610.48, "text": " eyes of the cat are pretty close together. So in order to recognize it's a" }, { "start": 610.48, "end": 617, "text": " cat you mostly want local information, more and more local information. 
So in" }, { "start": 617, "end": 621.68, "text": " an image that makes sense and in a text it also makes sense to a degree in that" }, { "start": 621.68, "end": 627.8399999999999, "text": " usually words close together in a sentence, they are important for" }, { "start": 627.8399999999999, "end": 634.12, "text": " each other, right, but the power of the transformer was initially that it could" }, { "start": 634.12, "end": 641.28, "text": " attend to everything in a sentence, right. So for example if you have again the" }, { "start": 641.28, "end": 645.0799999999999, "text": " paragraph here, the power of the transformer, at least that was said, is" }, { "start": 645.08, "end": 653, "text": " the fact that this piece of text here could make a connection to this" }, { "start": 653, "end": 657.36, "text": " piece of text here and therefore the understanding of the entire paragraph" }, { "start": 657.36, "end": 662.8000000000001, "text": " could be reliant on this connection being made, which a local model can't do." }, { "start": 662.8000000000001, "end": 668, "text": " But if you go through depth that you might be able to recover that. So the" }, { "start": 668, "end": 673.32, "text": " longformer is basically what the convolutional neural network does for" }, { "start": 673.32, "end": 682.24, "text": " MLPs, it does it for transformers, right. So instead of n by n giving you n squared" }, { "start": 682.24, "end": 689.5200000000001, "text": " now you go into this way where you have, so if you do the same for the" }, { "start": 689.5200000000001, "end": 697.1600000000001, "text": " transformer you go to o n times, let's call it w, and w being your window size" }, { "start": 697.16, "end": 706.16, "text": " in this case. They have an illustration of this right here. So in a original" }, { "start": 706.16, "end": 712.04, "text": " transformer this is an attention matrix. So here you have your n units in a" }, { "start": 712.04, "end": 718.4, "text": " sequence and drawn in is which unit can attend to which other unit in a given" }, { "start": 718.4, "end": 725.8399999999999, "text": " layer. So you'll see this particular unit i here can attend of course to itself," }, { "start": 725.84, "end": 732.5600000000001, "text": " right, can attend to unit i. But it can also attend to this unit or to this unit" }, { "start": 732.5600000000001, "end": 739.88, "text": " or to this unit to any unit, right. And that's what gives you this n squared" }, { "start": 739.88, "end": 745.88, "text": " attention because any unit can attend to any unit. Now in this sliding window" }, { "start": 745.88, "end": 751, "text": " attention pattern, and this is one of the core components of the longformer, you" }, { "start": 751, "end": 761.64, "text": " see that the i-th unit here right here can attend to itself, right, but also to" }, { "start": 761.64, "end": 774.28, "text": " this and to this, this, but no more. It can only attend to the i-th unit or to i" }, { "start": 774.28, "end": 784.64, "text": " minus w to i plus w, right. And this here is a window of size w. This is this" }, { "start": 784.64, "end": 791.4, "text": " sliding window. So a given unit can only attend to itself or its neighbors in one" }, { "start": 791.4, "end": 796.28, "text": " layer, right. And this is exactly what a convolution is. Like if you see if you" }, { "start": 796.28, "end": 804.36, "text": " see this pattern, this is a this is a convolutional pattern. 
Now the second" }, { "start": 804.36, "end": 810.28, "text": " core component is they expand on this idea in that they make they create these" }, { "start": 810.28, "end": 816.0799999999999, "text": " dilated sliding windows. Now you see you already know what a sliding window is." }, { "start": 816.0799999999999, "end": 822.0799999999999, "text": " Now they're saying well if you if you have this sliding window it might take" }, { "start": 822.08, "end": 828.2800000000001, "text": " quite a number of layers in order to you know get your attention of the entire" }, { "start": 828.2800000000001, "end": 834.2800000000001, "text": " sequence incorporated. We saw before it took like three layers to get halfway" }, { "start": 834.2800000000001, "end": 841.84, "text": " through this sequence of what was it like six tokens and it took us like" }, { "start": 841.84, "end": 849.6400000000001, "text": " three layers and with so basically if you go if you go one layer up right one" }, { "start": 849.64, "end": 857.3199999999999, "text": " layer up you gain one more context window in each direction, right. So it's" }, { "start": 857.3199999999999, "end": 862.3199999999999, "text": " not you'd have to go very very deep in order to incorporate the information" }, { "start": 862.3199999999999, "end": 870.64, "text": " from these very long sequences and the sliding the dilated sliding window helps" }, { "start": 870.64, "end": 881.28, "text": " this where they say well technically now any any any sequence here so again if we" }, { "start": 881.28, "end": 889.56, "text": " have this sequence and this is the next layer actually let's just draw so this" }, { "start": 889.56, "end": 896.12, "text": " unit right here it will be able to attend this and this but not this and" }, { "start": 896.12, "end": 900.92, "text": " not this but it will also be able to attend this and this but not this and" }, { "start": 900.92, "end": 905.88, "text": " not this sorry not this so it'll skip one so right these these attention" }, { "start": 905.88, "end": 912.62, "text": " patterns they will always kind of skip skip one and the idea is that now you" }, { "start": 912.62, "end": 918.28, "text": " have a vastly greater window of attention right your your window size is" }, { "start": 918.28, "end": 923.4, "text": " now way bigger that means you can incorporate information way faster" }, { "start": 923.4, "end": 929.4, "text": " across the layers like global information but of course now they're" }, { "start": 929.4, "end": 934, "text": " kind of arguing against each other in when they do this sliding window they" }, { "start": 934, "end": 941.24, "text": " say well we pose that mostly local information is important for NLP right" }, { "start": 941.24, "end": 945.84, "text": " the words right around the word are important and now if they say this here" }, { "start": 945.84, "end": 950.56, "text": " they basically say oh well it's not so important that we miss this word right" }, { "start": 950.56, "end": 957.04, "text": " here which is right next to the word that they are attending from which is" }, { "start": 957.04, "end": 961.16, "text": " counter counter to what they just said that probably the most important" }, { "start": 961.16, "end": 968, "text": " information is around the word they they do get around this by saying well if we" }, { "start": 968, "end": 973, "text": " have different layers in a transformer and in the lower layers will use this" }, { "start": 973, "end": 979.76, "text": " sliding window fully local and 
in the higher layers will use this dilated" }, { "start": 979.76, "end": 986.4, "text": " window and therefore in the in the lower layers we postulate that local" }, { "start": 986.4, "end": 991.88, "text": " information is actually what's needed to understand local features and then in" }, { "start": 991.88, "end": 997.92, "text": " the higher layers we want more global information because it will incorporate" }, { "start": 997.92, "end": 1003.88, "text": " features from the from the local informations of the lower layers all" }, { "start": 1003.88, "end": 1008.92, "text": " right I can I can get the argumentation but I feel that's just something they've" }, { "start": 1008.92, "end": 1017.0799999999999, "text": " thrown in there to make it work better after they tried it out and the the last" }, { "start": 1017.0799999999999, "end": 1023.24, "text": " idea here in the long former is what they call global attention and this" }, { "start": 1023.24, "end": 1029, "text": " global attention is sparse what it means is that there are some special units" }, { "start": 1029, "end": 1037.1599999999999, "text": " here so in this this this and this unit and these special units as you can see" }, { "start": 1037.16, "end": 1041.92, "text": " from the attention pattern these are these can actually attend to everything" }, { "start": 1041.92, "end": 1047, "text": " so this unit can attend for example to this one or to this one or to anything" }, { "start": 1047, "end": 1052.3200000000002, "text": " these can attend to anything and any unit can attend to those right any unit" }, { "start": 1052.3200000000002, "end": 1059.3200000000002, "text": " can attend to the the first unit right here right so these are your special" }, { "start": 1059.3200000000002, "end": 1066.68, "text": " tokens your special units and they have global attention and the reason for" }, { "start": 1066.68, "end": 1072.88, "text": " this particularly is that sometimes this is needed and this is an engineering" }, { "start": 1072.88, "end": 1077.48, "text": " choice right the example I can give is let's say you have a question answering" }, { "start": 1077.48, "end": 1081.3600000000001, "text": " task in a question answering task what you usually have is a question and a" }, { "start": 1081.3600000000001, "end": 1087.5600000000002, "text": " paragraph and let's say the task here is to answer yes or no is the is the" }, { "start": 1087.5600000000002, "end": 1092.68, "text": " question so the question might be a statement right I don't know King James" }, { "start": 1092.68, "end": 1099.44, "text": " was King of England from 1120 to 1140 and then the paragraph will be the" }, { "start": 1099.44, "end": 1107.1200000000001, "text": " Wikipedia entry for King James and the the question is yes or no is the is the" }, { "start": 1107.1200000000001, "end": 1111.72, "text": " question true or not is the statement made true or not how you would feed this" }, { "start": 1111.72, "end": 1118.8400000000001, "text": " to a BERT model, to a transformer, is you concatenate these two things, the" }, { "start": 1118.84, "end": 1124.9199999999998, "text": " statement and paragraph right these are the tokens right here and then you would" }, { "start": 1124.9199999999998, "end": 1129.6399999999999, "text": " separate them using a special token called the separator token this is just" }, { "start": 1129.6399999999999, "end": 1133.72, "text": " to inform the model that here is where the first thing stops and the next" }, { "start": 1133.72, "end": 1139.24, 
"text": " thing starts and then at the beginning you would put a special token called the" }, { "start": 1139.24, "end": 1145.9599999999998, "text": " CLS token now usually what you do is you send these things through your" }, { "start": 1145.96, "end": 1152.68, "text": " transformer and now in the last layer right you end up as we've seen before" }, { "start": 1152.68, "end": 1156.24, "text": " because you always transform a sequence into a sequence you end up with a" }, { "start": 1156.24, "end": 1161.24, "text": " sequence again but you just want a single thing you just want yes or no so" }, { "start": 1161.24, "end": 1168.8400000000001, "text": " you designate you say this particular unit here that corresponds to the CLS" }, { "start": 1168.8400000000001, "end": 1174.48, "text": " token that's what I'm going to throw into a logistic regression and that's" }, { "start": 1174.48, "end": 1178.96, "text": " what will give me my yes or no answer and that's how you train it right so you" }, { "start": 1178.96, "end": 1185.68, "text": " you don't want to single out any of these any of these as like special so" }, { "start": 1185.68, "end": 1191.08, "text": " you simply already include a special token at the beginning that then you" }, { "start": 1191.08, "end": 1197.56, "text": " take the classification from right it's pretty smart but also you say ah this is" }, { "start": 1197.56, "end": 1204.24, "text": " such a special token I want that to be able to attend to anything right even" }, { "start": 1204.24, "end": 1209.4, "text": " though for example this unit right here it can only attend to its neighbors" }, { "start": 1209.4, "end": 1214.06, "text": " right it has this cone thing and this unit right here has this cone thing this" }, { "start": 1214.06, "end": 1220.4, "text": " unit right here can always attend to anything at each of the layers right it" }, { "start": 1220.4, "end": 1225.72, "text": " can attend to anything and anything can attend to it so it can get information" }, { "start": 1225.72, "end": 1231.78, "text": " from anywhere routed to it in each of the layers and it can send information" }, { "start": 1231.78, "end": 1233.78, "text": " to any of the other units." }, { "start": 1234.58, "end": 1236.66, "text": " This is an engineering choice." }, { "start": 1236.74, "end": 1243.18, "text": " So at the beginning, you as an engineer have to say which one of these tokens are special tokens." }, { "start": 1243.18, "end": 1247.62, "text": " For these tokens, you'll actually then do full attention." }, { "start": 1247.62, "end": 1250.42, "text": " It can attend to and from anything." }, { "start": 1251.18, "end": 1253.7, "text": " What are our new memory requirements?" }, { "start": 1253.98, "end": 1255.98, "text": " What this will give us is," }, { "start": 1256.1399999999999, "end": 1259.46, "text": " first of all, we have N tokens." }, { "start": 1259.46, "end": 1263.14, "text": " And here W is our window size." }, { "start": 1263.14, "end": 1266.94, "text": " So we have N times W memory." }, { "start": 1266.94, "end": 1269.98, "text": " But then we also add the global attention." }, { "start": 1269.98, "end": 1275.74, "text": " So plus the number of special tokens times," }, { "start": 1275.74, "end": 1278.46, "text": " if there's a special token," }, { "start": 1278.46, "end": 1284.66, "text": " it will have N times 2 memory requirement," }, { "start": 1284.66, "end": 1288.06, "text": " because it can attend from and to in each layer." 
}, { "start": 1288.06, "end": 1293.1399999999999, "text": " And this entire thing, sorry, with the plus," }, { "start": 1293.1399999999999, "end": 1296.82, "text": " this entire thing times the number of layers." }, { "start": 1296.82, "end": 1300.86, "text": " So this is your new attention memory requirements." }, { "start": 1300.86, "end": 1303.98, "text": " And as you can see here, N plus N." }, { "start": 1303.98, "end": 1306.98, "text": " So this is going to be order of N," }, { "start": 1306.98, "end": 1311.1, "text": " instead of much smaller than order of N squared," }, { "start": 1311.1, "end": 1313.8999999999999, "text": " as we had for the original transformer." }, { "start": 1313.9, "end": 1315.9, "text": " Right." }, { "start": 1315.9, "end": 1320.5, "text": " So this is what the longformer basically does." }, { "start": 1320.5, "end": 1323.5, "text": " Now they have written custom CUDA kernels" }, { "start": 1323.5, "end": 1329.5, "text": " for doing this dilated attention and so on," }, { "start": 1329.5, "end": 1330.5, "text": " which is pretty cool." }, { "start": 1330.5, "end": 1333.5, "text": " And they have code available for the model." }, { "start": 1333.5, "end": 1338.5, "text": " They test this on a number of language tasks." }, { "start": 1338.5, "end": 1342.5, "text": " And what I find interesting is," }, { "start": 1342.5, "end": 1346.5, "text": " actually, they start from the Roberta checkpoint," }, { "start": 1346.5, "end": 1350.5, "text": " which Roberta, where is it said?" }, { "start": 1350.5, "end": 1354.3, "text": " Somewhere, oh yeah, this Roberta model right here" }, { "start": 1354.3, "end": 1356.1, "text": " is a variant of BERT." }, { "start": 1356.1, "end": 1358.5, "text": " Right, you can see the name in here." }, { "start": 1358.5, "end": 1360.1, "text": " It's a variant of BERT." }, { "start": 1360.1, "end": 1361.5, "text": " And that's their baseline." }, { "start": 1361.5, "end": 1363.3, "text": " And they start from these checkpoints," }, { "start": 1363.3, "end": 1364.7, "text": " as far as I understand," }, { "start": 1364.7, "end": 1368.1, "text": " and they kind of copy over the position embeddings and so on." }, { "start": 1368.1, "end": 1372.8999999999999, "text": " And therefore, they only need to train not very much" }, { "start": 1372.8999999999999, "end": 1374.3, "text": " past the Roberta." }, { "start": 1374.3, "end": 1377.6999999999998, "text": " Now the reason why they can copy it over actually is," }, { "start": 1377.6999999999998, "end": 1380.1, "text": " and this I find very interesting," }, { "start": 1380.1, "end": 1383.8999999999999, "text": " is they use a window size of 512." }, { "start": 1383.8999999999999, "end": 1385.6999999999998, "text": " So until I read this," }, { "start": 1385.6999999999998, "end": 1389.6999999999998, "text": " I got away from reading the paper" }, { "start": 1389.6999999999998, "end": 1394.8999999999999, "text": " thinking that this window size here might be fairly small." }, { "start": 1394.8999999999999, "end": 1395.5, "text": " Right." }, { "start": 1395.5, "end": 1398.9, "text": " So this window size, it might be, you know," }, { "start": 1398.9, "end": 1402.9, "text": " maybe 10, 20, 30 tokens or something, right?" 
}, { "start": 1402.9, "end": 1411.5, "text": " But actually, this window size is 512 in their formulation," }, { "start": 1411.5, "end": 1416.1, "text": " which basically means that this is as much" }, { "start": 1416.1, "end": 1419.9, "text": " as one of the classic models could take as a document." }, { "start": 1419.9, "end": 1420.5, "text": " Right." }, { "start": 1420.5, "end": 1423.1, "text": " So, sorry, let's go over." }, { "start": 1423.1, "end": 1426.5, "text": " So this here is 512." }, { "start": 1426.5, "end": 1436.6999999999998, "text": " So this is what a classic model could take as an entire document." }, { "start": 1436.6999999999998, "end": 1437.6999999999998, "text": " And in the classic model," }, { "start": 1437.6999999999998, "end": 1441.1, "text": " you simply split up the document, feed chunks, right?" }, { "start": 1441.1, "end": 1442.6999999999998, "text": " And then aggregate over them." }, { "start": 1442.6999999999998, "end": 1446.6999999999998, "text": " Now the longformer basically has this." }, { "start": 1446.6999999999998, "end": 1451.3, "text": " So right now, for now, I said it has less memory requirements." }, { "start": 1451.3, "end": 1455.5, "text": " Actually, it has the same memory requirements as a classic model," }, { "start": 1455.5, "end": 1456.8999999999999, "text": " but it is also able," }, { "start": 1456.8999999999999, "end": 1458.5, "text": " because of these global attention," }, { "start": 1458.5, "end": 1462.8999999999999, "text": " to kind of incorporate information from the surrounding things." }, { "start": 1462.8999999999999, "end": 1466.1, "text": " So that's the new part." }, { "start": 1466.1, "end": 1468.3, "text": " Because if you think about it," }, { "start": 1468.3, "end": 1474.8999999999999, "text": " if this W here is 512, 512 was the original N." }, { "start": 1474.8999999999999, "end": 1480.5, "text": " So 512 was the N0." }, { "start": 1480.5, "end": 1483.7, "text": " Whatever the old models had as an N." }, { "start": 1483.7, "end": 1489.7, "text": " So right now, if I replace this," }, { "start": 1489.7, "end": 1492.5, "text": " and let's not take care of this." }, { "start": 1492.5, "end": 1496.1, "text": " If I replace this, it's actually N times N0." }, { "start": 1496.1, "end": 1501.7, "text": " And that regresses to the classic model if you plug in N0 here, right?" }, { "start": 1501.7, "end": 1508.3, "text": " So the new part really is the fact that you have this sliding window," }, { "start": 1508.3, "end": 1515.3, "text": " and the global attention is able to incorporate information from these special tokens as well." }, { "start": 1515.3, "end": 1520.7, "text": " Because sliding window, that was done before." }, { "start": 1520.7, "end": 1524.7, "text": " So I just don't want to get to you the wrong impression" }, { "start": 1524.7, "end": 1528.1, "text": " that now we can run transformers on like very small memory machines." }, { "start": 1528.1, "end": 1529.7, "text": " We can't." }, { "start": 1529.7, "end": 1533.5, "text": " But we can run them on the same memory machines," }, { "start": 1533.5, "end": 1535.8999999999999, "text": " because this is the same length, right?" }, { "start": 1535.9, "end": 1539.1000000000001, "text": " But also feed in longer documents," }, { "start": 1539.1000000000001, "end": 1547.3000000000002, "text": " and have some information of the entire document be propagated to these blocks," }, { "start": 1547.3000000000002, "end": 1548.7, "text": " which before we couldn't." 
}, { "start": 1548.7, "end": 1554.3000000000002, "text": " Before we could just simply feed these blocks as one and not have global information." }, { "start": 1554.3000000000002, "end": 1555.9, "text": " So that's the new thing." }, { "start": 1555.9, "end": 1559.3000000000002, "text": " At least they haven't tested it on the smaller things," }, { "start": 1559.3000000000002, "end": 1561.3000000000002, "text": " which is cool from an engineering point, right?" }, { "start": 1561.3000000000002, "end": 1562.9, "text": " You would want to," }, { "start": 1562.9, "end": 1564.7, "text": " because if you want to show that you're better," }, { "start": 1564.7, "end": 1571.1000000000001, "text": " you would want to basically be able to be as powerful as the old model," }, { "start": 1571.1000000000001, "end": 1573.5, "text": " but then be more powerful." }, { "start": 1573.5, "end": 1575.5, "text": " And that's what they do." }, { "start": 1575.5, "end": 1575.9, "text": " All right." }, { "start": 1575.9, "end": 1579.1000000000001, "text": " So if you want to check out the experiments and the ablations," }, { "start": 1579.1000000000001, "end": 1583.9, "text": " it's very interesting because they turn on and off a lot of things in their model," }, { "start": 1583.9, "end": 1586.1000000000001, "text": " and kind of check out where things come from," }, { "start": 1586.1000000000001, "end": 1587.9, "text": " what helps, what doesn't." }, { "start": 1587.9, "end": 1590.5, "text": " And I'll leave this to you, and I'll link it." }, { "start": 1590.5, "end": 1595.1, "text": " And with that, thanks for listening, watching, and bye-bye." } ]
eyxmSmjmNS0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Classic] Generative Adversarial Networks (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gan", "generator", "discriminator", "convolution", "deconvolution", "goodfellow", "bengio", "convolutional neural network", "mnist", "cifar10", "generative", "generative model", "image generation", "face model", "latent space", "interpolation", "minmax", "nash equilibrium", "game theory" ]
#ai #deeplearning #gan GANs are one of the main models in modern deep learning. This is the paper that started it all! While the task of image classification was making progress, the task of image generation was still cumbersome and prone to artifacts. The main idea behind GANs is to pit two competing networks against each other, thereby creating a generative model that only ever has implicit access to the data through a second, discriminative, model. The paper combines architecture, experiments, and theoretical analysis beautifully. OUTLINE: 0:00 - Intro & Overview 3:50 - Motivation 8:40 - Minimax Loss Function 13:20 - Intuition Behind the Loss 19:30 - GAN Algorithm 22:05 - Theoretical Analysis 27:00 - Experiments 33:10 - Advantages & Disadvantages 35:00 - Conclusion Paper: https://arxiv.org/abs/1406.2661 Abstract: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. Authors: Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/ If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hi there! Today we'll look at Generative Adversarial Nets by Ian J. Goodfellow et al. So this one is another installment in our series of historical papers that had great impact. GANs nowadays, or Generative Adversarial Nets back then, were sort of... This was the starting shot in a long line of research that is still continuing today. So I remember when I started my PhD in 2015, GANs were just about spiking. I remember NeurIPS, or back then NIPS, in 2016, and every other paper was about GANs. There was also this famous Schmidhuber Goodfellow moment at the tutorial. It was a wild time. And this is the paper that started it all. The paper is quite well written. It's very focused on convincing you that this is a sound method mathematically. That it doesn't just do wild things. And also it has a lot of modern tricks for GANs already built into it. So astounding how much foresight there was already in this paper. But of course, GANs have come a super long way since then. And today we'll just go through the paper and look at how it looked back then and what this paper was like. So yeah, join me in this. If you like it, please share it out. Let me know in the comments what you think of historic paper reviews. This is not going to be like a beginner's tutorial in GANs. This is really going to be... We'll go through the paper. You'll see right here the paper is from 2014. So it would still be another two years or so until GANs really take off from this point on. But the introduction, of course, was really important. Okay, so abstract. Here we go. We propose a new framework for estimating generative models via an adversarial process in which we simultaneously train two models, a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G. Okay, this was sort of a new thing. Now, I know, I know people disagree with this being a new thing, but this was a new thing. And specifically, this was the first paper that made something like this really work for data. So to have a discriminator, the words generator and discriminator were also introduced in this paper. So you train this D model, which is the discriminator, and the D model basically decides whether or not a given data point comes from data or comes from the fake distribution. And then you have a generative model G that is supposed to just create this data X rather than coming from the database. So you want to sample a couple of times from the data, and sometimes you sample from this model G, and then the discriminator is supposed to decide whether or not it comes from the data set or from your counterfeiter, like from this generator G. And it's supposed to say whether it's data or fake. So you train the D model as a simple image classifier. So people already knew how to build image classifiers. This was shortly, as you can see, before ResNet came on the scene. So people already kind of knew how to build CNNs, build really good image classifiers. And the thought here was really generative models weren't really a thing until then. So people were in language models, Word2Vec was kind of coming up, but they would still be doing like RNNs using these Word2Vec vectors for generating language. In images, generative models weren't really much of a thing. So you would do like compositional models or you would do autoencoders, which were just either really blurry or really, really artifactory. 
And there were also approaches like deep belief networks and so on, but they had their own problems. So there wasn't really a satisfactory way to do image generation that resulted in really high quality images. Now here, I think the entire thought, and this is not really spelled out, but the entire thought here is that, hey, we know how to train really, really good image classifiers. This has been evident since AlexNet. So for two years, this was evident how to build really good image classifiers. And the question here is to say that rather than also building really good generators, can't we like harness the power of building really good classifiers for training a generator? And this is this idea right here. This wasn't the one before, as you know, in like an autoencoder, what you do is you'd input a sample into some kind of autoencoder bottleneck thing, whatever. And then at the end, you train your output sample to match the input sample as close as possible. And then in here, after you've trained this, this part here is your generative model. And then here, in here, you'd input like MCMC sampler or whatnot. And then, of course, variational autoencoders came up and so on. But still, what you always would do is you would somehow use the data directly. So this is data in order to train your model. So you would somehow say, ah, the output here should probably match the input in some way or in some at least distributional way. This was a new thing. As you can see right here, there is no direct connection between the data and the generator. And I think this was the success of this model. The fact that the generator did not, it wasn't trained from the data like you would do if you were just approaching this problem. The philosophy here is let's use the power of discriminative models, which we know how to build, in order to train this generator. So the generator's task now isn't to match any sort of data point. The generator's task is to produce images that the discriminator would classify as data. And you can do that by simply back propagating through the discriminator to the generator. So I think that's the only thing that's kind of unstated in this paper, the reasoning behind why this is new, why this might work. But everything else is spelled out very well in this paper, I have to say, if you read through it. So the training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two player game. So as I said, the paper is very much focused on convincing you that there's something sound happening here, because at that time, if you were to look at this, you would say something like there is no way. You would be like, yeah. So I can understand the motivation here to really convince people that, you know, something good is happening also on the on the theoretical side. In the space, sorry, in the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to one half everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with back propagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. OK, so the point here is that it's much easier than current methods of producing generative models. And also it does something sound. Now, let's jump into the loss function right here. 
So they say G and D play the following two player minimax game with value function V. And this is still used until today. It was already like, if this was a pure engineering paper, they could simply build the architecture and say, oh, we let these networks fight. And they are kind of adversarial and they pump each other up and so on. And this here was much more into the direction of a kind of theoretical reasoning into why something like this would work. Of course, there is still a lot of engineering going on to actually make it work. So they have this value function right here. OK, and the value function is the following. So what you have is you have the log probability of real data under D and you have the log of one minus D of the generated samples. So here you can see and this was introduced, this seems also obvious now. Right. But you have a prior on what is called the noise distribution. OK, so you have a prior on your input noise to the generator because the generator is supposed to come up with very many different data points. And if it is a, you know, non-stochastic function like a neural network, then you need some way to produce different images. So there is this prior distribution over the noise. You feed that noise into the generator. The generator will produce an output. You put that into the discriminator and then this right here, as you can see, the discriminator is trying to maximize this objective. So the discriminator is trying to maximize the probability of real data and it is trying to minimize the probability of fake data. OK, this is simply a two way classification problem. At the same time, the generator, as you can see, is trying to minimize the objective. In fact, the order here is quite important. So the generator, as you can see, is trying to minimize whatever this here is. So the generator sort of is trying to minimize against the best possible discriminator. And so one observation right here is that the formulation is always with respect to a perfect discriminator. Now, we know that this doesn't work because if you have a perfect discriminator, then the generator cannot catch up because you have insufficient gradients and so on. And this was already recognized in this paper as well. But the formulation is with respect to a min max game and not a max min game. So the other point I want to make here is that you can see the discriminator appears in both terms right here. However, the generator only appears right here. And this basically means that the objective for the generator is only this part here because the other part is constant. So the generator is just trying to make the discriminator think that fake data is real. So it is trying to make the discriminator's fake classification as small as possible for the data that it outputs. Well, the discriminator, yeah, it's trying to classify fake data as fake and real data as real. Whereas the generator has only this part on the right. This is, I feel, quite important. Why? Because already in this paper, they recognize that this might not be the best practical objective. And for the generator, they can actually exchange this part here on the right. So instead of log one minus D, we simply want to use minus log D as an objective for the generator. 
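For reference, the value function being walked through here is, in the paper's notation (a standard restatement, nothing added):

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The non-saturating alternative just mentioned keeps the discriminator objective as is, but has the generator maximize \(\log D(G(z))\) instead of minimizing \(\log(1 - D(G(z)))\).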
So you can kind of play around with this. And as you know, lots of formulations have played around with this loss right here. And yeah, that's why we have like a billion, billion, billion, billion GAN variations. They introduced the reasoning behind this. So there's an intuition right here. And you can see already in practice, equation one may not provide sufficient gradient for G to learn well. Early in learning, when G is poor, D can reject samples with high confidence because they're clearly different from the training data. In this case, this saturates. Rather than training G to minimize that, we can train G to maximize log D. This objective function results in the same fixed point for the dynamics, but provides much stronger gradients in early, much stronger gradients early in learning. This is in contrast to like other papers that seem to say, oh, we do this. And they at least say it provides the same fixed point. Right. Yeah. So again, they're trying to convince you that this is doing something useful and that this is easier. OK, so this strategy is analogous to other things that train, maintaining samples from a Markov chain from one learning step to the next, in order to avoid burning in a Markov chain in an inner loop of learning. Sorry. OK, this is from another paper. So their point here is that it's analogous to other papers that use these Markov chains where you always do one step in G and one step in D. We alternate between K steps of optimizing D and one step of optimizing G because you have this inner maximization over D and then the outer maximization, the outer minimization over G. So this has already been around, the fact that you kind of have to have these optimizations in lockstep. But the difference here is you don't need any sort of like Markov chain in the inner loop and so on. You simply need back propagation. So here's an illustration of how that might work. So at the beginning here, you have your Z space and this is always sampled uniformly, as you can see right here. This is from a prior distribution and through the mapping. So this here is from Z to X is G. So this is the mapping G. You can see that the uniform distribution is now mapped to something non-uniform, which results in the green thing. So G is the green line, whereas this is data, the black dots are data. And if you have a discriminator, the discriminator is supposed to tell you where there's data and where there's fake data. Now, so green here is fake. Now, this blue line is sort of a half trained discriminator. Now you train D, right? You maximize D, the discriminator, and that gives you this blue line right here. So this is a perfect discriminator for these two data distributions. It tells you it's basically the ratio of green to black at each point. And now you train the generator according to this. And you can see that the gradient of the discriminator is in this direction. OK, so it's like up this hill. And that's why you want to shift your green curve over here according to the gradient of the discriminator. Note that we first trained the discriminator and now in a second step, we optimize the generator. So now we shift this green curve over along the gradient of the blue curve. So it's important the green curve doesn't see the black curve ever. The generator doesn't see the data. The generator simply sees that blue curve and it goes along the gradient of that blue curve of the discriminator. 
OK, and then if you do this many, many steps, actually, there are dots right here. You will end up with a discriminator that has no clue what's where. This is one half probability everywhere because the ratio is the same. And you end up with the probability of data equal to the probability of the output generated samples. And this can happen if the generator simply remembers the training data. But there are a number of things that counter that. For example, the generator is continuous while the training data is, of course, discrete. So there is this in between things right here where there is no training data. In fact, to hit exactly training data is very, very unlikely. But of course, you can still peek at the training data. But also, I think there are two things why the generator doesn't simply remember the training data first, because it doesn't ever see the training data directly. So it can only see it through the discriminator. And second of all, because it is built as these multilayer neural networks, it doesn't have the power to just remember this, because there is kind of this notion of a continuous function. And these neural networks are rather smooth functions often. And therefore, I think that is something that helps the generator avoid remembering the training data. Of course, there is still this problem of mode collapse that was really big in GANs. So even if it doesn't remember the training data, it might focus on the easiest part of the training data and forget all other parts. And that was a direct result, actually, of this objective. So where was it? So this objective directly led to mode collapse in some form, because it penalizes different errors differently. So of course, people have come up with ways to solve that. OK, now here is the algorithm. As you can see, this was already quite the algorithm we use nowadays. So for K steps, this is the inner maximization. And here they say that we use K equals one. So this is pretty much what we use today. The early days of GANs were still like, how many discriminator steps do I need per generator step and so on. Nowadays, everyone is just using one step here, one step there, or even training them jointly works in some cases. So you want to sample a mini batch of noise samples and you will sample a mini batch of M examples from the data generating distribution. So from this data, you want to update the discriminator by ascending its stochastic gradient. And this is simply the gradient of the objective. And then after those K steps, you want to sample another mini batch of noise samples and update the generator by descending its stochastic gradient. And you can see right here already, there is this reduced objective that doesn't include this because it falls away in the gradient. And they say the gradient-based updates can use any standard gradient-based learning rule. We use momentum in our experiments. Very cool. So I believe they also say it somewhere here. It's pretty fun that they say, oh, in our generator, we only input noise at the lowest layer. This is also something that if you think that G here is a multilayer network, so it's kind of a multilayer network that outputs an image. And if you ask yourself, if I have noise, how would I input that into there? It's so clear nowadays that we just put it here. But this was not clear at all. This was kind of an invention of this paper because you could put it pretty much at all layers. You could distribute it and so on. 
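As a concrete rendering of the algorithm just described, here is a minimal PyTorch sketch. The paper predates modern frameworks, so the architectures, layer sizes and learning rates below are placeholder assumptions, not the authors' setup; it keeps the paper's structure of k discriminator ascent steps per generator step (with k = 1, as discussed) and uses the non-saturating -log D(G(z)) generator objective:

```python
import torch
import torch.nn as nn

# Placeholder MLPs for 28x28 images flattened to 784 dims; sizes are assumptions.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_d = torch.optim.SGD(D.parameters(), lr=0.01, momentum=0.9)  # paper used momentum
opt_g = torch.optim.SGD(G.parameters(), lr=0.01, momentum=0.9)
k, m, eps = 1, 64, 1e-8  # k discriminator steps, minibatch size m, log stabilizer

def train_step(real_batch):
    for _ in range(k):  # inner loop: ascend D's objective (descend its negation)
        z = torch.randn(m, 100)            # sample minibatch from the noise prior
        fake = G(z).detach()               # block gradients into G on the D step
        loss_d = -(torch.log(D(real_batch) + eps).mean()
                   + torch.log(1 - D(fake) + eps).mean())
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    z = torch.randn(m, 100)                # generator step: maximize log D(G(z))
    loss_g = -torch.log(D(G(z)) + eps).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

train_step(torch.rand(m, 784))  # stand-in for a minibatch from the data distribution
```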
You could add some right here. It was this paper that already established the fact that we input noise kind of as a vector at the very beginning and then just let the neural network produce the image from that. So yeah, pretty cool. It's pretty sneaky how many things are hidden in these initial papers, how many decisions that are made there then are just taken over. And this one, I guess, turned out to be fairly good. OK, so here they go for some theoretical analysis. And the first they want to convince you that if the generator, if this all works well, if both parties, this generator and the discriminator, optimize their objective to the optimum, then the generator will have captured the data distribution, so the global optimality of this. And they go about convincing you of that. So the first thing that they convince you of is that if you fix the generator, the optimal discriminator is this. And we've already seen this in this drawing right here. So the optimal discriminator is simply the ratio of the data, of the likelihood of data versus the likelihood of the generated data. OK, so you train, you almost train this discriminator in the inner loop. And that's simply the consequence of this, of a pointwise. This is true pointwise, therefore it's true over the entire data distribution. In the next thing, they convince you that the global minimum of the virtual training criterion, this is the value function, this min-max game, is achieved if and only if this holds. At that point, the training criterion achieves the value of negative log 4. And this, again, this was already here, the fact that this has a global minimum, and it is achieved when the generator matches the data distribution, which is pretty cool. So in the proof, it's pretty simple, actually. They first say, look, if this is the case, we just simply plug that in, the discriminator will be confused. So if the generator exactly captures the data, the discriminator will have no clue what's going on, right? Because it can't, because they're equal. So it must basically output the probability of one half everywhere, and then your objective becomes a constant negative log 4. Now, if you then plug that into the other equation, you'll see that the training criterion ends up being negative log 4 plus twice the Jensen-Shannon divergence between the data and the generated distribution. And since this term here is always positive, that means that this thing here can never be less than negative log 4. And therefore, the negative log 4 is the optimum. OK, that's the proof is pretty cool, I have to say, to show that this has the optimum at that place. And the last thing they convince you of is that this algorithm actually converges. And the convergence is simply predicated on the fact that if you look at each of these problems individually, they are convex. So like here is convex in X for every alpha. So each of these are sort of convex problems, and then it will naturally converge to their minimum. However, in practice, adversarial nets represent a limited family of distributions via the function. And we optimize the parameters rather than the distribution itself. Using a multilayer perceptron to define G introduces multiple critical points in parameter space. However, the excellent performance of the multilayer perceptrons in practice suggest that they are a reasonable model to use, despite their lack of theoretical guarantees. 
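Written out, the two theoretical results being walked through are (standard statements, in the paper's notation):

```latex
% Optimal discriminator for a fixed generator G (the ratio described above):
D^{*}_{G}(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_{g}(x)}

% Plugging D^*_G back into the value function gives the generator's criterion,
% which rewrites in terms of the Jensen-Shannon divergence:
C(G) = \max_{D} V(G, D) = -\log 4 + 2 \cdot \mathrm{JSD}\big(p_{\mathrm{data}} \,\|\, p_{g}\big)

% JSD is non-negative and zero iff p_g = p_data, so the global minimum
% C(G) = -log 4 is attained exactly at p_g = p_data, where D^*_G(x) = 1/2.
```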
So they say if we could optimize this probability distribution directly, it is a convex problem and we will always converge. But in practice, of course, we only optimize the parameters of an MLP or a CNN. And that doesn't always converge. But we have reasonable hopes that it will converge. OK, so again, it's very much focused on convincing that this is doing something sensible, which I hope now you are convinced. So there is a global optimum point. It's when the generator captures the data distribution perfectly. This can be achieved and will be achieved if you can optimize these probability distributions with a reasonable degree of freedom. And the neural networks provide that reasonable degree of freedom and give us good hope that in practice it will work. So they apply this to data sets, namely MNIST, the Toronto Face Database and CIFAR-10. The generator nets used a mixture of rectifier linear activations and sigmoid activations, while the discriminator net used maxout activations. That was still a thing. Dropout was applied in training at the discriminator net. While our theoretical framework permits the use of dropout and other noise at intermediate layers of the generator, we used noise as the input to only the bottom most layer of the generator network. Again, this wasn't kind of clear at the beginning. And also the fact that to leave out dropout and so on in the generator was, I guess they found that empirically. And then there was of course no way to evaluate these things. Like how do we evaluate generative models? Nowadays we have these inception distances and so on. But then we estimate the probability of the test set under p g, the generated data distribution, by fitting a Gaussian Parzen window to the samples generated with G and reporting the log likelihood under this distribution. The theta parameter, yada yada yada. Results are reported. This method of estimating the likelihood has somewhat high variance and does not perform well in high dimensional spaces, but it is the best method available to our knowledge. Advances in generative models that can sample but not estimate likelihood directly motivate further research into how to evaluate such models. They were absolutely right in this. And there was a lot of research into how to evaluate these models. However, it is my opinion that we still have very, very limited methods of evaluating models like this. Like we have better methods, but it's yeah, it's not really satisfactory how it is right now. So you see that these models, these adversarial nets, by the way, they're always called adversarial nets right here, whereas I think most people would call them adversarial networks. But it's just interesting to see the nets also in the title. Right. It says I think it says nets, does it? I think it does. We'll look at it after. So they outperform these other models, especially these belief networks, which were kind of popular at the time. And you can see the samples right here were in no way comparable to examples that you get from the modern GANs. But this was already very, very, very good, especially the MNIST. And then here you could actually recognize. So the ones with the yellow are always from the training data set. They're like the nearest neighbors of the things on the left. So they want to show that it doesn't simply remember the training data, though I'm not so sure. Like this seems like it has somehow remembered the training data a little bit. Also, this one right here. 
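For concreteness, here is a minimal NumPy sketch of the Gaussian Parzen window evaluation just described: fit a kernel density estimate on generated samples and report the mean test log-likelihood. This is not the authors' evaluation code, and in the paper's setup the bandwidth sigma is chosen by cross-validation on a validation set rather than fixed as here:

```python
import numpy as np

def parzen_log_likelihood(test, generated, sigma):
    """Mean log-density of `test` points under a Gaussian Parzen window
    (isotropic kernels of width sigma) centered on `generated` samples."""
    # Pairwise squared distances via |a - b|^2 = |a|^2 + |b|^2 - 2 a.b.
    d2 = ((test ** 2).sum(1)[:, None]
          + (generated ** 2).sum(1)[None, :]
          - 2.0 * test @ generated.T)
    n, dim = generated.shape
    a = -d2 / (2.0 * sigma ** 2)
    # Numerically stable log-sum-exp over the generated samples.
    mx = a.max(axis=1, keepdims=True)
    log_p = (mx.squeeze(1) + np.log(np.exp(a - mx).sum(axis=1))
             - np.log(n) - 0.5 * dim * np.log(2.0 * np.pi * sigma ** 2))
    return log_p.mean()

# Toy usage with random stand-ins for test data and generator samples:
rng = np.random.default_rng(0)
print(parzen_log_likelihood(rng.normal(size=(50, 784)),
                            rng.normal(size=(500, 784)), sigma=0.2))
```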
And this was also very foresighted: these models A to C were fully connected networks, which might be one of the reasons why they only worked moderately well. But the last one was a convolutional discriminator and a deconvolutional generator, so already using the kind of deconvolutions that are used everywhere today. They are used in GANs, in VAEs, whenever you want to upsample anything; if you want to do pixel-wise classification, you use deconvolutions. So again, this paper sort of introduced a lot of things that we still use in GANs today. Now, I'm sure deconvolutions weren't invented here, but we still use them. So legit, they were the first GAN paper to use deconvolutions. Haha. Yeah. They also say: we make no claim that these samples are better than samples generated by existing methods; we believe that these samples are at least competitive with the better generative models in the literature and highlight the potential of the adversarial framework. Today, this paper would be so rejected. Like, wait, you're not better? Get out of here. You can't claim that anymore; that doesn't work anymore. I'm sorry, yours always has to be better than everything else nowadays, otherwise it's a weak reject: the experimental evidence doesn't convince me. You can't simply say something's cool. Also already introduced in this paper: digits obtained by linearly interpolating between coordinates in z-space of the full model, like this thing here. Every single GAN paper in the GAN hype had interpolations like this, and it all came from here. Every GAN paper then had rows of these interpolations; I should know, I've written a paper on it, and it was introduced right here. Who knows if anyone would have done this if they hadn't. Yeah, I guess it's kind of an obvious thing, but still, very, very cool to see that this was already done. And here GANs are compared to other methods like deep directed graphical models and generative autoencoders, compared along very many dimensions. So this is actually a good reference if you want to learn about these different kinds of models. And they lay out the advantages and disadvantages. The disadvantages mainly come from training these things, because you have to train them in lockstep. But another disadvantage is that you don't have an explicit representation: there is no explicit representation of the probability distribution, you never build the data distribution, you can only sample from it. However, the advantages are that Markov chains are never needed, only backprop is used to obtain gradients, no inference is needed during learning, and a wide variety of functions can be incorporated into the model. You know, I hadn't read this paper in a while.
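That z-space interpolation is essentially a one-liner. Here is a minimal sketch, assuming some trained generator callable g that maps a latent vector to an image; the names trained_generator and interpolate_latents are mine, not from the paper:

```python
import numpy as np

def interpolate_latents(g, z_start, z_end, steps=8):
    """Decode a straight line in z-space with the generator g."""
    alphas = np.linspace(0.0, 1.0, steps)
    zs = [(1.0 - a) * z_start + a * z_end for a in alphas]
    return [g(z) for z in zs]

# Hypothetical usage: sample two noise vectors and render the row of
# in-between images with some trained generator.
# z0, z1 = np.random.randn(100), np.random.randn(100)
# row = interpolate_latents(trained_generator, z0, z1)
```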
And in the process, sort of reversing whatever these methods were originally meant to get rid of. Now, I'm not saying anything against these methods, but it's just kind of funny. Yeah. So they had a lot of conclusions and future work. They already say, you know, conditional GANs are very easy to do, straightforward. Learned approximate inference can be performed by training an auxiliary network to predict z given x. And this, of course, as you know, has come to fruition very often. Early papers already introduced that: so if you have the G network producing an x, and then the D network discriminating, you would also have an encoder right here to produce back the z noise, to give you the latent encoding, sort of like a variational autoencoder, but not really; it's more like a reverse generator. Models nowadays like BigBiGAN and things like this employ this exact thing that was sort of predicted right here. Of course, there are much earlier models also using this; as long as I can remember, people have attempted to bring encoders into GANs. They have a bunch of other things like semi-supervised learning: you can use this to get more data for a classifier, which is also done. So a lot of things here were already foreseen in this paper, which is pretty cool. And the coolest thing, look at that, savage: Goodfellow, not even using the full eight pages, just dropping this on the world. Absolutely cool. Mad respect. So, yeah, this was kind of my take on Generative Adversarial Nets. And yeah, please tell me if you like historic paper overviews. It's more kind of a rant than it really is a paper explanation, but I do enjoy going through these papers and kind of looking at them in hindsight. All right. That was it from me. I wish you a nice day. Bye bye.
Hdo81GtLC_4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gnn", "transformer", "graph", "biology", "neurons", "axon", "dendrites", "plausible", "biologically plausible", "backprop", "backpropagation", "dfa", "feedback alignment", "random projections" ]
Backpropagation is one of the central components of modern deep learning. However, it's not biologically plausible, which limits the applicability of deep learning to understand how the human brain works. Direct Feedback Alignment is a biologically plausible alternative and this paper shows that, contrary to previous research, it can be successfully applied to modern deep architectures and solve challenging tasks. OUTLINE: 0:00 - Intro & Overview 1:40 - The Problem with Backpropagation 10:25 - Direct Feedback Alignment 21:00 - My Intuition why DFA works 31:20 - Experiments Paper: https://arxiv.org/abs/2006.12878 Code: https://github.com/lightonai/dfa-scales-to-modern-deep-learning Referenced Paper by Arild Nøkland: https://arxiv.org/abs/1609.01596 Abstract: Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural language processing. In contrast with previous studies limited to computer vision tasks, our findings show that it successfully trains a large range of state-of-the-art deep learning architectures, with performance close to fine-tuned backpropagation. At variance with common beliefs, our work supports that challenging tasks can be tackled in the absence of weight transport. Authors: Julien Launay, Iacopo Poli, François Boniface, Florent Krzakala Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, today we'll look at Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures by Julien Launay, Iacopo Poli, François Boniface and Florent Krzakala. So this paper, on a high level, replaces the back propagation algorithm in deep learning architectures with an algorithm called direct feedback alignment, which is more biologically plausible. The algorithm has been around for a while, but it hadn't yet been shown to be applicable to really modern, big deep learning architectures and then perform on par with backprop on modern deep learning tasks. This paper, as I understand it, is the first one to demonstrate that it can do that. So this is very much an engineering paper, an applied paper, and we're going to mostly go into direct feedback alignment as such. I don't think we're going to go too much into what the actual empirical findings are, because even though they're impressive and it's a good piece of engineering, I think they can be summarized pretty much by: it works, not yet on par with back propagation, but it's a promising direction. Alright, as always, if you like content like this, consider sharing it out and leaving a like, and tell me in the comments what you like. Of course subscribe if you aren't yet; that is, you know, essential, otherwise how are you gonna hear from me in the future? Okay, let's dive in. They say: despite being the workhorse of deep learning, the back propagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective and study the applicability of direct feedback alignment to neural view synthesis, recommender systems, geometric learning and natural language processing. In contrast with previous studies limited to computer vision tasks, our findings show that it successfully trains a large range of state-of-the-art deep learning architectures, with performance close to fine-tuned back propagation. At variance with common beliefs, our work supports that challenging tasks can be tackled in the absence of weight transport. So there's a lot to unpack in this particular abstract right here. First of all, what's the problem with back propagation? They have two quarrels with it right here. First of all, it's preventing efficient parallelization of the training process. So what does that mean? In back propagation, and I'm pretty sure you all know basic back propagation, you have an input to a neural network, and the neural network has a bunch of layers, so the input will travel layer by layer, and at the end you'll get some output, y hat, let's call it: what the neural network, let's say it's a classifier, thinks the class of this particular x should be. Now in the data set you have your true label, and then you compare that to your output label and you can compute a loss function. Now the whole question of the back propagation algorithm is: how do I need to change my layers of the neural network in order to make the loss as small as possible? And for that you can use back propagation; that means you can take that loss and back propagate it down the layers in order to update each layer individually.
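Just to fix notation for what follows (this is my notation, not the paper's): for a stack of layers with pre-activations a_l = W_l h_{l-1} and activations h_l = f(a_l), the standard back propagation recursion is:

```latex
% Error at the output:
e = \nabla_{\hat{y}} \, \mathcal{L}(\hat{y}, y)

% Relayed backwards through the layers, top to bottom:
\delta_l = \big( W_{l+1}^{\top} \, \delta_{l+1} \big) \odot f'(a_l)

% Gradient for the weights of layer l:
\nabla_{W_l} \mathcal{L} = \delta_l \, h_{l-1}^{\top}
```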
So the first problem they have with the backpropagation algorithm, and it's kind of a secondary problem, is that it is sequential. In order to update a given layer, you need to have already backpropagated to the layer above it, and then you need to backpropagate further down to this layer and the next, so it's a sequential task: you backpropagate down the layers one after another. It would be more efficient if we could somehow update all the layers in parallel, but this is the minor quarrel. The bigger one is that backpropagation isn't biologically plausible. We know that in real neurons you have your dendrites, your inputs, and your axon, and the signal only travels in one direction. We don't know of a feedback mechanism in real neurons in the brain that would allow information to flow in the opposite direction in the way backprop needs. There is information flowing in the opposite direction, but I think it's too slow, and there is nothing in the brain that we know of that could take the role of the backpropagation algorithm. Specifically, if each layer is characterized by a weight matrix, what backpropagation does is use the transpose of that weight matrix to backpropagate. So the arrows to the front use the weight matrices, and the arrows to the back use the transposes of the weight matrices. The transposes relay the information of what needs to change, that would be the loss, what needs to change to make the loss as small as possible, down to the lower layers, and we don't know of any biological analog to this mechanism. The transpose acts as a sort of layer inverse, and this is what's called weight transport: you use something like the transpose of the weights to bring information from the next layer back to the current layer. In biology we don't have this, and in direct feedback alignment we don't have it either.

So direct feedback alignment, the next thing in the abstract, is the algorithm they're going to apply here. We'll go into what it is, but it is more biologically plausible in that it takes the loss signal and distributes it globally, directly to all of the layers, and it does so without requiring the transposes and also without requiring the sequential steps. So both of the problems they raise would be solved by this.

Then they say: in contrast with previous studies limited to computer vision tasks. So what people have tried before is to apply this DFA algorithm to computer vision tasks. But in computer vision most architectures are CNNs, and as far as I understand it, DFA can right now only be applied to linear layers, something of the form Wx plus b followed by a nonlinearity. Even though you can write a CNN as a linear layer with constraints, as I read this paper, you can only apply DFA to fully connected layers or things that look like fully connected layers. So what they're going to do in their experiments is take these big architectures, like transformers, and replace the training of the parts that act as fully connected layers with DFA updates.
Well, they're not going to replace the layers themselves, they're going to replace the backpropagation part of the training with DFA updates. It remains to say that they still use backpropagation in some places where they can't replace the updates with DFA, that is, where the layer isn't a fully connected layer, or I guess where it's too big; they somehow have to make it work, so often they will not update, for example, the embedding layers and things like this.

Okay, so they move away from computer vision tasks, because if you go to computer vision, CNNs rule that world, and if you can only handle feed-forward, fully connected layers, you're going to lose already. So it's kind of an unfair fight in that sense. But even in the absence of that, they say: we apply this to neural view synthesis, recommender systems, geometric learning and natural language processing. These are quite diverse tasks, and the architectures they apply it to are going to be quite diverse as well. For example, in geometric learning I believe they use graph neural networks, and in graph neural networks there usually are fully connected layers that connect the vertices and the edges together and compute properties of them. That's a pretty good fit for DFA, because what you're looking for is state-of-the-art tasks and architectures that still employ fully connected layers, because that's where this algorithm can shine. And they're basically going to show that its performance is close to fine-tuned backpropagation.

All right, so what is DFA, this direct feedback alignment? For that I actually want to jump papers and go to another paper that describes DFA in a more graphic fashion. This paper, Direct Feedback Alignment Provides Learning in Deep Neural Networks by Arild Nøkland, shows some theoretical properties of DFA. I don't want to go into the theory or the math right here; I mainly like this paper for one particular graphic. In the backpropagation algorithm, as you can see, you forward propagate using the weight matrices and then backpropagate using the transposes of the weight matrices. One step away from that is a scheme called feedback alignment, which is not the same thing as direct feedback alignment. In feedback alignment you simply say: well, I won't backprop using these transposes, because I can't, that's not biologically possible. What I'll do instead is use other matrices, and these are going to be random matrices. By a random matrix we really mean a matrix of the correct shape, the same shape as this W transpose, where each entry is sampled from a Gaussian. You fix this matrix once at the beginning of training, by sampling it, and then you leave it there; that's going to be the matrix you use for relaying the signal back through the layers.
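In symbols, the difference between the two schemes is just which matrix relays the error; this is my transcription of the update rules from the Nøkland paper, where f' is the derivative of the nonlinearity and \odot is element-wise multiplication:

% backpropagation: the backward signal at layer i uses the transpose of the real weights
\delta a_i = \left( W_{i+1}^{\top} \, \delta a_{i+1} \right) \odot f'(a_i)

% feedback alignment: a fixed random matrix stands in for the transpose, still layer by layer
\delta a_i = \left( B_{i+1} \, \delta a_{i+1} \right) \odot f'(a_i)

The exact indexing of the B matrices here is my notational choice; the point is only that they are random and fixed.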
Now you might protest and say: wait, that's not going to work, because you need to know the weights to know what to change in the lower layers; you need to somehow have that information in there. How are you going to know what to change? That's a valid question, and I'll give my opinion of why this still works in a second. First, though: that was feedback alignment, which simply uses random matrices to backpropagate, so to say. Direct feedback alignment goes a step further, because in feedback alignment you still relay the signal in a sequential manner. Direct feedback alignment takes whatever the change to the top layer should be, how do I need to change the top layer, and propagates that in a global fashion to all the layers directly, using random matrices. There is also IFA in that figure, which we're not going to look at today because it's not relevant for this paper, but I hope you can see the overview.

So let's scroll back to the paper. Here is the mathematical formulation of all of this, and it pays to look at it to understand what's going on. They characterize a neural network as having N layers. Each layer takes whatever is the output of the last layer and multiplies it by a weight matrix; that's going to be your quantity a, and you put a through a nonlinearity to obtain the next layer's input. So h is the output of one layer and the input of the next. At the very end, your last output is your estimation of the labels, so your last nonlinearity is probably going to be something like a softmax.

How can we hold this as a concept in our heads? You have the neural network, and you forward propagate, always using the weight matrix W and then the nonlinearity of that particular layer, and in the last layer you get your y hat as we saw before. Now the question is: how can we adjust the weights W to push y hat more into the direction of y? Here it's useful to think of the last layer as a vector output. Usually we think of the loss function, but all of these algorithms start with the derivative of the loss function with respect to the last layer's output a_y, where a_y sits right before the final nonlinearity: if you remember, the output was f of a_y, and that f is, say, the softmax. So if this is a classifier, a_y holds the logits, the output of your last layer. Instead of thinking of y and y hat as labels, it pays to think of the output as a vector and the desired output as another vector; the desired output is of course going to be a one-hot vector in the case of classification. If you think of it like this, then you'll recognize: if this is my estimated output and I want to achieve that other output, I need to change my output in that direction, to bring it more into the same direction as the output I want. The entire question now becomes: how do I tell the lower layers about this change? How do I get the lower layers to provide me with the green signal instead of the red signal? I need to propagate this difference down to the lower layers.
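Before we look at how each algorithm answers that question, let me write the setup down in symbols; the notation is mine and may differ in detail from the paper's:

% forward pass through layers i = 1, ..., N, with h_0 = x:
a_i = W_i h_{i-1} + b_i, \qquad h_i = f(a_i)

% final prediction from the logits a_y = a_N:
\hat{y} = \mathrm{softmax}(a_y)

% the error vector everything starts from; for softmax plus cross-entropy it is simply
e = \frac{\partial \mathcal{L}}{\partial a_y} = \hat{y} - y

% and the DFA shortcut we just described sends e directly to every layer:
\delta a_i = \left( B_i \, e \right) \odot f'(a_i)

So e is exactly that difference vector between what the network outputs and what we want it to output.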
In the backpropagation algorithm, you can simply ask the system. We've built entire frameworks, TensorFlow, PyTorch, JAX, whatever, on being able to backpropagate, because with backpropagation we can simply ask the system this question: how should I change the weights of my layer to make the loss smaller? You just ask for the gradient of the loss with respect to the weights, and the negative sign is there because you want to make the loss smaller. That is a straightforward calculation. How does the calculation go? It involves the output of the layer below, and it involves whatever comes back from the backpropagation. In backprop, if you want to update some weights, you always need two quantities: whatever came from the bottom during the forward pass, and whatever comes from the top during the backward pass. The quantity from the top is basically how you need to change the next layer in order to make the loss happier, and by using the transpose you pull it back to this layer: how do I need to change this layer? And here you see that dreaded transpose of the weight matrix; this is what we can't do in biology, but it's what backpropagation does. So the quantity from the top tells you how the output of this particular layer needs to change in order to make the loss happier, and then you multiply it by the signal that came from the bottom, and that gives you how your weights need to change. The green part is how the output of the layer needs to change, and multiplying by the blue part, what came in from below, gives how the weights need to change. Of course the nonlinearity is in there as well, but let's leave it aside because it's not important for this particular point.

So that's what backprop does. What does DFA do? DFA again asks: how should I change the weights of layer i? And DFA says: first you compute the derivative of the loss with respect to a_y, the output of the last layer, in our case the logits. Note that this is still a gradient, so it's not like we can't differentiate anymore; we simply don't backpropagate it from layer to layer. So this is the quantity saying how the last layer's output needs to change, and we take it and simply feed it through the random matrix, and then multiply by the signal from below. So if I get my colors correct: again, you have your neural network and you want to update some layer's weights. The green signal comes from the top, but it doesn't come from the next layer; it comes all the way from the end. And the blue signal comes from down below. This is weird, right? Especially because the top signal is just modulated by a random matrix. So how can this possibly work? That's the question.
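To make the whole procedure concrete, here is one DFA training step on a tiny MLP as a minimal numpy sketch; all the sizes and names are my own illustration, not the paper's code. Notice that no W transpose appears anywhere, and that the two hidden-layer updates don't depend on each other:

import numpy as np

rng = np.random.default_rng(0)

# a tiny 784 -> 256 -> 128 -> 10 MLP, weights as (out, in) matrices
W1 = rng.normal(0.0, 0.05, (256, 784))
W2 = rng.normal(0.0, 0.05, (128, 256))
W3 = rng.normal(0.0, 0.05, (10, 128))

# fixed random feedback matrices: sampled once, never trained; they carry
# the 10-dimensional output error straight back to each hidden layer
B1 = rng.normal(0.0, 0.05, (256, 10))
B2 = rng.normal(0.0, 0.05, (128, 10))

lr = 0.05
x = rng.normal(0.0, 1.0, (784, 32))          # dummy batch: columns are examples
y = np.eye(10)[:, rng.integers(0, 10, 32)]   # dummy one-hot labels, shape (10, 32)

# forward pass
a1 = W1 @ x;  h1 = np.maximum(a1, 0.0)       # relu
a2 = W2 @ h1; h2 = np.maximum(a2, 0.0)
a3 = W3 @ h2
p = np.exp(a3 - a3.max(0)); y_hat = p / p.sum(0)   # softmax over classes

# the one global error signal: dL/da3 for softmax plus cross-entropy
e = (y_hat - y) / x.shape[1]

# DFA backward: every hidden layer receives the SAME e through its own
# fixed random matrix, with no layer-to-layer chain
da2 = (B2 @ e) * (a2 > 0)                    # (a > 0) is relu'(a)
da1 = (B1 @ e) * (a1 > 0)

# weight updates: (signal from the top) times (signal from the bottom)
W3 -= lr * e   @ h2.T
W2 -= lr * da2 @ h1.T
W1 -= lr * da1 @ x.T

Note how the last layer's update uses e directly, exactly as in backprop; only the hidden layers need the fixed random matrices B1 and B2.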
I had some thoughts about this, but I haven't read too much about it, so I might be completely wrong, or this might be completely known in the community, I have no idea; I'll just give my opinion right here. First of all, you have to compare this to backprop. What's actually changing between the two is the green part, and we agree that this is the thing that's changing. And what does the green part mean? It tells you how the output of this layer should change: by adjusting the weights in the direction of the right-hand side of the update equation, you change the output of the layer in the direction of that green part. In backpropagation, the green part tells you how the output of this layer should change in order to make the loss as happy as possible. We don't have that anymore here: we simply change the output of the layer in the direction of a random transformation of the change we would like to see in the final output. Okay, so that's the first thing: we understand what's different, and we understand that the green quantity means how the output of our layer should change.

Second thing: look at what the last layer of a neural network, that logits layer, actually does. Let's say we have a three-dimensional last layer, which means you have three classes; each axis represents one class, because you encode the classes as one-hot vectors. This axis might be the class label c equals zero, this one c equals one, this one c equals two. If you forward propagate something through your neural network and it comes out pointing in some direction, what do you classify it as? You classify it as whatever class has the biggest inner product with that vector, say the c equals zero class. And what is the update quantity going to be, how should you change this output to make the loss happier? That depends on your true label, but let's say the true label actually is zero: then you want to pull the output toward the c equals zero axis, such that it becomes more aligned with it.

So what happens if you pull that update back through a random matrix? The thing you have to know about random matrices like this is that they approximately preserve distances and angles. Technically, pulling back through the matrix induces another coordinate system in the other space; that space can be higher or lower dimensional, I frankly don't care. And what do you pull through that B_i matrix? You fix it, and this is really important, at the beginning of training, it's always the same, and it approximately preserves distances and angles. You pull back that green arrow. So the output vector that came from the lower layers, the one you forward propagated, maybe pointed in some direction in this layer; we don't know. If we pull back the green update, it might point somewhere else. Since it's a random matrix, we don't know where; we only know that the angles are approximately preserved, and that the lengths are approximately preserved relative to each other. On its own, that doesn't tell you too much. So why is this useful? To see why, you need to consider other inputs: we don't just input this one vector, we input a whole batch of data.
Now let's consider two other vectors. First, a blue vector that also has a label of zero: its update is going to pull it toward the c equals zero axis as well. And also a red vector of class one: its update is going to pull it toward the c equals one axis, away from the others. Now consider the red and the blue vector in the lower space; draw them wherever, say the blue one here and the red one there. What I do know is that the angles and distances are approximately preserved. So what are the pulled-back green updates going to look like? The update for the blue vector is going to point roughly alongside the pulled-back update of our first vector, and the update for the red vector is going to point away from those. So what is happening in that lower space? The two vectors that are supposed to be in the same class are being pulled together. The direction they're pulled in is determined by the random matrix, but we know they're pulled together, because they are pulled together in the final space; and they're going to be pulled apart from the red vector, because the red vector is pulled toward a different class in the final space, and the distances and angles are approximately preserved.

So what this induces, in my opinion, is a kind of coordinate system: because you make the last layer axis-aligned to classify, the pulled-back updates cluster things that belong to the same class in the earlier representation spaces. And if you do this layer by layer, then doing it in layer k makes the job easier for any layer after it, because the things in the same class are already pretty close together. You map them through a weight matrix and a nonlinearity, and they might intertwine a bit again, but they're more together than they would otherwise be, so the next layer can cluster things even better. What you end up with in the next-to-last layer is basically a clustering where everything that's supposed to be in the same class is close together and far apart from the other classes. And since the last layer is the classification layer, it then has a really easy job separating those classes and performing good classification. So that's what I think is happening in this algorithm: even though the layers don't know how to change to help the last layer, the fact that backpropagating these updates through fixed random matrices induces this clustering makes the last layer's job really easy, and that's all the classifier needs. Again, this is just my opinion, not anything of proven value, just my hypothesis of why something like this could work.
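Here is a quick numpy sanity check of the geometric claim this hypothesis leans on, again entirely my own illustration: a fixed random Gaussian matrix approximately preserves the angles between output-space error vectors, so two same-class updates stay aligned, and a different-class update stays separated, after being pulled back:

import numpy as np

rng = np.random.default_rng(1)

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# output-space error vectors over 10 classes: two updates pulling
# toward class 0, one pulling toward class 1
e_a = np.eye(10)[0] + 0.1 * rng.normal(size=10)
e_b = np.eye(10)[0] + 0.1 * rng.normal(size=10)
e_c = np.eye(10)[1] + 0.1 * rng.normal(size=10)

# a fixed random feedback matrix into a 256-dimensional hidden space
B = rng.normal(0.0, 1.0 / np.sqrt(256), (256, 10))

print(cos(e_a, e_b), cos(B @ e_a, B @ e_b))   # both close to 1: still aligned
print(cos(e_a, e_c), cos(B @ e_a, B @ e_c))   # both close to 0: still separated

The projected cosines won't match the originals exactly, but they concentrate around them as the hidden dimension grows, which is the Johnson-Lindenstrauss flavored fact the clustering story rests on.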
I want to show you that the Nøkland paper I pointed to before actually runs experiments on this. They visualize features, the top row obtained with backpropagation, the bottom row with DFA; from left to right it's the input images, then the first, second, and third hidden layer. I'm not sure exactly where in the network these are taken, but you can see that this clustering clearly emerges, and that it gets better and better from layer to layer, both in backprop and in DFA. So maybe part of the reason backprop works well is simply that it also induces clusterings like this. Maybe backprop does something on top of that, after all it has all the properties of this and more, but this is congruent with my hypothesis of what's happening.

So what do they do with DFA in the paper we're looking at? They apply it to these architectures. Let's look at one of them, neural view synthesis with neural radiance fields. A neural radiance field is a type of model for the task where you get a bunch of views around an object in 3D and you're supposed to render a new view. You can see that the DFA-updated neural radiance field model is pretty close to the backpropagation-updated one; it's a bit more blurry, but it works. And I think this paper is really trying to show that: look, this works. It doesn't work extremely well, but it works, and at a level that hasn't been seen before. If you consider these results, where higher is better on the synthetic dataset, you see that the same model trained with backprop performs better than with DFA, but the DFA version of that model performs better than the other baseline models, which were themselves trained with backpropagation. So it's definitely in the direction of being competitive, and that's the same pattern they show across all of these experiments.

They also apply this to graph networks and to transformers, and as I said, it's not there yet. In the transformers they have two settings: macro, where they use DFA between the individual blocks, and micro, where they use it for each layer. I already told you that within the attention mechanism they still have to use backprop, but this is still a much more plausible algorithm than backpropagation through the entire network. They show that if they appropriately tweak the hyperparameters, they get into the direction of something performant, at least with the macro strategy. It's nowhere close to what the backpropagation algorithm achieves, but it's an indication that if the community worked on this as much as it has worked on backpropagation, we could probably push it to a place where it performs on par with backprop, or very close to it.

So I do invite you to go and look at the experiments. They have a lot of details on how they did it, exactly how you have to change the architectures to make DFA work, the hyperparameters and so on, which is really cool, and they have some more outputs of the view synthesis. If you're interested in that, go look; I don't want to disrespect the work, I just don't think there's much point in me going over every result, because the results are always sort of the same: DFA is not there yet, but it's a good direction. Yeah, I hope this was informative. Let me know if you disagree with my assessment of DFA, I could be completely wrong,
or this could all be well known to people already. So yeah, see you next time.
what's happening so what" }, { "start": 1871.04, "end": 1877.44, "text": " do they do with it they take this algorithm and they apply it to these" }, { "start": 1877.44, "end": 1883.08, "text": " architectures now let's for example look at one of them this neural view" }, { "start": 1883.08, "end": 1889.3999999999999, "text": " synthesis with neural radiance fields so neural radiance fields is a type of" }, { "start": 1889.3999999999999, "end": 1897.24, "text": " model to do this task of where you get a bunch of views of an object in 3d or you" }, { "start": 1897.24, "end": 1902.2, "text": " know a bunch of views around an object and you're supposed to render a new view" }, { "start": 1902.2, "end": 1910.92, "text": " and you can see that the DFA parameter or the DFA updated nerve neural radiance" }, { "start": 1910.92, "end": 1917.68, "text": " field model is pretty close to the back propagation updated one you can see it's" }, { "start": 1917.68, "end": 1922.92, "text": " a bit more blurry but it it works right and I think the this paper is really" }, { "start": 1922.92, "end": 1929.1200000000001, "text": " trying to show that look this works it doesn't work you know extremely well but" }, { "start": 1929.1200000000001, "end": 1935.68, "text": " it works and it works on a level that hasn't been seen before so here if you" }, { "start": 1935.68, "end": 1940.3600000000001, "text": " consider these results higher is better on the synthetic data set here even you" }, { "start": 1940.3600000000001, "end": 1944.76, "text": " see that if you have the same model with backprop it performs better than with" }, { "start": 1944.76, "end": 1952.5600000000002, "text": " DFA but the DFA for that model performs better than these other baseline models" }, { "start": 1952.56, "end": 1958.36, "text": " that have themselves been trained with back propagation so it's definitely in" }, { "start": 1958.36, "end": 1965.76, "text": " the direction of being competitive and that's the same thing they show with all" }, { "start": 1965.76, "end": 1970.12, "text": " of these experiments so they apply this to graph networks apply this to" }, { "start": 1970.12, "end": 1975.32, "text": " transformers and as I said it's it's not there yet you see that so in the" }, { "start": 1975.32, "end": 1979.6799999999998, "text": " transformers they have these settings where macro they just use it DFA for the" }, { "start": 1979.68, "end": 1984.88, "text": " individual blocks and micro they use it for each layer and already told you that" }, { "start": 1984.88, "end": 1989.48, "text": " you still in the attention mechanism you still have to use backprop within the" }, { "start": 1989.48, "end": 1996.8, "text": " attention mechanism but it is much more of a plausible algorithm than the back" }, { "start": 1996.8, "end": 2001.76, "text": " propagation through the entire network and they show that if they appropriately" }, { "start": 2001.76, "end": 2007.44, "text": " tweak the hyper parameters they do get into the direction of something that's" }, { "start": 2007.44, "end": 2012.64, "text": " performant at least with this macro strategy now this is nowhere close to" }, { "start": 2012.64, "end": 2018.52, "text": " this is nowhere close to what the to what the back propagation algorithm" }, { "start": 2018.52, "end": 2024.96, "text": " achieves but it's sort of it's sort of an indication that if the community could" }, { "start": 2024.96, "end": 2031.52, "text": " work as much on this as it has worked on back propagation then 
probably will make" }, { "start": 2031.52, "end": 2037.1200000000001, "text": " a lot of like we could we could push this to a place where it does perform on" }, { "start": 2037.12, "end": 2043.32, "text": " par with backprop or very close to it so I do invite you to go and look at the" }, { "start": 2043.32, "end": 2050.64, "text": " experiments they have a lot of lot of details on how they did it and exactly" }, { "start": 2050.64, "end": 2055.24, "text": " how you have to change the architectures to make DFA work and the hyper parameters" }, { "start": 2055.24, "end": 2060.3599999999997, "text": " and so on so that's really cool and they have some more outputs right here of the" }, { "start": 2060.3599999999997, "end": 2066.8399999999997, "text": " view synthesis and so on yeah if you are interested in that thing I again I don't" }, { "start": 2066.84, "end": 2071, "text": " want to disrespect it it's just I don't think there is much point in me going" }, { "start": 2071, "end": 2076.6000000000004, "text": " over it it's the results are always sort of the same that DFA it it's not there" }, { "start": 2076.6000000000004, "end": 2083.8, "text": " yet but it's a good direction yeah I hope this was informative let me know if" }, { "start": 2083.8, "end": 2089.8, "text": " you disagree about my assessment of DFA I could be completely wrong or you know" }, { "start": 2089.8, "end": 2096.76, "text": " I yeah or or this could be like well known to people already so yeah see you" }, { "start": 2096.76, "end": 2123.0400000000004, "text": " next time" } ]
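As an aside, the claim above, that a fixed random feedback matrix approximately preserves distances and angles, is easy to check numerically. A minimal sketch in plain numpy, not tied to any DFA implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_hidden = 3, 256                       # e.g. logits dim -> a hidden layer's dim
B = rng.standard_normal((d_hidden, d_out))     # fixed random feedback matrix

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

e1 = np.array([1.0, -0.5, -0.5])   # two output-space error signals pointing similarly...
e2 = np.array([0.9, -0.4, -0.5])
e3 = np.array([-1.0, 1.0, 0.0])    # ...and one pointing elsewhere

# pulled back through B, the relative angles are roughly preserved
print(cos(e1, e2), cos(B @ e1, B @ e2))   # both near +1
print(cos(e1, e3), cos(B @ e1, B @ e3))   # both strongly negative
```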
56GW1IlWgMg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Learning model-based planning from scratch
[ "Science & Technology" ]
[ "machine learning", "artificial intelligence", "ai", "deep learning", "reinforcement learning", "deep mind", "research", "academia", "paper", "review", "imagination", "planning", "agents" ]
https://arxiv.org/abs/1707.06170 Abstract: Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a "plan context" which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex "imagination tree" by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them. Authors: Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sebastien Racanière, David Reichert, Théophane Weber, Daan Wierstra, Peter Battaglia
Hi there, today we're taking a look at Learning Model-Based Planning from Scratch by DeepMind. So as a recap, what is model-based planning? Basically a model, also called an environment model, is just kind of a black box, you can imagine, where you have a state of your current environment, you put it in there, and you have an action that you want to take, you put it in there as well. And the environment model tells you what the new state, S' here, and possibly also the new reward for taking that action is going to be. Of course it's always good to have such an environment model, because you can use it to plan ahead, but the authors here ask the question of how you should plan, and they propose a new algorithm to learn this planning. So far, people have mostly used heuristics to plan, things like A* search, where you have a maze and you want to go from one point to another, and you have a heuristic, say the distance between the two points, but there are walls in between, so you try to go there, hit a wall, and explore around it. These are the kinds of techniques that have existed so far. We've also seen things like Monte Carlo tree search for AlphaGo and other methods like this that are not really learned. So this paper proposes mechanisms to learn how to plan using such a model. Basically they devise an algorithm, or a framework you could say, which you can see in this schematic. The schematic shows a component called a manager, and this manager can decide to imagine or to act. If it acts, then it simply takes the current state and all the things that happened so far and decides on an action to do in the world, and then it trains on the action like classic reinforcement learning. But if it decides to imagine, it can use its model of the world, its imagination model, to perform an action and see what would happen if it did that action. It can then also append that to the memory and use it to learn further: even though it didn't do the action, it can imagine what happens. So how can it imagine? The authors propose several specific methods of imagining, shown in this graph, where every row is a method of imagining. The first method, one-step imagining, simply means you have the current state of the world, which is the grey blob here, and you always go from the current state of the world and imagine one step ahead. So basically you select the state to imagine from, and you imagine one step. And if you decide to not take an action after that, but to imagine again, because maybe you're not sure yet what you want to do, you would again go from this initial state (the horizontal direction here is time, internal time basically), imagine another action based on it, and so on, until you're satisfied that you've imagined enough and can actually take a real-world step. In contrast, the n-step strategy (these are hard-coded strategies, as you can see: the learned part is which action to take, the hard-coded part is which state to base that action off of) also selects the first state at first and imagines one action on top of it, but then always selects the newly imagined state to imagine from.
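Before going on to the learned strategy, here is a minimal sketch of the two ingredients described so far: the environment-model interface and the two hard-coded choices of which state to imagine from. All names here are illustrative, not taken from the paper's code:

```python
import numpy as np

class EnvironmentModel:
    """Black-box learned model: (state, action) -> (predicted next state, predicted reward)."""

    def __init__(self, predict_fn):
        self.predict_fn = predict_fn  # in practice, a trained neural network

    def imagine(self, state, action):
        # one imagined transition; the real environment is untouched
        return self.predict_fn(state, action)

def pick_base_one_step(tree):
    return 0                 # always imagine from the current real state

def pick_base_n_step(tree):
    return len(tree) - 1     # always chain from the newest imagined state

# toy stand-in for a trained network: linear dynamics, distance-based reward
model = EnvironmentModel(lambda s, a: (s + a, -float(np.linalg.norm(s + a))))
```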
Back at the figure, you can see the n-step strategy selects this one to propose this action, and then it selects that imagined action to propose yet another action. So it imagines one path into the future instead of many paths that each go just one step ahead. And then lastly, this imagination tree strategy is basically the only one that's actually a learned strategy, where the manager can propose any previously imagined or real-world state to imagine from. So you always have the current world state, which is the first node in the graph. You select it (of course, at the beginning you have no choice), you imagine an action on top of it, but then you can select either of these two nodes to imagine from, and here again the first is selected and an action is imagined. Then you have three nodes, and you can choose any of those as the place where you want to imagine the next step. In this example, the manager selects this state right here and decides to imagine another action on top of it, until it is satisfied and can then actually go over to performing an action in the real world. So if you then decide to do an action in the real world, you can take all of the things you've imagined and use them. You see in this pathway here, this flows back to the manager. At some point it decides: okay, I've imagined enough, and we can use all of these imagined steps in order to take a real-world step. And after the real-world step, the entire thing starts again. So that's how it learns to plan. Really interesting, of course, is this imagination tree strategy, where it actually learns to plan ahead. So the model is described in detail in a formal manner, and then the paper goes over to experiments. There's this spaceship task where you have to get the spaceship to move around these asteroids and collect a reward. You can see different imagination trajectories here in the top row: the red ones are the executed actions, the blue ones are imagined ones, and you see the tree that gets constructed. So first it takes an action right here, just without imagining. Then it imagines one step, but then decides to take another action. Then it imagines two actions but decides on a third one. So on the left in this picture you see the first action. Then it imagines one action and decides to take an action. Then it imagines two actions, and based on these imaginations (I'm going to guess it's fairly satisfied with the one that's very close to the target) it can then take an action. So it's pretty smart, in that it sees that the second imagined action is fairly close to where it wants to go, so it doesn't need to imagine yet another action; that one then actually hits the target. It can go over to performing the action right away, because the imagination gives enough information. So these kinds of things are pretty cool to look at; check out the paper for more experiments if you want to know more. There are even more experiments in discrete mazes. They feature multiple goals, and they feature the system optimizing not only for its reward but also for internal costs, so having a budget for imagining and being penalized for too many imagination steps. On this experiment, the thing that bugs me is that they didn't actually use the full imagination tree algorithm; the manager only selected from what you can see here. So: do an actual action, then s_j0 is the first imagined state and s_jk is the last imagined state.
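For reference, the general control flow across all three strategies looks roughly like this. This is my own pseudocode-style sketch, reusing the illustrative names from above; decide_act, pick_base, and propose are made-up interfaces, not the authors' code:

```python
def plan_step(manager, actor, model, s_real, max_imaginations=5):
    tree = [(s_real, None)]                   # imagination tree nodes: (state, parent index)
    context = []                              # the aggregated "plan context"
    for _ in range(max_imaginations):
        if manager.decide_act(context):       # manager: act now, or imagine more?
            break
        # strategy choices, per the paper's figure:
        #   one-step: base = 0          (always the real state)
        #   n-step:   base = last node  (always the newest imagined state)
        #   tree:     base = a learned pick among all nodes
        base_idx = manager.pick_base(tree, context)
        s_base, _ = tree[base_idx]
        a_img = actor.propose(s_base, context)         # imagined action
        s_img, r_img = model.imagine(s_base, a_img)    # one imagined transition
        tree.append((s_img, base_idx))
        context.append((s_base, a_img, s_img, r_img))
    return actor.propose(s_real, context)     # real action, informed by the imagination
```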
So basically the manager can only choose between actually acting, doing the one-step strategy, or doing the n-step strategy in each step. This somewhat limits the way it can plan, but I'm going to guess they did this because otherwise they couldn't have trained the model, and it seems a pretty reasonable simplification to make in order to get this to work. Also check out the paper if you want to see how all of these different parts are implemented. Of course, you can guess that most of them are neural networks, and it's pretty standard so far; check out the additional experiments too. They're pretty cool. See you next time.
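In terms of the sketch above, this restriction means pick_base no longer ranges over the whole tree; it collapses to something like the following, where the option names are mine:

```python
def pick_base_restricted(tree, choice):
    # the maze experiments limit the manager's routing to three options:
    #   "act"      -> stop imagining and execute a real action (handled outside)
    #   "one_step" -> imagine from s_j0, the current real state
    #   "n_step"   -> imagine from s_jk, the last imagined state
    return 0 if choice == "one_step" else len(tree) - 1
```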
[ { "start": 0, "end": 8.040000000000001, "text": " Hi there, today we're taking a look at learning model-based planning from scratch by DeepMind." }, { "start": 8.040000000000001, "end": 12.32, "text": " So as a recap, what is model-based planning?" }, { "start": 12.32, "end": 20.32, "text": " Basically a model, also called an environment model, is just kind of a black box thing," }, { "start": 20.32, "end": 26.28, "text": " you can imagine, where you have a state of your current environment, you put it in there" }, { "start": 26.28, "end": 30.720000000000002, "text": " and you have an action that you want to take, you put it in there as well." }, { "start": 30.720000000000002, "end": 36.52, "text": " And the environment model tells you what the new state, S' here, and possibly also the" }, { "start": 36.52, "end": 41.36, "text": " new reward for taking that action is going to be." }, { "start": 41.36, "end": 49.120000000000005, "text": " So this, of course it's always good to have such an environment model, because you can" }, { "start": 49.12, "end": 57.04, "text": " use it to plan ahead, but the authors here question how do you plan and propose a new" }, { "start": 57.04, "end": 59.239999999999995, "text": " algorithm to learn this planning." }, { "start": 59.239999999999995, "end": 66.6, "text": " For now, people have mostly used heuristics to plan either things like A star search," }, { "start": 66.6, "end": 72.08, "text": " where you have a maze and you want to go here, and you kind of have a heuristic, say the" }, { "start": 72.08, "end": 77.75999999999999, "text": " distance between the two points, but there's kind of walls in between, so you try to go" }, { "start": 77.76, "end": 83.52000000000001, "text": " there but then there's a wall and you kind of explore around it." }, { "start": 83.52000000000001, "end": 87.12, "text": " So these are kind of the techniques that have existed so far." }, { "start": 87.12, "end": 95.64, "text": " Also we've seen stuff like Monte Carlo tree search for AlphaGo and other things like this" }, { "start": 95.64, "end": 98.5, "text": " that are not really learned." }, { "start": 98.5, "end": 108.9, "text": " So this kind of paper pros and mechanisms to learn how to plan using such a model." }, { "start": 108.9, "end": 117.44, "text": " So basically they devise an algorithm or a framework, you can say, where they have this," }, { "start": 117.44, "end": 120.28, "text": " what you see here, this schematic." }, { "start": 120.28, "end": 124.6, "text": " This schematic tells you that you have this thing called a manager." }, { "start": 124.6, "end": 137.35999999999999, "text": " Let me just quickly bring up my comment thingy thing." }, { "start": 137.35999999999999, "end": 143.35999999999999, "text": " You can see here there's this kind of manager and this manager can decide to imagine or" }, { "start": 143.35999999999999, "end": 147.4, "text": " act." }, { "start": 147.4, "end": 154.28, "text": " If it acts, then it simply takes kind of the current state and all the things that happened" }, { "start": 154.28, "end": 159.52, "text": " so far and decides on an action to do in the world." }, { "start": 159.52, "end": 164.36, "text": " And then it kind of trains on the action like classic reinforcement learning." 
}, { "start": 164.36, "end": 171.68, "text": " But if it decides to imagine, it can use its model of the world, its imagination model" }, { "start": 171.68, "end": 177, "text": " to perform an action and see what would happen if it did that action." }, { "start": 177, "end": 187.32, "text": " And it can then also append that to the memory and use it to further learn." }, { "start": 187.32, "end": 190.68, "text": " Even though it didn't do the action, it can imagine what happens." }, { "start": 190.68, "end": 192.16, "text": " So how can it imagine?" }, { "start": 192.16, "end": 201.56, "text": " The authors in particular propose different methods of imagining." }, { "start": 201.56, "end": 205.32, "text": " This graph you see there are proposed methods." }, { "start": 205.32, "end": 214, "text": " The first two methods basically, so here every row is a method of imagining." }, { "start": 214, "end": 218.72, "text": " The first method, the one step imagining, simply means you have the current state of" }, { "start": 218.72, "end": 221.79999999999998, "text": " the world, which is the grey blob here." }, { "start": 221.79999999999998, "end": 226.4, "text": " And what you do is you always go from the current state of the world, imagine one step" }, { "start": 226.4, "end": 227.76, "text": " ahead." }, { "start": 227.76, "end": 234.32, "text": " So basically you select the state to imagine from, you imagine one step." }, { "start": 234.32, "end": 241.28, "text": " And if you decide to not take an action after that, but imagine again, because maybe you're" }, { "start": 241.28, "end": 246.16, "text": " not sure yet what you want to do, so you want to imagine another action, you would again" }, { "start": 246.16, "end": 255.84, "text": " go from this initial state, so this in the horizontal direction is time, time, internal" }, { "start": 255.84, "end": 258.4, "text": " time basically." }, { "start": 258.4, "end": 263.15999999999997, "text": " You would again go from this state, imagine another action based on it, and so on, imagine" }, { "start": 263.16, "end": 265.76000000000005, "text": " another action." }, { "start": 265.76000000000005, "end": 271.84000000000003, "text": " Until you're satisfied, you've imagined enough so you can actually take a real world step." }, { "start": 271.84000000000003, "end": 282.86, "text": " In contrast, the end step strategy, so these are hard coded strategies as you can see." }, { "start": 282.86, "end": 286.08000000000004, "text": " The learned part is which action should I take?" }, { "start": 286.08000000000004, "end": 291.40000000000003, "text": " The hard coded part is where do I base this action off?" }, { "start": 291.4, "end": 297, "text": " The end step strategy also selects the first state at first, imagines one action on top" }, { "start": 297, "end": 302.15999999999997, "text": " of it, but then always selects that new imagined action." }, { "start": 302.15999999999997, "end": 308.56, "text": " So you can see here it selects this one to propose this action, and then it selects that" }, { "start": 308.56, "end": 312.71999999999997, "text": " imagined action to propose yet another action." }, { "start": 312.71999999999997, "end": 319.59999999999997, "text": " So you can see it kind of imagines one path into the future instead of many paths, just" }, { "start": 319.6, "end": 321.72, "text": " one step ahead." 
}, { "start": 321.72, "end": 329.48, "text": " And then lastly, this imagination tree strategy is basically the only one that's actually" }, { "start": 329.48, "end": 339.32000000000005, "text": " kind of a learned strategy where the manager can now propose any previously imagined or" }, { "start": 339.32000000000005, "end": 342.06, "text": " real world states in order to imagine from." }, { "start": 342.06, "end": 347.08000000000004, "text": " So you always have the current world state, which is the first node in the graph." }, { "start": 347.08, "end": 350.12, "text": " You select it, of course, at the beginning you have no choice." }, { "start": 350.12, "end": 355.44, "text": " You imagine an action on top of it, but then you can select any of these two nodes to imagine" }, { "start": 355.44, "end": 361.28, "text": " from and here again the first is selected and action is imagined." }, { "start": 361.28, "end": 363, "text": " Then you have three nodes." }, { "start": 363, "end": 367.76, "text": " You can choose any of those where you want to imagine the next step." }, { "start": 367.76, "end": 375.78, "text": " Here in this example, the manager selects this state right here and decides to imagine" }, { "start": 375.78, "end": 382.91999999999996, "text": " another action on top of it until it is satisfied and can then actually go over to plan to actually" }, { "start": 382.91999999999996, "end": 384.44, "text": " perform an action in the real world." }, { "start": 384.44, "end": 395.32, "text": " So if you then decide to do an action in the real world, what you can do is you can take" }, { "start": 395.32, "end": 402.03999999999996, "text": " all of the things you've imagined and use that." }, { "start": 402.04, "end": 407.20000000000005, "text": " So you see in this pathway here, this flows back to the manager." }, { "start": 407.20000000000005, "end": 412.44, "text": " At some point it decides, okay, I've imagined enough and we can use all of these imagined" }, { "start": 412.44, "end": 416.16, "text": " steps in order to take a real world step." }, { "start": 416.16, "end": 423.8, "text": " And after the real world step, the entire thing starts again." }, { "start": 423.8, "end": 426.88, "text": " So that's how it learns to plan." }, { "start": 426.88, "end": 438.32, "text": " Really interesting of course is this imagination tree strategy where it actually learns to" }, { "start": 438.32, "end": 442.78, "text": " plan ahead." }, { "start": 442.78, "end": 449.92, "text": " So the model is described in detail in a formal manner and then it already goes over to experiments" }, { "start": 449.92, "end": 462.04, "text": " and there's this spaceship task where you have to get the spaceship to move around stuff" }, { "start": 462.04, "end": 468.44, "text": " and around these asteroids and get a reward." }, { "start": 468.44, "end": 475.40000000000003, "text": " So you can see different imagination projectives here in the top row." }, { "start": 475.4, "end": 481.64, "text": " You see the red ones is the kind of executed actions, the blue ones are imagined ones and" }, { "start": 481.64, "end": 483.84, "text": " you see the tree it's constructed." }, { "start": 483.84, "end": 488.47999999999996, "text": " So first it takes an action right here, just without imagining." }, { "start": 488.47999999999996, "end": 493.15999999999997, "text": " Then it imagines one step but then decides to take another action." 
}, { "start": 493.15999999999997, "end": 500.46, "text": " It imagines two actions but decides on a third one." }, { "start": 500.46, "end": 506.2, "text": " So you see to the left in this picture you see the first action." }, { "start": 506.2, "end": 511.44, "text": " Then it imagines one action and decides to take an action." }, { "start": 511.44, "end": 516.12, "text": " Then it imagines two actions and based on these imaginations, I'm going to guess it's" }, { "start": 516.12, "end": 523.4399999999999, "text": " fairly satisfied with the one that's very close to the target and it can then take an" }, { "start": 523.4399999999999, "end": 524.4399999999999, "text": " action." }, { "start": 524.44, "end": 531.2800000000001, "text": " So it's pretty smart in that it sees that the second imagined action is fairly close" }, { "start": 531.2800000000001, "end": 537.32, "text": " to where it wants to go and it doesn't need to imagine yet another action." }, { "start": 537.32, "end": 539.24, "text": " That then actually hits the target." }, { "start": 539.24, "end": 546.36, "text": " It can go over to performing the action right away because the imagination gives enough" }, { "start": 546.36, "end": 549.84, "text": " information." }, { "start": 549.84, "end": 558.1600000000001, "text": " So these kind of things are pretty cool to look at and check out the more experiments" }, { "start": 558.1600000000001, "end": 559.2800000000001, "text": " if you want to know." }, { "start": 559.2800000000001, "end": 563.2800000000001, "text": " Here is even more experiments in discrete mazes." }, { "start": 563.2800000000001, "end": 565, "text": " They feature multiple goals." }, { "start": 565, "end": 573.0400000000001, "text": " They feature the system optimizing not only for its reward but also for kind of internal" }, { "start": 573.04, "end": 580.16, "text": " costs, so having a budget for imagining and optimizing not doing too many imagination" }, { "start": 580.16, "end": 582, "text": " steps." }, { "start": 582, "end": 588.3199999999999, "text": " On this experiment the kind of thing that bugs me here is the fact that they didn't" }, { "start": 588.3199999999999, "end": 596.0799999999999, "text": " actually use the full imagination tree algorithm but the manager only selected from what you" }, { "start": 596.0799999999999, "end": 597.12, "text": " can see here." }, { "start": 597.12, "end": 608.64, "text": " So do an actual action, then SJ0 is the first imagined state and SJK is the last imagined" }, { "start": 608.64, "end": 612.04, "text": " state." }, { "start": 612.04, "end": 622.88, "text": " So basically the manager can only choose between actually acting, then doing this one step" }, { "start": 622.88, "end": 628.4, "text": " strategy and then doing kind of this end step strategy in each step." }, { "start": 628.4, "end": 635.88, "text": " So it kind of limits the way it can plan but I'm going to guess they did this because otherwise" }, { "start": 635.88, "end": 641.56, "text": " they couldn't have trained the model and it seems a pretty reasonable simplification to" }, { "start": 641.56, "end": 645.28, "text": " make in order to get this to work." }, { "start": 645.28, "end": 650.56, "text": " Also check out the paper if you want to see how all of these different parts are implemented." 
}, { "start": 650.56, "end": 656.7199999999999, "text": " Of course you can guess most of them are neural networks and it's pretty standard so far and" }, { "start": 656.7199999999999, "end": 659.1199999999999, "text": " check out for the additional experiments." }, { "start": 659.1199999999999, "end": 660.1199999999999, "text": " They're pretty cool." }, { "start": 660.12, "end": 681.16, "text": " See you next time." } ]
pBau7umFhjQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Topographic VAEs learn Equivariant Capsules (Machine Learning Research Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "vae", "variational", "bayesian", "variational methods", "variational autoencoder", "max welling", "elbo", "prior", "student t", "reparameterization trick", "log likelihood", "encoder decoder" ]
#tvae #topographic #equivariant Variational Autoencoders model the latent space as a set of independent Gaussian random variables, which the decoder maps to a data distribution. However, this independence is not always desired, for example when dealing with video sequences, we know that successive frames are heavily correlated. Thus, any latent space dealing with such data should reflect this in its structure. Topographic VAEs are a framework for defining correlation structures among the latent variables and induce equivariance within the resulting model. This paper shows how such correlation structures can be built by correctly arranging higher-level variables, which are themselves independent Gaussians. OUTLINE: 0:00 - Intro 1:40 - Architecture Overview 6:30 - Comparison to regular VAEs 8:35 - Generative Mechanism Formulation 11:45 - Non-Gaussian Latent Space 17:30 - Topographic Product of Student-t 21:15 - Introducing Temporal Coherence 24:50 - Topographic VAE 27:50 - Experimental Results 31:15 - Conclusion & Comments Paper: https://arxiv.org/abs/2109.01394 Code: https://github.com/akandykeller/topographicvae Abstract: In this work we seek to bridge the concepts of topographic organization and equivariance in neural networks. To accomplish this, we introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables. We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST. Furthermore, through topographic organization over time (i.e. temporal coherence), we demonstrate how predefined latent space transformation operators can be encouraged for observed transformed input sequences -- a primitive form of unsupervised learned equivariance. We demonstrate that this model successfully learns sets of approximately equivariant features (i.e. "capsules") directly from sequences and achieves higher likelihood on correspondingly transforming test sequences. Equivariance is verified quantitatively by measuring the approximate commutativity of the inference network and the sequence transformations. Finally, we demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks. Authors: T. Anderson Keller, Max Welling Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there. Today we'll look at Topographic VAEs Learn Equivariant Capsules by T. Anderson Keller and Max Welling. On a high level, this paper proposes a new type of variational autoencoder where the latent variables aren't independent but are organized in a topographic way. Now, what that means we're going to look at, but in essence it means that the model can represent certain transformations in the real world as transformations inside the latent space of the model. So the whole question here is: how do we build a latent space and a model where this naturally happens as we train it? We want the real world to somehow correspond to the latent space in a way such that if the real world moves, the latent space moves equivalently, or equivariantly; that's where this word is going to come in. So we're going to go through the paper. I have to say I don't understand this fully as well. These variational frameworks are always, I feel, kind of math heavy, and they take a very different approach than the papers I might be used to. So I'm going to tell you what I think is going on here, and if I'm completely wrong (this is entirely possible) please let me know. Alright, let's dive into the paper. This is the first graphic right here, which shows kind of an overview over the system. So what do they want to achieve? What they say is: we're not going to consider just any kind of data; we're going to consider data that is essentially frames of a video. So we're going to assume that what we're looking at is kind of a video, and the transitions inside the video are sort of continuous, monotonic, and slow. Here you can see the seven rotates slowly and also changes its color slowly, relatively monotonically, over this sequence. So our model is going to take in this entire sequence; one picture is going to be the focus here, this green one, but we're going to feed the entire sequence into the model, and we want the model to come up with a latent representation of the focus image. In this case that's going to be (we'll jump a step here) this thing right here. Let's call it, I don't even remember what they call it, let's call it Z hat. Okay, this is a latent representation of the focus image, and now obviously, as in a regular variational autoencoder, I could push this into the decoder and get back the same image, and I can do so here as well. However, we want something else too. We also want that if I transform my latent space in a certain way, and this way is going to be the roll operation in this paper, then this corresponds to moving forward in this sequence. So I have a sequence as an input, and I say: my latent space should be such that if I perform certain operations on it, in this case if I roll by 10, that corresponds not to the picture that I have input, but to the picture I would see if I were to observe this transition 10 steps into the future.
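Mechanically, the roll is nothing exotic: it is a cyclic shift of a capsule's activations, and the desired equivariance says that encoding and then rolling should match encoding the future frame. A tiny numpy illustration (the equivariance itself is what training has to deliver, not a property of the shift):

```python
import numpy as np

capsule = np.array([0.1, 0.7, 0.2, -0.3, 0.5, 0.0, 0.4, -0.1])  # one capsule's activations
one_step  = np.roll(capsule, 1)    # cyclic shift by one position along the 1-D torus
ten_steps = np.roll(capsule, 10)   # shifts wrap around: same as shifting by 10 mod 8 = 2

# the property the trained model should satisfy, schematically:
#   np.roll(encode(x_t), 10)  ~=  encode(x_{t+10})
```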
So, roll by 10: roll in this case means the following. You can see here they have two of these things they call capsules, the left one and the right one, and the roll simply means that I take every latent variable and roll it forward. This is over the latent dimension: I just roll them forward by one step, and I do that 10 times. As you can see, this is arranged in sort of a 1D torus, so I can just roll this around, and this capsule too I can just roll around 10 times, and that, hopefully, if we train the model correctly, should correspond not to the input image but to the image that is 10 steps into the future. So that is the goal. Now, we don't want to train a model explicitly to predict 10 steps into the future; that would be a valid task, but it's not what this model is after. What this model asks is: can we build a model architecture and a latent space architecture such that this happens automatically? And you can already see kind of how this latent space comes to be. I said this Z hat here is going to be the latent representation; you can see that it is not the thing that is directly output by the encoder. The encoder in this case outputs many things. It outputs a Z variable, so the Z hat is what I'd call a kind of normalized Z, and the Z variable is a kind of unnormalized Z. It outputs a Z variable for the focus image, but it also outputs these U variables, which we then square. So these U variables right here are output; I'm going to guess this one is from this image, this one is from this image, and this one is from this image, and they also kind of look into the future right here. So I have these U variables, and I define sort of a window, a context window, around which I look. I square the U variables, sum them all up, pull the square root, and divide, and this is why I say a kind of normalized Z is what comes out of this.
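Written out, the normalization just described is, in my transcription (up to constants such as averaging over the window; the paper additionally includes a learnable mean, which comes up later):

```latex
\hat{z}_i \;=\; \frac{z_i}{\sqrt{\sum_{j \in N(i)} u_j^{2}}}
```

where N(i) is the context window around the focus position i.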
It's fairly complicated, but this is going to encourage the desired behavior in a way, so let's see why that is. For that I want to draw back a little bit to a regular VAE, a regular variational autoencoder. In a regular VAE you have an image; this is encoded, decoded, and you get back an image. What you assume in a regular VAE is that the latent space is made up of independent latent random variables that are Gaussian distributed and, as I already said, independent from each other. And you claim: if I know the latent variables, essentially if I know the mean and variance of these, then producing an image is easy. You can simply train a neural network: I input the values of my latent variables, or how the Gaussians are parameterized, and I train the decoder to produce a picture from that. That is easy. The question is: if I have a picture, trusty cat right here, what are the corresponding latent variables? What are the values of the latent variables that make sense for this picture? Of course, in a VAE we train the encoder and the decoder jointly, such that they can cooperatively construct this latent space: okay, how should the latent space look from which the decoder decodes? But I just want to turn your attention to the encoder's job, which is essentially to take in an image and produce the values of the latent variables, where the latent variables are assumed to be independent from each other and Gaussian distributed. Now, this is where this model differs. This model says: we're going to assume we have observed variables X and latent variables T. Observed are, I guess, the images or the image sequences, and T are the latent variables; this, I guess, would be equivalent to what I called Z hat, they call it T. All right. So they formulate the joint distribution. Note that in these variational frameworks (it's not my thing either) what you do is you always propose a mechanism by which the data and the variables are generated. So you, as the designer of the algorithm, propose the structure of how the latent variables work together, and then you have some small parts in there where you say: well, these things I don't know, so I'm going to let a neural network do them. But essentially you come and impose a structure upon the world, and if you get the structure correct, your model will work fine; if you don't get the structure correct, your model won't work fine. This is a bit of a different way of working than saying: well, I train a convnet to predict. So we're going to propose our structure. We're going to say that the joint distribution of observed and latent variables factorizes into these two terms: it factorizes into this conditional (if I have the latent variables, then what are the images) times the prior across the latent variables. We've already seen the first distribution: this conditional distribution is simply your decoder in the VAE framework, and that's written here. It essentially says: to produce an image, I'm going to put T, the latent variable, into this neural network G right here, and that will give me the distribution of my output image. So this is your decoder in the VAE. Now the interesting part, and where it differs from a regular VAE, is right here, where they say how our latent space looks: our latent space isn't independent Gaussians, it's actually this TPoT distribution, a topographic product of Student-t's model. That's going to be our prior distribution, and that distribution is going to encourage this topographically organized latent space.
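In symbols, the imposed generative structure is simply the following; I'm writing the likelihood as a Gaussian around the decoder output for concreteness, the exact output distribution being a modeling choice:

```latex
p(x, t) = p(x \mid t)\, p(t), \qquad
p(x \mid t) = \mathcal{N}\!\bigl(x \,\big|\, g_\theta(t),\, \sigma^2 I\bigr), \qquad
p(t) = \mathrm{TPoT}(t)
```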
So we can ask: how does it do that? Note that the encoder isn't here yet, because so far we've only imposed the generative process of the data, and the generative process starts at the latent space. I said: if I know what the latent variables are, I can just ask my decoder to produce an image, and this distribution here tells us how the latent variables are distributed. Now, obviously, what we want is for our encoder to produce the latent variables, but we also want what the encoder produces to follow this distribution right here, and that's going to be the difficulty, because what we can train with backpropagation is pretty much Gaussians. We can train things where we can apply the reparameterization trick, things we can backprop through; Gaussians we can sample from efficiently, and we have closed-form solutions for the KL divergences in the objectives. So essentially what we can work with in these variational frameworks is Gaussians, not topographic products of Student-t's. However, here they show that we can in fact construct a product of Student-t's (this is not yet a topographic product, just a product of Student-t's distribution) from Gaussians. And that goes as follows: I take one Z variable and a bunch of U variables, all distributed as Gaussians; I square the U's, sum them up, average them, take the square root, and divide Z by that. This variable right here is going to be a univariate Student-t random variable. This should be familiar if you've ever taken statistics or used the t-test for anything. And I can extend this to the multi-dimensional case: if T is a multi-dimensional Student-t random variable composed of independent Z's and U's, then we can construct T as a vector, and that is going to be distributed according to a product of Student-t's. And this should connect to what we've seen before: we said that this model's organization of the latent space is pretty much of this form, the Z variable divided by the square root of the sum of the squared U variables, and now we've learned how to construct a product-of-Student-t's latent space given independent Gaussians Z and U. So now it should click: in deep learning variational frameworks we can work pretty much only with Gaussian random variables; in this model we want to work with product-of-Student-t random variables; and here is the way to construct product-of-Student-t random variables from Gaussian random variables. That's why the neural networks will output the Z and the U; that's what they will output, and those are Gaussians, or supposed to be Gaussians. We then transform them, by dividing and summing in this way, into the latent variable that the decoder receives, which is this Z hat, or T. So we know that if the encoder outputs Gaussian random variables, the decoder will receive a product-of-Student-t random variable.
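This construction is easy to sanity-check numerically: dividing a standard Gaussian by the root mean square of D independent standard Gaussians gives exactly a Student-t with D degrees of freedom, and the heavy tails show up immediately:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D = 100_000, 5
z = rng.standard_normal(n)
u = rng.standard_normal((n, D))

t = z / np.sqrt(np.mean(u**2, axis=1))   # univariate Student-t with D degrees of freedom

# heavy tails: far more 4-sigma events than a standard normal would give
print(np.mean(np.abs(t) > 4.0))          # roughly 1e-2, vs ~6e-5 for a standard normal
```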
Now, why is the product-of-Student-t random variable special in any way? Because it enables us to, as they call it here, introduce topography. In essence, what it does is this: if some of the U's in this sum and some of the U's in that sum are the same (which you can see by the indices; in this case they are not, but suppose some are shared), that means the two T variables (not the two Z's, the two T's: this is T1 and this is T2) will no longer be independent. They will actually be dependent on each other. So this is a way to construct latent spaces where some of the variables are correlated, or in some other way have higher-order correlations with each other, meaning that the value of one is not independent of the value of the other. And that is pretty much the basis for what we want when constructing these topographic latent spaces. So here they say: introducing topography. Essentially, what we're going to do is define neighborhoods across our U variables, and we're going to share the U variables according to these neighborhoods, and that's going to make the components of T dependent on each other. This sounds complicated, but essentially you can imagine: instead of having, say, four latent random variables which are all independent Gaussians, we now have simply one set of Z variables and one set of U variables, and we're going to consider an entire sequence and not just one image. So we consider an entire sequence of images like this right here; every image produces one Z and one U variable, and then when we consider one image (let's say this is the focus right now) we consider its Z and a neighborhood of U's. That works just like a convolution: if this is, say, a neighborhood of three, we consider this U, this U, and this U. We construct the Z on top of the fraction, divided by the square root of this bubble squared plus this bubble squared plus this bubble squared, and that's going to be our T. The T for this image right here is this whole fraction. So when we train the VAE, we input the whole sequence, we focus on, for example, this picture, we construct its T by looking at its Z and its neighborhood of U's, then we put that T into the decoder, the decoder produces an image, and we can apply a loss function between the two. So that is the loss function. Note that the loss function doesn't say: if you roll ten times, then it needs to be the picture that's ten steps ahead. That is not the case at all; we actually don't have the roll function in here yet. But even once we introduce the roll function in the latent space, we're not going to explicitly train the model to predict the future. We're simply going to construct the latent space, as we did here, such that this naturally happens. So how are we going to do this? Almost the same way. Here they talk about capsules: you can see that they divide this neighborhood structure (the W defines the neighborhood structure), so some of the U's are connected, and then other ones are connected, but these U's are not connected with those. They talk about capsules, but essentially it's just that they make some of the variables dependent on each other and some not. When they do these neighborhood things, they simply have two sets of variables, two sets of Z's and U's, and they only construct two T variables, and that's what they call capsules. I don't know why the capsule terminology necessarily enters this paper, but they want to draw a connection there. So, temporal coherence: now we get to how we organize this latent space such that the roll operation also comes in, and this is pretty simple; it's actually just an extension of what we had right here. If you consider these images here as images of a sequence, we always said: you need to be connected to your neighboring U variables. And now we're going to say the same thing, but (I'm going to draw the critical path here again) we have a Z variable right here, we have U variables from the neighborhood, and we're going to take the Z variable on top of the fraction and the U variables below the fraction, like so. But now, before we take the U variables below the fraction, we're going to roll the U variables according to their distance from the focus: in this case, this one would simply be rolled back by one, and this one rolled forward by one.
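Putting the window and the shifts together, the temporally coherent T for the focus frame looks roughly like the sketch below. This is my own rendering: the sign convention of the roll and the placement of the learnable mean mu are assumptions, not taken from the paper's code.

```python
import numpy as np

def coherent_t(z_focus, u_by_offset, mu=0.0):
    """u_by_offset maps a temporal offset d to that frame's u vector.
    Each u is rolled by its distance from the focus before being pooled."""
    pooled = np.mean([np.roll(u, d) ** 2 for d, u in u_by_offset.items()], axis=0)
    return (z_focus - mu) / np.sqrt(pooled)   # mu: learnable mean (placement assumed)

# toy example: the focus frame plus one frame back and one frame ahead
rng = np.random.default_rng(0)
u_by_offset = {d: rng.standard_normal(8) for d in (-1, 0, 1)}
t = coherent_t(rng.standard_normal(8), u_by_offset)
```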
in the language of this paper what this means is that we don't want we we don't want this image or it given a particular position in this image right this position right here if we simply apply the classic neighborhood structure we say we want this position in this image to be correlated with the same position a step back and a step forward now if we construct the role like this what we're saying is no no no no I don't want I want I want this position to be correlated with maybe this position here and this position there like slightly behind and slightly ahead but I'm obviously not going to tell the model what I expect I simply say please this image is one time stack well black this image is one time step back from me please roll the latent space by one and that's going to be your relevant variable and in this case it's please roll the latent space of this thing one forward and that's going to be your relevant latent variable so it's not that we train we we train rolling this t variable here because the t is what finally comes out we're not training this t to roll forward or back and then predict ten steps ahead we're simply saying how you are influenced you as a focus how you are influenced by pictures before and after you you're not simply taking into account their latent variables you want to take into account rolled versions of their latent variables in order for you to reconstruct yourself in the training objective and it turns out at least that's how I understand it right and it turns out so here you can see the whole process we're going to take images we're going to produce mean and variance of late of Gaussian variables for the Z and the u variables so if you had just a VAE it would just be this right here and those will be a layer you're latent variables but not here we produce two sets Z's and use then we're going to construct the t variables I don't know why this is on the bottom here but then we're going to construct the t variables according to this formula W here is the neighborhood structure you define it you and Z are the variables you produced from your encoder or you sampled from what your encoder produced and mu here is also a learnable parameter a learnable mean parameter and then we want to stick this these T's into you're going to stick these T's into this neural network now here it says Z and ZL and UL but essentially this here this here these create T oh here it's here you're going to stick the T into your decoder neural network remember the G how do we get the picture from the latent variable that's the decoder and stick that into the decoder and out you get an image and you train it with the classic elbow the evidence lower bound which says okay what I want is I want to reconstruct the picture accurately right that's this term right here to reconstruct the picture accurately but I also want that my Z well essentially what I want is that my T variables are distributed according to this TPOT distribution I want to enforce that but I can't right I can work with Gaussians so what but what I can do is I can say well the Z variables and the U variables they must be as Gaussian as possible so I penalize the KL divergence between what I produce which is this right here and the Gaussian like a a pure Gaussian this has a closed form I can I can calculate KL divergences from what I produce with Gaussians no problem okay and that's the training loss and I simply average that over the input sequence and there there you go now the evaluation of these things I have to say after 
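To make that construction concrete, here is a minimal sketch in PyTorch of how such a t could be computed from per-frame Gaussian samples z and u, with each neighbor's u rolled by its temporal distance from the focus frame. This is my own reading of the mechanism, not the authors' code: the function name, the wrap-around at the sequence ends, where the learnable mean mu enters, and the sign convention of the roll are all assumptions.

import torch

def construct_t(z, u, half_window=1, mu=0.0, eps=1e-8):
    # z, u: (seq_len, dim) -- one Gaussian sample per frame (from the encoder).
    # For each focus frame i, t[i] = z[i] / sqrt(mean of squared, *rolled* u's
    # taken from the temporal neighborhood around i).
    seq_len, dim = z.shape
    t = torch.empty_like(z)
    for i in range(seq_len):
        denom_sq = torch.zeros(dim)
        for offset in range(-half_window, half_window + 1):
            j = (i + offset) % seq_len  # wrap around (an assumption of this sketch)
            # Roll the neighbor's u along the latent dimension by its temporal
            # distance, so "one step back" contributes a one-step-rolled u.
            rolled = torch.roll(u[j] - mu, shifts=offset, dims=0)
            denom_sq = denom_sq + rolled ** 2
        t[i] = z[i] / torch.sqrt(denom_sq / (2 * half_window + 1) + eps)
    return t

# Toy usage: 8 frames, 16 latent dimensions.
z, u = torch.randn(8, 16), torch.randn(8, 16)
print(construct_t(z, u).shape)  # torch.Size([8, 16])

In training, this t would be fed to the decoder, with the reconstruction loss plus the KL terms pushing z and u toward Gaussians, as described above.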
Now, for the evaluation of these things: I have to say, after reading through the experiments and the evaluations, this is kind of an idea paper, at least I feel so. Correct me if I'm wrong, but I feel that this is sort of an idea paper. It's like: here's an idea, and it works if you specifically construct a data set for it. The experiments also appear to be kind of fiddly: you have to really get your parameters right to make this work. But if you do, then the model behaves as you expect. So they measure things like: is the rolled version of the latent variables really equal to the latent variables a couple of time steps ahead? And things like this. And they produce these maps. Here is one where the latent space isn't a 1D torus like we looked at, but a 2D torus. A 1D torus is this: you go around and around. A 2D torus is like a plane where, if you leave here, you come back here, and if you leave here, you come back here. So if you roll this up, you have a pipe, and if you close the pipe, you have like a donut; that's a torus. So they have a topographic space like a torus, they simply apply that to MNIST, and the test set sort of looks like this. If you want to read something into this, feel free; I'm not sure. But when they go with the sequences, here you see the sequences: I think on top is what they input, then this is the continuation that the model doesn't see, and on the bottom is what the model produces. You can see the model does get to a point where it understands how these sequences go. Here, it goes large, large, and then it kind of flips around to the smallest; this is expected behavior. Here as well, with the rotation: the model continues the rotation. And it turns out, they have these experiments, that even if the model is just trained with single transformations, so either a rotation, or a scale transformation, or a color change, it can generalize to multiple transformations at once. As you can see right here with colors and rotations, the model can generalize to that fairly well. Okay, I don't want to get too much into the experiments, because I'm not sure how important the numbers here are. I'm safe to say: if you construct this model, and if you apply it to problems where exactly this is needed, and if you get the hyperparameters right, then this model actually works better. Whereas a regular neural network could not easily incorporate the concept of these slowly changing transitions: it would sort of have to learn, okay, what color comes after red? Orange. Okay, what color comes after orange? Yellow. Okay, what color comes after yellow? Green. I guess this model has to learn that as well, but a regular model cannot represent the transition in a sequence as such; it has to learn it as a parameterized function, rather than being able to map it to an internal transformation of the latent space, like the topographic VAE can do. Okay, that was it for me. I'm not competent enough to tell you how big of a step this is. It feels to me like a little step; it might be a giant step, I don't know. It feels to me like it's kind of an idea paper, to show something neat that you could do in an idealized case, and it might be that this is a much bigger deal than I think. I thought it was a cool paper, I thought it was a neat idea, and it's written such that, even though it's in a style I'm not as competent at, I could still make sense of it. So if you enjoy this, give it a read. Yeah, let me know if you have any comments, and that was it. Bye bye. Thanks.
[ { "start": 0, "end": 5.62, "text": " Hello there. Today we'll look at topographic VAEs learn equivariant" }, { "start": 5.62, "end": 10.72, "text": " capsules by T. Anderson Keller and Max Welling. On a high level this paper" }, { "start": 10.72, "end": 15.72, "text": " proposes a new type of variational autoencoder where the latent variables" }, { "start": 15.72, "end": 22.04, "text": " aren't independent but are organized in a topographic way. Now what that means" }, { "start": 22.04, "end": 28.84, "text": " we're going to look at that but in essence it means that it can" }, { "start": 28.84, "end": 37.84, "text": " represent transformations in the real world of a certain kind as transformations" }, { "start": 37.84, "end": 45.120000000000005, "text": " inside of the latent space of the model. So the whole question is here how do we" }, { "start": 45.120000000000005, "end": 51.879999999999995, "text": " build a latent space and a model where this naturally happens as we train it." }, { "start": 51.879999999999995, "end": 57.519999999999996, "text": " We want the real world to somehow correspond to the latent space in a way" }, { "start": 57.52, "end": 63.160000000000004, "text": " such that if the real world moves the latent space moves equivalently or" }, { "start": 63.160000000000004, "end": 68.92, "text": " equivariantly. That's where this word is going to come in. So we're going to go" }, { "start": 68.92, "end": 74.72, "text": " through the paper. I have to say I don't understand this fully as well. These" }, { "start": 74.72, "end": 80.2, "text": " variational frameworks they are always kind of I feel kind of math heavy and" }, { "start": 80.2, "end": 85.76, "text": " they take a very different approach than the papers I might be used to. So I'm" }, { "start": 85.76, "end": 90.48, "text": " going to tell you what I think is going on here and if I'm completely wrong this" }, { "start": 90.48, "end": 97.52000000000001, "text": " is entirely possible please let me know. Alright let's dive into the paper. This" }, { "start": 97.52000000000001, "end": 102.08000000000001, "text": " is the first graphic right here that shows kind of an overview over the system." }, { "start": 102.08000000000001, "end": 107.76, "text": " So what do they want to achieve? What they say is we're not going to consider" }, { "start": 107.76, "end": 112.2, "text": " we're going to try to build a generative model like a variational autoencoder but" }, { "start": 112.2, "end": 116.04, "text": " we're not going to consider any kind of data. We're going to consider data" }, { "start": 116.04, "end": 121.24000000000001, "text": " essentially frames of a video. So we're going to assume that what we're" }, { "start": 121.24000000000001, "end": 126.52000000000001, "text": " looking at is kind of a video and the transitions inside the" }, { "start": 126.52000000000001, "end": 136.36, "text": " video are sort of continuous sort of monotonic and slow. So here you can" }, { "start": 136.36, "end": 143.48000000000002, "text": " see the seven rotates slowly and also changes its color slowly relatively" }, { "start": 143.48000000000002, "end": 148.76000000000002, "text": " monotonously over this sequence. 
So what they're going to say is we're gonna our" }, { "start": 148.76000000000002, "end": 155.04000000000002, "text": " model is going to take this entire sequence one a picture is going to be" }, { "start": 155.04000000000002, "end": 159.84, "text": " kind of the focus here so this green one is the focus but we're going to take in" }, { "start": 159.84, "end": 165.24, "text": " this entire sequence right here into the model and we want the model to come up" }, { "start": 165.24, "end": 170.4, "text": " with a latent representation of the focus image. In this case it's going to" }, { "start": 170.4, "end": 174.76000000000002, "text": " be, we'll jump a step here, is going to be this thing right here. Let's call that" }, { "start": 174.76000000000002, "end": 179.96, "text": " I don't even remember how they call it let's call it like Z hat. Okay this is a" }, { "start": 179.96, "end": 187.08, "text": " latent representation of the focus image and now obviously in a regular" }, { "start": 187.08, "end": 191.48000000000002, "text": " variational autoencoder I could now push this again into the decoder and get back" }, { "start": 191.48, "end": 197.23999999999998, "text": " the same image and I can do so here as well. However we want something else as" }, { "start": 197.23999999999998, "end": 204.48, "text": " well. We also want that if I now transform my latent space in a certain" }, { "start": 204.48, "end": 208.72, "text": " way and this way is going to be this role operation in this paper. If I" }, { "start": 208.72, "end": 216.23999999999998, "text": " transform my latent space in this way I want this to correspond to moving" }, { "start": 216.24, "end": 222.44, "text": " forward in this sequence right so I have a sequence as an input and I say well my" }, { "start": 222.44, "end": 228.84, "text": " latent space should be such that if I perform certain operations right here in" }, { "start": 228.84, "end": 234.76000000000002, "text": " this case I roll by 10 that that corresponds not to the picture that I" }, { "start": 234.76000000000002, "end": 240.72, "text": " have input but to the picture that would be if I were to observe this transition" }, { "start": 240.72, "end": 246.78, "text": " 10 steps into the future. So roll by 10 and roll in this case means you can see" }, { "start": 246.78, "end": 251.44, "text": " here they have two of these what they call you know capsules I think they call" }, { "start": 251.44, "end": 255.68, "text": " them capsules the left one and the right one and the roll simply means that I" }, { "start": 255.68, "end": 261.2, "text": " take every variable latent variable and I simply roll them forward so this is" }, { "start": 261.2, "end": 266.68, "text": " over the latent dimension I just roll them forward by one step I do that 10" }, { "start": 266.68, "end": 270.48, "text": " times this is as you can see this is arranged in sort of a torus here and a" }, { "start": 270.48, "end": 275.8, "text": " 1d torus so I can just roll this around and also this capsule I can just roll it" }, { "start": 275.8, "end": 281.16, "text": " around 10 times and that hopefully if we train the model correctly should" }, { "start": 281.16, "end": 287.6, "text": " correspond to not the input image but the image that is 10 steps into the" }, { "start": 287.6, "end": 294.88, "text": " future. 
So that is the goal now we don't want to train a model explicitly to" }, { "start": 294.88, "end": 299.44, "text": " predict 10 steps into the future that would be would be a valid task but it's" }, { "start": 299.44, "end": 303.94, "text": " not what this model wants what this model wants is say can we build a model" }, { "start": 303.94, "end": 307.36, "text": " architecture and the latent space architecture such that this is kind of" }, { "start": 307.36, "end": 315.04, "text": " happens automatically and let's see well you can already see kind of how this" }, { "start": 315.04, "end": 319.44, "text": " latent space comes to be I said this Z hat here is going to be the latent" }, { "start": 319.44, "end": 323.6, "text": " representation you can see that is not the thing that is directly output by the" }, { "start": 323.6, "end": 330.6, "text": " encoder the encoder in this case outputs many things so it outputs a Z variable" }, { "start": 330.6, "end": 335.44, "text": " so the Z hat is what I call kind of Z normalized the Z variable is kind of Z" }, { "start": 335.44, "end": 340.08000000000004, "text": " unnormalized so it outputs a Z variable for the focus image but it also outputs" }, { "start": 340.08000000000004, "end": 346.24, "text": " these U squared variable or it outputs the U variables which we then square so" }, { "start": 346.24, "end": 351.56, "text": " these U variables right here are output I'm gonna guess this is from this image" }, { "start": 351.56, "end": 355.12, "text": " and this is from this image and this is from this image and also kind of look" }, { "start": 355.12, "end": 362.56, "text": " into the future right here and yeah so I have these U variables and I define sort" }, { "start": 362.56, "end": 368.4, "text": " of a a window a context window around which I look and I also predict them I" }, { "start": 368.4, "end": 373.52, "text": " square them and then I sum them all up but pull the square root right here and" }, { "start": 373.52, "end": 379.72, "text": " I divide so this is why I say kind of a normalized Z is what comes out of this" }, { "start": 379.72, "end": 386.12, "text": " but it's fairly fairly complicated right but this is going to in a way" }, { "start": 386.12, "end": 393.20000000000005, "text": " encourage this behavior so let's see why that is and for that I want to just draw" }, { "start": 393.20000000000005, "end": 398.20000000000005, "text": " back a little bit to like a regular VAE a regular variational autoencoder so if" }, { "start": 398.20000000000005, "end": 404.92, "text": " in a regular VAE you have like an image this is encoded decoded and you get back" }, { "start": 404.92, "end": 412.6, "text": " an image right so in a regular VAE what you assume is you assume that the latent" }, { "start": 412.6, "end": 418.32, "text": " space is sort of made up out of these independent latent variables latent" }, { "start": 418.32, "end": 422.44, "text": " random variables they're Gaussian distributed and yeah there I already" }, { "start": 422.44, "end": 431.20000000000005, "text": " said they're independent from each other and you you claim if I know the latent" }, { "start": 431.2, "end": 435.68, "text": " variables so essentially if I know the mean and variance of these then you know" }, { "start": 435.68, "end": 442, "text": " producing an image is is easy right you can simply train a neural network I" }, { "start": 442, "end": 450.03999999999996, "text": " input you know which which var I input what values my latent variables are or" }, 
{ "start": 450.03999999999996, "end": 456.48, "text": " how the Gaussians are parameterized alternatively I input that and I train" }, { "start": 456.48, "end": 463.20000000000005, "text": " the decoder to produce a picture from that that is easy the question is if I" }, { "start": 463.20000000000005, "end": 469.52000000000004, "text": " have a picture trusty cat right here if I have a picture what are the" }, { "start": 469.52000000000004, "end": 474.84000000000003, "text": " corresponding latent variables you know what are the values of the latent" }, { "start": 474.84000000000003, "end": 480.84000000000003, "text": " variables that makes sense right here and of course in a VAE we train the" }, { "start": 480.84000000000003, "end": 485.20000000000005, "text": " encoder and the decoder jointly such that they cooperatively can construct" }, { "start": 485.2, "end": 490.71999999999997, "text": " this latent space like okay how how should how should the latent space look" }, { "start": 490.71999999999997, "end": 496.96, "text": " from which the decoder decodes but I just want to turn your attention to the" }, { "start": 496.96, "end": 502.28, "text": " question of the encoders job is essentially to take in an image and" }, { "start": 502.28, "end": 511.59999999999997, "text": " produce what values the latent variables are and the latent variables are assumed" }, { "start": 511.6, "end": 517.32, "text": " to be independent from each other and Gaussian distributed now this is where" }, { "start": 517.32, "end": 523.6, "text": " this model right here differs okay so this model says well we're going to" }, { "start": 523.6, "end": 528.6800000000001, "text": " assume we have observed and latent variables observed variables X and" }, { "start": 528.6800000000001, "end": 534.96, "text": " latent variables T observed are I guess the images or the image sequences and T" }, { "start": 534.96, "end": 541.32, "text": " are the latent variables so this I guess what this would be equivalent to Z hat" }, { "start": 541.32, "end": 547.96, "text": " to what I called Z hat they call team all right so they say will formulate the" }, { "start": 547.96, "end": 552.44, "text": " joint distribution note that in this framework in these variational frameworks" }, { "start": 552.44, "end": 558.6, "text": " I don't it's not my thing either but what you do is you always you propose a" }, { "start": 558.6, "end": 565.6800000000001, "text": " mechanism by which the data and by which the variables are generated so you as a" }, { "start": 565.68, "end": 571.4799999999999, "text": " designer of the algorithm propose the structure of how the latent variables" }, { "start": 571.4799999999999, "end": 578.16, "text": " work together and then you have some small parts in there that you say well" }, { "start": 578.16, "end": 582.4399999999999, "text": " these things I don't know I'm gonna let a neural network do these things but" }, { "start": 582.4399999999999, "end": 588.12, "text": " essentially you come and you impose a structure upon the world right and you" }, { "start": 588.12, "end": 591.68, "text": " know if you get the structure correct your model will work fine if you don't" }, { "start": 591.68, "end": 594.4399999999999, "text": " get the structure correct your model won't work fine but this is a bit of a" }, { "start": 594.44, "end": 600.1400000000001, "text": " different way of working than you know saying well I train a conv net to" }, { "start": 600.1400000000001, "end": 606.6800000000001, "text": " predict so 
we're going to propose our structure we're going to say the joint" }, { "start": 606.6800000000001, "end": 611.4000000000001, "text": " distribution of observed and latent variables factorizes into these two it" }, { "start": 611.4000000000001, "end": 617.9200000000001, "text": " factorizes into this conditional so if I have the latent variables right then" }, { "start": 617.92, "end": 624.92, "text": " what are the images and times the prior across the latent variables now we" }, { "start": 624.92, "end": 631.4, "text": " already seen this distribution it's the first one is listed here again this" }, { "start": 631.4, "end": 638.1999999999999, "text": " conditional distribution that's simply your decoder in the VAE framework and" }, { "start": 638.1999999999999, "end": 643.92, "text": " that's written here it essentially says well to produce an image I'm going to" }, { "start": 643.92, "end": 649.56, "text": " put T the latent variable into this neural network G right here and that" }, { "start": 649.56, "end": 655.88, "text": " will give me the distribution of my output image so this is your decoder in" }, { "start": 655.88, "end": 663.4799999999999, "text": " the VAE now the interesting part and where it differs from a regular VAE is" }, { "start": 663.4799999999999, "end": 668.24, "text": " right here where they say well how do our latent how does our latent space" }, { "start": 668.24, "end": 674.64, "text": " look well this is zooming around our latent space isn't a independent Gaussians" }, { "start": 674.64, "end": 683.96, "text": " it's actually this TPOT distribution this topographic product no where where" }, { "start": 683.96, "end": 688.52, "text": " does it I forgot what it I forgot what it's what it's called a topographic" }, { "start": 688.52, "end": 697.04, "text": " product of student T's model the TPOT topographic product of student T that's" }, { "start": 697.04, "end": 701.56, "text": " going to be our distribution and that distribution is going to encourage this" }, { "start": 701.56, "end": 707.9599999999999, "text": " topographically organized latent space right so we can ask how does it how does" }, { "start": 707.9599999999999, "end": 713.36, "text": " it do that note that the encoder isn't here yet because we've only we've" }, { "start": 713.36, "end": 719.52, "text": " defined we've imposed degenerative process of the data the generative" }, { "start": 719.52, "end": 723.92, "text": " process starts at the latent space I said if I know what the latent variables" }, { "start": 723.92, "end": 730.8, "text": " are I can just ask my decoder to produce an image so this distribution here tells" }, { "start": 730.8, "end": 735.36, "text": " us you know the latent variables are distributed like this and then there we" }, { "start": 735.36, "end": 744.5999999999999, "text": " go now obviously what we want is we want our encoder to produce the variables the" }, { "start": 744.5999999999999, "end": 749.64, "text": " latent variables but we also want what the encoder produces to follow this" }, { "start": 749.64, "end": 755.76, "text": " distribution right here and that's going to be the sort of difficulty right here" }, { "start": 755.76, "end": 762.48, "text": " because what we know what we can train with back propagation is pretty much" }, { "start": 762.48, "end": 766.3199999999999, "text": " Gaussians you know like we can train things where we can apply the" }, { "start": 766.3199999999999, "end": 773.04, "text": " reparameterization trick that's stuff we can back prop 
through stuff we can" }, { "start": 773.04, "end": 777.96, "text": " Gaussians we can sample from efficiently and so on we have closed form solution" }, { "start": 777.96, "end": 785.0400000000001, "text": " for the KL divergences in the objectives so essentially what we can do in these" }, { "start": 785.0400000000001, "end": 790.08, "text": " variational frameworks is Gaussians not topographic product of student is" }, { "start": 790.08, "end": 797.9200000000001, "text": " however here they show okay we can in fact construct a product of student is" }, { "start": 797.9200000000001, "end": 804.96, "text": " this is no this is not yet a topographic product is just a product of student is" }, { "start": 804.96, "end": 811.88, "text": " distribution from Gaussians and that is I take one z variable and I take a bunch" }, { "start": 811.88, "end": 817.6, "text": " of u variables and they're all distributed like Gaussians and I square" }, { "start": 817.6, "end": 825.9200000000001, "text": " the use I sum them up I average them and then I take the square root and I divide" }, { "start": 825.9200000000001, "end": 832.1600000000001, "text": " z by dot and this variable right here that's going to be a univariate student" }, { "start": 832.16, "end": 837.4399999999999, "text": " t random variable this should be kind of known if you've ever taken statistics" }, { "start": 837.4399999999999, "end": 843.8399999999999, "text": " or like use the t-test for anything okay and you know this is already quite" }, { "start": 843.8399999999999, "end": 849.04, "text": " familiar and I can extend this now to the multi-dimensional case so if t is a" }, { "start": 849.04, "end": 854.52, "text": " multi-dimensional student is random variable composed of independent Z's and" }, { "start": 854.52, "end": 862.36, "text": " use then we can construct t as a vector and that is going to be distributed" }, { "start": 862.36, "end": 869.0799999999999, "text": " according to a product of student t's variable and this should connect to what" }, { "start": 869.0799999999999, "end": 874.6, "text": " we've seen before right we said that this models organization of the latent" }, { "start": 874.6, "end": 880.0799999999999, "text": " space is pretty much of this form that we saw right here we have the z variable" }, { "start": 880.08, "end": 885.6, "text": " divided by the square root of the sum of the squared u variables and now we learn" }, { "start": 885.6, "end": 896.32, "text": " how we can construct the product of student t's latent space given z and u" }, { "start": 896.32, "end": 905.2, "text": " independent Gaussians and that is you know now it should connect for you in" }, { "start": 905.2, "end": 910.24, "text": " deep learning variational frameworks we can work pretty much only with Gaussian" }, { "start": 910.24, "end": 917.0400000000001, "text": " random variables in this model we want to work with product of student t random" }, { "start": 917.0400000000001, "end": 923.5600000000001, "text": " variables and here is the way how we can construct the product of student t" }, { "start": 923.5600000000001, "end": 930.84, "text": " random errors from Gaussian random variables so that's why here we the" }, { "start": 930.84, "end": 936.72, "text": " neural networks will output the Z and the u that's what they will output" }, { "start": 936.72, "end": 943.48, "text": " that's those are those are Gaussians or supposed to be Gaussians and then we" }, { "start": 943.48, "end": 949.96, "text": " transform them by dividing them 
and summing them up in this way to the latent" }, { "start": 949.96, "end": 956.84, "text": " variable that the decoder receives which is this Z hat or t I guess to this is" }, { "start": 956.84, "end": 962.08, "text": " what the decoder receives so we know that if the encoder as output Gaussian" }, { "start": 962.08, "end": 968.0400000000001, "text": " random variables the decoder will receive a product of student t random" }, { "start": 968.0400000000001, "end": 972.88, "text": " variable now why is the product of student t random variable special in" }, { "start": 972.88, "end": 980.6800000000001, "text": " any way because it enables us to what they call here introduce topography in" }, { "start": 980.6800000000001, "end": 986.44, "text": " essence and they formulate this a little bit what it does is it it lets it" }, { "start": 986.44, "end": 993.44, "text": " if if some of the use in this some and some of the you in this some are the" }, { "start": 993.44, "end": 999.08, "text": " same which you can see by the indices in this case they are not but if some are" }, { "start": 999.08, "end": 1005.4000000000001, "text": " shared that means that the two were the two t variables not the two Z the two t" }, { "start": 1005.4000000000001, "end": 1015.12, "text": " so this is one t and this is another t right this is t1 this is t2 lots of t" }, { "start": 1015.12, "end": 1020.76, "text": " these two variables will no longer be independent they will actually be" }, { "start": 1020.76, "end": 1027.8, "text": " dependent on each other so this is a way how we can construct latent spaces" }, { "start": 1027.8, "end": 1033.64, "text": " where some of the variables are actually correlated or in some other way have" }, { "start": 1033.64, "end": 1039.76, "text": " have higher order correlations with each other meaning that the value of one is" }, { "start": 1039.76, "end": 1046.04, "text": " not independent from the value of the other one and that is pretty much a" }, { "start": 1046.04, "end": 1053.32, "text": " basis for what we want for constructing these topographic latent spaces so here" }, { "start": 1053.32, "end": 1057.16, "text": " they say introducing topography essentially what we're going to do is" }, { "start": 1057.16, "end": 1065.08, "text": " we're not we're going to define neighborhoods across our u variables and" }, { "start": 1065.08, "end": 1069.84, "text": " we're going to share the u variables according to these neighborhoods and" }, { "start": 1069.84, "end": 1074.28, "text": " that's going to make the in the components of t dependent on each other" }, { "start": 1074.28, "end": 1078.8, "text": " and this sounds complicated but essentially you can imagine instead of" }, { "start": 1078.8, "end": 1083.24, "text": " having like four latent random variable which are all Gaussians now we have" }, { "start": 1083.24, "end": 1092.32, "text": " simply one set of z variables and one set of u variables and we're going to" }, { "start": 1092.32, "end": 1096.6399999999999, "text": " consider an entire sequence and not just one one image right so we were going to" }, { "start": 1096.6399999999999, "end": 1101.84, "text": " consider an entire sequence of images like this right here every image" }, { "start": 1101.84, "end": 1107.8, "text": " produces one z and one u variable and then when we consider an image let's say" }, { "start": 1107.8, "end": 1113.8799999999999, "text": " this is the focus right now we consider its z and we consider a neighborhood of" }, { "start": 1113.8799999999999, 
"end": 1119.12, "text": " use and that's just going to amount sort of like a convolution like this is maybe" }, { "start": 1119.12, "end": 1123.9199999999998, "text": " a neighborhood of three so we're going to consider this u this u and this u so" }, { "start": 1123.9199999999998, "end": 1129.6799999999998, "text": " we're going to construct the z on top of the fraction divided by this thing" }, { "start": 1129.6799999999998, "end": 1137.1999999999998, "text": " squared this bubble here squared this bubble here squared square root of top" }, { "start": 1137.1999999999998, "end": 1145.4399999999998, "text": " on top of that and that's going to be our t so the t for this image right here" }, { "start": 1145.44, "end": 1152.24, "text": " that's going to be this whole fraction so when we train the VAE we input the" }, { "start": 1152.24, "end": 1157.72, "text": " whole sequence we focus on for example this picture we construct its t by" }, { "start": 1157.72, "end": 1163.2, "text": " looking at its z and its neighborhood of use then we put that t into the decoder" }, { "start": 1163.2, "end": 1168, "text": " the decoder is going to produce an image and then we can apply a loss function" }, { "start": 1168, "end": 1175.56, "text": " between those two okay so that is the loss that's the loss function right the" }, { "start": 1175.56, "end": 1183, "text": " loss function note that the loss function doesn't say you need if you" }, { "start": 1183, "end": 1188.04, "text": " roll ten times then it needs to be the picture that's ten times ahead that is" }, { "start": 1188.04, "end": 1193.08, "text": " not the case at all we actually don't have the role function in here but even" }, { "start": 1193.08, "end": 1199.36, "text": " now even once we introduce the role function in the in the latent space" }, { "start": 1199.36, "end": 1205.72, "text": " we're not going to explicitly train the model to predict the future we're simply" }, { "start": 1205.72, "end": 1213.52, "text": " going to construct as we did here the latent space such that it such that this" }, { "start": 1213.52, "end": 1218.6799999999998, "text": " naturally happens so how are we going to do this almost the same and here you" }, { "start": 1218.68, "end": 1224.3600000000001, "text": " have they talk about capsules so you can see that they divide this neighborhood" }, { "start": 1224.3600000000001, "end": 1229.2, "text": " structure so the W defines the neighborhood structure you can see here" }, { "start": 1229.2, "end": 1234.0800000000002, "text": " some of the use they are connected and then other ones are connected but these" }, { "start": 1234.0800000000002, "end": 1240.0800000000002, "text": " use are not connected with those they kind of talk about capsules essentially" }, { "start": 1240.0800000000002, "end": 1243.5600000000002, "text": " it's just that they make some of the variables dependent on each other and" }, { "start": 1243.56, "end": 1250.04, "text": " some not or or when they do these neighborhood things they just have two" }, { "start": 1250.04, "end": 1257.24, "text": " sets of variables like to have two sets of Z's and use and they only yeah they" }, { "start": 1257.24, "end": 1261.28, "text": " construct two T variables and that that's what they call capsules that I" }, { "start": 1261.28, "end": 1265.32, "text": " don't I don't know why the capsule terminology enters this paper" }, { "start": 1265.32, "end": 1274.6399999999999, "text": " necessarily but you know they they want to draw a connection here so 
temporal" }, { "start": 1274.6399999999999, "end": 1279.4399999999998, "text": " coherence now we get to how do we organize this latent space such that the" }, { "start": 1279.4399999999998, "end": 1284.6799999999998, "text": " role operation now also gets in and this is pretty simple it's actually just an" }, { "start": 1284.6799999999998, "end": 1291.48, "text": " extension of this right here so here if you consider these images here as images" }, { "start": 1291.48, "end": 1296.88, "text": " of a sequence we always said well you need to be connected to sort of your" }, { "start": 1296.88, "end": 1305.08, "text": " your neighboring variables and now sorry your neighboring you variables as they" }, { "start": 1305.08, "end": 1312.56, "text": " are right and now we're going to say the same thing but but I'm going to draw the" }, { "start": 1312.56, "end": 1319, "text": " critical path here again so this we have a Z variable right here we have you" }, { "start": 1319, "end": 1327.08, "text": " variables from the neighborhood okay and we're going to take the Z variable on" }, { "start": 1327.08, "end": 1333.04, "text": " top of the fraction and we're going to take the U variables below the fraction" }, { "start": 1333.04, "end": 1343.86, "text": " right here like so like so like so now before we do this before we take the U" }, { "start": 1343.86, "end": 1347.4, "text": " variables here below the fraction we're going to roll the U variables" }, { "start": 1347.4, "end": 1353.92, "text": " according to their distance from according to their distance from the" }, { "start": 1353.92, "end": 1358.5600000000002, "text": " focus so in this case this would be simply one rollback this would be" }, { "start": 1358.5600000000002, "end": 1366.0400000000002, "text": " simply one roll forward so in the language of this paper what this means" }, { "start": 1366.0400000000002, "end": 1375.24, "text": " is that we don't want we we don't want this image or it given a particular" }, { "start": 1375.24, "end": 1381.52, "text": " position in this image right this position right here if we simply apply" }, { "start": 1381.52, "end": 1387.96, "text": " the classic neighborhood structure we say we want this position in this image" }, { "start": 1387.96, "end": 1397.08, "text": " to be correlated with the same position a step back and a step forward now if we" }, { "start": 1397.08, "end": 1403.32, "text": " construct the role like this what we're saying is no no no no I don't want I" }, { "start": 1403.32, "end": 1408.9199999999998, "text": " want I want this position to be correlated with maybe this position here" }, { "start": 1408.9199999999998, "end": 1413.9199999999998, "text": " and this position there like slightly behind and slightly ahead but I'm" }, { "start": 1413.9199999999998, "end": 1420.96, "text": " obviously not going to tell the model what I expect I simply say please this" }, { "start": 1420.96, "end": 1426, "text": " image is one time stack well black this image is one time step back from me" }, { "start": 1426, "end": 1432.9199999999998, "text": " please roll the latent space by one and that's going to be your relevant" }, { "start": 1432.92, "end": 1438.8400000000001, "text": " variable and in this case it's please roll the latent space of this thing one" }, { "start": 1438.8400000000001, "end": 1445.4, "text": " forward and that's going to be your relevant latent variable so it's not" }, { "start": 1445.4, "end": 1452.16, "text": " that we train we we train rolling this t variable here 
because the t is what" }, { "start": 1452.16, "end": 1459.78, "text": " finally comes out we're not training this t to roll forward or back and then" }, { "start": 1459.78, "end": 1465.96, "text": " predict ten steps ahead we're simply saying how you are influenced you as a" }, { "start": 1465.96, "end": 1472.16, "text": " focus how you are influenced by pictures before and after you you're not simply" }, { "start": 1472.16, "end": 1476.32, "text": " taking into account their latent variables you want to take into account" }, { "start": 1476.32, "end": 1482.68, "text": " rolled versions of their latent variables in order for you to reconstruct" }, { "start": 1482.68, "end": 1488.3999999999999, "text": " yourself in the training objective and it turns out at least that's how I" }, { "start": 1488.4, "end": 1494.4, "text": " understand it right and it turns out so here you can see the whole process we're" }, { "start": 1494.4, "end": 1500.0400000000002, "text": " going to take images we're going to produce mean and variance of late of" }, { "start": 1500.0400000000002, "end": 1506.5600000000002, "text": " Gaussian variables for the Z and the u variables so if you had just a VAE it" }, { "start": 1506.5600000000002, "end": 1510.24, "text": " would just be this right here and those will be a layer you're latent variables" }, { "start": 1510.24, "end": 1516.96, "text": " but not here we produce two sets Z's and use then we're going to construct the t" }, { "start": 1516.96, "end": 1520.8, "text": " variables I don't know why this is on the bottom here but then we're going to" }, { "start": 1520.8, "end": 1526.16, "text": " construct the t variables according to this formula W here is the neighborhood" }, { "start": 1526.16, "end": 1531.08, "text": " structure you define it you and Z are the variables you produced from your" }, { "start": 1531.08, "end": 1536.3600000000001, "text": " encoder or you sampled from what your encoder produced and mu here is also a" }, { "start": 1536.3600000000001, "end": 1541.44, "text": " learnable parameter a learnable mean parameter and then we want to stick" }, { "start": 1541.44, "end": 1546.6000000000001, "text": " this these T's into you're going to stick these T's into this neural network" }, { "start": 1546.6, "end": 1553.32, "text": " now here it says Z and ZL and UL but essentially this here this here these" }, { "start": 1553.32, "end": 1560.3999999999999, "text": " create T oh here it's here you're going to stick the T into your decoder neural" }, { "start": 1560.3999999999999, "end": 1566.6399999999999, "text": " network remember the G how do we get the picture from the latent variable that's" }, { "start": 1566.6399999999999, "end": 1572.04, "text": " the decoder and stick that into the decoder and out you get an image and you" }, { "start": 1572.04, "end": 1578.96, "text": " train it with the classic elbow the evidence lower bound which says okay what" }, { "start": 1578.96, "end": 1585.6399999999999, "text": " I want is I want to reconstruct the picture accurately right that's this" }, { "start": 1585.6399999999999, "end": 1590.72, "text": " term right here to reconstruct the picture accurately but I also want that" }, { "start": 1590.72, "end": 1599, "text": " my Z well essentially what I want is that my T variables are distributed" }, { "start": 1599, "end": 1605.44, "text": " according to this TPOT distribution I want to enforce that but I can't right I" }, { "start": 1605.44, "end": 1609.36, "text": " can work with Gaussians so what but what I 
can do is I can say well the Z" }, { "start": 1609.36, "end": 1614.36, "text": " variables and the U variables they must be as Gaussian as possible so I penalize" }, { "start": 1614.36, "end": 1621, "text": " the KL divergence between what I produce which is this right here and the Gaussian" }, { "start": 1621, "end": 1627.72, "text": " like a a pure Gaussian this has a closed form I can I can calculate KL" }, { "start": 1627.72, "end": 1634.32, "text": " divergences from what I produce with Gaussians no problem okay and that's the" }, { "start": 1634.32, "end": 1641.6000000000001, "text": " training loss and I simply average that over the input sequence and there there" }, { "start": 1641.6000000000001, "end": 1646.72, "text": " you go now the evaluation of these things I have to say after reading" }, { "start": 1646.72, "end": 1652.3600000000001, "text": " through the experiments in the evaluations this is this is a paper kind" }, { "start": 1652.3600000000001, "end": 1657, "text": " of an idea at least I feel so right correct me if I'm wrong but I feel that" }, { "start": 1657, "end": 1662.56, "text": " this is sort of an idea paper it's like here's an idea it works if we you know" }, { "start": 1662.56, "end": 1667.36, "text": " specifically construct a data set for it and if we specifically also the" }, { "start": 1667.36, "end": 1672.4, "text": " experiments are appear to be kind of fiddly like you have to really you know" }, { "start": 1672.4, "end": 1678.8, "text": " get your parameters right to make this work but if you do then you know the" }, { "start": 1678.8, "end": 1685.48, "text": " model behaves as you as you expect and so they measure things like is the" }, { "start": 1685.48, "end": 1691.16, "text": " rolled version of the latent variables really equal to the latent variables a" }, { "start": 1691.16, "end": 1697.28, "text": " couple of time steps ahead and things like this and they produce these these" }, { "start": 1697.28, "end": 1702.8, "text": " maps so here is one where the latent space isn't a 1d torus like we looked at" }, { "start": 1702.8, "end": 1708.88, "text": " so 1d torus is this right so you go around around around sorry and this is a" }, { "start": 1708.88, "end": 1713.52, "text": " 2d torus so a 2d torus is like a plane and if you leave here you come back" }, { "start": 1713.52, "end": 1719.04, "text": " here and if you leave here you come back here so if you if you roll this up and" }, { "start": 1719.04, "end": 1724.24, "text": " then you you have a pipe and if you close the pipe you have like a donut so" }, { "start": 1724.24, "end": 1730.92, "text": " that's a torus so if they have a topographic space like a torus they and" }, { "start": 1730.92, "end": 1736.92, "text": " they simply apply that to MNIST the test set sort of looks like this I don't know" }, { "start": 1736.92, "end": 1746, "text": " if you want to read something into this like feel free I'm not sure but in when" }, { "start": 1746, "end": 1751.16, "text": " they go with the sequences so here you see like the sequences I think on top is" }, { "start": 1751.16, "end": 1754.76, "text": " what they input and then this is the continuation that the model doesn't see" }, { "start": 1754.76, "end": 1760.3200000000002, "text": " on the bottom is what the model produces you can see the model does get to a" }, { "start": 1760.32, "end": 1767.36, "text": " point where it understands how these sequences go here all right it goes large" }, { "start": 1767.36, "end": 1772.2, "text": " large and then 
it kind of flips around to the smallest this is a expected" }, { "start": 1772.2, "end": 1778.48, "text": " behavior here as well the rotation it model continues the rotation and it" }, { "start": 1778.48, "end": 1783.48, "text": " turns out even if the model is just trained with they have these these" }, { "start": 1783.48, "end": 1788.32, "text": " experiments even if the model is just trained with single transformations so" }, { "start": 1788.32, "end": 1795.6, "text": " either a role sorry either a rotation or a scale transformation or a color change" }, { "start": 1795.6, "end": 1802.96, "text": " it can generalize to multiple transformations at once as you can see" }, { "start": 1802.96, "end": 1811.2, "text": " right here colors and rotations can the model can generalize to that fairly" }, { "start": 1811.2, "end": 1816.4399999999998, "text": " fairly well okay I don't want to get too much into the experiments because I'm" }, { "start": 1816.44, "end": 1822.52, "text": " not sure how important that the numbers here are I'm safe to say if you construct" }, { "start": 1822.52, "end": 1826.88, "text": " this model and if you apply it to the you know problems where exactly this is" }, { "start": 1826.88, "end": 1831.52, "text": " needed and if you get the hyper parameters right then this model" }, { "start": 1831.52, "end": 1836.4, "text": " actually works it's better whereas a regular neural network it could not" }, { "start": 1836.4, "end": 1842.8400000000001, "text": " easily incorporate the concept of these slow changing transitions it would sort" }, { "start": 1842.84, "end": 1846.72, "text": " of have to learn okay what color comes after red orange okay what color comes" }, { "start": 1846.72, "end": 1851.3999999999999, "text": " after orange yellow okay what color comes after yellow green I guess the" }, { "start": 1851.3999999999999, "end": 1855.84, "text": " other model has to learn that as well but this model it cannot represent the" }, { "start": 1855.84, "end": 1863.6799999999998, "text": " transition in a sequence as sort of as it has to learn it as a parameterized" }, { "start": 1863.6799999999998, "end": 1870.4399999999998, "text": " function rather than being able to map it to an internal transformation of the" }, { "start": 1870.44, "end": 1876.6000000000001, "text": " rate of the latent space like the topographic VAE can do okay that was it" }, { "start": 1876.6000000000001, "end": 1881.44, "text": " for me I'm not competent enough to tell you how big of a step this is it feels" }, { "start": 1881.44, "end": 1889.5800000000002, "text": " to me like a little step it might be a giant step I don't know okay it feels to" }, { "start": 1889.5800000000002, "end": 1894.0800000000002, "text": " me like it's kind of an idea paper to show something neat that you could do in" }, { "start": 1894.0800000000002, "end": 1899.64, "text": " an idealized case it might be that this is a much bigger deal than than I think" }, { "start": 1899.64, "end": 1904.44, "text": " I thought it was a cool paper I thought it was a neat idea it's written even" }, { "start": 1904.44, "end": 1912.64, "text": " though it's I think under you know more high love sorry more more so I'm not as" }, { "start": 1912.64, "end": 1918.2, "text": " competent at it but I could still make sense of it so if you enjoy this give" }, { "start": 1918.2, "end": 1922.6000000000001, "text": " it a read yeah let me know if you have any comments and that was it bye bye" }, { "start": 1922.6, "end": 1931.9199999999998, "text": 
" thanks" } ]
ZVVnvZdUMUk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
[ "Science & Technology" ]
[ "deep learning", "machine learning", "neural networks", "pruning", "distillation", "quantization", "size", "weights", "optimization", "training", "generalization", "overparameterization", "winning ticket", "winning lottery ticket", "arxiv" ]
Stunning evidence for the hypothesis that neural networks work so well because their random initialization almost certainly contains a nearly optimal sub-network that is responsible for most of the final performance. https://arxiv.org/abs/1803.03635 Abstract: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy. Authors: Jonathan Frankle, Michael Carbin Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. Today we're looking at The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks by Jonathan Frankle and Michael Carbin. So this paper is sort of an empirical paper into what makes neural networks train successfully, and it comes out of the literature on pruning. They say neural network pruning techniques have been around for a while. They can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. So what does this mean? If you have a neural network, let's say you just have three nodes in each layer and you have two layers here, so you have a neural network here. If you have a fully connected neural network, every node is going to be connected with every node in the next layer, right? And these connections are your weights, your thetas. And you're going to train them, which means you have a number of steps in this direction, and let's say you have test set accuracy right here. So here is steps. You're going to train them, and if you train them, your accuracy will reach a certain point. I'm just going to draw the end point here. Let's say you reach 90% test accuracy, so your network generalizes pretty well. That's pretty good. So people have been wondering: these networks require quite a lot of storage. You know, this is nine connections right here, three times three, and this is also nine connections. Can we make it smaller but still retain the accuracy? And this is where pruning comes in. With pruning, people go in after you've trained the network. So the first step is: train the full network. And the second step is: prune. Now when you prune, you select among the weights that you have trained; you select the best ones in some form or another. In this case, people just select the ones with the largest magnitudes, but there are multiple techniques to do this, and this is very related to things like quantization or distillation. So with pruning, you just leave out some of the weights, or most of the weights, and you hope that you still retain a pretty good accuracy. Sorry, actually, we don't need this steps thing. So you leave out weights and you retain a good accuracy. Pruning methods have been deployed successfully to make networks use less space or be faster to evaluate, because, of course, with fewer numbers, you need to do fewer calculations.
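As a small aside, here is a minimal sketch of that magnitude-based selection: keep only the largest-magnitude entries of a trained weight matrix and zero out the rest. The function name and the keep fraction are illustrative assumptions of mine, not from the paper.

import torch

def magnitude_prune_mask(weights, keep_fraction=0.2):
    # Return a 0/1 mask that keeps the keep_fraction largest-magnitude weights.
    flat = weights.abs().flatten()
    k = max(1, int(keep_fraction * flat.numel()))
    threshold = torch.topk(flat, k).values.min()  # the k-th largest magnitude
    return (weights.abs() >= threshold).float()

w = torch.randn(3, 3)  # stand-in for a trained 3x3 layer, as in the toy drawing
mask = magnitude_prune_mask(w)
print(w * mask)  # the pruned layer: small-magnitude weights zeroed out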
And the good initialization, you only know after you've trained the large network and actually identified which of these connections make sense. You can't just take a smaller network from the beginning. You have to train the larger one. Then you know which weights and which initializations make sense. So this is the winning lottery ticket hypothesis. Basically, it states and we can read it out in full. The lottery ticket hypothesis is a randomly initialized dense neural network contains a sub network that is initialized such that when trained in isolation, it can match the test accuracy of the original network after trading for at most the same number of iterations. Right. Now, the important part here is that it contains a sub network that is initialized such that when trained in isolation. So two things are important. It is important. The structure of the network of the sub network, but it is also important. What are the initialization of the connections? So the paper kind of hints at why neural networks work at all. And the reason why neural networks work is because we've often thought of neural networks have so many parameters, how can they even generalize? The reason is the following. If we have a neural network, we throw so many parameters at it. Some of the parameters, one subset of the parameters, namely the red ones here, are going to be initialized in such a way, in such a beneficial way that training will perform, will make the network perform well. So it's initialization plus SGD on that sub network. So it is actually only a very small sub network that is responsible for the performance of the neural network. But that sub network needs to be initialized at the correct position. And by over parameterizing these neural networks so much, we actually give it combinatorically many sub networks to choose from where the initialization could be well. So because of this combinatorics, it means that if we over parameterize by some margin, then there's almost guaranteed to be a good sub network in there that can then perform well. So I hope this makes sense. It is basically not a way, it is not a magic thing where we now can train the smaller networks. It is an explanation of why the over parameterization in neural networks makes sense. Because by over parameterizing, we allow the neural networks to exploit the combinatorics to find a good, well initialized sub network that will perform well. And the evidence for this is exactly the fact that if we transfer over the sub network, it by itself will reach the same performance or actually exceed the performance. But only if we initialize it at the same point as it was initialized in the original network. So here is how these sub networks are identified. We've already hinted at that, but here is how the paper does it. So it says identifying winning tickets. First randomly initialize a neural network. This is the full neural network. Then train the network for j iterations arriving at some parameters. These are the trained parameters. Prune p% of the parameters. So of these parameters, prune some. And this is in order to know which ones you prune, you need to have first trained the full neural network. So this is the catch here. You need to train the full neural network to know which ones you must prune. And thereby you create a mask m. And then they say reset the remaining parameters to their value in theta 0. Actually you don't need to say remaining. You can just say reset the parameters to their values in theta 0. Now this is also important. 
Note that if you just want to end up with a trained network, then the word "remaining" up there is important. But if you then want to retrain, you can set everything back and only train the masked version of the network. And they say this will identify these winning tickets. It actually works better if you don't do this in what they call one shot, but do iterative pruning instead: repeatedly train, prune and reset the network over n rounds, where each round prunes p^(1/n) percent of the weights that survived the previous round. Now why might that be? Here is a somewhat speculative hypothesis that I myself put forth. If you prune some of the weights, say this one and this one, you put the responsibility of those weights onto other weights, maybe this one and this one. As we said, they prune by looking at which weights are large. So say these are the magnitudes of the weights of a layer, and you only want to keep two of them around. You would prune this one and this one because they are pretty small, and you would also prune that one. If you do this in one shot and then retrain, maybe these weights would end up somewhat different. But if you do it in multiple rounds, you first prune only the smallest one, this one here. Then you retrain, and your weights actually change: all the responsibility that this weight carried before is now transferred onto this one. So your new weights look like this, and you prune another one. Again, all of its responsibility would, in my hypothetical example, fall onto this one. And when you now prune a third one, you would actually prune this one instead, because you realize that this weight here, in the absence of the two others, is actually important. So that, I think, is why the iterative pruning method might work a bit better than the one shot pruning method. So they do a lot of empirical investigation, and I just want to highlight a few results so that you get the gist; the paper goes into a lot of detail on a lot of different architectures that you can check out yourself. Here we have a plot over the percent of weights remaining: as you go to the right, they drop more and more weights, and note that this is a log plot. The dashed lines are random pruning, which means you just drop out a certain number of weights and then retrain. You can see that the dashed line starts dropping and just becomes worse as fewer and fewer weights remain, which is exactly what's expected: you prune the network, you make it smaller, you make it less performant, and the more weights you take away, the worse it gets. But interestingly enough, if you do the pruning they suggest and then retrain with the correct initialization, not only do you retain the same level of accuracy for very long, down to 2.9 or even 1.2 percent of weights remaining, you actually go higher.
You can see here that with 16 percent of weights remaining, there's actually a significant difference between the full network and the pruned network, and that's simply by training this winning ticket. This I find very fascinating. Again, this is not a magic bullet that you can apply from the beginning, but it does give a clue that if you could train these from the start, you might actually end up at a better point; so it does hint at a practical application. Also, you see they train faster. The blue line here is the full network over the course of training; here is training iterations and this is test accuracy. The full network does something like this. Now, if you prune down to 20 percent of the weights, you actually train faster and go higher. Even with 7 percent of the weights, you go almost as high. Only when you go to something like 1.9 percent of the weights does your performance degrade again and eventually go lower than the original network. That is pretty cool, I think. Now, as I said, they do a lot of investigation, and I think one of the main takeaways is that it is not only the structure of the sub network that makes it a winning ticket; it is actually the initialization. Here I want to show one of these plots; they have lots of them, and sorry, this one is from my own annotations. Again, this is percent of weights remaining, and this is test accuracy at the final iteration. If we initialize the sub network at its original position, like this method suggests, we first increase the accuracy and only decrease it after a long time. If we take the same sub network but randomly reinitialize it, then it drops much faster, in fact immediately. So it really is about not only the structure of the sub network, but about its initialization. I think that is the core of the hypothesis here.
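To make that control experiment concrete, here is a hypothetical sketch of the two conditions being compared: the same mask, once combined with the original theta_0 and once with a fresh random draw. The names `theta0` and `masks` carry over from the sketch above, and `init_like` is a placeholder for whatever initializer the architecture normally uses.

```python
import torch

def ticket_vs_control(theta0, masks, init_like):
    # Winning ticket: masked sub network starting from the original init theta_0.
    winning = {n: theta0[n] * masks[n].to(theta0[n].dtype) for n in masks}
    # Control: the exact same structure, but a fresh random re-initialization.
    control = {n: init_like(theta0[n]) * masks[n].to(theta0[n].dtype) for n in masks}
    return winning, control

# Example placeholder initializer (hypothetical):
# init_like = lambda t: torch.randn_like(t) * 0.01
```

Training both with the same schedule reproduces the plot discussed above: the winning ticket tracks or beats the full network, while the re-initialized control degrades immediately.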
A very interesting related finding that I want to mention: compare how far the weights travel in optimization space, that is, the distance between a weight's initial and final value. Take the full neural network and look at a parameter that ends up in the winning ticket; it goes from theta zero to some theta final. Also look at a parameter that does not end up in the winning ticket, going from its initial value to theta final prime (I'm not too good at labeling). If you measure how far each travels, you'll find that the weights that end up in the winning ticket travel much further in optimization space during training than weights that do not; those mostly stay where they are. So it's not that the good network is already contained in the initialization. It's rather that the good network lends itself very favorably to being optimized by SGD: it travels farther, which means SGD has a bigger pull on it. I think there is a lot yet to be explored in this space, and this paper is a very cool contribution to our understanding of how neural networks work. All right, I invite you to check out all the experiments. They do a very thorough job. And with that, I say bye bye.
[ { "start": 0, "end": 11, "text": " Hi there. Today we're looking at the lottery ticket hypothesis finding sparse trainable neural networks by Jonathan Frankel and Michael Carbon." }, { "start": 11, "end": 21, "text": " So this paper is sort of an empirical paper into what makes neural networks train successfully." }, { "start": 21, "end": 29, "text": " And it comes out of the literature of pruning. So they say neural network pruning techniques, right, they have been around for a while." }, { "start": 29, "end": 44, "text": " They can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance or inference without compromising accuracy." }, { "start": 44, "end": 57, "text": " So what does this mean? If you have a neural network, let's say you just have three nodes, each layer, you have two layers here." }, { "start": 57, "end": 69, "text": " You have a neural network here. If you have a fully connected neural network, every node is going to be connected with every node in the next layer, right?" }, { "start": 69, "end": 84, "text": " And these connections are your weights, your thetas. And you're going to train them, which means you have a number of steps in this direction." }, { "start": 84, "end": 94, "text": " And let's say you have a test set accuracy right here. So here is steps. You're going to train them." }, { "start": 94, "end": 103, "text": " And if you train them, your accuracy will reach a certain point, right? I'm just going to draw the end point here." }, { "start": 103, "end": 109, "text": " Let's say you reach a 90% test accuracy. So your network generalizes pretty well. That's pretty good." }, { "start": 109, "end": 118, "text": " So people have been wondering, these networks, they require quite a lot of storage. You know, this is nine connections right here." }, { "start": 118, "end": 126, "text": " So three times three. And this is also nine connections. Can we make it smaller but still retain the accuracy?" }, { "start": 126, "end": 131, "text": " And this is where pruning comes in. So with pruning, people would go and after you train them." }, { "start": 131, "end": 140, "text": " So the first step is train the full network, right? And then the second step is prune." }, { "start": 140, "end": 155, "text": " Now when you prune, you basically select among the weights that you have that you have trained, you select the best ones in some form or another." }, { "start": 155, "end": 162, "text": " In this case, people just select the ones with the largest magnitudes. But there are multiple techniques to do this." }, { "start": 162, "end": 172, "text": " And this is very related to things like quantization or distillation. So with pruning, you just leave away some of the weights or most of the weights." }, { "start": 172, "end": 184, "text": " And you hope that you still retain a pretty good accuracy right here, right? Sorry, actually, we don't need these steps thing." }, { "start": 184, "end": 195, "text": " So you leave away weights and you retain a good accuracy. So pruning methods have been deployed successfully to make networks use less space or be faster to evaluate." }, { "start": 195, "end": 200, "text": " Because, of course, with less numbers, you need to do less calculations." 
}, { "start": 200, "end": 228, "text": " So this paper builds on top of this and it basically says, all right, if we do the following, if we now take this network that we identified after training and we just take this network and we train it from the beginning, only this sub network." }, { "start": 228, "end": 234, "text": " Right. So three is retrain." }, { "start": 234, "end": 247, "text": " Then it will also perform pretty well or even better under one condition. Right. So if you only train this thing, it will perform well under one condition." }, { "start": 247, "end": 262, "text": " And the condition is that you transfer over the initial weights. So right. The question is, can we train just the small network from the beginning so that we don't have to train the big network?" }, { "start": 262, "end": 278, "text": " Right. And the paper identifies that this works if if your initial weights, theta zero of the small network are equal to the initial weights of the large network." }, { "start": 278, "end": 285, "text": " Right. Just so just the ones where you ported them over. But basically, the short answer is no." }, { "start": 285, "end": 297, "text": " And the reason is, if you only want to train the small network, you need to know the good initialization of these of these weights all here." }, { "start": 297, "end": 307, "text": " And the good initialization, you only know after you've trained the large network and actually identified which of these connections make sense." }, { "start": 307, "end": 316, "text": " You can't just take a smaller network from the beginning. You have to train the larger one. Then you know which weights and which initializations make sense." }, { "start": 316, "end": 324, "text": " So this is the winning lottery ticket hypothesis. Basically, it states and we can read it out in full." }, { "start": 324, "end": 337, "text": " The lottery ticket hypothesis is a randomly initialized dense neural network contains a sub network that is initialized such that when trained in isolation," }, { "start": 337, "end": 345, "text": " it can match the test accuracy of the original network after trading for at most the same number of iterations." }, { "start": 345, "end": 356, "text": " Right. Now, the important part here is that it contains a sub network that is initialized such that when trained in isolation." }, { "start": 356, "end": 369, "text": " So two things are important. It is important. The structure of the network of the sub network, but it is also important." }, { "start": 369, "end": 378, "text": " What are the initialization of the connections? So the paper kind of hints at why neural networks work at all." }, { "start": 378, "end": 387, "text": " And the reason why neural networks work is because we've often thought of neural networks have so many parameters, how can they even generalize?" }, { "start": 387, "end": 394, "text": " The reason is the following. If we have a neural network, we throw so many parameters at it." }, { "start": 394, "end": 403, "text": " Some of the parameters, one subset of the parameters, namely the red ones here, are going to be initialized in such a way," }, { "start": 403, "end": 412, "text": " in such a beneficial way that training will perform, will make the network perform well." }, { "start": 412, "end": 421, "text": " So it's initialization plus SGD on that sub network." }, { "start": 421, "end": 428, "text": " So it is actually only a very small sub network that is responsible for the performance of the neural network." 
}, { "start": 428, "end": 435, "text": " But that sub network needs to be initialized at the correct position." }, { "start": 435, "end": 448, "text": " And by over parameterizing these neural networks so much, we actually give it combinatorically many sub networks to choose from where the initialization could be well." }, { "start": 448, "end": 455, "text": " So because of this combinatorics, it means that if we over parameterize by some margin," }, { "start": 455, "end": 462, "text": " then there's almost guaranteed to be a good sub network in there that can then perform well." }, { "start": 462, "end": 472, "text": " So I hope this makes sense. It is basically not a way, it is not a magic thing where we now can train the smaller networks." }, { "start": 472, "end": 479, "text": " It is an explanation of why the over parameterization in neural networks makes sense." }, { "start": 479, "end": 493, "text": " Because by over parameterizing, we allow the neural networks to exploit the combinatorics to find a good, well initialized sub network that will perform well." }, { "start": 493, "end": 506, "text": " And the evidence for this is exactly the fact that if we transfer over the sub network, it by itself will reach the same performance or actually exceed the performance." }, { "start": 506, "end": 515, "text": " But only if we initialize it at the same point as it was initialized in the original network." }, { "start": 515, "end": 520, "text": " So here is how these sub networks are identified." }, { "start": 520, "end": 524, "text": " We've already hinted at that, but here is how the paper does it." }, { "start": 524, "end": 529, "text": " So it says identifying winning tickets. First randomly initialize a neural network." }, { "start": 529, "end": 531, "text": " This is the full neural network." }, { "start": 531, "end": 537, "text": " Then train the network for j iterations arriving at some parameters." }, { "start": 537, "end": 540, "text": " These are the trained parameters." }, { "start": 540, "end": 545, "text": " Prune p% of the parameters." }, { "start": 545, "end": 548, "text": " So of these parameters, prune some." }, { "start": 548, "end": 558, "text": " And this is in order to know which ones you prune, you need to have first trained the full neural network." }, { "start": 558, "end": 564, "text": " So this is the catch here. You need to train the full neural network to know which ones you must prune." }, { "start": 564, "end": 568, "text": " And thereby you create a mask m." }, { "start": 568, "end": 574, "text": " And then they say reset the remaining parameters to their value in theta 0." }, { "start": 574, "end": 580, "text": " Actually you don't need to say remaining. You can just say reset the parameters to their values in theta 0." }, { "start": 580, "end": 587, "text": " Now this is also important. This is the same theta 0 as it was at the beginning of the training." }, { "start": 587, "end": 592, "text": " So you need to actually set them back to those exact values." }, { "start": 592, "end": 596, "text": " And thereby you create the winning ticket." }, { "start": 596, "end": 606, "text": " If you just want to end up with a trained network, then this remaining thing here is important." }, { "start": 606, "end": 616, "text": " But if you then want to retrain, you can set everything back and only train the masked version of the network." }, { "start": 616, "end": 620, "text": " And they say this will identify these winning tickets." 
}, { "start": 620, "end": 626, "text": " And it will actually work better if you don't do this in what they call one shot." }, { "start": 626, "end": 634, "text": " But if you do this iterative pruning, that means it repeatedly trains, prunes and resets the network over n rounds." }, { "start": 634, "end": 641, "text": " Each round prunes p to the 1 over n percent of the weights that survived the previous round." }, { "start": 641, "end": 645, "text": " Now why might that be? It might be." }, { "start": 645, "end": 655, "text": " And this is I think a somewhat valid hypothesis that I myself put forth here." }, { "start": 655, "end": 665, "text": " It might be that if you prune some of the weights, let's say you prune this one and this one," }, { "start": 665, "end": 671, "text": " what you'll do is you'll put the responsibility of these weights onto other weights." }, { "start": 671, "end": 679, "text": " So maybe on this one and this one. So as we said, they prune by looking at which weights are large." }, { "start": 679, "end": 689, "text": " So let's say here we have the weights of the layer and these are the magnitudes of the weights." }, { "start": 689, "end": 699, "text": " So you would prune, let's say you only want to keep two of those around." }, { "start": 699, "end": 703, "text": " So you would prune this one and this one because these are pretty small." }, { "start": 703, "end": 709, "text": " Here's the magnitude. And you would also prune that one." }, { "start": 709, "end": 717, "text": " If you just do this one shot and then you would retrain and maybe these weights would end up somewhat different." }, { "start": 717, "end": 723, "text": " But if you do this in multiple rounds, let's say you first prune one of them." }, { "start": 723, "end": 729, "text": " So you would actually prune the smallest one, this one here." }, { "start": 729, "end": 733, "text": " And then you retrain and then your weights actually change." }, { "start": 733, "end": 741, "text": " And all of the responsibility that this weight carried before is now transferred onto this." }, { "start": 741, "end": 745, "text": " So your new weights look like this." }, { "start": 745, "end": 747, "text": " And you prune another one like this." }, { "start": 747, "end": 753, "text": " And again, all the responsibility of this would, in my hypothetical example, fall on this one." }, { "start": 753, "end": 759, "text": " And now if you prune a third one, you would actually prune this one because you realize this weight here," }, { "start": 759, "end": 763, "text": " in absence of these two other weights, is actually important." }, { "start": 763, "end": 765, "text": " So you would prune this one as well." }, { "start": 765, "end": 775, "text": " So I think that is why this iterative pruning method might work a bit better than the one shot pruning method that they say here." }, { "start": 775, "end": 779, "text": " So they do a lot of empirical investigation." }, { "start": 779, "end": 783, "text": " And I just want to highlight very few of them." }, { "start": 783, "end": 793, "text": " So that you get the gist and then the paper goes into a lot of detail and a lot of different architectures that you can check out yourself." }, { "start": 793, "end": 799, "text": " So here we have a plot that deals with percent of weights remaining." }, { "start": 799, "end": 807, "text": " So as you go to the right here, they drop more and more weights and realize this is a log plot." 
}, { "start": 807, "end": 817, "text": " So if the dashed lines here are random pruning, which means you just drop out a certain number of weights and then you retrain." }, { "start": 817, "end": 831, "text": " And you can see that the dashed line here, it starts dropping and just becomes worse as you have less and less weights remaining," }, { "start": 831, "end": 833, "text": " which is exactly what's expected." }, { "start": 833, "end": 837, "text": " You prune the network, you make it smaller, you make it less performant." }, { "start": 837, "end": 843, "text": " And the more weights you take away, the less performing it is." }, { "start": 843, "end": 854, "text": " But interestingly enough, if you do this pruning that they suggest and then retrain with the correct initialization," }, { "start": 854, "end": 863, "text": " not only do you retain the same level of accuracy for very long, you see here this is 2.9 or 1.2 percent of weights remaining," }, { "start": 863, "end": 867, "text": " but you actually go higher." }, { "start": 867, "end": 879, "text": " So you can see here when you have 16 percent of weights remaining, there's actually a significant difference between the full network and the prune network." }, { "start": 879, "end": 884, "text": " And that's only by simply training this winning hypothesis." }, { "start": 884, "end": 887, "text": " So this I find very, very fascinating." }, { "start": 887, "end": 892, "text": " And again, this is not a magic bullet that you can do from the beginning," }, { "start": 892, "end": 904, "text": " but it does give a clue that if you could train these from the beginning, then you might actually end up at a better point." }, { "start": 904, "end": 906, "text": " So it does actually give a practical application." }, { "start": 906, "end": 908, "text": " Also, you see they train faster." }, { "start": 908, "end": 913, "text": " So the blue line here is the full network over the course of training." }, { "start": 913, "end": 915, "text": " Sorry, this should be blue." }, { "start": 915, "end": 919, "text": " So here is training iterations and this is test accuracy." }, { "start": 919, "end": 922, "text": " So you see the full network does something like this." }, { "start": 922, "end": 929, "text": " Now, if you prune to 20 percent of the weights, actually train faster and you go higher." }, { "start": 929, "end": 934, "text": " And even if you have 7 percent of the weights, you go almost as high." }, { "start": 934, "end": 937, "text": " So this is very interesting." }, { "start": 937, "end": 948, "text": " Only when you go to like 1.9 percent of the weights does your performance degrade again and eventually actually go lower than the original network." }, { "start": 948, "end": 954, "text": " So that is pretty, pretty, pretty cool, I think." }, { "start": 954, "end": 958, "text": " Now, as I said, they do a lot of investigation." }, { "start": 958, "end": 965, "text": " And I think one of the main takeaways is that it is not only the structure of the winning hypothesis." }, { "start": 965, "end": 971, "text": " So it's not only the structure of the sub network that makes it to be a winning hypothesis." }, { "start": 971, "end": 974, "text": " It is actually the initialization." }, { "start": 974, "end": 978, "text": " Here I want to show one of these plots." }, { "start": 978, "end": 980, "text": " They have lots of plots." }, { "start": 980, "end": 987, "text": " You can see here, for example, sorry, this is from my own annotations." 
}, { "start": 987, "end": 994, "text": " Again, this is percent of weights remaining and this is test accuracy at the final iteration." }, { "start": 994, "end": 1001, "text": " And if we initialize the sub network at its original position, like this method suggests, you see," }, { "start": 1001, "end": 1007, "text": " we first increase the accuracy and then decrease it after a long time." }, { "start": 1007, "end": 1018, "text": " If we take the same sub network, right, but we randomly reinitialize it, then it drops much faster and actually immediately drops." }, { "start": 1018, "end": 1025, "text": " So it really is about not only the structure of the sub network, but about its initialization." }, { "start": 1025, "end": 1029, "text": " I think that is that is the core of the hypothesis here." }, { "start": 1029, "end": 1039, "text": " A very interesting related finding that I just want to mention, I find, to be that they actually discover that the weights," }, { "start": 1039, "end": 1048, "text": " so if you have a weight of the, so if you have two kinds of weights, let's actually go up to my original drawing here." }, { "start": 1048, "end": 1056, "text": " If you compare how fast or how far do the weights travel in optimization space, right," }, { "start": 1056, "end": 1062, "text": " so you can basically look at how far weights travel during optimization." }, { "start": 1062, "end": 1074, "text": " So you take the full neural network here and you look at a parameter that ends up being in the winning hypothesis, theta," }, { "start": 1074, "end": 1080, "text": " theta zero, and it goes to theta end, which let's say theta final." }, { "start": 1080, "end": 1086, "text": " And you also look at parameters that don't end up in the winning hypothesis." }, { "start": 1086, "end": 1091, "text": " Let's call these theta one, two, theta, also final, prime." }, { "start": 1091, "end": 1093, "text": " I'm not too good at labeling." }, { "start": 1093, "end": 1101, "text": " And you look at how far they travel, you'll find that the weights that end up in the winning hypothesis," }, { "start": 1101, "end": 1110, "text": " they, during optimization, they travel much further in optimization space than weights that are not in the winning hypothesis, right?" }, { "start": 1110, "end": 1112, "text": " They just stay around much more." }, { "start": 1112, "end": 1117, "text": " So it's not that the kind of good network is already contained in initialization." }, { "start": 1117, "end": 1129, "text": " It's much more than the good network lends itself very favorably to be initialized by SGD, right?" }, { "start": 1129, "end": 1132, "text": " Because it travels farther." }, { "start": 1132, "end": 1137, "text": " It means SGD has a bigger pull on it, right?" }, { "start": 1137, "end": 1142, "text": " I think there is a lot of things that are yet to be explored in this space," }, { "start": 1142, "end": 1148, "text": " and I think this paper is a very cool contribution to our understanding of how neural networks work." }, { "start": 1148, "end": 1150, "text": " All right, I invite you to check out all the experiments." }, { "start": 1150, "end": 1152, "text": " They do a very thorough job." }, { "start": 1152, "end": 1162, "text": " And with that, I say bye bye." } ]
XjILIYVLFrI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML Olds] Meta Research Supercluster | OpenAI GPT-Instruct | Google LaMDA | Drones fight Pigeons
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "gpt3", "gpt-3", "gpt 3", "openai", "open ai", "meta supercluster", "rsc", "meta rsc", "meta research super cluster", "meta research supercluster", "mlnews", "ml news", "kilcher news", "openai gpt instruct", "gpt3 follow instructions", "how does gpt3 work", "google lamda", "google lambda" ]
#mlnews #rsc #gpt3 Some things we've missed in recent weeks! OUTLINE: 0:00 - Intro & Overview 0:40 - Meta builds AI Research Supercluster (RSC) 2:25 - OpenAI trains GPT-3 to follow instructions 4:10 - Meta AI releases multilingual language models 4:50 - Google LaMDA dialogue models 5:50 - Helpful Things 8:25 - Training the alpha matte generator for Pixel 6 10:15 - Drones used to deter pigeons on buildings 11:05 - IBM sells some Watson Health assets for USD 1B Merch: http://store.ykilcher.com References: https://ai.facebook.com/blog/ai-rsc/?utm_source=pocket_mylist https://openai.com/blog/instruction-following/ https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/ https://twitter.com/MetaAI/status/1486745968372551686?utm_source=pocket_mylist https://arxiv.org/pdf/2112.10668.pdf https://github.com/pytorch/fairseq/tree/main/examples/xglm https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html?m=1&utm_source=pocket_mylist https://arxiv.org/pdf/2201.08239.pdf https://evolutiongym.github.io/?utm_source=pocket_mylist https://evolutiongym.github.io/all-tasks https://evolutiongym.github.io/documentation https://arxiv.org/pdf/2201.09863.pdf https://github.com/EvolutionGym https://huggingface.co/blog/sb3 https://twitter.com/Sentdex/status/1489991413005787139 https://github.com/lvwerra/trl?utm_source=pocket_mylist https://ai.googleblog.com/2022/01/accurate-alpha-matting-for-portrait.html https://polyhaven.com/hdris https://ieeexplore.ieee.org/document/9656717 https://www.bloomberg.com/news/articles/2022-01-21/ibm-is-said-to-near-sale-of-watson-health-to-francisco-partners https://archive.ph/xadf9 Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Meta builds a humongous computer, OpenAI teaches their language models to follow instructions, and we battle pigeons with drones. Welcome to ML News. Now I have to say these aren't exactly news. This is stuff that we've somehow missed or skipped from the last two to three weeks, let's say. So consider this more ML olds. But if you're interested, stick around. If you do enjoy new ML News, be sure to subscribe to the channel, leave a like, and as always, tell me what you think in the comments. I'm very happy to take your feedback. First story: Meta AI has released a blog post introducing the AI Research SuperCluster, Meta's cutting edge supercomputer for AI research. Now this is a big computer. Like, look at that. The RSC, the Research SuperCluster, is ginormous. I mean, look at this. Does anyone get the vibes of, so this is where your box would go? In any case, this is a huge thing. It consists of 760 DGX A100 boxes, for a total of 6,080 GPUs, all of them A100s. Did you wonder why you can't get your hands on any GPU anywhere on the planet for the last one and a half years or so? Yeah, they're all right here. Now obviously all of this is connected with super duper InfiniBand. It has 175 petabytes of flash array storage, 46 petabytes of cache storage, and 10 petabytes of flash blade storage. I have no clue what these things mean, but it's a lot. The blog post goes a little bit into the history of how it was built, what it contains, how they make it secure, and how they handled the difficulties of the last two years. This cluster is supposed to support Meta AI production and research workloads; it is already operational but is planned to reach its full scale by mid-2022. Look, here's the box. Here's the box. Where does the box go? Where does your box go? Your box goes there. Really nice. This is where your box would go. Check out the blog post if you want to learn more. OpenAI has released a blog post and paper titled Aligning Language Models to Follow Instructions, where they've fine tuned GPT-3 to follow human instructions. They give an example right here: if you ask GPT-3 something like "explain the moon landing to a six year old in a few sentences", it will just continue the pattern, as GPT-3 does, and say "explain the theory of gravity", "explain the theory of relativity". It treats this as a regular language modeling prompt. If you actually want to make GPT-3 answer the question, you have to give it a few examples of question, answer, question, answer beforehand. OpenAI went and fine tuned their language models to obey instructions more directly. The resulting model is InstructGPT, which in this case would output "people went to the moon, they took pictures of what they saw and sent them back to Earth so we could all see them". Supposedly. Like, yeah, like that ever happened. The main challenge here is the data collection part: fine tuning a big language model requires a fair amount of data. They largely followed earlier work called learning from human preferences. This is a multi step process. First they collect a small labeled data set. After that, they let humans rank answers of the model and train a reward model from that. And in the end, they use reinforcement learning against that learned reward model. Now in their own words, this is nothing new. However, the smaller InstructGPT models are preferred by humans to the larger GPT-3 models, which is interesting. There's a paper to go along with it; give it a read if you're interested. The reward model step is sketched below.
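As a rough, hypothetical illustration of that middle step (not OpenAI's actual code): a reward model trained on human rankings is usually fit on pairwise comparisons with a Bradley-Terry style loss. Here `reward_model` is a placeholder for any network that maps a prompt and a response to a scalar score.

```python
import torch.nn.functional as F

def preference_loss(reward_model, prompts, chosen, rejected):
    # Scalar reward for the human-preferred and the dispreferred answer.
    r_chosen = reward_model(prompts, chosen)
    r_rejected = reward_model(prompts, rejected)
    # Maximize the log-probability that the preferred answer scores higher.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

The trained scalar reward then serves as the objective for the reinforcement learning stage.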
Meta AI writes that they are releasing a series of multilingual autoregressive language models up to 7.5 billion parameters, which significantly outperform English centric language models in few shot learning on 20 plus languages. Again, there is a paper to go along with it, and the code and models are available on the repository. These are multilingual models, and most of them are trained on 30 different languages. As you can see, they scale up partly in layers and partly in model dimensions, and there's even one model trained on over 134 languages. So if you're interested in multilingual models, give these a try. Google releases a paper called LaMDA: Language Models for Dialogue Applications, along with a blog post where they detail a new foray into dialogue models using large language models. What's interesting here is that they're not only interested in generating the most likely text. They do pre-train on pure language modeling, but when it comes to fine tuning on dialogue data, they have various metrics, and for each of these metrics they have a classifier that scores the outputs of the language model, which it then tries to optimize. Some of these metrics are safety, sensibleness, specificity, interestingness, and so on. The model is also capable of factual grounding, as it is augmented by a retrieval stage during the generation process. So technically, it can look something up on Wikipedia before it answers you, which is pretty cool. If you're interested in dialogue models, definitely give this blog post and paper a read. Alright, some helpful stuff for this week. Evolution Gym is a large scale benchmark for evolving soft robots. Contrary to classic reinforcement learning, where your agent is fixed and static and has a bunch of actions available, with soft robots you can also choose how to compose the robot itself. Here's a bunch of examples of soft robots. As you can see, the policy isn't the hard part; the hard part is actually how you even construct your robot from the individual building blocks. Here you can see a walker, there is object manipulation, and there is climbing. It looks pretty cool. So even though it's still reinforcement learning, this is a cool domain, I like it. There's a paper to go along with the release; if you're interested in soft robotics and reinforcement learning, give it a read. Stable Baselines 3 is in the Hugging Face Hub. Stable Baselines 3 is a reinforcement learning library that provides baseline implementations of RL algorithms such as proximal policy optimization, Q learning and more. Now these are on the Hugging Face Hub, and you can just kind of download the trained policies, maybe, I'm not entirely sure. But if you're into reinforcement learning, give this a try. I've seen that Sentdex has already made a video using Stable Baselines 3, but as far as I could see, he has not used the Hugging Face Hub. So sorry, Harrison, you actually did a lot of work for nothing. You pip installed the actual package. Why? In related news, I want to highlight this repository by Leandro von Werra, who released it to perform reinforcement learning with transformers. It's a library slash example code repository for training transformers using proximal policy optimization. If you don't know it, proximal policy optimization is a reinforcement learning algorithm that tries to maximize the reward, but at the same time stay close to some known state, like a baseline model or a previous version of the model that you're training. This prevents fatal steps, single steps that bring you into really bad local minima. Now I was going to say: if you're into the combination of language and reinforcement learning, check this out. But transformers have gone way beyond language by this point, so if you're into RL and transformers, this might be the repo for you. A rough sketch of the objective is shown below.
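For intuition, here is a minimal, hypothetical sketch of the two ingredients just described: the clipped PPO objective, and a per-token KL penalty that keeps the fine tuned model close to a frozen reference. These are the standard textbook formulas, not the repository's actual API.

```python
import torch

def ppo_clip_loss(new_logprobs, old_logprobs, advantages, clip=0.2):
    # Probability ratio between the updated policy and the one that collected the data.
    ratio = torch.exp(new_logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip) * advantages
    # Taking the elementwise minimum removes the incentive to step too far.
    return -torch.min(unclipped, clipped).mean()

def kl_shaped_rewards(task_reward, logprobs, ref_logprobs, beta=0.1):
    # Penalize each token for drifting away from the frozen reference model.
    rewards = -beta * (logprobs - ref_logprobs)
    rewards[:, -1] += task_reward  # the task reward arrives at the final token
    return rewards
```

The KL term is what the video describes as staying close to a known state: with a larger beta, the model barely moves away from the reference.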
Okay, this was it for our helpful stuff this week. I hope you were helped. Our next news is Google AI releasing a blog post called Accurate Alpha Matting for Portrait Mode Selfies on Pixel 6. Yes, it is a bit of an ad for their Pixel phones, but it also details quite extensively how they went about training a system that generates the alpha matte for these kinds of portrait pictures. The goal here is to get a mask on top of a picture that separates foreground, meaning the person if it's a portrait, from background, so that you can swap out the background. This is challenging because, as you can see right here, hair is often a problem: there are very fine details, the lighting can come from any direction and might not match the new background, and so on. They detail the model architecture they used; it consists of progressive upsampling, which we've seen a couple of times so far. The most interesting part is the data generation process. They have this giant studio with a surround array of cameras and lights, so they can activate different lights at different times and get a 3D impression of the subject at the center. They're also able to capture different lighting effects on the subject, which is really helpful, because the second thing they do is place that subject into various fake backgrounds. And these fake backgrounds are not just any pictures; they are 360 degree captures of scenes. So they can dynamically relight the subject so that it actually fits into the background, and from that they generate the training data for the alpha matte model. Give this a read if you want to learn more. I was just impressed how deep one can go into a single task, how much there is if you really want to solve something to the level where you can build it into a product and it performs well. So that's pretty cool.
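For reference, here is what an alpha matte is used for downstream. This is just the standard compositing equation as a minimal sketch; the dynamic relighting step from the post is omitted.

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend a foreground over a new background using a per-pixel alpha matte.

    foreground, background: H x W x 3 float arrays in [0, 1]
    alpha: H x W float array in [0, 1], where 1 means foreground (the person)
    """
    alpha = alpha[..., None]  # add a channel axis so it broadcasts over RGB
    return alpha * foreground + (1.0 - alpha) * background
```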
I saw this article on IEEE Xplore called Autonomous Detection and Deterrence of Pigeons on Buildings by Drones, and this is the most metal thing ever. I mean, poor drones. There's a camera on the roof that locates pigeons, because when a flock of them settles, they damage the building with their, what do you call it, excrements. It's poop. They poop and it destroys the buildings, so the system wants to shoo them away to prevent damage and difficult, dangerous cleaning procedures. The camera spots the pigeons and sends in the drone. Here you can see a first person view of the drone: it waits, then it's like, activate, and it just goes after the pigeons. I'm so sorry, pigeons. Machines one, nature zero. Your move, pigeons. All right, our last news. Bloomberg writes that IBM sells some Watson Health assets for more than $1 billion. So apparently the whole Watson project hasn't really panned out for IBM the way they wanted it to after the initial success of winning Jeopardy; it just kind of went nowhere, it seemed. I've heard from a lot of people that it was simply not doing the things they promised when they actually deployed it in, let's say, health settings or the finance world. I don't know exactly what they tried, but the uniform feedback I've heard is that it underwhelmed in practice. Now, there are some customers using it, and IBM says it's still committed to the project. Note that it is only selling some parts, and only of Watson Health; that is not the entire Watson project, just the health sub project, which might come with its own difficulties, let's say regulatory and whatnot. IBM says it is going to focus more on being a cloud provider for AI applications. Well, I guess that's where the big money is right now. I guess if you're a cloud provider these days, you can just print money. So good on IBM: instead of losing money, they're now printing it. Excellent. This was already it for ML News. If you have any comments, anything to say, please leave them in the comments. Merch is still available, and I'll see you next time. Bye bye.
[ { "start": 0, "end": 2.72, "text": " Meta builds a humongous computer," }, { "start": 2.72, "end": 6.48, "text": " OpenAI teaches their language models to follow instructions," }, { "start": 6.48, "end": 9.44, "text": " and we battle pigeons with drones." }, { "start": 9.44, "end": 10.8, "text": " Welcome to ML News." }, { "start": 10.8, "end": 16.8, "text": " Welcome to ML News." }, { "start": 16.8, "end": 19.52, "text": " Now I have to say these aren't exactly news." }, { "start": 19.52, "end": 24.04, "text": " This is stuff that we've somehow missed or skipped or anything like this" }, { "start": 24.04, "end": 26.72, "text": " from the last two to three weeks, let's say." }, { "start": 26.72, "end": 29, "text": " So consider this more ML olds." }, { "start": 29, "end": 30.96, "text": " But if you're interested, stick around." }, { "start": 30.96, "end": 33.32, "text": " If you actually do enjoy new ML News," }, { "start": 33.32, "end": 35.64, "text": " be sure to be subscribed to the channel," }, { "start": 35.64, "end": 39.04, "text": " leave a like and as always, tell me what you think in the comments." }, { "start": 39.04, "end": 40.76, "text": " I'm very happy to take your feedback." }, { "start": 40.76, "end": 47.24, "text": " First story, Meta AI has released a blog post introducing the AI research supercluster," }, { "start": 47.24, "end": 51.2, "text": " Meta's cutting edge AI supercomputer for AI research." }, { "start": 51.2, "end": 53.760000000000005, "text": " Now this, this is a big computer." }, { "start": 53.760000000000005, "end": 55.2, "text": " Like, look at that." }, { "start": 55.2, "end": 60.400000000000006, "text": " The RSC, the research supercluster, that is ginormous." }, { "start": 60.400000000000006, "end": 62.56, "text": " I mean, look at this." }, { "start": 62.56, "end": 67.96000000000001, "text": " Does anyone get the vibes of like, so this is where your box would go?" }, { "start": 67.96000000000001, "end": 70.32000000000001, "text": " In any case, this is a huge thing." }, { "start": 70.32000000000001, "end": 75.92, "text": " It consists of 760 DGX A100 boxes." }, { "start": 75.92, "end": 82.28, "text": " That is a total of 6080 GPUs and all of them are A100s." }, { "start": 82.28, "end": 86.6, "text": " But did you wonder why you can't get your hands on any GPU anywhere on the planet for" }, { "start": 86.6, "end": 88.76, "text": " the last one and a half years or so?" }, { "start": 88.76, "end": 90.68, "text": " Yeah, they're all right here." }, { "start": 90.68, "end": 95.88, "text": " Now obviously, obviously all of this is connected with super duper Infini band." }, { "start": 95.88, "end": 99.68, "text": " It has 175 petabytes of storage." }, { "start": 99.68, "end": 107.2, "text": " It has 175 petabytes of a flash array storage as 46 petabytes of cache storage, and it has" }, { "start": 107.2, "end": 110.08, "text": " 10 petabytes of flash blade storage." }, { "start": 110.08, "end": 112.76, "text": " I have no clue what these things mean, but it's a lot." }, { "start": 112.76, "end": 116.8, "text": " So the blog post goes a little bit into the history of how it was built a bit more at" }, { "start": 116.8, "end": 121.12, "text": " what it contains, how they make it secure, how they handle the difficulties of the last" }, { "start": 121.12, "end": 122.64, "text": " two years and so on." 
}, { "start": 122.64, "end": 128.76, "text": " This cluster is supposed to support Meta AI production and research workloads and is already" }, { "start": 128.76, "end": 135.36, "text": " operational but is planned to finish to its full scale up to the mid 2022." }, { "start": 135.36, "end": 136.64, "text": " Look here's the box." }, { "start": 136.64, "end": 137.64, "text": " Here's the box." }, { "start": 137.64, "end": 139.07999999999998, "text": " Where does the box go?" }, { "start": 139.08, "end": 141.08, "text": " Where does your box go?" }, { "start": 141.08, "end": 142.84, "text": " Your box goes there." }, { "start": 142.84, "end": 143.84, "text": " Really nice." }, { "start": 143.84, "end": 145.56, "text": " This is where your box would go." }, { "start": 145.56, "end": 147.72000000000003, "text": " Check out blog post if you want to learn more." }, { "start": 147.72000000000003, "end": 155.08, "text": " OpenAI has released a blog post in paper titled Aligning Language Models to Follow Instructions," }, { "start": 155.08, "end": 158.92000000000002, "text": " where they've fine tuned GPT-3 to follow human instructions." }, { "start": 158.92000000000002, "end": 163.52, "text": " They give an example right here where if you ask GPT-3 something like explain the moon" }, { "start": 163.52, "end": 169, "text": " landing to a six year old in a few sentences, it would sort of continue the pattern as GPT-3" }, { "start": 169, "end": 170, "text": " does." }, { "start": 170, "end": 173.28, "text": " It would say explain the theory of gravity, explain the theory of relativity." }, { "start": 173.28, "end": 178.06, "text": " So it would sort of treat this as a regular language modeling prompt." }, { "start": 178.06, "end": 182.42, "text": " If you actually want to make GPT-3 answer the question, you have to give it a few examples" }, { "start": 182.42, "end": 185.6, "text": " of question, answer, question, answer beforehand." }, { "start": 185.6, "end": 192.14, "text": " OpenAI went and fine tuned their language models to obey instructions more clearly." }, { "start": 192.14, "end": 197.34, "text": " So the model that results is instruct GPT, which in this case would output people went" }, { "start": 197.34, "end": 200.98, "text": " to the moon, they took pictures of what they saw and sent them back to Earth so we could" }, { "start": 200.98, "end": 202.64000000000001, "text": " all see them." }, { "start": 202.64000000000001, "end": 203.92000000000002, "text": " Supposedly." }, { "start": 203.92000000000002, "end": 206.24, "text": " Like yeah, like that ever happened." }, { "start": 206.24, "end": 210.28, "text": " So the main challenge here is the data collection part." }, { "start": 210.28, "end": 214.5, "text": " Fine tuning a big language model requires a bit of data." }, { "start": 214.5, "end": 219.28, "text": " And they largely followed earlier work called learning from human preferences." }, { "start": 219.28, "end": 221.2, "text": " So this is a multi step process." }, { "start": 221.2, "end": 224.14000000000001, "text": " First they collect a small labeled data set." }, { "start": 224.14, "end": 228.79999999999998, "text": " After that, they let humans sort of rank answers of the model and they train a reward model" }, { "start": 228.79999999999998, "end": 229.79999999999998, "text": " from that." }, { "start": 229.79999999999998, "end": 233.72, "text": " And in the end, they use reinforcement learning against that learned reward model." 
}, { "start": 233.72, "end": 237, "text": " Now in their own words, this is nothing new, they say." }, { "start": 237, "end": 244.72, "text": " However, the smaller instruct GPT model are preferred by humans to the larger GPT-3 models," }, { "start": 244.72, "end": 245.72, "text": " which is interesting." }, { "start": 245.72, "end": 251, "text": " There's a paper to go along with it, give it a read if you're interested." }, { "start": 251, "end": 256.04, "text": " Data AI writes that they are releasing a series of multilingual autoregressive language models" }, { "start": 256.04, "end": 261.92, "text": " up to 7.5 billion parameters, which significantly outperform English centric language models" }, { "start": 261.92, "end": 264.36, "text": " in few shot learning on 20 plus languages." }, { "start": 264.36, "end": 270.28, "text": " Again, there is a paper to go along with it and the code and models are available on the" }, { "start": 270.28, "end": 271.32, "text": " repository." }, { "start": 271.32, "end": 276.92, "text": " These are multilingual models and most of the models are trained on 30 different languages." }, { "start": 276.92, "end": 282.32, "text": " As you can see, they do scale up in partially layers, also model dimensions, and there's" }, { "start": 282.32, "end": 286.78000000000003, "text": " even one model that's trained on over 134 languages." }, { "start": 286.78000000000003, "end": 292.76, "text": " So if you're interested in multilingual models, give this model a try." }, { "start": 292.76, "end": 297.98, "text": " Google releases a paper called Lambda language models for dialogue applications along with" }, { "start": 297.98, "end": 303.52000000000004, "text": " a blog post where they detail a new foray into dialogue models using large language" }, { "start": 303.52000000000004, "end": 304.52000000000004, "text": " models." }, { "start": 304.52, "end": 309.4, "text": " What's interesting here is that they're not only interested in generating the most likely" }, { "start": 309.4, "end": 314.4, "text": " data, they do pre-train on pure language modeling, but then when it comes to fine tuning on dialogue" }, { "start": 314.4, "end": 319.38, "text": " data, they have various metrics and for each of these metrics, they have classifiers that" }, { "start": 319.38, "end": 324.21999999999997, "text": " classifies the outputs of the language model, which is trying to optimize." }, { "start": 324.21999999999997, "end": 330, "text": " So some of these outputs are safety, sensibility, specificity, interestingness, and so on." }, { "start": 330, "end": 335.78, "text": " The model is also capable of doing factual grounding as it is augmented by a retrieval" }, { "start": 335.78, "end": 338.28, "text": " stage during the generation process." }, { "start": 338.28, "end": 342.72, "text": " So technically, it can look up something on Wikipedia before it answers you, which is" }, { "start": 342.72, "end": 343.72, "text": " pretty cool." }, { "start": 343.72, "end": 351.64, "text": " If you're interested in dialogue models, definitely give this blog post and paper a read." }, { "start": 351.64, "end": 355.24, "text": " Alright some helpful stuff for this week." }, { "start": 355.24, "end": 359.56, "text": " Evolution Gym is a large scale benchmark for evolving soft robots." 
}, { "start": 359.56, "end": 364.68, "text": " So contrary to classic reinforcement learning where your agent is kind of fixed and static" }, { "start": 364.68, "end": 370.76, "text": " and has a bunch of actions available, in soft robots, you can also choose how to compose" }, { "start": 370.76, "end": 371.76, "text": " your robot." }, { "start": 371.76, "end": 375.24, "text": " So here's a bunch of examples of soft robots." }, { "start": 375.24, "end": 377.84000000000003, "text": " Now as you can see, the policy isn't the hard part." }, { "start": 377.84000000000003, "end": 381.68, "text": " It's actually the hard part, how you even construct your robots from the individual" }, { "start": 381.68, "end": 382.72, "text": " building blocks." }, { "start": 382.72, "end": 388.5, "text": " So here you can see a walker, there is object manipulation, climbing, I believe they do" }, { "start": 388.5, "end": 391.14, "text": " have some some other examples right here." }, { "start": 391.14, "end": 392.14, "text": " There's climbing." }, { "start": 392.14, "end": 393.66, "text": " It looks pretty cool." }, { "start": 393.66, "end": 397.56, "text": " So even though it's still reinforcement learning, this is a cool domain." }, { "start": 397.56, "end": 398.56, "text": " I like it." }, { "start": 398.56, "end": 400.64, "text": " There's a paper to go along with the release." }, { "start": 400.64, "end": 405.56, "text": " If you're interested in soft robotics and reinforcement learning, give it a read." }, { "start": 405.56, "end": 408.76, "text": " Stable Baselines 3 is in the hugging face hub." }, { "start": 408.76, "end": 414.12, "text": " Stable Baselines 3 is a reinforcement learning library that provides kind of baseline implementations" }, { "start": 414.12, "end": 419.76, "text": " of RL algorithms such as proximal policy optimization, Q learning and more." }, { "start": 419.76, "end": 425.08, "text": " So now these are on the hugging face hub and you can just kind of download the strategies," }, { "start": 425.08, "end": 427.28000000000003, "text": " maybe, not entirely sure." }, { "start": 427.28000000000003, "end": 430.68, "text": " But if you're into reinforcement learning, give this a try." }, { "start": 430.68, "end": 435.48, "text": " I've seen that sent decks has already made a video using stable baselines three." }, { "start": 435.48, "end": 439.98, "text": " But as far as I could see, he has not used the hugging face hub." }, { "start": 439.98, "end": 443.64, "text": " So sorry, Harrison, you actually did like a lot of work for nothing." }, { "start": 443.64, "end": 446.8, "text": " You like pip installed the actual package." }, { "start": 446.8, "end": 447.8, "text": " Why?" }, { "start": 447.8, "end": 451.88, "text": " In related news, I want to highlight this repository right here by Leandro von Vera," }, { "start": 451.88, "end": 456.62, "text": " who released this repository to perform reinforcement learning with transformers." }, { "start": 456.62, "end": 462.74, "text": " It's a library slash example code repository of training transformers using proximal policy" }, { "start": 462.74, "end": 463.82, "text": " optimization." 
}, { "start": 463.82, "end": 468.02, "text": " If you don't know proximal policy optimization is a reinforcement learning algorithm that" }, { "start": 468.02, "end": 474.15999999999997, "text": " tries to maximize the reward, but at the same time, stay close to some known state like" }, { "start": 474.15999999999997, "end": 480.28, "text": " a baseline implementation, a baseline model, or a previous version of the model that you're" }, { "start": 480.28, "end": 481.28, "text": " training." }, { "start": 481.28, "end": 487, "text": " This prevents fatal steps like single steps that bring you into really bad local minima." }, { "start": 487, "end": 490.74, "text": " Now I was going to say if you're into the combination of language and reinforcement" }, { "start": 490.74, "end": 492.35999999999996, "text": " learning, check this out." }, { "start": 492.35999999999996, "end": 496.18, "text": " But I mean, transformers have gone way beyond language by this point." }, { "start": 496.18, "end": 500.28000000000003, "text": " So if you're into RL and transformers, this might be the repo for you." }, { "start": 500.28000000000003, "end": 502.44, "text": " Okay, this was it for our helpful stuff this week." }, { "start": 502.44, "end": 503.92, "text": " I hope you were helped." }, { "start": 503.92, "end": 509.84000000000003, "text": " Our next news is Google AI releasing a blog post called accurate alpha matting for portrait" }, { "start": 509.84000000000003, "end": 511.8, "text": " mode selfies on Pixel 6." }, { "start": 511.8, "end": 517.5600000000001, "text": " Yes, it is a bit of an ad for their Pixel phones, but also it details quite extensively" }, { "start": 517.5600000000001, "end": 523.96, "text": " how they went about training a system that would generate the alpha map for the types" }, { "start": 523.96, "end": 525.6800000000001, "text": " of portrait pictures." }, { "start": 525.68, "end": 529.92, "text": " The goal here is to get a mask on top of a picture that separates foreground meaning" }, { "start": 529.92, "end": 535.14, "text": " if it's a portrait, the person from background so that you can swap out the background." }, { "start": 535.14, "end": 539.42, "text": " This is challenging because as you can see right here, hair is often a problem." }, { "start": 539.42, "end": 544.28, "text": " There are very fine details, the lighting can come from any place and that might not" }, { "start": 544.28, "end": 546.38, "text": " match up with the background and so on." }, { "start": 546.38, "end": 549.56, "text": " So they detail what kind of model architecture they did." }, { "start": 549.56, "end": 554.4399999999999, "text": " It consists of progressive up sampling, which we've seen a couple of times so far." }, { "start": 554.44, "end": 557.96, "text": " And the most interesting part is the data generation process." }, { "start": 557.96, "end": 563.8000000000001, "text": " They have this giant studio with like surround array of cameras and lights so they can activate" }, { "start": 563.8000000000001, "end": 569.4000000000001, "text": " different lights at different time and get kind of a 3D impression of the subject that" }, { "start": 569.4000000000001, "end": 570.6800000000001, "text": " is at the center." 
}, { "start": 570.6800000000001, "end": 575.36, "text": " They're also able to capture different lighting effects on the subject, which is also really" }, { "start": 575.36, "end": 579.84, "text": " helpful because the second thing they do is they place that subject into various kind" }, { "start": 579.84, "end": 581.2800000000001, "text": " of fake backgrounds." }, { "start": 581.2800000000001, "end": 584.22, "text": " And these fake backgrounds are not just any picture." }, { "start": 584.22, "end": 587.76, "text": " They are sort of 360 pictures of scenes." }, { "start": 587.76, "end": 592.88, "text": " So what they can do is they can dynamically relight the subject so that it actually fits" }, { "start": 592.88, "end": 594.1600000000001, "text": " into the background." }, { "start": 594.1600000000001, "end": 598.2, "text": " And from that, they generate the training data to the AlphaMAT classifier." }, { "start": 598.2, "end": 600.5600000000001, "text": " Now give this a read if you want to learn more." }, { "start": 600.5600000000001, "end": 606.4, "text": " I was just impressed how deep one can go in like a single task, like how much there is" }, { "start": 606.4, "end": 611.44, "text": " if you really want to solve something to the level of where you can build it into a product" }, { "start": 611.44, "end": 612.88, "text": " and it performs well." }, { "start": 612.88, "end": 616.28, "text": " So that's pretty cool." }, { "start": 616.28, "end": 622, "text": " I saw this article on IEEE Explorer called Autonomous Detection and Deterrence of Pigeons" }, { "start": 622, "end": 624.12, "text": " on Buildings by Drones." }, { "start": 624.12, "end": 626.28, "text": " And this is the most metal thing ever." }, { "start": 626.28, "end": 627.4, "text": " I mean poor drones." }, { "start": 627.4, "end": 633.2, "text": " So there's this camera on roofs and it locates pigeons and when it sees a flock of them," }, { "start": 633.2, "end": 638.32, "text": " pigeons would destroy their things with their what they call it excrements, but it's poop." }, { "start": 638.32, "end": 640.92, "text": " So they poop and it destroys the buildings." }, { "start": 640.92, "end": 644.4, "text": " So they want to shoo them away to prevent damage and difficult and dangerous cleaning" }, { "start": 644.4, "end": 645.4, "text": " procedures." }, { "start": 645.4, "end": 648.28, "text": " So the camera spots the pigeons and it sends in the drone." }, { "start": 648.28, "end": 653.3199999999999, "text": " And here you can see like a first person view of the drone is like it waits and it's like" }, { "start": 653.3199999999999, "end": 658.7199999999999, "text": " activate, it just goes after the pigeons." }, { "start": 658.7199999999999, "end": 661.0799999999999, "text": " I'm so sorry pigeons." }, { "start": 661.0799999999999, "end": 663.12, "text": " Machines one nature zero." }, { "start": 663.12, "end": 664.12, "text": " Your move pigeons." }, { "start": 664.12, "end": 666.64, "text": " All right, our last news." }, { "start": 666.64, "end": 672.04, "text": " Bloomberg writes IBM sells some Watson Health assets for more than $1 billion." }, { "start": 672.04, "end": 676.64, "text": " So apparently the whole Watson project hasn't really panned out for IBM the way they wanted" }, { "start": 676.64, "end": 679.8, "text": " it to after the initial successes of winning Jeopardy." }, { "start": 679.8, "end": 682.6, "text": " It just kind of got nowhere it seemed like." 
}, { "start": 682.6, "end": 687.1999999999999, "text": " I've heard from a lot of people that it was just not doing the things they promised it" }, { "start": 687.1999999999999, "end": 692.8, "text": " to do when they actually deployed it in let's say health settings or the finance world." }, { "start": 692.8, "end": 697.64, "text": " And I don't know exactly what they tried, but the uniform feedback I've heard is that" }, { "start": 697.64, "end": 700.52, "text": " it just underwhelmed in practice." }, { "start": 700.52, "end": 705.24, "text": " Now there are some customers using it and IBM says it's still committed to the project." }, { "start": 705.24, "end": 708.92, "text": " Note that it is only selling some parts and only of Watson Health." }, { "start": 708.92, "end": 710.68, "text": " That is not the entire Watson project." }, { "start": 710.68, "end": 715.4799999999999, "text": " It's just a health sub project, which might come with its own difficulties, let's say" }, { "start": 715.4799999999999, "end": 717.6999999999999, "text": " regulatory and whatnot." }, { "start": 717.7, "end": 723.8000000000001, "text": " So IBM says that it is going to focus more on being a cloud provider for AI applications." }, { "start": 723.8000000000001, "end": 725.84, "text": " Well I guess that's where the big money is right now." }, { "start": 725.84, "end": 729.44, "text": " I guess if you're a cloud provider now you can just you can just print money." }, { "start": 729.44, "end": 731.74, "text": " So good on IBM instead of losing money." }, { "start": 731.74, "end": 733.08, "text": " They're now printing it." }, { "start": 733.08, "end": 734.08, "text": " Excellent." }, { "start": 734.08, "end": 735.7, "text": " This was already it for ML news." }, { "start": 735.7, "end": 740, "text": " If you have any comments, anything to say, please leave it in the comments." }, { "start": 740, "end": 742.76, "text": " Merch still available and I'll see you next time." }, { "start": 742.76, "end": 756.68, "text": " Bye bye." } ]
l_3zj6HeWUE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Group Normalization (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "batchnorm", "groupnorm", "layer norm", "group norm", "batch norm", "instance norm", "fair", "normalization", "mean", "standard deviation", "minibatch", "batch statistics", "kernel", "cnn", "convolutional neural network" ]
The dirty little secret of Batch Normalization is its intrinsic dependence on the training batch size. Group Normalization attempts to achieve the benefits of normalization without batch statistics and, most importantly, without sacrificing performance compared to Batch Normalization. https://arxiv.org/abs/1803.08494 Abstract: Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries. Authors: Yuxin Wu, Kaiming He Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at Group Normalization by Yuxin Wu and Kaiming He of Facebook AI Research. So this paper is basically an engineering paper about a new normalization technique called group normalization. So what's the issue here? The issue is that pretty much throughout neural network learning we're using a technique called batch normalization. Now batch normalization is a pretty reasonable thing and it works very, very well. So what's the idea behind batch normalization? The idea is that if you have data points for machine learning methods, and your data is in a 2D coordinate system somewhere down here while you're trying to separate it from the dots over there, it is often very beneficial to shift that distribution before you do anything. First of all you want to center it, such that the origin is in the middle of the data. And then sometimes you also want to do what's called normalizing it, and by normalizing we mean you rescale the axes such that things look more or less like Gaussians. So if you look at this distribution, the first step is the centering and the second is what is called normalization. And usually we know that pretty much any machine learning method works better if you do that. In classic machine learning methods that's mostly because the condition number of the data becomes better, and so on. But even if you just want to learn, let's say, a linear classifier, you can see that you can save one parameter, because you can make it go through the origin. And that's true in general. So if we draw this in 1D, you'd have a distribution that is maybe very peaky somewhere off-center. You first center it to the middle of the coordinate system, and then you divide it by its standard deviation, such that afterwards it is a unit standard deviation Gaussian, so a normal distribution. The closer your data is to a multivariate normal distribution, the better these machine learning methods tend to work, especially when you look at how signal propagates through the layers of a deep network. So the idea is: if it's good for a general machine learning method that the input is normalized, roughly a multivariate normal distribution, then it's probably good if the input to each layer is normalized as well. When you look at how feature signals behave in between layers (this is, for example, conv5_3, a layer somewhere in the middle of a convolutional neural network), you'll see that the more training progresses, the larger the spread of the features gets. So you might get really large numbers, or really large negative numbers, or maybe really small numbers in your neural network. And it would be better if, after a layer whose input you've normalized and whose output is again a distribution, but maybe a shifted one, you would first transform that output back into a unit normal distribution before you put it through the next layer. So what batch norm does is run a normalization procedure on the data before each layer, before giving it to that layer. And you can basically backprop through that. It's also common to learn a bias and a variance parameter that are applied after the normalization. But the important thing is that after each layer, the data is normalized such that it is in the most comfortable regime.
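To pin down that centering and scaling in code, here is a minimal NumPy sketch of the standardization step just described (a toy illustration of mine, not code from the paper):

```python
import numpy as np

x = np.random.randn(1000, 2) * 3.0 + 5.0  # off-center, stretched 2D data cloud

mu = x.mean(axis=0)        # center of the data, one value per axis
sigma = x.std(axis=0)      # per-axis standard deviation
x_norm = (x - mu) / sigma  # centered, unit-variance data

print(x_norm.mean(axis=0))  # approximately 0 in both dimensions
print(x_norm.std(axis=0))   # approximately 1 in both dimensions
```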
The problem with this is that you actually need the distribution, right? If you want to center this data up here, you need to know what the data is. So you need to know the entire data. If I want to figure out what is the mean of this distribution, I need all of the data points to decide here's the mean. I need to shift that up to here. If I just have a mini batch like we usually do in machine learning. So if I just have this or this and this and this point, I just have four points. I can't determine the mean. But what I can do is I can sort of guess the mean from the four points, right? So my guesstimation of the mean would be somewhere here. And that would be usually close enough. And you can also see that the larger your batch is, if you sample at random, the more accurate your mean estimation is going to be. So people have been training neural networks with large batch sizes for basically batch size have gotten larger and larger in the last year. So that has not been a problem. But what people are doing now is they are doing distributed machine learning where you do have your data set and you draw a batch. And the batch might be large. So this might be, I don't know, one million images. This might still be 4000 images in your batch. But what they'll do, especially with things like TPUs, is they'll distribute that across many, many, many machines into batches of sometimes as small as eight samples per unit. And if this is not images, but maybe something longer, like a sequence of text, or if this is a sequence of speech or something like this, you can sometimes even go down to two or one samples per unit of computation. And of course, you can't do batch normalization. You can't calculate the mean of one sample. It's just going to be that one sample itself. So either you have to have two options. If you're in small batch sizes, let's say two, either you take the hit and have your very bad estimate of the mean from just two samples or eight samples. Or after each layer, you basically do a synchronization step such that everyone communicates to everyone else their statistics. And you can basically aggregate the statistics across the batch. Both are not cool. Usually these frameworks, they don't do this synchronization because it's just too slow. So they'll go with the bad statistics. And you can see this right here in this graph. They have the ImageNet classification error versus batch sizes. So this is a ResNet-50 model trained on the ImageNet dataset using eight workers, so eight GPUs. And if they do 32 images per... Now just look at the blue line here. If they do 32 images per worker, so it's eight workers, it's eight times 32. That's the batch size. That is a large number. 256 maybe. Yeah. All right. So if they do that, then you can see the error is on a state of the art for a ResNet-50. If they go to 16, it's still pretty good. But then as they go lower and lower and lower, so if they go to smaller and smaller batches and spread them out over the workers, then the error starts going up. And this is because the batch norm statistics get worse and worse. So the goal of this paper is to find this group norm thing here. The group norm, this paper claims, is another normalization technique that pretty much does the same thing, this centering and the normalization, the scaling. But it does it without relying on the batch statistics. It basically does it within a data point. 
And that means that the performance, even though it's a bit smaller at the beginning for this particular example, will stay constant with even in small batch size regimes. So this is potentially applicable, as I said, to things where you have to go to like two or one sample per worker, because it's just the data points, the single data points are just too large. So if you maybe want to train something like BERT on a GPU. So what is group normalization? Group normalization, as I said, works within a sample. Now, there have been other methods that work within a sample instead of across the batch. And they tend to not work as well as batch norm. Now, this paper here claims that group norm works on par with batch norm for large batch sizes and better than on small batch sizes. So here they have a schematic of what's happening. In batch norm, as you can see here, you have this cube. Now, this cube here, N, means the batch size. So these are the data points. Points in your mini batch. This is the thing that is going to get small in the if you don't have enough memory. Then C would be your channels. So we are talking about convolutional neural networks here, but this is generalizable to other neural networks. The channels are going to be the independent feature maps that you're going to have. So in a convolutional neural network, usually each layer has these things called kernels. And there might be three by three matrices like this. And if you have an image, the kernel will be slided. This thing right here will be maybe here will be slided across the image or slid. Is it slid? Okay, would be slid across the image. And then the numbers in here will be convolved with the pixels. And that will give you the next layers representation. So whatever the operation convolution operation is, and you'll slide that over. And that sliding over will give you the values in the next layer. Now you not only have one kernel, but you actually have many kernels. Sorry about this. Let's draw that. So you have more and more kernels. You have a whole stack of kernels. And how many kernels you have, those are the different kernels are also called your different channels. Now, the kernels refer to the weights and the channels refer to the image. But the Ith kernel is going to be convolving the Ith channel of the image. So at the beginning, the input image has three channels because red, green and blue. But then the intermediate images can have more channels as you have basically as many as you have kernels in the layer right before. Okay, and the H and the W means the height and width of the image. So it combined so the image is kind of unrolled across the height or the width in this direction. So what does batch norm do? Batch norm takes, as you can see here, one channel. And it it takes one channel. So maybe this image, this is one channel. Let's just say this is the red channel because I drawn it in red. It takes that and it calculates the mean one over and the standard deviation of that. It calculates those two statistics and it uses that to do this centering and scaling operation. So all of these methods are going to calculate the mean and the variance and then do the same scaling transformation. The question is just how do you calculate the mean? Batch norm does this across the data points. So it looks at a single feature at a single channel and it asks what's the mean across all the data points? What are the data statistics of this channel and what was the mean and standard deviation? 
Now, actually, batch norm, I'm not I didn't even know that in convolutional layer this works like this. You can also imagine batch norm of really just taking one single feature. And that means of really just taking one of these things right here. So if this goes to the back and normalizing across that, the important part is that it is in fact normalizing across the data points. So it looks at your batch, looks at the mean and the variance in that batch and it normalizes by that. I think convolutional layers make sense because you have this invariance in height and width and therefore. Yeah, so that makes sense. But in a fully connected layer, you'd simply go look at one feature at a time. Layer norm is different. Layer norm has basically been proposed as an alternative to batch norm with the same reasoning that this paper has. So layer norm, as you can see here, it does each data point individually. So here we only have one data point that is normalized by itself. So you do this for each data point independently and therefore it's not dependent on the batch size anymore. But what you'll do is you look across all of the channels right here. So all of the channels and all of the width and height. So this entire thing here, this entire thing is basically one channel. Right. And then the next channel is here of the image and the next. No, that's the next image. Well, that is a bad drawing because the image is unrolled. In any case, what you'll do is you look at. So if you have a filter bank like this, you have an image and the image composed of multiple channels. Right. This is the red. And then you'll have the green. Right. This is in the green. And then you'll have the blue channel. And what you'll do is simply you'll calculate the mean across all of the pixels. And across all of the channels, you just take this whole NumPy array and you just say dot mean. And that gives you one number. And it's just whatever that number is, you subtract it and then you say standard deviation and you divide by that. That's layer norm. So an entire layers representation of one image is just normalized to the mean. Now, this seems a bit drastic. And that's why instance norm did the exact opposite. They said, wait a minute, instead of normalizing across all of the features, right, we'll go back and do what batch norm does. Batch norm looks at each feature individually. So basically, it looks at all of these these different axes in the data distribution. It looks at them differently. So if one axis is scaled very widely, we want to normalize that differently than if than the other axis that is just scaled very shortly. And that's why we'll look at each feature individually like batch norm. But also, we only look at one data point at a time. Now, as you can imagine, this doesn't work anymore in in a fully connected network. This basically works in a convolutional network where you have a feature map channel. So you look at one individual channel and one data point. So that means here you would normalize the red channel individually. You would normalize the green channel individually and you normalize the blue channel individually. So the image you're going to end up with is simply the red channel subtracted by its own mean and then divided by its own standard deviation. And just within that data point, right. So maybe I should here say across the number of features or something. So I hope that that's clear. 
So the layer norm drops the dependence on the batch size, but instead says we should normalize across all of the features. And the instance norm says, wait a minute, batch norm had a good idea normalizing only across the features individually because the individual features might have different scales. And we should account for that. But also, we don't want to be dependent on the batch size. And now is this where group norm comes in? Group norm is basically a mix between layer norm and instance norm. What group norm says, layer norm and instance norm have good ideas. They only go across one sample. They take that. They say, in essence, instance norm has a good idea in that the features should be normalized individually, but it goes sort of too far from it goes too far. You might get not good enough statistics because you're now normalizing each of these things individually. Whereas with layer norm, you're too restricted. You're basically saying that the features, it's fine if the features relative to each other are like this, right. One is maybe very high variance and one is very low variance. Feature norm would keep that. And group norm would say, maybe that's not so good. We should have, we should normalize the individual features, maybe individually. But their argumentation here is that maybe there are some features that by their nature already have the same sort of scaling and variance. They give an example. If you, for example, have a filter, again, we deal with convolutional layers here, and that filter is a let's say an edge filter, right. So a horizontal edge filter. So it's very low value here. And let me mark the high value with blue. So this is a horizontal edge filter. If you slide this over a window and these are high numbers and these are low numbers, it will respond to edges because edges have high, low, high, right. Or vice versa. So it will give you very positive and very negative number every time you slide across an edge. Now you can imagine that in natural images, that filter, whatever image you put in would, and however you normalize, would give you pretty much the same response as a vertical edge filter. So the horizontal and the vertical edge filter, you'll see whatever their response is, they're probably about equal in size. So we could expect that in a neural network, there will be groups of filters that together exhibit the same scale. And therefore we can normalize across them like in layer norm. So the more things we normalize across, the better statistics we can gather. That's why instance norm doesn't work because it only normalizes across a very small thing, getting very little statistics. But we should normalize, if we could gather good statistics, we should normalize different features differently. And group norm says, well, since some of the features are almost guaranteed to behave the same, we could normalize across those. Now, of course, you don't know at the beginning which ones those are. But you hope that by doing group norm, by basically at a priori, so at the beginning of training, you decide what the groups are. And naturally, it's just whichever ones are next to each other, those are the groups. And you'll hope that through the training procedure, basically those groups will learn the features that are equal of size. Well, you basically enforce that, so you kind of constrain the architecture to do that. So that's the idea behind group norm. 
You basically build these groups of channels and then you normalize across those, across the groups of, within the groups of channels, across the entire height and width, only in a single data point. And therefore, you gain the advantage of layer norm, of normalizing within a single data point. You retain the advantage of batch norm, of normalizing across single features. And that's what instance norm attempted. But yeah, so you get the best of both worlds, sort of. That's group norm. And now we go and look what it does. So they say, OK, basically all the normalization techniques do this. They subtract a mean and divide by a standard deviation. That's what we saw. And the difference is just across what you collect, your statistics. So the group norm is the following code in TensorFlow. As you can see, you simply reshape your data and basically expand this part right here where you built, where you put the extra. So this is C. This entire thing used to be C. And you divide it into group and index within group. And then you just normalize across that and reshape to the original dimension again. And the important, the cool thing is in batch norm, you have to keep track of these, of these running means, because at test time, you sort of don't want the batch statistic to influence anything. You don't have that here. So you just back propagate through this observation, through this operation. And you don't need to keep these running, running averages going. And you always care, am I in test or am I in train mode right now? You just do this. This operation is per data point. So it's just part of your model. Right. And they do a an experiment where they have 32 images per GPU. So it's reasonably sized. And they can basically show that the group norm and the batch norm, they compare in their performance. Now, I do usually don't believe the experiments that you see in single papers. But I think this has been replicated a couple of times. Now, you see, this is the train error where group norm even behaves a bit better. And then in the validation error, it behaves a bit worse. But one could say it is it is kind of more closely together than the other methods are to the group norm or to each other. These instance norm and layer norm. So it at least it's better than instance norm and layer norm. And then once you go into the smaller batch size regime, of course, that's where the group norm starts to shine. So if you go from the 32 images per GPU, which is this low black curve here, all the way to two images per GPU. And I believe they could even do one image per GPU with group norm. But of course, you can't do that with batch norm because you need batch statistics. You can see that the performance of batch norm degrades drastically. Whereas with group norm, this experiment is just funny. They just had to do this, even though you know exactly what turns out. So look at the lines are all exactly in the in the same place. I mean, come on, like, you know, you're just having time to probably one of the reviewers was like, but did you really do the experiment? They put it in. So, yeah. So you can see that the batch norm beats the group norm in this setting with the when you have the larger batch sizes. But the group norm pulls ahead quite drastically when you have the smaller batch sizes. And that is the main advantage. So now you can turn to models that require small batch sizes or small batch per worker. And generally, it's a pain in the ass to just keep track of those statistics for test time. 
They do verify, which I find pretty cool, that this phenomenon of the responses going apart during training in the internal feature maps, batch norm counteracts that. So with batch norm, you'll get actually a convergence of responses during training. So the more you train, the more normalized basically your internal features will be. And they show that this is exactly the same with group norm. So group norm is as it seems, it is a replacement. It's not an addition. It doesn't the gains don't come from different place. It seems to be a substitute for batch norm, though they don't have an experiment where they do both. I believe maybe I'm wrong. Maybe they do. But yeah, it seems like you just kind of have to bring some calmness on some standardization into your signal. And how exactly you do that doesn't seem that important as long as you do it with some precision and some some real overall statistics. Yeah. What I don't like about this is now you have, of course, a new hyper parameter, which is this number of groups. So that that seems rather annoying. And the gains like this usually come from the introductions of new hyper parameters. And that just it's not so it's not that ideal for a method to introduce a new hyper parameter, at least layer norm and instance norm didn't. And now, as you can see, the number of groups is is not super influential, but does have a bit of an influence on the on the performance. So if you go a number of groups or here number of channels per group, of course, these two numbers are inversely related. The more groups you have, the less number of channels per group you have. If you go to one extreme, you will get to the layer norm, basically. So the layer norm is an extreme case of group norm where you just have one group. All the channels are in the same group. Then the performance, as you can see here, is quite a bit worse. If you go to the other extreme where every channel is its own group, that's equivalent to instance norm. Again, the performance is quite bad. And somewhere in the middle here with 32 groups is seems to be a good sweet spot. So I don't again I don't like the hyper parameter seems to be some somewhat of a thing where you really have to hit a good value. And well, I guess we'll see over time if that value is always going to be about the same, you know, like the beta two of Adam. It's it's always like people never change it from point nine nine nine because it just tends to work. Or whether that's really going to be another hyper parameter to fit. That seems to be annoying. They do a bunch of ablation studies and tests on, as we said, the, for example, object detection and segmentation. So so models where you must go almost to small batch sizes just because so video classification. So if you want to classify an entire video, that's a lot of data. And you almost have to go small batch sizes for that. They do a lot of experiments. And generally, as I said, I believe these results for group norm have been replicated and across the community a bunch of times now. And I would definitely consider group norm if you are thinking of a especially a distributed machine learning project. All right. With that, I hope you enjoyed this paper. I've been talking for way too long now. I wish you a nice day. If you haven't already, please subscribe, like, share, comment or whatever you feel like doing. Bye bye.
[ { "start": 0, "end": 10, "text": " Hi there! Today we'll look at group normalization by Yuxin Wu and Kaiming He of Facebook AI Research." }, { "start": 10, "end": 18, "text": " So this paper is basically an engineering paper about a new normalization technique called group normalization." }, { "start": 18, "end": 27, "text": " So what's the issue here? The issue is that pretty much throughout neural network learning we're using this technique called batch normalization." }, { "start": 27, "end": 33, "text": " Now batch normalization is a pretty reasonable thing and it works very very well." }, { "start": 33, "end": 36, "text": " So what's the idea behind batch normalization?" }, { "start": 36, "end": 47, "text": " The idea is if you have data points for machine learning methods and your data is in a 2D coordinate system somewhere down here," }, { "start": 47, "end": 51, "text": " and you're trying to separate that from the dots which are here," }, { "start": 51, "end": 57, "text": " it is often very beneficial to shift that distribution before you do anything." }, { "start": 57, "end": 61, "text": " You want to shift it to the middle of the..." }, { "start": 61, "end": 71, "text": " Basically you want to center it, first of all, such that the origin point is in the middle of the data." }, { "start": 71, "end": 75, "text": " And then sometimes you also want to do what's called normalize it." }, { "start": 75, "end": 85, "text": " And by normalizing we mean you want to kind of rescale the axis such that things are more or less sort of like Gaussians." }, { "start": 85, "end": 97, "text": " So if you look at this distribution, first is the centering and then second is what is called a normalization." }, { "start": 97, "end": 106, "text": " And usually we know that any sort of machine learning methods work better if you do that. And that's mostly in classic machine learning methods" }, { "start": 106, "end": 110, "text": " with conditioning numbers of the data being better and so on." }, { "start": 110, "end": 119, "text": " But if you just want to learn, let's say a linear classifier, you can see here you can even save one parameter because you can make it just go through the origin." }, { "start": 119, "end": 122, "text": " And that's true in general." }, { "start": 122, "end": 129, "text": " So if we draw this in 1D, you'd have a distribution that maybe is very peaky right here." }, { "start": 129, "end": 134, "text": " You first center it to the middle of the coordinate system." }, { "start": 134, "end": 137, "text": " And sorry, that's not really centered." }, { "start": 137, "end": 147, "text": " And then you would divide it by its standard deviation such that after it, it is a unit standard deviation Gaussian, so a normal distribution." }, { "start": 147, "end": 155, "text": " The closer your data seems to be to a multivariate normal distribution, the better these machine learning methods work," }, { "start": 155, "end": 160, "text": " especially if you look at how signal in deep network is propagating through the layers." }, { "start": 160, "end": 172, "text": " So the idea is if it's good for the general machine learning method that the input has a multivariate normal distribution or is normalized," }, { "start": 172, "end": 177, "text": " then it's probably good that the input to each layer is normalized." }, { "start": 177, "end": 188, "text": " So when you look at how signal features are in between layers, so this is, for example, the con five three." 
}, { "start": 188, "end": 193, "text": " This is a layer somewhere in the middle of a convolutional neural network." }, { "start": 193, "end": 200, "text": " And if you look at the spread of how features feature signals are throughout training," }, { "start": 200, "end": 206, "text": " you'll see that the more training progresses, the larger the kind of spread of features is." }, { "start": 206, "end": 215, "text": " So you might get really large numbers or really large negative numbers or maybe really small numbers in your neural networks." }, { "start": 215, "end": 221, "text": " And it would be better if you had a layer and the input you've normalized it right." }, { "start": 221, "end": 228, "text": " And the output then is again a distribution, but it's maybe shifted that you would first transform that back" }, { "start": 228, "end": 234, "text": " into a normal unit, normal distribution before you put it through the next layer." }, { "start": 234, "end": 246, "text": " So what batch norm does is at each layer before each layer, it will do a normalization procedure on the data before giving it to the next layer." }, { "start": 246, "end": 250, "text": " And you can do basically backprop through that." }, { "start": 250, "end": 255, "text": " It's also common to learn bias and variance parameter to add after that." }, { "start": 255, "end": 263, "text": " But the important thing is that after each layer, the data is normalized such that it is kind of in the most comfortable regime." }, { "start": 263, "end": 269, "text": " What's the problem? The problem with this is that you actually need the distribution, right?" }, { "start": 269, "end": 276, "text": " If you want to center this data up here, you need to know what the data is." }, { "start": 276, "end": 282, "text": " So you need to know the entire data. If I want to figure out what is the mean of this distribution," }, { "start": 282, "end": 288, "text": " I need all of the data points to decide here's the mean. I need to shift that up to here." }, { "start": 288, "end": 292, "text": " If I just have a mini batch like we usually do in machine learning." }, { "start": 292, "end": 299, "text": " So if I just have this or this and this and this point, I just have four points. I can't determine the mean." }, { "start": 299, "end": 304, "text": " But what I can do is I can sort of guess the mean from the four points, right?" }, { "start": 304, "end": 309, "text": " So my guesstimation of the mean would be somewhere here. And that would be usually close enough." }, { "start": 309, "end": 319, "text": " And you can also see that the larger your batch is, if you sample at random, the more accurate your mean estimation is going to be." }, { "start": 319, "end": 327, "text": " So people have been training neural networks with large batch sizes for basically batch size have gotten larger and larger in the last year." }, { "start": 327, "end": 337, "text": " So that has not been a problem. But what people are doing now is they are doing distributed machine learning where you do have your data set and you draw a batch." }, { "start": 337, "end": 341, "text": " And the batch might be large. So this might be, I don't know, one million images." }, { "start": 341, "end": 352, "text": " This might still be 4000 images in your batch. But what they'll do, especially with things like TPUs, is they'll distribute that across many, many, many machines" }, { "start": 352, "end": 359, "text": " into batches of sometimes as small as eight samples per unit." 
}, { "start": 359, "end": 368, "text": " And if this is not images, but maybe something longer, like a sequence of text, or if this is a sequence of speech or something like this," }, { "start": 368, "end": 376, "text": " you can sometimes even go down to two or one samples per unit of computation." }, { "start": 376, "end": 382, "text": " And of course, you can't do batch normalization. You can't calculate the mean of one sample." }, { "start": 382, "end": 388, "text": " It's just going to be that one sample itself. So either you have to have two options." }, { "start": 388, "end": 399, "text": " If you're in small batch sizes, let's say two, either you take the hit and have your very bad estimate of the mean from just two samples or eight samples." }, { "start": 399, "end": 407, "text": " Or after each layer, you basically do a synchronization step such that everyone communicates to everyone else their statistics." }, { "start": 407, "end": 412, "text": " And you can basically aggregate the statistics across the batch. Both are not cool." }, { "start": 412, "end": 418, "text": " Usually these frameworks, they don't do this synchronization because it's just too slow." }, { "start": 418, "end": 424, "text": " So they'll go with the bad statistics. And you can see this right here in this graph." }, { "start": 424, "end": 428, "text": " They have the ImageNet classification error versus batch sizes." }, { "start": 428, "end": 435, "text": " So this is a ResNet-50 model trained on the ImageNet dataset using eight workers, so eight GPUs." }, { "start": 435, "end": 442, "text": " And if they do 32 images per... Now just look at the blue line here." }, { "start": 442, "end": 450, "text": " If they do 32 images per worker, so it's eight workers, it's eight times 32. That's the batch size." }, { "start": 450, "end": 456, "text": " That is a large number. 256 maybe. Yeah." }, { "start": 456, "end": 466, "text": " All right. So if they do that, then you can see the error is on a state of the art for a ResNet-50." }, { "start": 466, "end": 472, "text": " If they go to 16, it's still pretty good. But then as they go lower and lower and lower," }, { "start": 472, "end": 481, "text": " so if they go to smaller and smaller batches and spread them out over the workers, then the error starts going up." }, { "start": 481, "end": 485, "text": " And this is because the batch norm statistics get worse and worse." }, { "start": 485, "end": 489, "text": " So the goal of this paper is to find this group norm thing here." }, { "start": 489, "end": 496, "text": " The group norm, this paper claims, is another normalization technique that pretty much does the same thing," }, { "start": 496, "end": 506, "text": " this centering and the normalization, the scaling. But it does it without relying on the batch statistics." }, { "start": 506, "end": 511, "text": " It basically does it within a data point. And that means that the performance," }, { "start": 511, "end": 516, "text": " even though it's a bit smaller at the beginning for this particular example," }, { "start": 516, "end": 521, "text": " will stay constant with even in small batch size regimes." }, { "start": 521, "end": 528, "text": " So this is potentially applicable, as I said, to things where you have to go to like two or one sample per worker," }, { "start": 528, "end": 534, "text": " because it's just the data points, the single data points are just too large." 
}, { "start": 534, "end": 541, "text": " So if you maybe want to train something like BERT on a GPU." }, { "start": 541, "end": 548, "text": " So what is group normalization? Group normalization, as I said, works within a sample." }, { "start": 548, "end": 553, "text": " Now, there have been other methods that work within a sample instead of across the batch." }, { "start": 553, "end": 557, "text": " And they tend to not work as well as batch norm." }, { "start": 557, "end": 563, "text": " Now, this paper here claims that group norm works on par with batch norm for large batch sizes" }, { "start": 563, "end": 569, "text": " and better than on small batch sizes. So here they have a schematic of what's happening." }, { "start": 569, "end": 573, "text": " In batch norm, as you can see here, you have this cube." }, { "start": 573, "end": 580, "text": " Now, this cube here, N, means the batch size. So these are the data points." }, { "start": 580, "end": 590, "text": " Points in your mini batch. This is the thing that is going to get small in the if you don't have enough memory." }, { "start": 590, "end": 594, "text": " Then C would be your channels." }, { "start": 594, "end": 604, "text": " So we are talking about convolutional neural networks here, but this is generalizable to other neural networks." }, { "start": 604, "end": 609, "text": " The channels are going to be the independent feature maps that you're going to have." }, { "start": 609, "end": 614, "text": " So in a convolutional neural network, usually each layer has these things called kernels." }, { "start": 614, "end": 617, "text": " And there might be three by three matrices like this." }, { "start": 617, "end": 623, "text": " And if you have an image, the kernel will be slided." }, { "start": 623, "end": 629, "text": " This thing right here will be maybe here will be slided across the image or slid. Is it slid?" }, { "start": 629, "end": 631, "text": " Okay, would be slid across the image." }, { "start": 631, "end": 634, "text": " And then the numbers in here will be convolved with the pixels." }, { "start": 634, "end": 639, "text": " And that will give you the next layers representation." }, { "start": 639, "end": 643, "text": " So whatever the operation convolution operation is, and you'll slide that over." }, { "start": 643, "end": 647, "text": " And that sliding over will give you the values in the next layer." }, { "start": 647, "end": 652, "text": " Now you not only have one kernel, but you actually have many kernels." }, { "start": 652, "end": 655, "text": " Sorry about this. Let's draw that." }, { "start": 655, "end": 665, "text": " So you have more and more kernels." }, { "start": 665, "end": 672, "text": " You have a whole stack of kernels. And how many kernels you have, those are the different kernels are also called your different channels." }, { "start": 672, "end": 676, "text": " Now, the kernels refer to the weights and the channels refer to the image." }, { "start": 676, "end": 681, "text": " But the Ith kernel is going to be convolving the Ith channel of the image." }, { "start": 681, "end": 687, "text": " So at the beginning, the input image has three channels because red, green and blue." }, { "start": 687, "end": 697, "text": " But then the intermediate images can have more channels as you have basically as many as you have kernels in the layer right before." }, { "start": 697, "end": 703, "text": " Okay, and the H and the W means the height and width of the image." 
}, { "start": 703, "end": 709, "text": " So it combined so the image is kind of unrolled across the height or the width in this direction." }, { "start": 709, "end": 711, "text": " So what does batch norm do?" }, { "start": 711, "end": 716, "text": " Batch norm takes, as you can see here, one channel." }, { "start": 716, "end": 720, "text": " And it it takes one channel." }, { "start": 720, "end": 723, "text": " So maybe this image, this is one channel." }, { "start": 723, "end": 727, "text": " Let's just say this is the red channel because I drawn it in red." }, { "start": 727, "end": 736, "text": " It takes that and it calculates the mean one over and the standard deviation of that." }, { "start": 736, "end": 742, "text": " It calculates those two statistics and it uses that to do this centering and scaling operation." }, { "start": 742, "end": 748, "text": " So all of these methods are going to calculate the mean and the variance and then do the same scaling transformation." }, { "start": 748, "end": 752, "text": " The question is just how do you calculate the mean?" }, { "start": 752, "end": 754, "text": " Batch norm does this across the data points." }, { "start": 754, "end": 761, "text": " So it looks at a single feature at a single channel and it asks what's the mean across all the data points?" }, { "start": 761, "end": 769, "text": " What are the data statistics of this channel and what was the mean and standard deviation?" }, { "start": 769, "end": 775, "text": " Now, actually, batch norm, I'm not I didn't even know that in convolutional layer this works like this." }, { "start": 775, "end": 779, "text": " You can also imagine batch norm of really just taking one single feature." }, { "start": 779, "end": 785, "text": " And that means of really just taking one of these things right here." }, { "start": 785, "end": 794, "text": " So if this goes to the back and normalizing across that, the important part is that it is in fact normalizing across the data points." }, { "start": 794, "end": 800, "text": " So it looks at your batch, looks at the mean and the variance in that batch and it normalizes by that." }, { "start": 800, "end": 805, "text": " I think convolutional layers make sense because you have this invariance in height and width and therefore." }, { "start": 805, "end": 807, "text": " Yeah, so that makes sense." }, { "start": 807, "end": 813, "text": " But in a fully connected layer, you'd simply go look at one feature at a time." }, { "start": 813, "end": 815, "text": " Layer norm is different." }, { "start": 815, "end": 821, "text": " Layer norm has basically been proposed as an alternative to batch norm with the same reasoning that this paper has." }, { "start": 821, "end": 827, "text": " So layer norm, as you can see here, it does each data point individually." }, { "start": 827, "end": 833, "text": " So here we only have one data point that is normalized by itself." }, { "start": 833, "end": 839, "text": " So you do this for each data point independently and therefore it's not dependent on the batch size anymore." }, { "start": 839, "end": 844, "text": " But what you'll do is you look across all of the channels right here." }, { "start": 844, "end": 848, "text": " So all of the channels and all of the width and height." }, { "start": 848, "end": 854, "text": " So this entire thing here, this entire thing is basically one channel." }, { "start": 854, "end": 859, "text": " Right. And then the next channel is here of the image and the next." 
}, { "start": 859, "end": 862, "text": " No, that's the next image." }, { "start": 862, "end": 867, "text": " Well, that is a bad drawing because the image is unrolled." }, { "start": 867, "end": 872, "text": " In any case, what you'll do is you look at." }, { "start": 872, "end": 878, "text": " So if you have a filter bank like this, you have an image and the image composed of multiple channels." }, { "start": 878, "end": 880, "text": " Right. This is the red." }, { "start": 880, "end": 883, "text": " And then you'll have the green." }, { "start": 883, "end": 885, "text": " Right. This is in the green." }, { "start": 885, "end": 890, "text": " And then you'll have the blue channel." }, { "start": 890, "end": 898, "text": " And what you'll do is simply you'll calculate the mean across all of the pixels." }, { "start": 898, "end": 905, "text": " And across all of the channels, you just take this whole NumPy array and you just say dot mean." }, { "start": 905, "end": 907, "text": " And that gives you one number." }, { "start": 907, "end": 912, "text": " And it's just whatever that number is, you subtract it and then you say standard deviation and you divide by that." }, { "start": 912, "end": 913, "text": " That's layer norm." }, { "start": 913, "end": 920, "text": " So an entire layers representation of one image is just normalized to the mean." }, { "start": 920, "end": 922, "text": " Now, this seems a bit drastic." }, { "start": 922, "end": 926, "text": " And that's why instance norm did the exact opposite." }, { "start": 926, "end": 935, "text": " They said, wait a minute, instead of normalizing across all of the features, right, we'll go back and do what batch norm does." }, { "start": 935, "end": 937, "text": " Batch norm looks at each feature individually." }, { "start": 937, "end": 942, "text": " So basically, it looks at all of these these different axes in the data distribution." }, { "start": 942, "end": 943, "text": " It looks at them differently." }, { "start": 943, "end": 954, "text": " So if one axis is scaled very widely, we want to normalize that differently than if than the other axis that is just scaled very shortly." }, { "start": 954, "end": 959, "text": " And that's why we'll look at each feature individually like batch norm." }, { "start": 959, "end": 962, "text": " But also, we only look at one data point at a time." }, { "start": 962, "end": 969, "text": " Now, as you can imagine, this doesn't work anymore in in a fully connected network." }, { "start": 969, "end": 974, "text": " This basically works in a convolutional network where you have a feature map channel." }, { "start": 974, "end": 980, "text": " So you look at one individual channel and one data point." }, { "start": 980, "end": 985, "text": " So that means here you would normalize the red channel individually." }, { "start": 985, "end": 990, "text": " You would normalize the green channel individually and you normalize the blue channel individually." }, { "start": 990, "end": 1000, "text": " So the image you're going to end up with is simply the red channel subtracted by its own mean and then divided by its own standard deviation." }, { "start": 1000, "end": 1002, "text": " And just within that data point, right." }, { "start": 1002, "end": 1010, "text": " So maybe I should here say across the number of features or something." }, { "start": 1010, "end": 1012, "text": " So I hope that that's clear." 
}, { "start": 1012, "end": 1021, "text": " So the layer norm drops the dependence on the batch size, but instead says we should normalize across all of the features." }, { "start": 1021, "end": 1030, "text": " And the instance norm says, wait a minute, batch norm had a good idea normalizing only across the features individually because the individual features might have different scales." }, { "start": 1030, "end": 1032, "text": " And we should account for that." }, { "start": 1032, "end": 1036, "text": " But also, we don't want to be dependent on the batch size." }, { "start": 1036, "end": 1038, "text": " And now is this where group norm comes in?" }, { "start": 1038, "end": 1043, "text": " Group norm is basically a mix between layer norm and instance norm." }, { "start": 1043, "end": 1048, "text": " What group norm says, layer norm and instance norm have good ideas." }, { "start": 1048, "end": 1051, "text": " They only go across one sample." }, { "start": 1051, "end": 1052, "text": " They take that." }, { "start": 1052, "end": 1064, "text": " They say, in essence, instance norm has a good idea in that the features should be normalized individually, but it goes sort of too far from it goes too far." }, { "start": 1064, "end": 1071, "text": " You might get not good enough statistics because you're now normalizing each of these things individually." }, { "start": 1071, "end": 1073, "text": " Whereas with layer norm, you're too restricted." }, { "start": 1073, "end": 1083, "text": " You're basically saying that the features, it's fine if the features relative to each other are like this, right." }, { "start": 1083, "end": 1086, "text": " One is maybe very high variance and one is very low variance." }, { "start": 1086, "end": 1088, "text": " Feature norm would keep that." }, { "start": 1088, "end": 1091, "text": " And group norm would say, maybe that's not so good." }, { "start": 1091, "end": 1096, "text": " We should have, we should normalize the individual features, maybe individually." }, { "start": 1096, "end": 1107, "text": " But their argumentation here is that maybe there are some features that by their nature already have the same sort of scaling and variance." }, { "start": 1107, "end": 1109, "text": " They give an example." }, { "start": 1109, "end": 1119, "text": " If you, for example, have a filter, again, we deal with convolutional layers here, and that filter is a let's say an edge filter, right." }, { "start": 1119, "end": 1121, "text": " So a horizontal edge filter." }, { "start": 1121, "end": 1123, "text": " So it's very low value here." }, { "start": 1123, "end": 1126, "text": " And let me mark the high value with blue." }, { "start": 1126, "end": 1129, "text": " So this is a horizontal edge filter." }, { "start": 1129, "end": 1140, "text": " If you slide this over a window and these are high numbers and these are low numbers, it will respond to edges because edges have high, low, high, right." }, { "start": 1140, "end": 1141, "text": " Or vice versa." }, { "start": 1141, "end": 1146, "text": " So it will give you very positive and very negative number every time you slide across an edge." }, { "start": 1146, "end": 1161, "text": " Now you can imagine that in natural images, that filter, whatever image you put in would, and however you normalize, would give you pretty much the same response as a vertical edge filter." 
}, { "start": 1161, "end": 1169, "text": " So the horizontal and the vertical edge filter, you'll see whatever their response is, they're probably about equal in size." }, { "start": 1169, "end": 1178, "text": " So we could expect that in a neural network, there will be groups of filters that together exhibit the same scale." }, { "start": 1178, "end": 1184, "text": " And therefore we can normalize across them like in layer norm." }, { "start": 1184, "end": 1188, "text": " So the more things we normalize across, the better statistics we can gather." }, { "start": 1188, "end": 1195, "text": " That's why instance norm doesn't work because it only normalizes across a very small thing, getting very little statistics." }, { "start": 1195, "end": 1204, "text": " But we should normalize, if we could gather good statistics, we should normalize different features differently." }, { "start": 1204, "end": 1211, "text": " And group norm says, well, since some of the features are almost guaranteed to behave the same, we could normalize across those." }, { "start": 1211, "end": 1216, "text": " Now, of course, you don't know at the beginning which ones those are." }, { "start": 1216, "end": 1226, "text": " But you hope that by doing group norm, by basically at a priori, so at the beginning of training, you decide what the groups are." }, { "start": 1226, "end": 1231, "text": " And naturally, it's just whichever ones are next to each other, those are the groups." }, { "start": 1231, "end": 1239, "text": " And you'll hope that through the training procedure, basically those groups will learn the features that are equal of size." }, { "start": 1239, "end": 1246, "text": " Well, you basically enforce that, so you kind of constrain the architecture to do that." }, { "start": 1246, "end": 1249, "text": " So that's the idea behind group norm." }, { "start": 1249, "end": 1258, "text": " You basically build these groups of channels and then you normalize across those, across the groups of, within the groups of channels," }, { "start": 1258, "end": 1264, "text": " across the entire height and width, only in a single data point." }, { "start": 1264, "end": 1270, "text": " And therefore, you gain the advantage of layer norm, of normalizing within a single data point." }, { "start": 1270, "end": 1278, "text": " You retain the advantage of batch norm, of normalizing across single features." }, { "start": 1278, "end": 1281, "text": " And that's what instance norm attempted." }, { "start": 1281, "end": 1285, "text": " But yeah, so you get the best of both worlds, sort of." }, { "start": 1285, "end": 1287, "text": " That's group norm." }, { "start": 1287, "end": 1291, "text": " And now we go and look what it does." }, { "start": 1291, "end": 1294, "text": " So they say, OK, basically all the normalization techniques do this." }, { "start": 1294, "end": 1296, "text": " They subtract a mean and divide by a standard deviation." }, { "start": 1296, "end": 1298, "text": " That's what we saw." }, { "start": 1298, "end": 1303, "text": " And the difference is just across what you collect, your statistics." }, { "start": 1303, "end": 1308, "text": " So the group norm is the following code in TensorFlow." }, { "start": 1308, "end": 1316, "text": " As you can see, you simply reshape your data and basically expand this part right here where you built, where you put the extra." }, { "start": 1316, "end": 1318, "text": " So this is C." }, { "start": 1318, "end": 1324, "text": " This entire thing used to be C. 
And you divide it into group and index within group." }, { "start": 1324, "end": 1331, "text": " And then you just normalize across that and reshape to the original dimension again." }, { "start": 1331, "end": 1339, "text": " And the important, the cool thing is in batch norm, you have to keep track of these, of these running means," }, { "start": 1339, "end": 1343, "text": " because at test time, you sort of don't want the batch statistic to influence anything." }, { "start": 1343, "end": 1345, "text": " You don't have that here." }, { "start": 1345, "end": 1349, "text": " So you just back propagate through this observation, through this operation." }, { "start": 1349, "end": 1353, "text": " And you don't need to keep these running, running averages going." }, { "start": 1353, "end": 1357, "text": " And you always care, am I in test or am I in train mode right now?" }, { "start": 1357, "end": 1358, "text": " You just do this." }, { "start": 1358, "end": 1361, "text": " This operation is per data point." }, { "start": 1361, "end": 1363, "text": " So it's just part of your model." }, { "start": 1363, "end": 1365, "text": " Right." }, { "start": 1365, "end": 1372, "text": " And they do a an experiment where they have 32 images per GPU." }, { "start": 1372, "end": 1374, "text": " So it's reasonably sized." }, { "start": 1374, "end": 1381, "text": " And they can basically show that the group norm and the batch norm, they compare in their performance." }, { "start": 1381, "end": 1388, "text": " Now, I do usually don't believe the experiments that you see in single papers." }, { "start": 1388, "end": 1391, "text": " But I think this has been replicated a couple of times." }, { "start": 1391, "end": 1395, "text": " Now, you see, this is the train error where group norm even behaves a bit better." }, { "start": 1395, "end": 1398, "text": " And then in the validation error, it behaves a bit worse." }, { "start": 1398, "end": 1408, "text": " But one could say it is it is kind of more closely together than the other methods are to the group norm or to each other." }, { "start": 1408, "end": 1410, "text": " These instance norm and layer norm." }, { "start": 1410, "end": 1415, "text": " So it at least it's better than instance norm and layer norm." }, { "start": 1415, "end": 1423, "text": " And then once you go into the smaller batch size regime, of course, that's where the group norm starts to shine." }, { "start": 1423, "end": 1431, "text": " So if you go from the 32 images per GPU, which is this low black curve here, all the way to two images per GPU." }, { "start": 1431, "end": 1436, "text": " And I believe they could even do one image per GPU with group norm." }, { "start": 1436, "end": 1441, "text": " But of course, you can't do that with batch norm because you need batch statistics." }, { "start": 1441, "end": 1446, "text": " You can see that the performance of batch norm degrades drastically." }, { "start": 1446, "end": 1450, "text": " Whereas with group norm, this experiment is just funny." }, { "start": 1450, "end": 1454, "text": " They just had to do this, even though you know exactly what turns out." }, { "start": 1454, "end": 1459, "text": " So look at the lines are all exactly in the in the same place." }, { "start": 1459, "end": 1468, "text": " I mean, come on, like, you know, you're just having time to probably one of the reviewers was like, but did you really do the experiment?" }, { "start": 1468, "end": 1472, "text": " They put it in." 
}, { "start": 1472, "end": 1474, "text": " So, yeah." }, { "start": 1474, "end": 1482, "text": " So you can see that the batch norm beats the group norm in this setting with the when you have the larger batch sizes." }, { "start": 1482, "end": 1488, "text": " But the group norm pulls ahead quite drastically when you have the smaller batch sizes." }, { "start": 1488, "end": 1490, "text": " And that is the main advantage." }, { "start": 1490, "end": 1498, "text": " So now you can turn to models that require small batch sizes or small batch per worker." }, { "start": 1498, "end": 1505, "text": " And generally, it's a pain in the ass to just keep track of those statistics for test time." }, { "start": 1505, "end": 1514, "text": " They do verify, which I find pretty cool, that this phenomenon of the responses going apart during training in the internal feature maps," }, { "start": 1514, "end": 1517, "text": " batch norm counteracts that." }, { "start": 1517, "end": 1522, "text": " So with batch norm, you'll get actually a convergence of responses during training." }, { "start": 1522, "end": 1528, "text": " So the more you train, the more normalized basically your internal features will be." }, { "start": 1528, "end": 1531, "text": " And they show that this is exactly the same with group norm." }, { "start": 1531, "end": 1535, "text": " So group norm is as it seems, it is a replacement." }, { "start": 1535, "end": 1537, "text": " It's not an addition." }, { "start": 1537, "end": 1540, "text": " It doesn't the gains don't come from different place." }, { "start": 1540, "end": 1549, "text": " It seems to be a substitute for batch norm, though they don't have an experiment where they do both." }, { "start": 1549, "end": 1551, "text": " I believe maybe I'm wrong." }, { "start": 1551, "end": 1554, "text": " Maybe they do." }, { "start": 1554, "end": 1560, "text": " But yeah, it seems like you just kind of have to bring some calmness on some standardization into your signal." }, { "start": 1560, "end": 1572, "text": " And how exactly you do that doesn't seem that important as long as you do it with some precision and some some real overall statistics." }, { "start": 1572, "end": 1573, "text": " Yeah." }, { "start": 1573, "end": 1580, "text": " What I don't like about this is now you have, of course, a new hyper parameter, which is this number of groups." }, { "start": 1580, "end": 1583, "text": " So that that seems rather annoying." }, { "start": 1583, "end": 1589, "text": " And the gains like this usually come from the introductions of new hyper parameters." }, { "start": 1589, "end": 1600, "text": " And that just it's not so it's not that ideal for a method to introduce a new hyper parameter, at least layer norm and instance norm didn't." }, { "start": 1600, "end": 1613, "text": " And now, as you can see, the number of groups is is not super influential, but does have a bit of an influence on the on the performance." }, { "start": 1613, "end": 1619, "text": " So if you go a number of groups or here number of channels per group, of course, these two numbers are inversely related." }, { "start": 1619, "end": 1622, "text": " The more groups you have, the less number of channels per group you have." }, { "start": 1622, "end": 1627, "text": " If you go to one extreme, you will get to the layer norm, basically." }, { "start": 1627, "end": 1632, "text": " So the layer norm is an extreme case of group norm where you just have one group." 
}, { "start": 1632, "end": 1634, "text": " All the channels are in the same group." }, { "start": 1634, "end": 1638, "text": " Then the performance, as you can see here, is quite a bit worse." }, { "start": 1638, "end": 1644, "text": " If you go to the other extreme where every channel is its own group, that's equivalent to instance norm." }, { "start": 1644, "end": 1647, "text": " Again, the performance is quite bad." }, { "start": 1647, "end": 1654, "text": " And somewhere in the middle here with 32 groups is seems to be a good sweet spot." }, { "start": 1654, "end": 1664, "text": " So I don't again I don't like the hyper parameter seems to be some somewhat of a thing where you really have to hit a good value." }, { "start": 1664, "end": 1673, "text": " And well, I guess we'll see over time if that value is always going to be about the same, you know, like the beta two of Adam." }, { "start": 1673, "end": 1680, "text": " It's it's always like people never change it from point nine nine nine because it just tends to work." }, { "start": 1680, "end": 1684, "text": " Or whether that's really going to be another hyper parameter to fit." }, { "start": 1684, "end": 1687, "text": " That seems to be annoying." }, { "start": 1687, "end": 1694, "text": " They do a bunch of ablation studies and tests on, as we said, the, for example, object detection and segmentation." }, { "start": 1694, "end": 1703, "text": " So so models where you must go almost to small batch sizes just because so video classification." }, { "start": 1703, "end": 1707, "text": " So if you want to classify an entire video, that's a lot of data." }, { "start": 1707, "end": 1711, "text": " And you almost have to go small batch sizes for that." }, { "start": 1711, "end": 1713, "text": " They do a lot of experiments." }, { "start": 1713, "end": 1722, "text": " And generally, as I said, I believe these results for group norm have been replicated and across the community a bunch of times now." }, { "start": 1722, "end": 1732, "text": " And I would definitely consider group norm if you are thinking of a especially a distributed machine learning project." }, { "start": 1732, "end": 1735, "text": " All right. With that, I hope you enjoyed this paper." }, { "start": 1735, "end": 1737, "text": " I've been talking for way too long now." }, { "start": 1737, "end": 1739, "text": " I wish you a nice day." }, { "start": 1739, "end": 1745, "text": " If you haven't already, please subscribe, like, share, comment or whatever you feel like doing." }, { "start": 1745, "end": 1766, "text": " Bye bye." } ]
Nfry2b4RFI4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Investigating Human Priors for Playing Video Games (Paper & Demo)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "rl", "reinforcement learning", "deep rl", "human", "prior", "objects", "game", "video game", "key", "visuals", "enemy", "ladder", "gravity", "ablation" ]
Why are humans so good at video games? Maybe it's because a lot of games are designed with humans in mind. What happens if we change that? This paper removes the influence of human priors from a game and ends up with a pretty fun experience. Paper: https://arxiv.org/abs/1802.10217 Website: https://rach0012.github.io/humanRL_website/ Code: https://github.com/rach0012/humanRL_prior_games Abstract: What makes humans so good at solving seemingly complex video games? Unlike computers, humans bring in a great deal of prior knowledge about the world, enabling efficient decision making. This paper investigates the role of human priors for solving video games. Given a sample game, we conduct a series of ablation studies to quantify the importance of various priors on human performance. We do this by modifying the video game environment to systematically mask different types of visual information that could be used by humans as priors. We find that removal of some prior knowledge causes a drastic degradation in the speed with which human players solve the game, e.g. from 2 minutes to over 20 minutes. Furthermore, our results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient game-play. Videos and the game manipulations are available at this https URL Authors: Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Thomas L. Griffiths, Alexei A. Efros Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hey there, what's going on today? We're looking at Investigating Human Priors for Playing Video Games by Rachit Dubey, Pulkit Agrawal, Deepak Pathak, Tom Griffiths and Alexei Efros. So there is a paper to go with this, but I actually don't want to get into the paper too much in order to not reveal too much of what's coming. But basically they're trying to investigate what makes video games work for humans. So what do humans pay attention to? What priors bring humans into a video game? And the fun thing is they've created these games where they ablate these individual priors. And we are going to play them. So the original game right here, as you can see, is kind of this Montezuma's Revenge type of game. So you only need the arrow keys. If you go to a bad blob like this, then you die. And you can jump on them and you can use the ladders and the spikes. They'll hurt you if you jump on them. And also if you fall down between the platforms. So what you've got to do is basically get the key, then go to the door over here and bada boom. Cool. So let's try out. So they basically ablate different things here. Mask semantics means that you don't know what the objects are anymore. So you might go over here and you might be like, what's this green thing? Can I jump on it? Oh. So we're probably a bit biased because we've seen the game before. So we know that the pink ones are the bad ones. And that's the key. So we should probably get it. But you can imagine that it is a bit harder, but you could still solve it right. Reverse semantics is very interesting if you play it for the first time, because all of a sudden now there's the coins. Oh, and the fire. But I think humans could probably still figure it out with like some minimal trial and error. I do this ice cream cone. You realize, OK, now it gets interesting because right now we've always had sort of we know that there's an object and there's no object on the platforms. But now these are masked. So basically you don't know what's like a relevant object and what isn't. So I know that there is like a bad thing here and a bad thing down to the left. So I'm going to guess these light. These light pink things are the bad things. Yeah. Yeah. These are the ladders. Cool. Bad thing right here. We are rocking this. OK. Key. Where's that? That's the key and the door. So it gets harder because you kind of have to remember the colors right. I know that the light pink ones are the bad squares. Still, still solvable. So let's jump over these on the left because these get really actually it's going to here. So masked affordances. What they're saying is that, OK, you can kind of from the way something looks, you can tell what you can do with it. For example, the platforms you can jump on them and the background is sort of empty space. So you know that there's nothing much happening there. So they're trying to take that away by simply retexturing all the objects here such that you don't know how you can interact with them. And it does get significantly harder because, OK, so these green ones are the platforms right here. So I can still see that that must be the ladder. Right. You can imagine if you were playing this again for the first time that this is significantly more difficult, but you still see the key and the green ones being the platforms. We got this. Now it gets harder. Masked visual similarity. 
So this is where they say maybe as we did so far, maybe you as a human can kind of make out that things that are visually similar to each other can you can do the same things with. Like we said, the green ones are probably the platforms. So they took it away. Gee. OK, so that must be OK. Can't go here. Fell down. Let's try again. This is a platform. Is this one here? Yes. These are platforms. Ah, that was easy. Too easy. Too easy running into that bad blob there. OK, the ladders are still like this, but then OK. Yeah, this gets harder as as you can. Gee. OK, I'm too dumb to remember from before. I'm like the ideal subject because I don't remember. How did this work? I'm going to solve this just so you know, even if this video gets to 50 minutes, I'm going to make it through this. OK, here we can. OK, see my short term memory is so bad. OK, we got the key. Now just get over to the door. Doors over there. Yeah. OK, now let's wipe the short term memory again. Here changed ladder interaction where they basically say, OK, one of the things that you could know from the real world is how these objects work in the real world. So there's not really any pink blobs with evil faces. There might be spikes. Yes. But ladder is something, you know, that works. So if you want to go up here, that doesn't work. So you kind of have to figure. OK. So you have to go kind of left and right to go up the ladders. And so that one I actually tried before and I figured that out pretty quickly. I think humans are able to figure that out fairly quickly because you kind of on the ladder, right? You can actually go down easily. You're on the ladder and then it kind of doesn't work. And then you kind of try to wiggle because there's two of them. I don't think that's that's necessarily super hard. And now it feels a bit like, you know, the Super Mario maker thing where where people just try to make levels as hard as possible and trick you with trick blocks and invisible stuff. This is hard. So the direction of gravity. So now the left key jumps right here, this key. So this is like this is extremely, extremely hard because I have to like think about every move I make before I do it. And OK, no, no, no, this is so unintuitive for real. Yeah, got it. Got it. OK. Really? Try this out. This is crazy. Yeah. Yeah. OK, so the last thing is we combine all of it, I guess, except the changed gravity and changed interaction. So now all the priors, all the visual object priors removed. This is this is King's Discipline right here. OK, so we figured out where the blue. Cool. So where's the next? This is the next platform. Where's the OK. There must be like a yeah, take that. OK, but we know the next. So we can't really generalize from this because we know the next bad blob isn't going to be the same color. Right. OK. The white done this. I know there's a bad thing here, but we'd have to figure this out. So basically kind of the point of the paper, I think, is to say that this is what you're doing to our. No spikes. This is what you're doing to our algorithms. If you're in the most case, so they simply have to go and to basically try every single thing and remember what worked and what didn't work. Now, of course, the algorithms can also exhibit like can also use the visual similarity. That was the key. Yeah. Yeah. Let's go to the door door door. No, there is a there's like a bad thing here. Right. No spikes. OK. 
So either we build these priors into the algorithms if we want to get them to human level or we have some sort of learning these priors before we let the people go onto a paper or I don't know. Or we just take it that algorithms have to figure all of this out by themselves. So they ablate these things right here. You can see the masked object identity makes kind of the biggest difference in terms of time, number of deaths, the number of states explored. Reverse semantics. I believe these are humans that are trying it for the first time and they're just like, oh, an ice cream. So it can also hurt. Right. The algorithm wouldn't be super impressed by it looking like an ice cream. But the human is very much and the crazy thing here, you can see exploration, the original game and then exploration in the no object prior game, especially if you play this for the first time. This is just mad. Like no freaking way. I would actually like love to see video games like this coming out. This would be the worst selling video game of all times where dynamically it just removes these kind of priors. But it's a I think it's a really fun way to investigate what humans learn and what they already bring into the game. So here they have another game and they do this same thing on an RL agent. And you see here the RL agent just don't care about any of these things except visual similarity. So visual similarity helps the RL agent to generalize across the game. So if you see a bad blob, the next bad blob will look similar. And that's sort of kind of an invariance that we know they can exploit since they're using convolutional neural networks and so on. But I think it is really drawing attention to the importance of priors prior knowledge in reinforcement learning and human knowledge. So in this game right here, where you have these hidden rewards that the human doesn't see, right? But if they kind of touch it, they're kind of coins and the human performs way worse than the RL agent because the RL agent will actually try those things out. And the human having the prior that the black thing is like they don't see the yellow boxes that the black thing is just empty space. They won't even explore that. So maybe, you know, that is something to think about with respect to building RL agents. All right. I don't want to go into the paper too much. It's a very cool paper, but we're here to play games and I invite you to read the paper. Check out the website. Try these games for yourself. They're a lot of fun, especially if you try them first time. And bye bye.
[ { "start": 0, "end": 11, "text": " Hey there, what's going on today? We're looking at investigating human priors for playing video games by Rachid Duby, Pulkit Agrawal, Deepak Patak, Tom Griffiths and Alexei Aeferos." }, { "start": 11, "end": 19, "text": " So there is a paper to go with this, but I actually don't want to get into the paper too much in order to not reveal too much of what's coming." }, { "start": 19, "end": 26, "text": " But basically they're trying to investigate what makes video games work for humans. So what do humans pay attention to?" }, { "start": 26, "end": 34, "text": " What priors bring humans into a video game? And the fun thing is they've created these games where they ablate these individual priors." }, { "start": 34, "end": 42, "text": " And we are going to play them. So the original game right here, as you can see, is kind of this Montezuma's Revenge type of game." }, { "start": 42, "end": 48, "text": " So you only need the arrow keys. If you go to a bad blob like this, then you die." }, { "start": 48, "end": 53, "text": " And you can jump on them and you can use the ladders and the spikes. They'll hurt you if you jump on them." }, { "start": 53, "end": 62, "text": " And also if you fall down between the platforms. So what you've got to do is basically get the key, then go to the door over here and bada boom." }, { "start": 62, "end": 69, "text": " Cool. So let's try out. So they basically ablate different things here." }, { "start": 69, "end": 74, "text": " Mask semantics means that you don't know what the objects are anymore." }, { "start": 74, "end": 78, "text": " So you might go over here and you might be like, what's this green thing? Can I jump on it? Oh." }, { "start": 78, "end": 83, "text": " So we're probably a bit biased because we've seen the game before." }, { "start": 83, "end": 90, "text": " So we know that these are the pink ones are the bad ones. And that's the key. So we should probably get it." }, { "start": 90, "end": 95, "text": " But you can imagine that it is a bit harder, but you could still solve it right." }, { "start": 95, "end": 104, "text": " Reverse semantics is very interesting if you play it for the first time, because all of a sudden now there's the coins. Oh, and the fire." }, { "start": 104, "end": 109, "text": " But I think humans could probably still figure it out with like some minimal trial and error." }, { "start": 109, "end": 121, "text": " I do this ice cream cone. You realize, OK, now it gets interesting because right now we've always had sort of we know that there's an object and there's no object on the platforms." }, { "start": 121, "end": 126, "text": " But now these are masked. So basically you don't know what's like a relevant object and what isn't." }, { "start": 126, "end": 135, "text": " So I know that there is like a bad thing here and a bad thing down to the left. So I'm going to guess these light." }, { "start": 135, "end": 143, "text": " These light pink things are the bad things. Yeah. Yeah. These are the ladders. Cool. Bad thing right here." }, { "start": 143, "end": 149, "text": " We are rocking this. OK. Key. Where's that? That's the key and the door." }, { "start": 149, "end": 156, "text": " So it gets harder because you kind of have to remember the colors right. I know that the light pink ones are the bad squares." }, { "start": 156, "end": 163, "text": " Still, still solvable. So let's jump over these on the left because these get really actually it's going to here." 
}, { "start": 163, "end": 171, "text": " So masked affordances. What they're saying is that, OK, you can kind of from the way something looks, you can tell what you can do with it." }, { "start": 171, "end": 179, "text": " For example, the platforms you can jump on them and the background is sort of empty space. So you know that there's nothing much happening there." }, { "start": 179, "end": 187, "text": " So they trying to take that away by simply retexturing all the objects here such that you don't know how you can interact with them." }, { "start": 187, "end": 197, "text": " And it does get significantly harder because OK, so these these green ones are these green ones are the platforms right here." }, { "start": 197, "end": 202, "text": " So I can still see that that must be the latter. Right." }, { "start": 202, "end": 211, "text": " You can imagine if you were playing this again for the first time that this is significantly more difficult, but you still see the key and the green ones being the platforms." }, { "start": 211, "end": 216, "text": " We got this. Now it gets harder. Masked visual similarity." }, { "start": 216, "end": 228, "text": " So this is where they say maybe as we did so far, maybe you as a human can kind of make out that things that are visually similar to each other can you can do the same things with." }, { "start": 228, "end": 233, "text": " Like we said, the green ones are probably the platforms. So they took it away." }, { "start": 233, "end": 237, "text": " Gee. OK, so that must be OK. Can't go here." }, { "start": 237, "end": 246, "text": " Fell down. Let's try again. This is a platform. Is this one here? Yes. These are platforms." }, { "start": 246, "end": 252, "text": " Ah, that was easy. Too easy. Too easy running into that bad blob there." }, { "start": 252, "end": 258, "text": " OK, the ladders are still like this, but then OK." }, { "start": 258, "end": 265, "text": " Yeah, this gets harder as as you can. Gee. OK, I'm too dumb to remember from before." }, { "start": 265, "end": 269, "text": " I'm like the ideal subject because I don't remember." }, { "start": 269, "end": 273, "text": " How did this work?" }, { "start": 273, "end": 280, "text": " I'm going to solve this just so you know, even if this video gets to 50 minutes, I'm going to make it through this." }, { "start": 280, "end": 286, "text": " OK, here we can. OK, see my short term memory is so bad." }, { "start": 286, "end": 292, "text": " OK, we got the key. Now just get over to the door. Doors over there. Yeah." }, { "start": 292, "end": 296, "text": " OK, now let's wipe the short term memory again." }, { "start": 296, "end": 306, "text": " Here changed ladder interaction where they basically say, OK, one of the things that you could know from the real world is how these objects work in the real world." }, { "start": 306, "end": 313, "text": " So there's not really any pink blobs with evil faces. There might be spikes. Yes. But ladder is something, you know, that works." }, { "start": 313, "end": 318, "text": " So if you want to go up here, that doesn't work. So you kind of have to figure. OK." }, { "start": 318, "end": 326, "text": " So you have to go kind of left and right to go up the ladders. And so that one I actually tried before and I figured that out pretty quickly." }, { "start": 326, "end": 330, "text": " I think humans are able to figure that out fairly quickly because you kind of on the ladder, right?" }, { "start": 330, "end": 335, "text": " You can actually go down easily. 
You're on the ladder and then it kind of doesn't work." }, { "start": 335, "end": 343, "text": " And then you kind of try to wiggle because there's two of them. I don't think that's that's necessarily super hard." }, { "start": 343, "end": 355, "text": " And now it feels a bit like, you know, the Super Mario maker thing where where people just try to make levels as hard as possible and trick you with trick blocks and invisible stuff." }, { "start": 355, "end": 362, "text": " This is hard. So the direction of gravity. So now the left key jumps right here, this key." }, { "start": 362, "end": 373, "text": " So this is like this is extremely, extremely hard because I have to like think about every move I make before I do it." }, { "start": 373, "end": 381, "text": " And OK, no, no, no, this is so unintuitive for real." }, { "start": 381, "end": 391, "text": " Yeah, got it. Got it. OK. Really? Try this out. This is crazy. Yeah. Yeah." }, { "start": 391, "end": 398, "text": " OK, so the last thing is we combine all of it, I guess, except the changed gravity and changed interaction." }, { "start": 398, "end": 403, "text": " So now all the priors, all the visual object priors removed." }, { "start": 403, "end": 411, "text": " This is this is King's Discipline right here. OK, so we figured out where the blue. Cool." }, { "start": 411, "end": 416, "text": " So where's the next? This is the next platform. Where's the OK." }, { "start": 416, "end": 422, "text": " There must be like a yeah, take that. OK, but we know the next." }, { "start": 422, "end": 428, "text": " So we can't really generalize from this because we know the next bad blob isn't going to be the same color." }, { "start": 428, "end": 434, "text": " Right. OK. The white done this." }, { "start": 434, "end": 438, "text": " I know there's a bad thing here, but we'd have to figure this out." }, { "start": 438, "end": 444, "text": " So basically kind of the point of the paper, I think, is to say that this is what you're doing to our." }, { "start": 444, "end": 449, "text": " No spikes. This is what you're doing to our algorithms." }, { "start": 449, "end": 460, "text": " If you're in the most case, so they simply have to go and to basically try every single thing and remember what worked and what didn't work." }, { "start": 460, "end": 466, "text": " Now, of course, the algorithms can also exhibit like can also use the visual similarity." }, { "start": 466, "end": 472, "text": " That was the key. Yeah. Yeah. Let's go to the door door door. No, there is a there's like a bad thing here." }, { "start": 472, "end": 477, "text": " Right. No spikes." }, { "start": 477, "end": 499, "text": " OK." }, { "start": 499, "end": 515, "text": " So either we build these priors into the algorithms if we want to get them to human level or we have some sort of learning these priors before we let the people go onto a paper or I don't know." }, { "start": 515, "end": 521, "text": " Or we just take it that algorithms have to figure all of this out by themselves." }, { "start": 521, "end": 532, "text": " So they ablate these things right here. You can see the masked object identity makes kind of the biggest difference in terms of time, number of deaths, the number of states explored." }, { "start": 532, "end": 538, "text": " Reverse semantics. I believe these are humans that are trying it for the first time and they're just like, oh, an ice cream." }, { "start": 538, "end": 544, "text": " So it can also hurt. Right. 
The algorithm wouldn't be super impressed by it looking like an ice cream." }, { "start": 544, "end": 557, "text": " But the human is very much and the crazy thing here, you can see exploration, the original game and then exploration in the no object prior game, especially if you play this for the first time." }, { "start": 557, "end": 565, "text": " This is just mad. Like no freaking way. I would actually like love to see video games like this coming out." }, { "start": 565, "end": 578, "text": " This would be the worst selling video game of all times where dynamically it just removes these kind of priors. But it's a I think it's a really fun way to investigate what humans learn and what they already bring into the game." }, { "start": 578, "end": 587, "text": " So here they have another game and they do this same thing on an RL agent. And you see here the RL agent just don't care about any of these things except visual similarity." }, { "start": 587, "end": 596, "text": " So visual similarity helps the RL agent to generalize across the game. So if you see a bad blob, the next bad blob will look similar." }, { "start": 596, "end": 603, "text": " And that's sort of kind of an invariance that we know they can exploit since they're using convolutional neural networks and so on." }, { "start": 603, "end": 610, "text": " But I think it is really drawing attention to the importance of priors prior knowledge in reinforcement learning and human knowledge." }, { "start": 610, "end": 623, "text": " So in this game right here, where you have these hidden rewards that the human doesn't see, right? But if they kind of touch it, they're kind of coins and the human performs way worse than the RL agent because the RL agent will actually try those things out." }, { "start": 623, "end": 632, "text": " And the human having the prior that the black thing is like they don't see the yellow boxes that the black thing is just empty space." }, { "start": 632, "end": 640, "text": " They won't even explore that. So maybe, you know, that is something to think about with respect to building RL agents." }, { "start": 640, "end": 646, "text": " All right. I don't want to go into the paper too much. It's a very cool paper, but we're here to play games and I invite you to read the paper." }, { "start": 646, "end": 652, "text": " Check out the website. Try these games for yourself. They're a lot of fun, especially if you try them first time." }, { "start": 652, "end": 662, "text": " And bye bye." } ]
r8wiBA3ZaQE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] GPT-4 Rumors | AI Mind Reading | Neuron Interaction Solved | AI Theorem Proving
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "deep learning tutorial", "introduction to deep learning", "what is deep learning", "ml news", "mlnews", "kilcher news", "ai news", "gpt4", "gpt 4", "gpt 4 rumors", "gpt-4", "mind reading", "ai mind reading", "mind reading machine learning", "machine learning news", "metagenomics", "byzantine", "bzyantine reviewer" ]
#ai #mlnews #gpt4 Your weekly news from the AI & Machine Learning world. OUTLINE: 0:00 - Introduction 0:25 - AI reads brain signals to predict what you're thinking 3:00 - Closed-form solution for neuron interactions 4:15 - GPT-4 rumors 6:50 - Cerebras supercomputer 7:45 - Meta releases metagenomics atlas 9:15 - AI advances in theorem proving 10:40 - Better diffusion models with expert denoisers 12:00 - BLOOMZ & mT0 13:05 - ICLR reviewers going mad 21:40 - Scaling Transformer inference 22:10 - Infinite nature flythrough generation 23:55 - Blazing fast denoising 24:45 - Large-scale AI training with MultiRay 25:30 - arXiv to include Hugging Face spaces 26:10 - Multilingual Diffusion 26:30 - Music source separation 26:50 - Multilingual CLIP 27:20 - Drug response prediction 27:50 - Helpful Things ERRATA: HF did not acquire spaces, they launched spaces themselves and supported Gradio from the start. They later acquired Gradio. References: AI reads brain signals to predict what you're thinking https://mind-vis.github.io/?s=09&utm_source=pocket_saves https://neurosciencenews.com/bmi-internal-speech-21837/ Closed-form solution for neuron interactions https://twitter.com/ramin_m_h/status/1592585672606769153/photo/1 https://github.com/raminmh/CfC https://github.com/raminmh/CfC/blob/main/torch_cfc.py GPT-4 rumors https://thealgorithmicbridge.substack.com/p/gpt-4-rumors-from-silicon-valley?utm_source=pocket_reader Cerebras supercomputer https://www.cerebras.net/andromeda/ Meta releases metagenomics atlas https://ai.facebook.com/blog/protein-folding-esmfold-metagenomics/ https://www.genome.gov/genetics-glossary/Metagenomics AI advances in theorem proving https://ai.facebook.com/blog/ai-math-theorem-proving/ https://marketplace.visualstudio.com/items?itemName=jroesch.lean Better diffusion models with expert denoisers https://deepimagination.cc/eDiffi/ BLOOMZ & mT0 https://arxiv.org/abs/2211.01786?utm_source=pocket_reader https://huggingface.co/bigscience/bloomz?text=Suggest+at+least+five+related+search+terms+to+%22M%E1%BA%A1ng+neural+nh%C3%A2n+t%E1%BA%A1o%22. 
ICLR reviewers going mad https://twitter.com/XiangruTang/status/1589703605098975237?utm_source=pocket_reader https://twitter.com/BlancheMinerva/status/1588164585961422849?utm_source=pocket_reader https://openreview.net/forum?id=pfuqQQCB34 https://twitter.com/peter_richtarik/status/1591408710366408706?utm_source=pocket_reader Scaling Transformer inference https://arxiv.org/abs/2211.05102 Infinite nature flythrough generation https://ai.googleblog.com/2022/11/infinite-nature-generating-3d.html?utm_source=pocket_reader Blazing fast denoising https://github.com/dome272/Paella https://arxiv.org/abs/2211.07292 Large-scale AI training with MultiRay https://ai.facebook.com/blog/multiray-large-scale-AI-models/ arXiv to include Hugging Face spaces https://blog.arxiv.org/2022/11/17/discover-state-of-the-art-machine-learning-demos-on-arxiv/ Multilingual Diffusion https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltDiffusion Music source separation https://github.com/facebookresearch/demucs https://arxiv.org/abs/2211.08553 Multilingual CLIP https://twitter.com/rom1504/status/1593719037808320513 Drug response prediction https://phys.org/news/2022-10-ai-accurately-human-response-drug.html https://huggingface.co/Onodofthenorth/SD_PixelArt_SpriteSheet_Generator https://huggingface.co/spaces/ronvolutional/sd-spritesheets https://github.com/daspartho/prompt-extend https://huggingface.co/blog/fine-tune-whisper https://twitter.com/CarsonKatri/status/1585412662724272128 https://github.com/carson-katri/dream-textures/ https://www.youtube.com/playlist?list=PLzvYlJMoZ02Dxtwe-MmH4nOB5jYlMGBjr https://github.com/xl0/lovely-tensors https://github.com/jerryjliu/gpt_index https://colab.research.google.com/drive/1o1qYJcFeywzCIdkfKJy7cTpgZTCM2EI4 https://dagshub.com/blog/launching-data-streaming-and-upload/ https://dagshub.com/blog/build-an-end-2-end-active-learning-pipeline-part-1/ https://github.com/run-ai/genv https://arxiv.org/abs/2210.14868 https://github.com/timeseriesAI/tsai https://medium.com/@yangyou_berkeley/diffusion-pretraining-and-hardware-fine-tuning-can-be-almost-7x-cheaper-85e970fe207b https://medium.com/@hpcaitech/accelerating-structure-prediction-of-protein-monomers-and-multimer-by-11-times-769715dcb5b5 https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion https://arxiv.org/abs/2211.03726 https://github.com/Deci-AI/super-gradients https://github.com/facebookresearch/shumai https://github.com/huggingface/safetensors https://github.com/google/learned_optimization/tree/main/learned_optimization/research/general_lopt https://github.com/NVIDIA-Merlin/dataloader https://loda-lang.org/ https://loda-lang.org/edit/ https://github.com/EelcoHoogendoorn/numga https://arxiv.org/abs/2210.07316v1 https://huggingface.co/spaces/mteb/leaderboard https://twitter.com/natfriedman/status/1575631194032549888 https://github.com/nat/natbot
Rumors of GPT-4 are in the air, neuron transmission is now solved in closed form, and mind reading is a thing now. It's Monday and welcome to ML News. Hello and welcome to ML News. This is your regular update of what's going on in the machine learning and AI world. Our first story is the most interesting one. Brain reading is more and more becoming a thing. There is a paper called Seeing Beyond the Brain: Conditional Diffusion Models with Sparse Masked Modeling for Vision Decoding. In this paper, the authors give a visual stimulus to a subject, a real human, and then look at their brain waves. This is non-invasive, these are fMRI brain scans. And from that reading of the fMRI, they're able to decode what the person is seeing. You can see right here on the top, you have visual stimuli. And on the bottom, you have the reconstructed images. Now what you'll be able to see is that the pixels don't exactly match. However, the semantic content is very often the same. Now this is done via aligning the latent spaces of the encoders for the brain data and encoders for images. And this has been a long standing problem because the training data that exists to map what people are seeing from their brain waves to the image space is just super sparse. But the authors here go around that by pre-training on unlabeled fMRI data and first getting a very, very good autoencoder going on that data. Then the latent space can be determined, compressed. And then from that latent space, we can learn a conditional image diffusion decoder in order to map the encoding of the brain waves to the visual stimuli. So the paradigm that we see in deep learning, where you want to do some unsupervised pre-training first, because you have much more unlabeled data, and only then include the task specific data and learn that on top of the unsupervised pre-trained models, also holds in the field of brain computer interfaces, apparently. That's pretty cool, that we're more and more getting the chance to peek into people's brains. Now this isn't yet a full thought reader or anything like this. Essentially, they disambiguate between, I believe, some hundred different classes of labels, but it's still very, very cool that you can essentially reconstruct, just from reading brain waves, what kind of image the person is seeing and what is in the image. In a related article, NeuroscienceNews.com writes that a brain machine interface device predicts internal speech. Now this is a little bit different in that it's actually invasive. So this is an interface directly to the brain, but it is able to predict internal speech, which means speech that you just internally think to yourself. It is able to decode that. Now, it is not able to decode arbitrary speech; I believe they go up to about eight words or something like this. So it's not yet exactly super accurate, but we are making big, big progress on that front. Alright, next news. Ramin Hasani writes that they've published a new article in Nature Machine Intelligence and solved a differential equation that's been long standing without a closed form solution. We now have that closed form solution, and it concerns the interactions between neurons. This is a major benefit for people who want to implement biologically inspired, sort of biologically plausible neural networks, because previously, you'd have to have some sort of an ODE solver in order to even model that connection properly. And now that there's a closed form solution, you can essentially just forward and backprop through that formula. 
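To make that concrete, here is a rough sketch of what such a closed-form continuous-time cell can look like: a sigmoid gate in the elapsed time t blends two learned heads, so evaluating the state after an arbitrary time gap is a single formula rather than an ODE solver loop. This is my own illustrative reading of the idea, not the authors' code; all names are invented, and their repository is the place to look for the real implementation.

import torch
import torch.nn as nn

class ClosedFormCell(nn.Module):
    # Illustrative sketch of a closed-form continuous-time style cell.
    # A sigmoid gate in the elapsed time t interpolates between two learned
    # heads, so no numerical ODE solver is needed for any time gap t.
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.f = nn.Linear(in_dim + hidden_dim, hidden_dim)  # time-constant head
        self.g = nn.Linear(in_dim + hidden_dim, hidden_dim)  # behavior near t = 0
        self.h = nn.Linear(in_dim + hidden_dim, hidden_dim)  # behavior for large t

    def forward(self, x, hidden, t):
        z = torch.cat([x, hidden], dim=-1)
        gate = torch.sigmoid(-self.f(z) * t)  # closed-form time dependence
        # an ordinary differentiable expression: forward and backprop as usual
        return gate * torch.tanh(self.g(z)) + (1 - gate) * torch.tanh(self.h(z))

A call like cell(x_t, hidden, dt) steps the hidden state by an arbitrary time gap dt, which is also what makes this style of model attractive for irregularly sampled time series.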
And the absolute coolest thing is that they have implemented this in both PyTorch and TensorFlow. So you can technically build this directly into your architectures today. Now it's not guaranteed to be like a lot better than what we currently have in terms of neuron-to-neuron connections. But that's not the point. The point is to get to a place where we can simulate biologically plausible neural networks as well as possible. And from those, potentially learn something about the brain, and we might actually get some inspiration for how to even improve our artificial neural network architectures from this. So check out the paper and the repository in case you're interested. Alberto Romero on Substack has an article called GPT-4 Rumors From Silicon Valley. This is a summary of things that people, whatever people means, talk about currently around GPT-4. So OpenAI has been announcing like tiny bits of the next iteration of their language models here and there. And there used to be an interview by Sam Altman where he said GPT-4 isn't really going to be that much bigger than GPT-3. And it's probably still going to be in the text domain, it's probably going to be a bit more aligned to humans, a bit more, you know, learning from human feedback and so on. And people were kind of like a tiny bit disappointed, I guess, because it's not a, we're going to build the next giant thing, announcement. But now more and more rumors are coming out that in fact, GPT-4 might very well be what they claim: colossal. So another scale up of two orders of magnitude or something like this in terms of numbers of parameters, or even three orders of magnitude, although some rumors claim that it is going to be sparse. So there's not really like a one to one comparison. On the other hand, there are also a lot of rumors that claim that GPT-4 is going to be multimodal after all. So including text, images, videos, and so on, basically anything they can get their fingers on. So we'll see which one of these turns out to be true. It's very well possible that they first aimed at just sort of improving GPT-3, and then all of a sudden, with recent developments around diffusion models and so on, they've now gone into the direction of, you know, let's just do another giant leap. And from people who have apparently spoken to other people who have apparently tried the new model, or a precursor to the new GPT-4, they say that GPT-4 will be just as much an improvement over GPT-3 as GPT-3 was over GPT-2. And in case you remember, GPT-3 was a giant improvement over GPT-2. Now, is this going to be AGI and solve all our problems? Probably not. But in case this is true, in case it is really the same size of step from GPT-2 to GPT-3 as it is from GPT-3 to the new GPT-4, then I think we're in for pretty amazing times. In any case, rumors be rumors. And I guess we'll only know when we actually see it. The new model is rumored to be released sometime between December and February. So the wait isn't going to be that long. Now related to this, OpenAI is also rumored to collaborate with Cerebras. And Cerebras in turn has just released their biggest supercomputer to date, which is called Andromeda and has 13.5 million cores. Now Cerebras is a company that builds extremely large chips; they want to do as much as they can, like, on a single chip. And that's why their chips are like, I think they're about yay big, I'm not exactly sure. 
But this absolute supercomputer is just comprised of 16 Cerebras CS-2 systems. So that should give you already an indication of just how big their individual systems already are; now connecting them makes for a ginormous supercomputer. Now here on the website, it says get demo, but I guess for most of you, it's not really going to be an option to go into business with this kind of scale. But for some of you, it might be, and you might very well want to click that button. The Meta research blog announces the ESM Metagenomic Atlas, the first view of the dark matter of the protein universe. A lot of protein folding work has been done recently with AlphaFold and ESMFold, and now Meta releases a database of what's called metagenomics. Metagenomics is essentially, if you just go outside and you pick up a piece of dirt, there's going to be like a ton of microbes, a ton of bacteria, a ton of organic material in there. And all of that genomic material isn't necessarily something you find in like the Human Genome Project or something like this, yet it's still very important, for example, for ecology, for medicine, but also for human well being. So this Metagenomic Atlas is the first database that reveals the structures of the metagenomic world at the scale of hundreds of millions of proteins. You can explore that; there is a link to the Atlas right here. If you're anywhere near this world of protein folding, I guess this is a very exciting time. And I'm also excited for the progress we make on other frontiers, rather than just scaling up and producing more stories about unicorns. Like, for all the criticisms that these big models get, and the pressure to just scale and scale and scale, they do every now and then deliver us something like this, something that's absolutely undeniably useful for some natural science out there. And as we get better with our core research, even if that's on pictures of cats, I strongly believe that this will greatly benefit adjacent fields such as biology, mathematics, physics, chemistry, and more of the other sciences. Also on the Meta AI blog, they released a blog post called Teaching AI Advanced Mathematical Reasoning. Now I've dealt before with some of the papers that Meta AI had in this regard, where they tried to come up with systems that use a prover. So there are these things called prover systems or proof assistants, where you essentially formalize your whole mathematics input, spelling out everything super formally, super descriptively, super detailed, and then you can use the system to search for new proofs by applying some proof strategies here and there. So you can say, I now want to do a contraposition of two things, and so on. However, as you'll quickly discover, the amount of strategies that you can apply to a given statement to search for a proof is really, really huge. And that leaves you essentially with a search problem. So this paper uses essentially a variant of Monte Carlo tree search, the same thing that AlphaGo uses in order to determine the next moves in a Go game, in order to determine the next proof strategy or the next proof step that should be applied in order to reach a given target statement. Again, very cool that what initially dealt with a bunch of games, and was really flashy because we can now solve Go and chess much better, has developed into something that is of actual use in an adjacent field, in this case mathematics. So very cool. Check out the paper if you are interested. 
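To give a feel for the search problem, here is a toy best-first sketch of the loop just described: a learned model scores how promising each proof state is, and the search repeatedly expands the most promising state with the available tactics. This is a deliberately simplified stand-in, not the paper's algorithm (real systems handle the AND/OR structure of subgoals and keep MCTS-style visit statistics), and the interface names are invented.

import heapq

def proof_search(goal, tactics, apply_tactic, score, max_expansions=10000):
    # goal: the target statement to prove; tactics: candidate proof steps.
    # apply_tactic(state, tactic) returns the remaining subgoals as a list,
    #   [] if the tactic closes the goal, or None if it does not apply.
    # score(state): learned estimate of how promising a state is (higher is better).
    frontier = [(-score(goal), 0, goal, [])]  # heapq is a min-heap, so negate
    tie = 1  # tie-breaker so states themselves are never compared
    for _ in range(max_expansions):
        if not frontier:
            return None
        _, _, state, steps = heapq.heappop(frontier)
        for tactic in tactics:
            subgoals = apply_tactic(state, tactic)
            if subgoals is None:
                continue              # tactic does not apply to this state
            if not subgoals:
                return steps + [tactic]  # no subgoals left: proof found
            for sub in subgoals:      # sibling subgoals treated independently here
                heapq.heappush(frontier, (-score(sub), tie, sub, steps + [tactic]))
                tie += 1
    return None  # search budget exhausted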
NVIDIA has released a paper called eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers. This is, I would say, a typical NVIDIA paper where they don't reinvent the world. But what they do is they take what exists and they apply a strong engineering mindset to it, they improve upon it, and it just results in a very high quality output. So in this case, they take the idea of these text to image diffusion models, but then on top of that, they have an ensemble of expert denoisers. So they don't just have one denoiser like we used to in a diffusion model; they have an ensemble of denoisers, which means that different models can take care of different phases in this denoising process. Also, they stage the image production in multiple steps. Now this has been done before, but it is a very viable strategy, in that you essentially have one model produce a low resolution version of the image, and then you successively scale that up. Now, as you can see right here, all in all that results in super high quality images that can either be done from a text description, or, as you can see right here, text plus some kind of map or some kind of mask that you draw. Or over here, you can also input some sort of a style reference image into this system. So again, it's just amazing how people are able to push forward the state of the art in such a short time. BigScience has released two new models, one called BLOOMZ and the other one called mT0. These are evolutions of their previous models, and they're mainly concerned with multitask prompted fine tuning. We've dealt with prompted fine tuning before in the Galactica paper, which essentially means that after you pretrain your model, you fine tune it on prompted samples. So like you would ask GPT-3 with a prompt to do some kind of task, you go ahead and actually fine tune on the prompt, the input and the output of that task to make the model learn to respond to such prompts in an appropriate fashion. And if you do that for multiple tasks, you also have the ability to then generalize to new tasks, because that will carry over from the pre-training. Specifically, these new models deal with this exact setting, but on non-English data. So cross-lingual generalization: doing this in multiple languages and potentially also generalizing across languages. The models are on Hugging Face if you want to check them out. ICLR 2023 reviews are out on OpenReview. And there are quite a few surprises in the negative direction. So Robert Tang here tweets out an example where the authors respond to a reviewer with: response to you is a waste of time. I hope you can respect the author's work and give constructive comments instead of taking a few minutes to give a trivial suggestion. I recommend that you complete a university, maybe kindergarten, course before giving your review comments. That's just lovely. Somehow believing in the good of human beings, maybe this person just, like, had an absolutely terrible day, and they really need this paper. And the review is actually very, very bad, like, it actually does make a super trivial dunk on the paper. And you know, I'm not sure what happened right here. If you're ever inclined to write a rebuttal like this, just sleep, go to sleep, wake up the next day, breathe, and realize that it's kind of useless, even if it's probably true. Another worrying issue, tweeted out by Stella Biderman, is the following. 
So one reviewer criticized this paper, saying that it is not acceptable to only compare with publicly available models, meaning that the paper should also have compared with non publicly available models. Now there is, of course, a debate to have right here. In order to properly compare to someone's model, you need to have access to it. On the other hand, there has been a long history of science where people just hadn't been putting stuff out into open source. And you'd essentially just have to take the numbers from the tables from their paper and then put those into your paper and essentially just believe what they said. It is possible that the reviewer here is of the stance that, look, you know, you can just take the numbers that they claim and put them there. On the other hand, it's also entirely fair to say that, well, I don't have access to their model, I can't verify their numbers. And therefore, I'm not going to put them into my paper. The crux is obviously the fact that leaving away these things that aren't public also makes your method appear a lot better in comparison, because the only actual competitors to your method are closed source and only have some number in some paper. I don't know what's the correct answer right here. But it's certainly worth having a discussion about. And lastly, and you might actually have heard of this one, there is this paper called Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top. People do get creative with titles these days. But the problem that one reviewer here had is with the word Byzantines, which the reviewer claimed to be disparaging of whoever considers themselves Byzantine. Now Byzantine is a term that's been long used in various fields of analysis, security, cryptography, I believe game theory. So the term is very well known and is an established technical term. However, the reviewer is of the strong opinion that that is a term that contains prejudice and is derogatory and is denouncing the ethno-religious practice of some people. Now the reviewer bases their opinion strongly on the fact that the ICLR code of ethics says you must respect the cultural heritage of others, and repeatedly claims that the usage of the term Byzantine in this work is a violation of the ICLR code of ethics. Whereas the authors claim this is a technical term, it's been used for a long time, and it is disparaging to absolutely no one. The conversation goes on and on. I believe there are over 36 comments in this thread, including some other people coming in and saying, hey, I'm actually considered Byzantine, and I don't have a problem with the term. So don't, you know, defend us. Well, the reviewer did make some suggestions for other terms, such as deviant. But the authors pointed out that none of these suggestions capture the term in its full meaning or in how people actually use it. As the debate goes on, you'll see the reviewer shifting their stance a little bit, from the claim that it's just not appropriate to use the term, to the claim that the paper also isn't technically correct. But I strongly believe that the reviewer only introduced that point after the discussion had been going on for a while, and they realized they needed to make another, stronger case on scientific terms. Now the problem here is that on OpenReview, I believe, you can't see the modifications. 
So we have no idea whether these comments were changed around; even the original comment has been edited to include some other feedback and so on. So the timeline here is a little bit murky. The authors also point out that this point, the point that the word Byzantine is inappropriate, was apparently initially the only real criticism of that reviewer. But the reviewer gave the paper a really low score. And if you know anything about conferences, most meta-reviewers just kind of look at whether there is one bad score, and then the paper already has very poor chances; or they look at the average, which would obviously be decreased strongly by one bad score. So essentially, the reviewer held the paper hostage a little bit and wanted the authors to change the wording. The authors even agreed to abbreviate the word Byzantine to biz, like the short form biz, because they just didn't agree that any of the other terms would do the technical nature justice. The reviewer disagreed that that would actually solve the problem and essentially said that even if they were to change the term, they would now expect the paper not only to not use that term, but also to contain a discussion of why the word Byzantine is not appropriate, or at least a moral struggle where the authors bring up why this is problematic. The reviewer again repeatedly and insistently claims that it violates the ICLR code of ethics, and holds that up like a stick to hit the authors with: code of ethics, this is against the code of ethics. What's interesting is that at some point, the program chairs commented on this as well, saying that the program chair committee and ethics chair have been following this thread closely; upon preliminary investigation, the ethics chair finds that the use of the B word (it's not the B word) is a possibly emerging issue, but not yet a major ethics issue that could justify rejecting research; there seems to be no widespread agreement that the B word is offensive; this discussion between reviewers and authors is still valuable to our community, which raises awareness of this potentially emerging issue; we appreciate the thoughts from the reviewers. And with that they essentially declared this resolved, by saying: you know, reviewer, you made your point, but we don't agree with the point. The reviewer responded again, lengthily, and pointed out that this violates the ICLR code of ethics. Now in the end, you could say it's all good: the program chairs came in and essentially squashed the reviewer and said, okay, the paper is fine, it can use the word Byzantine, it's not problematic, all good. But I actually strongly believe that this is a big win for this reviewer right here, because the appropriate response from the ethics chair would have been: shut up, you're an embarrassment to the scientific institution, and you're barred from reviewing any more papers for any other conferences. This is a joke, shut up. But they didn't do that. They essentially said yes to the reviewer. They essentially said, yes, it's a possibly emerging issue, because they've seen that there was quite a bit of uproar in the community that a term was used which is essentially a technical term, and which absolutely no one except this reviewer feels is inappropriate. The ethics chair said, yes, it's possibly emerging. So this is like groundwork for the future.
This is how these things slip in there. I have full conviction that people who write these codes of ethics do so with the best intentions, at least most of them; I do believe some of them predict exactly this. And this is how you, again and again, slip these things in. So one person makes a fuss, you take the temperature of the community, it's like, not yet ready, but we now have precedence, right? So at the next conference, the same reviewer can make a fuss again, and they can point back and say, well, other people (you don't know it's the same reviewer) have said this before. So actually, this might be problematic. And the ethics chair here seems to be bound by the fact that the community said this is ridiculous, shut up; so they do dismiss the complaint, but they do so in the most lenient way, in a way that guarantees that in the future, this will actually become a problem. So in my opinion, big win for the reviewer right here, big win for the complainers, and I don't like it. Google has a new paper called Efficiently Scaling Transformer Inference, on how they scale their big PaLM models on TPUs. Now it is not going to be very applicable for most of you. But in case you care how they enable something like 32 times larger context lengths, and super duper FLOPS, and super duper hardware utilization during large batch processing, give this paper a read. Also from Google, the Google Research blog has an entry called Infinite Nature: generating 3D fly-throughs from still photos. This is on top of a paper that they published at ECCV, which generates infinite views, or infinite fly-throughs as the title says. And the cool thing is this happens from still images. So you can give a single image and it will generate a fly-through from that image. They use various techniques for that, but the base idea is that you take an image and you predict its depth map, so how far away all the stuff is, and then you use that in order to render the image from a slightly different view. If you know how far away all the things are, you can position your camera slightly differently, and you can still determine where the pixels go. Now this will leave some pixels undetermined, because you can now see behind things that you didn't see before. And then you have another model in this refining step that essentially fills in these missing pixels. And then you repeat: again you compute the depth map, you adjust your camera position a tiny bit, and then you fill in the pixels that are missing. Training this is not exactly super easy, but there are various techniques, like cycle consistency, or what they do right here: they have an adversarial setup, a discriminator that determines whether, after a number of steps, the image still looks like it's been generated from a real nature image. And if you backpropagate that error, then you can generate very long, very high-quality fly-throughs through nature. Here you can see a bunch of examples. What I do find interesting is that they also added a specific sky model in order to make you feel like the sky is more real; I suspect in their original work the sky was often the problem and looked unrealistic. So now everything that's sky here is actually produced by a separate model, as far as I can tell.
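To make that render-and-refine loop a bit more tangible, here is a rough sketch of the outer loop. The three networks and the camera object are hypothetical placeholders; this is my own simplification of the pipeline, not the paper's code.

```python
# Rough sketch of the Infinite Nature render-refine loop. The three
# models (depth_net, renderer, refiner) and the camera object are
# hypothetical placeholders standing in for the paper's actual parts.
import torch

def fly_through(image: torch.Tensor, camera, depth_net, renderer, refiner,
                steps: int = 50):
    frames = [image]
    for _ in range(steps):
        depth = depth_net(image)          # per-pixel distance estimate
        camera = camera.nudge_forward()   # move the viewpoint a tiny bit
        # Reproject pixels into the new view; regions that were hidden
        # before come back as holes, marked by the mask.
        warped, hole_mask = renderer(image, depth, camera)
        image = refiner(warped, hole_mask)  # inpaint the missing pixels
        frames.append(image)
    return frames
```

The adversarial training then simply asks a discriminator whether frames produced after many such steps still look like real nature photos.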
Paella, I hope that's how you pronounce it, is a new paper that also does text-to-image. However, this one is speed-optimized. So in order to do diffusion, you usually have to take some bit of noise and then run it through the diffusion process step after step after step. There are various techniques to speed this up, and Paella supercharges them and manages to do the whole diffusion process in only 10 steps, which amounts to only 500 milliseconds. So within only 500 milliseconds, you have a high-quality image from a given piece of text. Again, amazing progress in a field that is super young. Check out Paella; there is a corresponding paper to it called Fast Text-Conditional Discrete Denoising on Vector-Quantized Latent Spaces. Now, if you enjoyed the previous paper on how to scale up PaLM, then you might also enjoy MultiRay, which is by Meta, and the blog post is called optimizing efficiency for large-scale AI models. This describes the system called MultiRay. I've read the blog post, and I have to say it's kind of wishy-washy: you have to guess a lot of the stuff, they just kind of describe in words what it does, and they link to various things that they've done, but I can't exactly read out what precisely they're doing right here. But if you need some inspiration for how a system like this would work, or some hints of how this is really done in practice at scale, then give this blog post a read. arXiv pairs up with Hugging Face. So previously, Hugging Face acquired Gradio, which powers Hugging Face Spaces and allows you to make little demos out of your Hugging Face repositories. And now arXiv includes those Spaces. So if you upload a paper to arXiv, you can attach a demo from a Hugging Face Space, so people can try out your model or your technique directly on arXiv, and do so interactively. This is very cool. And obviously, I'm a big fan of integrating interactive things into our very old format of eight-page PDFs. Okay, we've got a bunch of new models this week. The first one is AltDiffusion by FlagAI, which is a multilingual diffusion model. So this is essentially stable diffusion, but multilingual; as you can see right here: English, Chinese, Spanish, French, Russian, Japanese, Korean, Arabic, and Italian. Next is Demucs by Meta, which is a music source separation model. So you can put a song in there, and it will separate the sources, meaning it will separate things like drums and vocals and isolate those. Perfect for practicing something, doing karaoke, or whatever you want to do with it. The paper is called Hybrid Transformers for Music Source Separation, and it's on arXiv. There's a new multilingual CLIP available from LAION, trained on their own dataset, LAION-5B, and it reaches 77% zero-shot on ImageNet in English, and around 55% for Italian, Japanese and Chinese, and it supports over 100 languages. The cool thing is that it's very efficient in training because it uses locked image tuning, which we've discussed previously in a video. So check out the model, and check out locked image tuning if you haven't seen it yet. It is a really cool paper and a cool and simple technique.
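As a reminder of what zero-shot classification with such a CLIP model looks like in practice, here is a minimal sketch. The model and checkpoint names are assumptions on my part, but the scoring logic, cosine similarity between image and text embeddings, is the standard CLIP recipe.

```python
# Minimal zero-shot classification sketch with an open CLIP model.
# Model/checkpoint names are illustrative assumptions; the scoring
# logic (cosine similarity of embeddings) is standard CLIP.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
text = tokenizer(labels)

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(text)
    # Normalize and compare: the label with the highest similarity wins.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    probs = (100 * img_emb @ txt_emb.T).softmax(dim=-1)

print(labels[probs.argmax()])
```

The multilingual part simply means the text encoder understands prompts like these in over 100 languages, with the same scoring procedure.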
In other news, a research group at the City University of New York has released a model that can accurately predict the human response to novel drug compounds. Now they're certainly not the first people to release such a model; this has obviously been going on for as long as data science has existed. But it's cool to see that even on this front, the drug discovery front, giant progress is being made on the back of what started out as cat image research. Alright, some helpful things for this week; we have quite a lot to get through, so let's get into it. This is a pixel art sprite sheet generator. If you're into old games, into sprite animations, and so on: this is a stable-diffusion-based model that will create the sprites for you given a description. Look at this, I typed in fat Joey. Prompt Extend is a model that will extend your prompts. So here is an example: you type in psychedelic liquids space, and it will append what it thinks stable diffusion needs to give you what you want. So this is like a little bit of a translator between human input and whatever a very competent human using stable diffusion could do with all the modifiers, such as concept art, sharp focus, illustration, Unreal Engine, and so on. There's a new blog post on Hugging Face telling you how to fine-tune Whisper for multilingual ASR. But you can fine-tune Whisper for whatever you want; this blog post is your point of entry. Dream Textures is a plugin to make Blender interact with stable diffusion. So here's a demo: a person types into Blender whatever they want as a texture in terms of text, and then bada bing bada boom, apply, and it's now in the texture. Absolutely great. The YouTube channel Mutual Information has a series on reinforcement learning that I can highly recommend. They spent a lot of time on this, and I hope it is helpful to anyone who's looking to get into RL. Lovely Tensors solves a problem we've all had in the past. So if I just want to print some tensor, I'm going to get this, and it's absolutely not helpful at all. As soon as your tensors go beyond like four or five values, it's useless to just look at them. So all you do is you import Lovely Tensors, you monkey-patch that stuff in, and all of a sudden, if you print a tensor, a NumPy array, a torch tensor, whatever, it will give you the shape, the number of elements, statistics, the mean, the standard deviation, and so on. This is a much, much better way to look at tensors. Now if the tensor is small enough, it will actually show you the values. But as soon as it's bigger than that, it will give you much more useful information. So here it warns you that there are infinities and NaNs in the tensor, and so on. And even here it tells you, well, this one is actually all zeros. You can still get back to the original tensor using property access; you have verbose access that will give you the values even if the tensor is large, and here you get just the plain old way if you really want that. There are various helper methods around this, also to show images, to show statistics, to show channels, and to show things such as different filters in a stack of convolutional filters. I'll leave you to explore all of that yourself. But if you work with tensors a lot in an experimental sense, this is surely worth it. GPT Index is a technique to build an index out of files using GPT. So this uses GPT to essentially take a bunch of files and then, for example, recursively summarize them, so that you essentially have a structure where you have a summary on top of a bunch of stuff. And then if you like one of them, you go into it, and then you have summaries of the sub-stuff that is there, and you go into that. It's kind of experimental; I want to say this is a bit of a new way of thinking about what we could do with these models in order to organize information, now that we have generative capabilities, and I like that people think outside the box. So if you're also interested, check out this repository.
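To illustrate the recursive summarization idea behind GPT Index, here is a tiny sketch of how such a summary tree could be built. The summarize function is a hypothetical stand-in for a call to a language model; this is not the library's actual API, just the shape of the idea.

```python
# Toy sketch of a recursive summary tree, in the spirit of GPT Index.
# `summarize` is a hypothetical stand-in for an LLM call; the real
# library has its own API and data structures.

def summarize(texts: list[str]) -> str:
    # In practice: prompt an LLM with "Summarize the following: ..."
    return " / ".join(t[:40] for t in texts)  # placeholder

def build_index(chunks: list[str], fanout: int = 4) -> list[list[str]]:
    levels = [chunks]
    while len(levels[-1]) > 1:
        current = levels[-1]
        # Summarize groups of `fanout` nodes into one parent node.
        parents = [summarize(current[i:i + fanout])
                   for i in range(0, len(current), fanout)]
        levels.append(parents)
    return levels  # levels[-1][0] is the root summary

docs = [f"Contents of file {i}..." for i in range(16)]
tree = build_index(docs)
print(tree[-1][0])  # start browsing at the root, descend as needed
```

Browsing then means reading the root summary, picking the child that looks relevant, and descending level by level until you hit the original files.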
There's a new upscaler for stable diffusion made by RiversHaveWings; the notebook is by nshepperd, and compute has been sponsored by Stability AI. The notebook runs you through the whole process of upsampling, and it gives really cool results. I've previously talked about DagsHub. DagsHub is like a bit of a GitHub for machine learning. And I know a lot of places claim this nowadays, but DagsHub really believes in the open-source paradigm. And now they've released something they call direct data access, essentially a technique to stream versioned data down from, and upload it to, some remote storage. It essentially connects DVC, which you might know as a data versioning tool, with a transparent approach where you don't need to pull the whole dataset at once or stream it in some custom way; you can just treat the data as if it already existed locally, and magically, the library in the background will pull it down as you need it, in a streamed fashion. So no long waiting on data to arrive, you can just go train, and even if you don't have space for the whole dataset, it will still work. Now I don't have time here to explain all of the things that you can do with it, but the install is really simple: essentially, you install their hooks, and everything works transparently and magically. So if you're interested, check it out, and also check out their blog, which is regularly updated; for example, here is how to build an end-to-end active learning pipeline with fully open tools. Genv is a GPU environment management tool that lets you easily control, configure and monitor the GPU resources that you are using, and it is intended to ease the process of GPU allocation for data scientists, without code changes. So in case you're in some lab and you share GPUs with others, this tool is a must-have; I wish this had existed during my PhD. It manages local GPUs, remote GPUs, cluster GPUs, and so on. You can reserve GPUs, free up GPUs, essentially whatever you want to do; it even has a VS Code plugin. So if you're at all using GPUs, and especially if you're sharing them, consider this tool. MBXP is a multilingual benchmark for code completion in 10-plus programming languages. tsai is an open-source package intended for applying deep learning to time series, on top of PyTorch and fastai. Colossal-AI has released two blog posts; both pertain to better, faster and cheaper training of models. The first one is about what they call AIGC, AI-generated content, which essentially means image generation models, and the second one is about structure prediction of protein monomers and multimers. And both times they're able to speed up these models by a lot. The code is openly available, so do go and check it out. And the performance gains here are not only during inference, like we saw before: this in fact provides, for example for stable diffusion, 6.5 times faster training and pre-training cost savings, and the hardware cost of fine-tuning can be almost seven times cheaper than if you were to do it in the vanilla way. TAP-Vid is a benchmark for tracking any point in a video. SuperGradients is an awesome library to build, train and fine-tune production-ready, state-of-the-art deep learning vision models. Now, we've seen a lot of libraries that, you know, claim to just make stuff better.
If you're into vision, I believe having a library that's specific to your field, for things such as semantic segmentation, bounding box prediction, or even image classification, really pays off, especially in something like vision, where we have a lot of custom techniques that make these models just so much more efficient and better. But not only that: SuperGradients also provides a lot of pre-trained checkpoints. So even if you're just into using some models, this library might be good for you. Shumai is a network-connected differentiable tensor library for TypeScript and JavaScript. As you can see in this demo, what you can do is define neural networks in TypeScript and then distribute them over multiple places, over multiple machines. And you can use the async/await syntax from JavaScript in order to ship data to other machines or call some function on another machine. And the library handles everything for you, from forward propagation even to backpropagation and training. It's really cool, and the API for this looks quite clean. Safetensors by Hugging Face is a new format to store and load tensors safely. I've previously done a video where I showed how you can smuggle remote code execution into the Hugging Face Hub, because models essentially use the PyTorch loading function, and PyTorch in turn uses Python's pickle module, which executes arbitrary code. Safetensors is supposed to alleviate that by defining a safe, fixed and simple format to store tensors. Now, obviously, the trade-off here is that you can't store arbitrary things anymore; if you want to store arbitrary things, you have to allow arbitrary code to be executed. So while I expect that a lot of architectures might switch to something like Safetensors, it is not a full solution to the problem. For better or worse, research will come up with new things, new ways of doing things, and if you constrain yourself to a particular way of doing things, that will never cover everything. However, it's going to be enough most of the time. VeLO is a learned optimizer, and the cool thing here is that it really seems to be better than, or at least on par with, very hand-tuned optimizers. You might know optimizers such as stochastic gradient descent or Adam, but it is possible to learn an optimizer: that is, to learn a system that controls the optimization behavior of a training run of another system. These people have taken a lot of different ML problems and a lot of different networks, have run optimization problems on them, and have essentially learned an optimizer that optimizes all of these different problems well. So that's what we call a learned optimizer. And this one really seems to work well out of the box for many problems, especially mainstream problems. So without you having to tune, you know, the beta2 parameter and the learning rate and stuff like this, you just apply it in its default configuration, and it does a pretty good job. This is super important if you want to do rapid prototyping, rapid exploration of some new ideas, without doing a giant grid search over all the parameters.
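To give a flavor of what "learning an optimizer" means, here is a toy sketch of the concept. This is not VeLO itself or its API; it's just a minimal illustration where a small network proposes parameter updates instead of a hand-designed rule like Adam's.

```python
# Toy illustration of a learned optimizer: a small MLP maps each
# parameter's value and gradient to a proposed update. This is a
# conceptual sketch, not VeLO or its actual interface.
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    def __init__(self):
        super().__init__()
        # Input per scalar: (param, grad); output: proposed update.
        self.net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    def step(self, params):
        for p in params:
            if p.grad is None:
                continue
            feats = torch.stack([p.detach().flatten(),
                                 p.grad.detach().flatten()], dim=-1)
            update = self.net(feats).squeeze(-1).view_as(p)
            p.data.add_(update)  # apply the learned update

# The optimizer's own weights would be meta-trained across many tasks
# so that `step` makes the inner model's loss decrease quickly.
model = nn.Linear(4, 1)
opt = LearnedOptimizer()
loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()
opt.step(model.parameters())
```

The hard part, and the part VeLO actually contributes, is the meta-training across thousands of diverse tasks, so that the learned update rule generalizes to problems it has never seen.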
The Merlin dataloader is a dataloader specifically for recommender systems. Recommender systems have, you know, a few special requirements: namely, there's often quite little data, I want to say, compared to something like an image classifier; the data points are mostly tabular, and there aren't as many. So loading from disk, and parsing the features out of it, can often become the bottleneck, which makes the dataloader super important here. And the Merlin dataloader promises to be over 10 times faster than native framework dataloaders. If you're into recommender systems, try this out. LODA is an assembly language, a computational model, and a distributed tool for mining programs. This topic is very far away from me, but some of you might actually be interested. So if you're into integer sequences, there's the Online Encyclopedia of Integer Sequences, full of sequences like 1, 2, 3, 4, 5 and so on. So these are sequences of integers, and the question is always: what's the program behind them? Like, can I come up with a piece of code that produces that integer sequence into perpetuity? And you know, 1, 2, 3, 4, 5 is quite simple, but it gets complicated very quickly, and especially teaching machines to come up with the rules behind a sequence is a very challenging problem. So LODA is a system that allows you to mine such programs: essentially, you can run it and it will crank, crank, crank and intelligently search for these programs. But not only that, it is also a distributed tool for doing that. So you can partake in the mining of such programs, and much more. As I understand it, this is what a LODA program looks like, or what it searches for: so here you can see one of these sequences, and this is apparently the program it comes up with. It looks pretty interesting. If you're interested, check LODA out. numga, not numba: numga is a library for geometric algebra in JAX and NumPy. If you're into geometric algebra, and here's the example of a rigid body physics engine with a constraint solver, then this library might be for you. MTEB is a benchmark for text embeddings. This is from similar authors as the BEIR benchmark, which is a retrieval benchmark, but this goes further: it covers eight embedding tasks over 56 datasets and 112 languages, and the accompanying paper already evaluates 33 models on the benchmark. The goal here is to find the one unified text embedding that covers all downstream tasks, and the status so far is that this universal embedding hasn't been found yet. The leaderboard shows that some models are good at some tasks, and other models are good at other tasks. So the holy grail of text embeddings is still somewhere out there, and this benchmark might prove that you have found it. Okay, the last cool thing I want to show you is Natbot. This is already a little bit older; Nat Friedman tweeted it out in September, but essentially he managed to connect GPT-3 to a web browser and just let it interact with the browser by prompting it in an appropriate way, given the website's HTML structure. Apparently the original idea comes from Sharif Shameem, and Natbot has a repository on GitHub. Look, it's just one Python file. I know half of you are super cringing right now, but yeah, research be research. And if you want to figure out how it's done, how Natbot works, and if you want to give it a shot yourself, it might be really cool to do. So please do.
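To sketch the Natbot idea, here is roughly what such a control loop could look like. The prompt and the complete function are hypothetical stand-ins (Natbot uses its own prompt and a real LLM API), so treat this purely as an illustration of the pattern.

```python
# Conceptual sketch of an LLM-driven browser loop, in the spirit of
# Natbot. `complete` is a hypothetical stand-in for a language model
# API call; the prompt below is illustrative, not Natbot's own.

PROMPT = """You control a web browser. Given the page content and an
objective, reply with exactly one command:
CLICK <element id> | TYPE <element id> "<text>" | NAVIGATE <url>

Objective: {objective}
Simplified page HTML:
{html}
Next command:"""

def complete(prompt: str) -> str:
    raise NotImplementedError("call your LLM of choice here")

def browser_step(browser, objective: str):
    # Condense the DOM so it fits in the model's context window;
    # `browser` is a hypothetical wrapper around a real browser driver.
    html = browser.simplified_dom()[:4000]
    command = complete(PROMPT.format(objective=objective, html=html))
    browser.execute(command)  # parse and perform CLICK/TYPE/NAVIGATE
```

The whole trick is that condensing the page into a short, annotated representation lets a plain text model act as the policy, one command per step.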
Alright, that was all from ML News. This was a big chunk. Thank you so much for being here. Thank you for supporting the channel. Come to Discord if you're not already on it; the link is in the description. We have fantastic paper discussions every week, and we talk general machine learning every day. With that being said, stay hydrated. Bye bye.
[ { "start": 0, "end": 6.16, "text": " Rumors of GPT-4 are in the air, neuron transmissions is now solved in closed form," }, { "start": 6.16, "end": 11.200000000000001, "text": " and mind reading is a thing now. It's Monday and welcome to ML News." }, { "start": 15.280000000000001, "end": 20.56, "text": " Hello and welcome to ML News. This is your regular update of what's going on in the machine learning" }, { "start": 20.56, "end": 27.44, "text": " and AI world. Our first story is the most interesting one. Brain reading is more and" }, { "start": 27.44, "end": 33.04, "text": " more becoming a thing. There is a paper called Seeing Beyond the Brain conditional diffusion" }, { "start": 33.04, "end": 38.8, "text": " models with sparse masked modeling for vision decoding. In this paper, the authors give a" }, { "start": 38.8, "end": 45.68000000000001, "text": " visual stimulus to a subject, a real human and then look at their brain waves. This is non-invasive," }, { "start": 45.68000000000001, "end": 52.56, "text": " this is fMRI brain scans. And from that reading of the fMRI, they're able to decode what the person" }, { "start": 52.56, "end": 57.6, "text": " is seeing. You can see right here on the top, you have visual stimuli. And on the bottom, you have" }, { "start": 57.6, "end": 64.32000000000001, "text": " the reconstructed images. Now what you'll be able to see is that the pixels don't exactly match." }, { "start": 64.32000000000001, "end": 70.32000000000001, "text": " However, the semantic content is very often the same. Now this is done via aligning the latent" }, { "start": 70.32000000000001, "end": 76.72, "text": " spaces of the encoders for the brain data and encoders from images. And this has been a long" }, { "start": 76.72, "end": 82.08, "text": " standing problem because the training data that exists to map what people are seeing from their" }, { "start": 82.08, "end": 87.36, "text": " brain waves to the image space is just super sparse. But the authors here go around that by" }, { "start": 87.36, "end": 94.72, "text": " pre-training on unlabeled fMRI data and first get a very, very good autoencoder on that data going." }, { "start": 94.72, "end": 99.2, "text": " Then the latent space can be determined, compressed. And then from that latent space," }, { "start": 99.2, "end": 104.32, "text": " we can learn a conditional image diffusion decoder in order to map the visual stimuli" }, { "start": 104.32, "end": 109.03999999999999, "text": " to the encoding of the brain waves. So the paradigm that we see in deep learning where you want to do" }, { "start": 109.04, "end": 114.48, "text": " some unsupervised pre-training first, because you have much more unlabeled data and only then" }, { "start": 114.48, "end": 120.24000000000001, "text": " include the task specific data and learn that on top of the unsupervised pre-trained models also" }, { "start": 120.24000000000001, "end": 125.84, "text": " holds in the field of brain computer interfaces, apparently, that's pretty cool that we're more and" }, { "start": 125.84, "end": 131.28, "text": " more getting the chance to peek into people's brains. Now this isn't yet a full thought reader" }, { "start": 131.28, "end": 135.68, "text": " or anything like this. 
Essentially, they disambiguate between I believe some hundred" }, { "start": 135.68, "end": 141.44, "text": " different classes of labels, but it's still very, very cool that you can essentially reconstruct" }, { "start": 141.44, "end": 148.96, "text": " just from reading brain waves, what kind of image the person is seeing and what about is in the image" }, { "start": 148.96, "end": 154.64000000000001, "text": " in a related article, neuroscience news.com writes that brain machine interface device predicts" }, { "start": 154.64000000000001, "end": 159.04000000000002, "text": " internal speech. Now this is a little bit different in that it's actually invasive. So this is an" }, { "start": 159.04000000000002, "end": 164.88, "text": " interface directly to the brain, but it is able to predict internal speech, which means speech that" }, { "start": 164.88, "end": 170.64, "text": " you just internally think to yourself, it is able to decode that now it is not able to decode arbitrary" }, { "start": 170.64, "end": 176.24, "text": " speech, I believe they go up to about eight words or something like this. So it's not yet exactly" }, { "start": 176.24, "end": 181.68, "text": " super accurate, but we are making big, big progress in that front. Alright, next news." }, { "start": 183.84, "end": 189.35999999999999, "text": " Ramin Hassani writes that they've published a new article in nature, machine intelligence and" }, { "start": 189.36, "end": 195.12, "text": " solved a differential equation that's been long standing without a closed form solution, we now" }, { "start": 195.12, "end": 200.72000000000003, "text": " have that closed form solution and it concerns the interactions between neurons. This is a major" }, { "start": 200.72000000000003, "end": 205.36, "text": " benefit for people who want to implement biologically inspired sort of biologically" }, { "start": 205.36, "end": 210.48000000000002, "text": " plausible neural networks, because previously, you'd have to have some sort of an ODE solver in" }, { "start": 210.48000000000002, "end": 214.88000000000002, "text": " order to even model that connection properly. And now that there's a closed form solution," }, { "start": 214.88, "end": 219.35999999999999, "text": " you can essentially just forward and backprop through that formula. And the absolute coolest thing" }, { "start": 219.35999999999999, "end": 224.79999999999998, "text": " is that they have implemented this in both pytorch and TensorFlow. So you can technically build in" }, { "start": 224.79999999999998, "end": 230.32, "text": " this directly into your architectures today. Now it's not guaranteed to be like a lot better than" }, { "start": 230.32, "end": 235.12, "text": " what we currently have in terms of neuron neuron connection. But that's not the point. The point" }, { "start": 235.12, "end": 240, "text": " is to get to a place where we can simulate biologically plausible neural networks as well" }, { "start": 240, "end": 245.36, "text": " as possible. And from those potentially learn something about the brain and we might actually" }, { "start": 245.36, "end": 250.88, "text": " get some inspiration for how to even improve our artificial neural network architectures from this." }, { "start": 250.88, "end": 254.32, "text": " So check out the paper and the repository in case you're interested." }, { "start": 256.96, "end": 264.88, "text": " Alberto Romero on sub stack has an article called GPT for rumors from Silicon Valley. 
This is a" }, { "start": 264.88, "end": 273.2, "text": " summary of things that people whatever people means talk about currently around GPT for so open AI has" }, { "start": 273.2, "end": 278.71999999999997, "text": " been announcing like tiny bits of the next iteration of their language models here and there." }, { "start": 278.71999999999997, "end": 284.96, "text": " And there used to be an interview by Sam Altman where he said GPT four isn't really going to be" }, { "start": 284.96, "end": 290.24, "text": " that much bigger than GPT three. And it's probably still going to be in the text domain, it's probably" }, { "start": 290.24, "end": 295.12, "text": " going to be a bit more aligned to humans a bit more, you know, learning from human feedback and" }, { "start": 295.12, "end": 300.08, "text": " so on. And people were kind of like a tiny bit disappointed, I guess, because it's not all we're" }, { "start": 300.08, "end": 305.6, "text": " going to build the next giant thing. But now more and more rumors are coming out that in fact, GPT" }, { "start": 305.6, "end": 312.08, "text": " four might be very well what they claim colossal. So another scale up of two orders of magnitude or" }, { "start": 312.08, "end": 316.72, "text": " something like this in terms of numbers of parameters or even three orders of magnitude," }, { "start": 316.72, "end": 322.16, "text": " although some rumors claim that it is going to be sparse. So there's not really like a one to one" }, { "start": 322.16, "end": 327.28000000000003, "text": " comparison. On the other hand, there are also a lot of rumors that claim that GPT four is going" }, { "start": 327.28000000000003, "end": 334.16, "text": " to be multimodal after all. So including text, images, videos, and so on basically anything they" }, { "start": 334.16, "end": 339.36, "text": " can get their fingers on. So we'll see which one of these turns out to be true, it's very well" }, { "start": 339.36, "end": 344.40000000000003, "text": " possible that they first aim that just sort of improving GPT three and then all of a sudden with" }, { "start": 344.4, "end": 349.44, "text": " recent developments around diffusion models and so on, they've now gone into the direction of you" }, { "start": 349.44, "end": 356.15999999999997, "text": " know, let's just let's just do another giant leap. And from people who have apparently spoken to" }, { "start": 356.15999999999997, "end": 362.4, "text": " other people who have apparently tried the new model or a precursor to the new GPT four, they" }, { "start": 362.4, "end": 370.4, "text": " say that GPT four will be just as much an improvement over GPT three as GPT three was over GPT two. And" }, { "start": 370.4, "end": 377.35999999999996, "text": " if you remember in case you remember GPT three was a giant improvement over GPT two. Now is this" }, { "start": 377.35999999999996, "end": 382.23999999999995, "text": " going to be a GI and solve all our problems? Probably not. But in case this is true, in case" }, { "start": 382.23999999999995, "end": 387.67999999999995, "text": " it is really the same amount of step from GPT two to GPT three, as it is from GPT three to the new" }, { "start": 387.67999999999995, "end": 394.32, "text": " GPT four, then I think we're in for pretty pretty amazing times. In any case, rumors be rumors. And" }, { "start": 394.32, "end": 401.04, "text": " I guess we'll only know when we actually see it. 
The new model is rumored to be released sometimes" }, { "start": 401.04, "end": 408.96, "text": " between December and February. So the wait isn't going to be that long. Now related to this," }, { "start": 408.96, "end": 414.64, "text": " OpenAI is also rumored to collaborate with Cerebros. And Cerebros in turn has just released" }, { "start": 414.64, "end": 421.12, "text": " their biggest supercomputer to date, which is called Andromeda has 13.5 million cores. Now" }, { "start": 421.12, "end": 426.24, "text": " Cerebros is a company that builds extremely large chips, they want to do as much as they can, like" }, { "start": 426.24, "end": 431.04, "text": " on a single chip. And that's why their chips are like, I think they're about yay big, I'm not" }, { "start": 431.04, "end": 438.56, "text": " exactly sure. But this absolute supercomputer is just comprised of 16 cerebros CS two systems. So" }, { "start": 438.56, "end": 443.2, "text": " that should give you an already an indication of just how big their individual systems already are" }, { "start": 443.2, "end": 448.08, "text": " now connecting them makes for a ginormous supercomputer. Now here on the website, it" }, { "start": 448.08, "end": 454.88, "text": " says get demo but I guess for most of you, it's not really going to be an option to go into business" }, { "start": 454.88, "end": 459.59999999999997, "text": " with this kind of scale. But for some of you, it might be and you might very well want to click" }, { "start": 459.59999999999997, "end": 467.84, "text": " that button. The meta research blog announces the ESM Metagenomic Atlas, the first view of the dark" }, { "start": 467.84, "end": 472.79999999999995, "text": " matter of the protein universe. So a lot of folding work a lot of protein folding work has" }, { "start": 472.8, "end": 479.2, "text": " been done recently with alpha fold and ESM fold and now meta releases a database of what's called" }, { "start": 479.2, "end": 484.96000000000004, "text": " meta genomics. Metagenomics is essentially if you just go outside and you pick up a piece of dirt," }, { "start": 484.96000000000004, "end": 490.48, "text": " there's going to be like a ton of microbes, a ton of bacteria, a ton of organic material in there." }, { "start": 490.48, "end": 495.52, "text": " And all of that genomic material isn't necessarily something you find in like the human genome" }, { "start": 495.52, "end": 501.52, "text": " project or something like this, yet it's still very important, for example, for ecology for medicine," }, { "start": 501.52, "end": 507.28, "text": " but also for human well being. So this Metagenomic Atlas is the first database that reveals the" }, { "start": 507.28, "end": 512.72, "text": " structures of the meta genomic world at the scale of hundreds of millions of proteins," }, { "start": 512.72, "end": 517.92, "text": " you can explore that there is a link to the Atlas right here. If you're anywhere near this world of" }, { "start": 517.92, "end": 523.4399999999999, "text": " protein folding, I guess this is a very exciting time. And I'm also excited for the progress we" }, { "start": 523.4399999999999, "end": 529.04, "text": " make on other frontiers rather than just scaling up and producing more stories about unicorns." 
}, { "start": 529.04, "end": 534.88, "text": " Like for all the criticisms that these big models get and the pressure to just scale and scale and" }, { "start": 534.88, "end": 540, "text": " scale, they do every now and then deliver us something like this, something that's absolutely" }, { "start": 540, "end": 546.48, "text": " undeniably useful for some natural science out there. And as we get better with our core research," }, { "start": 546.48, "end": 551.68, "text": " even if that's on pictures of cats, I strongly believe that this will greatly benefit adjacent" }, { "start": 551.68, "end": 557.68, "text": " fields such as biology, mathematics, physics, chemistry, and more of the other sciences." }, { "start": 557.68, "end": 562.7199999999999, "text": " Also on the meta AI blog, they released a blog post called teaching AI advanced mathematical" }, { "start": 562.7199999999999, "end": 567.1999999999999, "text": " reasoning. Now I've dealt before with some of the papers that met I had in this regard," }, { "start": 567.1999999999999, "end": 572.56, "text": " where they tried to come up with systems that use a prover. So there are these things called prover" }, { "start": 572.56, "end": 577.92, "text": " systems or proof assistance, or essentially formalize your whole mathematics inputs to" }, { "start": 577.92, "end": 582.56, "text": " spell out everything super formally, super descriptive, super detailed, and then you can" }, { "start": 582.56, "end": 588.0799999999999, "text": " use the system to search for new proofs by applying some proof strategies here and there. So you can" }, { "start": 588.0799999999999, "end": 593.52, "text": " say I want to do now a contra position of two things and so on. However, as you'll quickly" }, { "start": 593.52, "end": 599.4399999999999, "text": " discover the amount of strategies that you can apply to a given statement to search for a proof" }, { "start": 599.4399999999999, "end": 604.0799999999999, "text": " is really, really huge. And that leaves you essentially with a search problem. So this paper" }, { "start": 604.0799999999999, "end": 610, "text": " uses essentially a variant of Monte Carlo tree search, the same thing that like AlphaGo uses" }, { "start": 610, "end": 615.6, "text": " in order to determine the next moves in a go game in order to determine the next proof strategy or" }, { "start": 615.6, "end": 621.04, "text": " the next proof step that should be applied in order to reach a given target statement. Again," }, { "start": 621.04, "end": 626.72, "text": " very cool that what initially dealt with a bunch of games and was really flashy because we can now" }, { "start": 626.72, "end": 632.8, "text": " solve go and chess much better has developed into something that is of actual use in an adjacent" }, { "start": 632.8, "end": 637.52, "text": " field in this case, mathematics. So very cool. Check out the paper if you are interested." }, { "start": 637.52, "end": 641.84, "text": " Nvidia has released a paper called eDIF-i text to image diffusion models with ensemble of expert" }, { "start": 641.84, "end": 648.56, "text": " denoisers. This is I would say a typical Nvidia paper where they don't reinvent the world. But" }, { "start": 648.56, "end": 653.68, "text": " what they do is they take what exists and they apply a strong engineering mindset to it, they" }, { "start": 653.68, "end": 659.68, "text": " improve upon it, and it just results in a very high qualitative output. 
So in this case, they take" }, { "start": 659.68, "end": 664.64, "text": " the idea of these text to image diffusion models. But then on top of that, they have an ensemble" }, { "start": 664.64, "end": 670.64, "text": " of expert denoisers. So they don't just have one denoiser like we used to in a diffusion model," }, { "start": 670.64, "end": 675.84, "text": " they have an ensemble of denoisers, which means that different models can take care of different" }, { "start": 675.84, "end": 681.92, "text": " phases in this denoising process. Also, they stage the image production in multiple steps. Now this" }, { "start": 681.92, "end": 687.1999999999999, "text": " has been done before, but it is a very viable strategy in that you essentially have one model" }, { "start": 687.1999999999999, "end": 692.3199999999999, "text": " produce a low resolution version of the image and then you successively scale that image." }, { "start": 692.32, "end": 698.24, "text": " And then you successively scale that up. Now, as you can see right here, all in all that results in" }, { "start": 698.24, "end": 704.5600000000001, "text": " super high quality images that can either be done from a text description or from as you can see" }, { "start": 704.5600000000001, "end": 709.6, "text": " right here, text plus some kind of map or some kind of mask that you draw. Or over here, you" }, { "start": 709.6, "end": 715.6, "text": " can also input some sort of a style reference image into this system. So again, it's just amazing" }, { "start": 715.6, "end": 723.2, "text": " how people are able to push forward the state of the art in such a short time. Big Science has" }, { "start": 723.2, "end": 728.88, "text": " released two new models, one called blooms and the other one called empty zero. These are evolutions" }, { "start": 728.88, "end": 734.8000000000001, "text": " of their previous models. And they're mainly concerned with multitask prompted fine tuning." }, { "start": 734.8000000000001, "end": 739.44, "text": " We've dealt with prompted fine tuning before in the galactica paper, which essentially means that" }, { "start": 739.44, "end": 745.7600000000001, "text": " after you retrain your model, you fine tune it on prompted samples. So like you would ask GPT three" }, { "start": 745.7600000000001, "end": 751.6800000000001, "text": " with a prompt to do some kind of task, you go ahead and actually fine tune on the prompt, the input" }, { "start": 751.6800000000001, "end": 757.6, "text": " and the output of that task to make the model learn to respond to such prompts in an appropriate" }, { "start": 757.6, "end": 763.0400000000001, "text": " fashion. And if you do that for multiple tasks, you also have the ability to then generalize to" }, { "start": 763.0400000000001, "end": 768.24, "text": " new tasks because that will carry over from the pre training. Specifically, these new models" }, { "start": 768.24, "end": 775.36, "text": " deal with this exact setting, but in non English data. So across lingual generalization doing this" }, { "start": 775.36, "end": 780.88, "text": " in multiple languages and potentially also generalizing across languages. The models are" }, { "start": 780.88, "end": 788.88, "text": " on hogging face if you want to check them out. I clear 2023 reviews are out on open review. And" }, { "start": 788.88, "end": 795.04, "text": " there are quite a few surprises in the negative direction. 
So Robert Tang here tweets out an" }, { "start": 795.04, "end": 801.52, "text": " example where the authors respond to a reviewer with response to you is a waste of time. I hope" }, { "start": 801.52, "end": 805.52, "text": " you can respect the author's work and give constructive comments instead of taking a few" }, { "start": 805.52, "end": 811.04, "text": " minutes to give a trivial suggestion. I recommend that you complete a university maybe kindergarten" }, { "start": 811.04, "end": 816.88, "text": " course before giving your review comments. That's just lovely. Somehow believing in the good of" }, { "start": 816.88, "end": 822.3199999999999, "text": " human beings, maybe this person just like had an absolutely terrible day and they really need this" }, { "start": 822.32, "end": 828.48, "text": " paper. And the review is actually very, very bad, like actually does make like a super trivial dunk" }, { "start": 828.48, "end": 833.84, "text": " on the paper. And you know, I'm not sure what happened right here. If you're ever inclined" }, { "start": 833.84, "end": 840.24, "text": " to write a rebuttal like this, just just sleep, go to sleep, wake up the next day, breathe and" }, { "start": 840.24, "end": 845.7600000000001, "text": " realize that it's it's kind of useless, even if it's probably true. Another worrying issue tweeted" }, { "start": 845.7600000000001, "end": 852.1600000000001, "text": " out by Stella Biederman is the following. So one reviewer criticized this model for that it is not" }, { "start": 852.16, "end": 858.3199999999999, "text": " acceptable to only compare with publicly available models, meaning that the paper should also have" }, { "start": 858.3199999999999, "end": 864.48, "text": " compared with non publicly available models. Now there is of course, a debate to have right here" }, { "start": 864.48, "end": 869.92, "text": " in order to properly compare to someone's model, you need to have access to it. On the other hand," }, { "start": 869.92, "end": 875.04, "text": " there has been a long history of science where people just hadn't been putting stuff out into" }, { "start": 875.04, "end": 879.4399999999999, "text": " open source. And you'd essentially just have to take the numbers from the tables from their paper" }, { "start": 879.44, "end": 884.72, "text": " and then put those into your paper and essentially just believe what they said is possible that the" }, { "start": 884.72, "end": 890.32, "text": " reviewer here is of the stands that look, you know, you can just take the number that they claim" }, { "start": 890.32, "end": 894.48, "text": " and put them there. On the other hand, it's also entirely fair to say that, well, I don't have" }, { "start": 894.48, "end": 898.6400000000001, "text": " access to their model, I can't verify their numbers. And therefore, I'm not going to put them" }, { "start": 898.6400000000001, "end": 906.08, "text": " into my paper. The crux is obviously if that fact that you leave these things away that aren't public" }, { "start": 906.08, "end": 912.1600000000001, "text": " also makes your method appear a lot better in comparison because the only actual competitors" }, { "start": 912.1600000000001, "end": 917.2800000000001, "text": " to your method are closed source and only have some number in some paper. I don't know what's" }, { "start": 917.2800000000001, "end": 922.64, "text": " the correct answer right here. But it's certainly worth having a discussion about. 
And lastly, and" }, { "start": 922.64, "end": 927.2800000000001, "text": " you might actually have heard of this one is this paper called variance reduction is an antidote to" }, { "start": 927.2800000000001, "end": 932.88, "text": " Byzantine's better rates, weaker assumptions and communication compression as a cherry on the top." }, { "start": 932.88, "end": 939.12, "text": " People do get creative with titles these days. But the problem that one reviewer here had is with" }, { "start": 939.12, "end": 945.68, "text": " the word Byzantine's, which the reviewer claimed to be disparaging of the whoever people consider" }, { "start": 945.68, "end": 952.72, "text": " themselves Byzantine. Now Byzantine is a term that's been long used in various fields of analysis," }, { "start": 952.72, "end": 959.2, "text": " security, cryptography, I believe game theory. So the term is very well known and is an established" }, { "start": 959.2, "end": 965.36, "text": " technical term. However, the reviewer is of strong opinion that that is a term that contains prejudice" }, { "start": 965.36, "end": 971.9200000000001, "text": " and is derogatory and is denouncing the ethno religious practice of some people. Now the" }, { "start": 971.9200000000001, "end": 978.8000000000001, "text": " reviewer bases their opinion strongly on the fact that the ICLEAR code of ethics says you must" }, { "start": 978.8000000000001, "end": 984.4000000000001, "text": " respect cultural heritage of others and repeatedly claims that the usage of the term Byzantine in" }, { "start": 984.4, "end": 990.48, "text": " this work is a violation of the ICLEAR code of ethics. Whereas the authors claim this is a" }, { "start": 990.48, "end": 995.6, "text": " technical term, it's been used for a long time, and it is disparaging to absolutely no one." }, { "start": 995.6, "end": 1000, "text": " The conversation goes on and on, I believe there are over 36 comments in this thread," }, { "start": 1000, "end": 1004.56, "text": " including some other people coming in and saying, hey, I'm actually considered Byzantine," }, { "start": 1004.56, "end": 1009.68, "text": " and I don't have a problem with the term. So don't defend, you know us. Well, the reviewer" }, { "start": 1009.68, "end": 1015.52, "text": " did make some suggestions for other terms such as deviant. But the authors pointed out that none of" }, { "start": 1015.52, "end": 1022, "text": " these suggestions capture the term in its full existence or in how people actually use it. As" }, { "start": 1022, "end": 1026.32, "text": " the debate goes on, you'll see the reviewer shifting their stance a little bit from the" }, { "start": 1026.32, "end": 1032.1599999999999, "text": " fact that it's just not appropriate to use the term that the paper also isn't technically correct." }, { "start": 1032.1599999999999, "end": 1037.6799999999998, "text": " But I strongly believe that the reviewers only introduced that point after the discussion had" }, { "start": 1037.68, "end": 1042.4, "text": " been going on for a while, and they realized they needed to make another stronger case on" }, { "start": 1042.4, "end": 1048.16, "text": " scientific terms. Now the problem here is that on open review, I believe you can't see the" }, { "start": 1048.16, "end": 1053.44, "text": " modifications. So we have no idea these comments, they were all changed around, even the original" }, { "start": 1053.44, "end": 1059.04, "text": " comment is changed around to like include some other feedback and so on. 
So it seems the timeline" }, { "start": 1059.04, "end": 1064.96, "text": " here is a little bit murky. The authors here also point out that this point, the point that the word" }, { "start": 1064.96, "end": 1071.92, "text": " Byzantine is inappropriate was apparently initially the only criticism of that reviewer or the only" }, { "start": 1071.92, "end": 1077.28, "text": " real criticism. But the reviewer gave the paper a really low score. And if you know anything about" }, { "start": 1077.28, "end": 1083.04, "text": " conferences, most meta reviewers just kind of look whether there is one bad score, and then the paper" }, { "start": 1083.04, "end": 1088, "text": " already has very poor chances or they look at the average, which would obviously be decreased" }, { "start": 1088, "end": 1093.68, "text": " strongly by one bad score. So essentially, the reviewer held the paper hostage a little bit and" }, { "start": 1093.68, "end": 1099.44, "text": " wanted the authors to change the wording. The authors even agree to abbreviate the word Byzantine" }, { "start": 1099.44, "end": 1104.5600000000002, "text": " to biz like the short form biz, because they just didn't agree that any of the other terms would do" }, { "start": 1104.5600000000002, "end": 1110.4, "text": " the technical nature justice. The reviewer disagreed that that would actually solve the problem and" }, { "start": 1110.4, "end": 1115.04, "text": " essentially said that even if they were to change the term, they would now expect not only to not" }, { "start": 1115.04, "end": 1121.2, "text": " use that term, but also the paper to contain a discussion of why the word Byzantine is not" }, { "start": 1121.2, "end": 1127.1200000000001, "text": " appropriate, or at least like a moral struggle of the authors are bringing this up of why this is" }, { "start": 1127.1200000000001, "end": 1133.52, "text": " problematic. The reviewer again, repeatedly and insistently claims that it violates the" }, { "start": 1133.52, "end": 1140.24, "text": " ICLR code of ethics and holds that as like a stick to like hit the authors with like code of ethics." }, { "start": 1140.24, "end": 1145.3600000000001, "text": " This is against the code of ethics. What's interesting is that at some point, the program" }, { "start": 1145.3600000000001, "end": 1150, "text": " chairs commented on this as well, saying that the program chair committee and ethics chair have been" }, { "start": 1150, "end": 1155.12, "text": " following this thread closely upon preliminary investigation, the ethics chair find that the use" }, { "start": 1155.12, "end": 1162.16, "text": " of the B word, it's not the B word is a possibly emerging issue, but not yet a major ethics issue" }, { "start": 1162.16, "end": 1166.72, "text": " that could justify rejecting research, there seems to be no widespread agreement that the B word is" }, { "start": 1166.72, "end": 1170.72, "text": " offensive. 
This discussion between reviewers and authors is still valuable to our community," }, { "start": 1170.72, "end": 1175.44, "text": " which raises awareness of this potentially emerging issue, we appreciate the thoughts from the reviews," }, { "start": 1175.44, "end": 1182.72, "text": " and they said that this is essentially now resolved by saying, you know, reviewer, you made your point," }, { "start": 1182.72, "end": 1188.64, "text": " but we don't agree with the point, the reviewer responded again, lengthily pointed out that this" }, { "start": 1188.64, "end": 1194.4, "text": " violates the ICLR code of ethics. Now in the end, you could say it's all good. And the program chairs" }, { "start": 1194.4, "end": 1199.68, "text": " came in and essentially squashed the reviewer and said, okay, the paper is fine, can use the word" }, { "start": 1199.68, "end": 1205.3600000000001, "text": " Byzantine, it's not problematic, all good. But I strongly actually believe that this is a big win" }, { "start": 1205.3600000000001, "end": 1211.04, "text": " for this reviewer right here, because the ethics chair, the appropriate response would be shut up," }, { "start": 1211.04, "end": 1216.48, "text": " you're an embarrassment to the scientific institution, and you're barred from reviewing" }, { "start": 1216.48, "end": 1222.72, "text": " any more papers for any other conferences. This is a joke, shut up. But they didn't do that. They" }, { "start": 1222.72, "end": 1227.92, "text": " essentially said yes to the reviewer, they essentially said, yes, it's a possibly emerging" }, { "start": 1227.92, "end": 1233.2, "text": " issue, because they've seen that there was quite a bit of uproar in the community that such a what" }, { "start": 1233.2, "end": 1240, "text": " is essentially a technical term that is no one absolutely no one except this reviewer feels is" }, { "start": 1240, "end": 1246.24, "text": " not appropriate was used, the ethics chair said, yes, it's possibly emerging. So this is like a" }, { "start": 1246.24, "end": 1251.68, "text": " groundwork for the future. This is how these things slip in there, I have full conviction that people" }, { "start": 1251.68, "end": 1257.28, "text": " who write these codes of ethics do so with the best intentions, at least most of them, I do believe" }, { "start": 1257.28, "end": 1262.96, "text": " some of them predict exactly this. And this is how you again and again, slip these things in. So" }, { "start": 1262.96, "end": 1268.24, "text": " one person makes a fuss, you take the temperature of the community is like, not yet ready, but we" }, { "start": 1268.24, "end": 1272.6399999999999, "text": " have now precedence, right. So at the next conference, the same reviewer can make a fuss" }, { "start": 1272.6399999999999, "end": 1276.32, "text": " again, and they can point back and say, well, other people, you don't know, it's the same reviewer," }, { "start": 1276.32, "end": 1281.6, "text": " other people have said this before. So actually, this might actually be problematic. And the ethics" }, { "start": 1281.6, "end": 1287.1999999999998, "text": " chair here seems to be bound by the fact that someone said this is ridiculous, shut up. However," }, { "start": 1287.1999999999998, "end": 1292.3999999999999, "text": " they do so in the most lenient way in the most way that guarantees that in the future, this will" }, { "start": 1292.3999999999999, "end": 1297.04, "text": " actually become a problem. 
So in my opinion, big win for the reviewer right here, big win for the" }, { "start": 1297.04, "end": 1304.7199999999998, "text": " complainers, and I don't like it. Google has a new paper called efficiently scaling transformer" }, { "start": 1304.72, "end": 1312.48, "text": " inference on how they scale their big PaLM models on TPUs. Now it is not going to be very applicable" }, { "start": 1312.48, "end": 1318, "text": " for most of you. But in case you care on how they enable something like 32x larger context lengths," }, { "start": 1318, "end": 1324.72, "text": " and super duper flops and super duper hardware utilization during large batch processing," }, { "start": 1324.72, "end": 1329.68, "text": " give this paper a read. Also from Google, the Google Research blog has an entry called Infinite" }, { "start": 1329.68, "end": 1335.3600000000001, "text": " Nature, generating 3D fly-throughs from still photos. This is on top of a paper that they" }, { "start": 1335.3600000000001, "end": 1341.68, "text": " published at ECCV, which generates infinite views or infinite fly-throughs, as the title says. And" }, { "start": 1341.68, "end": 1346.72, "text": " the cool thing is this happens from still images. So you can give a single image and it will generate" }, { "start": 1346.72, "end": 1352.64, "text": " a fly-through from that image. They use various techniques for that. But the base idea is that" }, { "start": 1352.64, "end": 1358.88, "text": " you take an image and you predict its depth map. So how far away all the stuff is, and then you use" }, { "start": 1358.88, "end": 1364.4, "text": " that in order to render the image from a slightly different view. If you know how far away all the" }, { "start": 1364.4, "end": 1369.7600000000002, "text": " things are, you can position your camera slightly differently. And you can still determine where the" }, { "start": 1369.7600000000002, "end": 1375.6000000000001, "text": " pixels go. Now this will leave some pixels to be undetermined, because you can now see behind things" }, { "start": 1375.6000000000001, "end": 1380.48, "text": " that you didn't see before. And then you have another model here in this refining step that" }, { "start": 1380.48, "end": 1386, "text": " essentially fills in these missing pixels. And then you repeat again: you predict the depth map," }, { "start": 1386, "end": 1391.28, "text": " you adjust your camera position a tiny bit, and then you fill in the pixels that are missing. In order" }, { "start": 1391.28, "end": 1396.48, "text": " to train this, it's not exactly super easy. But there are various techniques, called cycle" }, { "start": 1396.48, "end": 1401.44, "text": " consistency, or what they do right here, they have an adversarial setup, they have a discriminator" }, { "start": 1401.44, "end": 1406.48, "text": " to determine whether after a number of steps, the image still looks like it's been generated from" }, { "start": 1406.48, "end": 1413.04, "text": " a real nature image. And if you backpropagate that error, then you can generate very long," }, { "start": 1413.04, "end": 1417.52, "text": " very high quality fly-throughs through nature. Here you can see a bunch of examples. 
What I do" }, { "start": 1417.52, "end": 1424.08, "text": " find interesting is that they also added a specific sky model in order to make you feel like the sky" }, { "start": 1424.08, "end": 1429.6, "text": " is more real. I suspect that in their original works the sky was often the problem and looked" }, { "start": 1429.6, "end": 1434.8, "text": " unrealistic. So now everything that is sky here is actually produced by a separate model, as far as" }, { "start": 1434.8, "end": 1442.8799999999999, "text": " I can tell. Paella, I hope that's how you pronounce it, is a new paper that also does text" }, { "start": 1442.88, "end": 1450, "text": " to image. However, this one is speed optimized. So in order to do diffusion, you have to take some" }, { "start": 1450, "end": 1455.0400000000002, "text": " bit of noise and then run it through the diffusion process step after step after step. There are" }, { "start": 1455.0400000000002, "end": 1460.5600000000002, "text": " various techniques to speed this up, and Paella supercharges them and manages to do the whole" }, { "start": 1460.5600000000002, "end": 1467.7600000000002, "text": " diffusion process in only 10 steps, which amounts to only 500 milliseconds. So within only 500" }, { "start": 1467.76, "end": 1473.84, "text": " milliseconds, you have a high quality image from a given piece of text. Again, amazing progress in" }, { "start": 1473.84, "end": 1478.72, "text": " a field that is super young. Check out Paella; there is a corresponding paper to it called fast" }, { "start": 1478.72, "end": 1486.48, "text": " text-conditional discrete denoising on vector-quantized latent spaces. Now, if you enjoyed" }, { "start": 1486.48, "end": 1493.68, "text": " the previous paper on how to scale up PaLM, then you might also enjoy MultiRay, which is by Meta," }, { "start": 1493.68, "end": 1499.76, "text": " and the blog post is called optimizing efficiency for large scale AI models. This describes the" }, { "start": 1499.76, "end": 1505.28, "text": " system called MultiRay. I've read the blog post, and I have to say it's kind of wishy washy; you" }, { "start": 1505.28, "end": 1510.48, "text": " have to guess a lot of the stuff, they just kind of describe in words what it does. And they link" }, { "start": 1510.48, "end": 1516.64, "text": " to various things that they've done. But I can't exactly read out, you know, what precisely they're" }, { "start": 1516.64, "end": 1521.68, "text": " doing right here. But if you need some inspiration of how a system like this would work, or you know," }, { "start": 1521.68, "end": 1527.52, "text": " some hints of how this is really done in practice at scale, then give this blog post a read." }, { "start": 1529.68, "end": 1536.0800000000002, "text": " arXiv pairs up with Hugging Face. So previously, Hugging Face has acquired Gradio, whose framework" }, { "start": 1536.0800000000002, "end": 1541.52, "text": " powers Hugging Face Spaces, which allows you to make little demos out of your Hugging Face repositories. And now" }, { "start": 1541.52, "end": 1547.3600000000001, "text": " arXiv includes those Spaces. So if you upload a paper to arXiv, you can attach a demo from a" }, { "start": 1547.36, "end": 1552.8, "text": " Hugging Face Space so people can, directly on arXiv, try out your model if you have one, or" }, { "start": 1552.8, "end": 1558.56, "text": " your technique if you have one, and do so interactively. This is very cool. 
And obviously," }, { "start": 1558.56, "end": 1565.6799999999998, "text": " I'm a big fan of integrating interactive things into our very old format of eight page PDFs." }, { "start": 1567.84, "end": 1574.08, "text": " Okay, we've got a bunch of new models this week. The first one is AltDiffusion by FlagAI," }, { "start": 1574.08, "end": 1580.56, "text": " which is a multilingual diffusion model. So this is essentially stable diffusion, but multilingual," }, { "start": 1580.56, "end": 1584.96, "text": " as you can see right here: English, Chinese, Spanish, French, Russian, Japanese, Korean," }, { "start": 1584.96, "end": 1592.8, "text": " Arabic, and Italian. Next is Demucs by Meta, which is a music source separation model. So this thing," }, { "start": 1592.8, "end": 1597.6, "text": " you can put like a song in there, and it will separate the sources, meaning it will separate" }, { "start": 1597.6, "end": 1604, "text": " things like drums and vocals and isolate those; perfect for practicing something, doing karaoke," }, { "start": 1604, "end": 1608.32, "text": " and whatever you want to do with it. The paper is called hybrid transformers for music source" }, { "start": 1608.32, "end": 1613.76, "text": " separation, and it's on arXiv. There's a new multilingual CLIP available from LAION, trained on" }, { "start": 1613.76, "end": 1619.92, "text": " their own data set, LAION-5B, and it reaches 77% zero shot on ImageNet in English," }, { "start": 1619.92, "end": 1626.64, "text": " and around 55% for Italian, Japanese and Chinese, and supports over 100 languages. The cool thing" }, { "start": 1626.64, "end": 1630.96, "text": " is that it's very efficient in training because it uses locked image tuning, which we've discussed" }, { "start": 1630.96, "end": 1636.4, "text": " previously in a video. So check out the model and check out locked image tuning if you haven't seen" }, { "start": 1636.4, "end": 1642.08, "text": " it yet. It is a really cool paper and a cool and simple technique. In other news, a research group at the" }, { "start": 1642.08, "end": 1646.8, "text": " City University of New York has released a model that can accurately predict the human" }, { "start": 1646.8, "end": 1651.92, "text": " response to novel drug compounds. Now they're certainly not the first people to release such" }, { "start": 1651.92, "end": 1656.4, "text": " a model. This has obviously been going on for as long as data science has existed. But also," }, { "start": 1656.4, "end": 1661.68, "text": " it's cool to see that even on this front, the drug discovery front, giant progress is being made" }, { "start": 1661.68, "end": 1668.72, "text": " on the back of what started out as cat image research. Alright, some helpful things for this" }, { "start": 1668.72, "end": 1674.8000000000002, "text": " week; we have quite a lot to get through, so let's get into it. This is a pixel art sprite sheet" }, { "start": 1674.8000000000002, "end": 1681.92, "text": " generator. If you're into old games, into sprite animations, and so on: this is a stable diffusion" }, { "start": 1681.92, "end": 1687.52, "text": " based model that will create the sprites for you given a description. Look at this, I typed in fat" }, { "start": 1687.52, "end": 1694.24, "text": " Joey. Prompt Extend is a model that will extend your prompts. 
So here is an example, you type in" }, { "start": 1694.24, "end": 1702, "text": " psychedelic liquids space, and it will append what it thinks that stable diffusion needs to give you" }, { "start": 1702, "end": 1709.68, "text": " what you want. So this is like a little bit of a translator between human input and whatever a very" }, { "start": 1709.68, "end": 1715.2, "text": " competent human using stable diffusion could do with all the modifiers, such as concept art," }, { "start": 1715.2, "end": 1720.64, "text": " sharp focus, illustration, Unreal Engine, and so on. There's a new blog post on Hugging Face telling" }, { "start": 1720.64, "end": 1727.3600000000001, "text": " you how to fine tune Whisper for multilingual ASR. But you can fine tune Whisper for whatever you want." }, { "start": 1727.3600000000001, "end": 1732.88, "text": " This blog post is your point of entry. Dream Textures is a plugin to make Blender interact with" }, { "start": 1732.88, "end": 1738.4, "text": " stable diffusion. So here's a demo: a person types into Blender whatever they want as a texture in" }, { "start": 1738.4, "end": 1744.64, "text": " terms of text, and then, bada bing bada boom, apply, and it's now in the texture. Absolutely great." }, { "start": 1744.64, "end": 1751.0400000000002, "text": " The YouTube channel Mutual Information has a series on reinforcement learning that I can" }, { "start": 1751.0400000000002, "end": 1755.68, "text": " highly recommend. They spent a lot of time on this and I hope it is helpful to anyone who's looking" }, { "start": 1755.68, "end": 1763.1200000000001, "text": " to get into RL. Lovely Tensors solves a problem we all have had in the past. So if I just want to" }, { "start": 1763.12, "end": 1768.56, "text": " print some tensor, I'm gonna get this, and it's absolutely not helpful at all. As soon as your" }, { "start": 1768.56, "end": 1773.6799999999998, "text": " tensors go beyond like four or five values, it's useless to just look at them. So all you do" }, { "start": 1773.6799999999998, "end": 1778.3999999999999, "text": " is you import Lovely Tensors, you monkey patch that stuff in, and all of a sudden if you print" }, { "start": 1778.3999999999999, "end": 1784.8799999999999, "text": " a tensor, a NumPy array, a torch tensor, whatever, it will give you the shape, the amount of elements," }, { "start": 1784.8799999999999, "end": 1790.9599999999998, "text": " statistics, the means, the standard deviations, and so on. This is a much, much better way to" }, { "start": 1790.96, "end": 1795.8400000000001, "text": " look at tensors. Now if the tensor is small enough, it will actually show you the values. But as soon" }, { "start": 1795.8400000000001, "end": 1801.04, "text": " as it's bigger than that, it will give you much more useful information. So here it warns you that" }, { "start": 1801.04, "end": 1806.24, "text": " there are infinities, there are NaNs in the tensors, and so on. And even here it tells you, well," }, { "start": 1806.24, "end": 1811.8400000000001, "text": " this one is actually all zeros. You can still get back to the original tensor using sort of" }, { "start": 1811.8400000000001, "end": 1817.2, "text": " property access; here, you have verbose access that will give you the values even if it's large. 
And" }, { "start": 1817.2, "end": 1822.8, "text": " here you get the just the plain old way if you really want that there are various helper methods" }, { "start": 1822.8, "end": 1828.96, "text": " around this also to show images to show statistics to show channels and to show things such as" }, { "start": 1828.96, "end": 1833.68, "text": " different filters in a stack of convolutional filters, I'll leave you to explore all of that" }, { "start": 1833.68, "end": 1839.6000000000001, "text": " yourself. But if you work with tensors a lot in an experimental sense, this is surely worth it." }, { "start": 1839.6, "end": 1848.6399999999999, "text": " GPT index is a technique to build an index out of files using GPT. So this uses GPT to essentially" }, { "start": 1848.6399999999999, "end": 1854.32, "text": " take a bunch of files and then for example, recursively summarize them so that you essentially" }, { "start": 1854.32, "end": 1859.52, "text": " have a structure where you have a summary on top of a bunch of stuff. And then if you like one of" }, { "start": 1859.52, "end": 1864.24, "text": " them, you go into it and then you have summaries of the sub stuff that is there you go into that" }, { "start": 1864.24, "end": 1869.28, "text": " it's kind of an experimental I want to say this is a bit of a new way of thinking about what we" }, { "start": 1869.28, "end": 1874.8799999999999, "text": " could do with these models in order to organize information now that we have generative capabilities" }, { "start": 1874.8799999999999, "end": 1879.92, "text": " and I like that people think out of the box. So if you're also interested, check out this repository." }, { "start": 1879.92, "end": 1884.6399999999999, "text": " There's a new upscaler for stable diffusion made by Rivers at Wings, the notebook is by" }, { "start": 1884.6399999999999, "end": 1890.24, "text": " N Shepherd and compute has been sponsored by stability AI. The notebook here runs you through" }, { "start": 1890.24, "end": 1895.84, "text": " the whole process of up sampling and it gives really cool results. I've previously talked about" }, { "start": 1895.84, "end": 1901.6799999999998, "text": " DAX hub DAX hub is like a bit of GitHub for machine learning. And I know a lot of places claim" }, { "start": 1901.6799999999998, "end": 1906, "text": " this nowadays, but that's how really believes in the open source paradigm. And now they release" }, { "start": 1906, "end": 1910.8799999999999, "text": " something they call direct data access and essentially a technique to stream down and" }, { "start": 1910.8799999999999, "end": 1917.12, "text": " upload version data to some place. So it essentially connects DVC, which you might know as like a data" }, { "start": 1917.12, "end": 1922.72, "text": " versioning tool with a transparent approach where you don't need to like pull all the whole data" }, { "start": 1922.72, "end": 1928.64, "text": " once or you know, stream it in some custom way, you can just treat it as if it already existed." }, { "start": 1928.64, "end": 1933.68, "text": " And magically, the library in the background will pull down the data as you need it in a streamed" }, { "start": 1933.68, "end": 1940.4, "text": " fashion. So no long waiting on data to arrive, you can just simply like go train and even if you don't" }, { "start": 1940.4, "end": 1945.84, "text": " have space for the whole data will still work. 
Now I don't exactly have time here to explain to you" }, { "start": 1945.84, "end": 1950.56, "text": " all of the things that you can do with it. But the install is really simple: essentially, install" }, { "start": 1950.56, "end": 1955.76, "text": " their hooks and everything works just transparently and magically. So if you're interested, check it" }, { "start": 1955.76, "end": 1960.24, "text": " out, and also check out their blog, it's regularly updated. For example, here is how to build an" }, { "start": 1960.24, "end": 1967.44, "text": " end to end active learning pipeline with fully open tools. genv is a GPU environment management tool" }, { "start": 1967.44, "end": 1972.96, "text": " that lets you easily control, configure and monitor the GPU resources that you are using. And it is" }, { "start": 1972.96, "end": 1978.1599999999999, "text": " intended to ease up the process of GPU allocation for data scientists without code changes. So" }, { "start": 1978.16, "end": 1985.6000000000001, "text": " in case you're in some lab and you share GPUs with others, this tool is a must have. I wish that" }, { "start": 1985.6000000000001, "end": 1992.4, "text": " this had existed during my PhD. It manages local GPUs, remote GPUs, cluster GPUs, and so on; you can" }, { "start": 1992.4, "end": 1998.96, "text": " reserve GPUs, free up GPUs, essentially whatever you want to do. It even has a VS Code plugin. So" }, { "start": 1998.96, "end": 2004.88, "text": " if you're at all using GPUs, and especially if you're sharing them, consider this tool." }, { "start": 2004.88, "end": 2012.48, "text": " MBXP is a multilingual benchmark for code completion in 10 plus programming languages." }, { "start": 2012.48, "end": 2017.92, "text": " tsai is an open source package intended for applying deep learning to time series, on top" }, { "start": 2017.92, "end": 2026.0800000000002, "text": " of PyTorch and fastai. Colossal-AI has released two blog posts; both pertain to better and faster" }, { "start": 2026.0800000000002, "end": 2033.2800000000002, "text": " and cheaper training of models. The first one is about what they call AIGC, AI-generated content," }, { "start": 2033.28, "end": 2038.6399999999999, "text": " which essentially means image generation models. And the second one is for structure prediction" }, { "start": 2038.6399999999999, "end": 2044.96, "text": " of protein monomers and multimers. And both times they're able to speed up these models by a lot." }, { "start": 2044.96, "end": 2050.16, "text": " Now the code is openly available, so do go and check it out. And the performance gains here are" }, { "start": 2050.16, "end": 2055.92, "text": " not only during inference, like we saw before; this in fact provides, for example for stable" }, { "start": 2055.92, "end": 2062, "text": " diffusion, 6.5 times faster training and pre-training cost savings. So the hardware cost of" }, { "start": 2062, "end": 2066.96, "text": " fine tuning can be almost seven times cheaper than if you were to do it in the vanilla way." }, { "start": 2066.96, "end": 2072.4, "text": " TAP-Vid is a benchmark for tracking any point in a video. SuperGradients is an awesome library" }, { "start": 2072.4, "end": 2077.12, "text": " to build, train and fine tune production ready, state of the art deep learning vision models." }, { "start": 2077.12, "end": 2082, "text": " Now we've seen a lot of libraries that, you know, claim to just make stuff better. 
If you're into" }, { "start": 2082, "end": 2087.6, "text": " vision, I believe having like a library that's specific for vision, such as semantic segmentation" }, { "start": 2087.6, "end": 2092.3199999999997, "text": " or bounding box prediction, or even image classification, it really pays off to have" }, { "start": 2092.3199999999997, "end": 2096.56, "text": " a library that's dedicated to your field, especially if it's something like vision," }, { "start": 2096.56, "end": 2100.88, "text": " where we have a lot of custom techniques that make these models just so much more efficient" }, { "start": 2100.88, "end": 2106.88, "text": " and better. But not only that, SuperGradients also provides a lot of pre-trained checkpoints. So even" }, { "start": 2106.88, "end": 2112.7999999999997, "text": " if you're just into using some models, this library might be good for you. Shumai is a network" }, { "start": 2112.8, "end": 2117.92, "text": " connected, differentiable tensor library for TypeScript and JavaScript. As you can see in" }, { "start": 2117.92, "end": 2123.52, "text": " this demo, what you can do is you can define neural networks in TypeScript, and then you can" }, { "start": 2123.52, "end": 2130.2400000000002, "text": " distribute them over multiple places, over multiple machines. And you can use the await, like the" }, { "start": 2130.2400000000002, "end": 2136.48, "text": " async/await syntax from JavaScript, in order to ship data to other machines or call some function" }, { "start": 2136.48, "end": 2141.1200000000003, "text": " on another machine. And the library handles everything for you, from forward propagation" }, { "start": 2141.12, "end": 2146.08, "text": " even to backpropagation and training. It's really cool, and the API for this looks quite clean." }, { "start": 2146.08, "end": 2152.3199999999997, "text": " Safetensors by Hugging Face is a new format to store and load tensors safely. I've previously" }, { "start": 2152.3199999999997, "end": 2157.04, "text": " done a video where I showed how you can, like, smuggle remote code execution into the Hugging" }, { "start": 2157.04, "end": 2162.4, "text": " Face hub, because the models essentially use the PyTorch loading function. And PyTorch in turn" }, { "start": 2162.4, "end": 2168.7999999999997, "text": " uses the pickle function from Python, which executes arbitrary code. Safetensors is supposed to" }, { "start": 2168.8, "end": 2174.4, "text": " alleviate that by defining a safe, fixed and simple format to store tensors. Now, obviously," }, { "start": 2174.4, "end": 2179.1200000000003, "text": " the trade off here is that you can't store arbitrary things anymore. If you want to store" }, { "start": 2179.1200000000003, "end": 2184.8, "text": " arbitrary things, you have to allow arbitrary code to be executed. So while I expect that a lot of" }, { "start": 2184.8, "end": 2191.36, "text": " architectures might switch to something like Safetensors, it is not a full solution for the problem." }, { "start": 2191.36, "end": 2197.6800000000003, "text": " For better or worse, research will come up with new things, new ways of doing things. And if you" }, { "start": 2197.68, "end": 2202.7999999999997, "text": " constrain yourself to a particular way of doing things, then that will always, at some point, not be enough." }, { "start": 2202.7999999999997, "end": 2208.96, "text": " However, it's mostly going to be enough. VeLO is a learned optimizer. 
And the cool thing here is that" }, { "start": 2208.96, "end": 2215.2799999999997, "text": " it really seems to be better than, or at least on par with, very hand-tuned optimizers. You might" }, { "start": 2215.2799999999997, "end": 2220.72, "text": " know optimizers such as stochastic gradient descent or Adam or something like this, but it is possible" }, { "start": 2220.72, "end": 2227.68, "text": " to learn an optimizer, so to learn a system that controls the optimization behavior of a training" }, { "start": 2227.68, "end": 2233.7599999999998, "text": " run of another system. These people have taken a lot of different ML problems, a lot of different" }, { "start": 2233.7599999999998, "end": 2238.8799999999997, "text": " networks, have run optimization problems on them, and have essentially learned an optimizer that" }, { "start": 2238.8799999999997, "end": 2244.8799999999997, "text": " optimizes all of these different problems well. So that's what we consider a learned optimizer." }, { "start": 2244.8799999999997, "end": 2250.24, "text": " And with this one it really seems that for many problems, especially like mainstream problems," }, { "start": 2250.24, "end": 2255.9199999999996, "text": " it works really, really well out of the box. So without you having to tune, you know, the beta" }, { "start": 2255.9199999999996, "end": 2261.2799999999997, "text": " two parameters and the learning rate and stuff like this, you just apply it in its default" }, { "start": 2261.2799999999997, "end": 2266.56, "text": " configuration, and it does a pretty good job. This is super important if you want to do rapid" }, { "start": 2266.56, "end": 2272.4799999999996, "text": " prototyping, rapid exploration of some new ideas, without doing a giant grid search over all the" }, { "start": 2272.4799999999996, "end": 2278.3999999999996, "text": " parameters. The Merlin DataLoader is a data loader specifically for recommender systems;" }, { "start": 2278.4, "end": 2283.6800000000003, "text": " recommender systems have, you know, a few extra or a few special requirements. Namely, there's" }, { "start": 2283.6800000000003, "end": 2289.28, "text": " often quite little data, I want to say, compared to something like an image classifier; like, the data" }, { "start": 2289.28, "end": 2294.88, "text": " points are mostly tabular, and there are not as many. So loading from disk, and parsing and" }, { "start": 2294.88, "end": 2299.36, "text": " stuff like this, can often become the bottleneck. So a data loader is super important here. And the" }, { "start": 2299.36, "end": 2305.52, "text": " Merlin DataLoader promises to be over 10 times faster than native framework data loaders. If" }, { "start": 2305.52, "end": 2312.08, "text": " you're into recommender systems, try this out. LODA is an assembly language, a computational model," }, { "start": 2312.08, "end": 2317.7599999999998, "text": " and a distributed tool for mining programs. This topic is very far away from me. But some of you" }, { "start": 2317.7599999999998, "end": 2322.48, "text": " might actually be interested. So if you're into integer sequences, there is this online" }, { "start": 2322.48, "end": 2329.6, "text": " encyclopedia of integer sequences, with sequences like 1, 2, 3, 4, 5, and so on. So there are sequences of integers, and" }, { "start": 2329.6, "end": 2334.32, "text": " the question is always, what's the program behind them? Like, can I come up with a piece of code" }, { "start": 2334.32, "end": 2341.1200000000003, "text": " that produces that integer sequence into perpetuity? 
And you know, 1, 2, 3, 4, 5 is quite simple," }, { "start": 2341.1200000000003, "end": 2346.56, "text": " but it gets complicated very quickly. And especially teaching machines to come up with the" }, { "start": 2346.56, "end": 2352.6400000000003, "text": " rules behind a sequence is a very challenging problem. So LODA is a system that allows you to" }, { "start": 2352.6400000000003, "end": 2358.6400000000003, "text": " mine such programs. Essentially, you can run it and it will crank, crank, crank, crank and intelligently" }, { "start": 2358.6400000000003, "end": 2363.92, "text": " search for these programs. But not only that, it is also a distributed tool for doing that. So you" }, { "start": 2363.92, "end": 2370.2400000000002, "text": " can partake in the distributed mining of such programs, and much more. So as I understand, this" }, { "start": 2370.2400000000002, "end": 2375.6800000000003, "text": " is about what a LODA program looks like, or what it searches for. So here you can see one of these" }, { "start": 2375.6800000000003, "end": 2380.8, "text": " sequences. And this is apparently the program it comes up with. It looks pretty interesting." }, { "start": 2380.8, "end": 2388.48, "text": " If you're interested, check LODA out. NumGa, not Numba, NumGa, is a library for geometric algebra" }, { "start": 2388.48, "end": 2394.4, "text": " in JAX and NumPy. If you're into geometric algebra, and here's an example of a rigid body physics" }, { "start": 2394.4, "end": 2401.44, "text": " engine with a constraint solver, then this library might be for you. MTEB is a benchmark for text" }, { "start": 2401.44, "end": 2407.6, "text": " embedding. This is from similar authors as the BEIR benchmark, which is a retrieval benchmark." }, { "start": 2407.6, "end": 2413.52, "text": " But this goes further. This is a benchmark that covers eight embedding tasks over 56 data sets" }, { "start": 2413.52, "end": 2421.28, "text": " and 112 languages. And in this paper they also already evaluate 33 models on that benchmark. So the" }, { "start": 2421.28, "end": 2427.7599999999998, "text": " goal here is to find the one unified text embedding that covers all downstream tasks. And the status" }, { "start": 2427.7599999999998, "end": 2433.28, "text": " so far is that that universal embedding hasn't been found yet. The leaderboard shows that some" }, { "start": 2433.28, "end": 2438.56, "text": " models are good at some tasks, other models are good at other tasks. So the holy grail of text" }, { "start": 2438.56, "end": 2443.84, "text": " embedding is still somewhere out there, and this benchmark might prove that you have found it." }, { "start": 2443.84, "end": 2447.7599999999998, "text": " Okay, the last cool thing I want to show you is Natbot. And this is already a little bit older;" }, { "start": 2447.7599999999998, "end": 2454.72, "text": " Nat Friedman tweeted this out in September. But essentially, he managed to connect GPT-3 to the" }, { "start": 2454.72, "end": 2460.16, "text": " browser, to a web browser, and just let it interact with the web browser by prompting it in an" }, { "start": 2460.16, "end": 2466.72, "text": " appropriate way, given the website's HTML structure. So apparently the original idea comes from Sharif" }, { "start": 2466.72, "end": 2472.64, "text": " Shameem, and Natbot has a repository on GitHub. Look, it's just one Python file. I know half of" }, { "start": 2472.64, "end": 2478, "text": " you are super cringing right now. But yeah, research be research. 
And if you want to figure" }, { "start": 2478, "end": 2482.72, "text": " out how it's done, how Natbot works, and if you want to give it a shot yourself, it might be really" }, { "start": 2482.72, "end": 2488.24, "text": " cool to do. So please do. Alright, that was all from ML News. This was a big chunk. Thank you so" }, { "start": 2488.24, "end": 2493.52, "text": " much for being here. Thank you for supporting the channel. Come to Discord if you're not already on" }, { "start": 2493.52, "end": 2498.32, "text": " it; the link is in the description. We have fantastic paper discussions every week and we talk general" }, { "start": 2498.32, "end": 2525.28, "text": " machine learning every day. With that being said, stay hydrated. Bye bye." } ]
YBlNQK0Ao6g
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Image GPT: Generative Pretraining from Pixels (Paper Explained)
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "openai", "gpt2", "gpt3", "bert", "transformer", "attention is all you need", "attention mechanism", "multi-head attention", "pixel rnn", "pixel cnn", "pretraining", "representation", "linear probe", "fine-tuning", "cifar10", "cifar100", "imagenet", "cnn", "convolutional neural network", "autoregressive" ]
BERT and GPT-2/3 have shown the enormous power of using generative models as pre-training for classification tasks. However, for images, pre-training is usually done with supervised or self-supervised objectives. This paper investigates how far you can get when applying the principles from the world of NLP to the world of images. OUTLINE: 0:00 - Intro & Overview 2:50 - Generative Models for Pretraining 4:50 - Pretraining for Visual Tasks 7:40 - Model Architecture 15:15 - Linear Probe Experiments 24:15 - Fine-Tuning Experiments 30:25 - Conclusion & Comments Paper: https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf Blog: https://openai.com/blog/image-gpt/ Code: https://github.com/openai/image-gpt Abstract: Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels, we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full finetuning, matching the top supervised pre-trained models. An even larger model trained on a mixture of ImageNet and web images is competitive with self-supervised benchmarks on ImageNet, achieving 72.0% top-1 accuracy on a linear probe of our features. Authors: Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, Ilya Sutskever Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Okay, I'm sure many of you have already seen this because it was rather widely announced, but the OpenAI team has announced a new model that produces pictures instead of text. So as you can see right here, on the left you'll always see like half a picture, and on the right is the ground truth. So they took this picture, they simply cut the bottom half right here, and then they let the model sort of imagine what they cut away, and what it comes up with is pretty cool, I have to say. Like, look at the birds, like this is just awesome. But the special thing about this isn't that it simply completes pictures; the special thing about it is it does it one pixel at a time. So basically it goes at this pixel right here and asks, okay, what's that pixel, and then what's that pixel, and so on. So it is basically like a language model, but for pixels, in that it goes over the images in order, basically like this, or like always from left to right, left to right, left to right, and it has no clue of the spatial relations between the pixels. It needs to learn that by itself, as opposed to a convolutional neural network, which is specifically designed such that if you want to predict this pixel right here, then it's specifically designed to say, okay, the most important information is probably around that pixel, and then some other important information is wider around that pixel. So CNNs are built with this in mind, whereas this model right here, which is also known as image GPT, doesn't have any of that. It's simply a transformer model that goes over these pixels one by one, and we'll see how that's done. There are some more examples right here. Particularly cool is the cat, and you see that there is the beginning of this little white thing right here, which is this card, and the completions of the model. Yes, very interesting. The model, as a language model, can of course also sample by itself, just random images. You sample them once through and this is what it comes up with. So these are pretty good quality images for a model that just produces them one pixel at a time. Now this idea of going one pixel at a time isn't new. This has been around before, but the investigation here is basically how far can we push these generative models for pre-training. Hi there, this is Yannic from post-production. I've realized that I've forgotten to even read the name of the paper. So it's called Generative Pretraining from Pixels, by Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan and Ilya Sutskever. And since Henry AI Labs has already made a video on this, this video is going to be more of kind of a rambling rant about what I find interesting about the paper, and some thoughts about it, rather than like a classic explanation. I hope you still enjoy that. So what you saw on the right isn't even the final result, the supposed result; this is simply the pre-training task. It's fun to look at it, but the actual objective of the paper is the following: what if we pre-train on a large data set to generate good images like these, or to complete images like these, and then we fine-tune on a classification task? And the answer is, here they say, on CIFAR-10 we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. An even larger model, trained on a mixture of ImageNet and web images, is competitive with self-supervised benchmarks on ImageNet, achieving 72.0% top-1 accuracy on a linear probe of our features.
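Just to make that pixel-by-pixel objective from above concrete before going on, here is a minimal PyTorch-style sketch; the model interface and names here are my own illustration, not OpenAI's actual code:

```python
import torch.nn.functional as F

def next_pixel_loss(model, pixels):
    # pixels: (batch, seq_len) integer color tokens from a raster-scan
    # unrolled image; model is any causal (autoregressive) transformer
    # that maps a token sequence to per-position logits over the palette
    inputs, targets = pixels[:, :-1], pixels[:, 1:]  # shift targets by one
    logits = model(inputs)                           # (batch, seq_len-1, vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```

The causal attention inside such a model guarantees that position i only sees positions before i, so it can't just copy the pixel it is asked to predict.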
So the goal here is that you have a data set that you want to train a classifier on. So usually you have a data set, and the data set has images, and you put them through like a convolutional neural network, and then you have to classify the image into one of, I don't know how many classes; on CIFAR-10 that's 10 classes, on ImageNet it's a thousand. And the data set is these images together with these labels. Now the idea of pre-training is that you somewhere have a bigger data set that is sort of similar to the small data set, but it's similar enough such that the network could learn something. So what you want to do first is you first want to take the large data set, train this network right here, and then in a second step fine-tune the network on this smaller data set, and you sort of hope that what you learn from the large data set right here transfers over a little bit of knowledge. You already have a little bit of knowledge, and you can make better use of the data that you have right here. Now the question is how do you do this pre-training, and of course this has a long tradition, well, long for maybe two or three years right now, in the language community, where people pre-train these large models, like we've just seen GPT-3, or BERT was one of them. They pre-train these large transformer models on text and then fine-tune them on classification tasks for text, and that's what this paper is doing right here. They pre-train a transformer that is a GPT-2 scale model, they pre-train it on image generation, and then they fine-tune it, or transfer learn it, to classification tasks. And the point of the paper is to say that, like in text data, in text data we have had pretty good experiences with doing this, with pre-training a generative model and then fine-tuning on a classification task, while so far in images, all we've ever done is pre-train with these pre-training tasks, which usually is a classification task, or like a self-supervised task with a contrastive loss or something like this. What they're doing that is new is the generative modeling as pre-training. And again, this isn't entirely new, but they show that if you throw a lot of compute at it, and lots of data, and a model, then that can work equally well as these self-supervised tasks. So their model, as I said, is pretty, pretty simple. They take an image and they unroll the image. Now a fully unrolled image on, let's say, ImageNet has 224 squared pixels, and that times three, right, because you have three color channels. That's too large even for an OpenAI supercomputer. So what they do is first they downscale the image. So they downscale it; it's not as drastic as here, where you just get a three by three image, but they do downscale it to like a 32 by 32 or a 64 by 64. Then they unroll it, which simply means they go through the image like this and make a sequence out of it, because their models are naturally made for text sequences. They simply put the image into a text sequence. They further simplify this by reducing the three color channels to a single one. So they have their own color representation, and basically, yeah, they reduce the three color channels to one channel that simply indexes the color in their color representation. And they say it's still pretty good. It's pretty faithful.
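As a rough sketch of that color reduction, assuming a fixed palette of reference colors (as far as I know the paper builds its palette by clustering RGB values; the exact procedure and palette size here are my assumptions):

```python
import numpy as np

def to_color_tokens(images, palette):
    # images: (n, h, w, 3) uint8 RGB; palette: (k, 3) reference colors,
    # e.g. k = 512 centroids obtained beforehand with k-means
    flat = images.reshape(-1, 3).astype(np.float32)
    # squared distance from every pixel to every palette color
    # (fine for a sketch; a real implementation would batch this)
    dists = ((flat[:, None, :] - palette[None, :, :].astype(np.float32)) ** 2).sum(-1)
    return dists.argmin(axis=1).reshape(images.shape[:-1])  # (n, h, w) ints
```

Each pixel becomes a single integer token, so a 32 by 32 image turns into a sequence of 1024 symbols, directly analogous to words in a text corpus.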
So ultimately they end up with like a 32 squared length representation of their image. And then they do one of two things. They either do autoregressive generative pre-training, which is the sort of GPT-2 style pre-training. And the idea here is that you always want to predict the next pixel of a sequence. So you can see right here, that's the sequence that you input, and you always want to predict what is the next pixel. And in this case, you see that we've already predicted everything; here we've already predicted everything up to this red pixel. So you want to know, what's this next pixel, this thing right here, what's this going to be? And the diagram here basically shows you how the attention flows. So every position in this transformer, and if you don't know what a transformer is, I have made a video about Attention Is All You Need, where these are explained. But briefly, every position here can sort of send information only in one direction. So you train all of these in parallel. And when you predict this pixel right here, you only want information from whatever was before that pixel. Otherwise the model could cheat, right? Otherwise the model could simply learn to copy over the value. But the attention pattern here is simply to show you that this is autoregressive and it's in one direction. So you always want to predict the next pixel, and then from all of this you want to predict the next pixel, and then from all of this you want to predict the next pixel. This is in contrast to this objective here that comes from BERT. And I've also made a video on BERT. What you do in BERT is you simply take that image and you block out two of the pixels, or many of the pixels, and you simply ask your network to reconstruct those pixels. And now you can see the attention flows in all directions. BERT, the B stands actually for bidirectional. So this is the contrast to the autoregressive pre-training framework. Now these two things have both been applied in text. The autoregressive one is usually easier to actually make produce something, like we saw producing these images, because you can always just predict the next pixel, and then the next, and then the next, and then the next. Whereas in BERT, it's a bit more unclear how you would produce things in a consistent manner, because the predictions of these two pixels right here are independent; it's one forward pass and then both of these are predicted. But other papers have tried to solve this, like, was it XLNet? I forget its name, it's something with an X. But these are the two objectives they look at, and it turns out they sort of trade off a bit. They work equally well, or a bit better and a bit worse, depending on the task. So once they have done this, they simply feed images, and you'll notice that you don't need any labels for this. So what you'll do is simply input an image, and then simply take away half of it like this, and then predict that pixel, and then you want to predict that pixel, and then you want to predict that pixel. That's all, like you do with text. And in BERT, you simply input an image, cross out pixels, and then predict them. So you don't need labels for this, and that's why you can do it with this big data set, and you can do it in an unsupervised fashion. So you can just crawl the internet for images and just feed this into there, and it will sort of learn to produce these images.
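And the BERT-style variant in the same hedged sketch form; the 15% masking rate is borrowed from BERT, and the extra mask token id is an assumption of mine, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def masked_pixel_loss(model, pixels, vocab=512, mask_prob=0.15):
    # pixels: (batch, seq_len) color tokens; model attends bidirectionally
    mask = torch.rand(pixels.shape, device=pixels.device) < mask_prob
    corrupted = pixels.masked_fill(mask, vocab)  # extra id acts as [MASK]
    logits = model(corrupted)                    # (batch, seq_len, vocab + 1)
    # loss only on the blanked-out positions, as described above
    return F.cross_entropy(logits[mask], pixels[mask])
```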
Now the question is, if you learn to produce these images, does that help you for classification? And they have two methods of assessing this. The bottom one here is the fine-tuning method. So this is supposed to be the representation you learn in the different layers of the network. So this is supposed to be this thing right here. What you'll do is you'll simply fine-tune. That means, on top of this representation, you add a classification head that has two outputs, cat or dog, and you train this entire network on your small data set that we discussed before. So you train the entire network, all of the parameters. This is called fine-tuning. In contrast to that, what you can do is you can simply add this classification head with two outputs and then only train this classification head. And that won't perform as well, but it gives you sort of a better idea of how good the representation is that this network right here learned. And on top of that, so if you spin this idea further, you can actually go and do this at any intermediate layer right here. So you can forward propagate until layer two right here, and then here you add your classification head into the two classes, and you only train the classification head. That being said, you can also do this with fine-tuning, but in this case, this is called a linear probe. And it is often used to assess how good the representation in intermediate layers is. Whereas what it actually does is assess how linearly classifiable a representation is, which isn't the same as how useful or how informative it is, but it is one way to assess these things. So these are the two things they assess. So as for data sets, they use CIFAR-10 and CIFAR-100 as data sets, and STL-10. And there you have to keep in mind the pre-training is done on ImageNet for those. So you pre-train on ImageNet without the labels, and then you transfer learn, or fine-tune, or linear probe, on these small data sets. Whereas later we're going to look at ImageNet, and there the pre-training, as I understand it, is done on ImageNet itself, but also a wider collection of a hundred million or so images from the web, from the internet. Okay, so as you can see right here, this is what happens if you do this linear probing, and you can see it works pretty well. So you get like a 95 to 96% accuracy with linear probes. This is very powerful. So it's not easy to get 96% on CIFAR-10; I mean, current state of the art is like 99%, but still, 96% is pretty good. And this is the entire network. There is this big, giant network that you input your image into, and then there is this one linear layer that does the classification. And all of this right here has not been trained with classification in mind. It simply has been trained to reproduce images. It hasn't even been trained on CIFAR-10, as far as I understand. It's been trained on ImageNet. So this is to stress how cool, or how significant, this result is, basically, that just a linear probe on top of that will give you such a good accuracy. And the second thing that is obvious right here is this bottom axis is the layer. So this is the layer where they attach the linear probe. And usually, if you pre-train a network with a classification task in mind, so you pre-train it with the labels, or maybe even without the labels in a self-supervised way or something like this, usually the last layer has the best representation for classification. But here, the special thing is that the intermediate layers, in the middle, have the best representation.
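A linear probe at some layer k, again as a sketch; hidden_states is a hypothetical accessor for per-layer activations, and the average pooling over the sequence mirrors what the paper describes for getting one vector per image:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_linear_probe(backbone, layer_k, loader, d_model, n_classes):
    head = nn.Linear(d_model, n_classes)          # the only trained part
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for pixels, labels in loader:
        with torch.no_grad():                     # backbone stays frozen
            h = backbone.hidden_states(pixels)[layer_k]  # (batch, seq, d)
            h = h.mean(dim=1)                     # average-pool the sequence
        loss = F.cross_entropy(head(h), labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return head
```

Fine-tuning would instead unfreeze the backbone and train everything end to end.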
You can see that the representation quality, in terms of linear probing, falls off as you go into higher layers. And this is consistent across the data sets, as you can see. And the idea here is, or the way they interpret it, is that if you have an image right here, and you've blocked part of it, so you've blocked this and this, or, wrong way around, you've generated everything, and now your task is to predict the next pixel. So you're trained to predict this next pixel right here. And the idea is that as you put the image through the network, what it will do is, sort of, since the first layers, if it's going to be similar to a CNN, they're going to be doing some low-level feature transformation thing. But also the last layers, they're going to really care about what's the exact pixel that goes here. Since it's their job to do that, they're going to care what color it needs to have, what exact luminosity, and so on, how it fits in with the previous pixels, and so on. So that's also good. But it's not just low-level information and consistency with other pixels or something like this. At some point, if you want to generate consistent images, and we saw that this model can generate consistent images, at some point there needs to be some kind of a notion of the global information in the picture, because the images are consistent throughout. So there needs to be some notion of what is in that image as a whole. And that's the exact information that we need for classification. And the only place that could actually be is here in the middle, since, you know, that's the place. So the hypothesis is that these models somehow learn a higher-level representation of global information somewhere in the middle, before they then specify that information again down to predict the actual pixel. And that's why the best representations for classification are in the middle. So this is one of the interesting findings of this paper. I mean, it's cool that they can reach a good accuracy, but to recognize that maybe in these generative models, they have some intermediate stage where they represent the global information, and that will actually make the best representation. The second cool thing right here is that you can see they have different sizes of models. So the iGPT-L, I believe, is something like 60 layers, then this is like 48 layers, and this is 32 layers. So these are all on the scale of GPT-2, either a little bigger or a little smaller. It's not like GPT-3 scale, where you need a ginormous supercomputer, though they do do a lot of computation. But this still sort of fits within hardware of a standard size, and not like exascale. What's interesting right here is that you can see that the larger models reach a lower validation loss. So here is the validation loss. The larger models, if you train them, so these checkpoints here are always after the same amount of steps, the larger models do reach a lower validation loss right here, as you can see. So this is the large, this is the medium, this is the small. And on this axis you can see the linear probe accuracy. So this is, whenever you go and you find the best intermediate layer for linear probing, you probe it and you record the accuracy. So you can see a general trend: as your validation loss goes down, the linear probe accuracy goes up.
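That best-layer search is just a sweep of the probe from before over every layer, something like this (the evaluate helper here is assumed, not a real API):

```python
def probe_accuracy_per_layer(backbone, loaders, n_layers, d_model, n_classes):
    accs = []
    for k in range(n_layers):
        head = train_linear_probe(backbone, k, loaders["train"],
                                  d_model, n_classes)
        accs.append(evaluate(backbone, head, k, loaders["test"]))
    return accs  # for image GPT this curve peaks in the middle layers
```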
So there is a connection, like there is in text models. In text models, there's a connection between the perplexity of your language model and the quality of the representation you get for downstream tasks. In this model, it seems to be the exact same thing. There is a connection between reaching lower validation loss and reaching a higher performance on classification. So that's one interesting thing, the general trend up to the upper right corner. The other, arguably even more interesting, thing is what happens if you look at the same validation loss. So at this point, all of these models have the same validation loss, yet still the bigger model is better. You can see right here, the bigger model outperforms the smaller model, even though they have the same validation loss on the image modeling task. And this is also something that OpenAI has stressed in their text papers: the larger models seem to be somehow more capable of forming good representations, even if they have the same loss. So again, this could just be sort of a training data, better-training-data-remembering thing. And when I said that about GPT-3, I didn't actually mean explicit remembering of training data; I meant kind of a fuzzy remembering of training data. I formulated that in the comments, but I feel a lot of people have misunderstood me there. Here, I think it's much harder to estimate what's going on, also since these are image pixels, and humans don't have as good a model of image pixels in their head as we have of text. As you can see, if you then fine-tune, so for now we've just done linear probing, if you fine-tune these architectures, then you reach like a 99% accuracy on CIFAR-10, which is on par with the best models that we have. So GPipe is supervised, pre-trained on ImageNet, but also, I guess, uses a bunch of data augmentation, while this image GPT uses minimal data augmentation, I think; they simply random crop a little bit, and that's about it. So they also experiment around with this BERT objective. So until now, this was all the autoregressive objective, and I feel that the OpenAI people are a bit more of a fan of the autoregressive objective, just given what they've done so far in their papers. And you can see here a comparison of the two objectives on CIFAR-10 and on ImageNet. Again, CIFAR-10 is pre-trained with ImageNet, and ImageNet itself is pre-trained with like a larger collection of images from the web. All the pre-training is done without labels. Now the blue is what you can reach with a linear probe, and the orange is then, on top of that, what you can reach by fine-tuning. So no linear probe, but fine-tuning. I have to say that the fine-tuning is always done at the end. So even though the linear probe can be attached anywhere in between, and it's often useful to do that, as we saw, because the in-between layers are the best, they say they tried fine-tuning also from in between, but it always worked out best, whenever you fine-tune, to actually take the last layer. So that kind of gives you an idea that what seems to be important is this coming up with the higher-level representation, and then, once you fine-tune, you're probably able to push that representation through to the end, because of your training signal. But if you hadn't done the pre-training, you wouldn't even have that higher-level representation, and then the signal, I guess, is not strong enough to backpropagate through the whole model.
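Sketching that fine-tuning setup, including the joint objective that, as discussed just below, the paper reports works best; the interfaces are the same assumed ones as before, and I'm assuming the backbone exposes both its hidden states and its next-pixel logits:

```python
def finetune_step(backbone, head, pixels, labels, opt, alpha=1.0):
    # everything is trainable here, not just the head
    h = backbone.hidden_states(pixels)[-1].mean(dim=1)  # last layer, pooled
    clf_loss = F.cross_entropy(head(h), labels)
    gen_loss = next_pixel_loss(backbone, pixels)  # from the first sketch
    loss = clf_loss + alpha * gen_loss            # joint objective
    opt.zero_grad(); loss.backward(); opt.step()
```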
It would be very interesting if they investigated this, if they did this linear probe analysis again after they fine-tune the model, and to see if then it is still the intermediate layers that have the best representation, or if now the best representation, in a linear probe sense, has shifted towards the end. I'm gonna guess it's shifted towards the end, but I sort of want to even see if the accuracy of the linear probe in the middle, does it stay the same? So does the curve go like this? This is the linear probe when you simply pre-train; this is linear probe accuracy. The question would be, does it change to be like this, or does it change to be like this? This is supposed to be the same at the end. So basically, does it stay as good as it is, but simply get better at the end, or, like in this curve, does the good representation now shift towards the end and leave the lower layers with even more capacity to do some low-level stuff? Yeah, maybe they've done this; I haven't seen it. And as you can see, these BERT and autoregressive objectives, they sort of trade off. So BERT tends to do poorly in the linear probe setting, but then it catches up during fine-tuning; on CIFAR-10, almost being at the level of the autoregressive, and on ImageNet, actually outperforming it. This darker thing here simply means that you average across different maskings of BERT, because, I guess, even in classification, it's not entirely clear how to get a signal out of BERT, because they don't do this CLS vector with BERT. What they do for classification and linear probing, and that's written up here, is they simply take the average pooling of all the representations of the sequence. And the last thing that I've also forgotten, there's a lot of stuff: when they fine-tune, while fine-tuning, the classification loss yields reasonable downstream performance, but they find empirically that the joint objective, the generative objective plus the classification objective, works even better. So even when you fine-tune with this model, you have to keep the generative modeling part, the generative loss, around, and then it performs even more better, more well, whatever that word is. So that's also something to think about. I think this paper right here kind of lays down a lot of cool things that you can think about, and it gives rise to a lot of hypotheses of how does this stuff work, why does this stuff work. I don't even think that the numbers are the most important thing; it's mostly the fact of the effects, and what it means. Okay, so this was my take on it. It's more kind of my rant of what I find special about this paper than about the actual paper. You can look at the paper; their numbers are pretty good. On ImageNet, they do not reach the same, like, super-duper performance as they do on CIFAR-10, and I guess that's probably because they have to downscale the ImageNet images way more than they have to downscale the CIFAR-10 images, because those are, of course, only 32 by 32. So because they have to downscale so much, they probably lose a lot of information, and I would be interested to see if there is a way to involve convolutions in all of this, so to do the downscaling in a learned manner with convolutions or something. I'm sure this has all been done already; I'm just too lazy to look it up. Yeah, so I invite you to look at their blog post, where they have these samples. They look pretty funny, and these full samples up here look fairly cool for what it's trained to do, and given that it has no spatial awareness whatsoever; it simply uses learned position encodings. And yeah, check it out. That was it from me. Bye bye.
[ { "start": 0, "end": 5.78, "text": " Okay, I'm sure many of you have already seen this because it was rather widely" }, { "start": 5.78, "end": 12.5, "text": " announced but the OpenAI team has announced a new model that produces" }, { "start": 12.5, "end": 17.84, "text": " pictures instead of text. So as you can see right here on the left you'll always" }, { "start": 17.84, "end": 23.98, "text": " see like a half a picture and on the right is the ground truth. So they took" }, { "start": 23.98, "end": 29.54, "text": " this picture, they simply cut the bottom half right here and then they let the" }, { "start": 29.54, "end": 35.16, "text": " model sort of imagine what they cut away and what it comes up with is pretty cool" }, { "start": 35.16, "end": 42.96, "text": " I have to say. Like look at the birds, like this is just awesome. But the special" }, { "start": 42.96, "end": 47.14, "text": " thing about this isn't that it simply completes pictures, the special thing" }, { "start": 47.14, "end": 54.28, "text": " about it is it does it one pixel by pixel. So basically it goes at this pixel" }, { "start": 54.28, "end": 59.08, "text": " right here and asks okay what's that pixel and then what's that pixel and" }, { "start": 59.08, "end": 67.36, "text": " so on. So it is basically like a language model but for pixels in that it" }, { "start": 67.36, "end": 74.36, "text": " goes over the images in order basically like this or like always from left to" }, { "start": 74.36, "end": 81.64, "text": " right, left to right, left to right and it has no clue of the spatial relations" }, { "start": 81.64, "end": 85, "text": " between the pixels. It needs to learn that by itself as opposed to a" }, { "start": 85, "end": 90.88, "text": " convolutional neural network which is specifically designed such that if you" }, { "start": 90.88, "end": 95.48, "text": " want to predict this pixel right here then it's specifically designed to say" }, { "start": 95.48, "end": 101.24000000000001, "text": " okay the most important information is probably around that pixel and then some" }, { "start": 101.24000000000001, "end": 106.76, "text": " like other important information is wider around that pixel. So CNNs are" }, { "start": 106.76, "end": 111.32, "text": " built with this in mind whereas this model right here which is also known as" }, { "start": 111.32, "end": 118.44, "text": " image GPT doesn't have any of that. It's simply a transformer model that" }, { "start": 118.44, "end": 123.88, "text": " goes over these pixels one by one and we'll see how that's done. There are some" }, { "start": 123.88, "end": 128.84, "text": " more examples right here. Particularly cool is the cat and you see that there" }, { "start": 128.84, "end": 135.79999999999998, "text": " is the beginning of this little white thing right here which is this card and" }, { "start": 135.8, "end": 148.68, "text": " the completions of the model. Yes very interesting. The model can of course as a" }, { "start": 148.68, "end": 154.84, "text": " language model can also sample by itself just random images. You sample them once" }, { "start": 154.84, "end": 159.8, "text": " through and this is what it comes up with. So these are pretty good quality" }, { "start": 159.8, "end": 165.24, "text": " images for a model that just produces one pixel by one pixel. Now this is a" }, { "start": 165.24, "end": 169.88, "text": " pixel. Now this idea of one pixel by pixel isn't new. 
This has been around" }, { "start": 169.88, "end": 177.32000000000002, "text": " before but the investigation here is basically how much can we how far can we" }, { "start": 177.32000000000002, "end": 182.84, "text": " push these generative models for pre-training. Hi there this is Janek" }, { "start": 182.84, "end": 188.20000000000002, "text": " from Post Production. I've realized that I've forgotten to even read the name of" }, { "start": 188.20000000000002, "end": 192.12, "text": " the paper. So it's called generative pre-training from pixels by Mark Chen," }, { "start": 192.12, "end": 199.8, "text": " Alec Radford, Rowan Child, Jeff Wu, Hewon Ju, Prafula Dariwal, David Luan and" }, { "start": 199.8, "end": 206.36, "text": " Ilya Sotskyver. And since Henry AI Labs has already made a video on this, this" }, { "start": 206.36, "end": 210.76, "text": " video is going to be more of kind of a rumble rant about what I find" }, { "start": 210.76, "end": 215.32, "text": " interesting about the paper and some thoughts about it rather than like a" }, { "start": 215.32, "end": 219.96, "text": " classic explanation. I hope you still enjoy that. So what you saw on the right" }, { "start": 219.96, "end": 224.28, "text": " wasn't even the this isn't the final result the supposed result this is" }, { "start": 224.28, "end": 229.56, "text": " simply the pre-training task. It's fun to look at it but the actual object" }, { "start": 229.56, "end": 237, "text": " objective of the paper is the following. What if we train we pre-train on a large" }, { "start": 237, "end": 245.96, "text": " data set to generate what good images like these or we to complete images like" }, { "start": 245.96, "end": 253.72, "text": " these and then we fine-tune on a classification task. And the answer is" }, { "start": 253.72, "end": 262.28000000000003, "text": " here they say on C410 we achieve the 96.3% accuracy with a linear probe" }, { "start": 262.28000000000003, "end": 269.08, "text": " outperforming a supervised wide resonant and a 99% accuracy with full" }, { "start": 269.08, "end": 275.64, "text": " fine-tuning matching the top supervised pre-trained models. An even larger model" }, { "start": 275.64, "end": 280.76, "text": " trained on a mixture of ImageNet and web images is competitive with self" }, { "start": 280.76, "end": 286.28, "text": " supervised benchmarks on ImageNet achieving 72 top one accuracy on a" }, { "start": 286.28, "end": 294.12, "text": " linear probe of our features. So the goal here is that you have a data set that" }, { "start": 294.12, "end": 300.59999999999997, "text": " you want to train a classifier on. So usually you have a data set and the" }, { "start": 300.6, "end": 306.04, "text": " data set has images and you put them through like a convolutional neural" }, { "start": 306.04, "end": 311.16, "text": " network and then you have to classify the image into one of I don't know how" }, { "start": 311.16, "end": 315.88, "text": " many classes on C410 that's 10 classes on ImageNet it's a thousand. And the" }, { "start": 315.88, "end": 321.88, "text": " data set is these images together with these labels. Now the idea of pre-training" }, { "start": 321.88, "end": 328.52000000000004, "text": " is that you somewhere have a bigger data set that is sort of similar to the small" }, { "start": 328.52, "end": 333.4, "text": " data set but yeah it's similar enough such that the network could learn" }, { "start": 333.4, "end": 337.32, "text": " something. 
So what you want to do first is you first want to take the large data" }, { "start": 337.32, "end": 343.4, "text": " set, train this network right here and then in a second step fine-tune the" }, { "start": 343.4, "end": 347.47999999999996, "text": " network on this smaller data set and you sort of hope that what you learn from" }, { "start": 347.47999999999996, "end": 352.68, "text": " the large data set right here transfers over a little bit of knowledge. You" }, { "start": 352.68, "end": 356.59999999999997, "text": " already have a little bit of knowledge and you can make better use of the data" }, { "start": 356.6, "end": 361.96000000000004, "text": " that you have right here. Now the question is how do you do this pre-training" }, { "start": 361.96000000000004, "end": 367.48, "text": " and of course this has a long tradition, well long for maybe two or three years" }, { "start": 367.48, "end": 373.48, "text": " right now in the language community where people they pre-train these large" }, { "start": 373.48, "end": 380.52000000000004, "text": " models like we've just seen GPT-3 or BERT was one of them. They pre-train these" }, { "start": 380.52, "end": 386.59999999999997, "text": " large transformer models on text and then to fine-tune them on classification" }, { "start": 386.59999999999997, "end": 391.15999999999997, "text": " tasks for text and that's what this paper is doing right here. They pre-train a" }, { "start": 391.15999999999997, "end": 401.15999999999997, "text": " transformer that is a GPT-2 scale model, they pre-train it on image generation" }, { "start": 401.15999999999997, "end": 407.56, "text": " and then they fine-tune it or transfer learn it to classification tasks. And the" }, { "start": 407.56, "end": 413.88, "text": " point of the paper is to say that like in text data, in text data we have made" }, { "start": 413.88, "end": 421.88, "text": " pretty good experiences with doing this, with pre-training a generative" }, { "start": 421.88, "end": 426.44, "text": " model and then fine-tuning on a classification task. While so far in" }, { "start": 426.44, "end": 431.96, "text": " images all we've ever done is we've pre-trained these pre-training tasks" }, { "start": 431.96, "end": 437.88, "text": " which usually is a classification task or like a self-supervised task with a" }, { "start": 437.88, "end": 443.56, "text": " contrastive loss or something like this. What they're doing new is the" }, { "start": 443.56, "end": 450.2, "text": " generative modeling as a pre-training. And again this isn't entirely new but" }, { "start": 450.2, "end": 457.4, "text": " they show that if you throw a lot of computers at it and lots of data and a" }, { "start": 457.4, "end": 462.52, "text": " model then that can work equally well to these self-supervised tasks. So their" }, { "start": 462.52, "end": 467.64, "text": " model as I said is pretty pretty simple. They take an image and they unroll the" }, { "start": 467.64, "end": 475.23999999999995, "text": " image. Now a fully unrolled image on let's say ImageNet has 224 squared" }, { "start": 475.23999999999995, "end": 479.64, "text": " pixels and that times three right because you have three color channels." }, { "start": 479.64, "end": 486.91999999999996, "text": " That's too large even for an open AI supercomputer. So what they do is first" }, { "start": 486.92, "end": 492.04, "text": " they downscale the image. 
So they downscale it's not as drastic as here" }, { "start": 492.04, "end": 496.68, "text": " where you just get a three by three image but they do downscale it to like a 32 by" }, { "start": 496.68, "end": 503.48, "text": " 32 or a 64 by 64. Then they unroll it which simply means they go through the" }, { "start": 503.48, "end": 508.84000000000003, "text": " image like this and make a sequence out of it because their models are naturally" }, { "start": 508.84000000000003, "end": 515.32, "text": " made for text sequences. They simply put the image into a text sequence. They" }, { "start": 515.32, "end": 521.88, "text": " further simplify this by reducing the three color channels to a single one. So" }, { "start": 521.88, "end": 527.5600000000001, "text": " they have their own color representation and basically yeah they reduce the" }, { "start": 527.5600000000001, "end": 532.9200000000001, "text": " three color channels to one channel that simply indexes the color in their color" }, { "start": 532.9200000000001, "end": 539.5600000000001, "text": " representation. And they say it's still pretty good. It's pretty faithful. So" }, { "start": 539.56, "end": 546.8399999999999, "text": " ultimately they end up with like a 32 squared length representation of their" }, { "start": 546.8399999999999, "end": 552.7199999999999, "text": " image. And then they do one of two things. They either do autoregressive" }, { "start": 552.7199999999999, "end": 559.1999999999999, "text": " generative pre-training which is the sort of GPT-2 style pre-training. And the" }, { "start": 559.1999999999999, "end": 565.3399999999999, "text": " idea here is that you always want to predict the next pixel of a sequence. So" }, { "start": 565.34, "end": 571.9200000000001, "text": " you can see right here that's the sequence that you input." }, { "start": 571.9200000000001, "end": 579.6800000000001, "text": " And you always want to predict what is the next pixel. And in this case you" }, { "start": 579.6800000000001, "end": 583.6, "text": " see that we've already predicted everything. Here we've already predicted" }, { "start": 583.6, "end": 588.96, "text": " everything up to this red pixel. So you want to know what's this next pixel, this" }, { "start": 588.96, "end": 596.08, "text": " thing right here. What's this going to be? And the diagram here basically shows" }, { "start": 596.08, "end": 600.6600000000001, "text": " you how the attention flows. So every position in this transformer, and if you" }, { "start": 600.6600000000001, "end": 604.58, "text": " don't know what a transformer is, I haven't made a video about attention is" }, { "start": 604.58, "end": 610, "text": " all you need where these are explained. But briefly every position here can sort" }, { "start": 610, "end": 618.48, "text": " of send information only in one direction. So you train all" }, { "start": 618.48, "end": 623.44, "text": " of these in parallel. And when you predict this pixel right here, you only" }, { "start": 623.44, "end": 629.24, "text": " want information from whatever was before that pixel. Otherwise the model" }, { "start": 629.24, "end": 634.72, "text": " could cheat, right? Otherwise the model could simply learn to copy over the" }, { "start": 634.72, "end": 640, "text": " value. But the attention pattern here is simply to show you that this is" }, { "start": 640, "end": 643.24, "text": " autoregressive and it's in one direction. So you always want to predict" }, { "start": 643.24, "end": 647.16, "text": " the next pixel. 
And then from all of this you want to predict the next pixel. And" }, { "start": 647.16, "end": 650.36, "text": " then from all of this you want to predict the next pixel. This is in" }, { "start": 650.36, "end": 655.24, "text": " contrast to this objective here that comes from BERT. And I've also made a" }, { "start": 655.24, "end": 660.12, "text": " video on BERT. What you do in BERT is you simply take that image and you cross" }, { "start": 660.12, "end": 666.6, "text": " the block out two of the pixels, or many of the pixels, and you simply ask your" }, { "start": 666.6, "end": 671.76, "text": " network to reconstruct those pixels. And now you can see the attention flows" }, { "start": 671.76, "end": 677.48, "text": " in all directions. BERT, the B stands actually for bidirectional. So this is the" }, { "start": 677.48, "end": 683.48, "text": " contrast to the autoregressive pre-training framework. Now these two" }, { "start": 683.48, "end": 689.04, "text": " things have been applied in text both. The autoregressive is usually easier" }, { "start": 689.04, "end": 694.08, "text": " to actually make it produce something, like we saw producing these images," }, { "start": 694.08, "end": 698.08, "text": " because you can always just predict the next pixel and then the next and then" }, { "start": 698.08, "end": 702.5600000000001, "text": " the next and then the next. Whereas in BERT it's a bit more unclear how you" }, { "start": 702.5600000000001, "end": 706.96, "text": " would produce things in a consistent manner. Because the predictions of these" }, { "start": 706.96, "end": 712.76, "text": " two pixels right here, they are independent. It's one forward pass and" }, { "start": 712.76, "end": 718.4000000000001, "text": " then both of these are predicted. But other papers have tried to solve this" }, { "start": 718.4000000000001, "end": 727.0400000000001, "text": " like this, not XLNet. I forget its name. It's something with an X." }, { "start": 727.04, "end": 733.92, "text": " But these are the two objectives they look at. And it turns out they" }, { "start": 733.92, "end": 739.56, "text": " sort of trade off a bit. They work equally well, or a bit better and a bit worse," }, { "start": 739.56, "end": 744.68, "text": " depending on the task. So once they have done this, they simply feed images." }, { "start": 744.68, "end": 749.76, "text": " And you'll notice that you don't need any labels for this. So what you'll do is" }, { "start": 749.76, "end": 755.98, "text": " simply input an image and then simply take away half of it like this, and then" }, { "start": 755.98, "end": 761.64, "text": " predict that pixel. And then you want to predict that pixel and then you want to" }, { "start": 761.64, "end": 766.32, "text": " predict that pixel. That's all like you do with text. And in BERT you simply" }, { "start": 766.32, "end": 772.08, "text": " input an image, cross out pixels and then predict them. So you don't need labels" }, { "start": 772.08, "end": 776.8000000000001, "text": " for this. And that's why you can do it with this big data set. And you can do it" }, { "start": 776.8000000000001, "end": 781.4, "text": " in an unsupervised fashion. So you can just crawl the internet for images and" }, { "start": 781.4, "end": 786.92, "text": " just feed this into there. And it will sort of learn to produce these images." 
}, { "start": 786.92, "end": 793.12, "text": " Now the question is, if you learn to produce these images, does that" }, { "start": 793.12, "end": 800.52, "text": " help you for classification? And they have two methods of assessing" }, { "start": 800.52, "end": 805.3199999999999, "text": " this. The bottom one here is the fine-tuning method. So this is" }, { "start": 805.3199999999999, "end": 810.56, "text": " supposed to be the representation you learn in the different layers of the" }, { "start": 810.56, "end": 815.1199999999999, "text": " network. So this is supposed to be this thing right here. What you'll do is" }, { "start": 815.1199999999999, "end": 820.5999999999999, "text": " you'll simply fine-tune. That means you on top of this representation you add a" }, { "start": 820.5999999999999, "end": 827.7199999999999, "text": " classification head that has two outputs, cat or dog, and you train this entire" }, { "start": 827.7199999999999, "end": 832.2399999999999, "text": " network on your small data set that we discussed before. So you train the" }, { "start": 832.2399999999999, "end": 837.1199999999999, "text": " entire network, all of the parameters. This is called fine-tuning. In contrast" }, { "start": 837.12, "end": 843.28, "text": " to that, what you can do is you can simply add this" }, { "start": 843.28, "end": 848.4, "text": " classification head with two outputs and then only train this classification head." }, { "start": 848.4, "end": 853.76, "text": " And that won't perform as well, but it gives you sort of a better idea of" }, { "start": 853.76, "end": 860.88, "text": " how good is the representation that this network right here learned. And on top of" }, { "start": 860.88, "end": 866.5600000000001, "text": " that, so if you spin this idea further, you can actually go and do this at any" }, { "start": 866.56, "end": 871.7199999999999, "text": " intermediate layer right here. So you can forward propagate until layer two" }, { "start": 871.7199999999999, "end": 878.0799999999999, "text": " right here, and then here you add your classification head into the two" }, { "start": 878.0799999999999, "end": 882.64, "text": " classes and you only train the classification head. That being said, you" }, { "start": 882.64, "end": 888.64, "text": " can also do this with fine-tuning, but in this case this is called a linear probe." }, { "start": 888.64, "end": 894.8, "text": " And it is often used to assess how good the A representation in intermediate" }, { "start": 894.8, "end": 900.5999999999999, "text": " layers is. Whereas what it actually does is assessing how linearly classifiable a" }, { "start": 900.5999999999999, "end": 905.52, "text": " representation is, which isn't the same as how useful or how informative, but it" }, { "start": 905.52, "end": 913, "text": " is one way to assess these things. So these are the two things they" }, { "start": 913, "end": 921.24, "text": " assess. So as for datasets, for C410 they use like C410 and C4100 as" }, { "start": 921.24, "end": 926.4, "text": " datasets and the STL10. And there you have to keep in mind the pre-training" }, { "start": 926.4, "end": 931.2, "text": " is done on ImageNet for those. So you pre-train on ImageNet without the" }, { "start": 931.2, "end": 939.96, "text": " labels and then you transfer learn or fine-tune or linear probe on these" }, { "start": 939.96, "end": 945.2, "text": " small datasets. 
Whereas later we're going to look at ImageNet and there the" }, { "start": 945.2, "end": 950.48, "text": " pre-training as I understand it is done on ImageNet itself, but also a wider" }, { "start": 950.48, "end": 958.08, "text": " collection of a hundred million or so images from the web, from the internet." }, { "start": 958.08, "end": 966.08, "text": " Okay, so as you can see right here this is what happens if you do this linear" }, { "start": 966.08, "end": 974.48, "text": " probing. And you can see it works pretty well. So you get like a 95-96% accuracy" }, { "start": 974.48, "end": 980.6, "text": " with linear probes. This is very powerful. So it's not easy to get 96% on" }, { "start": 980.6, "end": 988.5600000000001, "text": " C410. I mean current state-of-the-art is like 99%, but still 96% is pretty" }, { "start": 988.5600000000001, "end": 995.8000000000001, "text": " good. And this is the entire network. There is this big giant network that you" }, { "start": 995.8000000000001, "end": 1000.84, "text": " input your image into and then there is this one linear layer that does the" }, { "start": 1000.84, "end": 1006.36, "text": " classification. And all of this right here has not been trained with" }, { "start": 1006.36, "end": 1012.52, "text": " classification in mind. It simply has been trained to reproduce images. It" }, { "start": 1012.52, "end": 1016.6800000000001, "text": " hasn't even been trained on C410 as far as I understand. It's been trained on" }, { "start": 1016.6800000000001, "end": 1026.4, "text": " ImageNet. So this is to stress how cool or how significant this result is" }, { "start": 1026.4, "end": 1030.56, "text": " basically. That just a linear probe on top of that will give you such a good" }, { "start": 1030.56, "end": 1037.72, "text": " accuracy. And the second thing that is obvious right here is this bottom axis" }, { "start": 1037.72, "end": 1045.1599999999999, "text": " is the layer. So this is the layer where they attach the linear probe. And usually" }, { "start": 1045.1599999999999, "end": 1049.8, "text": " if you pre-train a network with a classification task in mind, so you" }, { "start": 1049.8, "end": 1053.44, "text": " pre-train it with the labels or maybe even without the labels in a self" }, { "start": 1053.44, "end": 1058.6, "text": " supervised way or something like this, usually the last layer has the best" }, { "start": 1058.6, "end": 1064.12, "text": " representation for classification. But here the special thing is that the" }, { "start": 1064.12, "end": 1069.48, "text": " intermediate layers in the middle have the best representation. You can see that" }, { "start": 1069.48, "end": 1076, "text": " the representation quality in terms of linear probing falls off as they sort of" }, { "start": 1076, "end": 1083.84, "text": " it falls off as they go into higher layers. And this is consistent across the" }, { "start": 1083.84, "end": 1090.8799999999999, "text": " datasets as you can see. And the idea here is or the way they interpret it" }, { "start": 1090.8799999999999, "end": 1099.6399999999999, "text": " is that if you have an image right here and you've blocked part of it," }, { "start": 1099.6399999999999, "end": 1111, "text": " so you've blocked this and this, wrong way around this, so you've generated" }, { "start": 1111, "end": 1118.76, "text": " everything and now your task is to predict the next pixel. So you're" }, { "start": 1118.76, "end": 1127.16, "text": " trained to predict this next pixel right here. 
And the idea is that as you put" }, { "start": 1127.16, "end": 1133.68, "text": " the image through the network, what it will do is sort of, since the first" }, { "start": 1133.68, "end": 1138.12, "text": " layers they're going to be, if you're going to be similar to a CNN, they're" }, { "start": 1138.12, "end": 1143.6799999999998, "text": " going to be doing some low-level feature transformation thing. But also the" }, { "start": 1143.6799999999998, "end": 1149.12, "text": " last layers, they're going to really care about what's the exact pixel that goes" }, { "start": 1149.12, "end": 1154.36, "text": " here. Since it's their job to do that, they're going to care what color" }, { "start": 1154.36, "end": 1159.84, "text": " does it need to have, what exact luminosity and so on, how does it fit in" }, { "start": 1159.84, "end": 1166.9599999999998, "text": " with the previous pixels and so on. So that's also good. But it's not" }, { "start": 1166.96, "end": 1171.72, "text": " just low-level information and consistency with other pixels or" }, { "start": 1171.72, "end": 1177.44, "text": " something like this. At some point if you want to generate consistent images, and" }, { "start": 1177.44, "end": 1183.16, "text": " we saw that this model can generate consistent images, at some point there" }, { "start": 1183.16, "end": 1187.68, "text": " needs to be some kind of a notion of the global information in the picture," }, { "start": 1187.68, "end": 1194, "text": " because the images are consistent throughout. So there needs to be some" }, { "start": 1194, "end": 1198.92, "text": " notion of what is in that image as a whole. And that's the exact" }, { "start": 1198.92, "end": 1203.36, "text": " information that we need for classification. And the only way that" }, { "start": 1203.36, "end": 1208.6, "text": " could actually be is here in the middle, since you know that's the place. So the" }, { "start": 1208.6, "end": 1213.72, "text": " hypothesis is that these models somehow learn a higher level of" }, { "start": 1213.72, "end": 1218.44, "text": " representation of global information somewhere in the middle before they then" }, { "start": 1218.44, "end": 1224.4, "text": " specify that information again down to predict the actual pixel. And that's why" }, { "start": 1224.4, "end": 1228.88, "text": " the best representations for classification are in the middle. So this" }, { "start": 1228.88, "end": 1234.96, "text": " is one of the interesting findings of" }, { "start": 1234.96, "end": 1239.44, "text": " this paper. I mean it's cool that they can reach a good accuracy, but to recognize" }, { "start": 1239.44, "end": 1247.0800000000002, "text": " that maybe in these generative models they have some intermediate stage" }, { "start": 1247.08, "end": 1250.24, "text": " where they represent the global information, and that will actually make" }, { "start": 1250.24, "end": 1256.9199999999998, "text": " the best representation. The second cool thing right here is that you can" }, { "start": 1256.9199999999998, "end": 1264.96, "text": " see they have different sizes of models. So the IGPT-L I believe is something" }, { "start": 1264.96, "end": 1272.56, "text": " like 60 layers, then this is like 48 layers, and this is 32 layers. So" }, { "start": 1272.56, "end": 1278.04, "text": " these are all on the scale of GPT-2, either a little bigger or a" }, { "start": 1278.04, "end": 1282, "text": " little smaller. 
It's not like a GPT-3 scale where you need a ginormous" }, { "start": 1282, "end": 1290.2, "text": " supercomputer, though they do do a lot of computation. But this still sort of fits" }, { "start": 1290.2, "end": 1298.3999999999999, "text": " within hardware of a standard size and not like exascale. What's interesting" }, { "start": 1298.4, "end": 1303.44, "text": " right here is that you can see the larger models, they reach a lower" }, { "start": 1303.44, "end": 1307.68, "text": " validation loss. So here is the validation loss. The larger model, if you train them" }, { "start": 1307.68, "end": 1311.96, "text": " on, so these checkpoints here are always after the same amount of steps. The" }, { "start": 1311.96, "end": 1317, "text": " larger models do reach a lower validation loss right here, as you can see." }, { "start": 1317, "end": 1324.96, "text": " So this is the large, this is the medium, this is the small. And also you can see" }, { "start": 1324.96, "end": 1329.56, "text": " that on this axis the linear probe accuracy. So this is whenever you go" }, { "start": 1329.56, "end": 1334.32, "text": " and you find the best intermediate layer for linear probing, you probe it and you" }, { "start": 1334.32, "end": 1339, "text": " record the accuracy. So you can see a general trend as your validation loss" }, { "start": 1339, "end": 1345.56, "text": " goes down, the linear probe accuracy goes up. So there is a connection like it is" }, { "start": 1345.56, "end": 1349.68, "text": " in text models. In text models there's a connection of the perplexity of your" }, { "start": 1349.68, "end": 1355.44, "text": " language model and the quality of the representation you get for downstream" }, { "start": 1355.44, "end": 1359.92, "text": " tasks. In this model it seems to be the exact same thing. There is a connection" }, { "start": 1359.92, "end": 1366.24, "text": " between reaching lower validation loss and reaching a higher performance on" }, { "start": 1366.24, "end": 1373.48, "text": " classification. So that's one interesting thing, the general trend to up to the" }, { "start": 1373.48, "end": 1378.68, "text": " upper right corner. The other interesting and even arguably even more" }, { "start": 1378.68, "end": 1383.88, "text": " interesting thing is that if you look at the same validation loss. So at this" }, { "start": 1383.88, "end": 1388.96, "text": " point all of these models have the same validation loss, yet still the bigger" }, { "start": 1388.96, "end": 1394.28, "text": " model is better. You can see right here the bigger model outperforms the" }, { "start": 1394.28, "end": 1399.8400000000001, "text": " smaller model even though they have the same validation loss on the image" }, { "start": 1399.8400000000001, "end": 1405.68, "text": " modeling task. And this is also something that OpenAI in their text" }, { "start": 1405.68, "end": 1410.68, "text": " papers has stressed, that the larger models they seem to be somehow more" }, { "start": 1410.68, "end": 1415.76, "text": " capable of forming good representations even if they have the" }, { "start": 1415.76, "end": 1424.1200000000001, "text": " same loss. So again this could just be sort of a training data," }, { "start": 1424.1200000000001, "end": 1429.8, "text": " better training data remembering thing. And when I said that in GPT-3 I didn't" }, { "start": 1429.8, "end": 1434.8, "text": " actually mean explicit remembering of training data. 
I meant kind of a fuzzy" }, { "start": 1434.8, "end": 1439.36, "text": " remembering of training data. I formulate that in the comments but" }, { "start": 1439.36, "end": 1445.68, "text": " I feel a lot of people have misunderstood me there. Here I think it's" }, { "start": 1445.68, "end": 1451.3999999999999, "text": " a much harder to estimate what's going on also since image pixels. Humans" }, { "start": 1451.3999999999999, "end": 1456.36, "text": " don't have a super good model on image pixels in their head as we have about" }, { "start": 1456.36, "end": 1461.3999999999999, "text": " text. As you can see if you then fine-tune, so for now we've just do" }, { "start": 1461.4, "end": 1467.96, "text": " linear probing, if you fine-tune these architectures then you reach like a 99%" }, { "start": 1467.96, "end": 1476.92, "text": " accuracy on C410 which is on par with the best models that we have. So G-Pipe" }, { "start": 1476.92, "end": 1483.0800000000002, "text": " is supervised, pre-trained on ImageNet but also I guess uses a bunch of data" }, { "start": 1483.0800000000002, "end": 1489.48, "text": " augmentation while these image GPT it uses minimal data augmentation I think." }, { "start": 1489.48, "end": 1501.72, "text": " They simply random crop a little bit and that's about it. So they also experiment" }, { "start": 1501.72, "end": 1508.3600000000001, "text": " around with this BERT objective. So until now this was all this" }, { "start": 1508.3600000000001, "end": 1513.08, "text": " autoregressive objective and I feel that OpenAI people are a bit more of a fan of" }, { "start": 1513.08, "end": 1518.3600000000001, "text": " the autoregressive objective just given what they've done so far in their papers." }, { "start": 1518.36, "end": 1527.8, "text": " And you can see here comparison of the two objectives on C410 and on ImageNet." }, { "start": 1527.8, "end": 1533.3999999999999, "text": " Again C410 is pre-trained with ImageNet and ImageNet itself is pre-trained" }, { "start": 1533.3999999999999, "end": 1537.6, "text": " with like a larger collection of images from the web. All the pre-training is" }, { "start": 1537.6, "end": 1544.8799999999999, "text": " done without labels. Now the blue is what you can reach with a linear probe and" }, { "start": 1544.88, "end": 1551.24, "text": " the orange is then on top of that what you can reach by fine-tuning. So no" }, { "start": 1551.24, "end": 1555.2800000000002, "text": " linear probe but fine-tuning. I have to say that the fine-tuning is always done" }, { "start": 1555.2800000000002, "end": 1562.88, "text": " at the end. So even though the linear probe can be attached" }, { "start": 1562.88, "end": 1567.3200000000002, "text": " anywhere in between and it's often useful to do that as we saw because the" }, { "start": 1567.3200000000002, "end": 1573.1200000000001, "text": " in-between layers are the best. They say they tried fine-tuning also from" }, { "start": 1573.12, "end": 1578.32, "text": " in-between but it always worked out best whenever you fine-tune. Whenever you" }, { "start": 1578.32, "end": 1583.4399999999998, "text": " fine-tune you take actually the last layer. So that kind of gives you an idea" }, { "start": 1583.4399999999998, "end": 1591.2399999999998, "text": " that the model is then... 
What seems to be important is this coming up" }, { "start": 1591.2399999999998, "end": 1596.3999999999999, "text": " with the higher level representation and then once you fine-tune you're probably" }, { "start": 1596.3999999999999, "end": 1602.8, "text": " able to push that representation through to the end because of your training" }, { "start": 1602.8, "end": 1607.6399999999999, "text": " signal. But if you hadn't done the pre-training you wouldn't even have" }, { "start": 1607.6399999999999, "end": 1612.28, "text": " that higher level representation and then the signal I guess is not strong" }, { "start": 1612.28, "end": 1616.56, "text": " enough to back propagate through the whole model. It would be very interesting" }, { "start": 1616.56, "end": 1621.8799999999999, "text": " if they investigate, if they do this linear probe analysis again after they" }, { "start": 1621.8799999999999, "end": 1628.08, "text": " fine-tune the model. And to see if then still it is the intermediate layers" }, { "start": 1628.08, "end": 1634.8, "text": " that have the best representation or if now the best representation in a linear" }, { "start": 1634.8, "end": 1640.1999999999998, "text": " probe sense shifted towards the end. I'm gonna guess it's shifted towards the end" }, { "start": 1640.1999999999998, "end": 1645.6399999999999, "text": " but I sort of want to even see if the accuracy of the linear probe in the" }, { "start": 1645.6399999999999, "end": 1651.96, "text": " middle, does it keep the same? So does the curve go like this? This is the" }, { "start": 1651.96, "end": 1658.56, "text": " linear probe when you simply pre-train. This is linear probe accuracy. The" }, { "start": 1658.56, "end": 1665.92, "text": " question would be does it change to be like this or does it change to be like" }, { "start": 1665.92, "end": 1672.08, "text": " this? This is supposed to be the same at the end. So basically does it stay as" }, { "start": 1672.08, "end": 1677.32, "text": " good as it is but simply get better at the end or does the representation like" }, { "start": 1677.32, "end": 1680.92, "text": " in this curve, does the good representation now shift towards the" }, { "start": 1680.92, "end": 1685.68, "text": " end and leave the lower layer with even more capacity to do some low-level" }, { "start": 1685.68, "end": 1693.3200000000002, "text": " stuff? Yeah, maybe they've done this. I haven't seen it. And as you can see" }, { "start": 1693.3200000000002, "end": 1699, "text": " these BERT and autoregressive objective, they sort of trade off. So the BERT it" }, { "start": 1699, "end": 1704.5600000000002, "text": " tends to do poorly in the linear probe setting but then it catches up during" }, { "start": 1704.5600000000002, "end": 1710.6000000000001, "text": " fine-tuning. In C410 almost being at the level of the autoregressive and in" }, { "start": 1710.6, "end": 1717.4399999999998, "text": " in ImageNet actually outperforming it. This darker thing here it" }, { "start": 1717.4399999999998, "end": 1722.32, "text": " simply means that you average across different maskings of BERT because I" }, { "start": 1722.32, "end": 1728.36, "text": " guess even in classification it's not entirely clear how to get a signal out" }, { "start": 1728.36, "end": 1733.6399999999999, "text": " of BERT because they don't do this CLS vector with BERT. 
What they do for" }, { "start": 1733.6399999999999, "end": 1740.24, "text": " classification and linear probing and that's written up here, they simply take" }, { "start": 1740.24, "end": 1746.28, "text": " the average pooling of" }, { "start": 1746.28, "end": 1752.92, "text": " all the representations of the sequence. And the last thing that I've" }, { "start": 1752.92, "end": 1762.84, "text": " also forgotten, there's a lot of stuff, when they fine-tune, while fine-tuning" }, { "start": 1762.84, "end": 1769.52, "text": " the classification loss yields reasonable" }, { "start": 1769.52, "end": 1774.04, "text": " downstream performance, we find empirically that the joint objective, the" }, { "start": 1774.04, "end": 1778.58, "text": " generative objective and the classification objective works even" }, { "start": 1778.58, "end": 1784.52, "text": " better. So even when you fine-tune with this model you have to keep the" }, { "start": 1784.52, "end": 1790.8, "text": " generative modeling part, the generative loss around and then it performs even" }, { "start": 1790.8, "end": 1799.4, "text": " more better, more well, whatever that word is. So that's also something to think" }, { "start": 1799.4, "end": 1805.44, "text": " about. I think this paper right here it kind of lays down a lot of cool" }, { "start": 1805.44, "end": 1811.64, "text": " things that you can think about and it gives rise to a lot of hypotheses of how" }, { "start": 1811.64, "end": 1816.6000000000001, "text": " does this stuff work, why does this stuff work. I don't even think that the" }, { "start": 1816.6000000000001, "end": 1822.0400000000002, "text": " numbers are the most important thing, it's mostly the fact of the effects and" }, { "start": 1822.04, "end": 1831.24, "text": " what does it mean. Okay, so this was my take on it. It's more kind of a my" }, { "start": 1831.24, "end": 1837.36, "text": " rant of what I find special about this paper than about the actual paper. You" }, { "start": 1837.36, "end": 1841.28, "text": " can look at the paper, their numbers are pretty good. On ImageNet they do not" }, { "start": 1841.28, "end": 1847.8, "text": " reach the same like super-duper performance as they do on C410 and I" }, { "start": 1847.8, "end": 1852.72, "text": " guess that's probably because they have to downscale the ImageNet images way" }, { "start": 1852.72, "end": 1856.2, "text": " more than they have to downscale the C410 images because those are of course" }, { "start": 1856.2, "end": 1863.46, "text": " only 32 by 32. So because they have to downscale so much they lose probably a" }, { "start": 1863.46, "end": 1869.12, "text": " lot of information and I would be interested to see if there is a way to" }, { "start": 1869.12, "end": 1876.72, "text": " involve convolutions in all of this. So to do the downscaling that in a" }, { "start": 1876.72, "end": 1880.72, "text": " learned manner with convolutions or something. I'm sure this has all been" }, { "start": 1880.72, "end": 1886.08, "text": " done already, I'm just lazy to look it up. Yeah, so I invite you to look at their" }, { "start": 1886.08, "end": 1891.96, "text": " blog post where they have these samples. They look pretty funny and these" }, { "start": 1891.96, "end": 1897.8, "text": " full samples up here look fairly cool for what it's trained to do" }, { "start": 1897.8, "end": 1901.44, "text": " and that it has no spatial awareness whatsoever. It simply uses learned" }, { "start": 1901.44, "end": 1907.8400000000001, "text": " position encodings. 
And yeah, check it out, that was it from me. Bye bye." } ]
G7-fRGaCZts
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[ML News] Google introduces Pathways | OpenAI solves Math Problems | Meta goes First Person
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "google", "google ai", "google pathways", "jeff dean", "pathways model", "sparse neural network", "meta", "meta ai", "ego4d", "sam altman", "openai", "openai math", "language model math", "t0", "tzero", "bigscience", "bigsciencew", "deepmind", "deepmind lecture series", "huggingface", "huggingface hub", "dataset viewer", "machine learning news", "tech news" ]
#pathways #mlnews #ego4d

Your irregular dose of Machine Learning News.

OUTLINE:
0:00 - Intro
0:20 - Sponsor: Weights & Biases
2:10 - Google Introduces Pathways AI Architecture
6:30 - OpenAI trains Language Models to do High School Math
8:25 - Sam Altman says Neural Networks truly learn
9:35 - Google AI researchers frustrated with lawyers
12:10 - DeepMind RL Lecture Series 2021
12:40 - Fashion Store sells Adversarial Patches
13:15 - A viable method to remove the GIL from CPython
15:05 - BigScience Workshop releases T0
17:40 - Huggingface Hub Dataset Viewer
18:10 - Scite classifies scientific citations
19:25 - Facebook AI Ego4D dataset & challenges
21:50 - Tesla Dojo Configurable Floating Point Spec
23:10 - Windows releases PyTorch-DirectML for Deep Learning on DirectX GPUs
23:50 - Helpful Things
33:00 - Traders use ML to analyze CEOs' language
34:20 - Cadbury creates DeepFake ads for local Indian businesses
35:25 - This Shoe Does Not Exist

Sponsor: Weights & Biases
https://wandb.com

References:
Google Introduces Pathways AI Architecture
https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/?utm_source=pocket_mylist

OpenAI trains Language Models to do High School Math
https://openai.com/blog/grade-school-math/
https://arxiv.org/abs/2110.14168

Sam Altman says Neural Networks truly learn
https://twitter.com/sama/status/1450857134648823809?s=09&t=KazQPHo6Epn0M6ihs4DqHg&utm_source=pocket_mylist

Google AI researchers frustrated with lawyers
https://archive.ph/lsQJJ#selection-2855.0-2855.294

DeepMind RL Lecture Series 2021
https://deepmind.com/learning-resources/reinforcement-learning-series-2021

Fashion Store sells Adversarial Patches
https://twitter.com/naotokui/status/1450673712722702340

A viable method to remove the GIL from CPython
https://lwn.net/Articles/872869/

BigScience Workshop releases T0
https://bigscience.huggingface.co/
https://arxiv.org/abs/2110.08207
https://huggingface.co/bigscience/T0pp

Huggingface Hub Dataset Viewer
https://twitter.com/huggingface/status/1454079471154257923

Scite classifies scientific citations
https://scite.ai
https://direct.mit.edu/qss/article/doi/10.1162/qss_a_00146/102990/scite-A-smart-citation-index-that-displays-the

Facebook AI Ego4D dataset & challenges
https://ai.facebook.com/blog/teaching-ai-to-perceive-the-world-through-your-eyes

Tesla Dojo Configurable Floating Point Spec
https://tesla-cdn.thron.com/static/SBY4B9_tesla-dojo-technology_OPNZ0M.pdf?xseo=&response-content-disposition=inline%3Bfilename%3D%22tesla-dojo-technology.pdf%22

Windows releases PyTorch-DirectML for Deep Learning on DirectX GPUs
https://devblogs.microsoft.com/windowsai/introducing-pytorch-directml-train-your-machine-learning-models-on-any-gpu/

Helpful Things
https://github.com/achaiah/pywick?utm_source=pocket_mylist
https://github.com/orybkin/lexa-benchmark?utm_source=pocket_mylist
https://orybkin.github.io/lexa/
https://twitter.com/danijarh/status/1438137568688807942?utm_source=pocket_mylist
https://github.com/RobertTLange/mle-hyperopt
https://keras.io/examples/vision/mobilevit/?utm_source=pocket_mylist
https://twitter.com/osanseviero/status/1451929248231563265?utm_source=pocket_mylist
https://huggingface.co/spaces/flax-community/image-captioning
https://huggingface.co/transformers/master/model_doc/visionencoderdecoder.html
https://github.com/facebookresearch/bitsandbytes
https://arxiv.org/abs/2110.11216
https://arxiv.org/pdf/2110.11216.pdf
https://github.com/facebookresearch/xformers
https://superbbenchmark.org/
https://arxiv.org/abs/2110.07731
https://github.com/BaguaSys/bagua?utm_source=pocket_mylist
https://github.com/cgarciae/treex
https://jax.readthedocs.io/en/latest/pytrees.html

Traders use ML to analyze CEOs' language
https://www.reuters.com/technology/ai-can-see-through-you-ceos-language-under-machine-microscope-2021-10-20/

Cadbury creates DeepFake ads for local Indian businesses
https://www.bgr.in/entertainment/shah-rukh-khan-not-just-a-cadbury-ad-twitter-diwali-celebration-1016913/

This Shoe Does Not Exist
https://www.thisshoedoesnotexist.com/

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Google introduces Pathways, their next-generation AI architecture, OpenAI solves high school math problems, and Facebook goes all in on first-person view. Welcome to ML News. But before the video starts, a quick thanks to our sponsor, Weights & Biases. I want to show you this one feature that I just learned about: did you know you can embed a Weights & Biases report in Notion? It's actually not only reports, but also other stuff by Weights & Biases. So they have this neat little page here, which, ironically, is actually a Notion page, and it is super easy to embed live Weights & Biases stuff into Notion. So for example, here I have a sweep, and you can see the sweep is interactive. So you can do all the kinds of things you're used to when analyzing a Weights & Biases sweep. Now I can just grab that URL, get over to Notion, create a new embed, and paste the link. And there we go, look at that: this is a fully functional Weights & Biases report inside of Notion. So you have all the interactivity here that you would usually have, as you can see. I can look at my runs, I can activate them, I can even go and look at my sweep controls and various things. This is really cool if you work together with other people and you work on more than just Weights & Biases reports: you can take your notes in Notion and then embed the report, the sweep, whatever, into the Notion page. I love Notion, I love Weights & Biases, and it's very cool that the two go together. If you don't know Weights & Biases, it is your one-stop shop for all your machine learning experimental needs, from trying out models and optimizing hyperparameters all the way to saving your models, deploying them, and so on. It runs in the cloud, it's free for personal users and for education, and there are plans for teams and for self-hosted setups. So all the more reason to go try it out. Thanks again to Weights & Biases for sponsoring this video. And now let's get into it. Bye bye. Hello and welcome to ML News, let's dive into our first story. Jeff Dean has released a blog post on the Google blog. No, this is not the Google AI blog, this is the main Google blog. He's also given a TED talk about the subject, and the subject is this model called Pathways, a next-generation AI architecture. We don't actually know much about this architecture, because all we have is that TED talk and this illustration right here. Essentially, Jeff Dean imagines Google's future AI projects to rely on this new architecture, where instead of having single-task neural networks that you train, you have one giant multitask neural network that can do all the tasks at once, and that would also be sparsely activated. As you can see here, different tasks would leverage different paths through this network. This goes along with a few criticisms of today's architectures. He says, for example: today's AI models are typically trained to do only one thing; Pathways will enable us to train a single model to do thousands or millions of things. So the goal is to have one model do many, many tasks at once. Second, he says: today's models mostly focus on one sense; Pathways will enable multiple senses. This refers to the fact that the inputs to current neural networks are single modalities.
Sometimes there are two modalities, but mostly there are single modalities, like images or text or sound. This Pathways architecture, naturally being multitask, will also be multimodal, which means that it could take in any sort of modality. In his TED talk, he gives the example that whether you see a leopard, hear someone say the word leopard, or see a video of a leopard, it should essentially evoke the same concept in your brain, and therefore also in the Pathways model. And lastly, he says: today's models are dense and inefficient; Pathways will make them sparse and efficient. This refers to the fact that our current networks are densely activated: everything's connected to everything, and that's very, very inefficient. He imagines this future Pathways architecture to be sparsely activated, meaning that only very small subparts of the network will be activated for a given input sample, with the different parts of the network doing different things and not always having to be active at the same time. This can also make the model much more efficient in terms of parameters and computation. Now, as I said, there's no paper to go along with this, nor an implementation, nor even a plan of how to get there. This is essentially a wishlist, and it's not a particularly new wishlist. People have dreamed of this before: oh, can't we just make multimodal multitask models where one model learns everything? Well, yeah, everyone wishes that. But you still have the problems, for example catastrophic forgetting: if you teach the model many tasks and then one task more, you still have to ensure that it doesn't forget the old tasks, which is very, very difficult. Especially since in this picture it seems like a rather feed-forward architecture, without any sort of memory modules or anything like this, so how they're going to achieve that, I don't know. Secondly, they say there are many different tasks here; however, today's huge architectures mostly rely on self-supervision and then fine-tuning for individual tasks, not on having different tasks in parallel, though multi-task training is a thing. And lastly, the sparse activations are not trivial to achieve. Again, people have been saying this forever: well, can't we just have a sparse neural network, probably the brain is sparse, blah, blah, blah. But how are you going to get there? This is just a wishlist; how we're going to get there, I don't know. The main problem with sparsity is that if you only activate sparsely in the forward pass, then your backward gradients are also going to be sparse, and you may never learn the correct sparse path through your network. These are all challenges that have existed forever, but it seems like Google is determined to solve them. I mean, if they can, all the better. But for now, it's just a plan and an idea, and I'm excited to see what happens. OpenAI has released a blog post called Solving Math Word Problems, where they train a language model to solve math problems. This goes along with a paper called Training Verifiers to Solve Math Word Problems by people at OpenAI; you can read it if you want. Essentially, it comes with a data set of about 8000 of these high school math problems, where you mainly need basic addition, subtraction, multiplication, and division in order to solve the problem. They're usually stated as little stories, and they have some sort of an answer.
Now, large language models such as GPT-3 are usually kind of bad at this type of stuff, mainly because they are not accurate enough; they don't carry out the simple required steps reliably enough. They're more like a language model, a conversation model, or a thing that simply repeats some of the stuff it has already seen. So the first approach the paper takes is to fine-tune such a language model on these tasks, and it turns out that doesn't go too well; it often makes a lot of mistakes as well. The solution comes in the form of what they call verifiers. Verifiers are models that are not trained to produce the solution, but are trained to rate whether a solution to a problem is likely to be the correct solution or not. So now what they do is they use one model that they fine-tuned to produce like 100 solutions, and then they use the verifiers to rank the solutions and pick the best one. And that turns out to be very, very powerful. We've seen approaches like this before: you remember the DALL-E model of OpenAI not only used a generative model for the avocado chair, but also used the CLIP model in order to rank the outputs of the generative model. So this could be a more general recipe for improving generative models: train verifiers, then generate a bunch of solutions and rank them with the verifiers. As I said, you can read the paper, and the data set of these math questions is available to download. Sam Altman tweeted: neural networks really, truly learn, it's not a fancy trick; this is one of the most remarkable things humans have ever figured out, and the implications are difficult to overstate. Now I'm not sure if he just wanted to start a fire with this kind of thing. There are many ways of going about this, but it seems like the truth or veracity of the statement entirely depends on how you define learning. It seems like Sam Altman, and in general that's what we see out of OpenAI, is of the opinion that the learning that humans do isn't that much different from the learning that current large-scale neural networks inherently do. Now this is to be seen a little bit in contrast with what people from the more symbolicist camp may think about these neural networks and about the nature of learning and intelligence in general. But again, I guess it only depends on the definition of words here, and just putting the modifiers "really" and "truly" in front of an undefined word doesn't suddenly make it defined. But what do you think? Let me know in the comments after you hit the subscribe button. See what I did there? Next news: Business Insider writes that Google's AI researchers say their output is being slowed by lawyers after a string of high-level exits: "getting published really is a nightmare right now." So the article starts off with a bunch of Google controversies; obviously, some famous people were fired from Google recently, and there were a bunch of scandals around that. And now one senior AI researcher who spoke with Insider on the condition of anonymity comes forward and says: well, the lawyers are essentially breathing down our necks right now, it's so difficult to publish, this is really stifling publishing inside of Google, and so on. And the article backs this up by saying: according to Google's online records, the company published 925 pieces of AI research in 2019 and 962 in 2020, but the company looks to have experienced a moderate slowdown this year, publishing just 618 research papers in 2021 thus far.
Now, this is the only place where they actually back up anything they say. I have no doubt that this is the case inside of these big companies; they give examples where, whenever researchers write words such as bias or fairness, the lawyers would just have tons of questions or want to cross them out, because they just don't understand the technical terms behind these things. Noteworthy: terms like bias and fairness actually have about 60 technical definitions, and they're all in conflict with each other, so you can't exactly fault the lawyers. What I found funny is that in the last section, a spokesperson from Google took a statement and said: we're publishing papers at the same rate we did last year. At this time last year, there were 815 approved papers, and this year there are 820 so far, the spokesperson said, adding: our website doesn't reflect all papers and is typically updated a few months after publication. So they had to bury this at the very bottom of the article, because they want to tell a story about how lawyers are so terrible and about how these exits stifled Google so much. And don't get me wrong, lawyers are terrible, and I'm pretty sure they are a pain in the neck. But the claim that this is especially ramped up now doesn't seem to hold, apart from like one or two anonymous people inside of Google coming forward. And the fact that they have to hide this statement at the very bottom makes it pretty clear: that's a much more likely explanation than Google now suddenly ramping up the lawyer scrutiny. Lawyers have always been like this. So, Insider, I'm calling crap on you.

DeepMind releases their Reinforcement Learning Lecture Series 2021. This is a lecture series introducing reinforcement learning, given by DeepMind researchers at University College London, and you can in fact watch all of it. The lectures are freely available on YouTube, the slides are available, and it's pretty cool if you want to get into reinforcement learning. It starts out with the simple frameworks and ends with deep reinforcement learning.

David Ha tweeted out the following: a pop-up shop in Shibuya will sell clothing with adversarial patches printed on them, to make a fashion statement. Now, while I can't understand this exactly, I do think it's pretty cool. So the label, or the brand, or the store is called Camouflage Against the Machines, unlabeled, and the clothing features adversarial patches. Whether that will help in any way or form, I'm quite doubtful, but it is a pretty cool inside joke if you meet other researchers.

The next one isn't really machine learning news, but it is quite important: a contributor to PyTorch has released a viable solution for Python concurrency. If you don't know, CPython, the reference implementation of the Python language, has this problem that in a multi-threaded application, in order to keep track of all the objects flying around, it is essentially forced to do reference counting. And in order to do proper reference counting, every time a reference count is incremented or decremented, the interpreter has to lock down all the threads. This is known as the GIL, the global interpreter lock, and it is the reason why you can program multi-threaded applications in Python, but they will never be able to use the interpreter at the same time. Which means that if you have CPU-bound applications, multi-threading will just not help; it will not speed up your application at all, and you need to go to multiprocessing.
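To make the CPU-bound case concrete, here is a minimal sketch (my own illustration; exact numbers depend on your machine). On current CPython, the two threads take roughly as long as doing the work sequentially, while the two processes actually run in parallel:

```python
# Demonstrates why the GIL hurts CPU-bound threading in CPython.
import time
from threading import Thread
from multiprocessing import Process

def busy(n=20_000_000):
    # A pure-Python busy loop: CPU-bound, never releases the GIL for long.
    s = 0
    for i in range(n):
        s += i
    return s

def timed(workers):
    start = time.time()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.time() - start

if __name__ == "__main__":
    # Two threads: serialized by the GIL, roughly 2x the single-run time.
    print("threads:  ", timed([Thread(target=busy) for _ in range(2)]))
    # Two processes: each has its own interpreter, so they truly run in parallel.
    print("processes:", timed([Process(target=busy) for _ in range(2)]))
```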
So the rule for the longest time has been: if your application is IO-bound, you can use multi-threading, because it's easier to program and easier to reason about shared state and so on. However, if your application is CPU-bound, then you have to go to multiprocessing, which is quite a bit more heavyweight, more error-prone, and so on. Many attempts have been made previously to remove the GIL, but every actual implementation of a Python without a GIL had the advantage of being able to run multi-threaded applications truly concurrently, but also the disadvantage that single-threaded applications (which most Python programs are) would slow down due to these changes. But now this new proposal by Sam Gross, who as I already said is a major contributor to PyTorch, is actually a viable solution, and it is being evaluated currently, which is pretty cool. It may be that in the future, concurrent programming in Python will get a lot easier.

Big Science has released T0++ (T-zero plus plus), which is a multi-task trained text-to-text model; I don't even exactly know what I should call this. Essentially, they took T5 and trained it on a bunch of different NLP tasks that are all framed simply as text input. If you don't know what T5 is: T5 is this concept that when I have an NLP task, rather than encoding it somehow in a smart way, I simply encode it as a natural language prompt. For example, if I want to translate from French to English, I simply say "please translate the following from French to English", then I put the French sentence, and then I train the model to autoregressively predict the English sentence. This means I can use pre-trained language models as a starting point for these models, and that is, namely, what GPT-3 does zero-shot out of the box. So the idea here is: if GPT-3 can do these natural language tasks, formulated in, let's say, plain English, in a zero-shot fashion, can't we achieve the same or better zero-shot performance if we don't pre-train the model on language modeling, as GPT-3 is, but instead pre-train the model on other tasks? So T0 is this model that takes a bunch of different NLP tasks, phrases them all in natural language, as a human would input or type them up, so that they are compatible with a language model, and trains on all of them at the same time. And it turns out that the resulting model can actually do new NLP tasks in a zero-shot fashion, much like GPT-3, but is way more parameter-efficient at that. So this is pretty cool, and the model is available on Hugging Face. Here you see a bunch of examples of what that can look like; they have different versions of this model, you can import it with the Hugging Face API, you can even try it out here on the website. And the thing I want to highlight is that Big Science isn't some research lab or a company; it's actually a one-year-long research workshop on large multilingual models and data sets. It is simply a conglomeration of a bunch of researchers from all over the world, loosely organized together for one year to investigate these large models. So it's pretty cool that something outside of traditional academia or corporate research labs also comes into the game and provides lots of cool stuff for the community. Definitely check it out; check out their paper, check out their models.
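As a rough sketch of what querying such a model looks like through the transformers API (the checkpoint name is the one Big Science published on the Hub; the prompt and generation settings here are just illustrative):

```python
# Sketch of zero-shot prompting with T0. Mind the size: the T0pp
# checkpoint has around 11 billion parameters, so it needs serious
# hardware; smaller variants exist on the Hub.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")

# The task is phrased in plain English, exactly as a human would type it.
prompt = ("Is this review positive or negative? "
          "Review: this is the best cast iron skillet you will ever buy")
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```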
Speaking of the Hugging Face Hub: Hugging Face released this tweet saying that the dataset viewer is available on the Hugging Face Hub. It is essentially a preview where you can, for a given data set, go and see what kind of samples are in there. Not for any data set, but for any that supports the Hugging Face streaming API, which is like half the data sets on the Hugging Face Hub. This works for images, so here you can see MNIST, and you already saw some NLP things. Pretty cool; the Hugging Face Hub is getting more and more useful by the day.

Scite is a sort of Google-Scholar-ish type of thing where you can look for publications, and then inside the publications, every citation will be annotated. First of all, with the context of where it appears: for any citation target, if you click on it, you'll see the context of the citation. And second of all, it is annotated with whether the citation actually supports the cited research, is critical of it, or refutes it. So you have positive and negative citations, and this gives you a much more accurate picture of how a particular paper has fared in the research landscape, in how it was cited, and not only whether it was cited. This is done in part by an automated system; I believe they already have a giant amount of research articles in there, they are automating the extraction of references, and they are scoring them using a deep learning model. There is also a paper to go along with it; check it out if you like, and give Scite a try. It isn't exactly free; there are different tiers right here with different features. But if this is at all helpful to you, I guess it might be worth it.

Facebook AI releases a blog post called Teaching AI to Perceive the World Through Your Eyes. This is a push by Facebook, or Meta, or whatever it is called right now, to go away from the standard data sets, where you have some third-person view of a scene, to really first-person data sets. They have a bunch of collections of data from around the world, from different people in different life circumstances in many, many places, and they collected first-person data, meaning, I guess, these people had head-mounted cameras and other sensors on, and they recorded themselves just doing everyday activities. The data set is called Ego4D, and what I think is cool about it is that the data set generation process is different from that of other data sets. Not only the fact that it is first-person and that it is distributed all over the world and not just done by a single person or team, but also because they just told the people: you know, just record yourself doing everyday stuff. And then, after the fact, they went ahead, defined tasks, and annotated the data with labels. So they didn't have the labels in mind when they collected the data; or maybe they had them in mind, but they didn't collect the data specifically to get those labels. They first collected the data, and then they put different labels over top. For example, different tasks that they imagine are memory tasks, forecasting tasks, object recognition, whatnot. They have various layers of labels annotated by humans, by crowd workers, on this data. And you can imagine that these aren't the only possible labels; in fact, it is very feasible that a different research group goes ahead and annotates the data in a different way to create their own task. The blog post highlights the difficulty of egocentric data, which is usually vastly different from a third-person view.
As you can see here on the left, this object detector works quite well in a third-person view; however, in a first-person view, it just kind of fails. So, is this a good way forward to build more capable systems, or a step into dystopia? I guess that's up to you. But if you like working with data like this, give this data set a try. I'm not exactly sure how you can get a hold of it; I think there is some sort of license attached. But yeah, it's out there.

Tesla released, apparently pretty randomly, a guide to a configurable floating point format and arithmetic. This is a very technical specification for 8-bit and 16-bit floating point numbers and arithmetic, and it is supposed to standardize, or give a format to, configurable floating point numbers. As I said, it's very technical, and it's actually also quite short. The gist is this: if you train AI models at really large scales, like Tesla does, you might want to go down from 32-bit numbers to 16-bit numbers, or even 8-bit numbers. However, in these very low regimes, you only have, whatever, 8 bits to play with, and therefore you can't specify once and for all how many bits should go to the exponent and how many bits to the mantissa. Therefore, this needs to be configurable. So, unlike in a 32-bit number, where you have exactly this many bits for this and that many bits for that, in these new configurable floating point numbers the split would be a variable that you decide as you use the number. That allows you to trade off the range the number can potentially cover against the accuracy, the resolution, that the number has within a particular range; I'll put a tiny worked example of this trade-off below. We'll see whether this remains a thing that's purely used inside of Tesla, or whether other people are going to adopt it.

Microsoft introduces PyTorch-DirectML; they say: train your machine learning models on any GPU. This is a component for PyTorch that allows you to use any DirectX GPU for doing deep learning. And all that is necessary, essentially, is that in PyTorch you don't move your tensors to "cuda", like you would with a CUDA device; now you move them to "dml", for DirectML. And that's it. This works on Windows and on the Windows Subsystem for Linux. So, if you're still a Windows user, for whatever reason, good for you.

Alright, more helpful things that I saw this week. There are a lot of helpful things this week; it's not only helpful libraries, so the section is renamed to just "help", like, help me please. PyWick is a high-level, batteries-included neural network training library for PyTorch. And yes, whatever you're thinking is said right at the beginning of the readme: does the world need another PyTorch framework? Probably not. But we started this project when no good frameworks were available, and it just kept growing. So here we are. Yeah, respect. Cool. If none of the current frameworks please you, PyWick might be for you.

LEXA is a benchmark for zero-shot reaching of goals. This goes along with a paper by CMU, UPenn and U of T about reaching goals after discovering the world. So these agents, what they'll do is essentially go ahead and just try out a bunch of stuff in the world, without any explicit goals. And after that, you give the models a picture of a goal to reach, and they're supposed to reach it. This means you don't explicitly train the agents to reach that particular goal, or any goal; you simply let them explore, and after that, they have to reach a goal. LEXA is a benchmark that implements this setup.
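And here is that worked example of the exponent/mantissa trade-off from the Tesla float format above. This is my own back-of-the-envelope illustration using IEEE-style conventions, not Tesla's exact spec:

```python
# With 8 bits, spend 1 on the sign and split the remaining 7 between
# exponent (range) and mantissa (resolution). More exponent bits buy
# a much larger representable range at the cost of coarser steps.
def fp_stats(exp_bits, man_bits):
    bias = 2 ** (exp_bits - 1) - 1
    max_exp = (2 ** exp_bits - 2) - bias   # top exponent code reserved for inf/nan
    largest = (2 - 2 ** -man_bits) * 2.0 ** max_exp
    relative_step = 2 ** -man_bits         # spacing between neighboring mantissas
    return largest, relative_step

for e, m in [(4, 3), (5, 2)]:              # 1 sign bit + e + m = 8 bits total
    largest, step = fp_stats(e, m)
    print(f"1-{e}-{m} split: max value ~ {largest:g}, relative step ~ {step:g}")

# Prints roughly: the 1-4-3 split tops out around 240 with steps of
# 1/8, while the 1-5-2 split reaches about 57344 with steps of 1/4.
```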
As for LEXA: as I said, it goes along with a paper that gives a very, very, very good baseline for this benchmark already. But the benchmark itself is available to download, so if you're interested in doing that kind of research, give it a try.

Next, Danijar Hafner tweets out: excited to introduce Crafter. So this is a game, sort of an open-world game with long-term reasoning, exploration and generalization, made for reward agents and unsupervised agents. It's called Crafter, and you move around, and there are blocks, and there's food, and you have to dig, and you have to build, and you have to craft things. I've never seen anything like this before. This is a first. This has no relation to any game that I've seen so far. No, it's, it's pretty cool. So you can craft things, as you can see right here, you can interact with stuff, and every world is randomly generated. Like, this is a Minecraft clone, but amenable to machine learning research, to AI research. That is pretty cool, because Minecraft just seems too complex, because you can move in any direction and so on; here, it's really discrete, so these models have a much easier time going about it. They've already evaluated different AI learning mechanisms on it, like Dreamer, PPO, Rainbow agents, and so on, and none of them really compare so far to a human expert. But I think the game is pretty cool, and it is available. These RL agents can already do things like, you know, dig holes, build bridges, and so on. There are quite complex behaviors already emerging here: in one clip the agent moves out of the way of a skeleton, and in another one it builds a shelter. Excellent. Crafter, give it a try. If this video gets more than three likes, we'll do a Crafter let's play for sure.

Robert Lange releases a lightweight hyperparameter optimization tool. This seems to be a cool kind of personal project by Robert, and he released it with pretty good documentation. There's a Colab, there is an example, and if you're just looking for a very simple way to do hyperparameter optimization, this might be the library for you. As you can see, there are different strategies for doing hyperparameter optimization and different ways you can define them. That's pretty much all you need; it even has the fancy decorator style, as you can see right here. Very pythonic.

Sayak Paul released a Keras tutorial on MobileViT. This is a tutorial that will guide you through implementing mobile vision transformers in Keras, which is quite neat. Keras is still as easy to use as ever, and this tutorial guides you through building the architecture from the ground up, all the way to training it. At the end, you convert the model to TF Lite, so it actually runs on your mobile phone. Pretty cool.

Omar Sanseviero tweets out: this demo is surprising, it combines ViT with GPT-2 to caption images, with great results. And yes, actually, I was positively surprised. This is a Hugging Face module where you take an existing text model, like GPT-2, and an existing computer vision model, like the vision transformer ViT, and you combine them. So first, you start out with sort of random cross-attention weights, which you then fine-tune just a little bit, and that can have really, really good results. Essentially, the model learns how to connect the latent representations of the one model to the other model and back. This is used right here to do an image captioning demo using GPT-2 and ViT, as I said, training for only about 7000 steps on the COCO data set. We'll look at the results in a second; first, here is roughly what wiring up such an encoder-decoder pair looks like.
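A minimal sketch via the transformers API; the checkpoint and class names are the standard public ones from that era of the library, the exact demo code may differ, and a freshly combined model like this produces gibberish until the cross-attention weights have been fine-tuned:

```python
# Combine a pre-trained ViT encoder with a pre-trained GPT-2 decoder.
# Only the cross-attention connecting them is randomly initialized.
from PIL import Image
from transformers import (
    VisionEncoderDecoderModel,
    ViTFeatureExtractor,
    AutoTokenizer,
)

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)
feature_extractor = ViTFeatureExtractor.from_pretrained(
    "google/vit-base-patch16-224-in21k"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# A couple of config fields need to be set for generation to work.
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.eos_token_id

# "some_image.jpg" is a placeholder path for whatever image you caption.
image = Image.open("some_image.jpg").convert("RGB")
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
generated = model.generate(pixel_values)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```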
So, the results. "This is a man swinging a tennis racket on a tennis court." That is very descriptive; that is just an inhumanly precise description of what's going on right here. "We have a blue and white street sign sitting on top of a pole." Yes, that is also a very, very, very precise description. "Person riding a skateboard on top of a cement floor." Well, I guess that has some importance. Is it just me, or are AI models just bureaucrats? But yeah, pretty cool.

bitsandbytes is a library by Facebook Research for 8-bit optimizers and quantization routines. They have a bunch of optimizers, such as Adam, AdamW, RMSprop and so on, that work with 8 bits instead of 32, and that pretty reliably saves you 75% of the optimizer memory. Something like Adam keeps two or three different buffers that you need to track for every parameter, so this can get pretty large pretty quickly, and saving three quarters of that memory definitely has value. Now, I love that it's called Facebook Research, but if you hover, it says Meta Research. Is this gonna go well? I don't know. Also, is this logo supposed to be like a pretzel? Is it supposed to be flat? Or is it supposed to represent sort of a Pringles chip, you know, like the saddle in 3D? I don't know.

Another helpful thing: A User-Friendly Introduction to PAC-Bayes Bounds by Pierre Alquier. Now, this is something I have no clue about, but I know it's important, and I have learned it at some point. If you're trying to get into PAC-Bayes bounds, this is, I believe, an over-60-page introduction that seems to be quite well written, introducing you to all the important concepts. So if you're interested, give it a try.

Again, Face..., Meta, whatever, Research releases xFormers: hackable and optimized transformer building blocks supporting composable construction. So if you're into transformers, and if you would like to recombine them and try out different things inside of them, xFormers might be a great library for doing that. You see all of these boxes here; essentially, this library makes it pretty easy to just rearrange them, connect them differently, and so on.

SUPERB is a Speech processing Universal PERformance Benchmark. This means the benchmark has a bunch of speech tasks, so tasks in machine learning where the input is a piece of speech. But the goal here is that you have one pipeline that generates a representation, and then that representation can be fine-tuned easily to all of these tasks. You're not supposed to solve all of the tasks from scratch; you're supposed to come up with the pipeline that generates the representation. If you work on speech, this might be very cool for you.

I don't know how to say this one. CCQA is a web-scale question answering data set for model pre-training. This is a large-scale QA data set that, I guess, you can use for pre-training question answering models. Excellent.

Bagua is a library that claims to speed up PyTorch. They have a bunch of things in here, for example advanced distributed training algorithms, performance auto-tuning, generic fused optimizers, a load-balanced data loader, and so on. These seem to be specialized algorithms that, in very particular cases where you want to use PyTorch, can potentially deliver a lot of speed-up. So if your problem doesn't fall into the standard bucket that the usual libraries are optimized for, maybe you can find something inside of Bagua that is going to help you. Bagua. Bagua. I don't know.
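By the way, going back to bitsandbytes for a second: swapping in the 8-bit optimizer is essentially a one-line change. A minimal sketch, assuming the bnb.optim interface and a CUDA GPU (which the library requires):

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # 32-bit version
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)  # 8-bit drop-in

# The training step itself looks exactly the same as before.
loss = model(torch.randn(8, 1024, device="cuda")).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```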
Treex (I'm not sure how to pronounce this one either) is a pytree module system for deep learning in JAX. Pytrees, for those of you who don't know, are essentially trees made out of Python structures. So here, for example, a list which contains numbers and dicts, which themselves contain tuples, and so on. JAX works with these kinds of objects, and Treex helps you to handle them in a more module-oriented, or object-oriented, way.

Reuters writes: AI can see through you: CEOs' language under machine microscope. This article essentially says that things like NLP and speech sound analysis now go after CEOs' quarterly announcements: they analyze their voices and try to recognize when they're nervous, and so on. And they actually have a point, in that they claim they can make better investment decisions if they do something like this. But, as you know, as soon as you pay attention to anything like this, the CEOs are immediately going to adjust and train to trick these AI systems. They will use scripted speeches much more in order to not trip up the NLP systems, they will train their voice acting more, I guess, or let some press secretary speak for them. All in all, if you analyze a CEO's speech to detect when they're lying and when not, and then make investment decisions based on that, you'll simply reinforce the sociopaths that have no problem with just straight-out lying and show no difference in their voice whatsoever. So if you want to create a world of even more sociopathic CEOs than it already is, I guess, then go right ahead, just do this. This is fine. Excellent.

Cadbury, the company, has apparently made this ad for local businesses in India. And it's not just an ad: they've paid an Indian celebrity to record essentially one ad, and then they modified that ad using deep learning. They have like three product categories, like shoes, and I guess glasses and watches or something like this, and they've recorded the different ads for the different products. But whenever the actor says the company name and the location of the company, they use deep learning to change it to whatever the small business is. So essentially, this is a deepfake from the same actor to his own face, but making him say something else. As a small business in India, you can go there and get your ad for your local business; the system will actually make sure that people in your area are advertised with your particular business, and people in different areas will see, I guess, the same ad, but with the actor mentioning a different business in their area. Pretty cool. There's a form; if you're in India, you know, check it out.

And lastly, This Shoe Does Not Exist. This is a website, I guess analogous to This Person Does Not Exist, which was a famous website that ran StyleGAN 2 trained on a face data set. This one uses StyleGAN 3, the recently released alias-free GAN, trained on a shoe data set. So you can just refresh and look at shoes that the model has come up with. I guess these shoes all look like they could exist; they might as well, who knows. But yeah, if you're looking for unique design ideas, check it out.
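Oh, and one quick illustration of the pytree concept from the Treex item above, since it comes up all the time in JAX. This is a minimal example using plain JAX utilities, independent of Treex itself:

```python
import jax
import jax.numpy as jnp

# Any nesting of lists, tuples and dicts is a pytree; arrays and plain
# numbers are its leaves.
params = {
    "layer1": {"w": jnp.ones((2, 3)), "b": jnp.zeros(3)},
    "layer2": [jnp.ones(3), (jnp.zeros(1), 2.0)],
}

# tree_map applies a function to every leaf while keeping the structure.
doubled = jax.tree_util.tree_map(lambda x: x * 2, params)
print(jax.tree_util.tree_leaves(doubled))
```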
As for StyleGAN 3, I'm looking forward to many more things it gets applied to. The quality of these models, and the ease of training them, have come a long way, such that it is in fact possible to do this for many types of things where you have a decent amount of data, such as shoes, I guess.

Alright, this was it for this week's ML news. Thank you so much for being here. Don't forget to like and subscribe, and let me know what you think in the comments. I value your opinions. Definitely. This is not just a trick to get the YouTube algorithm to promote the video more, and all of that kind of stuff. See ya.
[ { "start": 0, "end": 7.92, "text": " Google introduces pathways their next generation AI architecture, open AI solves high school math" }, { "start": 7.92, "end": 13.52, "text": " problems. And Facebook goes all on first person view. Welcome to ML news." }, { "start": 18.080000000000002, "end": 23.04, "text": " But before the video starts, a quick thanks to our sponsor weights and biases, I want to show you" }, { "start": 23.04, "end": 29.52, "text": " this one feature that I just learned about, did you know you can embed a weights and biases report" }, { "start": 29.52, "end": 35.2, "text": " in notion, it's actually not only reports, but also other stuff by weights and biases. So they" }, { "start": 35.2, "end": 41.519999999999996, "text": " have this neat little page here, ironically, it is actually a notion and it is super easy to embed" }, { "start": 41.519999999999996, "end": 47.120000000000005, "text": " live weights and biases stuff into notion. So for example, here I have a sweep and you can see the" }, { "start": 47.120000000000005, "end": 52.879999999999995, "text": " sweep is interactive. So you can do all the kinds of things you're used to analyzing a weights and" }, { "start": 52.88, "end": 61.120000000000005, "text": " biases sweep. Now I can just grab that URL, get over to notion and create a new embed, paste a link." }, { "start": 61.120000000000005, "end": 68, "text": " And there we go. Look at that. This is a fully functional weights and biases report inside of" }, { "start": 68, "end": 73.6, "text": " notion. So you have all the interactivity here that you would usually have as you can see. So" }, { "start": 73.6, "end": 80.4, "text": " I can look at my runs, I can activate them, I can even go and look at my sweep controls and various" }, { "start": 80.4, "end": 85.92, "text": " things. This is really cool if you work together with other people and you work on more than just" }, { "start": 85.92, "end": 92.32000000000001, "text": " weights and biases reports, you can take your notes and notion and then embed the report, the sweep," }, { "start": 92.32000000000001, "end": 98.80000000000001, "text": " whatever into notion page. I love notion, I love weights and biases. And it's very cool that to go" }, { "start": 98.80000000000001, "end": 104.08000000000001, "text": " together. If you don't know weights and biases, it is your one stop shop for all your machine" }, { "start": 104.08000000000001, "end": 109.76, "text": " learning experimental needs from trying out models optimizing hyper parameters all the way to" }, { "start": 109.76, "end": 115.52000000000001, "text": " saving your models, deploying them and so on. It runs in the cloud, it's free for personal users" }, { "start": 115.52000000000001, "end": 120.88000000000001, "text": " and for education, there are plans for teams and for self hosted setups. So all the more reason to" }, { "start": 120.88000000000001, "end": 126.88000000000001, "text": " go try it out. Thanks again to weights and biases for sponsoring this video. And now let's get into" }, { "start": 126.88000000000001, "end": 136.64000000000001, "text": " it. Bye bye. Hello and welcome to ML news. Let's dive into our first story. Jeff Dean has released" }, { "start": 136.64, "end": 143.35999999999999, "text": " a blog post on the Google blog. No, this is not the Google AI blog. This is the main Google blog." 
}, { "start": 143.35999999999999, "end": 149.67999999999998, "text": " He's also given a TED talk about the subject and the subject is this model called pathways," }, { "start": 149.67999999999998, "end": 155.76, "text": " a next generation AI architecture, we don't actually know much about this architecture," }, { "start": 155.76, "end": 160.79999999999998, "text": " because all we have is that TED talk and this illustration right here. And essentially," }, { "start": 160.8, "end": 168, "text": " Jeff Dean imagines Google's future AI projects to rely on this new architecture, where instead of" }, { "start": 168, "end": 174.88000000000002, "text": " having single task neural networks that train, you have this giant multitask neural network that can" }, { "start": 174.88000000000002, "end": 181.12, "text": " do all the tasks at once. And that would also be sparsely activated. As you can see here, different" }, { "start": 181.12, "end": 187.36, "text": " tasks would leverage different paths through this network. This goes along with a few criticisms on" }, { "start": 187.36, "end": 193.12, "text": " today's architectures. So he says, for example, today's AI models are typically trained to do" }, { "start": 193.12, "end": 199.68, "text": " only one thing, pathways will enable us to train a single model to do 1000s or millions of things." }, { "start": 199.68, "end": 206.4, "text": " So the goal is to have one model do many, many tasks at once. Second, he says today's models" }, { "start": 206.4, "end": 212.88000000000002, "text": " mostly focus on one sense pathways will enable multiple senses. This refers to the fact that the" }, { "start": 212.88, "end": 218.56, "text": " input to current neural networks are single modalities. Sometimes they're two modalities," }, { "start": 218.56, "end": 225.68, "text": " but mostly they're single modalities, like images or text or sound this pathway architecture," }, { "start": 225.68, "end": 231.92, "text": " naturally being multitask will also be multimodal, which means that it could input any sort of" }, { "start": 231.92, "end": 237.84, "text": " modality in his TED talk, he gives the example, whether or not you see a leopard or hear the word" }, { "start": 237.84, "end": 243.52, "text": " leopard or hear someone say the word leopard or see a video of a leopard that should essentially" }, { "start": 243.52, "end": 249.44, "text": " evoke the same concept in your brain and therefore also in the pathway model. And lastly, he says" }, { "start": 249.44, "end": 254.8, "text": " today's models are dense and inefficient pathways will make them sparse and efficient. This refers" }, { "start": 254.8, "end": 260.32, "text": " to the fact that our current networks are densely activated, everything's connected to everything." }, { "start": 260.32, "end": 266.56, "text": " And that's very, very inefficient. He imagines this future pathways architecture to be sparsely" }, { "start": 266.56, "end": 273.12, "text": " activated, meaning that only very small subparts of the network will be activated for a given input" }, { "start": 273.12, "end": 277.84, "text": " sample. And therefore the different parts of the network doing different things, they don't always" }, { "start": 277.84, "end": 283.36, "text": " have to be active at the same time. This can also make the model much more efficient in terms of" }, { "start": 283.36, "end": 288.32, "text": " parameters and computation. 
Now, as I said, there's not a paper to go along with this or an" }, { "start": 288.32, "end": 293.6, "text": " implementation or even a plan of how to get there. This is essentially a wishlist, and it's not a" }, { "start": 293.6, "end": 300.16, "text": " particularly new wishlist. Like people have dreamed of, oh, can't we just make multi modal multi task" }, { "start": 300.16, "end": 304.96000000000004, "text": " models where one model learns everything? Well, yeah, everyone wishes that. But you still have" }, { "start": 304.96000000000004, "end": 310.56, "text": " the problems, namely, for example, catastrophic forgetting, if you try to teach the model many" }, { "start": 310.56, "end": 315.52000000000004, "text": " tasks, and then one task more, you still have to ensure that it doesn't forget the old tasks," }, { "start": 315.52000000000004, "end": 319.84000000000003, "text": " which is very, very difficult, especially in this picture, it seems like this is a rather" }, { "start": 319.84, "end": 325.35999999999996, "text": " feed forward architecture right here without any sort of memory modules or anything like this." }, { "start": 325.35999999999996, "end": 330.64, "text": " So how they're going to achieve that, I don't know. Secondly, they say there are many different" }, { "start": 330.64, "end": 336.88, "text": " tasks here. However, huge data architectures mostly rely on self supervision, and then fine" }, { "start": 336.88, "end": 342.23999999999995, "text": " tuning for individual tasks and not having different tasks in parallel, though multi task" }, { "start": 342.23999999999995, "end": 347.44, "text": " training is a thing. And lastly, the sparse activations are not trivial to achieve. Again," }, { "start": 347.44, "end": 351.68, "text": " people have been saying this forever, like, well, can't we just have a sparse neural network," }, { "start": 351.68, "end": 355.68, "text": " probably the brain is sparse, blah, blah, blah. But how are you going to get there? This is just" }, { "start": 355.68, "end": 361.12, "text": " a wishlist, how we're going to get there. I don't know. The main problem with sparsity being that" }, { "start": 361.12, "end": 366, "text": " if you have a sparse forward signal, then your backwards gradients are also going to be sparse," }, { "start": 366, "end": 371.52, "text": " you may never learn the correct sparse way through your network. If you only activate sparsely in the" }, { "start": 371.52, "end": 376.4, "text": " forward pass. These are all challenges that have existed forever. But it seems like Google is" }, { "start": 376.4, "end": 381.84, "text": " determined to solve these challenges. I mean, if they can, all the better. But for now, it's just" }, { "start": 381.84, "end": 388.79999999999995, "text": " a plan and an idea. And I'm excited to see what happens. Open as released a blog post called" }, { "start": 388.79999999999995, "end": 395.12, "text": " solving math word problems where they train a language model to solve math problems. This" }, { "start": 395.12, "end": 400.96, "text": " goes along with a paper saying training verifiers to solve math word problems by people at Open AI," }, { "start": 400.96, "end": 407.35999999999996, "text": " you can read it if you want. 
Essentially, it is a data set of about 8000 of these high school math" }, { "start": 407.35999999999996, "end": 412.71999999999997, "text": " problems, where you mainly need the basic addition, subtraction, multiplication and division" }, { "start": 412.71999999999997, "end": 418.15999999999997, "text": " in order to solve the problem. They're usually stated as little stories, and they have some sort" }, { "start": 418.15999999999997, "end": 424.32, "text": " of an answer. Now, large language models such as GPT three are usually kind of bad at this type" }, { "start": 424.32, "end": 430, "text": " of stuff, mainly because they are not accurate enough, they don't do these simple steps that" }, { "start": 430, "end": 435.52, "text": " are required enough, they're more like a language model, they're more like a conversation model," }, { "start": 435.52, "end": 440.56, "text": " or a thing that simply repeats some of the stuff it has already seen. So the first approach the" }, { "start": 440.56, "end": 446, "text": " paper takes is to fine tune such a language model on these tasks. And it turns out that doesn't go" }, { "start": 446, "end": 451.68, "text": " too well. Very often that makes a lot of mistakes as well. And the solution comes in the form of" }, { "start": 451.68, "end": 456.64, "text": " what they call verifiers. So verifiers are model that are not trained to produce the solution," }, { "start": 456.64, "end": 462, "text": " but they are trained to rate whether a solution to a problem is likely to be the correct solution" }, { "start": 462, "end": 468.4, "text": " or not. So now what they do is they use one model that they fine tuned to produce like 100 solutions," }, { "start": 468.4, "end": 473.36, "text": " and then they use the verifiers to rank the solution and pick the best one. And that turns" }, { "start": 473.36, "end": 478.64, "text": " out to be very, very powerful. So we've seen approaches like this before, you remember the" }, { "start": 478.64, "end": 485.2, "text": " Dali model of open AI also not only used a generative model for the avocado chair, but it" }, { "start": 485.2, "end": 491.36, "text": " also used the clip model in order to rank the outputs of the generative model. So this could" }, { "start": 491.36, "end": 498.08, "text": " be a more general recipe for improving generative models is train verifiers, and then generate a" }, { "start": 498.08, "end": 502.71999999999997, "text": " bunch of solutions and rank them with the verifiers. As I said, you can read the paper and" }, { "start": 502.71999999999997, "end": 510.71999999999997, "text": " the data set of these math questions is available to download. Sam Altman tweeted neural networks" }, { "start": 510.72, "end": 517.9200000000001, "text": " really truly learn it's not a fancy trick. This is one of the most remarkable things humans have ever" }, { "start": 517.9200000000001, "end": 522.96, "text": " figured out and the implications are difficult to overstate. Now I'm not sure if he just wanted to" }, { "start": 522.96, "end": 528.88, "text": " start like a fire with this kind of things. There are many ways of going about this, but it seems" }, { "start": 528.88, "end": 534.48, "text": " like the truth or veracity of the statement entirely depends on how you define learning." 
}, { "start": 534.48, "end": 540.24, "text": " But it seems like Sam Altman and in general, that's what we see out of open AI is of the" }, { "start": 540.24, "end": 546.8, "text": " opinion that learning that humans do isn't that much different from the learning that current" }, { "start": 546.8, "end": 552.64, "text": " large scale neural networks inherently do. Now this is to be set a little bit into contrast with" }, { "start": 552.64, "end": 558.08, "text": " what people from the more symbolicist camp may think about these neural networks and about the" }, { "start": 558.08, "end": 564.16, "text": " nature of learning and intelligence in general. But again, I guess it only depends on the definition" }, { "start": 564.16, "end": 570.64, "text": " of words here. And just putting the modifiers really and truly in front of a non defined word" }, { "start": 570.64, "end": 575.36, "text": " doesn't suddenly make it defined. But what do you think? Let me know in the comments after you hit" }, { "start": 575.36, "end": 582.4, "text": " the subscribe button. See what I did there. Next news business insider writes Google's AI researchers" }, { "start": 582.4, "end": 588, "text": " say their output is being slowed by lawyers after a string of high level exits getting published" }, { "start": 588, "end": 594.4, "text": " really is a nightmare right now. So the article starts off with a bunch of Google controversies," }, { "start": 594.4, "end": 598.56, "text": " obviously, some famous people were fired from Google recently, and there were a bunch of" }, { "start": 598.56, "end": 603.76, "text": " scandals around that. And now one senior AI researcher who spoke with insider on the" }, { "start": 603.76, "end": 609.28, "text": " condition of anonymity comes forward and says, well, the lawyers are essentially up our necks" }, { "start": 609.28, "end": 614.64, "text": " right now. It's so difficult to publish, this is really stifling publishing inside of Google and" }, { "start": 614.64, "end": 619.52, "text": " so on. And the article backs this up by saying, according to Google's online records, the company" }, { "start": 619.52, "end": 627.84, "text": " published 925 pieces of AI research in 2019 and 962 in 2020. But the company looks to have experienced" }, { "start": 627.84, "end": 635.36, "text": " a moderate slowdown this year, publishing just 618 research papers in 2021. Thus far. Now this is the" }, { "start": 635.36, "end": 641.04, "text": " only thing where they actually back anything up that they say now I've no doubt that this is the" }, { "start": 641.04, "end": 646.64, "text": " case inside of these big companies, they give examples whenever they write words such as bias" }, { "start": 646.64, "end": 651.8399999999999, "text": " or fairness, then the lawyers they would just have like tons of questions or want to cross them out" }, { "start": 651.8399999999999, "end": 658.0799999999999, "text": " because they just don't understand the technical terms behind these things. Now noteworthy terms" }, { "start": 658.0799999999999, "end": 663.68, "text": " like bias and fairness actually have about 60 technical definitions, and they're all in" }, { "start": 663.68, "end": 669.12, "text": " conflict with each other. So can't exactly fault the lawyers. 
What I found funny is that in the" }, { "start": 669.12, "end": 674.4, "text": " last section here, a spokesperson from Google took a statement and said we're publishing papers at" }, { "start": 674.4, "end": 680.24, "text": " the same rate we did last year. At this time last year, there were 815 approved papers and this year" }, { "start": 680.24, "end": 685.68, "text": " there are 820 so far the spokesperson said adding our website doesn't reflect all papers and is" }, { "start": 685.68, "end": 693.04, "text": " typically updated a few months after publications. So they had to bury this on the very bottom of the" }, { "start": 693.04, "end": 698.48, "text": " article right here because they want to like tell a story about how lawyers are so terrible and" }, { "start": 698.48, "end": 704.16, "text": " about how this exit stifled Google so much and don't get me wrong, lawyers are terrible. And I'm" }, { "start": 704.16, "end": 710.08, "text": " pretty sure that there are pain in the neck. But the claim that this is especially ramped up now" }, { "start": 710.08, "end": 715.44, "text": " doesn't seem to hold apart from like one or two anonymous people inside of Google coming forward." }, { "start": 715.44, "end": 720.4, "text": " And the fact that they have to hide this thing at the very bottom, which is pretty clear, like that's" }, { "start": 720.4, "end": 726.08, "text": " a much more likely explanation than Google now suddenly ramping up their eyeballs of the lawyers" }, { "start": 726.08, "end": 734.1600000000001, "text": " like lawyers have always been like this. So insider, I'm calling crap on you. DeepMind releases" }, { "start": 734.1600000000001, "end": 740.4000000000001, "text": " their reinforcement learning lecture series 2021. This is a lecture series about introduction to" }, { "start": 740.4000000000001, "end": 745.2800000000001, "text": " reinforcement learning by DeepMind researchers at the University College London, and you can" }, { "start": 745.2800000000001, "end": 750.24, "text": " in fact watch all of them. They're freely available on YouTube, the slides are available." }, { "start": 750.24, "end": 755.2800000000001, "text": " And it's pretty cool if you want to get into reinforcement learning. It starts out with the" }, { "start": 755.28, "end": 761.36, "text": " simple frameworks, and it ends with deep reinforcement learning. David Hart tweeted the" }, { "start": 761.36, "end": 767.92, "text": " following out a pop up shop in Shibuya will sell clothing with adversarial patches printed on them" }, { "start": 767.92, "end": 773.4399999999999, "text": " to make a fashion statement. Now, while I can't understand this exactly, I do think it's pretty" }, { "start": 773.4399999999999, "end": 779.68, "text": " cool. So the label or the brand or the store is called camouflage against the machines unlabeled," }, { "start": 779.68, "end": 786.0799999999999, "text": " and the clothing features adversarial patches. Now, whether that will help in any way or form," }, { "start": 786.0799999999999, "end": 792, "text": " like I'm quite doubtful, but it is a pretty cool inside joke if you meet other researchers." }, { "start": 793.8399999999999, "end": 800.16, "text": " The next one isn't really machine learning news, but it is quite important. A contributor to" }, { "start": 800.16, "end": 807.28, "text": " pytorch has released a viable solution for Python concurrency. 
So if you don't know cpython, the" }, { "start": 807.28, "end": 812.48, "text": " reference implementation for the Python language has this problem that in a multi threaded" }, { "start": 812.48, "end": 817.28, "text": " application, in order to keep track of all the objects flying around, it essentially is forced" }, { "start": 817.28, "end": 822, "text": " to do this reference counting. And in order to do proper reference counting, it essentially means" }, { "start": 822, "end": 827.12, "text": " that every time a reference is incremented or decremented has to lock down all the threads." }, { "start": 827.12, "end": 833.76, "text": " This is known as the gil, the global interpreter lock. And it is the reason why you can program" }, { "start": 833.76, "end": 838.96, "text": " multi threaded applications in Python, but they will never be able to use the interpreter at the" }, { "start": 838.96, "end": 844.08, "text": " same time, which means that if you have CPU bound applications, multi threading will just not help," }, { "start": 844.08, "end": 849.4399999999999, "text": " it will not speed up your application at all, you need to go to multi processing. So the rule for" }, { "start": 849.4399999999999, "end": 854.24, "text": " the longest time has been if your application is IO bound, then you can use multi threading because" }, { "start": 854.24, "end": 859.12, "text": " it's easier to program, it's easier to reason about a shared state and so on. However, if your" }, { "start": 859.12, "end": 864.88, "text": " application is CPU bound, then you have to go to multi processing, which is quite a bit more heavy," }, { "start": 864.88, "end": 870.8, "text": " more error prone, so on. Many attempts have been made previously to remove the gil, but every single" }, { "start": 870.8, "end": 876.64, "text": " actual implementation of a Python without a gil had the advantage of being able to run multi" }, { "start": 876.64, "end": 882.4, "text": " threaded applications really concurrently, but also the disadvantage that single threaded applications," }, { "start": 882.4, "end": 888.24, "text": " which most Python programs are single threaded applications would slow down due to these changes." }, { "start": 888.24, "end": 895.52, "text": " But now this new suggestion by Sam Gross, who as I already said is a major contributor to PyTorch" }, { "start": 895.52, "end": 900.24, "text": " is actually a viable solution and is being evaluated currently, which is pretty cool." }, { "start": 900.24, "end": 905.6, "text": " And it may be that in the future, Python concurrent programming will get a lot easier." }, { "start": 907.04, "end": 915.2, "text": " Big Science has released t zero plus plus, which is a model that is a multi task trained text to" }, { "start": 915.2, "end": 920.96, "text": " text model don't even exactly know how I should call this. But essentially, they took t five," }, { "start": 920.96, "end": 928.24, "text": " and they trained it with a bunch of different NLP tasks that you all frame as a really a text input." }, { "start": 928.24, "end": 933.2, "text": " So if you don't know what t five is, t five is this concept that when I have an NLP task," }, { "start": 933.2, "end": 938.4000000000001, "text": " rather than encoding it somehow in a smart way, I simply encoded as a natural language prompt. 
For" }, { "start": 938.4000000000001, "end": 943.0400000000001, "text": " example, if I want to translate from French to English, I simply say please translate the" }, { "start": 943.04, "end": 948.16, "text": " following from French to English, and then I put the French sentence and then I train the model to" }, { "start": 948.16, "end": 954.24, "text": " auto aggressively predict the English sentence. This means I can use pre trained language models" }, { "start": 954.24, "end": 960.4, "text": " as a start for these models. And namely, that is what GPT three does zero shot out of the box. So" }, { "start": 960.4, "end": 967.4399999999999, "text": " the idea here is that if GPT three can do this in a zero shot fashion, these natural language tasks" }, { "start": 967.44, "end": 973.9200000000001, "text": " that are formulated in the language of let's say of the input of English, can't we achieve the same" }, { "start": 973.9200000000001, "end": 979.84, "text": " or better zero shot performance if we don't pre train the model on language modeling as GPT three" }, { "start": 979.84, "end": 986.6400000000001, "text": " is but if we instead pre train the model on other tasks. So t zero is this model that takes a bunch" }, { "start": 986.6400000000001, "end": 993.0400000000001, "text": " of different NLP tasks puts them all into the language as a human would input them or type them" }, { "start": 993.04, "end": 998.4, "text": " up. So they are compatible with a language model trains all of them at the same time. And it turns" }, { "start": 998.4, "end": 1005.68, "text": " out that the resulting model can actually do new NLP tasks in a zero shot fashion much like GPT three" }, { "start": 1005.68, "end": 1010.48, "text": " but is way more parameter efficient at that. So this is pretty cool. And the model is available" }, { "start": 1010.48, "end": 1015.5999999999999, "text": " on hugging face. So here you see a bunch of examples of what that can look like they have" }, { "start": 1015.5999999999999, "end": 1021.4399999999999, "text": " different versions of this model, you can import it in the hugging face API, you can even try it out" }, { "start": 1021.44, "end": 1026.16, "text": " here on the website. And the thing I want to highlight is that big science isn't some research" }, { "start": 1026.16, "end": 1032, "text": " lab or a company, it's actually a one year long research workshop on large multilingual models and" }, { "start": 1032, "end": 1037.1200000000001, "text": " data sets. This is simply a conglomeration of a bunch of researchers from all over the world that" }, { "start": 1037.1200000000001, "end": 1042.8, "text": " is loosely organized together for one year to investigate these large models. So it's pretty" }, { "start": 1042.8, "end": 1049.6000000000001, "text": " cool that something outside of traditional academia or corporate research labs also comes into the game" }, { "start": 1049.6, "end": 1055.6, "text": " and provides lots of cool stuff for the community. Definitely check it out. Check out their paper," }, { "start": 1055.6, "end": 1062.9599999999998, "text": " check out their models. 
Speaking of the hugging face hub, hugging face released this tweet saying" }, { "start": 1062.9599999999998, "end": 1069.12, "text": " that the data set viewer is available in hugging face hub is essentially a preview where you can" }, { "start": 1069.12, "end": 1074.7199999999998, "text": " for any data set go and see what kind of samples are in there, not for any data set, but for any" }, { "start": 1074.72, "end": 1080.16, "text": " that supports the hugging face streaming API, which are like half the data sets on the hugging" }, { "start": 1080.16, "end": 1085.68, "text": " face hub, this works for images. So here you can see MNIST and you already saw some NLP things. So" }, { "start": 1085.68, "end": 1094.16, "text": " pretty cool hugging face hub is getting more and more useful by the day. site is a sort of a Google" }, { "start": 1094.16, "end": 1100.48, "text": " scholar ish type of thing where you can look for publications and then inside the publications," }, { "start": 1100.48, "end": 1107.28, "text": " every citation will be annotated, first of all, with the context of where it goes. So any citation" }, { "start": 1107.28, "end": 1112.64, "text": " target, if you click on it, you'll see sort of the context of the citation. And second of all," }, { "start": 1112.64, "end": 1118, "text": " it is annotated with the fact of whether the citation actually supports the cited research" }, { "start": 1118, "end": 1123.68, "text": " or is critical of it or refutes it. So you have positive and negative citations. And this gives" }, { "start": 1123.68, "end": 1129.3600000000001, "text": " you a much more accurate picture of how a particular paper has fared in the research" }, { "start": 1129.36, "end": 1136.56, "text": " landscape in how it was cited and not only whether it was cited, this is done in part by an automated" }, { "start": 1136.56, "end": 1142.4799999999998, "text": " system. And I believe they already have a giant amount of research articles in there and automating" }, { "start": 1142.4799999999998, "end": 1148.24, "text": " these extraction of references, and they are scoring them using deep learning model. What else" }, { "start": 1148.24, "end": 1155.4399999999998, "text": " there is a paper to go along with it, check it out if you like and give site a try. It isn't exactly" }, { "start": 1155.44, "end": 1161.2, "text": " free. There are different tiers right here with different features. But if this is at all helpful" }, { "start": 1161.2, "end": 1169.52, "text": " to you, I guess it might be worth it. Facebook AI releases a blog post called teaching AI to perceive" }, { "start": 1169.52, "end": 1176.0800000000002, "text": " the world through your eyes. This is a push by Facebook or meta or whatever it is called right" }, { "start": 1176.0800000000002, "end": 1183.44, "text": " now to go away from the standard data sets where you have some third person view of a scene to" }, { "start": 1183.44, "end": 1190.4, "text": " really first person data sets. So they have a bunch of collections of data from around the world from" }, { "start": 1190.4, "end": 1196.96, "text": " different people in different life circumstances in many, many places, and they collected first" }, { "start": 1196.96, "end": 1203.28, "text": " person data, meaning I guess these people had head mounted cameras and had other sensors on and they" }, { "start": 1203.28, "end": 1210.24, "text": " recorded just doing everyday activities. So the data set is called ego 4d. 
And what I think is" }, { "start": 1210.24, "end": 1216.48, "text": " cool about it is the data set generation process is different from what other data sets are not" }, { "start": 1216.48, "end": 1221.36, "text": " only the fact that it is first person and that it is, you know, distributed all over the world and" }, { "start": 1221.36, "end": 1226.32, "text": " not just done by a single person or team, but also because they just told the people, you know," }, { "start": 1226.32, "end": 1231.92, "text": " just record yourself doing everyday stuff. And then after the fact, they went ahead and they defined" }, { "start": 1231.92, "end": 1237.44, "text": " tasks and they annotated the data for labels. So they didn't have the labels in mind when they" }, { "start": 1237.44, "end": 1242.48, "text": " collected the data, or maybe they had them in mind, but they didn't collect the data specifically to" }, { "start": 1242.48, "end": 1249.6000000000001, "text": " get some labels first collected the data, and then they put different labels over top. So for example," }, { "start": 1249.6000000000001, "end": 1255.92, "text": " different tasks that they imagine are memory tasks, forecasting tasks, object recognition," }, { "start": 1255.92, "end": 1261.8400000000001, "text": " whatnot, they have various layers of labels annotated by humans by crowd workers on this" }, { "start": 1261.8400000000001, "end": 1267.3600000000001, "text": " data and the data set, you know, you can imagine that these aren't the only labels. In fact," }, { "start": 1267.36, "end": 1272.4799999999998, "text": " it is very feasible that a different research group goes ahead and annotates the data in a" }, { "start": 1272.4799999999998, "end": 1278.9599999999998, "text": " different way to create their own task. The blog post highlights the difficulty of ego centric data," }, { "start": 1278.9599999999998, "end": 1284.08, "text": " which is usually vastly different from like a third person view. As you can see here on the left," }, { "start": 1284.08, "end": 1290, "text": " this object detector works quite well in a third person view. However, in a first person view," }, { "start": 1290, "end": 1296.08, "text": " it just kind of fails. So is this a good way forward to build more capable systems or a step" }, { "start": 1296.08, "end": 1301.6799999999998, "text": " into dystopia? I guess that's up to you. But if you like working with data like this, then give" }, { "start": 1301.6799999999998, "end": 1306.56, "text": " this data set a try. I'm not exactly sure how you can get ahold of it. I think there is some sort of" }, { "start": 1306.56, "end": 1314.24, "text": " license attached. But yeah, it's out there. Tesla released apparently pretty randomly a guide to a" }, { "start": 1314.24, "end": 1321.28, "text": " configurable floating point format and arithmetic. So this is a very technical specification for" }, { "start": 1321.28, "end": 1328.16, "text": " eight bit and 16 bit floating point numbers and arithmetic and is supposed to sort of standardize" }, { "start": 1328.16, "end": 1333.2, "text": " or give a format to configurable floating point numbers. So as I said, it's very technical," }, { "start": 1333.2, "end": 1339.28, "text": " it's actually also quite short. 
And the gist here is that they say if you train AI models on" }, { "start": 1339.28, "end": 1346.16, "text": " really large scales, like Tesla does, you might want to go down from 32 bit numbers to 16 bit" }, { "start": 1346.16, "end": 1351.3600000000001, "text": " numbers or even eight bit numbers. However, in these very low regimes, you only have whatever" }, { "start": 1351.3600000000001, "end": 1357.8400000000001, "text": " eight bits to play with. And therefore you can't exactly specify ahead of time how many bits should" }, { "start": 1357.8400000000001, "end": 1364, "text": " be the exponent and how many bits the mantissa should be. Therefore, this needs to be configurable." }, { "start": 1364, "end": 1369.44, "text": " So unlike in a 32 bit number, where you have exactly this many bits for this and this many bits for" }, { "start": 1369.44, "end": 1374.24, "text": " that, in these new configurable floating point numbers this would be a variable that you can" }, { "start": 1374.24, "end": 1379.6, "text": " decide as you use the number. That allows you to trade off the range this number can" }, { "start": 1379.6, "end": 1385.76, "text": " potentially have against the accuracy, the resolution, that the number can have in a particular range." }, { "start": 1385.76, "end": 1391.04, "text": " We'll see whether this remains a thing that's purely used inside of Tesla or whether other" }, { "start": 1391.04, "end": 1399.28, "text": " people are going to adopt it. Microsoft introduces PyTorch DirectML; they say: train your machine" }, { "start": 1399.28, "end": 1406.6399999999999, "text": " learning models on any GPU. So this is a component for pytorch that allows you to use any DirectX" }, { "start": 1406.6399999999999, "end": 1412.72, "text": " GPU for doing deep learning. And all that is necessary essentially is that in pytorch, you" }, { "start": 1412.72, "end": 1419.76, "text": " don't say to CUDA, like if you have a CUDA device; now you say to DML, to DirectML. And that's it." }, { "start": 1419.76, "end": 1424.96, "text": " This works on Windows and on the Windows subsystem for Linux. So if you're still a" }, { "start": 1424.96, "end": 1433.3600000000001, "text": " Windows user for whatever reason, good for you. Alright, more helpful things that I saw this week;" }, { "start": 1433.3600000000001, "end": 1438.32, "text": " there are a lot of helpful things this week. It's not only helpful libraries; the section is" }, { "start": 1438.32, "end": 1446.96, "text": " renamed to just help, like, help me please. Pywick is a high level, batteries included neural network" }, { "start": 1446.96, "end": 1452.88, "text": " training library for pytorch. And yes, whatever you're thinking is said right at the beginning of" }, { "start": 1452.88, "end": 1457.3600000000001, "text": " the readme: does the world need another pytorch framework? Probably not. But we started this" }, { "start": 1457.3600000000001, "end": 1462.5600000000002, "text": " project when no good frameworks were available. And it just kept growing. So here we are. Yeah," }, { "start": 1462.5600000000002, "end": 1469.1200000000001, "text": " respect. Cool. If none of the current frameworks please you, Pywick might be for you. Lexa is a" }, { "start": 1469.1200000000001, "end": 1477.3600000000001, "text": " benchmark for zero shot reaching of goals. This goes along with a paper by CMU, UPenn and U of T" }, { "start": 1477.36, "end": 1483.04, "text": " about reaching goals after discovering the world. 
So these agents, what they'll do is they'll essentially" }, { "start": 1483.04, "end": 1488.8799999999999, "text": " go ahead and just try out a bunch of stuff in the world without any explicit goals. And after" }, { "start": 1488.8799999999999, "end": 1494.24, "text": " that, you give the models a picture of a goal to reach, and they're supposed to reach it. So this" }, { "start": 1494.24, "end": 1500.6399999999999, "text": " means you don't explicitly train the agents to reach that particular goal, or any goal; you simply" }, { "start": 1500.6399999999999, "end": 1505.9199999999998, "text": " let them explore. And after that, they have to reach a goal. So Lexa is a benchmark that achieves" }, { "start": 1505.92, "end": 1512.0800000000002, "text": " this. And as I said, this goes along with the paper that gives a very, very, very good baseline" }, { "start": 1512.0800000000002, "end": 1517.44, "text": " for this benchmark already. But the benchmark itself is available to download; if you're interested" }, { "start": 1517.44, "end": 1523.28, "text": " in doing that kind of research, give it a try. Next, Danijar Hafner tweets out: excited to" }, { "start": 1523.28, "end": 1529.8400000000001, "text": " introduce Crafter. So this is a game, sort of an open world game, with long term reasoning, exploration," }, { "start": 1529.84, "end": 1536.24, "text": " and generalization, made for reward agents and unsupervised agents. So it's called Crafter. And" }, { "start": 1536.24, "end": 1542.1599999999999, "text": " you move around, and there's blocks, and there's food, and you have to dig and you have to build" }, { "start": 1542.1599999999999, "end": 1547.84, "text": " and you have to craft things. I've never seen anything like this before. This is a first." }, { "start": 1547.84, "end": 1555.28, "text": " This has no relation to any game that I've seen so far. No, it's, it's pretty cool. So you can craft" }, { "start": 1555.28, "end": 1560.72, "text": " things, as you can see right here, you can interact with stuff, and every world is randomly generated." }, { "start": 1560.72, "end": 1566.8799999999999, "text": " Like, this is a Minecraft clone, but amenable to machine learning research, to AI research. So" }, { "start": 1566.8799999999999, "end": 1571.36, "text": " that is pretty cool, because Minecraft just seems too complex, because you can move in any" }, { "start": 1571.36, "end": 1576.8, "text": " direction and so on; here, it's really discrete. So these models have a much easier time" }, { "start": 1576.8, "end": 1582.8799999999999, "text": " going about it. They've already evaluated different AI learning mechanisms on it, like" }, { "start": 1582.88, "end": 1589.44, "text": " Dreamer, PPO, Rainbow agents, and so on. And none of them really compare so far to a human expert. But" }, { "start": 1589.44, "end": 1594.3200000000002, "text": " I think the game is pretty cool. It is available. These RL agents can already do things like, you" }, { "start": 1594.3200000000002, "end": 1600, "text": " know, dig holes, build bridges, and so on. There are very complex behaviors already emerging here:" }, { "start": 1600, "end": 1605.7600000000002, "text": " one moves out of the way of a skeleton, and in another one builds a shelter. Excellent. Crafter:" }, { "start": 1605.7600000000002, "end": 1611.8400000000001, "text": " give it a try. If this video gets more than three likes, we'll do a Crafter let's play for sure." 
}, { "start": 1611.84, "end": 1618.3999999999999, "text": " Robert Lange releases a lightweight hyper parameter optimization tool. This seems to be a cool kind of" }, { "start": 1618.3999999999999, "end": 1623.36, "text": " personal project by Robert, and he released it with pretty good documentation. There's a colab," }, { "start": 1623.36, "end": 1629.76, "text": " there is an example. And if you're just looking for a very simple way to do hyper parameter" }, { "start": 1629.76, "end": 1635.12, "text": " optimization, then this might be the library for you. As you can see, there are different strategies" }, { "start": 1635.12, "end": 1639.9199999999998, "text": " for doing hyper parameter optimization and different ways you can define them. That's pretty" }, { "start": 1639.92, "end": 1646.48, "text": " much all you need. It even has the fancy decorator style, as you can see right here. Very pythonic." }, { "start": 1646.48, "end": 1652.4, "text": " Sayak Paul released a Keras tutorial on MobileViT. So this is a tutorial that will guide you" }, { "start": 1652.4, "end": 1659.44, "text": " through implementing mobile vision transformers in Keras, which is quite neat. So Keras is still as" }, { "start": 1659.44, "end": 1664.3200000000002, "text": " easy to use as ever. And this tutorial guides you through building the architecture from the ground" }, { "start": 1664.3200000000002, "end": 1669.6000000000001, "text": " up all the way to training it. At the end, you convert this model to TF Lite, so it actually" }, { "start": 1669.6, "end": 1675.36, "text": " runs on your mobile phone. Pretty cool. Omar Sanseviero tweets out: this demo is surprising;" }, { "start": 1675.36, "end": 1682.48, "text": " it combines ViT with GPT-2 to caption images with great results. And yes, actually, I was positively" }, { "start": 1682.48, "end": 1689.84, "text": " surprised. This is a hugging face module where you take an existing text model like GPT-2 and you" }, { "start": 1689.84, "end": 1695.84, "text": " take an existing computer vision model like the vision transformer and you combine them. So first," }, { "start": 1695.84, "end": 1701.04, "text": " you start out with sort of random cross attention weights that you fine tune just a little bit. And" }, { "start": 1701.04, "end": 1705.1999999999998, "text": " that can have really, really good results. Essentially, the model learns how to connect" }, { "start": 1705.1999999999998, "end": 1711.84, "text": " the latent representation from one model to the other model and back. So this is used right here" }, { "start": 1711.84, "end": 1719.12, "text": " to do an image captioning demo using GPT-2 and ViT, as I said, training for only about 7000 steps" }, { "start": 1719.12, "end": 1724.8799999999999, "text": " on the COCO data set. So you can see the result. This is a man swinging a tennis racket on a tennis" }, { "start": 1724.88, "end": 1731.92, "text": " court. That is very descriptive. That is just an unhumanly precise description of what's going" }, { "start": 1731.92, "end": 1737.8400000000001, "text": " on right here. We have a blue and white street sign sitting on top of a pole. Yes, that is" }, { "start": 1737.8400000000001, "end": 1746.72, "text": " also a very, very, very precise description. Person riding a skateboard on top of a cement floor. Well," }, { "start": 1746.72, "end": 1753.68, "text": " I guess that has some importance. Is it just me, or are AI models just bureaucrats? But yeah, pretty cool." 
}, { "start": 1753.68, "end": 1760.64, "text": " Bits and bytes is a library by Facebook research for eight bit optimizers and quantization routines." }, { "start": 1760.64, "end": 1767.28, "text": " So they have a bunch of optimizers such as Adam, AdamW, RMSprop, and so on that work on eight" }, { "start": 1767.28, "end": 1774.88, "text": " bits instead of 32. And that pretty reliably saves you 75% of the memory. Something like Adam has" }, { "start": 1774.88, "end": 1780.0800000000002, "text": " two or three different buffers that you need to keep track of for every parameter. So this can" }, { "start": 1780.08, "end": 1785.84, "text": " pretty quickly get pretty large, and saving three quarters of the memory definitely has value. Now," }, { "start": 1785.84, "end": 1790.6399999999999, "text": " I love that it's called Facebook research. But if you hover, it says meta research." }, { "start": 1792.6399999999999, "end": 1798.32, "text": " Is this gonna go well? I don't know. Also, is this supposed to be like a pretzel? Like," }, { "start": 1798.32, "end": 1803.36, "text": " is it supposed to be a flat logo? Or is it supposed to represent sort of a Pringles" }, { "start": 1803.36, "end": 1810.08, "text": " chip, you know, like the saddle in 3d? I don't know. Another helpful thing: a user friendly" }, { "start": 1810.08, "end": 1816.24, "text": " introduction to PAC-Bayes bounds by Pierre Alquier. Now, this is something I have no clue about. But I" }, { "start": 1816.24, "end": 1821.6, "text": " know it's important, and I have learned it at some point. If you're trying to get into PAC-Bayes" }, { "start": 1821.6, "end": 1828.56, "text": " bounds, this is, I believe, an over 60 page introduction to it that seems to be quite well written," }, { "start": 1828.56, "end": 1834.72, "text": " introducing you to all the important concepts in it. So if you're interested, give it a try. Again," }, { "start": 1834.72, "end": 1841.6799999999998, "text": " Face... meta, whatever, research releases xFormers: hackable and optimized transformers building blocks" }, { "start": 1841.6799999999998, "end": 1848.24, "text": " supporting a composable construction. So if you're into transformers, and if you would like to" }, { "start": 1848.24, "end": 1854.24, "text": " recombine them and try out different things inside of them, xFormers might be a great library for" }, { "start": 1854.24, "end": 1859.1200000000001, "text": " doing that. So you see all of these boxes here; essentially, this library makes it pretty easy to" }, { "start": 1859.1200000000001, "end": 1865.1200000000001, "text": " just rearrange them, connect them differently, and so on. SUPERB is a speech processing universal" }, { "start": 1865.1200000000001, "end": 1870.32, "text": " performance benchmark. This means that this benchmark has a bunch of speech tasks, so tasks" }, { "start": 1870.32, "end": 1875.6, "text": " in machine learning where the input is a piece of speech. But the goal here is that you have one" }, { "start": 1875.6, "end": 1881.92, "text": " pipeline that generates a representation. And then that representation can be fine tuned easily to" }, { "start": 1881.92, "end": 1886.8000000000002, "text": " all of these tasks. You're not supposed to solve all of the tasks from scratch; you're supposed to" }, { "start": 1886.8000000000002, "end": 1892.24, "text": " come up with that pipeline that generates the representation. 
If you work on speech, this might" }, { "start": 1892.24, "end": 1900.3200000000002, "text": " be very cool for you. I don't know how to say this. CCQA is a web scale question answering data set" }, { "start": 1900.3200000000002, "end": 1906, "text": " for model pre training. This is a large scale QA data set that I guess you can use for pre training" }, { "start": 1906, "end": 1912.96, "text": " question answering models. Excellent. Bagua is a library that claims to speed up pytorch. So they" }, { "start": 1912.96, "end": 1918, "text": " have a bunch of things in here for pytorch, for example, advanced distributed training algorithms," }, { "start": 1918, "end": 1924.16, "text": " performance auto tuning, generic fused optimizers, a load balanced data loader, and so on. So these" }, { "start": 1924.16, "end": 1930, "text": " seem to be specialized algorithms that, in certain cases where you want to use pytorch, can" }, { "start": 1930, "end": 1936.16, "text": " potentially deliver a lot of speed up. So if your problem doesn't fall into the standard bucket" }, { "start": 1936.16, "end": 1941.2, "text": " that the library is optimized for, maybe you can find something inside of Bagua that is going to" }, { "start": 1941.2, "end": 1951.68, "text": " help you. Bagua. Bagua. I don't know. Treex is a pytree module system for deep learning" }, { "start": 1951.68, "end": 1959.84, "text": " in JAX. So if you work with pytrees, this is it in JAX. Good job. Pytrees, for those of you who don't know," }, { "start": 1959.84, "end": 1965.76, "text": " are essentially trees made out of Python structures. So here, for example, a list which contains numbers" }, { "start": 1965.76, "end": 1972, "text": " and dicts which themselves contain tuples and so on. So a pytree works with these kinds of objects," }, { "start": 1972, "end": 1978.48, "text": " and now you can use them inside of JAX, and Treex helps you to do that in a more module oriented" }, { "start": 1978.48, "end": 1988.6399999999999, "text": " way, or a more object oriented way. Reuters writes: AI can see through you, CEOs' language under machine" }, { "start": 1988.64, "end": 1995.2800000000002, "text": " microscope. This article essentially says that things like NLP and speech sound analysis" }, { "start": 1995.2800000000002, "end": 2001.44, "text": " now go after CEOs' quarterly announcements; they analyze their voices, and they're trying to" }, { "start": 2001.44, "end": 2007.3600000000001, "text": " recognize when they're nervous and so on. And they actually have a point, in that they claim they can" }, { "start": 2007.3600000000001, "end": 2012.0800000000002, "text": " make better investment decisions if they do something like this. But as you know, as soon" }, { "start": 2012.0800000000002, "end": 2018.16, "text": " as you pay attention to anything like this, the CEOs are immediately going to adjust and train to" }, { "start": 2018.16, "end": 2024.24, "text": " trick essentially these AI systems. So they will use scripted speeches much more in order to not" }, { "start": 2024.24, "end": 2030.3200000000002, "text": " trip the NLP systems, they will train their voice acting more, I guess, or let some press secretary" }, { "start": 2030.3200000000002, "end": 2036, "text": " speak for them. 
All in all, it just seems that, you know, if you analyze a CEO's speech" }, { "start": 2036, "end": 2041.0400000000002, "text": " to detect when they're lying and when not, and then make investment decisions, you'll" }, { "start": 2041.0400000000002, "end": 2047.2, "text": " simply reinforce the sociopaths that have no problem with just straight out lying, that have" }, { "start": 2047.2, "end": 2053.44, "text": " no difference in their voice whatsoever. So if you want to create a world of even more sociopathic" }, { "start": 2053.44, "end": 2059.44, "text": " CEOs than there already are, I guess, then go right ahead, just do this; this is fine." }, { "start": 2059.44, "end": 2069.76, "text": " Excellent. Cadbury, the company, has apparently made this ad for Indian local businesses. And it's not" }, { "start": 2069.76, "end": 2075.84, "text": " just an ad: they've paid this Indian celebrity to record essentially one ad, and then they" }, { "start": 2075.84, "end": 2081.84, "text": " modified that ad using deep learning. So they have like three product categories, like shoes," }, { "start": 2081.84, "end": 2086.88, "text": " and I guess glasses and watches or something like this; they've recorded the different ads for the" }, { "start": 2086.88, "end": 2092.6400000000003, "text": " different products. But whenever the actor says the company name and the location of the company," }, { "start": 2092.6400000000003, "end": 2097.92, "text": " they use deep learning to change it to whatever the small business is. So essentially, this is a" }, { "start": 2097.92, "end": 2104.2400000000002, "text": " deepfake from the same actor to his own face, but making him say something else. So as a small business" }, { "start": 2104.24, "end": 2110.16, "text": " in India, you can go there and get your ad for your local business; the system will actually make" }, { "start": 2110.16, "end": 2115.3599999999997, "text": " sure that people that are in your area are advertised with your particular business, and" }, { "start": 2115.3599999999997, "end": 2120.08, "text": " people in different areas will see, I guess, the same ad but with the actor mentioning a different" }, { "start": 2120.08, "end": 2124.8799999999997, "text": " business that is in their area. Pretty cool. There's a form; if you're in India, you know," }, { "start": 2124.8799999999997, "end": 2132.3199999999997, "text": " check it out. And lastly, this shoe does not exist. This is a website; I guess it's analogous" }, { "start": 2132.32, "end": 2138.0800000000004, "text": " to this person does not exist, which was a famous website that trained StyleGAN2 on a face data" }, { "start": 2138.0800000000004, "end": 2144.32, "text": " set. So this is StyleGAN3, the recently released alias-free GAN, and it's trained on a" }, { "start": 2144.32, "end": 2149.44, "text": " shoe data set. So you can just refresh and look at shoes that the model has come up with. I guess" }, { "start": 2149.44, "end": 2154, "text": " these shoes all look like they exist; they might as well, who knows. But yeah, if you're looking" }, { "start": 2154, "end": 2159.6000000000004, "text": " for unique design ideas, check it out. 
I'm looking forward to many more things where StyleGAN3" }, { "start": 2159.6, "end": 2165.68, "text": " is applied. It seems the quality of these models and the ease of training them has come" }, { "start": 2165.68, "end": 2170.72, "text": " a long way, such that it is in fact possible to do this for many types of things where you have" }, { "start": 2170.72, "end": 2177.44, "text": " decent amounts of data, such as shoes, I guess. Alright, this was it for this week's ML news." }, { "start": 2177.44, "end": 2183.36, "text": " Thank you so much for being here. Don't forget to like and subscribe and let me know what you think" }, { "start": 2183.36, "end": 2190, "text": " in the comments. I value your opinions. Definitely. This is not just a trick to get the YouTube" }, { "start": 2190, "end": 2217.84, "text": " algorithm to promote the video more and all of that kind of stuff. See ya." } ]
hgSGHusDx7M
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Sparse is Enough in Scaling Transformers (aka Terraformer) | ML Research Paper Explained
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "explained", "neural networks", "ai", "artificial intelligence", "paper", "terraformer", "scaling transformers", "nli", "nlp", "natural language processing", "transformers memory", "deep learning memory", "fast transformer", "fast transformers", "attention", "attention mechanism", "attention is all you need", "bert", "gpt-3", "google research", "reversible layers", "reformer", "sparse attention", "sparse feedforward", "low-rank" ]
#scalingtransformers #terraformer #sparsity Transformers keep pushing the state of the art in language and other domains, mainly due to their ability to scale to ever more parameters. However, this scaling has made it prohibitively expensive to run a lot of inference requests against a Transformer, both in terms of compute and memory requirements. Scaling Transformers are a new kind of architecture that leverage sparsity in the Transformer blocks to massively speed up inference, and by including additional ideas from other architectures, they create the Terraformer, which is both fast, accurate, and consumes very little memory. OUTLINE: 0:00 - Intro & Overview 4:10 - Recap: Transformer stack 6:55 - Sparse Feedforward layer 19:20 - Sparse QKV Layer 43:55 - Terraformer architecture 55:05 - Experimental Results & Conclusion Paper: https://arxiv.org/abs/2111.12763 Code: https://github.com/google/trax/blob/master/trax/examples/Terraformer_from_scratch.ipynb Abstract: Large Transformer models yield impressive results on many tasks, but are expensive to train, or even fine-tune, and so slow at decoding that their use and study becomes out of reach. We address this problem by leveraging sparsity. We study sparse variants for all layers in the Transformer and propose Scaling Transformers, a family of next generation Transformer models that use sparse layers to scale efficiently and perform unbatched decoding much faster than the standard Transformer as we scale up the model size. Surprisingly, the sparse layers are enough to obtain the same perplexity as the standard Transformer with the same number of parameters. We also integrate with prior sparsity approaches to attention and enable fast inference on long sequences even with limited memory. This results in performance competitive to the state-of-the-art on long text summarization. Authors: Sebastian Jaszczur, Aakanksha Chowdhery, Afroz Mohiuddin, Łukasz Kaiser, Wojciech Gajewski, Henryk Michalewski, Jonni Kanerva Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there! Today we'll look at Sparse is Enough in Scaling Transformers, by researchers of the University of Warsaw, Google Research and OpenAI. On a high level, this paper proposes a set of building blocks to introduce sparsity into transformers, and this results in an architecture called the Scaling Transformer. In the second half of the paper they then introduce additional features to the Scaling Transformer to make it into the Terraformer. Both the Scaling Transformer and the Terraformer are really fast at what they call unbatched decoding. Decoding is essentially inference in such a transformer model, and unbatched means that they can do this for a single sample. Of course they're also faster in batched decoding, but there the effects are not as pronounced, and we're going to see why: the sparsity really shines through if you have single examples and can activate only very small parts of the network at the same time. The effect of all of this, at least for the Scaling Transformer, is right here. If you have a model with 800 million parameters (I guess today that would be called a small model), the baseline transformer has a decoding time of about 0.16 seconds, whereas if you add all the tricks of the Scaling Transformer you speed that up by a factor of about 2.6x. That's not that pronounced yet. The effect really shines if you go to bigger models: with a 17 billion parameter model, the baseline transformer takes about 3.6 seconds on this particular hardware to decode, while the Scaling Transformer with all the tricks activated takes about 0.18 seconds, giving a speedup of 20x. In different settings and different configurations these speedups can in fact get even higher; I've seen up to 37x, which is quite fast. And all this while the performance doesn't degrade, and that is surprising. They say: surprisingly, the sparse layers are enough to obtain the same perplexity as the standard transformer with the same number of parameters. So the models have the same number of parameters; it's just that they activate them sparsely when forward propagating, which is much faster and needs much less memory, and this results in the same perplexity when language modeling. Essentially it means the performance is on par. They also say that if they integrate with prior sparsity approaches (that's where they achieve the Terraformer), they can do fast inference on long sequences even with limited memory, resulting in performance competitive to the state of the art on long text summarization. That is another task where their model is state of the art, or equivalent to it, while being much more sparse, much more memory efficient and much faster. So we'll dive into the architecture. It's quite a mess, engineering tricks upon engineering tricks, and you have to wonder a little bit which trick came first and which trick necessitated which other trick. But we'll go through all the different pieces, and you'll see what this is all about and where the savings are made.
All right, if you enjoy content like this, don't hesitate to subscribe. I don't want to do what the other YouTubers do and show the graph... fine, I'll do it: here's the graph. So many of you are not subscribed. I mean, look at that. Excellent. All right, so the point with these sparsity gains is that if you implement them somewhere, that part is fine, but another part is still dense and is still the bottleneck, so you kind of have to introduce them everywhere. If we look at a classic transformer model (they specifically refer to the stack from Attention Is All You Need), what you have is basically two attention modules, attention one and attention two, and then a feed-forward layer. We're going to take care of all of those. Attention one is self-attention: if I have a sequence coming in, self-attention is attention between the elements of that sequence. The second attention block is encoder-decoder attention or something like this; the variants vary a little bit. So I would have a second stack: an input sequence on one side, and on the other the target sequence that I'm about to decode, maybe with some causal attention, who knows. The second layer of attention is specifically attention that goes to the encoder sequence, so it's attention between the encoder and the decoder. These two attention blocks mix the information of the different tokens together, and the feed-forward layer simply takes the embedding of a single token and feeds it through a feed-forward function; all tokens are handled by the same feed-forward function. The first thing this paper does is essentially eliminate the distinction between the self-attention and the attention between encoder and decoder, and I think that makes sense; it's also what a lot of other models do. Famously, BERT is an encoder-only model and GPT is a decoder-only model, and if I understand this paper correctly, they simply take the encodings from the source and prepend them to the target, or something like this. Safe to say, there are lots of things one could do right here, but the point is that we now need to replace each of those blocks with a sparse version: we need a sparse feed-forward layer and we also need a sparse attention block. How are we going to achieve this? First, we go to the sparse feed-forward layer. Remember, a feed-forward layer gets a sequence of embedding vectors that came out of the attention module, and it is essentially a matrix; in fact it's not one matrix, it is usually two matrices, where you blow up the dimension in the middle and there is a ReLU nonlinearity in between. And the point is what I already said: you feed every single token by itself through this function. Each token gets blown up to the larger dimension, passed through the ReLU, and projected back down to the input dimension, and every token goes through the same two matrices.
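To have a concrete reference point, here is what such a dense feed-forward block looks like in code. This is a minimal sketch; the dimensions and variable names are my own choices for illustration, not the paper's actual implementation:

```python
# A plain (dense) Transformer feed-forward block: every token goes through
# the same two matrices with a ReLU in between.
import torch

d_model, d_ff = 512, 2048
W1 = torch.randn(d_model, d_ff) / d_model**0.5   # blow up the dimension
W2 = torch.randn(d_ff, d_model) / d_ff**0.5      # project back down

def dense_ff(x):                      # x: (seq_len, d_model)
    return torch.relu(x @ W1) @ W2    # two dense matmuls per token

x = torch.randn(10, d_model)          # a sequence of 10 token embeddings
y = dense_ff(x)                       # (10, d_model)
```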
So in essence we have a vector, a token, and all the tokens are independent; somehow we need to make this computation sparse. Right now it's a dense multiplication twice: there are two matrices right here, and both multiplications are dense. So what do we do? The first thing they say is that, given that there is a ReLU nonlinearity in the middle, a lot of the intermediate values are going to end up being zero anyway, so it makes sense to do sparsity here. Now, I don't follow that entirely; I guess half of the values will end up being zero, yet the sparsity they impose goes much further, so I found that reasoning a bit shaky. But you don't really need a deep reason to introduce sparsity: if it works, it's good. Here is how it works, and this is what I found a bit confusing, because the figure essentially starts on the right and then goes to the left; I guess it's easier to start on the left. What we have is the input vector and the first matrix, which is of dimension d_model (the same as the input) by d_ff, the feed-forward dimension. Usually I would just multiply the two together, which gives me a vector of the feed-forward dimension that I then send through my ReLU. However, what I want to do is compartmentalize: I want only certain columns here to be activated. Essentially, I already accept that a lot of the entries in my result are going to be zero, because they go through a ReLU anyway. So let's say for all of these entries I accept up front that they're going to be zero; then I don't even need to calculate the inner product between the input vector and, say, this column right here, because after the ReLU it would become zero anyway, so who cares. I'm simply going to decide ahead of time that some of the entries are just going to end up being zero. They justify this by saying, well, there's a ReLU, so some of the things are going to be zero anyway; but here it's more like six out of eight are forced to zero. Now I only need to calculate the remaining columns, and that is the sparsity. Effectively, they subdivide the whole matrix into compartments (we'd have two different compartments right here), and in each compartment only one column can be activated at the same time. Only that one needs to be loaded from memory, only that one needs to be calculated as an inner product with the vector, and so the cells where an actual value is going to appear are sparse. And as you can see, the same thing applies to the second matrix; in fact I can reuse that same mask. If column number three was activated in the first matrix, then row number three of the second matrix needs to be activated; the other rows don't matter, because they're multiplied by zero anyway. A zero coming in, multiplied with a row: who cares what the result is, it's zero. So you don't even need to do it; you can simply load only the rows that you know are potentially non-zero.
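Here is a minimal sketch of that inference-time trick, under the assumption that we are already given the index of the surviving column in each block; all names and dimensions are my own illustration, not the paper's code:

```python
# Sketch of the inference-time saving: if we already know which column of each
# block survives the ReLU, we only gather those columns of W1 and the matching
# rows of W2. One token at a time (unbatched decoding).
import torch

d_model, d_ff, n_blocks = 512, 2048, 64
block = d_ff // n_blocks                       # columns per block (here 32)
W1 = torch.randn(d_model, d_ff)
W2 = torch.randn(d_ff, d_model)

def sparse_ff_inference(x, active):            # x: (d_model,), active: (n_blocks,)
    # flat indices of the single active column in each block
    idx = torch.arange(n_blocks) * block + active
    h = torch.relu(x @ W1[:, idx])             # (n_blocks,) instead of (d_ff,)
    return h @ W2[idx, :]                      # only the matching rows of W2

x = torch.randn(d_model)
active = torch.randint(block, (n_blocks,))     # stand-in for the controller's choice
y = sparse_ff_inference(x, active)             # (d_model,)
```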
So how do you decide which ones to load from memory? Essentially, you're simulating, you're already pre-committing to, a ReLU pattern. This is how you do it: you take your input vector, and you somehow come up with a vector of numbers between zero and one, say 0.1, 0.5, 0.3, 0.8, where every single entry outputs something like the probability that that particular element should be non-zero. Then you simply sample from that distribution and use a straight-through Gumbel-softmax in order to backpropagate. They use a lot of tricks right here; I think they mention that in the forward propagation they sometimes even need to pass the softmax output instead of the actual sample. So there's a lot of engineering to actually get this to work, but safe to say that's during training; during inference you sample exactly one non-zero element per module. So you have two different workflows: workflow one decides what needs to be non-zero, and given that information you can do the feed-forward layer in a sparse way. But that is all useless if the decision module itself is expensive. The controller is actually not sparse, but it is low-rank. They say: in order to figure out which things need to be non-zero, we technically don't need as much information as for actually propagating information. So we can have a low-rank module, essentially another feed-forward layer that again goes up to the feed-forward dimension, but we make it low-rank: instead of blowing up the dimension directly, we first shrink it down to a low dimension, as you can see right here, and then go up to the dimension of the feed-forward layer to decide which entries are one and which are zero. That's a pattern you're going to see often in this model: they make use of low rank combined with sparsity. And it's also a bit of a worry I have, because for some things a low-rank approximation is fine, but there is a reason we have dense multiplications everywhere, and that's because sometimes it's not: with a low-rank multiplication you essentially restrict your function space to a very, very small subspace. But it seems to work. The trade-off here is that the sparse part decreases the time and the memory, but the controller is new; you didn't have to compute this before, you could simply do the dense multiplication, so it is going to add to your compute. The question is whether you can make the controller sufficiently low-rank such that the gains from sparsity are larger than the time you have to invest to compute the mask in the first place. Again, for the particular problems they look at, it seems to be working, but these kinds of trade-offs are not guaranteed; it's not so clear to me that it would just work, it's not straightforward that the trade-off is positive. There might very well be problems where this rank is just too small to carry meaningful information; you'd need to make it bigger, and that would sort of vanish all the savings.
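And here is a sketch of what such a controller could look like, assuming a low-rank projection followed by a straight-through Gumbel-softmax during training and a plain argmax at inference; the shapes and names are again my assumptions for illustration:

```python
# Sketch of the controller that decides which column per block is non-zero.
import torch
import torch.nn.functional as F

d_model, n_blocks, block, d_lowrank = 512, 64, 32, 64
C1 = torch.randn(d_model, d_lowrank) / d_model**0.5             # shrink down...
C2 = torch.randn(d_lowrank, n_blocks * block) / d_lowrank**0.5  # ...then up

def controller(x, training=True):
    logits = (x @ C1 @ C2).view(n_blocks, block)  # one score per column
    if training:
        # hard one-hot forward pass, soft gradients (straight-through)
        return F.gumbel_softmax(logits, tau=1.0, hard=True)
    return F.one_hot(logits.argmax(-1), block).float()

x = torch.randn(d_model)
mask = controller(x)          # (n_blocks, block), exactly one 1 per block
```

The low-rank bottleneck d_lowrank is exactly where the trade-off lives: make it too small and the mask carries no information, make it too large and you've paid back the savings.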
Those savings are essentially linear in the sparsity, and the extra cost is essentially linear in the low-rank dimension, so there's the trade-off right there. Here is how you can express this layer: the original multiplication with the first matrix, through the ReLU, then element-wise times the controller output, and all of that then goes into the second multiplication; mathematically, Y = (ReLU(x W1) ⊙ Controller(x)) W2. That's not actually what you compute, because in this formula you would still do the full multiplications with the weight matrices, but it results in the same thing. All right, so that is the sparse feed-forward layer, and they do show that it decreases the decoding time quite a bit; interestingly, it also doesn't degrade performance too much. In fact, you can see right here that the blue line is the average of the baseline models, and if you don't go too sparse you still have quite good performance, quite close to the baseline. Only if you go more sparse does your perplexity start to suffer. I think that is one of the surprising things: there is a level of sparsity you can go to where you're considerably faster while your performance doesn't degrade yet. Then again, this can very well be because the problems they look at are not difficult enough to really make use of the capacity of the dense models. OK, so the feed-forward layer is done; now we go to the attention layer. The attention layer is again split up into two parts; in fact, they don't really deal with the attention mechanism itself. What they actually care about is how you get the queries, the keys and the values in the first place. In order to do attention, attention is something like: I have my queries and my keys, I take an outer product, normalize by something (the square root of the key dimension), softmax it, and multiply by my values; this is the attention formula. To make attention itself sparse or long-range or efficient, they rely on techniques from other papers; for example, they will later include the Reformer architecture, which makes attention itself sparse, efficient and low-memory. In this particular part of the paper, however, they care about how we even get the Q, K and V matrices. Usually you get Q by multiplying your input by a query weight matrix W_Q, you get K by multiplying your input by a key weight matrix W_K, and you get V likewise. All of these are dense multiplications, and obviously they now become the bottleneck once we have the sparse feed-forward layers: the dense layers in the attention blocks become the bottleneck.
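For reference, these are the three dense projections in question, sketched with made-up dimensions:

```python
# The dense Q/K/V projections that become the bottleneck once the
# feed-forward layer is sparse: three full d_model-by-d_model matmuls.
import torch

d_model = 512
Wq, Wk, Wv = [torch.randn(d_model, d_model) / d_model**0.5 for _ in range(3)]

x = torch.randn(10, d_model)       # a sequence of 10 token embeddings
Q, K, V = x @ Wq, x @ Wk, x @ Wv   # each (10, d_model)
```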
The question is: can we use the same trick here as we did before? And the answer, they say, is no, because the structure of the feed-forward layer was special: it had the ReLU in between, which is why they could argue that a lot of values naturally end up being zero, something you can exploit by just making a few more things zero. They don't want to do that here, because none of the entries of Q, K or V are necessarily going to be zero; these matrices don't have many zero entries, so it might not be justified to go sparse and just force values to zero. So what do we do instead? Instead we look at this diagram. On top you have what the current attention mechanism looks like: as I said, there is essentially a dense layer in front of each of these three matrices, which is exactly how you get each matrix in the first place. We're going to look at a thing they call a multiplicative layer, the 'mult' right here, and the multiplicative layer could potentially replace the dense layer. However, they go a step further, and they end up with the architecture at the bottom, where they have one multiplicative layer that is shared for all three matrices, and then one convolutional layer for each of the different matrices, which is going to make things even faster. They also drop this dense mechanism right here and simply add instead. Again, I'm pretty sure this works for these particular problems; maybe it works because the problems don't make use of all the parameters, or because the original models were just generously engineered and never actually needed all of these parameters. That could also be the case. So we have two things to look at inside the attention module: the multiplicative layer and the conv layers. These go together, and they also tie in with what's usually done in the attention mechanism, which is multi-head attention. I'll draw a diagram of an attention mechanism for about the 500th time: you have a sequence, and every element of the sequence emits what's called a query, which is a vector; every element in the sequence also emits a key, also a vector; and routing is done via inner-product overlap. So these two would probably be routed together, this one would probably be routed here; an element can also be routed to multiple others, but you route essentially via inner products. That's how you construct the query-key matrix that then multiplies the values. The idea behind multi-head attention, which is what's usually done, is: let's not have only one such block, let's have many such blocks in parallel. And instead of using the entire vectors that are output (Q is a matrix, and every row, or column, I don't exactly remember, is one of these vectors; let's just say every row, and if I'm wrong, just re-imagine it), we split each vector into, in this case, three parts. The first part becomes the query for the first attention head, the second part becomes the query for the second attention head, and the third becomes the query for yet another attention head. That's multi-head attention; same with the keys, same with the values.
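In code, that standard splitting is just a reshape; a tiny illustrative sketch:

```python
# Standard multi-head splitting: each d_model-sized vector is chopped into
# n_heads pieces, and each piece is the query of its own attention head.
import torch

d_model, n_heads = 512, 8
d_head = d_model // n_heads

Q = torch.randn(10, d_model)           # queries for 10 tokens
Q_heads = Q.view(10, n_heads, d_head)  # (tokens, heads, per-head dim)
```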
So now we're prepared. What we want to do is take a token and, say we want to produce the queries, produce from this one token not just one but number-of-heads many query vectors, using some sort of linear function. Here's how they do it. They have this weight matrix D, which has the same dimension as the input along one axis and as many rows as we have attention heads. What we do is element-wise multiply, and I would add: with broadcasting. If you've used NumPy or TensorFlow or PyTorch, you know the broadcasting operation: the token x, which has a singleton dimension here, is broadcast against this matrix, so what comes out is just differently scaled versions of x in each row. Each row is essentially x, element-wise scaled a little differently (let's say double-scaled x for the bottom row). That already gives one vector per attention head. Now, since an element-wise multiply alone is probably not going to get us very far, we also multiply this by an actual matrix; but instead of a d_model-by-d_model matrix, which would be expensive, we go into a low-rank regime and say: we have this number m, which is a reduction of the dimensionality, so this is a d_model-by-m matrix E. Out of that comes the query vector for the first attention head, and likewise the query vector for the second attention head (head, I meant to say head). They don't just choose m arbitrarily; in fact they choose s times m equal to d_model, that is their formula. So if they split into s different heads (in the toy figure, s is 2 and m is 3), that choice has a very particular reason: they have a theorem which says that with this particular construction, element-wise multiply followed by the multiplication with the weight matrix E, the layer can represent an arbitrary permutation. The minimum thing we should be able to do, they say, is to take x and permute it, to place every single element of x wherever we want in the output; every part of x should be able to be forward-propagated to any of the attention heads. And the theorem states that if you construct the layer like this, any permutation is within the realm of possibilities for some weight matrices D and E. So that's their justification: we can represent all permutations, so it can't be too bad.
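Here is a minimal sketch of that multiplicative layer as I understand it; the names D and E follow the paper, everything else (initialization, exact shapes) is my assumption:

```python
# Multiplicative layer: broadcast element-wise multiply with D (one scaling
# vector per head), then a shared low-rank projection E down to dimension m,
# with s * m == d_model.
import torch

d_model, s = 512, 8          # s attention heads / modules
m = d_model // s             # low-rank dimension, so s * m = d_model

D = torch.randn(s, d_model)                 # per-head scaling vectors
E = torch.randn(d_model, m) / d_model**0.5  # shared low-rank projection

def multiplicative(x):       # x: (d_model,)
    # y[h, j] = sum_i x[i] * D[h, i] * E[i, j]
    return (x * D) @ E       # broadcast to (s, d_model), then project to (s, m)

x = torch.randn(d_model)
y = multiplicative(x)        # one m-dimensional vector per head
```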
I also found a little bit of another way of seeing this. If you look at the element-wise multiply and the formula, you can clearly see that this is in fact a matrix multiplication again: it is D times diag(x) times E, where diag(x) is a square matrix that has zeros everywhere except x on the diagonal; D is of one shape, E is of the other, and D and E are fixed (well, learned) matrices. What this multiplicative layer is doing, essentially, is defining outputs: one axis is the number of outputs and the other is the dimensionality of each output. What you're able to do is, in some higher-dimensional space, manipulate the coordinate system, scale it a little bit, well, a little bit arbitrarily, but you cannot mix the individual dimensions freely. In that high-dimensional space, for a given mixing of dimensions (that's what the matrices D and E do), for given linear projections from the low-dimensional to the high-dimensional space, you can manipulate the coordinate system. So if you learn this, you need to be able to find matrices D and E such that for arbitrary samples the manipulation of the coordinate system makes sense. It's a little bit like doing a PCA on a data set, except it's learned during training. So again, this is quite a trade-off against an actual dense layer, but it's interesting to see that it works. And again, this is only conceptual: if you were to actually multiply out these full matrices, you would lose all the efficiency benefits. You can also see, a little bit, that the trick here isn't necessarily sparsity but mostly low rank; this is mostly a low-rank function. OK, so we have the multiplicative layer, and we end up with the queries, the keys and the values for each attention head. Now they essentially say: we could do this separately for each of the three things, or we simply do it once, which still gives us the permutation property, and then we do something even cheaper to get the individual matrices. The trade-off is that before, every matrix could have its own permutation, so Q could have different permutations than K or V, different functions; here we resort to one function, one shared mixing or shuffling of the dimensions, followed by something even cheaper, which is the convolutional module. The convolutional module is also fairly simple to see. Take the output Y of the multiplicative layer: per token you have s vectors, one per attention head (they state the dimensionality somewhere); we just looked at one token, but the next token also has its own such vectors, and so on. So what you get is a tensor that has the sequence length L, the number of heads s (or number of modules), and m, that low-rank dimensionality that the keys, queries and values live in. They simply treat this as an image and run a convolution across it: the filter spans f positions in the s dimension and f positions in the L dimension and is m deep, and you have m such filters. So you have an L-by-s-by-m tensor and you transform it into another L-by-s-by-m tensor; essentially, you can just think of this as a regular convolutional layer.
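A sketch of how such a convolution could look, treating the multiplicative layer's output as an image with m channels; the kernel size and the use of a plain Conv2d here are my assumptions for illustration:

```python
# Convolve over the length and head axes so that neighbouring tokens and
# neighbouring heads can exchange information before attention.
import torch
import torch.nn as nn

L_seq, s, m, f = 16, 8, 64, 3
conv = nn.Conv2d(in_channels=m, out_channels=m,
                 kernel_size=f, padding=f // 2)

y = torch.randn(L_seq, s, m)                  # output of the multiplicative layer
img = y.permute(2, 0, 1).unsqueeze(0)         # (1, m, L_seq, s) "image"
out = conv(img).squeeze(0).permute(1, 2, 0)   # back to (L_seq, s, m)
```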
So what does the convolution go over? Remember that the multiplicative layer works on a single token: it is able to shuffle around the token's dimensions a little bit, to permute them in the best case, and in all other cases it essentially manipulates the scaling in a high-dimensional space. With the convolutional layer, we can bridge a little bit of information between the tokens even before we go into the attention module. Given that the convolution runs across the L and the s dimensions, it means that along the s dimension information can be passed between neighboring attention heads, and along the L dimension information can be passed between neighboring tokens in the sequence. That potentially gives some sort of positionality to the tokens, because now there's a notion of being close together, and it maybe also gives a little bit of meaning to the different attention heads: up until this point the heads have just been kind of unordered, independent things, and now they hang together a little bit. All of this is one of the reasons why the exact conclusions of this paper are going to be hard to assess, even with ablations: at the same time as they introduce efficiency, they also introduce entirely new ways of doing things, new paths along which information can be passed, and so it's very hard to pin down exactly where things go right and wrong. So this was the sparse, or rather low-dimensional, attention module: first one of these multiplicative layers, which is an element-wise multiply followed by a matrix multiplication to a lower dimension, and then the convolutional layers. They call this whole thing a MultConv. If they combine all of this together, you can see in the results that the blue shaded line is the average of the baselines' perplexity, and lower is presumably better. Up to some noise, all of these variants are fairly consistent: they follow the trajectory of the baselines quite neatly, and some even come out a bit lower. One thing right here, though, I'm not sure about; there is possibly a confusion. F is the filter size, and S is the sparsity in the multiplicative layer, essentially how many attention heads it splits things into. In the table there's a conv-only row and a mult-only row, but the F appears with the mult, which confuses me, because F is the filter size, so technically it should be with the conv, I guess. If the authors are watching, please leave a comment if I'm wrong right here; otherwise, I'm confused. In any case, they show that the baseline transformers don't do particularly better on these NLP tasks, and sometimes even do worse, though everything is pretty much within a standard deviation of these scaling transformers. The architecture we've discussed right now is the Scaling Transformer. The last thing to do would be to add a sparse loss layer: they can replace the dense output layer with a multiplicative layer, similar to the previous sections. This speeds up the decoding time, they say, but may degrade perplexity; results are in the appendix.
So the loss layer might be the last refuge of really dense computation. But remember: due to the fact that in the feed-forward layers we sample from this distribution to really be sparse (in fact, we might do argmax during inference), that's where the speedup comes from. During training we actually have to forward-propagate the softmax from time to time so that the training works, and that means the benefits of sparsity are lost there: if we don't hard-sample ones and zeros but soft-sample them, then all the rows are still activated and we need to track everything. And the same goes, I think, a little bit for batch inference: if I do batch inference, even if I hard-sample, different samples are going to have different activation patterns, and with enough samples all the units are going to be active somewhere. Therefore I probably need to load the entire matrix from memory and do the multiplication with the entire matrix, possibly not for all the vectors; but then again, something like a GPU probably wouldn't care that some stuff is zero, it's going to be as fast to just do all the things at the same time. So that might be a hardware limitation. OK, so that was the Scaling Transformer, and now we're going to supercharge the Scaling Transformer, which makes it into a Terraformer. I don't think there's any relation to the tool Terraform; no, we're just running out of names for -formers, so I guess this was the last refuge. What they do is essentially use the architecture pieces from the Reformer (yes, the Reformer; I keep confusing it with the Performer). They do two things. First, they build an architecture for long sequences: while integrating a sparse attention layer into a scaling transformer, they noticed the architecture is suboptimal; separating decoder self-attention and encoder-decoder attention is not necessary anymore from the perspective of efficiency. So they remove the encoder-decoder attention (I said that at the very beginning) and just concatenate the encoder representation before the decoder tokens. Second, there is the locality-sensitive hashing attention from the Reformer; I've done a video on this. It replaces full attention: if you have really long sequences, you would need to compute inner products between all pairs of tokens, and this is cumbersome. There are various techniques to speed that up, and one is LSH, locality-sensitive hashing, where you essentially create hash buckets, you hash all the vectors, and you look for hash collisions. A hash collision indicates where you want to actually calculate and check an inner product, and everything that's not a hash collision you don't need to check. Locality-sensitive hashing has been a long-standing technique to make inner-product search in high dimensions, looking for the closest inner product among very many elements, very fast; so they borrow that from there.
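To give a flavor of the idea, here is a toy sketch of locality-sensitive hashing with random hyperplanes. Note that this is a classic LSH variant chosen for illustration; the Reformer's actual LSH attention uses a slightly different spherical hashing scheme plus sorting, chunking and multiple hash rounds, so this only shows the bucketing:

```python
# Random-hyperplane LSH: vectors with similar directions tend to land in the
# same bucket, so attention can be restricted to within-bucket pairs.
import torch

d_model, n_planes = 64, 4                 # 2**4 = 16 buckets
planes = torch.randn(d_model, n_planes)   # random hyperplane normals

def hash_bucket(v):                       # v: (n_vectors, d_model)
    bits = (v @ planes > 0).long()        # sign pattern per hyperplane
    powers = 2 ** torch.arange(n_planes)
    return (bits * powers).sum(-1)        # integer bucket id per vector

keys = torch.randn(100, d_model)
buckets = hash_bucket(keys)               # nearby vectors tend to collide
```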
They also include the recurrent blocks; no, that comes later, first it's the reversibility. All of this is just so similar. Reversibility is apparently also from Reformer, and what reversibility means is kind of this architecture right here. So again we have two attention blocks and then one feed-forward block, where the second attention block replaces the encoder-decoder attention. Reversible means that instead of having one strand, one flow of forward-propagating information, we have two. So there's input one and input two, we have two information flows going forward, and every function that's applied is applied to one flow and added to the other flow. This one right here is simply forward-propagated, essentially as a residual connection, and then x2 is taken. So the flow of the actual function would be this right here: you can see this is the flow that hits all the functions, and you can also see that for each of the functions we always have a signal that travels without being touched by that function, this signal right here and this signal right here. And that makes the blocks reversible, which means that I don't have to keep activations around. This limits the capabilities a lot. An example of something non-reversible would be this here, because unless I apply a linear function that goes from exactly the same dimension to the same dimension and is non-degenerate, I cannot possibly reconstruct the input, the signal X, from the output Y, not even for a single one of those blocks. The reversibility changes that: it means I can always reconstruct the intermediate activations from these signals, and therefore I don't need to store them. In a normal neural network, as I forward-propagate, I need to store a lot of intermediate results, like right here and right here, because during backpropagation I need those values, otherwise I couldn't calculate the gradient; so I need to store the activations somewhere. Reversible blocks don't have this problem: they don't need to store anything, because they're reversible. And they're made reversible not by changing the individual modules, but simply by this construction of two strands of information with the modules applied between the two. It's a pretty smart architecture, but one has to say it very often comes with significant trade-offs, because keeping everything reversible means there are a lot of functions you cannot express anymore. Again, I think for the problems they particularly look at here it might work; it might not work for all problems. That's a bit of a general theme in this paper: we're going to have to test, for every new task we tackle, every new challenge, every new modality, whether these things still hold. A sketch of the reversible construction follows below.
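Here is a minimal sketch of the two-strand construction (RevNet/Reformer style). The helper names are mine; f and g stand in for the attention and feed-forward sub-layers:

```python
import torch

def reversible_forward(x1, x2, f, g):
    # each function is applied to one strand and added to the other
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def reversible_backward(y1, y2, f, g):
    # the inputs can be reconstructed exactly from the outputs,
    # so no intermediate activations have to be stored for backprop
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2
```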
The last thing they build in is recurrence, and they say it's for generalization. If I understand it correctly, they use simple recurrent units, not something like an LSTM, because they say that would be too slow. Simple recurrent units are still fairly complicated; I've looked them up, I didn't know what they were, and it's not just a plain recurrent layer, it actually has gates and so on, a bit like GRU or LSTM cells. If I understand correctly, this goes in between: as I said before, in the feed-forward layer every single token goes through independently, and this introduces a recurrent connection between those tokens. Did I understand it correctly? "We also add a recurrence to the feed-forward block of Terraformer. Recurrent layers allow information to propagate in time, even in a single decoder block." Okay, I think I understood that correctly. So within the feed-forward block there is now a recurrent connection between the different tokens: every token still goes independently through the feed-forward function, but we introduce a dependence, a function that goes from the first token to the second, to the third, and so on, a small recurrent neural network (see the sketch after this passage). And again, one can only speculate why they have this in here. They say the gains on C4, which is their language modeling task, are minimal, and the biggest benefits show up on toy tasks where you need to copy decimal digits: you can train on 128 digits and then test on 256, so over two times longer than seen in training. So they really make the point that it's for generalization, though it is a very, very odd addition. I could follow them up until here: okay, you go for long sequences, that's cool, it's cool if your model can also handle long sequences, fine. Then memory efficiency: okay, given that it's all sparse and low-rank and so on, you also might want to use less memory, cool. But then recurrence? This is quite an odd choice, I feel, and it could be that it simply didn't work without it. They also say that the Terraformer, on tasks like summarization, beats or matches the state of the art, matching much, much larger models and so on. I can imagine that their numbers were slightly worse than the baselines and they were just looking for something to add to pump up those numbers, and this worked. If that's the case, and that's a big if, it's dangerous, because it might work for these particular problems and not for others. If instead this was really just an idea they had and they said, well, it'd be cool if that's in there, then, you know, good; I'm willing to accept that as well.
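For reference, here is a rough sketch of a simple recurrent unit, following my understanding of the SRU formulation by Lei et al.; the parameter names and the exact placement inside the feed-forward block are my assumptions:

```python
import torch

def sru(x, W, Wf, Wr, bf, br):
    # x: (L, d). All matrix multiplies are per-token and parallelizable;
    # only the element-wise cell-state update runs sequentially, which is
    # why an SRU is much cheaper than an LSTM.
    xt = x @ W                          # candidate states, (L, d)
    f = torch.sigmoid(x @ Wf + bf)      # forget gates
    r = torch.sigmoid(x @ Wr + br)      # highway (output) gates
    c = torch.zeros_like(x[0])
    out = []
    for t in range(x.size(0)):
        c = f[t] * c + (1 - f[t]) * xt[t]          # recurrent cell update
        out.append(r[t] * c + (1 - r[t]) * x[t])   # highway connection
    return torch.stack(out)              # (L, d)
```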
All right, so that was the Terraformer. And here you see the result: the Terraformer achieves over a 37x speed-up on a considerably large model, and for this large model it requires less than 100 milliseconds per token of decoding time, while not degrading too much in performance. That is, I think, quite an achievement, even if it's only for particular types of tasks. It's a bit of a shame that the speed-ups are only this dramatic for the really huge models, but I guess that makes sense, because these effects compound. So for you and me, with our regular old computers and laptops, it maybe won't make that much of a difference in terms of speed; it might make a difference in terms of memory, because of the reversibility. But if you want to work with larger models, don't necessarily have the compute, and mostly do inference, this might be something for you. They specifically say that not everything has been tried yet: they still don't do quantization, which could yet deliver another speed-up, and there are also lots of things to do to actually speed up training; maybe there's a way to get around this Gumbel-softmax need to forward-propagate the true softmax from time to time, and so on. So: lots of engineering, lots of choices that are interleaved, and it's very hard to say exactly where the gain comes from, but an undeniable gain has been made, and in huge form, and that's cool. All right, tell me what you think. I'll see you next time. Bye bye.
[ { "start": 0, "end": 5.4, "text": " Hello there! Today we'll look at Sparse Is Enough in Scaling Transformers by" }, { "start": 5.4, "end": 10.76, "text": " researchers of the University of Warsaw, Google Research and OpenAI. This paper" }, { "start": 10.76, "end": 15.92, "text": " on a high level proposes a set of building blocks to introduce sparsity" }, { "start": 15.92, "end": 20.36, "text": " into transformers and this results in an architecture called the Scaling" }, { "start": 20.36, "end": 24.72, "text": " Transformer. In the second half of the paper they then introduce additional" }, { "start": 24.72, "end": 30.64, "text": " features to the Scaling Transformer to make it into the Terraformer. Both the" }, { "start": 30.64, "end": 34.28, "text": " Scaling Transformer and the Terraformer they are really fast at what they call" }, { "start": 34.28, "end": 39.28, "text": " unbatched decoding. Decoding is essentially inference in such a" }, { "start": 39.28, "end": 43.739999999999995, "text": " transformer model and unbatched means that they can do this for a single" }, { "start": 43.739999999999995, "end": 48.08, "text": " sample. Of course they're also faster in batched decoding but I guess the" }, { "start": 48.08, "end": 53.480000000000004, "text": " effects are not as pronounced and we're gonna see why because the sparsity" }, { "start": 53.48, "end": 58.64, "text": " really shines through if you have single examples and can only activate very" }, { "start": 58.64, "end": 64.88, "text": " small parts of the network at the same time. So the effect of all of this at" }, { "start": 64.88, "end": 70.47999999999999, "text": " least for the Scaling Transformer is right here. If you have a model with 800" }, { "start": 70.47999999999999, "end": 75.36, "text": " million parameters, I guess today that be called a small model, the baseline" }, { "start": 75.36, "end": 80.92, "text": " transformer has a decoding time of about 0.16 seconds whereas if you add all the" }, { "start": 80.92, "end": 85.76, "text": " tricks to the Scaling Transformer you speed that up by a factor of about 2.6x." }, { "start": 85.76, "end": 90.16, "text": " That's not that pronounced yet. Yet the effect really shines if you go to" }, { "start": 90.16, "end": 96.08, "text": " bigger models so if you go to a 17 billion parameter models the baseline" }, { "start": 96.08, "end": 102.44, "text": " transformer takes about 3.6 seconds on this particular hardware to decode. The" }, { "start": 102.44, "end": 107.16, "text": " Terra, no sorry the Scaling Transformer with all the tricks activated takes" }, { "start": 107.16, "end": 115.44, "text": " about 0.18 seconds giving a speed up of 20x and so in different settings on" }, { "start": 115.44, "end": 120.64, "text": " different configurations these speed ups can in fact get even higher. I've seen" }, { "start": 120.64, "end": 126.96, "text": " up to like 37x or something like this which is quite quite fast and this all" }, { "start": 126.96, "end": 136.4, "text": " while the performance doesn't degrade and that is surprising. So they say" }, { "start": 136.4, "end": 140.68, "text": " surprisingly the sparse layers are enough to obtain the same perplexity as" }, { "start": 140.68, "end": 145.68, "text": " the standard transformer with the same number of parameters. 
So they have the" }, { "start": 145.68, "end": 151.36, "text": " same number of parameters it's just that they activate them sparsely when" }, { "start": 151.36, "end": 157.04000000000002, "text": " forward propagating which is much faster and needs much less memory and this" }, { "start": 157.04000000000002, "end": 161.84, "text": " results in the same perplexity when language modeling. So essentially it means" }, { "start": 161.84, "end": 170.6, "text": " that the performance is on par and also they say if they integrate with" }, { "start": 170.6, "end": 177.48000000000002, "text": " prior sparsity approaches that's where they achieve the Terraformer they can do" }, { "start": 177.48000000000002, "end": 181.92000000000002, "text": " fast inference on long sequence even with limited memory this results in" }, { "start": 181.92000000000002, "end": 185.08, "text": " performance competitive to the state-of-the-art on long text" }, { "start": 185.08, "end": 190.92000000000002, "text": " summarization which is another thing where their model is state-of-the-art or" }, { "start": 190.92, "end": 196.95999999999998, "text": " equivalent to state-of-the-art while being much more sparse much more memory" }, { "start": 196.95999999999998, "end": 202.16, "text": " efficient and much faster. So yeah we'll dive into this the architecture it's" }, { "start": 202.16, "end": 207.56, "text": " quite it's quite a mess like there are engineering tricks engineering tricks" }, { "start": 207.56, "end": 215.11999999999998, "text": " engineering tricks and you know the you have to wonder a little bit you know" }, { "start": 215.11999999999998, "end": 219.35999999999999, "text": " what came first like which trick came first and which trick necessitated which" }, { "start": 219.36, "end": 223.36, "text": " other trick but we'll go through the architecture through all the different" }, { "start": 223.36, "end": 228.56, "text": " pieces and you'll see what this is all about and where the savings are done." 
}, { "start": 228.56, "end": 233.92000000000002, "text": " All right if you enjoy content like this you know don't hesitate to subscribe I" }, { "start": 233.92000000000002, "end": 238.60000000000002, "text": " don't want to do the other youtubers show the graph I'll do like I'll do this" }, { "start": 238.60000000000002, "end": 244.16000000000003, "text": " here's the graph here's the graph so many of you are not subscribed I mean" }, { "start": 244.16, "end": 251.44, "text": " look at that excellent all right so the point with the these sparsity gains is" }, { "start": 251.44, "end": 259.36, "text": " that if you implement them somewhere then that part is fine but then another" }, { "start": 259.36, "end": 263.96, "text": " part is still dense and is still the bottleneck so you kind of have to do" }, { "start": 263.96, "end": 269.84, "text": " introduce them everywhere so if we look at a classic transformer model and they" }, { "start": 269.84, "end": 276.32, "text": " specifically I think refer to like the stack of attention is all you need and" }, { "start": 276.32, "end": 282.96, "text": " so on so what they have basically is they have two attention modules so" }, { "start": 282.96, "end": 287.88, "text": " there's attention one I think there's attention two and then there is this" }, { "start": 287.88, "end": 293.47999999999996, "text": " feed forward layer okay so we're going to take care of all of those right here" }, { "start": 293.48, "end": 299.96000000000004, "text": " attention one is called self attention so if I have a sequence coming in here" }, { "start": 299.96000000000004, "end": 305.56, "text": " the self attention would be essentially attention in between the elements of the" }, { "start": 305.56, "end": 311.52000000000004, "text": " sequence the second attention block is I think encoder decoder attention or" }, { "start": 311.52000000000004, "end": 316.28000000000003, "text": " something like this the variants vary a little bit right here but I would have" }, { "start": 316.28000000000003, "end": 322, "text": " sort of a second stack of this right here I would have a input sequence right" }, { "start": 322, "end": 325.28, "text": " here so this would be the input this would be the target sequence that I'm" }, { "start": 325.28, "end": 331.32, "text": " about to decode maybe this has some causal attention who knows the second" }, { "start": 331.32, "end": 336.6, "text": " layer of attention here is specifically attention that goes to the encoder" }, { "start": 336.6, "end": 342.04, "text": " sequence right here so it's it's attention in between the encoder and the" }, { "start": 342.04, "end": 347.44, "text": " decoder and the feed forward so this essentially these two mix all the" }, { "start": 347.44, "end": 350.84, "text": " information of the different tokens together and the feed forward layer" }, { "start": 350.84, "end": 356.64, "text": " simply takes a single embedding of a single single token and feeds it through" }, { "start": 356.64, "end": 360.71999999999997, "text": " a feed forward function so all the tokens are handled by the same feed" }, { "start": 360.71999999999997, "end": 365.59999999999997, "text": " forward function the first thing this paper does is it essentially eliminates" }, { "start": 365.59999999999997, "end": 371.08, "text": " the distinguishing between the self attention and the attention between" }, { "start": 371.08, "end": 376.67999999999995, "text": " encoder and decoder and I think that makes sense that's also a lot what a lot" }, { 
"start": 376.68, "end": 383.44, "text": " of other models do so famously BERT is an encoder only model GPT is a decoder" }, { "start": 383.44, "end": 388.08, "text": " only model and if I understand them correctly there as well they're simply" }, { "start": 388.08, "end": 394.76, "text": " taking the encodings from the source and then just prepending them to the target" }, { "start": 394.76, "end": 398.78000000000003, "text": " or something like this you know safe to say there are lots of things that one" }, { "start": 398.78000000000003, "end": 406.52, "text": " could do right here but what I wanted to say is that we now need to replace each" }, { "start": 406.52, "end": 411.24, "text": " of those things with a sparse version so we need a sparse feed forward and we" }, { "start": 411.24, "end": 416.56, "text": " also need a sparse attention block so how we're gonna achieve this first we're" }, { "start": 416.56, "end": 422.44, "text": " going to the sparse feed forward layer remember a feed forward layer is I have" }, { "start": 422.44, "end": 428.08, "text": " a sequence of embedding so that's these are all vectors and these are all" }, { "start": 428.08, "end": 432.28, "text": " embedding vectors this is a sequence of embedding vectors that came out of the" }, { "start": 432.28, "end": 439.35999999999996, "text": " attention module right and the feed forward layer essentially is a matrix" }, { "start": 439.35999999999996, "end": 446.4, "text": " and I simply pass each of these through a matrix in fact it's not one matrix I" }, { "start": 446.4, "end": 454.64, "text": " think it is usually two matrices one matrix that sort of well that's not how" }, { "start": 454.64, "end": 462.91999999999996, "text": " you draw a matrix like this and then like this so you kind of blow up the" }, { "start": 462.91999999999996, "end": 468.76, "text": " dimension in the middle and then here there is a ReLU non-linearity in between" }, { "start": 468.76, "end": 475.84, "text": " and the point is what I already said you'd feed every single token by itself" }, { "start": 475.84, "end": 479.88, "text": " through this function so this becomes like a large token then there's a ReLU" }, { "start": 479.88, "end": 485.92, "text": " and then this would become sort of a token of the input dimension again and" }, { "start": 485.92, "end": 491.52, "text": " you feed this token through as well individually which give you this one and" }, { "start": 491.52, "end": 497.92, "text": " so on so in essence we have a vector right a token all the tokens are" }, { "start": 497.92, "end": 503.04, "text": " independent we have a token and somehow we need to make this sparse right now" }, { "start": 503.04, "end": 508.84, "text": " it's a dense multiplication twice so there's two matrices right here and if" }, { "start": 508.84, "end": 514.3199999999999, "text": " dense multiplication right so what do we do the first thing they say is that well" }, { "start": 514.3199999999999, "end": 520.04, "text": " given that there is a ReLU non-linearity right here right there's a ReLU a lot of" }, { "start": 520.04, "end": 525.4399999999999, "text": " the things here essentially are gonna end up being zero right so it makes sense" }, { "start": 525.4399999999999, "end": 533.52, "text": " it makes sense to do sparsity here now I don't I don't follow that entirely you" }, { "start": 533.52, "end": 539.76, "text": " know I guess half of the stuff will end up being zero yet the sparsity goes much" }, { "start": 539.76, "end": 547.14, "text": " 
further so but maybe maybe they maybe they justify why they can set some" }, { "start": 547.14, "end": 551.4, "text": " things to zero not entirely sure but I found that reasoning a bit shaky but" }, { "start": 551.4, "end": 555.76, "text": " here is essentially you know you don't need in a reason to introduce sparsity" }, { "start": 555.76, "end": 562.6999999999999, "text": " if it works it's good so here's how it works first and this is what I found a" }, { "start": 562.7, "end": 568.12, "text": " bit confusing so it essentially starts on the right then it goes to the left but" }, { "start": 568.12, "end": 573.72, "text": " it I guess it's easier to start on the left so what we want to do I see here is" }, { "start": 573.72, "end": 579.12, "text": " that input vector right and here is that first matrix so the first matrix is of" }, { "start": 579.12, "end": 585.44, "text": " dimension D model which is the same as this dimension and DFF which is the" }, { "start": 585.44, "end": 592.6800000000001, "text": " feed-forward dimension and usually I just multiply that together which would" }, { "start": 592.68, "end": 598.4399999999999, "text": " give me a vector in the dimension of the feed-forward layer right which I then" }, { "start": 598.4399999999999, "end": 606.64, "text": " send through my relu however however what I want to do I want to compartmentalize" }, { "start": 606.64, "end": 614.88, "text": " I want to only certain columns here to be activated right so essentially say I" }, { "start": 614.88, "end": 619.64, "text": " already accept that a lot of my things in my result are going to be zero" }, { "start": 619.64, "end": 624.08, "text": " because you know they will go to a relu anyway so I'm going to accept that some" }, { "start": 624.08, "end": 628.92, "text": " of the things will already be zero so let's say all of these I already accept" }, { "start": 628.92, "end": 633.68, "text": " they're gonna be zero I don't even need to calculate the matrix multiplication" }, { "start": 633.68, "end": 638.6, "text": " between the vector here and let's say this column right here don't need to do" }, { "start": 638.6, "end": 647.12, "text": " it because after that they will become zero anyway so who cares so I'm simply" }, { "start": 647.12, "end": 651.2, "text": " going to decide that some of the things are just going to end up being zero and" }, { "start": 651.2, "end": 655.48, "text": " they justify this by saying well there's a relu so some of the things are going" }, { "start": 655.48, "end": 660.84, "text": " to be zero but more more here is like you know six out of eight are going to" }, { "start": 660.84, "end": 668.24, "text": " be zero and now I only need to calculate the remaining columns and that is the" }, { "start": 668.24, "end": 675.24, "text": " sparsity right here effectively they subdivide all of the they subdivide the" }, { "start": 675.24, "end": 679.28, "text": " whole matrix into these compartments so we'd have two different compartments" }, { "start": 679.28, "end": 686.04, "text": " right here and of in each compartment only one column can be activated at the" }, { "start": 686.04, "end": 692.76, "text": " same time right I think yeah yeah there's one one of them it's decided on" }, { "start": 692.76, "end": 696.64, "text": " one of them one of them can be activated and only that one needs to be loaded" }, { "start": 696.64, "end": 701.6800000000001, "text": " from memory only that one needs to be calculated as an inner product with the" }, { "start": 701.68, 
"end": 707.7199999999999, "text": " vector and so the cells here where an actual value is going to be are sparse" }, { "start": 707.7199999999999, "end": 712.92, "text": " now the question is how do we decide which ones we're going to activate by" }, { "start": 712.92, "end": 717.3599999999999, "text": " the way if you can see then for the second matrix you know the same thing" }, { "start": 717.3599999999999, "end": 724.4399999999999, "text": " applies in fact I can use that same mask from here and I can again say well in" }, { "start": 724.4399999999999, "end": 730.68, "text": " the first module column number three was activated here right so row number three" }, { "start": 730.68, "end": 735.7199999999999, "text": " of this matrix needs to be activated the other ones don't matter because they're" }, { "start": 735.7199999999999, "end": 741, "text": " zero anyway so there's a zero coming in right here being multiplied with this" }, { "start": 741, "end": 746.52, "text": " row you know who cares what the result is the the input is zero actually well" }, { "start": 746.52, "end": 753.04, "text": " people care it's zero right but it means you don't even need to need to do it you" }, { "start": 753.04, "end": 759.14, "text": " can simply just load the rows that you are that you know are potentially non" }, { "start": 759.14, "end": 767.4399999999999, "text": " zero so yeah how do how do you decide how do you decide which ones you should" }, { "start": 767.4399999999999, "end": 772.76, "text": " load from memory essentially you're simulating you're already pre committing" }, { "start": 772.76, "end": 778.72, "text": " to a relu pattern right so this is how you do it essentially you build you" }, { "start": 778.72, "end": 786.3199999999999, "text": " build you take your input vector right here and you're trying to somehow see" }, { "start": 786.32, "end": 795.44, "text": " how that works we somehow come up with a vector of with a binary vector with" }, { "start": 795.44, "end": 799.72, "text": " numbers between like zero and one so everything right here is like a point" }, { "start": 799.72, "end": 808.6400000000001, "text": " one point five point three point eight so every single entry has a value every" }, { "start": 808.6400000000001, "end": 813.6, "text": " single entry will output like the probability that that particular element" }, { "start": 813.6, "end": 819.48, "text": " should be non zero and then you simply sample from that distribution and use a" }, { "start": 819.48, "end": 826, "text": " straight through Gumbel softmax in order to back propagate so they also do a lot" }, { "start": 826, "end": 830.52, "text": " of tricks right here I think they mentioned that in the forward propagation" }, { "start": 830.52, "end": 835.64, "text": " they even sometimes need to do a actually to pass just the softmax output" }, { "start": 835.64, "end": 840, "text": " instead of the actual sampling so there's a lot of engineering tricks to" }, { "start": 840, "end": 844.48, "text": " actually get this to work but safe to say that's during training we are we" }, { "start": 844.48, "end": 850.24, "text": " care about inference during inference you sample exactly one per module that" }, { "start": 850.24, "end": 858.48, "text": " is non zero okay so you have two different workflows the workflow one" }, { "start": 858.48, "end": 865.8, "text": " goes here decides what needs to be non zero right and then given that" }, { "start": 865.8, "end": 871.0799999999999, "text": " information you can do this 
feed forward layer in a sparse way but that is all" }, { "start": 871.0799999999999, "end": 878.76, "text": " useless if this right here is is not sparse so this is actually not sparse" }, { "start": 878.76, "end": 883.4799999999999, "text": " but it is low rank so they say well in order to figure out which things need to" }, { "start": 883.4799999999999, "end": 888.3199999999999, "text": " be non zero we technically don't need as much information as you know actually" }, { "start": 888.3199999999999, "end": 895.2199999999999, "text": " propagating information so what we can do is we can have a low rank essentially" }, { "start": 895.22, "end": 900.8000000000001, "text": " it's another feed forward layer again doing this blowing up the dimension to" }, { "start": 900.8000000000001, "end": 908, "text": " the feed forward dimension but we make it low rank so instead of instead of" }, { "start": 908, "end": 914.2, "text": " wait yeah instead of blowing up the dimension in between we shrink it down" }, { "start": 914.2, "end": 921.2, "text": " right you can see right here we shrink it down to a low dimension and then we" }, { "start": 921.2, "end": 927.48, "text": " go to the dimension of the feed forward layer to decide which things are one and" }, { "start": 927.48, "end": 932.8000000000001, "text": " zero and that's a thing you're gonna see often in this model is that they make" }, { "start": 932.8000000000001, "end": 941.6800000000001, "text": " use of low rank combined with sparsity and it's also a bit of a of a trouble" }, { "start": 941.6800000000001, "end": 946.84, "text": " that I have because for some things a low rank approximation is fine but you" }, { "start": 946.84, "end": 950.88, "text": " know there is a reason we have dense multiplications everywhere because" }, { "start": 950.88, "end": 955.88, "text": " sometimes it's not because with a low rank multiplication you essentially" }, { "start": 955.88, "end": 964.08, "text": " restrict your function space to a very very small subspace yeah but it seems" }, { "start": 964.08, "end": 969.88, "text": " to work so the trade-off here is that you get to do this sparse which means" }, { "start": 969.88, "end": 976.44, "text": " that the time it takes decreases and the memory but you have to this here over" }, { "start": 976.44, "end": 980.44, "text": " this this is new right you didn't have to do this before you could simply do" }, { "start": 980.44, "end": 987.0400000000001, "text": " the multiplication so this is going to add to your compute well this here is" }, { "start": 987.0400000000001, "end": 994.9200000000001, "text": " going to be faster and now it's about whether whether or not you can make this" }, { "start": 994.9200000000001, "end": 1004.12, "text": " side sufficiently low rank such that the the gains over here are more than the" }, { "start": 1004.12, "end": 1008.8800000000001, "text": " time that you have to invest to compute this max this mask at the first place" }, { "start": 1008.88, "end": 1014.4, "text": " over here again for these particular problems that they look at it seems to" }, { "start": 1014.4, "end": 1020.16, "text": " be working right but these kinds of trade-offs it's not guaranteed like it's" }, { "start": 1020.16, "end": 1026.92, "text": " not so clear to me that it would you know just work like it's not it's not" }, { "start": 1026.92, "end": 1031.8, "text": " straightforward that that trade-off would be positive right here there might" }, { "start": 1031.8, "end": 1036.56, "text": " very well be 
problems where this rank right here is just too small to carry" }, { "start": 1036.56, "end": 1041.72, "text": " meaningful information you need to make it bigger and that would sort of vanish" }, { "start": 1041.72, "end": 1047.6, "text": " all the savings you make over here because these savings are I mean" }, { "start": 1047.6, "end": 1054.56, "text": " essentially linear in the sparsity and this these gain sorry these these this" }, { "start": 1054.56, "end": 1060.28, "text": " right here is essentially linear in the in the low rank dimension so there's the" }, { "start": 1060.28, "end": 1066.02, "text": " trade-off right there they here is how you how you can express this you can" }, { "start": 1066.02, "end": 1071.56, "text": " essentially express this as the original multiplication with the first matrix" }, { "start": 1071.56, "end": 1079.44, "text": " relu through the relu then times the controller output and all of that then" }, { "start": 1079.44, "end": 1084.48, "text": " goes into the second multiplication that's how you can represent it" }, { "start": 1084.48, "end": 1089.04, "text": " mathematically that's not actually what you do right because here you still have" }, { "start": 1089.04, "end": 1095.36, "text": " the full multiplications with the weight matrices but it will result in the same" }, { "start": 1095.36, "end": 1102.4799999999998, "text": " thing as this formula all right so that is the sparse feed-forward layer and" }, { "start": 1102.4799999999998, "end": 1109.6399999999999, "text": " they do show that it decreases the coding time quite a bit and interestingly" }, { "start": 1109.6399999999999, "end": 1115.1999999999998, "text": " it also doesn't degrade performance too much in fact you can see right here this" }, { "start": 1115.1999999999998, "end": 1122.84, "text": " blue line is the average of the baseline models and if you if you don't go too" }, { "start": 1122.84, "end": 1128.12, "text": " sparse you still have quite good performance so this is quite close only" }, { "start": 1128.12, "end": 1134.72, "text": " if you go more sparse does your perplexity here start to suffer I think" }, { "start": 1134.72, "end": 1137.9199999999998, "text": " that that is one of the surprising things that there is a level of sparsity" }, { "start": 1137.9199999999998, "end": 1142.4399999999998, "text": " you can go at where you're actually considerably faster while your" }, { "start": 1142.4399999999998, "end": 1148.56, "text": " performance doesn't degrade yet again can very well be because for the problems" }, { "start": 1148.56, "end": 1155.1599999999999, "text": " we look at the sort of the they're not difficult enough to really make use of" }, { "start": 1155.1599999999999, "end": 1162.44, "text": " the capacities of the dense models okay so feed-forward is done now we go to the" }, { "start": 1162.44, "end": 1169.84, "text": " attention layer and the attention layer again is split up into two parts in fact" }, { "start": 1169.84, "end": 1176.08, "text": " they don't even they don't even really deal with the attention mechanism itself" }, { "start": 1176.08, "end": 1182.72, "text": " what they actually care about is in order to do attention attention is" }, { "start": 1182.72, "end": 1188.48, "text": " something like I have my queries and my keys and I do an outer product and I" }, { "start": 1188.48, "end": 1193.4399999999998, "text": " normalize by something that I can't remember and then I multiply by my" }, { "start": 1193.4399999999998, "end": 1202.52, "text": " 
values this is the attention formula and what they care about is how do I get the" }, { "start": 1202.52, "end": 1209.6, "text": " queries the keys and the the values they in order to make attention itself the" }, { "start": 1209.6, "end": 1215.96, "text": " sparse or long-range or efficient they rely on on different techniques that" }, { "start": 1215.96, "end": 1220.32, "text": " from other papers so for example they will later include the performer and the" }, { "start": 1220.32, "end": 1228.36, "text": " reformer architectures which make attention itself sparse or efficient or" }, { "start": 1228.36, "end": 1235.76, "text": " low-dimensional however in this particular paper they care about how do" }, { "start": 1235.76, "end": 1242.7199999999998, "text": " we even get these matrices and usually you get Q by multiplying your input by" }, { "start": 1242.7199999999998, "end": 1252.12, "text": " a weight matrix like WQ you get key by multiplying your input by a key weight" }, { "start": 1252.12, "end": 1259.56, "text": " matrix and you get V by X so all of these are dense multiplications and" }, { "start": 1259.56, "end": 1264.32, "text": " obviously they now become the bottleneck once we have the sparse feed-forward" }, { "start": 1264.32, "end": 1271.9199999999998, "text": " layers the dense layers in in the attention layers become the bottleneck" }, { "start": 1271.9199999999998, "end": 1276.1599999999999, "text": " the question is can we use the same trick here as we did before and the" }, { "start": 1276.1599999999999, "end": 1280.84, "text": " answer they say is no because the structure of the feed-forward layer here" }, { "start": 1280.84, "end": 1287.76, "text": " was such that it had the relu in between right so and that's why they argue so" }, { "start": 1287.76, "end": 1293.08, "text": " naturally a lot of things are gonna end up being zero which we can exploit by" }, { "start": 1293.08, "end": 1298.9599999999998, "text": " just making you know just just a few more things zero I guess but they don't" }, { "start": 1298.9599999999998, "end": 1304.36, "text": " they don't want to do this right here because here like none of the things" }, { "start": 1304.36, "end": 1310.6399999999999, "text": " necessarily are going to be zero in the output of these calculations so the Q or" }, { "start": 1310.64, "end": 1317.72, "text": " the K or the V they don't have many zero entries so might not be justified to go" }, { "start": 1317.72, "end": 1328.4, "text": " sparse and just say well make stuff zero so what do we do instead instead we look" }, { "start": 1328.4, "end": 1334.6000000000001, "text": " at this diagram here so on the top you have what the current attention" }, { "start": 1334.6, "end": 1340.7199999999998, "text": " mechanism looks like as I said there is a there is a dense layer essentially in" }, { "start": 1340.7199999999998, "end": 1344.6, "text": " front of each of these three matrices which is that's how you that's exactly" }, { "start": 1344.6, "end": 1351.84, "text": " how you get the matrix in the first place right we're going to look at a" }, { "start": 1351.84, "end": 1358.6799999999998, "text": " thing which they call a multiplicative layer so which this is this malt right" }, { "start": 1358.6799999999998, "end": 1363.8, "text": " here and the multiplicative layer potentially could replace the dense" }, { "start": 1363.8, "end": 1370.6, "text": " layer however they go a step further and they say they end up with this" }, { "start": 1370.6, "end": 1376.84, "text": " 
architecture right here where they have a multiplicative layer then it's a one" }, { "start": 1376.84, "end": 1381.24, "text": " multiplicative layer for all three matrices that is shared and then one" }, { "start": 1381.24, "end": 1386.6, "text": " convolutional layer for each of the different matrices which is gonna make" }, { "start": 1386.6, "end": 1392.32, "text": " stuff even faster and then they also they drop kind of this this dense" }, { "start": 1392.32, "end": 1399.56, "text": " mechanism right here and they simply add right here again I like I'm pretty sure" }, { "start": 1399.56, "end": 1405.56, "text": " this works right now for these particular problems hope like maybe" }, { "start": 1405.56, "end": 1411.8799999999999, "text": " because the problems don't make use of of the parameters or the original models" }, { "start": 1411.8799999999999, "end": 1417.56, "text": " were just poorly engineered they didn't they never actually needed all of these" }, { "start": 1417.56, "end": 1422.52, "text": " you know parameters like this one and we're all fine this could also be the" }, { "start": 1422.52, "end": 1427.6799999999998, "text": " case so we have two things to look at inside of the attention model the" }, { "start": 1427.6799999999998, "end": 1434.56, "text": " multiplicative layer and the conv layers and these kind of go together and it" }, { "start": 1434.56, "end": 1438.36, "text": " also goes together with what's usually done in the attention mechanism which" }, { "start": 1438.36, "end": 1446.6799999999998, "text": " is multi head attention so I'll draw a diagram of an attention mechanism for" }, { "start": 1446.68, "end": 1452.68, "text": " the about 500th time but you have some sort of a sequence right and every" }, { "start": 1452.68, "end": 1459.8400000000001, "text": " sequence I'll replicate the sequence over here so every sequence emits what's" }, { "start": 1459.8400000000001, "end": 1466.0800000000002, "text": " called a like a query which is a vector some vector which are the queries and" }, { "start": 1466.0800000000002, "end": 1473.92, "text": " also every element in the sequence emits a key so the keys are also some vectors" }, { "start": 1473.92, "end": 1484.16, "text": " and the keys are also some vectors and then routing is done via inner product" }, { "start": 1484.16, "end": 1489.6000000000001, "text": " overlap so probably these go would be routed together these two would be" }, { "start": 1489.6000000000001, "end": 1494.52, "text": " routed together this would probably be routed here it can also be routed to" }, { "start": 1494.52, "end": 1499.6000000000001, "text": " multiple stuff but you route essentially via inner product so that's how you" }, { "start": 1499.6, "end": 1507.1599999999999, "text": " construct the weight matrix or the query key matrix for then multiplying by the" }, { "start": 1507.1599999999999, "end": 1512.9199999999998, "text": " values the idea behind multi-headed attention which is what's usually on is" }, { "start": 1512.9199999999998, "end": 1518.8, "text": " that let's not only have one such block let's actually have many such blocks in" }, { "start": 1518.8, "end": 1525.1599999999999, "text": " parallel right and instead of using the entire vectors that are output right" }, { "start": 1525.16, "end": 1532.1200000000001, "text": " here by for example that are in Q Q or these the queries right Q or is a" }, { "start": 1532.1200000000001, "end": 1537.88, "text": " matrix and every row or column don't exactly remember is one of 
these vectors" }, { "start": 1537.88, "end": 1545.5600000000002, "text": " right here they say hey let's instead of so Q is a matrix let's say every row but" }, { "start": 1545.5600000000002, "end": 1554.28, "text": " for for let's just say every row if I'm wrong then you know just reimagine so" }, { "start": 1554.28, "end": 1560.48, "text": " instead of taking the entire vectors here like the entire vectors as queries" }, { "start": 1560.48, "end": 1567.6, "text": " we split the vectors into in this case into three parts and this first part" }, { "start": 1567.6, "end": 1571.32, "text": " right here that becomes the query for this attention mechanism the second" }, { "start": 1571.32, "end": 1574.96, "text": " part becomes the query for that attention mechanism and the third one" }, { "start": 1574.96, "end": 1579.16, "text": " becomes the query for yet another attention mechanism that's multi-headed" }, { "start": 1579.16, "end": 1587.48, "text": " attention same with the keys same with the values and yeah so now now we're" }, { "start": 1587.48, "end": 1597.88, "text": " prepared so what we want to do right here is we want to take a token and" }, { "start": 1597.88, "end": 1604.88, "text": " remember we now need to make a query let's say we want to produce the" }, { "start": 1604.88, "end": 1613.2800000000002, "text": " queries right so from this token we need to produce a query vector not only one" }, { "start": 1613.2800000000002, "end": 1620.72, "text": " but number of heads many query vectors from this token using some sort of some" }, { "start": 1620.72, "end": 1626.8400000000001, "text": " sort of a linear layer some sort of a linear function so that's how we do it" }, { "start": 1626.8400000000001, "end": 1632.4, "text": " they say we have this matrix right here the weight matrix D and what the weight" }, { "start": 1632.4, "end": 1638.76, "text": " matrix D the weight matrix D is there's the same dimension here as the input and" }, { "start": 1638.76, "end": 1648, "text": " has as many as many rows as we have different attention heads right so what" }, { "start": 1648, "end": 1652.24, "text": " we're going to do is we're going to element wise multiply and I would also" }, { "start": 1652.24, "end": 1659.48, "text": " add right here broadcast right broadcast so if you've used non-pi or" }, { "start": 1659.48, "end": 1663.48, "text": " TensorFlow or pi torch you know the broadcasting operation so the" }, { "start": 1663.48, "end": 1667.84, "text": " broadcasting is done this is of dimension one right here the broadcasting" }, { "start": 1667.84, "end": 1674.3600000000001, "text": " is done between this one and this s right here this is going to be broadcast" }, { "start": 1674.3600000000001, "end": 1681.08, "text": " into this form right here and you can see now I mean it's just an element wise" }, { "start": 1681.08, "end": 1686.48, "text": " multiplication so all that is is like differently scaled versions of X in each" }, { "start": 1686.48, "end": 1692.3600000000001, "text": " dimension right so each row is essentially X a little bit shaky so" }, { "start": 1692.3600000000001, "end": 1699.68, "text": " let's double shake X for the bottom row okay but this already is now a vector" }, { "start": 1699.68, "end": 1708.4, "text": " one vector for each of the attention heads now since element wise multiply is" }, { "start": 1708.4, "end": 1714.52, "text": " probably not going to get us very far we also multiply this by an actual matrix" }, { "start": 1714.52, "end": 1719.96, "text": 
" but instead of multiplying it by a D model times the model matrix again we go" }, { "start": 1719.96, "end": 1726.6399999999999, "text": " into a low rank low rank regime and simply say okay we have this number M" }, { "start": 1726.6399999999999, "end": 1734.56, "text": " and that's going to be a reduction on reduction on our dimensionality so this" }, { "start": 1734.56, "end": 1739.72, "text": " isn't D model by a D model matrix which would probably be expensive it's a D" }, { "start": 1739.72, "end": 1746.6000000000001, "text": " model by M matrix and out comes this so this is going to be the query vector for" }, { "start": 1746.6000000000001, "end": 1753.08, "text": " the first attention mechanism sorry no this is going to be the query vector for" }, { "start": 1753.08, "end": 1758.48, "text": " the first attention mechanism and this is going to be the query vector for the" }, { "start": 1758.48, "end": 1766.24, "text": " second attention head head I meant to say head there is a thing like they" }, { "start": 1766.24, "end": 1773.8, "text": " don't just choose M arbitrarily they in fact choose I believe s times M equals" }, { "start": 1773.8, "end": 1786.1200000000001, "text": " to D model right that is that is their their formula so they if they split into" }, { "start": 1786.1200000000001, "end": 1795.36, "text": " s different heads like let's in this case you see s is 2 then M is 3 and that" }, { "start": 1795.36, "end": 1799.84, "text": " has a very particular reason namely they say with this particular" }, { "start": 1799.84, "end": 1806.4799999999998, "text": " construction of the element was multiply followed by the multiplication" }, { "start": 1806.4799999999998, "end": 1813.7199999999998, "text": " by this weight matrix E if if we do it like this then they can have a theorem" }, { "start": 1813.7199999999998, "end": 1818.6799999999998, "text": " where is the theorem there is the theorem the theorem essentially says" }, { "start": 1818.68, "end": 1828.6000000000001, "text": " that they can they can represent an arbitrary permutation so they say the" }, { "start": 1828.6000000000001, "end": 1833.8, "text": " minimum thing the minimum thing that we have to be able to do is to take X and" }, { "start": 1833.8, "end": 1840.2, "text": " kind of permute it so to place every single element of X in the output" }, { "start": 1840.2, "end": 1849.24, "text": " wherever we want essentially they say every part of X should be able to be" }, { "start": 1849.24, "end": 1855.0800000000002, "text": " forward propagated to all the attention heads or to any of the attention heads" }, { "start": 1855.0800000000002, "end": 1859.6000000000001, "text": " and if a theorem that says that if they constructed like this any permutation is" }, { "start": 1859.6000000000001, "end": 1866.3600000000001, "text": " within the the realm is within possibilities for some matrices for some" }, { "start": 1866.36, "end": 1872.08, "text": " weight matrices D and E so that's kind of their justification of well we can" }, { "start": 1872.08, "end": 1879.04, "text": " represent all permutations so it can't be too bad right yeah I found a little" }, { "start": 1879.04, "end": 1884.12, "text": " bit of another way of you know seeing this if you look at this with the" }, { "start": 1884.12, "end": 1888.84, "text": " element wise multiply and so on it is easier to understand this as let me try" }, { "start": 1888.84, "end": 1897.8, "text": " to draw this up maybe over oopsie boops over here so if you think about it a" }, 
{ "start": 1897.8, "end": 1903.48, "text": " little bit it is like so you have and you also look at the formula this" }, { "start": 1903.48, "end": 1911.6799999999998, "text": " formula right here you can clearly see that this is in fact a matrix" }, { "start": 1911.6799999999998, "end": 1916.56, "text": " multiplication again so you have I would say you have if you look at this as D" }, { "start": 1916.56, "end": 1929.76, "text": " times X times E where X here is a matrix that has zeros but X on so on the" }, { "start": 1929.76, "end": 1937.1599999999999, "text": " diagonal it's X right which would give you it would give you sort of a so D is" }, { "start": 1937.1599999999999, "end": 1945.9199999999998, "text": " kind of this shape then X is that shape but only the diagonal is filled with X" }, { "start": 1945.92, "end": 1956.6000000000001, "text": " and then E is like that shape so and D and E are fixed matrices so you can see" }, { "start": 1956.6000000000001, "end": 1962, "text": " that what the multi what this multiplicative layer is doing essentially" }, { "start": 1962, "end": 1969.48, "text": " is it it defines outputs it defines outputs so these are the number of" }, { "start": 1969.48, "end": 1975.8400000000001, "text": " outputs and this is the dimensionality of the output and what you're able to" }, { "start": 1975.84, "end": 1981.24, "text": " do is this is in some height higher dimensional space you're able to" }, { "start": 1981.24, "end": 1986.24, "text": " manipulate the coordinate system scaling a little bit well a little bit" }, { "start": 1986.24, "end": 1991.1999999999998, "text": " arbitrarily but you cannot mix the individual dimension freely you can" }, { "start": 1991.1999999999998, "end": 1996.28, "text": " simply in that high dimensional space for a given mixing of dimensions that's" }, { "start": 1996.28, "end": 2001.1599999999999, "text": " what these matrices here do for a given mixing of dimensions for a given linear" }, { "start": 2001.16, "end": 2006.5600000000002, "text": " projections from the low dimensional to the high dimensional space you're able" }, { "start": 2006.5600000000002, "end": 2012.0800000000002, "text": " to manipulate the coordinate system so if you if you learn you need to be able" }, { "start": 2012.0800000000002, "end": 2019, "text": " to find matrices D and E such that for arbitrary samples the manipulation of" }, { "start": 2019, "end": 2023.72, "text": " the coordinate systems there makes sense it's a little bit like you know like" }, { "start": 2023.72, "end": 2031.48, "text": " doing a PCA or something on a on a data set right but it's just like during" }, { "start": 2031.48, "end": 2040.44, "text": " training right here so yeah I'm not sure again this is quite this is quite a loss" }, { "start": 2040.44, "end": 2049, "text": " this is quite a trade-off with an actual dense layer right here so but it's" }, { "start": 2049, "end": 2053.04, "text": " interesting to see that it works right and again this is only conceptual right" }, { "start": 2053.04, "end": 2058.8, "text": " here if you were to actually do this you would lose all the benefits that you" }, { "start": 2058.8, "end": 2062.7599999999998, "text": " would lose all the benefits that you had and again you can see a little bit that" }, { "start": 2062.7599999999998, "end": 2066.8, "text": " the trick here isn't necessarily sparsity but mostly low rank this is" }, { "start": 2066.8, "end": 2077.4, "text": " mostly like a low rank function yeah okay so we have the 
multiplicative layer" }, { "start": 2077.4, "end": 2082.2799999999997, "text": " we end up with the queries and the keys and the values for each attention head" }, { "start": 2082.28, "end": 2087.84, "text": " and now we're going to they're essentially say okay we could do this" }, { "start": 2087.84, "end": 2095.2000000000003, "text": " for every one of the three things or or we simply do it once which would give us" }, { "start": 2095.2000000000003, "end": 2103.28, "text": " this property of would you give us this property of the permutation being able" }, { "start": 2103.28, "end": 2108.76, "text": " and then we can do something even cheaper if we want to get the individual" }, { "start": 2108.76, "end": 2116.0800000000004, "text": " matrices right and so the trade-off here is well here still every permutation was" }, { "start": 2116.0800000000004, "end": 2121.8, "text": " possible for the different matrices so the Q could have different permutations" }, { "start": 2121.8, "end": 2126.48, "text": " than K then V or different functions here we're simply going to resort to one" }, { "start": 2126.48, "end": 2132.1200000000003, "text": " function one mixing or shuffling around of the dimension and then we're going" }, { "start": 2132.1200000000003, "end": 2135.84, "text": " to do something even cheaper which is this convolutional module and this" }, { "start": 2135.84, "end": 2142.08, "text": " convolutional module is also fairly simple to see so this output Y right" }, { "start": 2142.08, "end": 2150.08, "text": " here and draw it again over here you have two vectors right here and they" }, { "start": 2150.08, "end": 2156.52, "text": " say it somewhere they say the dimensionality somewhere so you have two" }, { "start": 2156.52, "end": 2162.1600000000003, "text": " vectors one per attention head this is the output of the multiplicative layer" }, { "start": 2162.16, "end": 2169.96, "text": " and presumably you would have those per token right we just looked at one token" }, { "start": 2169.96, "end": 2175.16, "text": " but the next token let me draw a little in this color the next token would also" }, { "start": 2175.16, "end": 2185.8399999999997, "text": " have them and then the next token would also have two of those all right let's" }, { "start": 2185.84, "end": 2195.7200000000003, "text": " do this so what you'd get is a tensor that has the sequence length L it has" }, { "start": 2195.7200000000003, "end": 2204.44, "text": " the number of heads what's s I guess or number of modules and it has M which is" }, { "start": 2204.44, "end": 2210.8, "text": " that that essentially that low rank dimensionality that the keys and queries" }, { "start": 2210.8, "end": 2217.4, "text": " and values live in and they simply treat this as an image and then they run a" }, { "start": 2217.4, "end": 2223.6800000000003, "text": " convolution across it so the convolution is going to be let me see if I can draw" }, { "start": 2223.6800000000003, "end": 2231.2400000000002, "text": " this properly the convolution is going to be across these two so the filter is" }, { "start": 2231.24, "end": 2241.68, "text": " going to be like this and then in all the dimensions like this I'm terrible at" }, { "start": 2241.68, "end": 2248.3999999999996, "text": " drawing but the filter essentially is going to be F in the dimension of s F in" }, { "start": 2248.3999999999996, "end": 2255.8999999999996, "text": " the dimension of L and M deep and you have M filters of those so you you have" }, { "start": 2255.9, "end": 2263.04, 
"text": " an s by L by M tensor here and you transform it also to an s by L by M" }, { "start": 2263.04, "end": 2269.76, "text": " tensor essentially you can just think of this as a regular convolutional layer" }, { "start": 2269.76, "end": 2276.36, "text": " and what again what does the convolution go over remember that the multiplicative" }, { "start": 2276.36, "end": 2283.4, "text": " layer is simply works on a single token it mixes it's kind of shot it is able to" }, { "start": 2283.4, "end": 2288.8, "text": " shuffle around the tokens dimensionalities a little bit to permute" }, { "start": 2288.8, "end": 2292.8, "text": " them a little bit in the best case and in all other cases it essentially" }, { "start": 2292.8, "end": 2298.28, "text": " manipulates the scaling in a high dimensional space and now with the" }, { "start": 2298.28, "end": 2302.48, "text": " convolutional layer what we can do is we can bridge a little bit of information" }, { "start": 2302.48, "end": 2308.76, "text": " already between the tokens even before we go into the attention module so given" }, { "start": 2308.76, "end": 2314.6000000000004, "text": " that the convolution is across the L and the s dimension it means that for the s" }, { "start": 2314.6000000000004, "end": 2320.28, "text": " dimension information is able to be passed between neighboring attention" }, { "start": 2320.28, "end": 2324.7200000000003, "text": " heads and for the L dimension it means information is being able to be passed" }, { "start": 2324.7200000000003, "end": 2331.28, "text": " between neighboring tokens in the sequence so that potentially gives some" }, { "start": 2331.28, "end": 2335.1600000000003, "text": " sort of a positionality to tokens because now that there's a notion of" }, { "start": 2335.16, "end": 2340.12, "text": " being close together and also it gives maybe a little bit of a meaning to" }, { "start": 2340.12, "end": 2345.2, "text": " different attention heads because the attention heads up until this point" }, { "start": 2345.2, "end": 2350.7599999999998, "text": " they've just been kind of unordered independent things and now they hang" }, { "start": 2350.7599999999998, "end": 2357, "text": " together a little bit this all of this is sort of one of the things why the" }, { "start": 2357, "end": 2363.8399999999997, "text": " the exact conclusions of this paper are going to be hard to assess even if they" }, { "start": 2363.84, "end": 2368.56, "text": " do ablations right they at the same time where they introduce efficiency they" }, { "start": 2368.56, "end": 2374, "text": " also introduce entirely new ways of sort of doing things they introduce new paths" }, { "start": 2374, "end": 2380.92, "text": " when it where information can be passed from between things and so it's very" }, { "start": 2380.92, "end": 2387.6800000000003, "text": " hard to point down exactly where things go right and wrong so this was the" }, { "start": 2387.68, "end": 2396.3599999999997, "text": " sparse or rather low dimensional attention module again this is first" }, { "start": 2396.3599999999997, "end": 2402.3999999999996, "text": " one of these multiplicative layers which is element wise multiply followed by" }, { "start": 2402.3999999999996, "end": 2409.3199999999997, "text": " matrix multiplication to a lower dimension and then that is followed by" }, { "start": 2409.32, "end": 2418.04, "text": " these by these convolutions but it's convolutional layers right here so they" }, { "start": 2418.04, "end": 2425.0800000000004, "text": " call 
{ "start": 2425.0800000000004, "end": 2430.8, "text": " If they combine all of this together, you can see right here, the blue with the shade is the average of the baselines'" }, { "start": 2430.8, "end": 2437.1600000000003, "text": " perplexity, so lower is presumably better, and you can see, up to some noise," }, { "start": 2437.16, "end": 2443.96, "text": " all of these things are fairly consistent. They follow the" }, { "start": 2443.96, "end": 2450.96, "text": " trajectory of the baselines quite neatly, some are even a bit lower. This one" }, { "start": 2450.96, "end": 2455.6, "text": " right here, though, I'm not sure, there is maybe some confusion, because" }, { "start": 2455.6, "end": 2461.96, "text": " the F right here is the filter size, and the S is the sparsity in" }, { "start": 2461.96, "end": 2466.8799999999997, "text": " the multiplicative layer, so essentially how many attention heads it splits" }, { "start": 2466.88, "end": 2473.8, "text": " stuff into. And you can see right here, there's just a conv and" }, { "start": 2473.8, "end": 2479.12, "text": " there's just a mult, but the F is with the mult, which confuses me, because the" }, { "start": 2479.12, "end": 2488.6800000000003, "text": " F is the filter size, so technically that should be with the conv, I guess. If the" }, { "start": 2488.6800000000003, "end": 2495.2000000000003, "text": " authors are watching, please leave a comment if I'm wrong right here, otherwise" }, { "start": 2495.2, "end": 2503.6, "text": " I'm confused. In any case, they show that the baseline transformers don't" }, { "start": 2503.6, "end": 2509.24, "text": " particularly do that much better in these NLP tasks, or even do worse" }, { "start": 2509.24, "end": 2513.68, "text": " sometimes, as you can see right here, though everything is pretty much within" }, { "start": 2513.68, "end": 2520.3999999999996, "text": " like a standard deviation of these scaling transformers. So this" }, { "start": 2520.3999999999996, "end": 2524.96, "text": " architecture that we've discussed right now is the scaling transformer. The" }, { "start": 2524.96, "end": 2529.8, "text": " last thing to do would be to add a sparse loss layer, so they can replace" }, { "start": 2529.8, "end": 2535.92, "text": " the dense layer with a multiplicative layer similar to previous sections. This" }, { "start": 2535.92, "end": 2542.44, "text": " speeds up the decoding time, they say, but may degrade perplexity;" }, { "start": 2542.44, "end": 2547.6, "text": " results are in the appendix. So the loss layer might be the last" }, { "start": 2547.6, "end": 2558.16, "text": " refuge of really dense things to do. But remember, due to the fact that in the" }, { "start": 2558.16, "end": 2566.64, "text": " feed forward layers we sample from this distribution to really be sparse, or in" }, { "start": 2566.64, "end": 2571.96, "text": " fact we might do argmax during inference, that's where the speed up" }, { "start": 2571.96, "end": 2577.44, "text": " comes from. During training we actually have to forward propagate the softmax" }, { "start": 2577.44, "end": 2583.76, "text": " from time to time so that the training works, and that means that the benefits" }, { "start": 2583.76, "end": 2589.92, "text": " of sparsity are lost, because if we don't hard sample ones and zeros, if we soft" }, { "start": 2589.92, "end": 2593.92, "text": " sample them, then all the rows are still activated and we need to track everything." },
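A small sketch of that training-versus-inference asymmetry. This is my own illustration under assumptions (16 hypothetical blocks of feed-forward rows, PyTorch's built-in gumbel_softmax), not the paper's actual controller:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 16)  # per-token logits over 16 blocks of FF rows

# Training: a soft (Gumbel-softmax) sample is forward-propagated from time
# to time; every block gets a nonzero weight, so no rows can be skipped.
soft = F.gumbel_softmax(logits, tau=1.0, hard=False)
print((soft > 0).float().mean().item())  # 1.0: all rows stay activated

# Inference: a hard argmax gives a one-hot choice, so only the selected
# block of the weight matrix has to be loaded and multiplied.
hard = F.one_hot(logits.argmax(dim=-1), num_classes=16).float()
print(hard.sum(dim=-1))  # exactly one active block per token
```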
"start": 2593.92, "end": 2599.16, "text": " everything and the same goes I think a little bit for batch inference so if I" }, { "start": 2599.16, "end": 2603.12, "text": " have batch inference even if I hard sample right different samples are going" }, { "start": 2603.12, "end": 2608.88, "text": " to have different activation patterns and therefore you know with enough" }, { "start": 2608.88, "end": 2613.92, "text": " samples all the things are going to be one somewhere and therefore I probably" }, { "start": 2613.92, "end": 2618, "text": " need to load the entire matrix right here from memory I need to do the" }, { "start": 2618, "end": 2623.6, "text": " multiplication with the entire matrix possibly not for all the vectors but also" }, { "start": 2623.6, "end": 2628.7999999999997, "text": " possibly something like a GPU probably wouldn't care that some stuff is zero" }, { "start": 2628.8, "end": 2634.6800000000003, "text": " it's gonna be as fast just to do all the things at the same time but that might" }, { "start": 2634.6800000000003, "end": 2641.1200000000003, "text": " be a hardware limitation okay so that was the scaling transformer and now we're" }, { "start": 2641.1200000000003, "end": 2645.7200000000003, "text": " going to supercharge the scaling transformer which makes it into a" }, { "start": 2645.7200000000003, "end": 2651.6000000000004, "text": " terraformer I don't think there's any relation to the tool terraform but no" }, { "start": 2651.6000000000004, "end": 2658.6000000000004, "text": " we're running out of names of formers so yeah this was the last refuge" }, { "start": 2658.6, "end": 2667, "text": " I guess so what they do is they use essentially they use essentially the" }, { "start": 2667, "end": 2676.12, "text": " architecture from the attention from reformer so yes we focus on the" }, { "start": 2676.12, "end": 2682, "text": " locality sensitive hashing attention from reformer was that reformer I" }, { "start": 2682, "end": 2692.64, "text": " thought that was perform I am confused by my by my own stuff reformer yes so" }, { "start": 2692.64, "end": 2698.84, "text": " they do two things right they have an architecture for a long sequences while" }, { "start": 2698.84, "end": 2702.32, "text": " integrating sparse attention layer into a scaling transformer we noticed" }, { "start": 2702.32, "end": 2707.96, "text": " architecture is suboptimal that's what I said at the beginning separating" }, { "start": 2707.96, "end": 2711.6, "text": " decoder self-attention and encoder decoder attention is not necessary" }, { "start": 2711.6, "end": 2716.2799999999997, "text": " anymore from the perspective of efficiency we remove the encoder decoder" }, { "start": 2716.2799999999997, "end": 2721.52, "text": " attention that I said that at the very beginning but just concatenate the" }, { "start": 2721.52, "end": 2730.36, "text": " encoder representation before the decoder tokens so they replace the" }, { "start": 2730.36, "end": 2738.72, "text": " encoder decoder attention by essentially two attention blocks that is that okay I" }, { "start": 2738.72, "end": 2745.4399999999996, "text": " guess there's no performer in here just the reformer so the LSH I've done a" }, { "start": 2745.4399999999996, "end": 2751.68, "text": " video on this locality sensitive hashing instead of full attention so if you have" }, { "start": 2751.68, "end": 2757.48, "text": " really long sequences you as I said you need to compute inner products between" }, { "start": 2757.48, "end": 2765.12, "text": " 
{ "start": 2815.6, "end": 2825.44, "text": " And then also they include the recurrent blocks. No, recurrent blocks, that's later;" }, { "start": 2825.44, "end": 2831.52, "text": " first it's the reversibility. All of this is just so similar." }, { "start": 2831.52, "end": 2840.2, "text": " Reversibility is also apparently in Reformer, and what reversibility means is," }, { "start": 2840.2, "end": 2843.96, "text": " it's kind of this architecture right here. So again we have two attentions and" }, { "start": 2843.96, "end": 2849.12, "text": " then one feed forward, right, the second attention replaces the encoder-decoder" }, { "start": 2849.12, "end": 2855.92, "text": " attention. And reversible means that instead of having one strand, like one" }, { "start": 2855.92, "end": 2860.7200000000003, "text": " flow of forward propagating information, we have" }, { "start": 2860.7200000000003, "end": 2866.84, "text": " two. So there's I1 and I2, input one and input two, we have two information" }, { "start": 2866.84, "end": 2872.4, "text": " flows forward, and then every function that's applied is applied to one flow" }, { "start": 2872.4, "end": 2878.32, "text": " and added to the other flow. This gives you this, and this one right here" }, { "start": 2878.32, "end": 2885.08, "text": " is simply forward propagated as a residual connection, essentially, and then" }, { "start": 2885.08, "end": 2890.7200000000003, "text": " x2 is taken. So the flow of the actual function would be this right here," }, { "start": 2890.7200000000003, "end": 2897.84, "text": " you can see this is the flow hitting all the functions, and you can" }, { "start": 2897.84, "end": 2902.36, "text": " also see that for each of the functions we always have" }, { "start": 2902.36, "end": 2908.2400000000002, "text": " a signal that travels without being touched by the function right here. Okay," }, { "start": 2908.2400000000002, "end": 2913.7200000000003, "text": " so that signal right here, and this is the signal right here, and that makes the" }, { "start": 2913.7200000000003, "end": 2919.4, "text": " blocks reversible, and that means that I don't have to keep activations around." },
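In code, the two-stream construction looks something like this. A minimal sketch with stand-in functions f and g (in the terraformer these would be the attention and feed-forward blocks); shapes and sizes are made up:

```python
import torch
import torch.nn as nn

d = 16
f = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))  # stand-in for attention
g = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))  # stand-in for feed forward

x1, x2 = torch.randn(2, d), torch.randn(2, d)  # the two information flows

# Each function is applied to one stream and added to the other, so there
# is always a signal that passes the current function untouched.
y1 = x1 + f(x2)
y2 = x2 + g(y1)
print(y1.shape, y2.shape)
```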
{ "start": 2919.4, "end": 2928.04, "text": " Mind you, this limits the capabilities a lot. So, non-reversible: an" }, { "start": 2928.04, "end": 2932.28, "text": " example for non-reversible would be, well, this here is non-reversible, because" }, { "start": 2932.28, "end": 2939.2400000000002, "text": " unless I do like a linear function that goes from exactly the same" }, { "start": 2939.2400000000002, "end": 2944.6400000000003, "text": " dimension to the same dimension and that is non-degenerate, I cannot" }, { "start": 2944.6400000000003, "end": 2950.6400000000003, "text": " possibly reconstruct the input right here, like the signal right here, X, from" }, { "start": 2950.6400000000003, "end": 2955.36, "text": " the output Y, not even for a single one of those blocks. It's not possible" }, { "start": 2955.36, "end": 2964.52, "text": " for me essentially to do this. So the reversibility changes that: it" }, { "start": 2964.52, "end": 2969.08, "text": " essentially means I can always reconstruct from these signals," }, { "start": 2969.08, "end": 2974, "text": " I can reconstruct the intermediate activations, and therefore I don't need" }, { "start": 2974, "end": 2979.02, "text": " to store them. Because in a normal neural network, as I forward propagate, I need to" }, { "start": 2979.02, "end": 2984.56, "text": " store a lot of intermediate stuff, like right here and right here, in order to" }, { "start": 2984.56, "end": 2990.92, "text": " do back propagation; I need those things, because otherwise I couldn't" }, { "start": 2990.92, "end": 2994.52, "text": " calculate the gradient, so I need to store the activations somewhere." }, { "start": 2994.52, "end": 3000.64, "text": " Reversible networks, reversible blocks, do not have this property: they do not need" }, { "start": 3000.64, "end": 3005.7999999999997, "text": " to store activations, because they're reversible. And they're made reversible not by changing" }, { "start": 3005.7999999999997, "end": 3010.34, "text": " the individual modules, like this or this, but by simply having this construction" }, { "start": 3010.34, "end": 3016.44, "text": " of the two strands of information, and the modules simply apply between the two." }, { "start": 3016.44, "end": 3022.6400000000003, "text": " It's a pretty smart architecture, but one has to say it very often has" }, { "start": 3022.6400000000003, "end": 3029.1200000000003, "text": " significant trade-offs, because these things being reversible also brings" }, { "start": 3029.1200000000003, "end": 3033.56, "text": " some properties, like there are a lot of functions you cannot express anymore," }, { "start": 3033.56, "end": 3040.12, "text": " because you need to keep everything reversible. So again, I think for the" }, { "start": 3040.12, "end": 3045.4, "text": " problems they particularly look at here it might work; it might not work for all" }, { "start": 3045.4, "end": 3051.04, "text": " problems. I think that's a bit of a general thing in this paper" }, { "start": 3051.04, "end": 3056.44, "text": " right here: it's more like, we're gonna have to test, for every new task we" }, { "start": 3056.44, "end": 3064.04, "text": " tackle, or new challenges, new modalities, whether these things still hold." },
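For what it's worth, the reconstruction is easy to verify numerically. Again a sketch of my own, with stand-in linear functions rather than the real attention and feed-forward blocks:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 16
f, g = nn.Linear(d, d), nn.Linear(d, d)  # stand-ins for the real modules
x1, x2 = torch.randn(2, d), torch.randn(2, d)

# Forward pass of one reversible block (as sketched above).
y1 = x1 + f(x2)
y2 = x2 + g(y1)

# During backprop, the inputs (and with them the intermediate activations)
# can be recomputed from the outputs instead of being stored.
x2_rec = y2 - g(y1)
x1_rec = y1 - f(x2_rec)
print(torch.allclose(x1, x1_rec, atol=1e-6),
      torch.allclose(x2, x2_rec, atol=1e-6))  # True True
```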
{ "start": 3064.04, "end": 3069.88, "text": " The last thing they build in is recurrence, and they say it's for generalization. And" }, { "start": 3069.88, "end": 3078.52, "text": " that is, if I understand it correctly, they use simple recurrent units, not" }, { "start": 3078.52, "end": 3082.6, "text": " like an LSTM, because they say that would be too slow. So, simple recurrent units." }, { "start": 3082.6, "end": 3087.12, "text": " They're still fairly complicated; like, I've looked them up, I didn't know" }, { "start": 3087.12, "end": 3092.08, "text": " what they were. They're still okay complicated, so it's not just" }, { "start": 3092.08, "end": 3096.64, "text": " like a recurrent layer, it actually has gates and so on, a bit" }, { "start": 3096.64, "end": 3106.56, "text": " like GRU or LSTM cells. And if I understand correctly, this goes between," }, { "start": 3106.56, "end": 3114.36, "text": " so as I said before, in the feed forward layer, every single token goes" }, { "start": 3114.36, "end": 3120.2799999999997, "text": " independently through that. If I" }, { "start": 3120.2799999999997, "end": 3126.44, "text": " understand this correctly, this introduces a recurrent connection in" }, { "start": 3126.44, "end": 3138.48, "text": " between these. Did I understand it correctly? Okay: we also add" }, { "start": 3138.48, "end": 3144.92, "text": " a recurrence to the feed forward block of terraformer; recurrent layers allow" }, { "start": 3144.92, "end": 3153.12, "text": " information to propagate in time, even in a single decoder block. Okay, I" }, { "start": 3153.12, "end": 3158.56, "text": " think I understood that correctly. So within the feed forward block right here" }, { "start": 3158.56, "end": 3165.6, "text": " there is a recurrent connection between the different tokens. Every token goes" }, { "start": 3165.6, "end": 3170.56, "text": " independently through that, but now we introduce actually a sort of dependence," }, { "start": 3170.56, "end": 3174.16, "text": " or a function that goes from the first token to the second to the third and so" }, { "start": 3174.16, "end": 3181.68, "text": " on: a small recurrent neural network." },
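As a rough illustration of what a simple-recurrent-unit-style layer inside the feed-forward block could look like. This is a stripped-down cell of my own, not the paper's exact SRU: the heavy matrix multiplies run for all tokens in parallel, and only a cheap elementwise recurrence runs sequentially from token to token:

```python
import torch
import torch.nn as nn

class TinySRU(nn.Module):
    """A stripped-down SRU-style cell (illustrative, not the paper's exact one)."""
    def __init__(self, d):
        super().__init__()
        self.w = nn.Linear(d, 2 * d)  # candidate and forget gate, computed per token

    def forward(self, x):  # x: (length, d)
        z, forget = self.w(x).chunk(2, dim=-1)   # parallel over all tokens
        forget = torch.sigmoid(forget)
        c, out = torch.zeros_like(z[0]), []
        for t in range(x.shape[0]):  # the recurrent link from token to token
            c = forget[t] * c + (1 - forget[t]) * z[t]
            out.append(c)
        return torch.stack(out)

y = TinySRU(8)(torch.randn(5, 8))
print(y.shape)  # torch.Size([5, 8])
```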
{ "start": 3181.68, "end": 3186.72, "text": " And again, one can only speculate why they have this in here. I mean, they say that the results on" }, { "start": 3186.72, "end": 3194.8399999999997, "text": " C4 are minimal, which is their language modeling task, and they say the biggest" }, { "start": 3194.8399999999997, "end": 3199.72, "text": " benefits are when they do these toy tasks where you need to copy" }, { "start": 3199.72, "end": 3205.3999999999996, "text": " decimal digits, and then you can train on 128 digits but then you can test" }, { "start": 3205.3999999999996, "end": 3211.2, "text": " on 256, so it's over two times longer than seen in training. So they really" }, { "start": 3211.2, "end": 3217.48, "text": " make this point that it's for generalization, though it is very odd; this is" }, { "start": 3217.48, "end": 3222.72, "text": " a very odd addition. I could follow them up until, like, you know, here it says," }, { "start": 3222.72, "end": 3226.4399999999996, "text": " yeah, okay, you go for long sequences, that's cool, long sequences" }, { "start": 3226.4399999999996, "end": 3231.7599999999998, "text": " are cool, it's cool if your model can also do long sequences, fine. Then" }, { "start": 3231.7599999999998, "end": 3237.3599999999997, "text": " memory efficiency, okay, you know, so given that it's all sparse and low rank and so" }, { "start": 3237.36, "end": 3245.1600000000003, "text": " on, you also might want to use less memory, cool. But then recurrence, for this," }, { "start": 3245.1600000000003, "end": 3251.2000000000003, "text": " this is quite an odd choice, I feel, and it could be that it simply didn't work," }, { "start": 3251.2000000000003, "end": 3258.1600000000003, "text": " like... So they also say that the terraformer here, in sort of these tasks" }, { "start": 3258.1600000000003, "end": 3264.76, "text": " like summarization, sort of beats or matches state-of-the-art, matches" }, { "start": 3264.76, "end": 3270.36, "text": " much larger models and so on. I can imagine that their numbers were" }, { "start": 3270.36, "end": 3277, "text": " slightly smaller, like slightly worse than kind of the baselines, and they were" }, { "start": 3277, "end": 3283.5600000000004, "text": " just looking for something to add to pump up those numbers, and this worked. If" }, { "start": 3283.5600000000004, "end": 3289.82, "text": " this is the case, and that's a big if, again it's very dangerous, because it" }, { "start": 3289.82, "end": 3294.32, "text": " might work for these particular problems and not for others. If this was" }, { "start": 3294.32, "end": 3298.6800000000003, "text": " really just like an idea they had and said, well, it'd be cool if that's in" }, { "start": 3298.6800000000003, "end": 3305.6800000000003, "text": " there, then, you know, good, I'm willing to accept that as" }, { "start": 3305.6800000000003, "end": 3312.6800000000003, "text": " well. Alright, so that was the terraformer, and here you see, the" }, { "start": 3312.6800000000003, "end": 3321.96, "text": " terraformer now has over a 37x speed up. It's a considerably large model, but" }, { "start": 3321.96, "end": 3329, "text": " for this large model it requires less than 100 milliseconds per token of" }, { "start": 3329, "end": 3336.92, "text": " decoding time, while not degrading in performance too much. So that is," }, { "start": 3336.92, "end": 3341.52, "text": " I think, quite an achievement, even if it's only for particular types of tasks" }, { "start": 3341.52, "end": 3346.4, "text": " like these here, it is quite an achievement. And it's a bit of a shame" }, { "start": 3346.4, "end": 3351.2, "text": " that the speed ups are only so huge for the really huge" }, { "start": 3351.2, "end": 3357.2, "text": " models. I guess it makes sense, because these effects are often compounding, you" }, { "start": 3357.2, "end": 3365.8399999999997, "text": " know, so for you and me, with like our regular old computers, laptops, it maybe" }, { "start": 3365.8399999999997, "end": 3370.2, "text": " won't make that much of a difference in terms of speed. It might make a" }, { "start": 3370.2, "end": 3374.7599999999998, "text": " difference in terms of memory because of the reversibility, but other than that," }, { "start": 3374.7599999999998, "end": 3380.8399999999997, "text": " yeah. But it's good if you want to work with larger" }, { "start": 3380.84, "end": 3387.44, "text": " models but you don't necessarily have that much compute and you do inference, this might" }, { "start": 3387.44, "end": 3391.6800000000003, "text": " be something for you. They specifically say that not everything has been tried" }, { "start": 3391.6800000000003, "end": 3395.56, "text": " yet; they still don't do quantization, which could yet deliver another speed up," }, { "start": 3395.56, "end": 3400.84, "text": " and there's also lots of things to do to actually speed up training. Maybe there's" }, { "start": 3400.84, "end": 3407.2400000000002, "text": " a way to get around this Gumbel softmax, the need to forward propagate the true softmax" }, {
3407.24, "end": 3413.3199999999997, "text": " max from time to time and so on so lots of engineering lots of kind of choices" }, { "start": 3413.3199999999997, "end": 3418.56, "text": " that are interleaved very hard to say where gain comes from but undeniable" }, { "start": 3418.56, "end": 3424, "text": " gain has been made in huge form and that's cool all right tell me what you" }, { "start": 3424, "end": 3437.84, "text": " think I'll see you next time bye bye" } ]