Now, essentially, what we are doing through this whole process is what is called gradient descent learning. In gradient descent learning you have your weight w, and if you look over here, this is the gradient at that point; you move in the direction in which the gradient takes the cost down. Now let us see what happens in the weight space itself. Say I plot two different plots over here, for a very simple neuron which just associates two features to one single output, starting with some random initial value of the weights. With this random initial value we will have a point in the weight space, as well as a point in the plot of epochs versus cost. Now, based on my update rule from gradient descent, I will get a different, lower value of the cost, and accordingly I get to my next point in the weight space. As we go deeper and deeper, we will come across the major fallacy of this method: you may end up at some value of w which is not exactly the minimum, for instance when you are stuck at a local minimum.
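As a concrete illustration, here is a minimal sketch of gradient descent on a single two-input neuron, assuming a mean squared error cost; the toy data and the learning rate lr are my own illustrative choices, not values from the lecture.

import numpy as np

# toy data: two features per sample, one target value each (illustrative)
X = np.array([[0.5, 1.2], [1.0, -0.3], [-0.7, 0.8]])
y = np.array([1.0, 0.0, 0.5])

w = np.random.randn(2)   # random initial point in the weight space
lr = 0.1                 # learning rate (assumed value)

for epoch in range(100):
    y_hat = X @ w                          # forward pass of the simple neuron
    cost = np.mean((y_hat - y) ** 2)       # cost at the current point
    grad = 2 * X.T @ (y_hat - y) / len(y)  # gradient of the cost w.r.t. w
    w = w - lr * grad                      # update rule: step against the gradient

Each pass through the loop moves the point in weight space a little further along the negative gradient, and the cost-versus-epoch curve comes down accordingly.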
With that implementation you will be able to relate the ways of connecting these input images, via certain features, to a class of outputs. Now, finally, if you want to explore this in more detail, you are also welcome to use MATLAB with the Neural Network Toolbox, which has a much better GUI, and given the fact that a lot of people are more experienced with MATLAB, you might be able to work with it much faster. But we will be sticking to our option of doing it in Python. The other implementations are obviously based on Lua Torch; PyTorch takes that syntax and the base library of Torch, which is one of the fastest ones to date, and integrates them within Python to work it out. So that is all I have for this particular session.
So, welcome to today's lecture. We will continue with what we have done in the last class. Today is a practical, lab-based session in which we will be doing a coding exercise. If you have followed the last four lectures which we have finished off, the earlier lecture actually got you introduced to a very simple linear perceptron; that was basically a coding exercise, and there we had implemented one particular piece of code to compute features on the train set as well as on the test set of our dataset, and these were then stored as a pickle file, which is a local file format used within Python to store your computed matrices.

Now, today what we are going to do is build on top of that. In the last class I had concluded at the point where all of the features had been computed; from here onwards you have all the files which were saved, and from those I am going to take it forward: create a new perceptron model, use the features which were already stored in a pickle file by loading them from the pickle file, and subsequently run a classification.

The location is still the same, so you will have to get the code from the GitHub link where we have been keeping all our codes posted. The earlier feature-extraction cells are still there; the reason we preserved them is in case you would like to redo all the earlier computation as well. Once you have calculated the features, that header is not actually needed anymore, and you will no longer require direct access to the torchvision dataset, because all the required information has been stored in your pickle file itself. So these parts are optional, but it is always a good idea to keep them around.
Now, in the last class, what we had done was train them and store them in pickle files. Along with your features from the training data, there is another pickle file which stores the features from the test data. As such, if I am not giving a very explicit location, it means that all these files reside in the same directory as the code; if they are at some other location, you can just append the extra directory path from where you load them. So this is where you just define a file pointer as f, and then you use that file pointer in order to load your pickle, and you get, basically, your arrays, and then you just display whether your matrices have been fetched. So let us keep running these one by one. This is the first module which has to be executed; once that is done and it works out, you get your first instance which has been executed over there.
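To make this loading step concrete, here is a minimal sketch of what such a cell might look like; the file names train_features.pkl and test_features.pkl and the variable names are my own assumptions, since they are not spelled out here.

import pickle

# load the training features and labels saved in the last session
with open('train_features.pkl', 'rb') as f:
    train_features, train_labels = pickle.load(f)

# load the test features and labels from their own pickle file
with open('test_features.pkl', 'rb') as f:
    test_features, test_labels = pickle.load(f)

# display the shapes to confirm the matrices have been fetched
print(train_features.shape, train_labels.shape)
print(test_features.shape, test_labels.shape)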
Now we go from that comment on to the next one, which is about loading, and it shows that the matrices have been loaded. Then we go to the next cell, which is to define my network architecture. OK, so within my network architecture, how it goes is this: if you remember from our simple neural network, what we have here is a ten-class classification. It means that there are ten neurons on which the output will be coming out, and these are a one-hot kind of encoding. What that technically means is that, for a given sample, the neuron matching the true class has a value of one and every other neuron has a zero value; ideally that is what is present in my training data set. Whenever you are predicting, you will get one of the neurons with a very high value, close to one, while all the other neurons will have a value which is close to zero.
So the first part is to define an initialization part. First we will start by defining a class; this class definition is basically your container within Python for saying that this is the particular model I am using. What I am going to do is import the nn library; within nn I have a particular data type, the module class, from which this class derives. The initialization specifies the number of inputs which go into my neural network, that is, on the input side, and that is equal to my number of channels, which I call over here n_channels. So, if you look over here, this n_channels is supposed to be the length of the feature vector. Within the perceptron model over here, this is the starting point of my constructor and this is the end point of the constructor; the starting point of the constructor is what gets defined by this call to super, which just invokes the parent class constructor.

So over here, the first part is that I will be defining my perceptron, which has this linear connection, and the next part is how it maps onto the neurons which I have on my output side. How this is connected is that I have n_channels inputs in my input set, which equals the length of the feature vector, and they connect to all the ten neurons on my output side.

Now, typically, for this initial part there will not be much modification subsequently, as long as you are mapping one input vector to one output vector. So this is what initializes and defines my perceptron model.
The next part is that I need to define what my forward pass over this model is. A forward pass means that if I give an input, I get an output; so what is the relationship between this input and my output? OK, now you remember from our perceptron model that everything over there had to be connected down to my output, and once I have this feed-forward connection going, the next part is that there is a summation which happens over there. So what this means over here is that whatever comes down as my input gets passed through the linear layer, taking it as input to this network, and then I apply a softmax criterion over there. This whole thing is called a perceptron: whatever I give as input, it will go through a linear transformation and then a softmax. So this is how I define my perceptron model. So let us run that part; with that I have my model function defined.
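Putting those pieces together, a minimal sketch of such a perceptron class in PyTorch might look as follows; the class name Perceptron and the layer name fc are my own assumptions, following the structure described above.

import torch.nn as nn
import torch.nn.functional as F

class Perceptron(nn.Module):
    def __init__(self, n_channels):
        # starting point of the constructor: invoke the parent class constructor
        super(Perceptron, self).__init__()
        # n_channels inputs (length of the feature vector) -> 10 output neurons
        self.fc = nn.Linear(n_channels, 10)

    def forward(self, x):
        # linear transformation followed by a softmax over the ten classes
        x = self.fc(x)
        return F.softmax(x, dim=1)

One design note: applying softmax inside forward pairs naturally with a mean squared error style loss on one-hot targets; with nn.CrossEntropyLoss one would instead return the raw linear outputs.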
The next thing I would like to do is actually generate a dataset where the labels are encoded accordingly. Here, if you remember, I was telling you that in this classification problem you just have a one-hot vector: one entry is one and everything else remaining is going to be zero. But the labels which we had stored were just numbers, from one up to ten, so they are not exactly one-hot encoded. For that we write another simple for loop. What it does is define a matrix of zeros whose number of rows is the number of elements in your training sample, and ten is the number of output classes which can be present over there. Now, based on whichever class is indicated in your train label, if that matches a particular column index on a given row, then you just set that entry to one, and everything else is otherwise going to remain zero, based on the zero initialization given over there. So we can just run that part, and this generates your one-hot label vectors.
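A minimal sketch of that loop might look like this; the array names train_labels and train_labels_onehot are my assumptions, as is the offset for labels running from 1 to 10 rather than 0 to 9, taken from the description above.

import numpy as np

n_samples = len(train_labels)                     # number of training elements
train_labels_onehot = np.zeros((n_samples, 10))   # rows = samples, columns = 10 classes

for i in range(n_samples):
    # set the column matching the class index to one; the rest stay zero
    train_labels_onehot[i, int(train_labels[i]) - 1] = 1  # -1 because labels run 1..10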
Now, once that is done, the next part is that you can actually create a PyTorch dataset from your feature matrix and your one-hot label matrix. Where this comes from is the fact that your earlier data structures, and whatever variables you were using, were all in NumPy format, and NumPy is a library of its own; now that you are dealing with Torch as another library, it has its own data type definitions to go with it. One of the functions which is needed is something called torch.from_numpy. What this effectively does is, given any array in NumPy, convert it into a Torch tensor.
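For instance, the conversion might look like this; the variable names are my assumptions, and wrapping the tensors with TensorDataset is one standard way to build a PyTorch dataset from them, though the lecture may use a different construct.

import torch
from torch.utils.data import TensorDataset

# convert the NumPy arrays into torch tensors
features_t = torch.from_numpy(train_features).float()
labels_t = torch.from_numpy(train_labels_onehot).float()

# wrap the feature and label tensors into a PyTorch dataset
train_dataset = TensorDataset(features_t, labels_t)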