Regression
Well, it's been at least 15 years since I first got interested in Machine Learning, Neural Nets, and the rest. I have many books on the subject that I could never quite follow: too many summations, subscripts, and superscripts to take in. (I sometimes wonder if I have some sort of dyslexia in this regard.) But Huzzah! At last I have taken the time to write some very simple software that has opened up all of those formerly arcane subjects:
A very simple and basic linear regression, which I hope will be the beginning of many understandings to come.
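For the record, what the method below computes is a single step of batch gradient descent. In matrix form, with the m training examples stacked as the rows of X, the targets in y, and learning rate alpha, each step updates the parameter vector theta as

    \theta := \theta - \frac{\alpha}{m} X^{\top} (X\theta - y)

and every line of the method maps onto a piece of that formula.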
import cern.colt.matrix.DoubleMatrix1D;
import cern.colt.matrix.DoubleMatrix2D;
import cern.colt.matrix.linalg.Algebra;
import cern.jet.math.Functions;

public DoubleMatrix1D descent(double alpha,
                              DoubleMatrix1D thetas,
                              DoubleMatrix2D independent,
                              DoubleMatrix1D dependent) {
    Algebra algebra = new Algebra();

    // Fold alpha and 1/m into a single scaling factor.
    double modifier = alpha / (double) independent.rows();

    // Run every example Xi through the hypothesis function in one shot:
    // the MxN example matrix times the Nx1 theta vector gives the Mx1
    // vector of predictions. No need to transpose theta first.
    DoubleMatrix1D hypotheses = algebra.mult(independent, thetas);

    // hypothesis - Y: for each example Xi, the difference between the
    // value predicted by the hypothesis and the actual Yi.
    hypotheses.assign(dependent, Functions.minus);

    // Transpose the examples (MxN to NxM) so we can multiply by the Mx1
    // residual vector, yielding the Nx1 gradient, one entry per feature.
    // Note that transpose() is constant time: it returns a view rather
    // than creating a new matrix.
    DoubleMatrix2D transposed = algebra.transpose(independent);
    DoubleMatrix1D deltas = algebra.mult(transposed, hypotheses);

    // Scale the deltas by the learning rate and 1/m, and subtract them
    // from theta in one step: thetas = thetas - (deltas * modifier).
    thetas.assign(deltas, Functions.minusMult(modifier));
    return thetas;
}
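And here is a minimal sketch of how it can be driven, assuming the snippet lives in the same class as descent(). The toy data, learning rate, and iteration count are made up for illustration; DenseDoubleMatrix1D and DenseDoubleMatrix2D are Colt's concrete matrix classes:

import cern.colt.matrix.DoubleMatrix1D;
import cern.colt.matrix.DoubleMatrix2D;
import cern.colt.matrix.impl.DenseDoubleMatrix1D;
import cern.colt.matrix.impl.DenseDoubleMatrix2D;

// Fit y = 1 + 2x. The leading column of 1s is the intercept term.
DoubleMatrix2D x = new DenseDoubleMatrix2D(new double[][] {
    {1.0, 1.0}, {1.0, 2.0}, {1.0, 3.0}, {1.0, 4.0}
});
DoubleMatrix1D y = new DenseDoubleMatrix1D(new double[] {3.0, 5.0, 7.0, 9.0});
DoubleMatrix1D thetas = new DenseDoubleMatrix1D(new double[] {0.0, 0.0});

// Take many small steps downhill. Note that descent() mutates thetas
// in place and returns the same reference, so the reassignment is
// purely for readability.
for (int i = 0; i < 5000; i++) {
    thetas = descent(0.05, thetas, x, y);
}
System.out.println(thetas); // should print something close to [1, 2]

With alpha at 0.05 this converges nicely; crank it up too high and the thetas oscillate off toward infinity instead, which is worth watching at least once.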