function [W,b,err]=wid_hoff1(X,Y,lr,iters)
%FUNCTION [W,b,err]=wid_hoff1(X,Y,lr,iters)
%This function trains a linear neural network
%using the Widrow-Hoff training algorithm. This
%is a steepest descent method, and so will need
%a learning rate, lr (for example, lr=0.1)
%
% Input: Data sets X, Y (for input, output)
%        Dimensions: number of points x dimension
%        lr: Learning rate
%        iters: Number of times to run through the data
%
% Output: Weight matrix W and bias vector b so that
%         W*x+b approximates y.
%         err: Training record for the error
%
%It's convenient to work with X and Y as
%dimension by number of points:
X=X'; Y=Y';
[m1,m2]=size(X);
[n1,n2]=size(Y);

%Initialize W and b to zero
W=zeros(n1,m1);
b=zeros(n1,1);
err=zeros(iters,m2);          %Storage for all errors

for i=1:iters                 %Number of times through data
    for j=1:m2                %Go through every data point
        e=(Y(:,j)-(W*X(:,j)+b));   %Target - Network Output
        dW=lr*e*X(:,j)';
        W=W+dW;
        b=b+lr*e;
        err(i,j)=norm(e);     %Store error for later
    end
end
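For readers working outside MATLAB, here is a minimal NumPy sketch of the same Widrow-Hoff (LMS) loop. The function name and the shape conventions (points x dimension on input, transposed internally) are carried over from the MATLAB code above; this is a translation for illustration, not a drop-in replacement.

```python
import numpy as np

def wid_hoff1(X, Y, lr, iters):
    """Widrow-Hoff (LMS) training of a linear map so that W @ x + b ~ y.

    X, Y: arrays of shape (num_points, input_dim) and (num_points, output_dim),
    matching the MATLAB function's input convention.
    lr: learning rate; iters: passes through the data.
    """
    X, Y = X.T, Y.T                 # work dimension-by-number-of-points
    m1, m2 = X.shape                # input dim, number of points
    n1, _ = Y.shape                 # output dim
    W = np.zeros((n1, m1))          # weights initialized to zero
    b = np.zeros(n1)                # bias initialized to zero
    err = np.zeros((iters, m2))     # storage for all errors
    for i in range(iters):          # passes through the data
        for j in range(m2):         # every data point
            e = Y[:, j] - (W @ X[:, j] + b)   # target minus network output
            W += lr * np.outer(e, X[:, j])    # steepest-descent weight step
            b += lr * e
            err[i, j] = np.linalg.norm(e)     # training record
    return W, b, err
```

As a quick check, training on noiseless samples of y = 2x + 1 with lr = 0.1 drives W toward 2 and b toward 1, and the recorded per-sample errors shrink from one pass to the next.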