Stanford Machine Learning Week 4 Assignment: Multi-class Classification and Neural Networks

Vectorizing regularized logistic regression

m = length(y);              % number of training examples
h = sigmoid(X * theta);     % hypothesis for all examples at once

% Regularized cost: theta(1) (the bias term) is excluded from the penalty.
J = sum(-y .* log(h) - (1 - y) .* log(1 - h)) / m ...
    + lambda / (2 * m) * (theta(2:end)' * theta(2:end));

% Vectorized gradient, regularizing all parameters except theta(1).
grad = X' * (h - y) / m;
grad(2:end) = grad(2:end) + lambda / m * theta(2:end);
grad = grad(:);
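The vectorized cost can be sanity-checked against a naive loop over the training examples. A minimal sketch with hypothetical toy data (`X`, `y`, `theta`, `lambda` below are made up for illustration; the real exercise supplies them):

```octave
sigmoid = @(z) 1 ./ (1 + exp(-z));
X = [1 0.5; 1 -1.2; 1 2.0];   % 3 examples, intercept column included
y = [1; 0; 1];
theta = [0.1; -0.3];
lambda = 1;
m = length(y);

% Loop-based cost, one example at a time.
J_loop = 0;
for i = 1:m
    h = sigmoid(X(i,:) * theta);
    J_loop = J_loop + (-y(i) * log(h) - (1 - y(i)) * log(1 - h)) / m;
end
J_loop = J_loop + lambda / (2 * m) * sum(theta(2:end) .^ 2);

% Vectorized cost, as in the assignment code.
J_vec = sum(-y .* log(sigmoid(X * theta)) - (1 - y) .* log(1 - sigmoid(X * theta))) / m ...
        + lambda / (2 * m) * (theta(2:end)' * theta(2:end));

% The two values should agree to machine precision.
disp(abs(J_loop - J_vec));
```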

Same observation as last week: the vectorized code is far more concise, and loops should be avoided wherever possible.
A = A(:) flattens a matrix into a column vector.
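Note that `A(:)` stacks the matrix column by column (Octave/MATLAB store arrays in column-major order), which is worth keeping in mind when reshaping parameter vectors:

```octave
A = [1 2; 3 4];
v = A(:);   % column-major flatten: [1; 3; 2; 4], not [1; 2; 3; 4]
```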

One-vs-all Training

m = size(X, 1);
n = size(X, 2);
all_theta = zeros(num_labels, n + 1);
X = [ones(m, 1) X];                  % add the intercept column
initial_theta = zeros(n + 1, 1);
options = optimset('GradObj', 'on', 'MaxIter', 50);

% Train one regularized logistic regression classifier per class;
% (y == i) converts the multi-class labels into binary labels for class i.
for i = 1:num_labels
    all_theta(i,:) = fmincg(@(t) lrCostFunction(t, X, (y == i), lambda), ...
                            initial_theta, options);
end
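The `(y == i)` trick builds the binary label vector for classifier i on the fly, with no explicit loop. A small illustration with hypothetical labels:

```octave
y = [1; 3; 2; 3; 1];   % hypothetical multi-class labels
b = (y == 3);          % binary labels for the class-3 classifier: [0; 1; 0; 1; 0]
disp(b');
```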

Neural network predict

m = size(X, 1);
num_labels = size(Theta2, 1);

% Forward propagation through the two-layer network.
X = [ones(m, 1) X];                          % add bias unit to the input layer
layer2 = sigmoid(X * Theta1');               % hidden-layer activations
layer2 = [ones(size(layer2, 1), 1) layer2];  % add bias unit to the hidden layer
layer3 = sigmoid(layer2 * Theta2');          % output-layer activations

% The predicted label is the index of the largest output in each row.
[~, p] = max(layer3, [], 2);
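`max(layer3, [], 2)` takes the maximum along each row; its second output is the column index of that maximum, which directly serves as the predicted label. A minimal sketch with a made-up score matrix:

```octave
scores = [0.1 0.7 0.2;    % example 1: class 2 scores highest
          0.9 0.05 0.05]; % example 2: class 1 scores highest
[~, p] = max(scores, [], 2);
disp(p');   % predicted labels: 2 1
```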

Predict one vs all

m = size(X, 1);
num_labels = size(all_theta, 1);
X = [ones(m, 1) X];           % add the intercept column

% Sigmoid is monotonic, so comparing the raw scores X * all_theta'
% yields the same argmax as comparing the sigmoid outputs.
scores = X * all_theta';
[~, p] = max(scores, [], 2);  % predicted label = index of the best classifier
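With the predictions in hand, training-set accuracy is typically reported as the fraction of matching labels. A short usage sketch (assuming `p` and the true label vector `y` from the exercise):

```octave
% Fraction of examples where the predicted label matches the true label.
accuracy = mean(double(p == y)) * 100;
fprintf('Training Set Accuracy: %f\n', accuracy);
```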