linear regression and training loss

what is linear regression?
y = w1x1 + w2x2 + … + wnxn + b

xi is a feature, wi is the weight learned for that feature, and b is the bias.
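
a minimal sketch of that formula (the function and variable names are illustrative, not from any particular library): the prediction is just the weighted sum of the features plus the bias.

```python
def predict(weights, bias, features):
    # weighted sum of features plus the bias term
    return sum(w * x for w, x in zip(weights, features)) + bias

# example: two features, model y = 2*x1 + 3*x2 + 1
print(predict([2.0, 3.0], 1.0, [1.0, 2.0]))  # 9.0
```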

what is training loss?
squared loss (L2 loss) for a single example = (observation - prediction)^2; averaging it over a dataset of n examples gives the mean squared error: MSE = sum of (observation - prediction)^2 / n
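
a small sketch of that formula in Python (the sample numbers are made up for illustration):

```python
def mse(observations, predictions):
    # average of the squared differences between observed and predicted values
    n = len(observations)
    return sum((obs - pred) ** 2 for obs, pred in zip(observations, predictions)) / n

print(mse([3.0, 5.0, 7.0], [2.5, 5.0, 8.0]))  # (0.25 + 0 + 1) / 3 ≈ 0.4167
```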

precision
= correctly identified samples / total samples identified

recall
= correctly identified samples / samples that should ideally have been identified

Take similar-icon retrieval as an example: there are 1000 candidate icons, 100 of which are plus signs. Searching with another plus-sign icon returns 120 similar icons, of which 80 are truly similar and the remaining 40 are false matches. Then:
precision = 80 / 120
recall = 80 / 100
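
the same arithmetic in Python, using the counts from the icon example (the function names are just illustrative):

```python
def precision(true_positives, retrieved):
    # fraction of returned results that are actually relevant
    return true_positives / retrieved

def recall(true_positives, relevant):
    # fraction of all relevant items that were actually returned
    return true_positives / relevant

print(precision(80, 120))  # ≈ 0.667
print(recall(80, 100))     # 0.8
```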

for linear regression problems, starting values aren’t important: the squared-loss surface is convex, so gradient descent reaches the same minimum regardless of where it starts.

the learning procedure continues iterating until the algorithm discovers the model parameters with the lowest possible loss. usually, you iterate until overall loss stops changing or at least changes extremely slowly. when that happens, the model has converged.

a machine learning model is trained by starting with an initial guess for the weights and bias and iteratively adjusting those guesses until learning the weights and bias with the lowest possible loss.
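
putting the pieces above together, here is a minimal sketch of that loop for a single-feature model, assuming squared loss and plain gradient descent; the learning rate, convergence threshold, and sample data are illustrative choices, not prescribed by the text.

```python
def train(xs, ys, learning_rate=0.01, tolerance=1e-6, max_steps=10000):
    w, b = 0.0, 0.0              # initial guess for the weight and bias
    prev_loss = float("inf")
    n = len(xs)
    for _ in range(max_steps):
        preds = [w * x + b for x in xs]
        loss = sum((y - p) ** 2 for y, p in zip(ys, preds)) / n
        if abs(prev_loss - loss) < tolerance:
            break                # loss has stopped changing: the model has converged
        prev_loss = loss
        # gradients of the mean squared error with respect to w and b
        dw = sum(-2 * x * (y - p) for x, y, p in zip(xs, ys, preds)) / n
        db = sum(-2 * (y - p) for y, p in zip(ys, preds)) / n
        w -= learning_rate * dw
        b -= learning_rate * db
    return w, b

# data generated from y = 2x + 1; training should recover w ≈ 2, b ≈ 1
print(train([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]))
```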
