
Machine Learning: Principles and Formulas

Linear Regression

The univariate hypothesis can be written as:

\[h_\theta(x)=\theta_0 + \theta_1 x\]

The multivariate hypothesis can be written as:

\[h_\theta(x)=\theta_0 + \theta_1 x_1+...+\theta_n x_n\]

Both can be written in the unified form:

\[h_\theta(x) =\sum_{i=0}^n (\theta_i x_i)\]

where \(x_0 = 1\), so that \(\theta_0\) acts as the intercept.
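
A minimal NumPy sketch of this unified form, assuming the design matrix already carries a leading column of ones so that \(x_0 = 1\) (the data and names here are illustrative):

```python
import numpy as np

def hypothesis(theta, X):
    """h_theta(x) = sum_{i=0}^n theta_i * x_i, evaluated for every row of X."""
    # X is assumed to have a leading column of ones (x_0 = 1),
    # so theta_0 acts as the intercept.
    return X @ theta

# toy usage: two samples, one feature plus the bias column
X = np.array([[1.0, 2.0],
              [1.0, 3.0]])
theta = np.array([0.5, 1.5])
print(hypothesis(theta, X))  # [3.5 5. ]
```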

Gradient Descent

The loss (error) function is

\[J(\theta) = \frac 1 2 \sum_{i=1}^m (h_\theta(x^{(i)})-y^{(i)})^2\]

Batch gradient computation:

\[\theta_j := \theta_j - \alpha\frac\partial{\partial\theta_j}J(\theta)\]

\[\frac\partial{\partial\theta_j}(h_\theta(x)-y)^2\] \[=2 (h_\theta(x)-y) \frac\partial{\partial\theta_j}(h_\theta(x)-y)\] \[=2 (h_\theta(x)-y) \frac\partial{\partial\theta_j}(\sum_{i=0}^n (\theta_i x_i)-y)\] \[=2 (h_\theta(x)-y) x_j\]

Since \(\sum_{i=0}^n (\theta_i x_i)\) is differentiated with respect to \(\theta_j\), only the \(x_j\) term survives.

Therefore

\[\frac\partial{\partial\theta_j}J(\theta)=\frac\partial{\partial\theta_j} \frac 1 2 \sum_{i=1}^m (h_\theta(x^{(i)})-y^{(i)})^2\] \[=\sum_{i=1}^m \frac 1 2 \cdot 2\,(h_\theta(x^{(i)})-y^{(i)}) x_j^{(i)}\] \[=\sum_{i=1}^m (h_\theta(x^{(i)})-y^{(i)}) x_j^{(i)}\]

Finally, the batch gradient descent update is

\[\theta_j := \theta_j - \alpha \sum_{i=1}^m (h_\theta(x^{(i)})-y^{(i)}) x_j^{(i)}\] \[= \theta_j + \alpha \sum_{i=1}^m (y^{(i)}-h_\theta(x^{(i)})) x_j^{(i)}\]
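
A rough NumPy sketch of this batch update; the learning rate, iteration count, and toy data are assumptions of mine rather than part of the notes:

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.01, iters=1000):
    """theta_j := theta_j - alpha * sum_i (h_theta(x^(i)) - y^(i)) * x_j^(i)."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        errors = X @ theta - y       # h_theta(x^(i)) - y^(i) for all i
        gradient = X.T @ errors      # sum over the m samples, per component j
        theta -= alpha * gradient
    return theta

# toy data generated from y = 1 + 2x, with a bias column x_0 = 1
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(batch_gradient_descent(X, y))  # approaches [1. 2.]
```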

Stochastic gradient descent:

repeat {
  for i = 1 to m {

\(\theta_j := \theta_j + \alpha (y^{(i)}-h_\theta(x^{(i)})) x_j^{(i)}\)

  }
}
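
The same loop as a NumPy sketch; the step size, epoch count, and toy data are again illustrative assumptions:

```python
import numpy as np

def sgd(X, y, alpha=0.05, epochs=200):
    """theta is adjusted after every single sample, not once per pass."""
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):                  # "repeat"
        for i in range(len(y)):              # "for i = 1 to m"
            error = y[i] - X[i] @ theta      # y^(i) - h_theta(x^(i))
            theta += alpha * error * X[i]    # updates every theta_j at once
    return theta

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(sgd(X, y))                             # approaches [1. 2.]
```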

Logistic Function (also called the Sigmoid Function)

\[g(z)=\frac 1 {1+e^{(-z)}}\]

\[z=\theta_0 + \theta_1 x_1+...+\theta_n x_n=\sum_{i=0}^n (\theta_i x_i)=\theta^{T}x\]

\[g(z)=g(\theta^{T}x)=\frac 1 {1+e^{(-\theta^{T}x)}}=h_\theta(x)\]
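
In code this hypothesis is a one-liner; a minimal sketch with names of my own choosing:

```python
import numpy as np

def sigmoid(z):
    """g(z) = 1 / (1 + exp(-z))"""
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    """h_theta(x) = g(theta^T x), read as P(y = 1 | x; theta)."""
    return sigmoid(x @ theta)
```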

The function \(h_\theta(x)\) represents the probability that the result is 1. For an input x, the probabilities that the classification result is class 1 or class 0 can be written as:

\[P(y=1|x;\theta)=h_\theta(x)\]

\[P(y=0|x;\theta)=1-h_\theta(x)\]

Suppose there are m mutually independent observations \(y^{(1)},y^{(2)},y^{(3)},...,y^{(m)}\). The probability of a single observation \(y^{(i)}\), with \(p\) denoting the probability that \(y^{(i)}=1\), is

\[p(y^{(i)})=p^{y^{(i)}} {(1-p)}^{1-y^{(i)}}\]

\[p(y^{(i)}|x^{(i)};\theta)=h_\theta(x^{(i)})^{y^{(i)}}(1-h_\theta(x^{(i)}))^{1-y^{(i)}}\]

The likelihood function is

\[L(\theta)=\prod_{i=1}^m h_\theta(x_i)^{y_i}\,(1-h_\theta(x_i))^{1-y_i}\]

The log-likelihood function is \(l(\theta)=\log L(\theta)=\sum_{i=1}^m\bigl(y_i\log h_\theta(x_i)+(1-y_i)\log (1-h_\theta(x_i))\bigr)\)

The cost (loss) function is

\[J(\theta)=-\frac 1 m l(\theta)\]
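
Equivalently, the cost is the negative log-likelihood averaged over the m samples; a small sketch, where the epsilon guard against \(\log 0\) is my own addition:

```python
import numpy as np

def cost(theta, X, y, eps=1e-12):
    """J(theta) = -(1/m) * sum_i [ y_i*log(h_i) + (1-y_i)*log(1-h_i) ]."""
    h = 1.0 / (1.0 + np.exp(-(X @ theta)))  # h_theta(x_i) for all i
    return -np.mean(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))
```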

Combining this with gradient descent to find the minimum:

\[\theta_j := \theta_j - \alpha\frac\partial{\partial\theta_j}J(\theta)\]

\[\frac\partial{\partial\theta_j}J(\theta)=-\frac 1 m \sum_{i=1}^m\Bigl(y_i \frac 1 {h_\theta(x_i)}\frac\partial{\partial\theta_j}h_\theta(x_i)-(1-y_i)\frac 1 {1-h_\theta(x_i)}\frac\partial{\partial\theta_j}h_\theta(x_i)\Bigr)\]

Working out \(\frac\partial{\partial\theta_j}J(\theta)\) step by step, using the sigmoid identity \(g'(z)=g(z)(1-g(z))\):

\[\begin{aligned} \frac\partial{\partial\theta_j}J(\theta) &= -\frac 1 m \sum_{i=1}^m\Bigl(y_i \frac 1 {h_\theta(x_i)}\frac\partial{\partial\theta_j}h_\theta(x_i)-(1-y_i)\frac 1 {1-h_\theta(x_i)}\frac\partial{\partial\theta_j}h_\theta(x_i)\Bigr)\\ &=-\frac 1 m \sum_{i=1}^m\Bigl(y_i \frac 1 {g(\theta^{T}x_i)}-(1-y_i)\frac 1{1-g(\theta^{T}x_i)}\Bigr)\frac\partial{\partial\theta_j}g(\theta^{T}x_i)\\ &=-\frac 1 m \sum_{i=1}^m\Bigl(y_i \frac 1 {g(\theta^{T}x_i)}-(1-y_i)\frac 1 {1-g(\theta^{T}x_i)}\Bigr)g(\theta^{T}x_i)\bigl(1-g(\theta^{T}x_i)\bigr)\frac\partial{\partial\theta_j}(\theta^{T}x_i)\\ &=-\frac 1 m \sum_{i=1}^m\Bigl(y_i\bigl(1-g(\theta^{T}x_i)\bigr)-(1-y_i)g(\theta^{T}x_i)\Bigr)x_i^j\\ &=-\frac 1 m \sum_{i=1}^m\bigl(y_i-g(\theta^{T}x_i)\bigr)x_i^j\\ &=\frac 1 m \sum_{i=1}^m\bigl(h_\theta(x_i)-y_i\bigr)x_i^j \end{aligned}\]
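
Putting the final gradient into a rough training loop for logistic regression (the data, learning rate, and iteration count are illustrative assumptions):

```python
import numpy as np

def logistic_regression(X, y, alpha=0.1, iters=5000):
    """theta_j := theta_j - alpha * (1/m) * sum_i (h_theta(x_i) - y_i) * x_i^j."""
    m = X.shape[0]
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        h = 1.0 / (1.0 + np.exp(-(X @ theta)))  # h_theta(x_i) for every sample
        gradient = X.T @ (h - y) / m            # the gradient derived above
        theta -= alpha * gradient
    return theta

# toy separable data with a bias column; labels switch around x = 2
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = logistic_regression(X, y)
print(theta)  # decision boundary theta_0 + theta_1 * x = 0 sits near x = 2
```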

Gradient Descent and Regression Methods

  1. From the central limit theorem, to the normal distribution, to the maximum likelihood function, to minimizing the sum of squared errors: this chain yields the optimal theta.
  2. Batch gradient descent: every iteration uses all samples to update theta.
  3. Stochastic gradient descent: theta is updated each time a single sample is read.
  4. Improved stochastic gradient descent.

Logistic Regression and Softmax

  1. Gradient Descent (SGD, BGD)
  2. Newton's method
  3. Quasi-Newton methods
  4. BFGS
  5. L-BFGS

Solution process. Note the error function \(E(w)=\frac 1 2\sum_k\bigl(h(x_k)-y_k\bigr)^2\). The gradient update is \(w_i := w_i-\eta\frac{\partial E}{\partial w_i}\), where \(\frac{\partial E}{\partial w_i}=\sum_k\bigl(h(x_k)-y_k\bigr)x_{k,i}\).

Decision Trees

  1. ID3, C4.5: split by information gain (maximum information gain) / information gain ratio.
  2. CART: Gini index; the best split is the one that minimizes the Gini gain (GINI_Gain). For regression trees: minimum squared residual, minimum absolute residual, etc. A small sketch of the Gini computation follows this list.
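
As a rough illustration of the Gini criterion mentioned above; the helper names are hypothetical, not from any particular library:

```python
import numpy as np

def gini(labels):
    """Gini impurity: 1 - sum_k p_k^2 over the class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_gini(left, right):
    """Size-weighted Gini of a candidate split; CART keeps the split that minimizes it."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# a perfectly pure split scores 0
print(split_gini(np.array([0, 0]), np.array([1, 1])))  # 0.0
```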

Related Course Resources

  1. cs231n
  2. CS224d
  3. cs229