First, a plot of the fitted results over iterations (with sklearn's LinearRegression for comparison):

As you can see, different objective functions produce noticeably different final fitted curves. The code for each objective follows:
```python
import numpy as np

def huber_approx_obj(real, predict):
    # Pseudo-Huber approximation of the Huber loss
    d = predict - real
    h = 1  # h is the delta shown in the plot
    scale = 1 + (d / h) ** 2
    scale_sqrt = np.sqrt(scale)
    grad = d / scale_sqrt
    hess = 1 / scale / scale_sqrt
    return grad, hess
```
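As a sanity check (my own addition, not from the original post), the returned grad can be compared against a finite-difference derivative of the pseudo-Huber loss h²(√(1 + (d/h)²) − 1); `pseudo_huber_loss` is a helper name of my choosing:

```python
import numpy as np

def huber_approx_obj(real, predict):
    # Pseudo-Huber objective: gradient and Hessian
    d = predict - real
    h = 1  # h is the delta parameter
    scale = 1 + (d / h) ** 2
    scale_sqrt = np.sqrt(scale)
    grad = d / scale_sqrt
    hess = 1 / scale / scale_sqrt
    return grad, hess

def pseudo_huber_loss(real, predict, h=1.0):
    # The loss whose first derivative huber_approx_obj returns as grad
    d = predict - real
    return h ** 2 * (np.sqrt(1 + (d / h) ** 2) - 1)

real = np.array([0.0, 1.0, 2.0])
predict = np.array([0.5, -1.0, 5.0])
eps = 1e-6
grad, hess = huber_approx_obj(real, predict)
num_grad = (pseudo_huber_loss(real, predict + eps)
            - pseudo_huber_loss(real, predict - eps)) / (2 * eps)
close = np.allclose(grad, num_grad, atol=1e-5)
```

The Hessian is strictly positive everywhere, which is one reason this smooth approximation is convenient for boosting libraries that require second-order information.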
The corresponding loss curve (as the prediction deviates further from the true value):
```python
def fair_obj(real, predict):
    """Fair loss: y = c * abs(x) - c**2 * np.log(abs(x)/c + 1)"""
    x = predict - real
    c = 1
    den = abs(x) + c
    grad = c * x / den
    hess = c * c / den ** 2
    return grad, hess
```
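The same finite-difference check works for the Fair loss given in the docstring (my own addition; `fair_loss` is a helper name I introduced):

```python
import numpy as np

def fair_obj(real, predict):
    """Fair loss: y = c * abs(x) - c**2 * np.log(abs(x)/c + 1)"""
    x = predict - real
    c = 1
    den = abs(x) + c
    grad = c * x / den
    hess = c * c / den ** 2
    return grad, hess

def fair_loss(real, predict, c=1.0):
    # The loss from the docstring, evaluated directly
    x = predict - real
    return c * np.abs(x) - c ** 2 * np.log(np.abs(x) / c + 1)

real = np.array([0.0, 2.0, -1.0])
predict = np.array([1.5, -0.5, 3.0])
eps = 1e-6
grad, hess = fair_obj(real, predict)
num_grad = (fair_loss(real, predict + eps)
            - fair_loss(real, predict - eps)) / (2 * eps)
close = np.allclose(grad, num_grad, atol=1e-5)
```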
The well-known Log-Cosh:
```python
def log_cosh_obj(real, predict):
    x = predict - real
    grad = np.tanh(x)
    # hess = 1 / np.cosh(x)**2 -- the original division-based form
    # can overflow / error out for large |x|
    hess = 1.0 - np.tanh(x) ** 2
    return grad, hess
```
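A quick numerical illustration (my own, relying on the identity sech²(x) = 1 − tanh²(x)) of why the rewritten Hessian is safer: both forms agree for moderate x, but np.cosh overflows to inf for large |x| while np.tanh simply saturates at ±1:

```python
import numpy as np

x = np.array([-2.0, 0.0, 3.0])
direct = 1 / np.cosh(x) ** 2       # original division-based form
stable = 1.0 - np.tanh(x) ** 2     # identity: sech(x)**2 == 1 - tanh(x)**2
agree = np.allclose(direct, stable)

# float64 tops out near 1.8e308, so cosh(800) overflows to inf
# (raising a RuntimeWarning), while the tanh form stays clean:
with np.errstate(over='ignore'):
    overflowed = np.isinf(np.cosh(800.0))
large_hess = 1.0 - np.tanh(800.0) ** 2
```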
And its function plot:
```python
def m4e(real, predict):
    # Quartic error: loss = (predict - real)**4,
    # so grad = 4*(predict - real)**3 and hess = 12*(predict - real)**2
    d = predict - real
    grad = 4.0 * d ** 3
    hess = 12.0 * d ** 2
    return grad, hess
```
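One practical note (my own sketch, not from the original post): XGBoost calls a custom objective as obj(predict, dtrain), while the functions here take (real, predict), so a small adapter is needed before any of them can be passed to xgb.train; `make_xgb_obj` is a hypothetical helper name:

```python
import numpy as np

def make_xgb_obj(obj_fn):
    """Adapt an obj(real, predict) -> (grad, hess) function to
    XGBoost's custom-objective signature obj(predict, dtrain)."""
    def xgb_obj(predict, dtrain):
        real = dtrain.get_label()  # true labels stored in the DMatrix
        return obj_fn(real, predict)
    return xgb_obj

# With xgboost installed, usage would look roughly like:
#   booster = xgb.train(params, dtrain, num_boost_round=100,
#                       obj=make_xgb_obj(huber_approx_obj))
```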
Appendix:
Derivation of log-cosh:
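The derivation itself (presumably an image in the original) takes only a few lines. With x = predict − real:

```
L(x)   = log(cosh(x))
L'(x)  = sinh(x) / cosh(x) = tanh(x)
L''(x) = 1 - tanh(x)^2   ( = sech(x)^2 = 1 / cosh(x)^2 )
```

which matches the grad and hess returned by log_cosh_obj above.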
Reference links:

Loss functions, continued: Huber Loss, Log-Cosh Loss