Advanced Ensemble Learning Techniques

Examined ensemble methods

  • Averaging (or blending)
  • Weighted averaging
  • Conditional averaging
  • Bagging
  • Boosting
  • Stacking
  • StackNet

Averaging ensemble methods

As an example, suppose we have a variable called age, as in someone's age, that we are trying to predict. We have two models:

  • Below the age of 50, the first model performs better
    model1.png

  • Above the age of 50, the second model performs better
    model2.png

So what happens if we try to combine them?

Averaging (or blending)

  • (model1 + model2) / 2
    model12.png

$R^2$ rises to 0.95, an improvement over either model alone. However, the averaged model is not better than each single model in the region where that model already does well; it simply performs better on average. Perhaps there is a better combination? Let's try weighted averaging.

Weighted averaging

  • (model1 x 0.7 + model 2 x 0.3)
    model_weight.png

This does not look as good as the simple average.

Conditional averaging

  • Take each model only where it does well
    model_best.png

Ideally, this is the kind of result we would like to get.
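
A minimal sketch of the three blending schemes above, on made-up data (the age values and predictions are purely illustrative):

import numpy as np

# hypothetical validation data: true ages and the two models' predictions
age = np.array([25., 40., 55., 70.])
preds1 = np.array([24., 41., 50., 60.])   # model 1: better below 50
preds2 = np.array([32., 47., 54., 71.])   # model 2: better above 50

# averaging (blending)
avg = (preds1 + preds2) / 2
# weighted averaging
weighted = 0.7 * preds1 + 0.3 * preds2
# conditional averaging: take each model where it is known to do well
conditional = np.where(age < 50, preds1, preds2)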

Bagging

Why Bagging

There are two main sources of error in modeling:

  • 1. Errors due to bias (underfitting)
  • 2. Errors due to variance (overfitting)

By averaging slightly different versions of the same model, we make sure the predictions do not carry very high variance. This usually makes the ensemble generalize better.

Parameters that control bagging?

  • Changing the seed
  • Row(Sub) sampling or Bootstrapping
  • Shuffling
  • Column(Sub) sampling
  • Model-specific parameters
  • Number of models (or bags)
  • (Optionally) parallelism

Examples of bagging

bagging_code.png
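
Alongside the screenshot above, here is a minimal sketch of manual bagging in the same spirit: train the same model with different seeds and average the predictions (the data below is a made-up placeholder):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# hypothetical data; replace with your own train/test split
rng = np.random.RandomState(0)
train = rng.rand(100, 4)
y = rng.rand(100)
test = rng.rand(20, 4)

bags = 10    # number of models (bags)
seed = 1
bagged_prediction = np.zeros(test.shape[0])
for n in range(bags):
    # a slightly different model per bag: change the seed (row/column sampling could be added too)
    model = RandomForestRegressor(n_estimators=100, random_state=seed + n)
    model.fit(train, y)
    bagged_prediction += model.predict(test)
bagged_prediction /= bags   # average the predictions over all bags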

Boosting

Boosting is a form of weighted averaging of models where each model is built sequentially, taking the performance of the previous models into account.

Weight based boosting

weight_based.png

Suppose we have a tabular dataset with four features, which we call x0, x1, x2 and x3, and we want to use these features to predict a target variable y.
We call the predicted values pred; these predictions have some error. We can compute the absolute errors, |y - pred|, and based on them generate a new column (or vector): here we create a weight column equal to 1 plus the absolute error. There are of course different ways to compute this weight; we just use this one as an example.

All that remains is to fit new models on these features, each time also taking this weight column into account. This is how models are added sequentially.
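
A minimal sketch of this weighting loop, assuming a base model that accepts per-row weights (here sklearn's DecisionTreeRegressor via sample_weight); the data and the 1 + |error| rule are illustrative only:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# hypothetical data: four features x0..x3 and a target y
rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = rng.rand(100)

weights = np.ones(len(y))   # start with equal weights
models = []
for _ in range(3):          # add models sequentially
    model = DecisionTreeRegressor(max_depth=3)
    model.fit(X, y, sample_weight=weights)  # each new model sees the current weights
    pred = model.predict(X)
    weights = 1 + np.abs(y - pred)          # rows with larger errors get larger weights
    models.append(model)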

Weight based boosting parameters

  • Learning rate (or shrinkage or eta)
    • Trust each model only a little: predictionN = pred0*eta + pred1*eta + ... + predN*eta
  • Number of estimators
    • If you double the number of estimators, halve eta (and vice versa)
  • Input model - can be anything that accepts weights
  • Sub boosting type:
    • AdaBoost - good implementation in sklearn (python)
    • LogitBoost - good implementation in Weka (Java)

Residual based boosting

We use the same dataset and do the same thing. After obtaining the predictions pred:
residual_pred.png

Next, we compute the errors:
residual_error.png

We then use the error as the new target y and obtain new predictions new_pred:
residual_new_pred.png

Taking Rownum=1 as an example:

final prediction = 0.75 + 0.20 = 0.95, which is closer to 1.

This approach is very effective and reduces the error well.
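
A minimal sketch of this residual-fitting loop on made-up data, using sklearn's DecisionTreeRegressor as the base model (no shrinkage here; eta appears in the parameter list below):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# hypothetical data
rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = rng.rand(100)

prediction = np.zeros(len(y))
residual = y.copy()
models = []
for _ in range(3):              # add models sequentially
    model = DecisionTreeRegressor(max_depth=3)
    model.fit(X, residual)      # fit the current error as the new target
    prediction += model.predict(X)   # final prediction = pred0 + pred1 + ...
    residual = y - prediction        # recompute the error for the next model
    models.append(model)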

Residual based boosting parameters

  • Learning rate (or shrinkage or eta)
    • predictionN = pred0 + pred1*eta + ... + predN*eta
    • In the previous example, if eta is 0.1, then prediction = 0.75 + 0.2*0.1 = 0.77
  • Number of estimators
  • Row (sub)sampling
  • Column (sub)sampling
  • Input model - better be trees.
  • Sub boosting type:
    • Full gradient based
    • Dart

Residual based favourite implementations

  • Xgboost
  • Lightgbm
  • H2O's GBM
  • Catboost
  • Sklearn's GBM
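
As a rough illustration of how the parameters above map onto one of these libraries, here is a minimal XGBoost sketch; the values are arbitrary examples rather than recommendations:

from xgboost import XGBRegressor

model = XGBRegressor(
    learning_rate=0.05,    # eta / shrinkage
    n_estimators=500,      # number of estimators
    subsample=0.8,         # row (sub)sampling
    colsample_bytree=0.8,  # column (sub)sampling
    max_depth=6,           # the input models are trees
)
# model.fit(train, y)      # train and y are your data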

Stacking

Methodology

  • Wolpert introduced stacking in 1992. It involves:
    1. Splitting the train set into two disjoint sets.
    2. Training several base learners on the first part.
    3. Making predictions with the base learners on the second (validation) part.
    4. Using these predictions as inputs, and the true responses as outputs, to train a higher-level (meta) learner.

Concrete steps

Suppose we have three datasets A, B, and C, where the target variable y is known for A and B.
stacking_data.png

Then:

  • Algorithm 0 is fit on A, predicts B and C, and the predictions pred0 are saved into B1 and C1
  • Algorithm 1 is fit on A, predicts B and C, and the predictions pred1 are saved into B1 and C1
  • Algorithm 2 is fit on A, predicts B and C, and the predictions pred2 are saved into B1 and C1
    stacking_data2.png

  • Algorithm 3 is fit on B1 and predicts C1, giving the final predictions preds3

Stacking example

from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
import numpy as np
from sklearn.model_selection import train_test_split
train = '' # your training set (features)
y = ''     # your target variable
test = ''  # your test set (features)
# split the train data into 2 parts: training and validation
training, valid, ytraining, yvalid = train_test_split(train, y, test_size=0.5)
# specify the base models
model1 = RandomForestRegressor()
model2 = LinearRegression()
# fit the base models on the training part
model1.fit(training, ytraining)
model2.fit(training, ytraining)
# make predictions for the validation part
preds1 = model1.predict(valid)
preds2 = model2.predict(valid)
# make predictions for the test data
test_preds1 = model1.predict(test)
test_preds2 = model2.predict(test)
# form new datasets for valid and test by stacking the predictions
stacked_predictions = np.column_stack((preds1, preds2))
stacked_test_predictions = np.column_stack((test_preds1, test_preds2))
# specify the meta model
meta_model = LinearRegression()
# fit the meta model on the stacked validation predictions
meta_model.fit(stacked_predictions, yvalid)
# make final predictions on the stacked predictions of the test data
final_predictions = meta_model.predict(stacked_test_predictions)

Stacking (previous example)

stacking_past.png

As you can see, the result is very similar to what we got with conditional averaging. It is just not quite as good around 50, which makes sense: the meta model never sees the target variable, so it cannot identify the cutoff at 50 exactly and can only try to infer it from its inputs, the base models' predictions.

Things to be mindful of

  • With time sensitive data - respect time
    • If your data has a time component, set up your stacking so that it respects time (train on the past, predict the future).
  • Diversity is as important as performance
    • Single-model performance matters, but so does model diversity. You do not need to worry too much when a model is bad or weak: stacking can extract something useful from every prediction and still produce a good result. What you really need to ask is what new information each model brings, even if it is weak overall.
  • Diversity may come from:
    • Different algorithms
    • Different input features
  • Performance plateaus after N models
  • Meta model is normally modest

StackNet

https://github.com/kaz-Anova/StackNet

Ensembling Tips and Tricks

$1^{st}$ level tips

  • Diversity based on algorithms:
    • 2-3 gradient boosted trees (lightgbm, xgboost, H2O, catboost)
    • 2-3 neural nets (keras, pytorch)
    • 1-2 ExtraTrees/RandomForest (sklearn)
    • 1-2 linear models such as logistic/ridge regression, linear svm (sklearn)
    • 1-2 knn models (sklearn)
    • 1 factorization machine (libfm)
    • 1 svm with a nonlinear kernel (like RBF) if size/memory allows (sklearn)
  • Diversity based on input data:
    • Categorical features: one hot, label encoding, target encoding, likelihood encoding, frequency or counts
    • Numerical features: outliers, binning, derivatives, percentiles, scaling
    • Interactions: col1 */+- col2, groupby, unsupervised

$2^{nd}$ level tips

  • Simpler (or shallower) algorithms:
    • gradient boosted trees with small depth (like 2 or 3)
    • linear models with high regularization
    • Extra Trees (just don't make them too big)
    • shallow networks (as in 1 hidden layer, with not that many hidden neurons)
    • knn with BrayCurtis distance
    • brute forcing a search for the best linear weights based on cv (see the sketch after this list)

  • Feature engineering:
    • pairwise differences between meta features
    • row-wise statistics like averages or stds
    • standard feature selection techniques
  • For every 7.5 models in the previous level, add 1 in the meta level (a rule of thumb)
  • Be mindful of target leakage
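
A minimal sketch of the brute-force weight search mentioned in the list above, using made-up validation predictions from two level-1 models; a full CV version would repeat the search per fold and average the scores:

import numpy as np

# hypothetical out-of-fold predictions of two level-1 models and the true target
rng = np.random.RandomState(0)
y_valid = rng.rand(200)
preds1 = y_valid + rng.normal(0, 0.1, 200)
preds2 = y_valid + rng.normal(0, 0.2, 200)

best_weight, best_score = None, np.inf
for w in np.arange(0, 1.01, 0.01):             # candidate weight for model 1
    blend = w * preds1 + (1 - w) * preds2
    score = np.mean((y_valid - blend) ** 2)    # mean squared error as the validation metric
    if score < best_score:
        best_weight, best_score = w, score
print(best_weight, best_score)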
