The day before yesterday I stumbled across a data-analysis competition site (SofaSofa). I had studied some machine learning but had never put it into practice in a competition, so out of curiosity I signed up.
Task: Estimating Football Players' Market Value — Competition Overview
This is a personal practice competition, aimed mainly at data-science newcomers who want to practice, improve, and compare notes with each other.
Practice period: 2018-03-05 to 2020-03-05
Task type: regression
Background: every football player has a price tag on the transfer market. The goal of this exercise is to predict a player's market value from his profile and ability ratings.
From this description it is easy to tell that this is a regression problem. Of course, before predicting anything, the first thing to do is look at the format and content of the data (there are too many columns to list them all; you can browse them on the site — below is a quick screenshot):
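Since only a screenshot is shown here, pandas itself gives the same overview in a few lines. A minimal sketch — the tiny frame below is made-up stand-in data, not the real competition file:

```python
import pandas as pd

# Made-up stand-in rows with a few of the competition's columns; the real
# file is dataset/soccer/train.csv and has many more fields
df = pd.DataFrame({
    'club': [1, 2, 2],
    'league': [10, 10, 12],
    'potential': [85.0, 77.0, None],  # None simulates a missing value
    'y': [2000.0, 350.0, 800.0],
})

print(df.shape)              # (rows, columns)
print(df.dtypes)             # per-column types
missing = df.isnull().sum()  # per-column missing-value counts
print(missing)
```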
After getting a rough sense of the data's format and size, and having no practical experience, I went with my gut and simply assumed the following fields were probably the most important:
Field | Meaning |
---|---|
club | The club the player belongs to. Already encoded. |
league | The league the player plays in. Already encoded. |
potential | The player's potential. Numeric. |
international_reputation | International reputation. Numeric. |
Conveniently, none of these fields had any missing values, which made me happy: I could feed them straight into an XGBoost model. For how to use XGBoost, see the XGBoost docs and the official XGBoost Parameters page. Without further ado I started coding; here is my first version:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @File  : soccer_value.py
# @Author: Huangqinjian
# @Date  : 2018/3/22

import numpy as np
import pandas as pd
import xgboost as xgb


def loadDataset(filePath):
    df = pd.read_csv(filepath_or_buffer=filePath)
    return df


def featureSet(data):
    data_num = len(data)
    XList = []
    for row in range(0, data_num):
        tmp_list = []
        tmp_list.append(data.iloc[row]['club'])
        tmp_list.append(data.iloc[row]['league'])
        tmp_list.append(data.iloc[row]['potential'])
        tmp_list.append(data.iloc[row]['international_reputation'])
        XList.append(tmp_list)
    yList = data.y.values
    return XList, yList


def loadTestData(filePath):
    data = pd.read_csv(filepath_or_buffer=filePath)
    data_num = len(data)
    XList = []
    for row in range(0, data_num):
        tmp_list = []
        tmp_list.append(data.iloc[row]['club'])
        tmp_list.append(data.iloc[row]['league'])
        tmp_list.append(data.iloc[row]['potential'])
        tmp_list.append(data.iloc[row]['international_reputation'])
        XList.append(tmp_list)
    return XList


def trainandTest(X_train, y_train, X_test):
    # Train the XGBoost model
    model = xgb.XGBRegressor(max_depth=5, learning_rate=0.1, n_estimators=160,
                             silent=False, objective='reg:gamma')
    model.fit(X_train, y_train)

    # Predict on the test set
    ans = model.predict(X_test)
    ans_len = len(ans)
    id_list = np.arange(10441, 17441)
    data_arr = []
    for row in range(0, ans_len):
        data_arr.append([int(id_list[row]), ans[row]])
    np_data = np.array(data_arr)

    # Write the submission file
    pd_data = pd.DataFrame(np_data, columns=['id', 'y'])
    pd_data.to_csv('submit.csv', index=None)

    # Plot feature importances
    # from xgboost import plot_importance
    # import matplotlib.pyplot as plt
    # plot_importance(model)
    # plt.show()


if __name__ == '__main__':
    trainFilePath = 'dataset/soccer/train.csv'
    testFilePath = 'dataset/soccer/test.csv'
    data = loadDataset(trainFilePath)
    X_train, y_train = featureSet(data)
    X_test = loadTestData(testFilePath)
    trainandTest(X_train, y_train, X_test)
```
I then submitted the resulting submit.csv to the site and checked the score: an MAE of 106.6977, ranked 24/28 — far from ideal. Not surprising, though, since I had done essentially no feature engineering.
Naturally I wasn't satisfied, and I kept wondering how to improve the score. Then it occurred to me that I could use scikit-learn: it ships a feature-selection module, sklearn.feature_selection, which offers several methods:
My first idea was to use univariate feature selection to pick out the features most correlated with the target. According to the official docs, several scoring functions are available for measuring the dependence between variables:
Since this competition is a regression problem, I chose the f_regression scoring function. (At first, not paying attention, I mistakenly used chi2 — a scoring function for classification problems — and the program kept throwing errors. Exhausting!)
Parameters of f_regression:
sklearn.feature_selection.f_regression(X, y, center=True)
X: a 2-D array of shape (n_samples, n_features) — rows are the training samples, columns are the features
y: a 1-D array of length n_samples
Returns: the F-statistic and the p-value for each feature
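To see what f_regression returns in practice, here is a small self-contained sketch on synthetic data, where only the first column actually drives the target:

```python
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
# The target depends strongly on column 0 only; columns 1 and 2 are pure noise
y = 5 * X[:, 0] + 0.1 * rng.rand(100)

F, pvalues = f_regression(X, y)
# Column 0 should get by far the largest F value and the smallest p-value
print(F)
print(pvalues)
```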
Before doing that, though, there was one more important task to take care of: handling missing values! Luckily, scikit-learn also has a dedicated module for this: Imputation of missing values.
Parameters of sklearn.preprocessing.Imputer:
sklearn.preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0, verbose=0, copy=True)
Here strategy is the fill strategy for missing values — 'mean' (the default, fill with the mean of the column), 'median', or 'most_frequent'.
axis defaults to 0, i.e., impute along columns.
For the other parameters, see the docs for sklearn.preprocessing.Imputer.
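Note that Imputer has since been removed from scikit-learn; newer releases provide sklearn.impute.SimpleImputer instead. A minimal mean-imputation sketch with the modern class:

```python
import numpy as np
from sklearn.impute import SimpleImputer  # replaces sklearn.preprocessing.Imputer in newer releases

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# Fill each NaN with its column mean, like Imputer(strategy='mean', axis=0)
imputer = SimpleImputer(strategy='mean')
X_filled = imputer.fit_transform(X)
# column 0 mean of [1, 7] is 4.0; column 1 mean of [2, 3] is 2.5
print(X_filled)
```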
Based on the above, I processed the data as follows:
```python
from sklearn.feature_selection import f_regression
from sklearn.preprocessing import Imputer

# Mean-fill the missing values in the ability columns (rw through lb)
imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
imputer.fit(data.loc[:, 'rw':'lb'])
x_new = imputer.transform(data.loc[:, 'rw':'lb'])

data_num = len(x_new)
XList = []
yList = []
for row in range(0, data_num):
    tmp_list = []
    for col in range(0, 10):
        tmp_list.append(x_new[row][col])
    XList.append(tmp_list)
    yList.append(data.iloc[row]['y'])

F = f_regression(XList, yList)
print(len(F))
print(F)
```
Output:
```
2
(array([2531.07587725, 1166.63303449, 2891.97789543, 2531.07587725, 2786.75491791, 2891.62686404, 3682.42649607, 1394.46743196, 531.08672792, 1166.63303449]), array([0.00000000e+000, 1.74675421e-242, 0.00000000e+000, 0.00000000e+000, 0.00000000e+000, 0.00000000e+000, 0.00000000e+000, 1.37584507e-286, 1.15614152e-114, 1.74675421e-242]))
```
Based on these results, I added the features rw, st, lw, cf, cam, and cm (the ones with relatively large F values) to the model. Here is the revised code:
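Picking columns by eyeballing F values also has a built-in equivalent: SelectKBest keeps the k highest-scoring features automatically. A small sketch on synthetic data, where columns 2 and 4 are constructed to matter:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.RandomState(0)
X = rng.rand(200, 6)
# Only columns 2 and 4 drive the target; the rest are noise
y = 3 * X[:, 2] + 2 * X[:, 4] + 0.1 * rng.rand(200)

# Keep the 2 features with the highest f_regression scores
selector = SelectKBest(score_func=f_regression, k=2)
X_top = selector.fit_transform(X, y)
mask = selector.get_support()  # boolean mask over the original columns
print(mask)
```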
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @File  : soccer_value.py
# @Author: Huangqinjian
# @Date  : 2018/3/22

import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.preprocessing import Imputer

# Columns used directly, plus the six mean-imputed ability columns
BASE_COLS = ['club', 'league', 'potential', 'international_reputation',
             'pac', 'sho', 'pas', 'dri', 'def', 'phy', 'skill_moves']
IMPUTE_COLS = ['rw', 'st', 'lw', 'cf', 'cam', 'cm']


def loadDataset(filePath):
    return pd.read_csv(filepath_or_buffer=filePath)


def buildFeatures(data):
    # Mean-fill the six ability columns that contain missing values
    imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
    imputer.fit(data.loc[:, IMPUTE_COLS])
    x_new = imputer.transform(data.loc[:, IMPUTE_COLS])

    XList = []
    for row in range(0, len(data)):
        tmp_list = [data.iloc[row][col] for col in BASE_COLS]
        tmp_list.extend(x_new[row])
        XList.append(tmp_list)
    return XList


def featureSet(data):
    return buildFeatures(data), data.y.values


def loadTestData(filePath):
    data = pd.read_csv(filepath_or_buffer=filePath)
    return buildFeatures(data)


def trainandTest(X_train, y_train, X_test):
    # Train the XGBoost model
    model = xgb.XGBRegressor(max_depth=5, learning_rate=0.1, n_estimators=160,
                             silent=False, objective='reg:gamma')
    model.fit(X_train, y_train)

    # Predict on the test set and write the submission file
    ans = model.predict(X_test)
    id_list = np.arange(10441, 17441)
    data_arr = [[int(id_list[row]), ans[row]] for row in range(0, len(ans))]
    pd_data = pd.DataFrame(np.array(data_arr), columns=['id', 'y'])
    pd_data.to_csv('submit.csv', index=None)


if __name__ == '__main__':
    trainFilePath = 'dataset/soccer/train.csv'
    testFilePath = 'dataset/soccer/test.csv'
    data = loadDataset(trainFilePath)
    X_train, y_train = featureSet(data)
    X_test = loadTestData(testFilePath)
    trainandTest(X_train, y_train, X_test)
```
I submitted again: this time the MAE was 42.1227, ranked 16/28. A big improvement, but still a long way from first place. More work needed.
Next, let's deal with the work_rate_att and work_rate_def fields:
Since these two fields are categorical labels, they need to be encoded before going into the model. The function we need is sklearn.preprocessing.LabelEncoder:
```python
from sklearn import preprocessing

le = preprocessing.LabelEncoder()
le.fit(['Low', 'Medium', 'High'])
att_label = le.transform(data.work_rate_att.values)
def_label = le.transform(data.work_rate_def.values)
```
Of course, you can also handle discrete categorical features directly with pandas; see "one-hot encoding with pandas get_dummies". Incidentally, scikit-learn has a method for this as well: sklearn.preprocessing.OneHotEncoder.
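For completeness, a tiny get_dummies sketch on a hypothetical work-rate column with the same three levels used above:

```python
import pandas as pd

# Hypothetical work-rate values; the real column comes from the competition CSV
s = pd.Series(['Low', 'Medium', 'High', 'Medium'], name='work_rate_att')

# One indicator column per level, named <prefix>_<level>
dummies = pd.get_dummies(s, prefix='work_rate_att')
print(dummies)
```

Unlike LabelEncoder, this introduces no artificial ordering between the levels, at the cost of one column per category.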
The adjusted code:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @File  : soccer_value.py
# @Author: Huangqinjian
# @Date  : 2018/3/22

import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn import preprocessing
from sklearn.preprocessing import Imputer

# Columns used directly, plus the six mean-imputed ability columns
BASE_COLS = ['club', 'league', 'potential', 'international_reputation',
             'pac', 'sho', 'pas', 'dri', 'def', 'phy', 'skill_moves']
IMPUTE_COLS = ['rw', 'st', 'lw', 'cf', 'cam', 'cm']


def buildFeatures(data):
    # Mean-fill the six ability columns that contain missing values
    imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
    imputer.fit(data.loc[:, IMPUTE_COLS])
    x_new = imputer.transform(data.loc[:, IMPUTE_COLS])

    # Encode the two work-rate label columns
    le = preprocessing.LabelEncoder()
    le.fit(['Low', 'Medium', 'High'])
    att_label = le.transform(data.work_rate_att.values)
    def_label = le.transform(data.work_rate_def.values)

    XList = []
    for row in range(0, len(data)):
        tmp_list = [data.iloc[row][col] for col in BASE_COLS]
        tmp_list.extend(x_new[row])
        tmp_list.append(att_label[row])
        tmp_list.append(def_label[row])
        XList.append(tmp_list)
    return XList


def featureSet(data):
    return buildFeatures(data), data.y.values


def loadTestData(filePath):
    data = pd.read_csv(filepath_or_buffer=filePath)
    return buildFeatures(data)


def trainandTest(X_train, y_train, X_test):
    # Train the XGBoost model
    model = xgb.XGBRegressor(max_depth=6, learning_rate=0.05, n_estimators=500,
                             silent=False, objective='reg:gamma')
    model.fit(X_train, y_train)

    # Predict on the test set and write the submission file
    ans = model.predict(X_test)
    id_list = np.arange(10441, 17441)
    data_arr = [[int(id_list[row]), ans[row]] for row in range(0, len(ans))]
    pd_data = pd.DataFrame(np.array(data_arr), columns=['id', 'y'])
    pd_data.to_csv('submit.csv', index=None)


if __name__ == '__main__':
    trainFilePath = 'dataset/soccer/train.csv'
    testFilePath = 'dataset/soccer/test.csv'
    data = pd.read_csv(trainFilePath)
    X_train, y_train = featureSet(data)
    X_test = loadTestData(testFilePath)
    trainandTest(X_train, y_train, X_test)
```
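One thing the workflow above never does is measure MAE locally before submitting, which would make hyperparameter tweaks like the max_depth/learning_rate change much cheaper to evaluate. A minimal sketch of that pattern on synthetic data — GradientBoostingRegressor stands in for XGBRegressor so the snippet needs only scikit-learn, and sklearn.model_selection is the modern home of the train_test_split import used above:

```python
import numpy as np
from sklearn.model_selection import train_test_split  # modern path for sklearn.cross_validation
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for xgb.XGBRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.RandomState(0)
X = rng.rand(300, 5)
y = 10 * X[:, 0] + 5 * X[:, 1] + rng.rand(300)

# Hold out 20% of the training data to estimate MAE before submitting
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingRegressor(max_depth=3, learning_rate=0.05,
                                  n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
mae = mean_absolute_error(y_val, model.predict(X_val))
print(mae)
```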
This time the score only improved to 40.8686. I'm out of ideas for further gains for the moment — advice from more experienced players is very welcome!
For more content, feel free to follow my personal WeChat public account.