The formatting here has not been polished much; see the linked OneNote notes.
Since OneNote no longer supports single-page sharing, please leave your email address if you need a copy and I will send the PDF version; I'll sort this out properly later.
Installing the Surprise recommendation library
pip install surprise
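To check that the install worked, a quick sketch (this assumes the package exposes a top-level __version__ attribute, which recent releases do):

    import surprise
    print(surprise.__version__)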
Basic usage
• Automatic cross-validation
    from surprise import SVD, Dataset
    from surprise.model_selection import cross_validate

    # Load the movielens-100k dataset (download it if needed).
    data = Dataset.load_builtin('ml-100k')

    # We'll use the famous SVD algorithm.
    algo = SVD()

    # Run 5-fold cross-validation and print results.
    cross_validate(algo, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)

The load_builtin method automatically downloads the "movielens-100k" dataset and puts it under the .surprise_data directory.
• Using a custom dataset

    import os

    from surprise import BaselineOnly, Dataset, Reader
    from surprise.model_selection import cross_validate

    # path to dataset file
    file_path = os.path.expanduser('~/.surprise_data/ml-100k/ml-100k/u.data')

    # As we're loading a custom dataset, we need to define a reader. In the
    # movielens-100k dataset, each line has the following format:
    # 'user item rating timestamp', separated by '\t' characters.
    reader = Reader(line_format='user item rating timestamp', sep='\t')

    data = Dataset.load_from_file(file_path, reader=reader)

    # We can now use this dataset as we please, e.g. calling cross_validate
    cross_validate(BaselineOnly(), data, verbose=True)
Cross-validation
○ cross_validate(algo, data, measures=[...], cv=n): algorithm, dataset, list of evaluation measures, and number of cross-validation folds
○ The test method together with KFold gives finer-grained control over the dataset; LeaveOneOut or ShuffleSplit can also be used:

    from surprise import SVD
    from surprise import Dataset
    from surprise import accuracy
    from surprise.model_selection import KFold

    # Load the movielens-100k dataset
    data = Dataset.load_builtin('ml-100k')

    # define a cross-validation iterator
    kf = KFold(n_splits=3)

    algo = SVD()

    for trainset, testset in kf.split(data):

        # train and test algorithm.
        algo.fit(trainset)
        predictions = algo.test(testset)

        # Compute and print Root Mean Squared Error
        accuracy.rmse(predictions, verbose=True)
Tuning algorithm parameters with GridSearchCV
For example, trying different values for SVD's parameters:
    from surprise import SVD
    from surprise import Dataset
    from surprise.model_selection import GridSearchCV

    # Use movielens-100K
    data = Dataset.load_builtin('ml-100k')

    param_grid = {'n_epochs': [5, 10], 'lr_all': [0.002, 0.005],
                  'reg_all': [0.4, 0.6]}

    gs = GridSearchCV(SVD, param_grid, measures=['rmse', 'mae'], cv=3)
    gs.fit(data)

    # best RMSE score
    print(gs.best_score['rmse'])

    # combination of parameters that gave the best RMSE score
    print(gs.best_params['rmse'])

    # We can now use the algorithm that yields the best rmse:
    algo = gs.best_estimator['rmse']
    algo.fit(data.build_full_trainset())
Using prediction algorithms
○ Baseline estimates configuration
  § Parameters when using alternating least squares (ALS):
    1) reg_i: regularization parameter for items; default is 10
    2) reg_u: regularization parameter for users; default is 15
    3) n_epochs: number of iterations of the ALS procedure; default is 10

        print('Using ALS')
        bsl_options = {'method': 'als',
                       'n_epochs': 5,
                       'reg_u': 12,
                       'reg_i': 5
                       }
        algo = BaselineOnly(bsl_options=bsl_options)

  § Parameters when using stochastic gradient descent (SGD):
    1) reg: regularization parameter of the cost function being optimized; default is 0.02
    2) learning_rate: learning rate of SGD; default is 0.005
    3) n_epochs: number of iterations of the SGD procedure; default is 20

        print('Using SGD')
        bsl_options = {'method': 'sgd',
                       'learning_rate': .00005,
                       }
        algo = BaselineOnly(bsl_options=bsl_options)

  § Passing the parameters when creating a KNN algorithm:

        bsl_options = {'method': 'als',
                       'n_epochs': 20,
                       }
        sim_options = {'name': 'pearson_baseline'}
        algo = KNNBasic(bsl_options=bsl_options, sim_options=sim_options)

○ Similarity configuration
  § name: the name of the similarity measure to use; default is MSD
  § user_based: whether similarities are computed between users; default is True
  § min_support: the minimum number of common items or users; when the number of common users or items is below min_support, the similarity is 0
  § shrinkage: shrinkage parameter; default is 100

    i.  sim_options = {'name': 'cosine',
                       'user_based': False  # compute similarities between items
                       }
        algo = KNNBasic(sim_options=sim_options)

    ii. sim_options = {'name': 'pearson_baseline',
                       'shrinkage': 0  # no shrinkage
                       }
        algo = KNNBasic(sim_options=sim_options)

• Other questions
  ○ How to get the top-N recommendations

    from collections import defaultdict

    from surprise import SVD
    from surprise import Dataset


    def get_top_n(predictions, n=10):
        '''Return the top-N recommendation for each user from a set of predictions.

        Args:
            predictions(list of Prediction objects): The list of predictions, as
                returned by the test method of an algorithm.
            n(int): The number of recommendation to output for each user. Default
                is 10.

        Returns:
            A dict where keys are user (raw) ids and values are lists of tuples:
            [(raw item id, rating estimation), ...] of size n.
        '''

        # First map the predictions to each user.
        top_n = defaultdict(list)
        for uid, iid, true_r, est, _ in predictions:
            top_n[uid].append((iid, est))

        # Then sort the predictions for each user and retrieve the k highest ones.
        for uid, user_ratings in top_n.items():
            user_ratings.sort(key=lambda x: x[1], reverse=True)
            top_n[uid] = user_ratings[:n]

        return top_n


    # First train an SVD algorithm on the movielens dataset.
    data = Dataset.load_builtin('ml-100k')
    trainset = data.build_full_trainset()
    algo = SVD()
    algo.fit(trainset)

    # Then predict ratings for all pairs (u, i) that are NOT in the training set.
    testset = trainset.build_anti_testset()
    predictions = algo.test(testset)

    top_n = get_top_n(predictions, n=10)

    # Print the recommended items for each user
    for uid, user_ratings in top_n.items():
        print(uid, [iid for (iid, _) in user_ratings])

  ○ How to compute precision@k and recall@k
    from collections import defaultdict

    from surprise import Dataset
    from surprise import SVD
    from surprise.model_selection import KFold


    def precision_recall_at_k(predictions, k=10, threshold=3.5):
        '''Return precision and recall at k metrics for each user.'''

        # First map the predictions to each user.
        user_est_true = defaultdict(list)
        for uid, _, true_r, est, _ in predictions:
            user_est_true[uid].append((est, true_r))

        precisions = dict()
        recalls = dict()
        for uid, user_ratings in user_est_true.items():

            # Sort user ratings by estimated value
            user_ratings.sort(key=lambda x: x[0], reverse=True)

            # Number of relevant items
            n_rel = sum((true_r >= threshold) for (_, true_r) in user_ratings)

            # Number of recommended items in top k
            n_rec_k = sum((est >= threshold) for (est, _) in user_ratings[:k])

            # Number of relevant and recommended items in top k
            n_rel_and_rec_k = sum(((true_r >= threshold) and (est >= threshold))
                                  for (est, true_r) in user_ratings[:k])

            # Precision@K: Proportion of recommended items that are relevant
            precisions[uid] = n_rel_and_rec_k / n_rec_k if n_rec_k != 0 else 1

            # Recall@K: Proportion of relevant items that are recommended
            recalls[uid] = n_rel_and_rec_k / n_rel if n_rel != 0 else 1

        return precisions, recalls


    data = Dataset.load_builtin('ml-100k')
    kf = KFold(n_splits=5)
    algo = SVD()

    for trainset, testset in kf.split(data):
        algo.fit(trainset)
        predictions = algo.test(testset)
        precisions, recalls = precision_recall_at_k(predictions, k=5, threshold=4)

        # Precision and recall can then be averaged over all users
        print(sum(prec for prec in precisions.values()) / len(precisions))
        print(sum(rec for rec in recalls.values()) / len(recalls))

  ○ How to get the k nearest neighbors of a user (or item)
    import io  # needed because of weird encoding of u.item file

    from surprise import KNNBaseline
    from surprise import Dataset
    from surprise import get_dataset_dir


    def read_item_names():
        """Read the u.item file from MovieLens 100-k dataset and return two
        mappings to convert raw ids into movie names and movie names into raw ids.
        """

        file_name = get_dataset_dir() + '/ml-100k/ml-100k/u.item'
        rid_to_name = {}
        name_to_rid = {}
        with io.open(file_name, 'r', encoding='ISO-8859-1') as f:
            for line in f:
                line = line.split('|')
                rid_to_name[line[0]] = line[1]
                name_to_rid[line[1]] = line[0]

        return rid_to_name, name_to_rid


    # First, train the algorithm to compute the similarities between items
    data = Dataset.load_builtin('ml-100k')
    trainset = data.build_full_trainset()
    sim_options = {'name': 'pearson_baseline', 'user_based': False}
    algo = KNNBaseline(sim_options=sim_options)
    algo.fit(trainset)

    # Read the mappings raw id <-> movie name
    rid_to_name, name_to_rid = read_item_names()

    # Retrieve inner id of the movie Toy Story
    toy_story_raw_id = name_to_rid['Toy Story (1995)']
    toy_story_inner_id = algo.trainset.to_inner_iid(toy_story_raw_id)

    # Retrieve inner ids of the nearest neighbors of Toy Story.
    toy_story_neighbors = algo.get_neighbors(toy_story_inner_id, k=10)

    # Convert inner ids of the neighbors into names.
    toy_story_neighbors = (algo.trainset.to_raw_iid(inner_id)
                           for inner_id in toy_story_neighbors)
    toy_story_neighbors = (rid_to_name[rid]
                           for rid in toy_story_neighbors)

    print()
    print('The 10 nearest neighbors of Toy Story are:')
    for movie in toy_story_neighbors:
        print(movie)

  ○ What are raw_id and inner_id?
    i.  Users and items each have a raw id and an inner id. Raw ids are the ids as defined in the ratings file or in the pandas DataFrame. The key point is that predict() and the other user-facing methods take raw ids.
    ii. When a trainset is built, every raw id is mapped to an inner id (a unique integer that is easier for Surprise to work with). Conversion between raw and inner ids is done with the trainset's to_inner_uid(), to_inner_iid(), to_raw_uid(), and to_raw_iid() methods; a small sketch follows the algorithm table below.
  ○ Where are the built-in datasets downloaded to, and how can this location be changed?
    i.  By default, datasets are downloaded to "~/.surprise_data".
    ii. To change this location, set the "SURPRISE_DATA_FOLDER" environment variable.
• API reference
  ○ Prediction algorithms package
    random_pred.NormalPredictor — Algorithm predicting a random rating based on the distribution of the training set, which is assumed to be normal.
    baseline_only.BaselineOnly — Algorithm predicting the baseline estimate for given user and item.
    knns.KNNBasic — A basic collaborative filtering algorithm.
    knns.KNNWithMeans — A basic collaborative filtering algorithm, taking into account the mean ratings of each user.
    knns.KNNWithZScore — A basic collaborative filtering algorithm, taking into account the z-score normalization of each user.
    knns.KNNBaseline — A basic collaborative filtering algorithm taking into account a baseline rating.
    matrix_factorization.SVD — The famous SVD algorithm, as popularized by Simon Funk during the Netflix Prize.
    matrix_factorization.SVDpp — The SVD++ algorithm, an extension of SVD taking into account implicit ratings.
    matrix_factorization.NMF — A collaborative filtering algorithm based on Non-negative Matrix Factorization.
    slope_one.SlopeOne — A simple yet accurate collaborative filtering algorithm.
    co_clustering.CoClustering — A collaborative filtering algorithm based on co-clustering.
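A minimal sketch of the raw/inner id round trip described above (user '196' and item '242' appear in ml-100k's u.data; note that load_builtin parses raw ids as strings):

    from surprise import Dataset, SVD

    data = Dataset.load_builtin('ml-100k')
    trainset = data.build_full_trainset()

    algo = SVD()
    algo.fit(trainset)

    # predict() takes raw ids, exactly as they appear in the ratings file.
    print(algo.predict('196', '242'))

    # The trainset maps raw ids to inner ids (contiguous integers) and back.
    inner_uid = trainset.to_inner_uid('196')
    inner_iid = trainset.to_inner_iid('242')
    print(inner_uid, inner_iid)
    print(trainset.to_raw_uid(inner_uid), trainset.to_raw_iid(inner_iid))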
○ Prediction algorithm base class
  § class surprise.prediction_algorithms.algo_base.AlgoBase(**kwargs)
  § If the algorithm needs to compute similarities or baselines, the sim_options and bsl_options keyword arguments configure them
  § Methods:
    1) compute_baselines(): compute users' and items' baselines. Only relevant for algorithms using the Pearson baseline similarity or the BaselineOnly algorithm. Returns a tuple of user baselines and item baselines
    2) compute_similarities(): build the similarity matrix. How it is computed depends on the sim_options parameter passed when the algorithm was created. Returns the similarity matrix
    3) default_prediction(): the default prediction value, used whenever an exception occurs during computation. By default it is the mean of all ratings (this can be overridden in a subclass). Returns a float
    4) fit(trainset): train the algorithm on the given training set. Every derived class calls this method as the first basic step of training; it initializes some internal structures and sets the self.trainset attribute. Returns self
    5) get_neighbors(iid, k): return the k nearest neighbors of the given inner id; whether that id refers to a user or an item depends on the user_based field of sim_options (True or False). Returns a list of the k nearest neighbors' inner ids
    6) predict(uid, iid, r_ui=None, clip=True, verbose=False): compute the rating prediction for the given user and item. The method converts raw ids into inner ids and then calls the estimate method defined in each derived class. If the result turns out to be an impossible prediction, the prediction falls back to default_prediction()
       About clip: this parameter decides whether the estimate is clipped into the rating scale. For example, if the prediction is 5.5 and the rating scale is [1, 5], the prediction is changed to 5; if the prediction is below 1, it is changed to 1. Default is True
       The verbose parameter decides whether details of each prediction are printed. Default is False
       Returns a Prediction object containing:
         a) the raw user id
         b) the raw item id
         c) the true rating
         d) the estimated rating
         e) additional details that may be useful for later predictions
    7) test(testset, verbose=False): test the algorithm on the given test set, i.e. estimate all the ratings in the test set. Returns a list of Prediction objects
○ Predictions module
  § The surprise.prediction_algorithms.predictions module defines the Prediction namedtuple and the PredictionImpossible exception
  § Prediction
    □ a namedtuple used to store the result of a prediction
    □ only used for documentation and printing purposes
    □ parameters:
       uid: the raw user id
       iid: the raw item id
       r_ui: float, the true rating
       est: float, the estimated rating
       details: additional details about the prediction
  § surprise.prediction_algorithms.predictions.PredictionImpossible
    □ raised when a prediction is impossible
    □ this exception causes the current prediction to be set to the default value (the global mean)
○ model_selection package
  § Cross-validation iterators
    □ The module contains various cross-validation iterators:
       KFold: a basic cross-validation iterator
       RepeatedKFold: repeated KFold cross-validation iterator
       ShuffleSplit: a basic cross-validation iterator with shuffled train and test sets
       LeaveOneOut: a cross-validation iterator where each user has exactly one rating in the test set
       PredefinedKFold: the cross-validation iterator to use when the dataset is loaded with the load_from_folds method
    □ The module also contains a function for splitting a dataset into a train set and a test set (a usage sketch follows this subsection):
       train_test_split(data, test_size=0.2, train_size=None, random_state=None, shuffle=True)
         data: the dataset to split
         test_size: if float, the proportion of ratings to include in the test set; if int, the exact number of ratings in the test set; if None, set to the complement of train_size; default is 0.2
         train_size: if float, the proportion of ratings to include in the train set; if int, the exact number of ratings in the train set; if None, set to the complement of test_size; default is None
         random_state: int, a random seed; useful for getting the same train/test splits over multiple calls
         shuffle: bool, whether to shuffle the ratings in the dataset; default is True
  § Cross-validation
    surprise.model_selection.validation.cross_validate(algo, data, measures=[u'rmse', u'mae'], cv=None, return_train_measures=False, n_jobs=1, pre_dispatch=u'2 * n_jobs', verbose=False)
      ® algo: the algorithm
      ® data: the dataset
      ® measures: a list of strings specifying the evaluation measures
      ® cv: a cross-validation iterator, an int, or None. If an iterator, the given parameters are used; if an int, a KFold iterator is used with that number of folds; if None, the default KFold is used with 5 folds
      ® return_train_measures: whether to compute performance measures on the train sets; default is False
      ® n_jobs: int, the maximum number of folds evaluated in parallel. If -1, all CPUs are used; if 1, there is no parallel computing (useful for debugging); if below -1, (number of CPUs + n_jobs + 1) CPUs are used; default is 1
      ® pre_dispatch: int or string, controls the number of jobs dispatched during parallel execution. (Reducing this number can help avoid excessive memory consumption when more jobs are dispatched than the CPUs can process.) The parameter can be:
         None: all jobs are immediately created and spawned
         int: the exact total number of jobs to spawn
         string: an expression as a function of n_jobs, e.g. '2*n_jobs'
         Default is 2*n_jobs
      Returns a dict with the following keys:
      ® test_*: where * corresponds to an evaluation measure, e.g. 'test_rmse'
      ® train_*: where * corresponds to an evaluation measure, e.g. 'train_rmse'; only available when return_train_measures is True
      ® fit_time: an array with the training time for each split, in seconds
      ® test_time: an array with the testing time for each split, in seconds
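A minimal sketch of train_test_split combined with the fit/test/accuracy flow described above (the 0.2 split ratio and random_state=42 are arbitrary choices):

    from surprise import Dataset, SVD, accuracy
    from surprise.model_selection import train_test_split

    data = Dataset.load_builtin('ml-100k')

    # Hold out 20% of the ratings as the test set.
    trainset, testset = train_test_split(data, test_size=0.2, random_state=42)

    algo = SVD()
    algo.fit(trainset)
    predictions = algo.test(testset)

    # predictions is a list of Prediction namedtuples (uid, iid, r_ui, est, details).
    accuracy.rmse(predictions, verbose=True)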
  § Parameter search
    □ class surprise.model_selection.search.GridSearchCV(algo_class, param_grid, measures=[u'rmse', u'mae'], cv=None, refit=False, return_train_measures=False, n_jobs=1, pre_dispatch=u'2 * n_jobs', joblib_verbose=0)
      ® the parameters are similar to those of cross_validate above
      ® refit: bool or int. If True, the algorithm is refit on the whole dataset using the parameters that gave the best average performance for the first measure in measures; another measure can be selected by passing its name as a string; default is False
      ® joblib_verbose: controls the verbosity of joblib; the higher the number, the more messages
    □ Attributes and methods:
      a) best_estimator: dict, for each measure the algorithm instance with the parameters that gave the best average score over all splits
      b) best_score: dict of floats, for each measure the best average score
      c) best_params: dict, for each measure the parameter combination that gave the best score
      d) best_index: dict of ints, for each measure the index of cv_results that corresponds to the best (averaged) score
      e) cv_results: dict of arrays, with the train/test measures and times for every parameter combination
      f) fit(data): run the computation over all parameter combinations, with the splits given by the cv parameter
      g) predict(...): only available when refit is not False; see predict() above
      h) test(...): only available when refit is not False; see test() above
    □ class surprise.model_selection.search.RandomizedSearchCV(algo_class, param_distributions, n_iter=10, measures=[u'rmse', u'mae'], cv=None, refit=False, return_train_measures=False, n_jobs=1, pre_dispatch=u'2 * n_jobs', random_state=None, joblib_verbose=0)
      Samples the parameter space at random instead of enumerating it exhaustively as GridSearchCV does above
○ Similarities module
  § The similarities module contains tools for computing similarities between users or items:
    1) cosine
    2) msd
    3) pearson
    4) pearson_baseline
○ Accuracy module
  § The surprise.accuracy module provides tools for computing accuracy metrics on a set of predictions:
    1) rmse (root mean squared error)
    2) mae (mean absolute error)
    3) fcp
○ Dataset module
  § The dataset module defines the Dataset class and other subclasses used to manage datasets
  § class surprise.dataset.Dataset(reader)
  § Methods:
    1) load_builtin(name=u'ml-100k'): load a built-in dataset; returns a Dataset object
    2) load_from_df(df, reader): df is a DataFrame that must have three columns, in this order: raw user ids, raw item ids, and ratings; reader specifies the fields (see the combined sketch after the dump module below)
    3) load_from_file(file_path, reader): load data from a file; the parameters are a path and a reader
    4) load_from_folds(folds_files, reader): handles the special case where train and test sets are already defined, as in the movielens-100k dataset, so they can be imported directly
○ Trainset class
  § class surprise.Trainset(ur, ir, n_users, n_items, n_ratings, rating_scale, offset, raw2inner_id_users, raw2inner_id_items)
  § Attributes:
    1) ur: the users' ratings, a dict of lists of (item_inner_id, rating) tuples keyed by user inner id
    2) ir: the items' ratings, a dict of lists of (user_inner_id, rating) tuples keyed by item inner id
    3) n_users: the number of users
    4) n_items: the number of items
    5) n_ratings: the total number of ratings
    6) rating_scale: a tuple with the minimum and maximum rating
    7) global_mean: the mean of all ratings
  § Methods:
    1) all_items(): generator that iterates over all items; yields the items' inner ids
    2) all_ratings(): generator that iterates over all ratings; yields (uid, iid, rating) tuples
    3) all_users(): generator that iterates over all users; yields the users' inner ids
    4) build_anti_testset(fill=None): return a list of ratings that can be used as a test set in the test() method; the fill parameter is the value used to fill unknown ratings; if None, global_mean is used
    5) knows_item(iid): indicates whether the item is part of the train set
    6) knows_user(uid): indicates whether the user is part of the train set
    7) to_inner_iid(riid): convert an item raw id to an inner id
    8) to_inner_uid(ruid): convert a user raw id to an inner id
    9) to_raw_iid(iiid): convert an item inner id to a raw id
    10) to_raw_uid(iuid): convert a user inner id to a raw id
○ Reader class
  § class surprise.reader.Reader(name=None, line_format=u'user item rating', sep=None, rating_scale=(1, 5), skip_lines=0)
    The Reader class is used to parse files containing ratings. Such files must specify exactly one rating per line, and each line must follow the structure: user; item; rating; [timestamp]. The fields may appear in any order, but that order must be declared in line_format
  § Parameters:
    1) name: if specified, a built-in dataset Reader is returned and the other parameters are ignored; accepted values are 'ml-100k', 'ml-1m', and 'jester'; default is None
    2) line_format: string, the names of the fields, separated by spaces; default is 'user item rating'
    3) sep: char, the separator between the fields
    4) rating_scale: tuple, the rating scale; default is (1, 5)
    5) skip_lines: int, the number of lines to skip at the beginning of the file; default is 0
○ Dump module
  § surprise.dump.dump(file_name, predictions=None, algo=None, verbose=0)
    □ a basic wrapper around pickle for serializing a list of predictions or an algorithm
    □ Parameters:
      a) file_name: str, the location where to dump
      b) predictions: a list of Prediction objects to dump
      c) algo: the Algorithm to dump
      d) verbose: verbosity level, 0 or 1
  § surprise.dump.load(file_name)
    □ used to read a dump file
    □ returns a tuple (predictions, algo), where either may be None
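To tie the Dataset, Reader, Trainset, and dump pieces together, a minimal sketch (the toy DataFrame and the './svd_dump' path are made up for illustration):

    import pandas as pd

    from surprise import Dataset, Reader, SVD, dump

    # A toy ratings frame: the three columns passed to load_from_df must be
    # raw user ids, raw item ids, and ratings, in that order.
    ratings_dict = {'userID': [9, 32, 2, 45, 9],
                    'itemID': [1, 1, 1, 2, 2],
                    'rating': [3, 2, 4, 5, 1]}
    df = pd.DataFrame(ratings_dict)

    reader = Reader(rating_scale=(1, 5))
    data = Dataset.load_from_df(df[['userID', 'itemID', 'rating']], reader)

    # Inspect a few Trainset attributes described above.
    trainset = data.build_full_trainset()
    print(trainset.n_users, trainset.n_items, trainset.n_ratings)
    print(trainset.global_mean)

    algo = SVD()
    algo.fit(trainset)

    # Serialize the fitted algorithm, then load it back.
    dump.dump('./svd_dump', algo=algo)
    _, loaded_algo = dump.load('./svd_dump')
    print(loaded_algo.predict(9, 1))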