NFL Game Pass Advertising Strategy Analysis

0 Overview

The NFL uses the Game Pass product as its main marketing vehicle in Europe, where its audience has been growing (Pat Even, 2019). However, last season's Game Pass advertising strategy still had problems: some markets received more investment but delivered very limited return on investment, while the opposite was true elsewhere. To address this, I focus on the UK market, analyse its performance, and propose solutions aimed at expanding the market and improving ROI.

0.1 Structure

To address this problem, this article is divided into six parts.

Part 1 states the problem and gives an overview of the Game Pass business.

Part 2 covers importing and joining the data in MySQL.

Part 3 covers the data cleaning work in a notebook.

Part 4 analyses Game Pass advertising performance in Tableau and identifies the problems in the UK market.

Part 5 performs feature engineering on the UK purchase data driven by the advertising campaign, and uses the engineered features for simple machine-learning predictions.

Part 6 segments the users, compares purchase amount and purchase frequency across the groups, and proposes corresponding simple marketing strategies.

0.2 Notes

(The data for this project comes from the NFL and Two Circles. Due to NFL policy, the data cannot be made public; apologies for the inconvenience.)

The project can be browsed on maxaishaojun's GitHub (note that it may not render well on GitHub mobile).

"4 问题分析"部分可移步到Tableau Dashboard上浏览 NFL GamePass 2018-19赛季广告投放问题分析

0.3 References

Pat, E. (2019) ‘NFL Viewership Growth Throughout Europe Exposes Opportunities in the US’, Front Office Sport, 7 March. Available at: frntofficesport.com/two-circles… (Accessed: 6 April 2019)

1 Project Background

(Note: this section is sourced from Two Circles.)

1.1 Problem Statement

Imagine it is the end of the 2018/19 NFL season and you are a member of the NFL Game Pass Europe marketing team. Your task is to analyse how effective the advertising was at acquiring new NFL Game Pass subscribers, mine the customer profiles of purchasers acquired through the deployed advertising channels, and make recommendations for next season's digital advertising campaign, covering creative, audiences, geo-targeting, channel mix and budget.

1.2 Project Overview

NFL Game Pass is the flagship OTT subscription product for NFL fans outside the US and a top priority for the NFL's international business. Much like a "Netflix of the NFL", it gives fans access to every live game, game replays and highlights, NFL RedZone, the NFL Network 24/7 live TV channel, original content, show and game archives, downloads, and more.

In recent years the NFL's popularity in Europe has kept growing, and this is reflected in NFL Game Pass subscriber numbers across the continent. Sustaining this level of subscriber growth requires strategies to minimise churn, optimise acquisition, and build consideration among fans who are not yet ready to convert.

The largest component of NFL Game Pass Europe's subscriber-acquisition marketing is digital advertising, which accounts for roughly 46% of acquired subscribers each season. The campaign typically launches with the NFL preseason (August), reaches peak intensity in the first four weeks of the NFL season (September), and wraps up after the Super Bowl (early February). It deploys a variety of paid channels, chiefly Google Ads (search/PPC), Facebook and YouTube (social), and various programmatic display partners (display). Display ads also run on www.nfl.com and the NFL's other owned-and-operated digital properties (O&O). Budget, creative, and audience-targeting strategy vary by channel, and striking the right balance across the campaign is critical to its success.

NFL Game Pass Europe has just completed its 2018/19 digital advertising campaign and is entering the planning process for 2019/20. With access to a rich dataset covering advertising performance and purchaser demographics, NFL Game Pass Europe is looking to mine the data for insights that will inform its future digital advertising strategy.

1.3 Appendix: 2018/19 NFL Game Pass Europe Product Definitions

2018/19 Audience Strategy and Definitions

2018/19 Market Strategy and Definitions

2018/19 Example Ad Units

2018/19 Marketing Plan and Promotions

2 Data Import

-- 1 Add numweek and revenue to MediaPerformanceData

select 
	a.date, a.nflweek, a.numweek, a.platform, a.market, a.audience, round(a.`Spend (GBP)`*1.3, 2) as spend_usd, a.impressions, a.clicks, a.transactions, b.revenue_usd
from (
	select *,
		@number := case
			when @weeks = nflweek then @number
			else @number+1
			end as numweek,
		@weeks := nflweek as weeks
	from mediaperformancedata
	join (select @weeks := null, @number := 0) as variable
	order by date
	) a
join (
	select 
		mt.date, mt.nflweek, mt.platform, mt.market, mt.audience, round(sum(`Revenue (USD)`),2) as revenue_usd
	from 
		MediaTransactionsdata mt
	join 
		Subscriptionsdata s on mt.transactionID = s.transactionid
	group by
		mt.date, mt.nflweek, mt.platform, mt.market, mt.audience
	order by 
		mt.date, mt.platform, mt.market, mt.audience
	) b
on 
	(a.date = b.date and a.nflweek = b.nflweek and a.platform = b.platform and a.market = b.market and a.audience = b.audience)
;

-- 2 Join the remaining three tables

select 
	mt.transactionid, s.customerid, mt.date, mt.nflweek, mt.platform, mt.market, mt.audience,
	c.`NFL Game Pass Segment`, c.gender, c.age, c.`NFL Tickets`, c.`NFL Shop`, c.`NFL Fantasy`, c.`New To NFL Database`, c.`Email Opt-In`, c.`Favourite Team`, 
	s.SKU, s.`Buy Type`, s.`Converted Free Trial`, s.`Revenue (USD)`
from
	Subscriptionsdata s
left join
	mediatransactionsdata mt on s.transactionid = mt.transactionid
left join 
	customersdata c on s.customerid = c.customerid
order by
	mt.date
;
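The first query relies on MySQL user variables to build a running week counter (numweek) that increments whenever nflweek changes along the date order. For readers more comfortable in pandas, a minimal equivalent sketch is shown below; it assumes the MediaPerformanceData table has been exported to a CSV with date and nflweek columns, so the file name is illustrative only.

import pandas as pd

# Assumed CSV export of MediaPerformanceData; path and column names are illustrative.
mp = pd.read_csv('mediaperformancedata.csv')
mp = mp.sort_values('date').reset_index(drop=True)

# numweek goes up by one each time nflweek changes from the previous row,
# mirroring the @weeks/@number user-variable trick in the SQL above.
mp['numweek'] = (mp['nflweek'] != mp['nflweek'].shift()).cumsum()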

3 Data Cleaning

import pandas as pd
import numpy as np

mp_file_path = '/Users/apple/Downloads/nfl/mediaperformance.csv'
nfl_file_path = '/Users/apple/Downloads/nfl/nfladverts.csv'

mp_data = pd.read_csv(mp_file_path)
nfl_data = pd.read_csv(nfl_file_path)

# Rename the columns

nfl_data.columns = ['transaction_id', 'customer_id', 'date', 'nflweek', 'platform', 'market',
       'audience', 'segment', 'gender', 'age_group', 'tickets',
       'shop', 'fantasy', 'new_to_database', 'email_opt_in',
       'favourite_team', 'sku', 'buy_type', 'converted_free_trial',
       'revenue_usd']

nfl_data['revenue_usd'].describe()
# mp_data.columns

# Duplicate values

mp_unique = mp_data.groupby(['date', 'nflweek', 'numweek', 'platform', 'market', 'audience']).size().reset_index(name='Freq')
mp_unique = mp_unique.sort_values(by=['Freq'], ascending=False)
print(mp_unique)

nfl_data.head()
nfl_unique = nfl_data.groupby(['transaction_id']).size().reset_index(name='Freq')
nfl_unique = nfl_unique.sort_values(by=['Freq'], ascending=False)
print(nfl_unique)
# data.drop_duplicates(subset ="columns Name", keep = False, inplace = True) 

# Missing values

mp_null_total = mp_data.isnull().sum(axis=0).sort_values(ascending=False)
mp_null_percent = (mp_data.isnull().sum()/len(mp_data.index)).sort_values(ascending=False).round(3)
mp_missing_data = pd.concat([mp_null_total, mp_null_percent], axis=1, keys=['Total', 'Percent'])

nfl_null_total = nfl_data.isnull().sum(axis=0).sort_values(ascending=False)
nfl_null_percent = (nfl_data.isnull().sum()/len(nfl_data.index)).sort_values(ascending=False).round(3)
nfl_missing_data = pd.concat([nfl_null_total, nfl_null_percent], axis=1, keys=['Total', 'Percent'])


# Drop rows without a transaction_id
nfl_data = nfl_data.dropna(subset=['transaction_id'])

# Fill missing favourite_team with 'No Team'
nfl_data['favourite_team'] = nfl_data['favourite_team'].fillna('No Team')

# Fill missing gender with 'U'
nfl_data['gender'] = nfl_data['gender'].fillna('U')

# gender has many missing values and no obvious logical relationship with the other columns;
# checked whether the mean customer_id differs by gender, and it does not
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html
gender_predict = nfl_data[nfl_data.gender.notnull()]
gender_pivot = pd.pivot_table(gender_predict, values='customer_id', index=['gender'], aggfunc=np.mean)

# Fill buy_type according to the business logic
# nfl_data['buy_type'].unique()
nfl_data['buy_type'] = nfl_data.apply(
    lambda row: 
        'Buy Now' if row['sku'] =='Free' else 
            ('Buy Now' if (row['converted_free_trial'] == 0 and row['revenue_usd'] > 0) else 
                 ('Free Trial' if (row['converted_free_trial'] == 0 and row['revenue_usd'] == 0) or (row['converted_free_trial'] == 1 and row['revenue_usd'] > 0) else 
                      row['buy_type'])), axis=1)
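# A vectorized equivalent of the rule above (sketch only; the nested np.where mirrors the
# same precedence as the lambda and is usually faster than apply on large frames):
# nfl_data['buy_type'] = np.where(
#     nfl_data['sku'] == 'Free', 'Buy Now',
#     np.where((nfl_data['converted_free_trial'] == 0) & (nfl_data['revenue_usd'] > 0), 'Buy Now',
#              np.where(((nfl_data['converted_free_trial'] == 0) & (nfl_data['revenue_usd'] == 0)) |
#                       ((nfl_data['converted_free_trial'] == 1) & (nfl_data['revenue_usd'] > 0)),
#                       'Free Trial', nfl_data['buy_type'])))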

# nfl_data.isnull().sum(axis=0).sort_values(ascending=False)

# Outliers

# Convert the binary columns to 0 & 1 for easier computation, and remove anomalous values
# nfl_data['market'].unique()
nfl_data['tickets'] = np.where(nfl_data['tickets'] == 'N', 0, 1)
nfl_data['shop'] = np.where(nfl_data['shop'] == 'N', 0, 1)
nfl_data['fantasy'] = np.where(nfl_data['fantasy'] == False, 0, 1)
nfl_data['new_to_database'] = np.where(nfl_data['new_to_database'] == 'N', 0, 1)
nfl_data['email_opt_in'] = np.where(nfl_data['email_opt_in'] == 'False', 0, np.where(nfl_data['email_opt_in'] == '0', 0, 1))
nfl_data['buy_type'] = np.where(nfl_data['buy_type'] == 'Free Trial', 0, 1)                                   
nfl_data['favourite_team'] = np.where(nfl_data['favourite_team'] == 'NFL', 'No Team', nfl_data['favourite_team'])
nfl_data['favourite_team'] = np.where(nfl_data['favourite_team'] == 'No Team', 0, 1)

# Correct logical inconsistencies between sku, buy_type and converted_free_trial
# https://stackoverflow.com/questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o
# https://discuss.analyticsvidhya.com/t/how-to-resolve-python-error-cannot-compare-a-dtyped-int64-array-with-a-scalar-of-type-bool/73065
nfl_data['sku'] = np.where((nfl_data['revenue_usd'] == 0) & (nfl_data['sku'] != 'Pro'), 'Free', nfl_data['sku'])
nfl_data['sku'] = np.where((nfl_data['revenue_usd'] > 0) & (nfl_data['sku'] == 'Free'), 'Pro', nfl_data['sku'])
nfl_data['buy_type'] = np.where((nfl_data['sku'] == 'Pro') & (nfl_data['revenue_usd'] == 0), 0, nfl_data['buy_type'])
nfl_data['converted_free_trial'] = np.where((nfl_data['sku'] == 'Pro') & (nfl_data['buy_type'] == 0) & (nfl_data['revenue_usd'] > 0), 1, nfl_data['converted_free_trial'])
nfl_data['converted_free_trial'] = np.where((nfl_data['sku'] == 'Pro') & (nfl_data['buy_type'] == 1) & (nfl_data['revenue_usd'] > 0), 0, nfl_data['converted_free_trial'])
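# Quick sanity checks on the corrections above (sketch only): after the fixes, no 'Free'
# row should carry positive revenue, and every zero-revenue row should be 'Free' or 'Pro'.
# assert not ((nfl_data['sku'] == 'Free') & (nfl_data['revenue_usd'] > 0)).any()
# assert set(nfl_data.loc[nfl_data['revenue_usd'] == 0, 'sku']).issubset({'Free', 'Pro'})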

mp_data.to_excel(r'/Users/apple/Downloads/nfl/mp_data.xlsx', index=True)
nfl_data.to_csv(r'/Users/apple/Downloads/nfl/nfl_data.csv', index=True)

4 Problem Analysis

The problem analysis can be viewed on a Tableau dashboard: NFL Game Pass 2018/19 Season Advertising Problem Analysis.

5 Feature Engineering

import pandas as pd
from pandas import DataFrame
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

user_file_path = '/Users/apple/Downloads/nfl/nfl_data.csv'
user_data = pd.read_csv(user_file_path)

# extract UK market
uk_data = user_data.loc[user_data['market'] == 'UK']
# uk_data['revenue_usd'].describe()

# user_data.customer_id = user_data.customer_id.astype(str)
uk_data = uk_data.drop(['Unnamed: 0', 'date', 'customer_id', 'audience', 'market', 'nflweek', 'platform'], axis=1)
# uk_data.head()

# Look at the revenue distribution
uk_data.revenue_usd.hist(bins=20, alpha=0.5)
plt.title("Game Pass Europe Revenue Distribution")
plt.xlabel("Revenue($)")
plt.ylabel("Frequency")

5.1 Data Preprocessing

# Scaling issues

# Apart from revenue_usd, the data here is all yes/no or categorical; there is no other continuous data
# Min-max normalization is sensitive to outliers: after scaling, the outliers in the data are squashed away,
# so it is a poor choice when the dataset contains outliers.
# Standardization does not have this limitation, so standardization is used here.

def normalize(data, column):
    for col in column:
        data['normalize_'+col] = (data[col] - np.min(data[col])) / (np.max(data[col]) - np.min(data[col]))
    
    return data

def standardize(data, column):
    for col in column:
        data['standardize_'+col] = (data[col] - np.mean(data[col])) / (np.std(data[col]))
    
    return data

columns = ['revenue_usd']
uk_data = standardize(uk_data, columns)
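# Toy illustration (made-up numbers) of why standardization was preferred: with one large
# outlier, min-max squeezes the ordinary values into a narrow band, while z-scores keep them apart.
# toy = pd.DataFrame({'revenue_usd': [0, 10, 13, 98, 1500]})
# print(normalize(toy.copy(), ['revenue_usd']))
# print(standardize(toy.copy(), ['revenue_usd']))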

# A few activation-style functions are also shown below; they all squash values into a certain range,
# some of which are more sensitive in parts of that range than others

def tanh(data, column):  
    for col in column:
        data['tanh_'+col] = np.tanh(data[col])
    
    return data  

def sigmoid(data, column):
    for col in column:
        data['sigmoid_'+col] = 1.0 / (1.0 + np.exp(-data[col]))
        
    return data


def leakyrelu(data, column, a=0.01): 
    for col in column:
        data['leakyrelu_'+col] = np.array([x if x > 0 else a * x for x in data[col]])
    
    return data

def softplus(data, column):
    for col in column:
        data['softplus_'+col] = np.log(np.exp(data[col]) + 1)
    
    return data

uk_data.standardize_revenue_usd.hist(bins=20, alpha=0.5)
plt.title("Game Pass Europe Revenue Distribution")
plt.xlabel("Revenue($)")
plt.ylabel("Frequency")

# Feature type issues

# Categorical encoding - dummy coding
# Dummy variables can easily inflate the dimensionality of the feature space

dummy_data = pd.get_dummies(
        uk_data,columns=['segment', 'gender', 'age_group', 'sku'],
                prefix=['segment', 'gender', 'age_group', 'sku'],prefix_sep="_"
            )

# uk_data = uk_data.drop(['gender_U'], axis=1)

# Convert the continuous value into categories

conditions = [
        (uk_data['revenue_usd'] == 0),
        (uk_data['revenue_usd'] <= 13),
        (uk_data['revenue_usd'] <= 98)
        ]
choices = [0,1,2]
dummy_data['revenue_category'] = np.select(conditions, choices, default= 3)
uk_data['revenue_category'] = np.select(conditions, choices, default= 3)

fig, ax = plt.subplots()
dummy_data['revenue_category'].value_counts().plot(ax=ax, kind='bar')


5.2 Feature Selection

# dummy_data.columns

select_feature = ['tickets', 'shop', 'fantasy', 'new_to_database', 'email_opt_in',
                   'favourite_team', 'buy_type', 'converted_free_trial', 'segment_Acq',
                   'segment_Ret', 'segment_iOS', 'gender_F', 'gender_M', 'gender_U',
                   'age_group_18-21', 'age_group_22-25', 'age_group_26-30',
                   'age_group_31-35', 'age_group_36-40', 'age_group_41-50',
                   'age_group_51-60', 'age_group_60+', 'age_group_Under 18',
                   'age_group_Unknown', 'sku_Essential', 'sku_Free', 'sku_Playoffs',
                   'sku_Pro', 'sku_Super Bowl', 'sku_Weekly']

# Variance threshold method

from sklearn.feature_selection import VarianceThreshold

varianceThreshold = VarianceThreshold(threshold = 0.2)
varianceThreshold.fit_transform(dummy_data[select_feature])
var_result = varianceThreshold.get_support()

# Correlation method (univariate F-test)
# Select the basic features first, then match the others
# Mind the logical relationships

from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
selectKBest = SelectKBest(f_regression, k=10)
feature = dummy_data[select_feature]
bestFeature = selectKBest.fit_transform(feature, dummy_data[['revenue_usd']])
feature_result = selectKBest.get_support()

def feature_results(list_feature, list_result):
    
    dic = {}
    
    for i in range(len(list_feature)):
        feature = list_feature[i]
        result = list_result[i]
        dic[feature] = result
    
    # sort once, after the dict is fully built
    result_tuple = sorted(dic.items(), key=lambda kv: kv[1])
    return result_tuple

var_results = feature_results(select_feature, var_result)
weights_results = feature_results(select_feature, feature_result)

print(var_results[-5:-1])
print(weights_results[-10:-1])


5.3 Dimensionality Reduction

# Logic-based dimensionality reduction
# Based on the variance and correlation results, transform the original features
# Combine tickets, shop, fantasy, new_to_database and email_opt_in to strengthen the relationship with the target
# Ideally the components should be weighted (a weighted sketch follows below)
# uk_data.sku.unique()

uk_data['user_behaviour'] = uk_data['tickets'] + uk_data['shop'] + uk_data['fantasy'] + uk_data['new_to_database'] + uk_data['email_opt_in']    
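# Hypothetical weighted version of the behaviour score (weights are illustrative only,
# e.g. giving past purchase signals more influence than an email opt-in):
# weights = {'tickets': 2.0, 'shop': 2.0, 'fantasy': 1.5, 'new_to_database': 1.0, 'email_opt_in': 0.5}
# uk_data['user_behaviour_weighted'] = sum(uk_data[col] * w for col, w in weights.items())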

# Split age_group into three bands: young 0-21, adult 22-40, old 40+
uk_data['age'] = np.where((uk_data['age_group']=='Under 18') | (uk_data['age_group']=='18-21'), 'young', 
                          np.where((uk_data['age_group']=='22-25') | (uk_data['age_group']=='26-30') | (uk_data['age_group']=='31-35') | (uk_data['age_group']=='36-40'), 'adult',
                                  'old'))

# sku, buy_type & converted_free_trial describe purchase behaviour, so they belong with the target variables
uk_data['sku_category'] = np.where((uk_data['sku']=='Pro') & (uk_data['buy_type']== 1), 'Pro-BuyNow', 
                          np.where((uk_data['sku']=='Pro') & (uk_data['buy_type']== 0) & (uk_data['converted_free_trial']== 0), 'Pro-FreeTrial-NoConvert',
                                   np.where((uk_data['sku']=='Pro') & (uk_data['buy_type']== 0) & (uk_data['converted_free_trial']== 1), 'Pro-FreeTrial-Convert',
                                            uk_data['sku'])))
uk_data = uk_data.drop(['tickets', 'shop', 'fantasy', 'new_to_database', 'email_opt_in', 
                          'sku', 'buy_type', 'converted_free_trial', 'age_group', 'standardize_revenue_usd'], axis=1)

uk_data = uk_data[['transaction_id', 'age','gender', 'favourite_team', 'user_behaviour', 'segment', 'revenue_usd', 'sku_category', 'revenue_category']]
uk_data.head(10)

# Dummy encoding, since some models do not accept categorical data
dummy_data = pd.get_dummies(
        uk_data,columns=['age', 'gender', 'segment'],
                prefix=['age', 'gender', 'segment'],prefix_sep="_"
            )

# Join the dummy-encoded columns back to the original categorical columns
feature_data = pd.merge(dummy_data, uk_data[['transaction_id', 'age', 'gender', 'segment']], on='transaction_id', how='inner')
feature_data.head()

# Pivot table

""" feature_data['transaction_count'] = 1 pd.pivot_table(feature_data, columns=["age"], index = ['favourite_team'], values=['revenue_usd', 'transaction_count'], aggfunc=[np.mean,np.sum]) """

# Correlation visualisation

""" variables = ['favourite_team', 'user_behaviour', 'revenue_usd', 'age_adult', 'age_old', 'age_young', 'gender_F', 'gender_M', 'segment_Acq', 'segment_Ret', 'segment_iOS'] sns.set() sns.pairplot(feature_data[variables], size = 2.5) plt.show() """

# Multicollinearity check

""" x = feature_data[['favourite_team', 'user_behaviour', 'age_adult', 'age_old', 'age_young', 'gender_F', 'gender_M', 'segment_Acq', 'segment_Ret', 'segment_iOS']] from statsmodels.stats.outliers_influence import variance_inflation_factor vif = pd.DataFrame() vif["VIF Factor"] = [variance_inflation_factor(x.values, i) for i in range(x.shape[1])] vif["features"] = x.columns vif.round(1) feature_data.to_csv(r'/Users/apple/Downloads/nfl/feature_data.csv', index=True) """

# Algorithm experiments
# Evaluation method: train/validation split
from sklearn.model_selection import train_test_split

x = feature_data[['favourite_team', 'user_behaviour', 'age_adult', 'age_old', 'age_young',
                   'gender_F', 'gender_M', 'segment_Acq', 'segment_Ret', 'segment_iOS']]
y1 = feature_data[['revenue_usd']] #regression
y2 = feature_data[['revenue_category']] #classification


x_train1, x_val1, y_train1, y_val1 = train_test_split(x, y1, test_size=0.2, random_state=1)
x_train2, x_val2, y_train2, y_val2 = train_test_split(x, y2, test_size=0.2, random_state=1)

print("the number of data for training:")
print(y_train1.count())
print("the number of data for validation:")
print(y_val1.count())

# Metrics: accuracy and RMSE

from sklearn.metrics import mean_squared_error

def rmse_model(model, x, y):
    predictions = model.predict(x)
    rmse = np.sqrt(mean_squared_error(predictions, y))
    return rmse


""" from sklearn import metrics def confusion_matrix(model, x, y): model_confusion_test = metrics.confusion_matrix(y, model.predict(x)) matrix = pd.DataFrame(data = model_confusion_test, columns = ['Predicted 0', 'Predicted 1', 'Predicted 2', 'Predicted 3'], index = ['Predicted 0', 'Predicted 1', 'Predicted 2', 'Predicted 3']) return matrix """

# regression
# Due to the computational cost and time involved, only the code is shown here

""" from sklearn.linear_model import LinearRegression linear_regression = LinearRegression() linear_regression.fit(x_train1, y_train1) print(rmse_model(linear_regression, x_val1, y_val1)) """

# bias-variance trade-off

""" from sklearn.preprocessing import PolynomialFeatures train_rmses = [] val_rmses = [] degrees = range(1,8) for i in degrees: poly = PolynomialFeatures(degree=i, include_bias=False) x_train_poly = poly.fit_transform(x_train1) poly_reg = LinearRegression() poly_reg.fit(x_train_poly, y_train1) # training RMSE y_train_pred = poly_reg.predict(x_train_poly) train_poly_rmse = np.sqrt(mean_squared_error(y_train1, y_train_pred)) train_rmses.append(train_poly_rmse) # validation RMSE x_val_poly = poly.fit_transform(x_val1) y_val_pred = poly_reg.predict(x_val_poly) val_poly_rmse = np.sqrt(mean_squared_error(y_val1, y_val_pred)) val_rmses.append(val_poly_rmse) print('degree = %s, training RMSE = %.2f, validation RMSE = %.2f' % (i, train_poly_rmse, val_poly_rmse)) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(degrees, train_rmses,label= 'training set') ax.plot(degrees, val_rmses,label= 'validation set') ax.set_yscale('log') ax.set_xlabel('Degree') ax.set_ylabel('RMSE') ax.set_title('Bias/Variance Trade-off') plt.legend() plt.show() """

# regularization in order to reduce the effect of overfitting
# ridge (the lasso and elasticnet code is similar; lasso and elasticnet push the RMSE down harder,
# but do not drop the collinear features, while ridge does the opposite)

""" from sklearn.linear_model import Ridge from sklearn.pipeline import make_pipeline rmse=[] alpha=[1, 2, 5, 10, 20, 30, 40, 50, 75, 100] for a in alpha: ridge = make_pipeline(PolynomialFeatures(4), Ridge(alpha=a)) ridge.fit(x_train1, y_train1) predict=ridge.predict(x_val1) rmse.append(np.sqrt(mean_squared_error(predict, y_val1))) print(rmse) plt.scatter(alpha, rmse) alpha=np.arange(20, 60, 2) rmse=[] for a in alpha: #ridge=Ridge(alpha=a, copy_X=True, fit_intercept=True) #ridge.fit(x_train1, y_train1) ridge = make_pipeline(PolynomialFeatures(4), Ridge(alpha=a)) ridge.fit(x_train1, y_train1) predict=ridge.predict(x_val1) rmse.append(np.sqrt(mean_squared_error(predict, y_val1))) print(rmse) plt.scatter(alpha, rmse) ridge = make_pipeline(PolynomialFeatures(4), Ridge(alpha=24.6)) ridge_model = ridge.fit(x_train1, y_train1) predictions = ridge_model.predict(x_val1) print("Ridge RMSE is: " + str(rmse_model(ridge_model, x_val1, y_val1))) """

# classification

list(y_train2['revenue_category'].unique())

# decision tree

from sklearn import tree
from sklearn.tree import DecisionTreeClassifier

decision_tree_model = DecisionTreeClassifier(criterion='entropy')
decision_tree_model.fit(x_train2, y_train2)
print(decision_tree_model.score(x_train2,y_train2))
print(decision_tree_model.score(x_val2,y_val2))

# tuning
train_score = []
val_score = []
for depth in np.arange(1,20):
    decision_tree = tree.DecisionTreeClassifier(max_depth = depth,min_samples_leaf = 5)
    decision_tree.fit(x_train2, y_train2)
    train_score.append(decision_tree.score(x_train2, y_train2))
    val_score.append(decision_tree.score(x_val2, y_val2))

plt.plot(np.arange(1,20),train_score)
plt.plot(np.arange(1,20),val_score)
plt.legend(['Training Accuracy','Validation Accuracy'])
plt.title('Decision Tree Tuning')
plt.xlabel('Depth')
plt.ylabel('Accuracy')

train_score = []
val_score = []
for leaf in np.arange(20,100):
    decision_tree = tree.DecisionTreeClassifier(max_depth = 10, min_samples_leaf = leaf)
    decision_tree.fit(x_train2, y_train2)
    train_score.append(decision_tree.score(x_train2, y_train2))
    val_score.append(decision_tree.score(x_val2, y_val2))

plt.plot(np.arange(20,100),train_score)
plt.plot(np.arange(20,100),val_score)
plt.legend(['Training Accuracy','Validation Accuracy'])
plt.title('Decision Tree Tuning')
plt.xlabel('Minimum Samples Leaf')
plt.ylabel('Accuracy')

my_decision_tree_model = DecisionTreeClassifier(max_depth = 10, min_samples_leaf = 20)
my_decision_tree_model.fit(x_train2, y_train2)
print(my_decision_tree_model.score(x_train2,y_train2))
print(my_decision_tree_model.score(x_val2,y_val2))

# confusion matrix
from sklearn.metrics import accuracy_score, confusion_matrix, precision_recall_fscore_support


y_predict = my_decision_tree_model.predict(x_val2)
cm = confusion_matrix(y_val2, y_predict) 

# Transform to df for easier plotting
cm_df = pd.DataFrame(cm,
                     index = ['free', 'low', 'median', 'high'], 
                     columns = ['free', 'low', 'median', 'high'])

plt.figure(figsize=(5.5,4))
sns.heatmap(cm_df, annot=True)
plt.title('Decision Tree \nAccuracy:{0:.3f}'.format(accuracy_score(y_val2, y_predict)))
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()

# learning curve

from sklearn.model_selection import learning_curve

train_sizes, train_scores, val_scores = learning_curve(DecisionTreeClassifier(max_depth = 10, min_samples_leaf = 20), 
        x, 
        y2,
        # Number of folds in cross-validation
        cv=5,
        # Evaluation metric
        scoring='accuracy',
        # Use all computer cores
        n_jobs=-1, 
        # 50 different sizes of the training set
        train_sizes=np.linspace(0.1, 1.0, 5))

# Create means and standard deviations of training set scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)

# Create means and standard deviations of validation set scores
val_mean = np.mean(val_scores, axis=1)
val_std = np.std(val_scores, axis=1)

# Draw lines
plt.plot(train_sizes, train_mean, '--', color="#ff8040",  label="Training score")
plt.plot(train_sizes, val_mean, color="#40bfff", label="Cross-validation score")

# Draw bands
plt.fill_between(train_sizes, train_mean - train_std, train_mean + train_std, color="#DDDDDD")
plt.fill_between(train_sizes, val_mean - val_std, val_mean + val_std, color="#DDDDDD")

# Create plot
plt.title("Learning Curve \n k-fold=5, number of neighbours=5")
plt.xlabel("Training Set Size"), plt.ylabel("Accuracy Score"), plt.legend(loc="best")
plt.tight_layout()
plt.show()

# Curse of Dimensionality

d_train = []
d_val = []

for i in range(1,9):
    
    X_train_index = x_train2.iloc[: , 0:i]
    X_val_index = x_val2.iloc[: , 0:i]
    
    classifier = DecisionTreeClassifier(max_depth = 10, min_samples_leaf = 20)
    dt_model = classifier.fit(X_train_index, y_train2.values.ravel())

    d_train.append(dt_model.score(X_train_index, y_train2))
    d_val.append(dt_model.score(X_val_index, y_val2))

plt.title('Decision Tree Curse of Dimensionality')
plt.plot(range(1,9),d_val,label="Validation")
plt.plot(range(1,9),d_train,label="Train")
plt.xlabel('Number of Features')
plt.ylabel('Score (Accuracy)')
plt.legend()
plt.xticks(range(1,9))
plt.show()

# The prediction results are poor; this may be due to the feature selection, or the algorithm may simply not be a good fit.
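# One quick way to test the "algorithm" hypothesis (sketch only, with the same inputs as the
# tree above; RandomForest is an assumed alternative, not something tried in the original analysis):
"""
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=200, max_depth=10, random_state=1)
rf.fit(x_train2, y_train2.values.ravel())
print(rf.score(x_train2, y_train2))
print(rf.score(x_val2, y_val2))
"""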

6 User Segmentation and Recommendations

6.1 User Segmentation

After the feature engineering, the next step is to segment the users and propose different strategies for the different groups. Using an Excel PivotTable and visualisation, a matrix was built with favourite team, NFL consumption behaviour, user segment, age and gender as the comparison dimensions, and average purchase amount and purchase count as the target dimensions.
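The same matrix can also be rebuilt straight from the engineered feature table. The sketch below is illustrative only: it assumes the feature_data frame (or its CSV export) from Part 5, with the column names created there.

import pandas as pd

# Assumed export from Part 5; path and columns are illustrative.
feature_data = pd.read_csv('/Users/apple/Downloads/nfl/feature_data.csv')
feature_data['transaction_count'] = 1

# Comparison dimensions as the index; average revenue and transaction counts as the targets.
segment_matrix = pd.pivot_table(
    feature_data,
    index=['favourite_team', 'user_behaviour', 'segment', 'age', 'gender'],
    values=['revenue_usd', 'transaction_count'],
    aggfunc={'revenue_usd': 'mean', 'transaction_count': 'sum'})

print(segment_matrix.sort_values('revenue_usd', ascending=False).head(10))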

6.2 Marketing Recommendations

Given the UK's position in the Game Pass European market strategy, the goal should be to get more and more people engaged with the NFL while improving ROI. Therefore, budget aside, we should look for ways to improve both the spend and the user experience of the four groups below.

¶ For star users (middle-aged and older men with a favourite team), maintain and improve the product experience, and organise fan events to strengthen the relationship and increase willingness to spend.

¶ For cheap-plan users (new male users without a favourite team), they are still on the fence about Game Pass, or a weak sense of belonging dampens their willingness to spend (they like watching games but mostly look for free plans). Online and offline fan-engagement events, or price promotions during the season, can lift purchase intent. Their purchasing power may also limit how much they spend, but advertising can still generate indirect traffic revenue for Game Pass.

¶ For the "one big purchase only" users, it is clear that spending power is not the issue. Research should be carried out: why were they willing to pay a high price, and why did they only purchase once? How can their product experience be improved so that they purchase more often?

¶ For the "dog" users in the matrix (middle-aged and older women, or young people, without a favourite team), the first task is broader NFL awareness marketing for these groups (if they do not like the NFL, why would they use Game Pass?). This includes NFL advertising aimed at women and young people (to raise awareness), repeated promotional events (to build understanding and encourage participation), and PR campaigns, so as to improve how these groups perceive both what the NFL offers and its values.
