[Translation] Dissecting How the Warriors Became Title Favorites for the New Season: A Golden State Warriors Passing Network Analysis with Spark GraphFrames

Databricks recently released GraphFrames, a Spark package that wraps graph processing in DataFrames.

I evaluated its network-analysis capabilities and used the rich data from NBA.com to visualize the Golden State Warriors' passing network.
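
To give a flavor of the API before the results (a minimal sketch of my own, not code from the original post; it assumes a PySpark shell with sqlContext available and the graphframes package loaded): a graph is just two DataFrames, one of vertices and one of edges, and the graph algorithms run directly on top of them.

from graphframes import GraphFrame

# Toy graph with two players, using the same id scheme as the rest of the post
vertices = sqlContext.createDataFrame(
    [('CurryStephen', 'Curry, Stephen'), ('GreenDraymond', 'Green, Draymond')],
    ['id', 'name'])
edges = sqlContext.createDataFrame(
    [('GreenDraymond', 'CurryStephen'), ('CurryStephen', 'GreenDraymond')],
    ['src', 'dst'])

g = GraphFrame(vertices, edges)
g.inDegrees.show()                                             # passes received per player
g.pageRank(resetProbability=0.15, tol=0.01).vertices.show()   # node importance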

The Golden State Warriors' Passing Network

Passes Made and Received

League MVP Stephen Curry received the most passes, while the team's MVP, Draymond Green, made the most.

We can already see that most of the offense starts with Curry and Green passing to each other.

Image from GIPHY

In-Degree (inDegree)

id inDegree
CurryStephen 3993
GreenDraymond 3123
ThompsonKlay 2276
LivingstonShaun 1925
IguodalaAndre 1814
BarnesHarrison 1241
BogutAndrew 1062
BarbosaLeandro 946
SpeightsMarreese 826
ClarkIan 692
RushBrandon 685
EzeliFestus 559
McAdooJames Michael 182
VarejaoAnderson 67
LooneyKevon 22

Out-Degree (outDegree)

id outDegree
GreenDraymond 3841
CurryStephen 3300
IguodalaAndre 1896
LivingstonShaun 1878
BogutAndrew 1660
ThompsonKlay 1460
BarnesHarrison 1300
SpeightsMarreese 795
RushBrandon 772
EzeliFestus 765
BarbosaLeandro 758
ClarkIan 597
McAdooJames Michael 261
VarejaoAnderson 94
LooneyKevon 36

Label Propagation Algorithm

Label propagation is an algorithm for finding communities in a graph: each node repeatedly adopts the label that is most common among its neighbors until the labels settle.
Even with no labels to start from, it does a good job here of splitting the players into guards and forwards.

name label
Thompson, Klay 3
Barbosa, Leandro 3
Curry, Stephen 3
Clark, Ian 3
Livingston, Shaun 3
Rush, Brandon 7
Green, Draymond 7
Speights, Marreese 7
Bogut, Andrew 7
McAdoo, James Michael 7
Iguodala, Andre 7
Varejao, Anderson 7
Ezeli, Festus 7
Looney, Kevon 7
Barnes, Harrison 7
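
The table above is just the labelPropagation output with readable names attached; since GraphFrames returns the original vertex columns plus a label column, something like the following would reproduce it (a sketch that assumes the GraphFrame g built in the Graph Analysis section below):

# Sketch: community labels next to player names (g is built further below)
communities = g.labelPropagation(maxIter=5)
communities.select('name', 'label').orderBy('label').show()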

PageRank Algorithm

PageRank measures how important each node is within a network.
Unsurprisingly, Stephen Curry, Draymond Green, and Klay Thompson are the top three.
The algorithm also shows that Shaun Livingston and Andre Iguodala play key roles in the Warriors' passing game.

name pagerank
Curry, Stephen 2.17
Green, Draymond 1.99
Thompson, Klay 1.34
Livingston, Shaun 1.29
Iguodala, Andre 1.21
Barnes, Harrison 0.86
Bogut, Andrew 0.77
Barbosa, Leandro 0.72
Speights, Marreese 0.66
Clark, Ian 0.59
Rush, Brandon 0.57
Ezeli, Festus 0.48
McAdoo, James Michael 0.27
Varejao, Anderson 0.19
Looney, Kevon 0.16
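
For reference, these scores follow the standard PageRank recurrence, which the original post does not spell out. GraphFrames' resetProbability = 0.15 is the reset term ρ below (equivalently, a damping factor of 0.85), and the GraphX implementation behind it leaves the scores unnormalized, so they come out on the order of 1 per node rather than summing to 1:

PR(v) = \rho + (1 - \rho) \sum_{u \to v} \frac{PR(u)}{\mathrm{outdeg}(u)}, \qquad \rho = 0.15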

Example

(Interactive passing network visualization; the code that builds it is in the Network Visualization section below.)

  • Node size: PageRank score

  • Node color: community detected by label propagation

  • Link width: number of passes (made and received)

Workflow

Calling the API

I used the playerdashptpass endpoint and saved the data for every player on the team to local JSON files.
The data covers passes from the 2015-16 season.

import os

# Golden State Warriors player IDs
playerids = [201575,201578,2738,202691,101106,2760,2571,203949,203546,
203110,201939,203105,2733,1626172,203084]

# Call the API for each player and save the result as JSON
for playerid in playerids:
    os.system('curl "http://stats.nba.com/stats/playerdashptpass?'
        'DateFrom=&'
        'DateTo=&'
        'GameSegment=&'
        'LastNGames=0&'
        'LeagueID=00&'
        'Location=&'
        'Month=0&'
        'OpponentTeamID=0&'
        'Outcome=&'
        'PerMode=Totals&'
        'Period=0&'
        'PlayerID={playerid}&'
        'Season=2015-16&'
        'SeasonSegment=&'
        'SeasonType=Regular+Season&'
        'TeamID=0&'
        'VsConference=&'
        'VsDivision=" > {playerid}.json'.format(playerid=playerid))
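
Shelling out to curl works, but the same request can also be made directly from Python. Here is a sketch using the requests library (my own variant, not what the original post does; the stats.nba.com API tends to reject bare clients, so a browser-like User-Agent is included as a precaution):

import json
import requests

# Same query parameters as the curl call above, shown for a single player
params = {
    'DateFrom': '', 'DateTo': '', 'GameSegment': '', 'LastNGames': 0,
    'LeagueID': '00', 'Location': '', 'Month': 0, 'OpponentTeamID': 0,
    'Outcome': '', 'PerMode': 'Totals', 'Period': 0,
    'PlayerID': 201939,  # Stephen Curry
    'Season': '2015-16', 'SeasonSegment': '',
    'SeasonType': 'Regular Season', 'TeamID': 0,
    'VsConference': '', 'VsDivision': '',
}
resp = requests.get('http://stats.nba.com/stats/playerdashptpass',
                    params=params, headers={'User-Agent': 'Mozilla/5.0'})
with open('201939.json', 'w') as f:
    json.dump(resp.json(), f)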

JSON -> pandas DataFrame

Next, I combined the individual JSON files into a single DataFrame.

import json

import pandas as pd

# Read each player's JSON file and stack the rows into one DataFrame
raw = pd.DataFrame()
for playerid in playerids:
    with open("{playerid}.json".format(playerid=playerid)) as json_file:
        parsed = json.load(json_file)['resultSets'][0]
        raw = raw.append(
            pd.DataFrame(parsed['rowSet'], columns=parsed['headers']))

raw = raw.rename(columns={'PLAYER_NAME_LAST_FIRST': 'PLAYER'})

# Build a node id by stripping the comma and space from the player name
raw['id'] = raw['PLAYER'].str.replace(', ', '')

Preparing the Vertices and Edges

You need to prepare the data in the specific vertices-plus-edges format that GraphFrames in Spark expects. The vertices are the nodes of the graph, here the players and their IDs; the edges describe the relationships between those nodes. You can attach additional attributes such as weights, but I could not find a way to make them count in the later analysis. A workable trick is to expand each passer-receiver pair into one edge per pass, as the code below does, so the pass counts effectively act as weights. (Comments and discussion are welcome.)

# Build the vertices: one row per player (name + id)
pandas_vertices = raw[['PLAYER', 'id']].drop_duplicates()
pandas_vertices.columns = ['name', 'id']

# Build the edges: one row per pass from passer to receiver
pandas_edges = pd.DataFrame()
for passer in raw['id'].drop_duplicates():
    for receiver in raw[(raw['PASS_TO'].isin(raw['PLAYER'])) &
     (raw['id'] == passer)]['PASS_TO'].drop_duplicates():
        pandas_edges = pandas_edges.append(pd.DataFrame(
            {'passer': passer, 'receiver': receiver
            .replace(', ', '')},
            index=range(int(raw[(raw['id'] == passer) &
             (raw['PASS_TO'] == receiver)]['PASS'].values))))

pandas_edges.columns = ['src', 'dst']
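
Because every pass becomes its own row, counting duplicate (src, dst) pairs should give back the PASS totals from the raw data. A quick sanity check (a sketch of my own, using the Green-to-Curry pair as an example):

# Count how many identical edges were created per passer/receiver pair
edge_counts = pandas_edges.groupby(['src', 'dst']).size()
print(edge_counts.loc[('GreenDraymond', 'CurryStephen')])
# ...and compare with the PASS value in the raw data
print(raw[(raw['id'] == 'GreenDraymond') &
          (raw['PASS_TO'] == 'Curry, Stephen')]['PASS'].values)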

Graph Analysis

from graphframes import GraphFrame

# sqlContext is the SQLContext available in the PySpark shell / Databricks
vertices = sqlContext.createDataFrame(pandas_vertices)
edges = sqlContext.createDataFrame(pandas_edges)

# Analysis part
g = GraphFrame(vertices, edges)
print("vertices")
g.vertices.show()
print("edges")
g.edges.show()
print("inDegrees")
g.inDegrees.sort('inDegree', ascending=False).show()
print("outDegrees")
g.outDegrees.sort('outDegree', ascending=False).show()
print("degrees")
g.degrees.sort('degree', ascending=False).show()
print("labelPropagation")
g.labelPropagation(maxIter=5).show()
print("pageRank")
g.pageRank(resetProbability=0.15, tol=0.01).vertices.sort(
    'pagerank', ascending=False).show()

Network Visualization

After running gsw_passing_network.py from the GitHub repository, check that passes.csv, groups.csv, and size.csv are in your working directory; the R code below reads them and uses the networkD3 package to build a slick, interactive D3 visualization.
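
The export step itself lives in gsw_passing_network.py and is not reproduced in this post; a sketch of how the three files could be assembled from the objects built above (column names inferred from how the R code uses them) might look like this:

# Sketch only: write the three CSVs that the R script reads.
# passes.csv: passer, receiver and pass count per pair
# (keep only receivers on the roster, matching the edge construction above)
raw[raw['PASS_TO'].isin(raw['PLAYER'])][
    ['PLAYER', 'PASS_TO', 'PASS']].to_csv('passes.csv', index=False)

# groups.csv: player id, name and label-propagation community
g.labelPropagation(maxIter=5).select('id', 'name', 'label') \
    .toPandas().to_csv('groups.csv', index=False)

# size.csv: player id and PageRank score (the R code drops the first
# column with size[-1], which here is the pandas index)
g.pageRank(resetProbability=0.15, tol=0.01).vertices \
    .select('id', 'pagerank').toPandas().to_csv('size.csv')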

library(networkD3)

setwd('/Users/yuki/Documents/code_for_blog/gsw_passing_network')
passes <- read.csv("passes.csv")
groups <- read.csv("groups.csv")
size <- read.csv("size.csv")

# Convert player names to the 0-based indices forceNetwork expects,
# and scale pass counts down so link widths stay readable
passes$source <- as.numeric(as.factor(passes$PLAYER))-1
passes$target <- as.numeric(as.factor(passes$PASS_TO))-1
passes$PASS <- passes$PASS/50

# Keep the readable name as the node label, recode name/label as indices,
# and attach the PageRank scores (dropping size.csv's first column)
groups$nodeid <- groups$name
groups$name <- as.numeric(as.factor(groups$name))-1
groups$group <- as.numeric(as.factor(groups$label))-1
nodes <- merge(groups,size[-1],by="id")
# Exaggerate PageRank differences so node sizes are easier to tell apart
nodes$pagerank <- nodes$pagerank^2*100


forceNetwork(Links = passes,
             Nodes = nodes,
             Source = "source",
             fontFamily = "Arial",
             colourScale = JS("d3.scale.category10()"),
             Target = "target",
             Value = "PASS",
             NodeID = "nodeid",
             Nodesize = "pagerank",
             linkDistance = 350,
             Group = "group", 
             opacity = 0.8,
             fontSize = 16,
             zoom = TRUE,
             opacityNoHover = TRUE)

References

This article is translated by Harry Zhu with the permission of the original author, YUKI KATOH.
Original English article: http://opiateforthemass.es/ar...

As a believer in sharism, I publish all of my articles and images on the internet under a CC license. When reposting, please keep the author information and credit the author Harry Zhu's FinanceR column: https://segmentfault.com/blog... If source code is involved, please also credit the GitHub repo: https://github.com/harryprince. WeChat ID: harryzhustudio. For commercial use, please contact the author.
