Spark and Hive: shipping third-party Python dependencies

  1. Download the source code for the matching Python version: https://www.python.org/downloads/source/
  2. Build it:
# download
wget https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz
tar -zxvf Python-2.7.9.tgz
cd Python-2.7.9
# set the install prefix -- this directory is what gets packaged later
./configure --prefix=/home/tmp/python2.7.9
make && make install
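After `make install`, a quick sanity check run with the freshly built interpreter (e.g. `/home/tmp/python2.7.9/bin/python`) confirms the install landed under the `--prefix` given to `configure` -- a minimal sketch:

```python
import sys

# Run this with the newly installed interpreter, for example:
#   /home/tmp/python2.7.9/bin/python check.py
# sys.prefix should then report the --prefix passed to configure.
print(sys.executable)
print(sys.prefix)
assert sys.prefix  # non-empty on any working interpreter
```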

  3. Install the libraries you need; pykafka is used as the example below.

# use -t to install into the bundled path instead of the default
pip install -t /home/tmp/python2.7.9/lib/python2.7/site-packages pykafka
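A script can make the packages installed with `-t` importable by prepending that directory to `sys.path`, so the bundled copy wins over anything preinstalled on the cluster nodes. A sketch using the path from the step above:

```python
import sys

# site-packages directory populated by `pip install -t` above
bundled = "/home/tmp/python2.7.9/lib/python2.7/site-packages"

# Prepend so the bundled copy takes precedence over node-local packages.
if bundled not in sys.path:
    sys.path.insert(0, bundled)

assert sys.path[0] == bundled
# `import pykafka` would now resolve against the bundled directory
# (left commented out since pykafka may not be installed locally):
# import pykafka
```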

  4. Package it up

# Note: the archive is created from *inside* the install directory; this
# determines the interpreter path referenced later. If you package it
# differently, adjust those paths accordingly.
cd python2.7.9
tar -zcf python2.7.9.tgz *
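The point of running `tar` from inside the directory is that `bin/` ends up at the archive root, with no top-level wrapper directory. The same layout rule can be demonstrated with a toy tree (paths below are stand-ins, not the real install):

```python
import os
import tarfile
import tempfile

# Toy stand-in for the /home/tmp/python2.7.9 install tree (hypothetical layout).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "bin"))
open(os.path.join(root, "bin", "python"), "w").close()

# Equivalent of `cd python2.7.9 && tar -zcf python2.7.9.tgz *`:
# add each entry by its relative name so there is no top-level wrapper dir.
archive = os.path.join(tempfile.mkdtemp(), "python2.7.9.tgz")
with tarfile.open(archive, "w:gz") as tf:
    for entry in os.listdir(root):
        tf.add(os.path.join(root, entry), arcname=entry)

with tarfile.open(archive, "r:gz") as tf:
    names = tf.getnames()

# bin/ sits at the archive root, so ./<alias>/bin/python resolves after unpacking.
assert "bin/python" in names
```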

  5. Upload it to HDFS

hadoop fs -put python2.7.9.tgz /usr/jar/python

Spark YARN client mode

spark-submit --queue <yarn queue> --conf spark.yarn.dist.archives=hdfs://DClusterNmg4/user/xxx/xxx/python2.7.9.tgz#python2.7.9 --conf spark.pyspark.python=./python2.7.9/bin/python --deploy-mode client --py-files xxxx-dependency.py main.py
# the name after the # is the alias used to reference the unpacked archive
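The `#alias` mechanism in the command above can be illustrated as follows: YARN unpacks the archive into the container working directory under the alias name, and `spark.pyspark.python` then points inside it.

```python
# spark.yarn.dist.archives accepts "path#alias"; YARN unpacks the archive
# into the container working directory under the alias name.
archives = "hdfs://DClusterNmg4/user/xxx/xxx/python2.7.9.tgz#python2.7.9"
path, alias = archives.split("#")

# spark.pyspark.python then points inside the unpacked alias directory:
pyspark_python = "./{}/bin/python".format(alias)
assert pyspark_python == "./python2.7.9/bin/python"
```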

Spark YARN cluster mode

spark-submit --queue <yarn queue> --conf spark.yarn.dist.archives=hdfs://DClusterNmg4/user/xxx/xxx/python2.7.9.tgz#python2.7.9 --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./python2.7.9/bin/python --deploy-mode cluster --py-files xxxx-dependency.py main.py
# the name after the # is the alias used to reference the unpacked archive
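The two submit commands differ only in how the interpreter is selected: in cluster mode the driver also runs in a YARN container, so the interpreter is set through the application master's `PYSPARK_PYTHON` environment variable rather than `spark.pyspark.python`. Both point at the same path inside the unpacked alias:

```python
# Conf keys below are the real Spark properties from the two commands above;
# the values they carry are identical -- only the delivery mechanism changes.
client_conf = {"spark.pyspark.python": "./python2.7.9/bin/python"}
cluster_conf = {"spark.yarn.appMasterEnv.PYSPARK_PYTHON": "./python2.7.9/bin/python"}
assert set(client_conf.values()) == set(cluster_conf.values())
```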

Hive UDF mode

hive > add ARCHIVE /usr/python/anaconda2.tar.gz;
hive > add file /usr/test.py;
hive > select
     >    TRANSFORM(data)
     >    USING 'anaconda2.tar.gz/anaconda2/bin/python test.py'
     >    as (min_num)
     > from test_a;
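The `test.py` used in the TRANSFORM above is not shown in the original; a minimal sketch of what it might look like, assuming from the `min_num` output name that it emits a minimum (the exact logic is an assumption):

```python
import sys

def min_of(values):
    """Minimum of an iterable of numeric strings."""
    return min(float(v) for v in values)

def main(stream):
    # Hive TRANSFORM(data) feeds one tab-separated row per line on stdin;
    # whatever the script prints becomes the output column (min_num here).
    nums = [line.strip() for line in stream if line.strip()]
    if nums:
        print(min_of(nums))

if __name__ == "__main__" and not sys.stdin.isatty():
    main(sys.stdin)
```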