>>> d = [{'name': 'Alice', 'age': 1}]
>>> f = spark.createDataFrame(d)
>>> f.collect()
[Row(age=1, name=u'Alice')]
>>> from pyspark.sql import functions as F
Now we want to add a new column, newName:
>>> ff = f.withColumn('newName', '===')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/spark-current/python/pyspark/sql/dataframe.py", line 1619, in withColumn
    assert isinstance(col, Column), "col should be Column"
AssertionError: col should be Column
This raises an error: the second argument must be a Column, not a plain string.
>>> ff = f.withColumn('newName', F.col('name') + '===')
>>> ff.collect()
[Row(age=1, name=u'Alice', newName=None)]
No error this time, but the new column's value is None. For numeric types, however, this approach does work:
>>> ff = ff.withColumn('newAge', F.col('age') + 1)
>>> ff.collect()
[Row(age=1, name=u'Alice', newName=None, newAge=2)]
>>> ff = f.withColumn('newNameV2', F.lit('==='))
>>> ff.collect()
[Row(name=u'Alice', age=1, newNameV2=u'===')]
pyspark.sql.functions.lit() wraps the literal value in a Column directly, which is why this works.
Alternatively, convert the DataFrame to an RDD and add the column with a map().