Converting Continuous Features to Discrete Features in Spark

When the data volume is large, classification tasks typically ensemble [discrete features + LR] with [continuous features + xgboost]; feeding continuous features directly into LR or a decision tree easily leads to overfitting.
If you still want to use the continuous features, ensembling several algorithms is one option, but it complicates the pipeline and makes training very time-consuming. Without losing much feature information, you can instead convert the continuous features into discrete features and feed those into the LR model.

转换特征分红两种状况:算法

  • Case 1: the features have not yet been converted into the vector format required for training; each feature is still a separate column, and these individual columns need to be discretized into buckets.
  • Case 2: all features have already been converted into the vector format required for training, but the encoding of the discrete features is messy, e.g. the codes are [10, 15, 128, …] and need to be remapped to [0, 1, 2, …]. Here all features have been merged into a single vector column that mixes discrete and continuous features, so the discrete features must first be identified and then normalized.

1. Case 1

1.1. Binarization

Binarization is the process of thresholding numerical features to binary (0/1) features.

Binarizer takes the common parameters inputCol and outputCol, as well as the threshold for binarization. Feature values greater than the threshold are binarized to 1.0; values equal to or less than the threshold are binarized to 0.0. Both Vector and Double types are supported for inputCol.

import org.apache.spark.ml.feature.Binarizer

val data = Array((0, 0.1), (1, 0.8), (2, 0.2))
val dataFrame = spark.createDataFrame(data).toDF("id", "feature")

val binarizer: Binarizer = new Binarizer()
  .setInputCol("feature")
  .setOutputCol("binarized_feature")
  .setThreshold(0.5)

val binarizedDataFrame = binarizer.transform(dataFrame)

println(s"Binarizer output with Threshold = ${binarizer.getThreshold}")
binarizedDataFrame.show()
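With the threshold set to 0.5, the values 0.1 and 0.2 fall below it and become 0.0 in the binarized_feature column, while 0.8 becomes 1.0.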

1.2. Multi-way Conversion (Bucketizer)

Bucketizer transforms a column of continuous features to a column of feature buckets, where the buckets are specified by users. It takes a parameter:
splits: Parameter for mapping continuous features into buckets. With n+1 splits, there are n buckets. A bucket defined by splits x, y holds values in the range [x, y) except the last bucket, which also includes y. Splits should be strictly increasing. Values at -inf, inf must be explicitly provided to cover all Double values; otherwise, values outside the splits specified will be treated as errors. Two examples of splits are Array(Double.NegativeInfinity, 0.0, 1.0, Double.PositiveInfinity) and Array(0.0, 1.0, 2.0).

For binarization you give a single threshold; for multi-way conversion into n buckets you give an array of n+1 thresholds. Any value then falls into the interval between two adjacent thresholds, as if dropped into the bucket it belongs to, hence the name bucketing.
For two adjacent thresholds x and y, the interval they define is half-open, [x, y); only the last interval is closed on both ends.
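For example, with splits Array(0.0, 1.0, 2.0) there are two buckets: values in [0.0, 1.0) go to bucket 0 and values in [1.0, 2.0] go to bucket 1; 2.0 itself is included because the last bucket is closed on the right.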

The elements of the splits array must be strictly increasing. If some value is not covered by any interval during the transformation, an error is raised, so if you are unsure of the minimum and maximum feature values, add Double.NegativeInfinity and Double.PositiveInfinity at the two ends of the array.

Note that if you have no idea of the upper and lower bounds of the targeted column, you should add Double.NegativeInfinity and Double.PositiveInfinity as the bounds of your splits to prevent a potential out of Bucketizer bounds exception.
Note also that the splits that you provided have to be in strictly increasing order, i.e. s0 < s1 < s2 < ... < sn.

import org.apache.spark.ml.feature.Bucketizer

val splits = Array(Double.NegativeInfinity, -0.5, 0.0, 0.5, Double.PositiveInfinity)

val data = Array(-999.9, -0.5, -0.3, 0.0, 0.2, 999.9)
val dataFrame = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

val bucketizer = new Bucketizer()
  .setInputCol("features")
  .setOutputCol("bucketedFeatures")
  .setSplits(splits)

// Transform original data into its bucket index.
val bucketedData = bucketizer.transform(dataFrame)
println(s"Bucketizer output with ${bucketizer.getSplits.length - 1} buckets")
bucketedData.show()

val splitsArray = Array(
  Array(Double.NegativeInfinity, -0.5, 0.0, 0.5, Double.PositiveInfinity),
  Array(Double.NegativeInfinity, -0.3, 0.0, 0.3, Double.PositiveInfinity))

val data2 = Array(
  (-999.9, -999.9),
  (-0.5, -0.2),
  (-0.3, -0.1),
  (0.0, 0.0),
  (0.2, 0.4),
  (999.9, 999.9))
val dataFrame2 = spark.createDataFrame(data2).toDF("features1", "features2")

val bucketizer2 = new Bucketizer()
  .setInputCols(Array("features1", "features2"))
  .setOutputCols(Array("bucketedFeatures1", "bucketedFeatures2"))
  .setSplitsArray(splitsArray)

// Transform original data into its bucket index.
val bucketedData2 = bucketizer2.transform(dataFrame2)
println(s"Bucketizer output with [" +
  s"${bucketizer2.getSplitsArray(0).length - 1}, " +
  s"${bucketizer2.getSplitsArray(1).length - 1}] buckets for each input column")
bucketedData2.show()

Wrapped as a callable function:

  // Discretize a continuous feature into multiple buckets, given an explicit splits array
  def Bucketizer_multi_class(df: DataFrame, InputCol: String, OutputCol: String, Splits: Array[Double]): DataFrame = {
    import org.apache.spark.ml.feature.Bucketizer
    val bucketizer = new Bucketizer()
      .setHandleInvalid("skip")
      .setInputCol(InputCol)
      .setOutputCol(OutputCol)
      .setSplits(Splits)
    println("\n\n*********Buckets: " + (Splits.length - 1) + "***********Input column: " + InputCol + "**********Output column: " + OutputCol + "**********\n\n")
    // Bucketizer is a Transformer, so it is applied directly, without a fit() step
    val result = bucketizer.transform(df)
    result.show(false)
    result
  }
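A hypothetical call, assuming df has a continuous column named hour:

  val bucketed = Bucketizer_multi_class(df, "hour", "hour_bucket",
    Array(Double.NegativeInfinity, 0.0, 10.0, Double.PositiveInfinity))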

1.3. QuantileDiscretizer (Quantile Discretization)

QuantileDiscretizer takes a column with continuous features and outputs a column with binned categorical features. The number of bins is set by the numBuckets parameter. It is possible that the number of buckets used will be smaller than this value, for example, if there are too few distinct values of the input to create enough distinct quantiles.

NaN values: NaN values will be removed from the column during QuantileDiscretizer fitting. This will produce a Bucketizer model for making predictions. During the transformation, Bucketizer will raise an error when it finds NaN values in the dataset, but the user can also choose to either keep or remove NaN values within the dataset by setting handleInvalid. If the user chooses to keep NaN values, they will be handled specially and placed into their own bucket, for example, if 4 buckets are used, then non-NaN data will be put into buckets[0-3], but NaNs will be counted in a special bucket[4].

Algorithm: The bin ranges are chosen using an approximate algorithm (see the documentation for approxQuantile for a detailed description). The precision of the approximation can be controlled with the relativeError parameter. When set to zero, exact quantiles are calculated (Note: Computing exact quantiles is an expensive operation). The lower and upper bin bounds will be -Infinity and +Infinity covering all real values.

QuantileDiscretizer (quantile discretization) takes a sample of the data and divides it into roughly equal parts to set the bin ranges; the overall lower bound is -Infinity and the upper bound is +Infinity.
The number of buckets is set by the numBuckets parameter, but the number actually used may be smaller: if the sample data can only be split into n distinct intervals, setting numBuckets to n+1 still yields only n intervals.
The bin ranges are decided by an approximate algorithm whose precision is controlled by the relativeError parameter. When relativeError is set to 0, exact quantiles are computed (computationally expensive; the default is usually fine). relativeError must be in [0, 1] and defaults to 0.001.
By default, the bucketizer raises an error when it encounters a NaN value. The handleInvalid parameter lets you keep or drop NaN values instead; if they are kept, NaN rows are placed in a special bucket of their own.
The options for handleInvalid are 'skip' (filter out rows with invalid values), 'error' (throw an error), and 'keep' (put invalid values in a special extra bucket); the default is 'error'.

import org.apache.spark.ml.feature.QuantileDiscretizer

val data = Array((0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, 2.2))
val df = spark.createDataFrame(data).toDF("id", "hour")

val discretizer = new QuantileDiscretizer()
  .setHandleInvalid("skip")
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(3)

val result = discretizer.fit(df).transform(df)
result.show(false)
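The example above simply skips invalid rows. As a minimal sketch of the 'keep' behavior described earlier (assuming Spark 2.1+, where handleInvalid is available), NaN rows end up in their own extra bucket:

import org.apache.spark.ml.feature.QuantileDiscretizer

val dfNaN = spark.createDataFrame(
  Seq((0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, Double.NaN))
).toDF("id", "hour")

val keepDiscretizer = new QuantileDiscretizer()
  .setHandleInvalid("keep")   // NaN rows go into an extra bucket instead of raising an error
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(3)

// With 3 regular buckets (indices 0-2), the NaN row is assigned bucket index 3
keepDiscretizer.fit(dfNaN).transform(dfNaN).show()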

Wrapped for reuse:

  // Discretize a continuous feature into multiple buckets
  def QuantileDiscretizer_multi_class(df: DataFrame, InputCol: String, OutputCol: String, NumBuckets: Int): DataFrame = {
    import org.apache.spark.ml.feature.QuantileDiscretizer
    val discretizer = new QuantileDiscretizer()
      .setHandleInvalid("skip")
      .setInputCol(InputCol)
      .setOutputCol(OutputCol)
      .setNumBuckets(NumBuckets)
    println("\n\n*********Buckets: " + NumBuckets + "***********Input column: " + InputCol + "**********Output column: " + OutputCol + "**********\n\n")
    val result = discretizer.fit(df).transform(df)
    result.show(false)
    result
  }
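A hypothetical call, again assuming a continuous column hour in df:

  val discretized = QuantileDiscretizer_multi_class(df, "hour", "hour_bucket", 3)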

In practice it is not advisable to run this on the full dataset directly: the full dataset is usually very large and the cluster frequently runs into problems with this function. Instead, fit only on the training set (or on a sample of the full data), save the fitted model, and use it to transform the full dataset.
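A minimal sketch of that workflow; trainDF, fullDF, and the save path are placeholders. Note that fitting a QuantileDiscretizer produces a Bucketizer, which can be saved and reloaded:

  import org.apache.spark.ml.feature.{Bucketizer, QuantileDiscretizer}

  // Fit on the (smaller) training set or on a sample of the full data
  val bucketModel: Bucketizer = new QuantileDiscretizer()
    .setHandleInvalid("skip")
    .setInputCol("hour")             // hypothetical input column
    .setOutputCol("hour_bucket")
    .setNumBuckets(3)
    .fit(trainDF)                    // trainDF: placeholder training DataFrame

  // Persist the fitted model, then apply it to the full data without refitting
  bucketModel.write.overwrite().save("/path/to/bucket_model")   // placeholder path
  val loaded = Bucketizer.load("/path/to/bucket_model")
  val fullDiscretized = loaded.transform(fullDF)                // fullDF: placeholder full DataFrame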

2. Case 2

2.1. Vectors to Normalized Discrete Features: VectorIndexer

VectorIndexer scans a vector column, treats every dimension with at most maxCategories distinct values as categorical, and re-encodes the values of those dimensions as indices starting from 0; dimensions with more distinct values are treated as continuous and left unchanged.

import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.feature.VectorIndexer;
import org.apache.spark.ml.feature.VectorIndexerModel;

VectorIndexerModel featureIndexerModel = new VectorIndexer()
        .setInputCol("features")    // the feature column
        .setMaxCategories(5)        // dimensions with more than 5 distinct values are treated as continuous and not converted
        .setOutputCol("indexedFeatures")
        .fit(rawData);

// Add it to a Pipeline (labelIndexerModel, dtClassifier and converter are defined elsewhere)
Pipeline pipeline = new Pipeline()
        .setStages(new PipelineStage[] {labelIndexerModel, featureIndexerModel, dtClassifier, converter});
pipeline.fit(rawData).transform(rawData).select("features", "indexedFeatures").show(20, false);
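This matches the second case described at the start: a dimension whose values are, say, {10, 15, 128} has only 3 distinct values, so with maxCategories = 5 it is recognized as categorical and re-encoded as {0, 1, 2}, while a genuinely continuous dimension passes through untouched.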

2.2. Strings to Discrete Features: StringIndexer

import org.apache.spark.ml.feature.StringIndexer

val df = spark.createDataFrame(
  Seq((0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"), (5, "c"))
).toDF("id", "category")

val indexer = new StringIndexer()
  .setInputCol("category")        // each label is replaced by an index; indices are ordered by label frequency, so the most frequent label gets index 0
  .setOutputCol("categoryIndex")
  .setHandleInvalid("skip")       // for labels unseen during fit: "error" throws, "skip" drops those rows

val indexed = indexer.fit(df).transform(df)
indexed.show()
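With the default frequency ordering, "a" (3 occurrences) gets index 0.0, "c" (2 occurrences) gets 1.0, and "b" (1 occurrence) gets 2.0.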