Packages

  • package root
    Definition Classes
    root
  • package org
    Definition Classes
    root
  • package apache
    Definition Classes
    org
  • package spark

    Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection, and provides most parallel operations.

    In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g., RDD[(Int, Int)]) through implicit conversions.

    Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.

    Classes and methods marked with Experimental are user-facing features which have not been officially adopted by the Spark project. These are subject to change or removal in minor releases.

    Classes and methods marked with Developer API are intended for advanced users who want to extend Spark through lower-level interfaces. These are subject to change or removal in minor releases.

    Definition Classes
    apache
  • package ml

    DataFrame-based machine learning APIs to let users quickly assemble and configure practical machine learning pipelines.

    Definition Classes
    spark
  • package attribute

    ML attributes

    The ML pipeline API uses DataFrames as ML datasets. Each dataset consists of typed columns, e.g., string, double, vector, etc. However, knowing only the column type may not be sufficient to handle the data properly. For instance, a double column with values 0.0, 1.0, 2.0, ... may represent label indices, which cannot be treated as numeric values in ML algorithms; as another example, we may want to know the names and types of features stored in a vector column. ML attributes are used to provide additional information to describe columns in a dataset.

    ML columns

    A column with ML attributes attached is called an ML column. The data in ML columns are stored as double values, i.e., an ML column is either a scalar column of double values or a vector column. Columns of other types must be encoded into ML columns using transformers. We use Attribute to describe a scalar ML column, and AttributeGroup to describe a vector ML column. ML attributes are stored in the metadata field of the column schema.

    Definition Classes
    ml
  • package classification
    Definition Classes
    ml
  • package clustering
    Definition Classes
    ml
  • package evaluation
    Definition Classes
    ml
  • package feature

    The ml.feature package provides common feature transformers that help convert raw data or features into more suitable forms for model fitting.

    Definition Classes
    ml
    See also

    scikit-learn.preprocessing

  • Binarizer
  • BucketedRandomProjectionLSH
  • BucketedRandomProjectionLSHModel
  • Bucketizer
  • ChiSqSelector
  • ChiSqSelectorModel
  • CountVectorizer
  • CountVectorizerModel
  • DCT
  • ElementwiseProduct
  • FeatureHasher
  • HashingTF
  • IDF
  • IDFModel
  • Imputer
  • ImputerModel
  • IndexToString
  • Interaction
  • LabeledPoint
  • MaxAbsScaler
  • MaxAbsScalerModel
  • MinHashLSH
  • MinHashLSHModel
  • MinMaxScaler
  • MinMaxScalerModel
  • NGram
  • Normalizer
  • OneHotEncoder
  • OneHotEncoderModel
  • PCA
  • PCAModel
  • PolynomialExpansion
  • QuantileDiscretizer
  • RFormula
  • RFormulaModel
  • RegexTokenizer
  • SQLTransformer
  • StandardScaler
  • StandardScalerModel
  • StopWordsRemover
  • StringIndexer
  • StringIndexerModel
  • Tokenizer
  • VectorAssembler
  • VectorIndexer
  • VectorIndexerModel
  • VectorSizeHint
  • VectorSlicer
  • Word2Vec
  • Word2VecModel
  • package fpm
    Definition Classes
    ml
  • package image
    Definition Classes
    ml
  • package linalg
    Definition Classes
    ml
  • package param
    Definition Classes
    ml
  • package recommendation
    Definition Classes
    ml
  • package regression
    Definition Classes
    ml
  • package source
    Definition Classes
    ml
  • package stat
    Definition Classes
    ml
  • package tree
    Definition Classes
    ml
  • package tuning
    Definition Classes
    ml
  • package util
    Definition Classes
    ml

org.apache.spark.ml

feature

package feature

Feature transformers

The ml.feature package provides common feature transformers that help convert raw data or features into forms more suitable for model fitting. Most feature transformers are implemented as Transformers, which transform one DataFrame into another, e.g., HashingTF. Some feature transformers are implemented as Estimators, because the transformation requires aggregated information about the dataset, e.g., document frequencies in IDF. For those feature transformers, calling Estimator.fit is required first to obtain the model, e.g., IDFModel, which is then used to apply the transformation. The transformation is usually done by appending new columns to the input DataFrame, so all input columns are carried over.

Each transformer is kept minimal, so that feature transformation pipelines remain flexible to assemble. Pipeline can be used to chain feature transformers, and VectorAssembler can be used to combine multiple feature transformations. For example:

import org.apache.spark.ml.feature._
import org.apache.spark.ml.Pipeline

// a DataFrame with three columns: id (integer), text (string), and rating (double);
// `spark` here is an existing SparkSession
val df = spark.createDataFrame(Seq(
  (0, "Hi I heard about Spark", 3.0),
  (1, "I wish Java could use case classes", 4.0),
  (2, "Logistic regression models are neat", 4.0)
)).toDF("id", "text", "rating")

// define feature transformers
val tok = new RegexTokenizer()
  .setInputCol("text")
  .setOutputCol("words")
val sw = new StopWordsRemover()
  .setInputCol("words")
  .setOutputCol("filtered_words")
val tf = new HashingTF()
  .setInputCol("filtered_words")
  .setOutputCol("tf")
  .setNumFeatures(10000)
val idf = new IDF()
  .setInputCol("tf")
  .setOutputCol("tf_idf")
val assembler = new VectorAssembler()
  .setInputCols(Array("tf_idf", "rating"))
  .setOutputCol("features")

// assemble and fit the feature transformation pipeline
val pipeline = new Pipeline()
  .setStages(Array(tok, sw, tf, idf, assembler))
val model = pipeline.fit(df)

// save transformed features with raw data
model.transform(df)
  .select("id", "text", "rating", "features")
  .write.format("parquet").save("/output/path")

Some feature transformers implemented in MLlib are inspired by those implemented in scikit-learn. The major difference is that most scikit-learn feature transformers operate eagerly on the entire input dataset, while MLlib's feature transformers operate lazily on individual columns, which is more efficient and flexible for handling large and complex datasets.

See also

scikit-learn.preprocessing

Linear Supertypes
AnyRef, Any

Type Members

  1. final class Binarizer extends Transformer with HasInputCol with HasOutputCol with DefaultParamsWritable

    Binarize a column of continuous features given a threshold.

    Annotations
    @Since( "1.4.0" )
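    The thresholding rule is simple enough to sketch in plain Scala (an illustration of the semantics only; `binarize` is a hypothetical helper, not part of the Spark API):

```scala
// Binarizer semantics: values strictly greater than the threshold become 1.0,
// everything else (including the threshold itself) becomes 0.0.
def binarize(threshold: Double)(value: Double): Double =
  if (value > threshold) 1.0 else 0.0

val binarized = Seq(0.1, 0.5, 0.9).map(binarize(0.5))
// binarized == List(0.0, 0.0, 1.0)
```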
  2. class BucketedRandomProjectionLSH extends LSH[BucketedRandomProjectionLSHModel] with BucketedRandomProjectionLSHParams with HasSeed

    :: Experimental ::

    This BucketedRandomProjectionLSH implements Locality Sensitive Hashing functions for Euclidean distance metrics.

    The input is dense or sparse vectors, each of which represents a point in the Euclidean distance space. The output will be vectors of configurable dimension. Hash values in the same dimension are calculated by the same hash function.

    References:

    1. Wikipedia on Stable Distributions

    2. Wang, Jingdong et al. "Hashing for similarity search: A survey." arXiv preprint arXiv:1408.2927 (2014).

    Annotations
    @Experimental() @Since( "2.1.0" )
  3. class BucketedRandomProjectionLSHModel extends LSHModel[BucketedRandomProjectionLSHModel] with BucketedRandomProjectionLSHParams

    :: Experimental ::

    Model produced by BucketedRandomProjectionLSH, where multiple random vectors are stored. The vectors are normalized to be unit vectors and each vector is used in a hash function: h_i(x) = floor(r_i.dot(x) / bucketLength) where r_i is the i-th random unit vector. The number of buckets will be (max L2 norm of input vectors) / bucketLength.

    Annotations
    @Experimental() @Since( "2.1.0" )
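    The hash function above can be sketched in plain Scala (`brpHash` is a hypothetical helper; a real model stores several random unit vectors, one per hash function):

```scala
// h(x) = floor(r . x / bucketLength) for a random unit vector r
def brpHash(r: Array[Double], bucketLength: Double)(x: Array[Double]): Long = {
  val dot = r.zip(x).map { case (ri, xi) => ri * xi }.sum
  math.floor(dot / bucketLength).toLong
}

val r = Array(0.6, 0.8)                   // a unit vector
val h = brpHash(r, 2.0)(Array(1.0, 3.0))  // dot ≈ 3.0, so h = floor(1.5) = 1
```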
  4. final class Bucketizer extends Model[Bucketizer] with HasHandleInvalid with HasInputCol with HasOutputCol with HasInputCols with HasOutputCols with DefaultParamsWritable

    Bucketizer maps a column of continuous features to a column of feature buckets.

    Since 2.3.0, Bucketizer can map multiple columns at once by setting the inputCols parameter. Note that when both the inputCol and inputCols parameters are set, an Exception will be thrown. The splits parameter is only used for single column usage, and splitsArray is for multiple columns.

    Annotations
    @Since( "1.4.0" )
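    The single-column mapping can be sketched in plain Scala (`bucketize` is a hypothetical helper mirroring the documented bin boundaries, with the last bin inclusive of its upper bound):

```scala
// splits s0 < s1 < ... < sn define bins [s0, s1), [s1, s2), ..., [s(n-1), sn]
def bucketize(splits: Array[Double])(x: Double): Int =
  math.min(splits.lastIndexWhere(s => x >= s), splits.length - 2)

val splits  = Array(Double.NegativeInfinity, 0.0, 10.0, Double.PositiveInfinity)
val buckets = Seq(-5.0, 3.0, 42.0).map(bucketize(splits))
// buckets == List(0, 1, 2)
```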
  5. final class ChiSqSelector extends Estimator[ChiSqSelectorModel] with ChiSqSelectorParams with DefaultParamsWritable

    Chi-Squared feature selection, which selects categorical features to use for predicting a categorical label. The selector supports different selection methods: numTopFeatures, percentile, fpr, fdr, fwe.

    • numTopFeatures chooses a fixed number of top features according to a chi-squared test.
    • percentile is similar but chooses a fraction of all features instead of a fixed number.
    • fpr chooses all features whose p-value are below a threshold, thus controlling the false positive rate of selection.
    • fdr uses the Benjamini-Hochberg procedure (https://en.wikipedia.org/wiki/False_discovery_rate#Benjamini.E2.80.93Hochberg_procedure) to choose all features whose false discovery rate is below a threshold.
    • fwe chooses all features whose p-values are below a threshold. The threshold is scaled by 1/numFeatures, thus controlling the family-wise error rate of selection.

    By default, the selection method is numTopFeatures, with the default number of top features set to 50.
    Annotations
    @Since( "1.6.0" )
  6. final class ChiSqSelectorModel extends Model[ChiSqSelectorModel] with ChiSqSelectorParams with MLWritable

    Model fitted by ChiSqSelector.

    Annotations
    @Since( "1.6.0" )
  7. class CountVectorizer extends Estimator[CountVectorizerModel] with CountVectorizerParams with DefaultParamsWritable

    Extracts a vocabulary from document collections and generates a CountVectorizerModel.

    Annotations
    @Since( "1.5.0" )
  8. class CountVectorizerModel extends Model[CountVectorizerModel] with CountVectorizerParams with MLWritable

    Converts a text document to a sparse vector of token counts.

    Annotations
    @Since( "1.5.0" )
  9. class DCT extends UnaryTransformer[Vector, Vector, DCT] with DefaultParamsWritable

    A feature transformer that takes the 1D discrete cosine transform of a real vector. No zero padding is performed on the input vector. It returns a real vector of the same length representing the DCT. The return vector is scaled such that the transform matrix is unitary (aka scaled DCT-II).

    More information on DCT-II in Discrete cosine transform (Wikipedia).

    Annotations
    @Since( "1.5.0" )
  10. class ElementwiseProduct extends UnaryTransformer[Vector, Vector, ElementwiseProduct] with DefaultParamsWritable

    Outputs the Hadamard product (i.e., the element-wise product) of each input vector with a provided "weight" vector. In other words, it scales each column of the dataset by a scalar multiplier.

    Annotations
    @Since( "1.4.0" )
  11. class FeatureHasher extends Transformer with HasInputCols with HasOutputCol with DefaultParamsWritable

    Feature hashing projects a set of categorical or numerical features into a feature vector of specified dimension (typically substantially smaller than that of the original feature space). This is done using the hashing trick (https://en.wikipedia.org/wiki/Feature_hashing) to map features to indices in the feature vector.

    The FeatureHasher transformer operates on multiple columns. Each column may contain either numeric or categorical features. Behavior and handling of column data types is as follows:

    • Numeric columns: For numeric features, the hash value of the column name is used to map the feature value to its index in the feature vector. By default, numeric features are not treated as categorical (even when they are integers). To treat them as categorical, specify the relevant columns in categoricalCols.
    • String columns: For categorical features, the hash value of the string "column_name=value" is used to map to the vector index, with an indicator value of 1.0. Thus, categorical features are "one-hot" encoded (similarly to using OneHotEncoder with dropLast=false).
    • Boolean columns: Boolean values are treated in the same way as string columns. That is, boolean features are represented as "column_name=true" or "column_name=false", with an indicator value of 1.0.

    Null (missing) values are ignored (implicitly zero in the resulting feature vector).

    The hash function used here is also the MurmurHash 3 used in HashingTF. Since a simple modulo on the hashed value is used to determine the vector index, it is advisable to use a power of two as the numFeatures parameter; otherwise the features will not be mapped evenly to the vector indices.

    // assumes an existing SparkSession and `import spark.implicits._` for `.toDF`
    val df = Seq(
     (2.0, true, "1", "foo"),
     (3.0, false, "2", "bar")
    ).toDF("real", "bool", "stringNum", "string")
    
    val hasher = new FeatureHasher()
     .setInputCols("real", "bool", "stringNum", "string")
     .setOutputCol("features")
    
    hasher.transform(df).show(false)
    
    +----+-----+---------+------+------------------------------------------------------+
    |real|bool |stringNum|string|features                                              |
    +----+-----+---------+------+------------------------------------------------------+
    |2.0 |true |1        |foo   |(262144,[51871,63643,174475,253195],[1.0,1.0,2.0,1.0])|
    |3.0 |false|2        |bar   |(262144,[6031,80619,140467,174475],[1.0,1.0,1.0,3.0]) |
    +----+-----+---------+------+------------------------------------------------------+
    Annotations
    @Experimental() @Since( "2.3.0" )
  12. class HashingTF extends Transformer with HasInputCol with HasOutputCol with DefaultParamsWritable

    Maps a sequence of terms to their term frequencies using the hashing trick. Currently we use Austin Appleby's MurmurHash 3 algorithm (MurmurHash3_x86_32) to calculate the hash code value for the term object. Since a simple modulo is used to transform the hash function to a column index, it is advisable to use a power of two as the numFeatures parameter; otherwise the features will not be mapped evenly to the columns.

    Annotations
    @Since( "1.2.0" )
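    The index mapping can be sketched with the Scala standard library's MurmurHash3 (illustrative only: Spark hashes the term's UTF-8 bytes with a fixed seed, so actual indices differ; `termIndex` and `termFrequencies` are hypothetical helpers):

```scala
import scala.util.hashing.MurmurHash3

// term -> column index: hash, then a non-negative modulo numFeatures
def termIndex(numFeatures: Int)(term: String): Int = {
  val h = MurmurHash3.stringHash(term)
  ((h % numFeatures) + numFeatures) % numFeatures
}

// term frequencies keyed by hashed index; colliding terms share a bucket
def termFrequencies(numFeatures: Int)(terms: Seq[String]): Map[Int, Int] =
  terms.groupBy(termIndex(numFeatures)).map { case (i, ts) => i -> ts.length }

val tf = termFrequencies(16)(Seq("spark", "hash", "spark"))
```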
  13. final class IDF extends Estimator[IDFModel] with IDFBase with DefaultParamsWritable

    Compute the Inverse Document Frequency (IDF) given a collection of documents.

    Annotations
    @Since( "1.4.0" )
  14. class IDFModel extends Model[IDFModel] with IDFBase with MLWritable

    Model fitted by IDF.

    Annotations
    @Since( "1.4.0" )
  15. class Imputer extends Estimator[ImputerModel] with ImputerParams with DefaultParamsWritable

    :: Experimental :: Imputation estimator for completing missing values, either using the mean or the median of the columns in which the missing values are located. The input columns should be of DoubleType or FloatType. Currently Imputer does not support categorical features (SPARK-15041) and possibly creates incorrect values for a categorical feature.

    Note that the mean/median value is computed after filtering out missing values. All Null values in the input columns are treated as missing, and so are also imputed. For computing median, DataFrameStatFunctions.approxQuantile is used with a relative error of 0.001.

    Annotations
    @Experimental() @Since( "2.2.0" )
  16. class ImputerModel extends Model[ImputerModel] with ImputerParams with MLWritable

    :: Experimental :: Model fitted by Imputer.

    Annotations
    @Experimental() @Since( "2.2.0" )
  17. class IndexToString extends Transformer with HasInputCol with HasOutputCol with DefaultParamsWritable

    A Transformer that maps a column of indices back to a new column of corresponding string values. The index-string mapping is either from the ML attributes of the input column, or from user-supplied labels (which take precedence over ML attributes).

    Annotations
    @Since( "1.5.0" )
    See also

    StringIndexer for converting strings into indices

  18. class Interaction extends Transformer with HasInputCols with HasOutputCol with DefaultParamsWritable

    Implements the feature interaction transform. This transformer takes in Double and Vector type columns and outputs a flattened vector of their feature interactions. To handle interaction, we first one-hot encode any nominal features. Then, a vector of the feature cross-products is produced.

    For example, given the input feature values Double(2) and Vector(3, 4), the output would be Vector(6, 8) if all input features were numeric. If the first feature was instead nominal with four categories, the output would then be Vector(0, 0, 0, 0, 3, 4, 0, 0).

    Annotations
    @Since( "1.6.0" )
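    For the all-numeric case, the cross-product can be sketched in plain Scala (`interact` is a hypothetical helper; nominal inputs would be one-hot encoded before crossing):

```scala
// flattened cross-product of two numeric feature blocks
def interact(a: Seq[Double], b: Seq[Double]): Seq[Double] =
  for (x <- a; y <- b) yield x * y

val out = interact(Seq(2.0), Seq(3.0, 4.0))
// out == List(6.0, 8.0), matching the numeric example above
```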
  19. case class LabeledPoint(label: Double, features: Vector) extends Product with Serializable

    Class that represents the features and label of a data point.

    label

    Label for this data point.

    features

    List of features for this data point.

    Annotations
    @Since( "2.0.0" )
  20. class MaxAbsScaler extends Estimator[MaxAbsScalerModel] with MaxAbsScalerParams with DefaultParamsWritable

    Rescale each feature individually to range [-1, 1] by dividing through the largest maximum absolute value in each feature. It does not shift/center the data, and thus does not destroy any sparsity.

    Annotations
    @Since( "2.0.0" )
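    Per feature, the rescaling can be sketched in plain Scala (`maxAbsScale` is a hypothetical helper operating on one feature column):

```scala
// divide by the maximum absolute value; zeros stay zero, preserving sparsity
def maxAbsScale(column: Seq[Double]): Seq[Double] = {
  val maxAbs = column.map(math.abs).max
  if (maxAbs == 0.0) column else column.map(_ / maxAbs)
}

val scaled = maxAbsScale(Seq(-8.0, 0.0, 4.0))
// scaled == List(-1.0, 0.0, 0.5)
```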
  21. class MaxAbsScalerModel extends Model[MaxAbsScalerModel] with MaxAbsScalerParams with MLWritable

    Model fitted by MaxAbsScaler.

    Annotations
    @Since( "2.0.0" )
  22. class MinHashLSH extends LSH[MinHashLSHModel] with HasSeed

    :: Experimental ::

    LSH class for Jaccard distance.

    The input can be dense or sparse vectors, but it is more efficient if it is sparse. For example, Vectors.sparse(10, Array((2, 1.0), (3, 1.0), (5, 1.0))) means there are 10 elements in the space. This set contains elements 2, 3, and 5. Also, any input vector must have at least 1 non-zero index, and all non-zero values are treated as binary "1" values.

    References: Wikipedia on MinHash

    Annotations
    @Experimental() @Since( "2.1.0" )
  23. class MinHashLSHModel extends LSHModel[MinHashLSHModel]

    :: Experimental ::

    Model produced by MinHashLSH, where multiple hash functions are stored. Each hash function is picked from the following family of hash functions, where \(a_i\) and \(b_i\) are randomly chosen integers less than prime: \(h_i(x) = ((x \cdot a_i + b_i) \mod prime)\)

    This hash family is approximately min-wise independent according to the reference.

    Reference: Tom Bohman, Colin Cooper, and Alan Frieze. "Min-wise independent linear permutations." Electronic Journal of Combinatorics 7 (2000): R26.

    Annotations
    @Experimental() @Since( "2.1.0" )
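    One member of the hash family above can be sketched in plain Scala over the set of non-zero indices (`minHash` is a hypothetical helper; a real model stores several (a_i, b_i) pairs):

```scala
// h(x) = min over non-zero indices i of ((i * a + b) mod prime)
def minHash(a: Long, b: Long, prime: Long)(nonZeroIndices: Set[Int]): Long =
  nonZeroIndices.map(i => (i.toLong * a + b) % prime).min

// the set {2, 3, 5} from the sparse-vector example above
val h = minHash(a = 3, b = 7, prime = 101)(Set(2, 3, 5))
// candidates 13, 16, 22, so h == 13
```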
  24. class MinMaxScaler extends Estimator[MinMaxScalerModel] with MinMaxScalerParams with DefaultParamsWritable

    Rescale each feature individually to a common range [min, max] linearly using column summary statistics, which is also known as min-max normalization or Rescaling. The rescaled value for feature E is calculated as:

    $$ Rescaled(e_i) = \frac{e_i - E_{min}}{E_{max} - E_{min}} * (max - min) + min $$

    For the case \(E_{max} == E_{min}\), \(Rescaled(e_i) = 0.5 * (max + min)\).

    Annotations
    @Since( "1.5.0" )
    Note

    Since zero values will probably be transformed to non-zero values, output of the transformer will be DenseVector even for sparse input.

  25. class MinMaxScalerModel extends Model[MinMaxScalerModel] with MinMaxScalerParams with MLWritable

    Model fitted by MinMaxScaler.

    Annotations
    @Since( "1.5.0" )
  26. class NGram extends UnaryTransformer[Seq[String], Seq[String], NGram] with DefaultParamsWritable

    A feature transformer that converts the input array of strings into an array of n-grams. Null values in the input array are ignored. It returns an array of n-grams where each n-gram is represented by a space-separated string of words.

    When the input is empty, an empty array is returned. When the input array length is less than n (number of elements per n-gram), no n-grams are returned.

    Annotations
    @Since( "1.5.0" )
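    The transformation can be sketched in plain Scala (`ngrams` is a hypothetical helper mirroring the empty/short-input behavior; null handling is omitted):

```scala
// n-grams as space-joined sliding windows; inputs shorter than n yield nothing
def ngrams(n: Int)(words: Seq[String]): Seq[String] =
  words.sliding(n).filter(_.length == n).map(_.mkString(" ")).toSeq

val bigrams = ngrams(2)(Seq("to", "be", "or", "not"))
// bigrams == List("to be", "be or", "or not")
val short = ngrams(3)(Seq("hi"))  // empty: input shorter than n
```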
  27. class Normalizer extends UnaryTransformer[Vector, Vector, Normalizer] with DefaultParamsWritable

    Normalize a vector to have unit norm using the given p-norm.

    Annotations
    @Since( "1.4.0" )
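    The p-norm scaling can be sketched in plain Scala (`normalize` is a hypothetical helper; an all-zero vector is left unchanged):

```scala
// scale a vector to unit p-norm (p >= 1)
def normalize(p: Double)(v: Array[Double]): Array[Double] = {
  val norm = math.pow(v.map(x => math.pow(math.abs(x), p)).sum, 1.0 / p)
  if (norm == 0.0) v else v.map(_ / norm)
}

val unit = normalize(2.0)(Array(3.0, 4.0))
// unit ≈ Array(0.6, 0.8); its L2 norm is 1
```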
  28. class OneHotEncoder extends Estimator[OneHotEncoderModel] with OneHotEncoderBase with DefaultParamsWritable

    A one-hot encoder that maps a column of category indices to a column of binary vectors, with at most a single one-value per row that indicates the input category index. For example with 5 categories, an input value of 2.0 would map to an output vector of [0.0, 0.0, 1.0, 0.0]. The last category is not included by default (configurable via dropLast), because it makes the vector entries sum up to one, and hence linearly dependent. So an input value of 4.0 maps to [0.0, 0.0, 0.0, 0.0].

    Annotations
    @Since( "3.0.0" )
    Note

    This is different from scikit-learn's OneHotEncoder, which keeps all categories. The output vectors are sparse. When handleInvalid is configured to 'keep', an extra "category" indicating invalid values is added as the last category; so when dropLast is true, invalid values are encoded as an all-zeros vector.

    When encoding multiple columns using the inputCols and outputCols params, input/output cols come in pairs, specified by the order in the arrays, and each pair is treated independently.

    See also

    StringIndexer for converting categorical values into category indices

  29. class OneHotEncoderModel extends Model[OneHotEncoderModel] with OneHotEncoderBase with MLWritable

    Annotations
    @Since( "3.0.0" )
  30. class PCA extends Estimator[PCAModel] with PCAParams with DefaultParamsWritable

    PCA trains a model to project vectors to a lower dimensional space of the top k principal components.

    Annotations
    @Since( "1.5.0" )
  31. class PCAModel extends Model[PCAModel] with PCAParams with MLWritable

    Model fitted by PCA. Transforms vectors to a lower dimensional space.

    Annotations
    @Since( "1.5.0" )
  32. class PolynomialExpansion extends UnaryTransformer[Vector, Vector, PolynomialExpansion] with DefaultParamsWritable

    Perform feature expansion in a polynomial space. As described in Polynomial expansion (Wikipedia), "an expansion of a product of sums expresses it as a sum of products by using the fact that multiplication distributes over addition". Take a 2-variable feature vector as an example: (x, y); if we want to expand it with degree 2, we get (x, x * x, y, x * y, y * y).

    Annotations
    @Since( "1.4.0" )
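    The degree-2 example above can be written out in plain Scala (`expandDegree2` is a hypothetical helper for the 2-variable case; the real transformer handles any vector size and degree):

```scala
// degree-2 expansion of (x, y): (x, x*x, y, x*y, y*y)
def expandDegree2(x: Double, y: Double): Seq[Double] =
  Seq(x, x * x, y, x * y, y * y)

val expanded = expandDegree2(2.0, 3.0)
// expanded == List(2.0, 4.0, 3.0, 6.0, 9.0)
```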
  33. final class QuantileDiscretizer extends Estimator[Bucketizer] with QuantileDiscretizerBase with DefaultParamsWritable with HasInputCols with HasOutputCols

    QuantileDiscretizer takes a column with continuous features and outputs a column with binned categorical features.

    QuantileDiscretizer takes a column with continuous features and outputs a column with binned categorical features. The number of bins can be set using the numBuckets parameter. It is possible that the number of buckets used will be smaller than this value, for example, if there are too few distinct values of the input to create enough distinct quantiles.

    Since 2.3.0, QuantileDiscretizer can map multiple columns at once by setting the inputCols parameter. If both the inputCol and inputCols parameters are set, an Exception will be thrown. To specify the number of buckets for each column, the numBucketsArray parameter can be set; if the number of buckets should be the same across columns, numBuckets can be set as a convenience.

    NaN handling: null and NaN values will be ignored from the column during QuantileDiscretizer fitting. This will produce a Bucketizer model for making predictions. During the transformation, Bucketizer will raise an error when it finds NaN values in the dataset, but the user can also choose to either keep or remove NaN values within the dataset by setting handleInvalid. If the user chooses to keep NaN values, they will be handled specially and placed into their own bucket, for example, if 4 buckets are used, then non-NaN data will be put into buckets[0-3], but NaNs will be counted in a special bucket[4].

    Algorithm: The bin ranges are chosen using an approximate algorithm (see the documentation for org.apache.spark.sql.DataFrameStatFunctions.approxQuantile for a detailed description). The precision of the approximation can be controlled with the relativeError parameter. The lower and upper bin bounds will be -Infinity and +Infinity, covering all real values.

    Annotations
    @Since( "1.6.0" )
  34. class RFormula extends Estimator[RFormulaModel] with RFormulaBase with DefaultParamsWritable

    :: Experimental :: Implements the transforms required for fitting a dataset against an R model formula. Currently we support a limited subset of the R operators, including '~', '.', ':', '+', and '-'. Also see the R formula docs here: http://stat.ethz.ch/R-manual/R-patched/library/stats/html/formula.html

    The basic operators are:

    • ~ separate target and terms
    • + concat terms, "+ 0" means removing intercept
    • - remove a term, "- 1" means removing intercept
    • : interaction (multiplication for numeric values, or binarized categorical values)
    • . all columns except target

    Suppose a and b are double columns; we use the following simple examples to illustrate the effect of RFormula:

    • y ~ a + b means model y ~ w0 + w1 * a + w2 * b where w0 is the intercept and w1, w2 are coefficients.
    • y ~ a + b + a:b - 1 means model y ~ w1 * a + w2 * b + w3 * a * b where w1, w2, w3 are coefficients.

    RFormula produces a vector column of features and a double or string column of labels. As when formulas are used in R for linear regression, string input columns will be one-hot encoded and numeric columns will be cast to doubles. If the label column is of type string, it will first be transformed to double with StringIndexer. If the label column does not exist in the DataFrame, the output label column will be created from the specified response variable in the formula.
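    As an illustration only (not Spark's actual formula parser), the simplest formulas of the form y ~ a + b can be split into a response and its terms like this:

    ```scala
    // Toy parser for the simplest formulas "label ~ term + term + ...".
    // Spark's real parser also handles '.', ':' and '-'.
    def parseSimpleFormula(formula: String): (String, Seq[String]) = {
      val parts = formula.split("~").map(_.trim)
      (parts(0), parts(1).split("\\+").map(_.trim).toSeq)
    }

    val (label, terms) = parseSimpleFormula("y ~ a + b")
    ```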

    Annotations
    @Experimental() @Since( "1.5.0" )
  35. class RFormulaModel extends Model[RFormulaModel] with RFormulaBase with MLWritable

    :: Experimental :: Model fitted by RFormula.

    :: Experimental :: Model fitted by RFormula. Fitting is required to determine the factor levels of formula terms.

    Annotations
    @Experimental() @Since( "1.5.0" )
  36. class RegexTokenizer extends UnaryTransformer[String, Seq[String], RegexTokenizer] with DefaultParamsWritable

    A regex-based tokenizer that extracts tokens either by using the provided regex pattern to split the text (default) or by repeatedly matching the regex (if gaps is false).

    A regex-based tokenizer that extracts tokens either by using the provided regex pattern to split the text (default) or by repeatedly matching the regex (if gaps is false). An optional parameter also allows filtering tokens by a minimum length. The transformer returns an array of strings, which can be empty.
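    The two extraction modes can be sketched in plain Scala; this is a simplified stand-in for the transformer, not its actual implementation:

    ```scala
    // gaps = true: the pattern splits the text; gaps = false: repeated matches
    // of the pattern become the tokens. Tokens shorter than minTokenLength are
    // dropped either way.
    def tokenize(text: String, pattern: String, gaps: Boolean, minTokenLength: Int): Seq[String] = {
      val raw =
        if (gaps) pattern.r.split(text).toSeq
        else pattern.r.findAllIn(text).toSeq
      raw.filter(_.length >= minTokenLength)
    }

    val bySplit = tokenize("one  two three", "\\s+", gaps = true, minTokenLength = 1)
    val byMatch = tokenize("one  two three", "\\w+", gaps = false, minTokenLength = 4)
    ```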

    Annotations
    @Since( "1.4.0" )
  37. class SQLTransformer extends Transformer with DefaultParamsWritable

    Implements the transformations that are defined by a SQL statement.

    Implements the transformations that are defined by a SQL statement. Currently we only support SQL syntax like 'SELECT ... FROM __THIS__ ...' where '__THIS__' represents the underlying table of the input dataset. The select clause specifies the fields, constants, and expressions to display in the output; it can be any select clause that Spark SQL supports. Users can also use Spark SQL built-in functions and UDFs to operate on the selected columns. For example, SQLTransformer supports statements like:

    SELECT a, a + b AS a_b FROM __THIS__
    SELECT a, SQRT(b) AS b_sqrt FROM __THIS__ where a > 5
    SELECT a, b, SUM(c) AS c_sum FROM __THIS__ GROUP BY a, b
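    Conceptually, the transformer registers the input dataset as a temporary view and substitutes the generated view name for the __THIS__ placeholder; a sketch of that substitution (the view name here is made up):

    ```scala
    // Substitute the placeholder with the (hypothetical) generated view name,
    // producing a statement Spark SQL can execute against the registered view.
    val statement = "SELECT a, a + b AS a_b FROM __THIS__"
    val viewName  = "sqlTransformer_tmp_view" // illustrative, not Spark's real name
    val resolved  = statement.replace("__THIS__", viewName)
    ```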
    Annotations
    @Since( "1.6.0" )
  38. class StandardScaler extends Estimator[StandardScalerModel] with StandardScalerParams with DefaultParamsWritable

    Standardizes features by removing the mean and scaling to unit variance using column summary statistics on the samples in the training set.

    Standardizes features by removing the mean and scaling to unit variance using column summary statistics on the samples in the training set.

    The "unit std" is computed using the corrected sample standard deviation, which is computed as the square root of the unbiased sample variance.
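    The corrected sample standard deviation divides the squared deviations by n - 1 rather than n; a minimal Scala sketch of the standardization:

    ```scala
    // Remove the mean, then divide by the corrected (unbiased) sample
    // standard deviation: sqrt(sum((x - mean)^2) / (n - 1)).
    def standardize(xs: Seq[Double]): Seq[Double] = {
      val n = xs.length
      val mean = xs.sum / n
      val variance = xs.map(x => (x - mean) * (x - mean)).sum / (n - 1)
      val std = math.sqrt(variance)
      xs.map(x => (x - mean) / std)
    }

    val scaled = standardize(Seq(1.0, 2.0, 3.0))
    ```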

    Annotations
    @Since( "1.2.0" )
  39. class StandardScalerModel extends Model[StandardScalerModel] with StandardScalerParams with MLWritable

    Model fitted by StandardScaler.

    Model fitted by StandardScaler.

    Annotations
    @Since( "1.2.0" )
  40. class StopWordsRemover extends Transformer with HasInputCol with HasOutputCol with DefaultParamsWritable

    A feature transformer that filters out stop words from input.

    A feature transformer that filters out stop words from input.

    Annotations
    @Since( "1.5.0" )
    Note

    null values from input array are preserved unless adding null to stopWords explicitly.

    See also

    Stop words (Wikipedia)

  41. class StringIndexer extends Estimator[StringIndexerModel] with StringIndexerBase with DefaultParamsWritable

    A label indexer that maps a string column of labels to an ML column of label indices.

    A label indexer that maps a string column of labels to an ML column of label indices. If the input column is numeric, we cast it to string and index the string values. The indices are in [0, numLabels). By default, this is ordered by label frequencies so the most frequent label gets index 0. The ordering behavior is controlled by setting stringOrderType.
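    The default frequency-descending ordering can be sketched in plain Scala; the alphabetical tie-break below mirrors the documented behavior, but this is an illustration, not Spark's implementation:

    ```scala
    // frequencyDesc: most frequent label gets index 0; ties are broken
    // alphabetically by sorting on (-count, label).
    def fitStringIndex(labels: Seq[String]): Map[String, Int] =
      labels
        .groupBy(identity)
        .map { case (label, occ) => (label, occ.size) }
        .toSeq
        .sortBy { case (label, count) => (-count, label) }
        .map(_._1)
        .zipWithIndex
        .toMap

    val index = fitStringIndex(Seq("a", "b", "b", "c", "b", "c"))
    ```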

    Annotations
    @Since( "1.4.0" )
    See also

    IndexToString for the inverse transformation

  42. class StringIndexerModel extends Model[StringIndexerModel] with StringIndexerBase with MLWritable

    Model fitted by StringIndexer.

    Model fitted by StringIndexer.

    Annotations
    @Since( "1.4.0" )
    Note

    During transformation, if the input column does not exist, StringIndexerModel.transform would return the input dataset unmodified. This is a temporary fix for the case when target labels do not exist during prediction.

  43. class Tokenizer extends UnaryTransformer[String, Seq[String], Tokenizer] with DefaultParamsWritable

    A tokenizer that converts the input string to lowercase and then splits it by white spaces.

    A tokenizer that converts the input string to lowercase and then splits it by white spaces.
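    A close approximation in plain Scala (simplified; e.g. the handling of consecutive whitespace may differ from the real transformer):

    ```scala
    // Lowercase the input, then split it on runs of whitespace.
    def tokenize(text: String): Seq[String] =
      text.toLowerCase.split("\\s+").toSeq

    val tokens = tokenize("Spark ML feature Transformers")
    ```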

    Annotations
    @Since( "1.2.0" )
    See also

    RegexTokenizer

  44. class VectorAssembler extends Transformer with HasInputCols with HasOutputCol with HasHandleInvalid with DefaultParamsWritable

    A feature transformer that merges multiple columns into a vector column.

    A feature transformer that merges multiple columns into a vector column.

    This requires one pass over the entire dataset. If column lengths need to be inferred from the data, an additional call to the 'first' Dataset method is required; see the 'handleInvalid' parameter.

    Annotations
    @Since( "1.4.0" )
  45. class VectorIndexer extends Estimator[VectorIndexerModel] with VectorIndexerParams with DefaultParamsWritable

    Class for indexing categorical feature columns in a dataset of Vector.

    Class for indexing categorical feature columns in a dataset of Vector.

    This has 2 usage modes:

    • Automatically identify categorical features (default behavior)
      • This helps process a dataset of unknown vectors into a dataset with some continuous features and some categorical features. The choice between continuous and categorical is based upon a maxCategories parameter.
      • Set maxCategories to the maximum number of categories any categorical feature should have.
      • E.g.: Feature 0 has unique values {-1.0, 0.0}, and feature 1 values {1.0, 3.0, 5.0}. If maxCategories = 2, then feature 0 will be declared categorical and use indices {0, 1}, and feature 1 will be declared continuous.
    • Index all features, if all features are categorical
      • If maxCategories is set to be very large, then this will build an index of unique values for all features.
      • Warning: This can cause problems if features are continuous since this will collect ALL unique values to the driver.
      • E.g.: Feature 0 has unique values {-1.0, 0.0}, and feature 1 values {1.0, 3.0, 5.0}. If maxCategories is greater than or equal to 3, then both features will be declared categorical.

    This returns a model which can transform categorical features to use 0-based indices.

    Index stability:

    • This is not guaranteed to choose the same category index across multiple runs.
    • If a categorical feature includes value 0, then this is guaranteed to map value 0 to index 0. This maintains vector sparsity.
    • More stability may be added in the future.
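    The core fitting decision can be sketched in plain Scala against the example above (a simplification of the real implementation):

    ```scala
    // Per feature: if it has at most maxCategories distinct values, it is
    // categorical and gets a value-to-index map, with 0.0 (when present)
    // always mapped to index 0 to preserve sparsity; otherwise it stays
    // continuous (None).
    def fitCategoryMaps(rows: Seq[Array[Double]], maxCategories: Int): Seq[Option[Map[Double, Int]]] =
      (0 until rows.head.length).map { j =>
        val distinct = rows.map(_(j)).distinct.sorted
        if (distinct.length <= maxCategories) {
          val ordered =
            if (distinct.contains(0.0)) 0.0 +: distinct.filter(_ != 0.0)
            else distinct
          Some(ordered.zipWithIndex.toMap)
        } else None
      }

    // Feature 0 has values {-1.0, 0.0}, feature 1 has {1.0, 3.0, 5.0};
    // with maxCategories = 2 only feature 0 is declared categorical.
    val maps = fitCategoryMaps(
      Seq(Array(-1.0, 1.0), Array(0.0, 3.0), Array(0.0, 5.0)),
      maxCategories = 2)
    ```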

    TODO: Future extensions: The following functionality is planned for the future:

    • Preserve metadata in transform; if a feature's metadata is already present, do not recompute.
    • Specify certain features to not index, either via a parameter or via existing metadata.
    • Add warning if a categorical feature has only 1 category.
    Annotations
    @Since( "1.4.0" )
  46. class VectorIndexerModel extends Model[VectorIndexerModel] with VectorIndexerParams with MLWritable

    Model fitted by VectorIndexer.

    Model fitted by VectorIndexer. Transform categorical features to use 0-based indices instead of their original values.

    • Categorical features are mapped to indices.
    • Continuous features (columns) are left unchanged. This also appends metadata to the output column, marking features as Numeric (continuous), Nominal (categorical), or Binary (either continuous or categorical). Non-ML metadata is not carried over from the input to the output column.

    This maintains vector sparsity.

    Annotations
    @Since( "1.4.0" )
  47. class VectorSizeHint extends Transformer with HasInputCol with HasHandleInvalid with DefaultParamsWritable

    :: Experimental :: A feature transformer that adds size information to the metadata of a vector column.

    :: Experimental :: A feature transformer that adds size information to the metadata of a vector column. VectorAssembler needs size information for its input columns and cannot be used on streaming dataframes without this metadata.

    Note: VectorSizeHint modifies inputCol to include size metadata and does not have an outputCol.

    Annotations
    @Experimental() @Since( "2.3.0" )
  48. final class VectorSlicer extends Transformer with HasInputCol with HasOutputCol with DefaultParamsWritable

    This class takes a feature vector and outputs a new feature vector with a subarray of the original features.

    This class takes a feature vector and outputs a new feature vector with a subarray of the original features.

    The subset of features can be specified with either indices (setIndices()) or names (setNames()). At least one feature must be selected. Duplicate features are not allowed, so there can be no overlap between selected indices and names.

    The output vector will order features with the selected indices first (in the order given), followed by the selected names (in the order given).
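    The output ordering can be sketched in plain Scala; name resolution against the column's ML attributes is modelled here as a simple array lookup:

    ```scala
    // Selected indices come first (in the given order), then the features
    // selected by name (in the given order).
    def slice(features: Array[Double],
              featureNames: Array[String],
              indices: Array[Int],
              names: Array[String]): Array[Double] = {
      val byName = names.map(featureNames.indexOf(_))
      (indices ++ byName).map(features(_))
    }

    val out = slice(
      features     = Array(10.0, 20.0, 30.0, 40.0),
      featureNames = Array("f0", "f1", "f2", "f3"),
      indices      = Array(2, 0),
      names        = Array("f3"))
    ```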

    Annotations
    @Since( "1.5.0" )
  49. final class Word2Vec extends Estimator[Word2VecModel] with Word2VecBase with DefaultParamsWritable

    Word2Vec trains a model of Map(String, Vector), i.e.

    Word2Vec trains a model of Map(String, Vector), i.e. it transforms a word into a code for further natural language processing or machine learning processing.

    Annotations
    @Since( "1.4.0" )
  50. class Word2VecModel extends Model[Word2VecModel] with Word2VecBase with MLWritable

    Model fitted by Word2Vec.

    Model fitted by Word2Vec.

    Annotations
    @Since( "1.4.0" )

Value Members

  1. object Binarizer extends DefaultParamsReadable[Binarizer] with Serializable
    Annotations
    @Since( "1.6.0" )
  2. object BucketedRandomProjectionLSH extends DefaultParamsReadable[BucketedRandomProjectionLSH] with Serializable
    Annotations
    @Since( "2.1.0" )
  3. object BucketedRandomProjectionLSHModel extends MLReadable[BucketedRandomProjectionLSHModel] with Serializable
    Annotations
    @Since( "2.1.0" )
  4. object Bucketizer extends DefaultParamsReadable[Bucketizer] with Serializable
    Annotations
    @Since( "1.6.0" )
  5. object ChiSqSelector extends DefaultParamsReadable[ChiSqSelector] with Serializable
    Annotations
    @Since( "1.6.0" )
  6. object ChiSqSelectorModel extends MLReadable[ChiSqSelectorModel] with Serializable
    Annotations
    @Since( "1.6.0" )
  7. object CountVectorizer extends DefaultParamsReadable[CountVectorizer] with Serializable
    Annotations
    @Since( "1.6.0" )
  8. object CountVectorizerModel extends MLReadable[CountVectorizerModel] with Serializable
    Annotations
    @Since( "1.6.0" )
  9. object DCT extends DefaultParamsReadable[DCT] with Serializable
    Annotations
    @Since( "1.6.0" )
  10. object ElementwiseProduct extends DefaultParamsReadable[ElementwiseProduct] with Serializable
    Annotations
    @Since( "2.0.0" )
  11. object FeatureHasher extends DefaultParamsReadable[FeatureHasher] with Serializable
    Annotations
    @Since( "2.3.0" )
  12. object HashingTF extends DefaultParamsReadable[HashingTF] with Serializable
    Annotations
    @Since( "1.6.0" )
  13. object IDF extends DefaultParamsReadable[IDF] with Serializable
    Annotations
    @Since( "1.6.0" )
  14. object IDFModel extends MLReadable[IDFModel] with Serializable
    Annotations
    @Since( "1.6.0" )
  15. object Imputer extends DefaultParamsReadable[Imputer] with Serializable
    Annotations
    @Since( "2.2.0" )
  16. object ImputerModel extends MLReadable[ImputerModel] with Serializable
    Annotations
    @Since( "2.2.0" )
  17. object IndexToString extends DefaultParamsReadable[IndexToString] with Serializable
    Annotations
    @Since( "1.6.0" )
  18. object Interaction extends DefaultParamsReadable[Interaction] with Serializable
    Annotations
    @Since( "1.6.0" )
  19. object MaxAbsScaler extends DefaultParamsReadable[MaxAbsScaler] with Serializable
    Annotations
    @Since( "2.0.0" )
  20. object MaxAbsScalerModel extends MLReadable[MaxAbsScalerModel] with Serializable
    Annotations
    @Since( "2.0.0" )
  21. object MinHashLSH extends DefaultParamsReadable[MinHashLSH] with Serializable
    Annotations
    @Since( "2.1.0" )
  22. object MinHashLSHModel extends MLReadable[MinHashLSHModel] with Serializable
    Annotations
    @Since( "2.1.0" )
  23. object MinMaxScaler extends DefaultParamsReadable[MinMaxScaler] with Serializable
    Annotations
    @Since( "1.6.0" )
  24. object MinMaxScalerModel extends MLReadable[MinMaxScalerModel] with Serializable
    Annotations
    @Since( "1.6.0" )
  25. object NGram extends DefaultParamsReadable[NGram] with Serializable
    Annotations
    @Since( "1.6.0" )
  26. object Normalizer extends DefaultParamsReadable[Normalizer] with Serializable
    Annotations
    @Since( "1.6.0" )
  27. object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] with Serializable
    Annotations
    @Since( "3.0.0" )
  28. object OneHotEncoderModel extends MLReadable[OneHotEncoderModel] with Serializable
    Annotations
    @Since( "3.0.0" )
  29. object PCA extends DefaultParamsReadable[PCA] with Serializable
    Annotations
    @Since( "1.6.0" )
  30. object PCAModel extends MLReadable[PCAModel] with Serializable
    Annotations
    @Since( "1.6.0" )
  31. object PolynomialExpansion extends DefaultParamsReadable[PolynomialExpansion] with Serializable

    The expansion is done via recursion.

    The expansion is done via recursion. Given n features and degree d, the size after expansion is (n + d choose d) (including 1 and first-order values). For example, let f([a, b, c], 3) be the function that expands [a, b, c] to their monomials of degree 3. We have the following recursion:

    $$ f([a, b, c], 3) = f([a, b], 3) ++ f([a, b], 2) * c ++ f([a, b], 1) * c^2 ++ [c^3] $$

    To handle sparsity, if c is zero, we can skip all monomials that contain it. We remember the current index and increment it properly for sparse input.
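    The recursion can be written symbolically in a few lines of Scala; this is a dense sketch just to show the counting, whereas the real implementation is numeric and sparse-aware:

    ```scala
    // Expand a variable list to all monomials of degree <= d following the
    // recursion above: peel off the last variable c and, for each power k of
    // c, recurse on the remaining variables with degree budget d - k.
    def expand(vars: List[String], d: Int): List[String] =
      if (d == 0 || vars.isEmpty) List("1")
      else {
        val c = vars.last
        (0 to d).toList.flatMap { k =>
          expand(vars.init, d - k).map { m =>
            if (k == 0) m
            else if (m == "1") c * k // "c" * 2 == "cc"
            else m + c * k
          }
        }
      }

    // 3 variables, degree 3: (3 + 3 choose 3) = 20 monomials, including "1".
    val monomials = expand(List("a", "b", "c"), 3)
    ```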

    Annotations
    @Since( "1.6.0" )
  32. object QuantileDiscretizer extends DefaultParamsReadable[QuantileDiscretizer] with Logging with Serializable
    Annotations
    @Since( "1.6.0" )
  33. object RFormula extends DefaultParamsReadable[RFormula] with Serializable
    Annotations
    @Since( "2.0.0" )
  34. object RFormulaModel extends MLReadable[RFormulaModel] with Serializable
    Annotations
    @Since( "2.0.0" )
  35. object RegexTokenizer extends DefaultParamsReadable[RegexTokenizer] with Serializable
    Annotations
    @Since( "1.6.0" )
  36. object SQLTransformer extends DefaultParamsReadable[SQLTransformer] with Serializable
    Annotations
    @Since( "1.6.0" )
  37. object StandardScaler extends DefaultParamsReadable[StandardScaler] with Serializable
    Annotations
    @Since( "1.6.0" )
  38. object StandardScalerModel extends MLReadable[StandardScalerModel] with Serializable
    Annotations
    @Since( "1.6.0" )
  39. object StopWordsRemover extends DefaultParamsReadable[StopWordsRemover] with Serializable
    Annotations
    @Since( "1.6.0" )
  40. object StringIndexer extends DefaultParamsReadable[StringIndexer] with Serializable
    Annotations
    @Since( "1.6.0" )
  41. object StringIndexerModel extends MLReadable[StringIndexerModel] with Serializable
    Annotations
    @Since( "1.6.0" )
  42. object Tokenizer extends DefaultParamsReadable[Tokenizer] with Serializable
    Annotations
    @Since( "1.6.0" )
  43. object VectorAssembler extends DefaultParamsReadable[VectorAssembler] with Serializable
    Annotations
    @Since( "1.6.0" )
  44. object VectorIndexer extends DefaultParamsReadable[VectorIndexer] with Serializable
    Annotations
    @Since( "1.6.0" )
  45. object VectorIndexerModel extends MLReadable[VectorIndexerModel] with Serializable
    Annotations
    @Since( "1.6.0" )
  46. object VectorSizeHint extends DefaultParamsReadable[VectorSizeHint] with Serializable

    :: Experimental ::

    :: Experimental ::

    Annotations
    @Experimental() @Since( "2.3.0" )
  47. object VectorSlicer extends DefaultParamsReadable[VectorSlicer] with Serializable
    Annotations
    @Since( "1.6.0" )
  48. object Word2Vec extends DefaultParamsReadable[Word2Vec] with Serializable
    Annotations
    @Since( "1.6.0" )
  49. object Word2VecModel extends MLReadable[Word2VecModel] with Serializable
    Annotations
    @Since( "1.6.0" )
