class LocalLDAModel extends LDAModel with Serializable

Local LDA model. This model stores only the inferred topics.
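
A minimal usage sketch (the variable names and toy corpus are illustrative; `sc` is assumed to be an existing SparkContext). With the online optimizer, LDA.run is expected to return a LocalLDAModel directly; the EM optimizer returns a DistributedLDAModel, which can be converted with toLocal.

  import org.apache.spark.mllib.clustering.{LDA, LocalLDAModel}
  import org.apache.spark.mllib.linalg.{Vector, Vectors}
  import org.apache.spark.rdd.RDD

  // Toy corpus: (document ID, term-count vector) over a 4-term vocabulary.
  // `sc` is an existing SparkContext (assumed).
  val corpus: RDD[(Long, Vector)] = sc.parallelize(Seq(
    (0L, Vectors.dense(1.0, 2.0, 0.0, 1.0)),
    (1L, Vectors.dense(0.0, 1.0, 3.0, 0.0))
  ))

  val model = new LDA()
    .setK(2)
    .setOptimizer("online")
    .run(corpus)
    .asInstanceOf[LocalLDAModel]

  println(s"k = ${model.k}, vocabSize = ${model.vocabSize}")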

Annotations
@Since( "1.3.0" )
Linear Supertypes
Serializable, Serializable, LDAModel, Saveable, AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )
  6. def describeTopics(maxTermsPerTopic: Int): Array[(Array[Int], Array[Double])]

    Return the topics described by weighted terms.

    maxTermsPerTopic

    Maximum number of terms to collect for each topic.

    returns

    Array over topics. Each topic is represented as a pair of matching arrays: (term indices, term weights in topic). Each topic's terms are sorted in order of decreasing weight.

    Definition Classes
    LocalLDAModel → LDAModel
    Annotations
    @Since( "1.3.0" )
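
    Example (a sketch; `model` is an existing LocalLDAModel and `vocab` is a hypothetical Array[String] mapping term indices to words):

      // `vocab` (index -> word) is assumed to exist for display purposes only.
      val topics = model.describeTopics(maxTermsPerTopic = 5)
      topics.zipWithIndex.foreach { case ((termIndices, termWeights), topicId) =>
        val terms = termIndices.zip(termWeights)
          .map { case (i, w) => f"${vocab(i)} ($w%.3f)" }
          .mkString(", ")
        println(s"Topic $topicId: $terms")
      }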
  7. def describeTopics(): Array[(Array[Int], Array[Double])]

    Return the topics described by weighted terms.

    WARNING: If vocabSize and k are large, this can return a large object!

    returns

    Array over topics. Each topic is represented as a pair of matching arrays: (term indices, term weights in topic). Each topic's terms are sorted in order of decreasing weight.

    Definition Classes
    LDAModel
    Annotations
    @Since( "1.3.0" )
  8. val docConcentration: Vector

    Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").

    This is the parameter to a Dirichlet distribution.

    Definition Classes
    LocalLDAModel → LDAModel
    Annotations
    @Since( "1.5.0" )
  9. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  11. def finalize(): Unit
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. def formatVersion: String

    Current version of model save/load format.

    Attributes
    protected
    Definition Classes
    LocalLDAModel → Saveable
  13. val gammaShape: Double

    Shape parameter for random initialization of variational parameter gamma. Used for variational inference for perplexity and other test-time computations.

    Attributes
    protected[org.apache.spark]
    Definition Classes
    LocalLDAModel → LDAModel
  14. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  15. def getSeed: Long

    Random seed for cluster initialization.

    Annotations
    @Since( "2.4.0" )
  16. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  17. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  18. def k: Int

    Number of topics

    Definition Classes
    LocalLDAModel → LDAModel
    Annotations
    @Since( "1.3.0" )
  19. def logLikelihood(documents: JavaPairRDD[Long, Vector]): Double

    Java-friendly version of logLikelihood

    Annotations
    @Since( "1.5.0" )
  20. def logLikelihood(documents: RDD[(Long, Vector)]): Double

    Calculates a lower bound on the log likelihood of the entire corpus.

    See Equation (16) in the original Online LDA paper.

    documents

    test corpus to use for calculating log likelihood

    returns

    variational lower bound on the log likelihood of the entire corpus

    Annotations
    @Since( "1.5.0" )
  21. def logPerplexity(documents: JavaPairRDD[Long, Vector]): Double

    Java-friendly version of logPerplexity

    Annotations
    @Since( "1.5.0" )
  22. def logPerplexity(documents: RDD[(Long, Vector)]): Double

    Calculate an upper bound on perplexity. (Lower is better.) See Equation (16) in the original Online LDA paper.

    documents

    test corpus to use for calculating perplexity

    returns

    Variational upper bound on log perplexity per token.

    Annotations
    @Since( "1.5.0" )
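
    Example (a sketch; `model` and a held-out corpus `testDocs: RDD[(Long, Vector)]` are assumed to exist):

      // `testDocs` is an assumed held-out corpus of (document ID, term-count vector).
      val llBound = model.logLikelihood(testDocs)    // lower bound on log P(testDocs | model)
      val perpBound = model.logPerplexity(testDocs)  // upper bound on per-token log perplexity
      println(s"log-likelihood >= $llBound, per-token log perplexity <= $perpBound")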
  23. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  24. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  25. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  26. def save(sc: SparkContext, path: String): Unit

    Save this model to the given path.

    This saves:

    • human-readable (JSON) model metadata to path/metadata/
    • Parquet formatted data to path/data/

    The model may be loaded using Loader.load.

    sc

    Spark context used to save model data.

    path

    Path specifying the directory in which to save this model. If the directory already exists, this method throws an exception.

    Definition Classes
    LocalLDAModel → Saveable
    Annotations
    @Since( "1.5.0" )
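
    Example (a sketch; `sc` is an existing SparkContext, the path is illustrative, and loading back uses the companion object's load):

      // The path below is illustrative only.
      model.save(sc, "hdfs:///tmp/localLDAModel")    // throws if the directory already exists
      val restored = LocalLDAModel.load(sc, "hdfs:///tmp/localLDAModel")
      assert(restored.k == model.k && restored.vocabSize == model.vocabSize)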
  27. def setSeed(seed: Long): LocalLDAModel.this.type

    Set the random seed for cluster initialization.

    Annotations
    @Since( "2.4.0" )
  28. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  29. def toString(): String
    Definition Classes
    AnyRef → Any
  30. val topicConcentration: Double

    Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.

    This is the parameter to a symmetric Dirichlet distribution.

    Definition Classes
    LocalLDAModel → LDAModel
    Annotations
    @Since( "1.5.0" )
    Note

    The topics' distributions over terms are called "beta" in the original LDA paper by Blei et al., but are called "phi" in many later papers such as Asuncion et al., 2009.

  31. def topicDistribution(document: Vector): Vector

    Predicts the topic mixture distribution for a document (often called "theta" in the literature). Returns a vector of zeros for an empty document.

    Note: this method is intended for quick queries on a single document. For batches of documents, use topicDistributions() to avoid per-call overhead.

    document

    document to predict topic mixture distributions for

    returns

    topic mixture distribution for the document

    Annotations
    @Since( "2.0.0" )
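
    Example (a sketch; the sparse term-count vector is illustrative and must have length vocabSize):

      import org.apache.spark.mllib.linalg.Vectors

      // Illustrative sparse term-count vector; indices must be < model.vocabSize.
      val doc = Vectors.sparse(model.vocabSize, Seq((0, 2.0), (3, 1.0)))
      val theta = model.topicDistribution(doc)  // length-k topic mixture for this document
      println(theta)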
  32. def topicDistributions(documents: JavaPairRDD[Long, Vector]): JavaPairRDD[Long, Vector]

    Java-friendly version of topicDistributions

    Annotations
    @Since( "1.4.1" )
  33. def topicDistributions(documents: RDD[(Long, Vector)]): RDD[(Long, Vector)]

    Predicts the topic mixture distribution for each document (often called "theta" in the literature). Returns a vector of zeros for an empty document.

    This uses a variational approximation following Hoffman et al. (2010), where the approximate distribution is called "gamma." Technically, this method returns this approximation "gamma" for each document.

    documents

    documents to predict topic mixture distributions for

    returns

    An RDD of (document ID, topic mixture distribution for document)

    Annotations
    @Since( "1.3.0" )
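
    Example (a sketch; `corpus: RDD[(Long, Vector)]` is assumed to exist):

      // `corpus` is an assumed RDD of (document ID, term-count vector).
      val distributions = model.topicDistributions(corpus)
      distributions.take(3).foreach { case (docId, theta) =>
        println(s"doc $docId -> $theta")
      }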
  34. val topics: Matrix
    Annotations
    @Since( "1.3.0" )
  35. def topicsMatrix: Matrix

    Inferred topics, where each topic is represented by a distribution over terms. This is a matrix of size vocabSize x k, where each column is a topic. No guarantees are given about the ordering of the topics.

    Definition Classes
    LocalLDAModel → LDAModel
    Annotations
    @Since( "1.3.0" )
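
    Example (a sketch; rows index terms and columns index topics):

      val m = model.topicsMatrix                       // vocabSize x k
      // Weight of term 0 in each topic, read column by column.
      val termZeroWeights = (0 until model.k).map(topic => m(0, topic))
      println(termZeroWeights.mkString(", "))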
  36. def vocabSize: Int

    Vocabulary size (number of terms or words in the vocabulary)

    Definition Classes
    LocalLDAModel → LDAModel
    Annotations
    @Since( "1.3.0" )
  37. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  38. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  39. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )

Inherited from Serializable

Inherited from Serializable

Inherited from LDAModel

Inherited from Saveable

Inherited from AnyRef

Inherited from Any
