Packages

  • package root
    Definition Classes
    root
  • package org
    Definition Classes
    root
  • package apache
    Definition Classes
    org
  • package spark

    Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection, and provides most parallel operations.

    In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions.

    Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.

    Classes and methods marked with Experimental are user-facing features which have not been officially adopted by the Spark project. These are subject to change or removal in minor releases.

    Classes and methods marked with Developer API are intended for advanced users who want to extend Spark through lower-level interfaces. These are subject to change or removal in minor releases.

    Definition Classes
    apache
  • package sql

    Allows the execution of relational queries, including those expressed in SQL using Spark.

    Definition Classes
    spark
  • package api

    Contains API classes that are specific to a single language (i.e. Java).

    Definition Classes
    sql
  • package catalog
    Definition Classes
    sql
  • package expressions
    Definition Classes
    sql
  • package hive

    Support for running Spark SQL queries using functionality from Apache Hive (does not require an existing Hive installation). Supported Hive features include:

    • Using HiveQL to express queries.
    • Reading metadata from the Hive Metastore using HiveSerDes.
    • Hive UDFs, UDAs, UDTs

    Users who would like access to this functionality should create a HiveContext instead of a SQLContext.

    Definition Classes
    sql
  • package execution
  • package orc
  • DetermineTableStats
  • HiveAnalysis
  • HiveContext
  • HiveExternalCatalog
  • HiveSessionResourceLoader
  • HiveSessionStateBuilder
  • RelationConversions
  • ResolveHiveSerdeTable
  • package jdbc
    Definition Classes
    sql
  • package sources

    A set of APIs for adding data sources to Spark SQL.

    Definition Classes
    sql
  • package streaming
    Definition Classes
    sql
  • package types

    Contains a type system for attributes produced by relations, including complex types like structs, arrays and maps.

    Definition Classes
    sql
  • package util
    Definition Classes
    sql
  • package vectorized
    Definition Classes
    sql
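
The implicit-conversion mechanism mentioned in the package spark entry above can be sketched as follows. This is a local-mode sketch, not part of the listing itself; the app name and sample data are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Local-mode sketch: groupByKey and join are not methods on RDD itself;
// they come from PairRDDFunctions via an implicit conversion that applies
// to any RDD of pairs, e.g. RDD[(Int, String)].
val sc = new SparkContext(new SparkConf().setAppName("pairs").setMaster("local[*]"))

val left  = sc.parallelize(Seq((1, "a"), (2, "b")))
val right = sc.parallelize(Seq((1, "x"), (3, "y")))

// join is supplied by PairRDDFunctions; only key 1 appears in both RDDs.
val joined = left.join(right).collect()   // Array((1, ("a", "x")))

sc.stop()
```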

package hive

Support for running Spark SQL queries using functionality from Apache Hive (does not require an existing Hive installation). Supported Hive features include:

  • Using HiveQL to express queries.
  • Reading metadata from the Hive Metastore using HiveSerDes.
  • Hive UDFs, UDAs, UDTs

Users who would like access to this functionality should create a HiveContext instead of a SQLContext.
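
A minimal sketch of the HiveQL support described above, using the modern SparkSession entry point (HiveContext is deprecated since 2.0.0). It assumes the spark-hive module is on the classpath; the table name and data are illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Build a Hive-aware session; enableHiveSupport requires the Hive
// classes (spark-hive) to be present on the classpath.
val spark = SparkSession.builder()
  .appName("HiveQL sketch")
  .master("local[*]")
  .enableHiveSupport()
  .getOrCreate()

// HiveQL statements are passed to spark.sql; metadata is kept in the
// Hive Metastore managed by the session.
spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING) USING hive")
spark.sql("INSERT INTO src VALUES (1, 'one')")
val rows = spark.sql("SELECT key, value FROM src WHERE key = 1").collect()
```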

Linear Supertypes
AnyRef, Any

Type Members

  1. class DetermineTableStats extends Rule[LogicalPlan]
  2. class HiveSessionResourceLoader extends SessionResourceLoader
  3. class HiveSessionStateBuilder extends BaseSessionStateBuilder

    Builder that produces a Hive-aware SessionState.

    Annotations
    @Experimental() @Unstable()
  4. case class RelationConversions(conf: SQLConf, sessionCatalog: HiveSessionCatalog) extends Rule[LogicalPlan] with Product with Serializable

    Relation conversion from metastore relations to data source relations for better performance:

    • When writing to non-partitioned Hive-serde Parquet/ORC tables
    • When scanning Hive-serde Parquet/ORC tables

    This rule must be run before all other DDL post-hoc resolution rules, i.e. PreprocessTableCreation, PreprocessTableInsertion, DataSourceAnalysis and HiveAnalysis.

  5. class ResolveHiveSerdeTable extends Rule[LogicalPlan]

    Determine the database, serde/format and schema of the Hive serde table, according to the storage properties.

  6. class HiveContext extends SQLContext with Logging

    An instance of the Spark SQL execution engine that integrates with data stored in Hive. Configuration for Hive is read from hive-site.xml on the classpath.

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.0) Use SparkSession.builder.enableHiveSupport instead
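
The deprecation note above can be followed mechanically. A hedged sketch of the replacement (the app name is illustrative, and the local master is only for running the sketch standalone):

```scala
import org.apache.spark.sql.SparkSession

// Before (deprecated since 2.0.0):
//   val hiveContext = new org.apache.spark.sql.hive.HiveContext(sparkContext)

// After: the same Hive-aware entry point via the SparkSession builder.
// Hive configuration is still read from hive-site.xml on the classpath.
val spark = SparkSession.builder()
  .appName("HiveContext migration sketch")
  .master("local[*]")
  .enableHiveSupport()
  .getOrCreate()

// spark.sql(...) replaces hiveContext.sql(...), and spark.table(...)
// replaces hiveContext.table(...).
```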

Value Members

  1. object HiveAnalysis extends Rule[LogicalPlan]

    Replaces generic operations with specific variants that are designed to work with Hive.

    Note that this rule must be run after PreprocessTableCreation and PreprocessTableInsertion.

  2. object HiveExternalCatalog

Inherited from AnyRef

Inherited from Any

Ungrouped