org.apache.spark.storage

package storage

Type Members

  1. class BasicBlockReplicationPolicy extends BlockReplicationPolicy with Logging
    Annotations
    @DeveloperApi()
  2. sealed abstract class BlockId extends AnyRef

    :: DeveloperApi :: Identifies a particular Block of data, usually associated with a single file. A Block can be uniquely identified by its filename, but each type of Block has a different set of keys which produce its unique name.

    If your BlockId should be serializable, be sure to add it to the BlockId.apply() method.

    Annotations
    @DeveloperApi()
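
    A minimal sketch of round-tripping a block name through BlockId.apply(), assuming the standard "rdd_<rddId>_<splitIndex>" naming used by RDDBlockId:

      import org.apache.spark.storage.{BlockId, RDDBlockId}

      // Parse a block name back into its typed BlockId and recover the name.
      val parsed: BlockId = BlockId("rdd_1_0")
      assert(parsed == RDDBlockId(1, 0))
      assert(parsed.name == "rdd_1_0")
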
  3. class BlockManagerId extends Externalizable

    :: DeveloperApi :: This class represents a unique identifier for a BlockManager.

    The first 2 constructors of this class are made private to ensure that BlockManagerId objects can be created only using the apply method in the companion object. This allows de-duplication of ID objects. Also, constructor parameters are private to ensure that parameters cannot be modified from outside this class.

    Annotations
    @DeveloperApi()
  4. class BlockNotFoundException extends Exception
  5. trait BlockReplicationPolicy extends AnyRef

    :: DeveloperApi :: BlockReplicationPolicy provides logic for prioritizing a sequence of peers for replicating blocks. BlockManager will replicate to each peer returned, in order, until the desired number of replicas is reached. If a replication fails, prioritize() will be called again to get a fresh prioritization.

    Annotations
    @DeveloperApi()
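
    A sketch of a custom policy, assuming prioritize() has the signature shown below (peers that already hold a replica arrive in peersReplicatedTo); the class name InOrderBlockReplicationPolicy is hypothetical:

      import scala.collection.mutable

      import org.apache.spark.storage.{BlockId, BlockManagerId, BlockReplicationPolicy}

      // Hypothetical policy: keep peers in the order the BlockManager supplied them,
      // skipping any peer that already holds a replica.
      class InOrderBlockReplicationPolicy extends BlockReplicationPolicy {
        override def prioritize(
            blockManagerId: BlockManagerId,
            peers: Seq[BlockManagerId],
            peersReplicatedTo: mutable.HashSet[BlockManagerId],
            blockId: BlockId,
            numReplicas: Int): List[BlockManagerId] = {
          peers.filterNot(peersReplicatedTo.contains).take(numReplicas).toList
        }
      }

    A custom policy class is typically selected through the spark.storage.replication.policy configuration property (assumed here; check the configuration docs for the Spark version in use).
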
  6. case class BlockStatus(storageLevel: StorageLevel, memSize: Long, diskSize: Long) extends Product with Serializable
    Annotations
    @DeveloperApi()
  7. case class BlockUpdatedInfo(blockManagerId: BlockManagerId, blockId: BlockId, storageLevel: StorageLevel, memSize: Long, diskSize: Long) extends Product with Serializable

    :: DeveloperApi :: Stores information about a block status in a block manager.

    Annotations
    @DeveloperApi()
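
    For illustration, a sketch of consuming these updates from a SparkListener; the listener class name BlockUpdateLogger is hypothetical:

      import org.apache.spark.scheduler.{SparkListener, SparkListenerBlockUpdated}

      // Hypothetical listener that logs every block status change reported to the driver.
      class BlockUpdateLogger extends SparkListener {
        override def onBlockUpdated(event: SparkListenerBlockUpdated): Unit = {
          val info = event.blockUpdatedInfo
          println(s"${info.blockId} on ${info.blockManagerId.host}: " +
            s"level=${info.storageLevel}, mem=${info.memSize}B, disk=${info.diskSize}B")
        }
      }

      // Register on an existing SparkContext: sc.addSparkListener(new BlockUpdateLogger())
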
  8. case class BroadcastBlockId(broadcastId: Long, field: String = "") extends BlockId with Product with Serializable
    Annotations
    @DeveloperApi()
  9. class DefaultTopologyMapper extends TopologyMapper with Logging

    A TopologyMapper that assumes all nodes are in the same rack.

    Annotations
    @DeveloperApi()
  10. class FileBasedTopologyMapper extends TopologyMapper with Logging

    A simple file-based topology mapper. It expects topology information provided as a java.util.Properties file. The name of the file is obtained from the SparkConf property spark.storage.replication.topologyFile. To use this topology mapper, set the spark.storage.replication.topologyMapper property to org.apache.spark.storage.FileBasedTopologyMapper.

    Annotations
    @DeveloperApi()
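
    Example wiring using the configuration keys named above; the file path and its per-host entry format are assumptions:

      import org.apache.spark.SparkConf

      // The topology file is a java.util.Properties file; the format is assumed to be
      // one "hostname=rackId" entry per line, and the path below is a placeholder.
      val conf = new SparkConf()
        .set("spark.storage.replication.topologyMapper",
          "org.apache.spark.storage.FileBasedTopologyMapper")
        .set("spark.storage.replication.topologyFile", "/path/to/topology.properties")
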
  11. case class RDDBlockId(rddId: Int, splitIndex: Int) extends BlockId with Product with Serializable
    Annotations
    @DeveloperApi()
  12. class RDDInfo extends Ordered[RDDInfo]
    Annotations
    @DeveloperApi()
  13. class RandomBlockReplicationPolicy extends BlockReplicationPolicy with Logging
    Annotations
    @DeveloperApi()
  14. case class ShuffleBlockId(shuffleId: Int, mapId: Int, reduceId: Int) extends BlockId with Product with Serializable
    Annotations
    @DeveloperApi()
  15. case class ShuffleDataBlockId(shuffleId: Int, mapId: Int, reduceId: Int) extends BlockId with Product with Serializable
    Annotations
    @DeveloperApi()
  16. case class ShuffleIndexBlockId(shuffleId: Int, mapId: Int, reduceId: Int) extends BlockId with Product with Serializable
    Annotations
    @DeveloperApi()
  17. class StorageLevel extends Externalizable

    :: DeveloperApi :: Flags for controlling the storage of an RDD. Each StorageLevel records whether to use memory or ExternalBlockStore, whether to drop the RDD to disk if it falls out of memory or ExternalBlockStore, whether to keep the data in memory in a serialized format, and whether to replicate the RDD partitions on multiple nodes.

    The org.apache.spark.storage.StorageLevel singleton object contains some static constants for commonly useful storage levels. To create your own storage level object, use the factory method of the singleton object (StorageLevel(...)).

    Annotations
    @DeveloperApi()
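
    A short usage sketch, assuming the common predefined constants and the five-argument factory method on the companion object:

      import org.apache.spark.storage.StorageLevel

      // Persist an RDD with a predefined level (given some existing rdd):
      //   rdd.persist(StorageLevel.MEMORY_AND_DISK_SER)

      // Or build a custom level with the factory method on the companion object:
      val twoReplicas = StorageLevel(
        useDisk = true, useMemory = true, useOffHeap = false,
        deserialized = false, replication = 2)
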
  18. case class StreamBlockId(streamId: Int, uniqueId: Long) extends BlockId with Product with Serializable
    Annotations
    @DeveloperApi()
  19. case class TaskResultBlockId(taskId: Long) extends BlockId with Product with Serializable
    Annotations
    @DeveloperApi()
  20. final class TimeTrackingOutputStream extends OutputStream
  21. abstract class TopologyMapper extends AnyRef

    :: DeveloperApi :: TopologyMapper provides topology information for a given host.

    Annotations
    @DeveloperApi()
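
    A sketch of a custom mapper, assuming the constructor takes a SparkConf (as the two concrete mappers above do) and that getTopologyForHost(hostname) returning Option[String] is the method to implement; PrefixTopologyMapper and its naming convention are hypothetical:

      import org.apache.spark.SparkConf
      import org.apache.spark.storage.TopologyMapper

      // Hypothetical mapper that derives a rack id from host names such as "rack1-node7".
      class PrefixTopologyMapper(conf: SparkConf) extends TopologyMapper(conf) {
        override def getTopologyForHost(hostname: String): Option[String] =
          hostname.split("-").headOption.filter(_.startsWith("rack"))
      }
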
  22. class UnrecognizedBlockId extends SparkException
    Annotations
    @DeveloperApi()

Value Members

  1. object BlockId
    Annotations
    @DeveloperApi()
  2. object BlockReplicationUtils
  3. object BlockStatus extends Serializable
    Annotations
    @DeveloperApi()
  4. object StorageLevel extends Serializable

    Various predefined org.apache.spark.storage.StorageLevel constants and utility functions for creating new storage levels.
