package storage
Type Members
- class BasicBlockReplicationPolicy extends BlockReplicationPolicy with Logging
- Annotations
- @DeveloperApi()
- sealed abstract class BlockId extends AnyRef
:: DeveloperApi :: Identifies a particular Block of data, usually associated with a single file. A Block can be uniquely identified by its filename, but each type of Block has a different set of keys which produce its unique name.
If your BlockId should be serializable, be sure to add it to the BlockId.apply() method.
- Annotations
- @DeveloperApi()
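As an illustrative sketch (the block names below are made-up examples of the naming scheme), a name string can be parsed back into a concrete BlockId through the companion object's apply method:

import org.apache.spark.storage.{BlockId, RDDBlockId}

// BlockId.apply parses a block name into the matching BlockId subclass.
val rddBlock = BlockId("rdd_4_1")            // RDDBlockId(4, 1)
val shuffleBlock = BlockId("shuffle_2_0_3")  // ShuffleBlockId(2, 0, 3)

// Every BlockId exposes a stable string name, so parsing round-trips.
assert(rddBlock == RDDBlockId(4, 1))
assert(shuffleBlock.name == "shuffle_2_0_3")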
- class BlockManagerId extends Externalizable
:: DeveloperApi :: This class represents a unique identifier for a BlockManager.
The first two constructors of this class are private, so BlockManagerId objects can be created only through the apply method in the companion object; this allows de-duplication of ID objects. The constructor parameters are also private, so they cannot be modified from outside the class.
- Annotations
- @DeveloperApi()
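For orientation, a minimal sketch of obtaining an identifier through the companion object's apply method (the executor ID, host, and port are placeholder values):

import org.apache.spark.storage.BlockManagerId

// The companion object's apply caches instances, de-duplicating equal IDs.
val id = BlockManagerId("executor-1", "worker-host-01", 7337)

println(id.executorId)  // executor-1
println(id.hostPort)    // worker-host-01:7337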
- class BlockNotFoundException extends Exception
- trait BlockReplicationPolicy extends AnyRef
::DeveloperApi:: BlockReplicationPolicy provides logic for prioritizing a sequence of peers for replicating blocks. BlockManager will replicate to each peer returned, in order, until the desired number of replicas is reached. If a replication fails, prioritize() will be called again to get a fresh prioritization.
- Annotations
- @DeveloperApi()
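A minimal custom policy might look like the sketch below; the exact prioritize() signature is assumed from recent Spark releases and may differ across versions:

import scala.collection.mutable
import org.apache.spark.storage.{BlockId, BlockManagerId, BlockReplicationPolicy}

// A trivial policy that keeps peers in the order the BlockManager supplied them.
class InOrderReplicationPolicy extends BlockReplicationPolicy {
  override def prioritize(
      blockManagerId: BlockManagerId,
      peers: Seq[BlockManagerId],
      peersReplicatedTo: mutable.HashSet[BlockManagerId],
      blockId: BlockId,
      numReplicas: Int): List[BlockManagerId] = {
    // Skip peers that already hold a replica, then take as many as still needed.
    peers.filterNot(peersReplicatedTo.contains).take(numReplicas).toList
  }
}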
- case class BlockStatus(storageLevel: StorageLevel, memSize: Long, diskSize: Long) extends Product with Serializable
- Annotations
- @DeveloperApi()
- case class BlockUpdatedInfo(blockManagerId: BlockManagerId, blockId: BlockId, storageLevel: StorageLevel, memSize: Long, diskSize: Long) extends Product with Serializable
:: DeveloperApi :: Stores information about a block status in a block manager.
- Annotations
- @DeveloperApi()
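These objects typically surface through the listener API; a hedged sketch, assuming the SparkListener.onBlockUpdated callback of recent Spark versions:

import org.apache.spark.scheduler.{SparkListener, SparkListenerBlockUpdated}

// Logs every block update reported to the driver.
class BlockUpdateLogger extends SparkListener {
  override def onBlockUpdated(event: SparkListenerBlockUpdated): Unit = {
    val info = event.blockUpdatedInfo
    println(s"${info.blockId} on ${info.blockManagerId.hostPort}: " +
      s"mem=${info.memSize} disk=${info.diskSize} level=${info.storageLevel}")
  }
}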
- case class BroadcastBlockId(broadcastId: Long, field: String = "") extends BlockId with Product with Serializable
- Annotations
- @DeveloperApi()
- class DefaultTopologyMapper extends TopologyMapper with Logging
A TopologyMapper that assumes all nodes are in the same rack.
- Annotations
- @DeveloperApi()
- class FileBasedTopologyMapper extends TopologyMapper with Logging
A simple file-based topology mapper. It expects topology information provided as a java.util.Properties file. The name of the file is obtained from the SparkConf property spark.storage.replication.topologyFile. To use this topology mapper, set the spark.storage.replication.topologyMapper property to org.apache.spark.storage.FileBasedTopologyMapper.
- Annotations
- @DeveloperApi()
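A sketch of the configuration described above (the file path is a placeholder):

import org.apache.spark.SparkConf

// Point Spark at the file-based mapper and at the properties file that maps
// host names to topology strings (for example, rack identifiers).
val conf = new SparkConf()
  .set("spark.storage.replication.topologyMapper",
    "org.apache.spark.storage.FileBasedTopologyMapper")
  .set("spark.storage.replication.topologyFile", "/etc/spark/topology.properties")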
- case class RDDBlockId(rddId: Int, splitIndex: Int) extends BlockId with Product with Serializable
- Annotations
- @DeveloperApi()
- class RDDInfo extends Ordered[RDDInfo]
- Annotations
- @DeveloperApi()
- class RandomBlockReplicationPolicy extends BlockReplicationPolicy with Logging
- Annotations
- @DeveloperApi()
- case class ShuffleBlockId(shuffleId: Int, mapId: Int, reduceId: Int) extends BlockId with Product with Serializable
- Annotations
- @DeveloperApi()
- case class ShuffleDataBlockId(shuffleId: Int, mapId: Int, reduceId: Int) extends BlockId with Product with Serializable
- Annotations
- @DeveloperApi()
- case class ShuffleIndexBlockId(shuffleId: Int, mapId: Int, reduceId: Int) extends BlockId with Product with Serializable
- Annotations
- @DeveloperApi()
- class StorageLevel extends Externalizable
:: DeveloperApi :: Flags for controlling the storage of an RDD. Each StorageLevel records whether to use memory or ExternalBlockStore, whether to drop the RDD to disk if it falls out of memory or ExternalBlockStore, whether to keep the data in memory in a serialized format, and whether to replicate the RDD partitions on multiple nodes.
The org.apache.spark.storage.StorageLevel singleton object contains some static constants for commonly useful storage levels. To create your own storage level object, use the factory method of the singleton object (StorageLevel(...)).
- Annotations
- @DeveloperApi()
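As a brief illustration of both routes (the custom level below is only an example):

import org.apache.spark.storage.StorageLevel

// A predefined constant from the singleton object:
val serialized = StorageLevel.MEMORY_AND_DISK_SER

// A custom level built via the factory method
// (useDisk, useMemory, useOffHeap, deserialized, replication):
val twoDiskCopies = StorageLevel(useDisk = true, useMemory = false,
  useOffHeap = false, deserialized = false, replication = 2)

// Either value is typically passed to RDD.persist(...), e.g. rdd.persist(serialized).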
- case class StreamBlockId(streamId: Int, uniqueId: Long) extends BlockId with Product with Serializable
- Annotations
- @DeveloperApi()
- case class TaskResultBlockId(taskId: Long) extends BlockId with Product with Serializable
- Annotations
- @DeveloperApi()
- final class TimeTrackingOutputStream extends OutputStream
- abstract class TopologyMapper extends AnyRef
::DeveloperApi:: TopologyMapper provides topology information for a given host.
- Annotations
- @DeveloperApi()
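A minimal custom mapper sketch; the getTopologyForHost signature and the SparkConf constructor parameter are assumed from recent Spark releases:

import org.apache.spark.SparkConf
import org.apache.spark.storage.TopologyMapper

// Derives a rack name from a host-naming convention such as "rack1-node03";
// the convention itself is purely illustrative.
class PrefixTopologyMapper(conf: SparkConf) extends TopologyMapper(conf) {
  override def getTopologyForHost(hostname: String): Option[String] =
    hostname.split("-").headOption.filter(_.startsWith("rack"))
}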
- class UnrecognizedBlockId extends SparkException
- Annotations
- @DeveloperApi()
Value Members
- object BlockId
- Annotations
- @DeveloperApi()
- object BlockReplicationUtils
- object BlockStatus extends Serializable
- Annotations
- @DeveloperApi()
- object StorageLevel extends Serializable
Various org.apache.spark.storage.StorageLevel constants, plus utility functions for creating new storage levels.