public abstract class Partitioner
extends Object
implements scala.Serializable
An object that defines how the elements in a key-value pair RDD are partitioned by key. Maps each key to a partition ID, from 0 to numPartitions - 1.

Note that a partitioner must be deterministic, i.e. it must return the same partition id given the same partition key.
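The determinism requirement can be illustrated with a plain-Java sketch of the two abstract methods, modeled loosely on hash partitioning. `SimpleHashPartitioner` is a hypothetical stand-in for illustration, not Spark's actual `Partitioner` or `HashPartitioner` class:

```java
// Hypothetical stand-alone analogue of the Partitioner contract
// (not Spark's actual class): maps each key to a partition id
// in [0, numPartitions - 1], deterministically.
public class SimpleHashPartitioner {
    private final int numPartitions;

    public SimpleHashPartitioner(int numPartitions) {
        if (numPartitions <= 0) {
            throw new IllegalArgumentException("numPartitions must be positive");
        }
        this.numPartitions = numPartitions;
    }

    public int numPartitions() {
        return numPartitions;
    }

    // Deterministic: the same key always yields the same partition id.
    // Math.floorMod keeps the result non-negative even for negative hash codes.
    public int getPartition(Object key) {
        if (key == null) return 0;
        return Math.floorMod(key.hashCode(), numPartitions);
    }
}
```

Because `getPartition` depends only on the key's `hashCode()`, repeated calls with equal keys always land in the same partition, which is what the determinism note above requires.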
Constructor and Description |
---|
Partitioner() |
Modifier and Type | Method and Description |
---|---|
static Partitioner | defaultPartitioner(RDD<?> rdd, scala.collection.Seq<RDD<?>> others): Choose a partitioner to use for a cogroup-like operation between a number of RDDs. |
abstract int | getPartition(Object key) |
abstract int | numPartitions() |
public static Partitioner defaultPartitioner(RDD<?> rdd, scala.collection.Seq<RDD<?>> others)
If spark.default.parallelism is set, we'll use the value of SparkContext defaultParallelism as the default number of partitions; otherwise we'll use the maximum number of upstream partitions.

When available, we choose the partitioner from the RDD with the maximum number of partitions. We use this partitioner if it is eligible (its number of partitions is within an order of magnitude of the maximum number of partitions across the RDDs) or if its number of partitions is higher than the default number of partitions.

Otherwise, we'll use a new HashPartitioner with the default number of partitions.

Unless spark.default.parallelism is set, the number of partitions will be the same as the number of partitions in the largest upstream RDD, as this should be least likely to cause out-of-memory errors.

We use two method parameters (rdd, others) to enforce that callers pass at least one RDD.
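The selection heuristic above can be sketched as follows, with RDDs reduced to their partition counts. `choosePartitions` and its parameters are illustrative assumptions, and the integer-division eligibility test is a simplified stand-in for Spark's actual order-of-magnitude check:

```java
// Hedged sketch of the defaultPartitioner selection heuristic;
// names and signature are illustrative, not Spark API.
public class DefaultPartitionerSketch {
    // existingMax: partition count of the existing partitioner with the most
    //   partitions, or -1 if no input RDD has a partitioner.
    // maxUpstream: max number of partitions among all input RDDs.
    // defaultParallelism: spark.default.parallelism, or -1 if unset.
    static int choosePartitions(int existingMax, int maxUpstream, int defaultParallelism) {
        // Default partitions number: defaultParallelism if set, else max upstream.
        int defaultNum = defaultParallelism > 0 ? defaultParallelism : maxUpstream;
        // Reuse the existing partitioner if it is within an order of magnitude
        // of the max partition count, or already meets the default.
        boolean eligible = existingMax > 0
                && (maxUpstream / existingMax < 10
                    || existingMax >= defaultNum);
        // Otherwise fall back to a fresh partitioner with defaultNum partitions
        // (a HashPartitioner in the real implementation).
        return eligible ? existingMax : defaultNum;
    }
}
```

For example, an existing 4-partition partitioner is reused when the largest upstream RDD has 8 partitions, but a 100-partition upstream RDD (with no parallelism override) forces a fresh 100-partition fallback.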
Parameters:
rdd - (undocumented)
others - (undocumented)

public abstract int numPartitions()
public abstract int getPartition(Object key)