Merge two accumulable objects together
Normally, a user will not want to use this version, but will instead call +=.
the other R that will get merged with this
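As a sketch of the two forms, assuming a live SparkContext sc (the Set-valued param below is illustrative, not a built-in):

    import org.apache.spark.AccumulableParam
    import scala.collection.mutable

    // Minimal AccumulableParam so a mutable.Set can back an Accumulable
    // (hypothetical helper; Spark only ships numeric params out of the box).
    implicit object SetParam extends AccumulableParam[mutable.Set[String], String] {
      def addAccumulator(acc: mutable.Set[String], elem: String) = acc += elem
      def addInPlace(a: mutable.Set[String], b: mutable.Set[String]) = a ++= b
      def zero(init: mutable.Set[String]) = mutable.Set.empty[String]
    }

    val names = sc.accumulable(mutable.Set.empty[String])
    names += "alice"                       // adds one element (a T)
    names ++= mutable.Set("bob", "carol")  // merges a whole Set (an R); normally Spark calls this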
Add more data to this accumulator / accumulable
Get the current value of this accumulator from within a task.
This is NOT the global value of the accumulator. To get the global value after a completed operation on the dataset, call value.
The typical use of this method is to directly mutate the local value, e.g., to add an element to a Set.
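For example, continuing the hypothetical Set-valued accumulable names from the sketch above:

    sc.parallelize(Seq("a", "b", "c")).foreach { s =>
      names.localValue += s  // mutate the task-local Set in place; NOT the global value
    }
    // After the action completes, the merged result is readable on the driver:
    println(names.value)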
Merge two accumulable objects together
Normally, a user will not want to use this version, but will instead call add.
the other R that will get merged with this
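The named forms behave the same as the symbolic operators; continuing the earlier sketch:

    names.add("dave")                 // equivalent to names += "dave"
    names.merge(mutable.Set("erin"))  // equivalent to names ++= ...; normally left to Spark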
Set the accumulator's value; only allowed on master.
Access the accumulator's current value; only allowed on master.
Set the accumulator's value; only allowed on master.
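For example, a sketch assuming a live SparkContext sc (both the named setter and the assignment form are driver-only):

    val total = sc.accumulator(0)
    sc.parallelize(1 to 100).foreach(x => total += x)
    println(total.value)  // driver-side read: 5050
    total.value = 0       // assignment form of the setter; driver-only
    total.setValue(0)     // equivalent named setter; driver-only
    // Reading .value inside a task throws an exception; use localValue there.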
A simpler form of Accumulable where the result type being accumulated is the same as the type of the elements being merged, i.e. variables that are only "added" to through an associative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric value types, and programmers can add support for new types.
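Support for a new type is added by supplying an AccumulatorParam; a minimal sketch (the pair-based vector type here is illustrative):

    import org.apache.spark.AccumulatorParam

    // Accumulate 2-D vectors represented as (Double, Double) pairs (hypothetical type).
    implicit object Vec2Param extends AccumulatorParam[(Double, Double)] {
      def zero(init: (Double, Double)) = (0.0, 0.0)
      def addInPlace(a: (Double, Double), b: (Double, Double)) = (a._1 + b._1, a._2 + b._2)
    }

    val vecSum = sc.accumulator((0.0, 0.0))
    sc.parallelize(Seq((1.0, 2.0), (3.0, 4.0))).foreach(v => vecSum += v)
    // vecSum.value == (4.0, 6.0) on the driver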
An accumulator is created from an initial value v by calling SparkContext#accumulator. Tasks running on the cluster can then add to it using the Accumulable#+= operator. However, they cannot read its value. Only the driver program can read the accumulator's value, using its value method. The interpreter session below shows an accumulator being used to add up the elements of an array:
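A representative spark-shell session (log output and result numbering are illustrative):

    scala> val accum = sc.accumulator(0)
    accum: org.apache.spark.Accumulator[Int] = 0

    scala> sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)
    ...
    10/09/29 18:41:08 INFO SparkContext: Tasks finished in 0.317106 s

    scala> accum.value
    res2: Int = 10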
result type of the accumulator