public class Pipeline extends Estimator<PipelineModel> implements MLWritable
A simple pipeline, which acts as an estimator. A Pipeline consists of a sequence of stages, each of which is either an Estimator or a Transformer. When fit(Dataset<?>) is called, the stages are executed in order. If a stage is an Estimator, its Estimator.fit(Dataset<?>, ParamPair<?>...) method will be called on the input dataset to fit a model. Then the model, which is a transformer, will be used to transform the dataset as the input to the next stage. If a stage is a Transformer, its Transformer.transform(Dataset<?>, ParamPair<?>...) method will be called to produce the dataset for the next stage. The fitted model from a Pipeline is a PipelineModel, which consists of fitted models and transformers, corresponding to the pipeline stages. If there are no stages, the pipeline acts as an identity transformer.
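For example, a minimal text-classification pipeline in the usual Spark ML style. This is a sketch: the training and test Dataset<Row> variables, and their text and label columns, are assumptions for illustration.

```java
import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.feature.HashingTF;
import org.apache.spark.ml.feature.Tokenizer;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Two Transformers followed by one Estimator.
Tokenizer tokenizer = new Tokenizer()
    .setInputCol("text")
    .setOutputCol("words");
HashingTF hashingTF = new HashingTF()
    .setInputCol(tokenizer.getOutputCol())
    .setOutputCol("features");
LogisticRegression lr = new LogisticRegression().setMaxIter(10);

Pipeline pipeline = new Pipeline()
    .setStages(new PipelineStage[] {tokenizer, hashingTF, lr});

// fit() runs the stages in order: the transformers feed the estimator,
// and the result bundles every fitted stage into one PipelineModel.
PipelineModel model = pipeline.fit(training);
Dataset<Row> predictions = model.transform(test);
```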
Nested classes:

| Modifier and Type | Class and Description |
|---|---|
| static class | Pipeline.SharedReadWrite$ |
| Modifier and Type | Method and Description |
|---|---|
| static Params | clear(Param<?> param) |
| Pipeline | copy(ParamMap extra) - Creates a copy of this instance with the same UID and some extra params. |
| static String | explainParam(Param<?> param) |
| static String | explainParams() |
| static ParamMap | extractParamMap() |
| static ParamMap | extractParamMap(ParamMap extra) |
| PipelineModel | fit(Dataset<?> dataset) - Fits the pipeline to the input dataset with additional parameters. |
| static <T> scala.Option<T> | get(Param<T> param) |
| static <T> scala.Option<T> | getDefault(Param<T> param) |
| static <T> T | getOrDefault(Param<T> param) |
| static Param<Object> | getParam(String paramName) |
| PipelineStage[] | getStages() |
| static <T> boolean | hasDefault(Param<T> param) |
| static boolean | hasParam(String paramName) |
| static boolean | isDefined(Param<?> param) |
| static boolean | isSet(Param<?> param) |
| static Pipeline | load(String path) |
| static Param<?>[] | params() |
| static MLReader<Pipeline> | read() |
| static void | save(String path) |
| static <T> Params | set(Param<T> param, T value) |
| Pipeline | setStages(PipelineStage[] value) |
| Param<PipelineStage[]> | stages() - param for pipeline stages |
| static String | toString() |
| StructType | transformSchema(StructType schema) - :: DeveloperApi :: |
| String | uid() - An immutable unique ID for the object and its derivatives. |
| MLWriter | write() - Returns an MLWriter instance for this ML instance. |
Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.ml.util.MLWritable:
save

Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn

Methods inherited from interface org.apache.spark.ml.util.Identifiable:
toString
public static Pipeline load(String path)
public static String toString()
public static Param<?>[] params()
public static String explainParam(Param<?> param)
public static String explainParams()
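For instance, explainParams() on a Pipeline instance prints one line per declared param with its doc string and current value (a sketch; pipeline is the hypothetical instance from the example above):

```java
// For Pipeline the output looks roughly like
// "stages: stages of the pipeline (undefined)" until setStages(...) is called.
System.out.println(pipeline.explainParams());
```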
public static final boolean isSet(Param<?> param)
public static final boolean isDefined(Param<?> param)
public static boolean hasParam(String paramName)
public static Param<Object> getParam(String paramName)
public static final <T> scala.Option<T> get(Param<T> param)
public static final <T> T getOrDefault(Param<T> param)
public static final <T> scala.Option<T> getDefault(Param<T> param)
public static final <T> boolean hasDefault(Param<T> param)
public static final ParamMap extractParamMap()
public static void save(String path) throws java.io.IOException
Throws: java.io.IOException
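A minimal persistence round trip (a sketch; the path is an illustrative choice and must be writable):

```java
import java.io.IOException;

try {
  // Writes the unfitted pipeline (its stages and params) to the path.
  pipeline.save("/tmp/unfit-pipeline");
} catch (IOException e) {
  throw new RuntimeException(e);
}
// Reload it later, e.g. in another application.
Pipeline restored = Pipeline.load("/tmp/unfit-pipeline");
```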
public String uid()
An immutable unique ID for the object and its derivatives.
Specified by: uid in interface Identifiable
public Param<PipelineStage[]> stages()
Param for pipeline stages.
public Pipeline setStages(PipelineStage[] value)
public PipelineStage[] getStages()
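The three accessors relate as follows (a sketch reusing the hypothetical tokenizer, hashingTF, and lr stages from the class example):

```java
import org.apache.spark.ml.param.Param;

// setStages(...) assigns the stages param, stages() exposes the Param
// object itself, and getStages() reads the current value back.
pipeline.setStages(new PipelineStage[] {tokenizer, hashingTF, lr});
Param<PipelineStage[]> stagesParam = pipeline.stages();
PipelineStage[] current = pipeline.getStages();
```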
public PipelineModel fit(Dataset<?> dataset)
Fits the pipeline to the input dataset with additional parameters. If a stage is an Estimator, its Estimator.fit(Dataset<?>, ParamPair<?>...) method will be called on the input dataset to fit a model. Then the model, which is a transformer, will be used to transform the dataset as the input to the next stage. If a stage is a Transformer, its Transformer.transform(Dataset<?>, ParamPair<?>...) method will be called to produce the dataset for the next stage. The fitted model from a Pipeline is a PipelineModel, which consists of fitted models and transformers, corresponding to the pipeline stages. If there are no stages, the output model acts as an identity transformer.
Specified by: fit in class Estimator<PipelineModel>
Parameters: dataset - input dataset
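The no-stages case is easy to see directly (a sketch; training is the hypothetical dataset from the class example):

```java
// An empty pipeline fits to an identity transformer: transform() returns
// the input dataset with its schema and rows unchanged.
Pipeline empty = new Pipeline().setStages(new PipelineStage[] {});
PipelineModel identity = empty.fit(training);
Dataset<Row> same = identity.transform(training);
```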
public Pipeline copy(ParamMap extra)
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. See defaultCopy().
Specified by: copy in interface Params
Specified by: copy in class Estimator<PipelineModel>
Parameters: extra - (undocumented)
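For example, copying while overriding one stage's param through extra (a sketch; lr is the hypothetical LogisticRegression from the class example):

```java
import org.apache.spark.ml.param.ParamMap;

// The copy keeps the original UID, and the extra params are applied to
// the copied stages (here a different maxIter for the estimator).
Pipeline tweaked = pipeline.copy(new ParamMap().put(lr.maxIter().w(20)));
```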
public StructType transformSchema(StructType schema)
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). A typical implementation should first verify schema changes and parameter validity, including complex parameter interaction checks.
Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)
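For instance, the output schema can be derived without touching any data (a sketch against the hypothetical pipeline above):

```java
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

// Validates the stage wiring and returns the schema the fitted model
// would produce, including the columns each stage appends.
StructType input = new StructType()
    .add("label", DataTypes.DoubleType)
    .add("text", DataTypes.StringType);
StructType output = pipeline.transformSchema(input);
```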
public MLWriter write()
Description copied from interface: MLWritable
Returns an MLWriter instance for this ML instance.
Specified by: write in interface MLWritable
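write() is the flexible counterpart to save(String): it returns an MLWriter whose options can be set before saving. A short sketch (MLWriter.save also throws java.io.IOException):

```java
// Overwrite any existing files at the target path, then write the pipeline.
pipeline.write().overwrite().save("/tmp/spark-pipeline");
```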