Title: Model-Agnostic Interpretations with Forward Marginal Effects
Description: Create local, regional, and global explanations for any machine learning model with forward marginal effects. You provide a model and data, and 'fmeffects' computes feature effects. The package is based on the theory in: C. A. Scholbeck, G. Casalicchio, C. Molnar, B. Bischl, and C. Heumann (2022) <doi:10.48550/arXiv.2201.08837>.
Authors: Holger Löwe [cre, aut], Christian Scholbeck [aut], Christian Heumann [rev], Bernd Bischl [rev], Giuseppe Casalicchio [rev]
Maintainer: Holger Löwe <[email protected]>
License: LGPL-3
Version: 0.1.4
Built: 2024-11-05 17:38:59 UTC
Source: https://github.com/holgstr/fmeffects
Computes forward marginal effects (FME) for arbitrary supervised machine learning models. You provide a model and data, and 'fmeffects' gives you feature effects.
Maintainer: Holger Löwe [email protected]
Authors:
Christian Scholbeck [email protected]
Other contributors:
Christian Heumann [email protected] [reviewer]
Bernd Bischl [email protected] [reviewer]
Giuseppe Casalicchio [email protected] [reviewer]
Useful links:
Report bugs at https://github.com/holgstr/fmeffects/issues
This is a wrapper function for AverageMarginalEffects$new(...)$compute(). It computes Average Marginal Effects (AME) based on Forward Marginal Effects (FME) for a model. The AME is a simple mean FME, computed w.r.t. a feature variable and a model.
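For illustration, the following minimal sketch shows the equivalence of the wrapper and the R6 workflow; it assumes a trained model `forest` and the `bikes` data as in the examples below.

# Equivalent ways to compute AMEs (sketch):
overview1 = ame(model = forest, data = bikes)
overview2 = AverageMarginalEffects$new(model = forest, data = bikes)$compute()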
ame(model, data, features = NULL, ep.method = "none")
model: The (trained) model, with the ability to predict on new data. This must be a train.formula (tidymodels), Learner (mlr3), train (caret), lm or glm object.
data: The data used for computing AMEs, must be data.frame or data.table.
features: If not NULL, a named list of the names of the feature variables for which AMEs should be computed, together with the desired step sizes. For numeric features, the step size must be a single number. For categorical features, the step size must be a character vector of category names that is a subset of the levels of the factor variable.
ep.method: String specifying the method used for extrapolation detection. One of "none" or "envelope". Defaults to "none".
An AverageMarginalEffects object, with a field results containing a list of summary statistics, including:

Feature: The name of the feature.
step.size: The step size w.r.t. the specified feature.
AME: The Average Marginal Effect for a step of length step.size w.r.t. the specified feature.
SD: The standard deviation of FMEs for the specified feature and step.size.
0.25: The 0.25-quantile of FMEs for the specified feature and step.size.
0.75: The 0.75-quantile of FMEs for the specified feature and step.size.
n: The number of observations included in the computation of the AME. This can vary for the following reasons: for categorical features, FMEs are only computed for observations where the original category is not the step.size category; for numerical features, FMEs are only computed for observations that are not extrapolation points (if ep.method is set to "envelope").
Scholbeck, C.A., Casalicchio, G., Molnar, C. et al. Marginal effects for non-linear prediction functions. Data Min Knowl Disc (2024). https://doi.org/10.1007/s10618-023-00993-x
# Train a model:
library(mlr3verse)
library(ranger)
data(bikes, package = "fmeffects")
set.seed(123)
task = as_task_regr(x = bikes, id = "bikes", target = "count")
forest = lrn("regr.ranger")$train(task)

# Compute AMEs for all features:
## Not run:
overview = ame(model = forest, data = bikes)
summary(overview)

# Compute AMEs for a subset of features with non-default step.sizes:
overview = ame(model = forest, data = bikes,
    features = list(humidity = 0.1, weather = c("clear", "rain")))
summary(overview)

# Extract results:
overview$results
## End(Not run)
The AME is a simple mean FME and computed w.r.t. a feature variable and a model.
predictor: Predictor object
features: vector of features for which AMEs should be computed
ep.method: string specifying extrapolation detection method
results: data.table with AMEs computed
computed: logical specifying if compute() has been run
new()
Create a new AME object.
AverageMarginalEffects$new(model, data, features = NULL, ep.method = "none")
model: The (trained) model, with the ability to predict on new data. This must be a train.formula (tidymodels), Learner (mlr3), train (caret), lm or glm object.
data: The data used for computing AMEs, must be data.frame or data.table.
features: If not NULL, a named list of the names of the feature variables for which AMEs should be computed, together with the desired step sizes. For numeric features, the step size must be a single number. For categorical features, the step size must be a character vector of category names that is a subset of the levels of the factor variable.
ep.method: String specifying the method used for extrapolation detection. One of "none" or "envelope". Defaults to "none".
A new AME object.

# Train a model:
library(mlr3verse)
library(ranger)
set.seed(123)
data(bikes, package = "fmeffects")
task = as_task_regr(x = bikes, id = "bikes", target = "count")
forest = lrn("regr.ranger")$train(task)

# Compute AMEs for all features:
\dontrun{
overview = AverageMarginalEffects$new(model = forest, data = bikes)$compute()
summary(overview)

# Compute AMEs for a subset of features with non-default step.sizes:
overview = AverageMarginalEffects$new(model = forest, data = bikes,
    features = list(humidity = 0.1, weather = c("clear", "rain")))$compute()
summary(overview)
}
compute()
Computes results, i.e., AMEs including the SD of FMEs, for an AME object.
AverageMarginalEffects$compute()
An AME object with results.

# Compute results:
\dontrun{
overview$compute()
}
clone()
The objects of this class are cloneable with this method.
AverageMarginalEffects$clone(deep = FALSE)
deep
Whether to make a deep clone.
This data set contains information on daily bike sharing usage in Washington, D.C. for the years 2011-2012. The target variable is count, the total number of bikes lent out to users on a specific day.
data(bikes)
An object of class data.frame with 731 rows and 10 columns.
This data frame contains the following columns:
season: Season of the year
year: Year; 0=2011, 1=2012
holiday: If a day is a public holiday (y/n)
weekday: Day of the week
workingday: If a day is a working day (y/n)
weather: Weather situation
temp: Temperature in degrees Celsius
humidity: Humidity (relative)
windspeed: Windspeed in miles per hour
count: Total number of bikes lent out to users
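A minimal sketch for loading and inspecting the data set (assumes the fmeffects package is installed):

# Load the bikes data and inspect its structure:
data(bikes, package = "fmeffects")
str(bikes)            # 731 observations, 10 variables
summary(bikes$count)  # distribution of the target variable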
The original data can be found on the UCI database (ID = 275).
Fanaee-T, Hadi, and Gama, Joao, "Event labeling combining ensemble detectors and background knowledge", Progress in Artificial Intelligence (2013): pp. 1-15, Springer Berlin Heidelberg, doi:10.1007/s13748-013-0040-3.
This is a wrapper function that creates the correct subclass of Partitioning. It computes feature subspaces for semi-global interpretations of FMEs via recursive partitioning (RP).

came(effects, number.partitions = NULL, max.sd = Inf, rp.method = "ctree", tree.control = NULL)
effects: A ForwardMarginalEffect object, as returned by fme().
number.partitions: The exact number of partitions required. Either NULL (default) or a single integer.
max.sd: The maximum standard deviation required in each partition. Among multiple partitionings that satisfy this criterion, the one with the lowest number of partitions is selected. Either Inf (default) or a numeric value.
rp.method: One of "ctree" (default) or "rpart", the algorithm used for recursive partitioning.
tree.control: Control parameters for the RP algorithm, matching the chosen rp.method. Defaults to NULL.
A Partitioning object with identified feature subspaces.
Scholbeck, C.A., Casalicchio, G., Molnar, C. et al. Marginal effects for non-linear prediction functions. Data Min Knowl Disc (2024). https://doi.org/10.1007/s10618-023-00993-x
# Train a model and compute FMEs:
library(mlr3verse)
library(ranger)
data(bikes, package = "fmeffects")
task = as_task_regr(x = bikes, id = "bikes", target = "count")
forest = lrn("regr.ranger")$train(task)
effects = fme(model = forest, data = bikes, features = list("temp" = 1), ep.method = "envelope")

# Find a partitioning with exactly 3 subspaces:
subspaces = came(effects, number.partitions = 3)

# Find a partitioning with a maximum standard deviation of 200, use `rpart`:
library(rpart)
subspaces = came(effects, max.sd = 200, rp.method = "rpart")

# Analyze results:
summary(subspaces)
plot(subspaces)

# Extract results:
subspaces$results
subspaces$tree
This is a wrapper function for ForwardMarginalEffect$new(...)$compute(). It computes forward marginal effects (FMEs) for a specified change in feature values.

fme(model, data, features, ep.method = "none", compute.nlm = FALSE, nlm.intervals = 1)
model: The (trained) model, with the ability to predict on new data. This must be a train.formula (tidymodels), Learner (mlr3), train (caret), lm or glm object.
data: The data used for computing FMEs, must be data.frame or data.table.
features: A named list with the feature name(s) and step size(s). The list names should correspond to the names of the feature variables affected by the step. The list must exclusively contain either numeric or categorical features, but not a combination of both. Numeric features must have a number as step size, categorical features the name of the reference category.
ep.method: String specifying the method used for extrapolation detection. One of "none" or "envelope". Defaults to "none".
compute.nlm: Compute NLMs for FMEs for numerical steps. Defaults to FALSE.
nlm.intervals: Number of intervals for computing NLMs. Results in longer computing time but more accurate approximation of NLMs. Defaults to 1.
If one or more numeric features are passed to the features argument, FMEs are computed as

FME_{x, h_S} = f(x_S + h_S, x_{-S}) - f(x),

where h_S is the step size vector and x_{-S} the other features.

If one or more categorical features are passed to features,

FME_{x, c_J} = f(c_J, x_{-J}) - f(x),

where c_J is the set of selected reference categories in features and x_{-J} the other features.
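To make the forward-difference definition concrete, the following sketch computes a single FME by hand; it reuses the `forest` learner and `bikes` data from the examples and mlr3's predict_newdata() method, so those names are assumptions carried over from the examples.

# Forward difference for one observation and a step of +1 on "temp":
x = bikes[1, ]                  # observation x
x.step = x
x.step$temp = x.step$temp + 1   # x_S + h_S, other features unchanged
# FME = f(x_S + h_S, x_{-S}) - f(x):
forest$predict_newdata(x.step)$response - forest$predict_newdata(x)$response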
A ForwardMarginalEffect object with the following fields:

ame: average marginal effect (AME).
anlm: average non-linearity measure (NLM).
extrapolation.ids: observations that have been identified as extrapolation points and not included in the analysis.
data.step: a data.table of the feature matrix after the step has been applied.
results: a data.table of the individual FMEs (and NLMs, if applicable) for all observations that are not extrapolation points.
Scholbeck, C.A., Casalicchio, G., Molnar, C. et al. Marginal effects for non-linear prediction functions. Data Min Knowl Disc (2024). https://doi.org/10.1007/s10618-023-00993-x
# Train a model:
library(mlr3verse)
library(ranger)
data(bikes, package = "fmeffects")
forest = lrn("regr.ranger")$train(as_task_regr(x = bikes, target = "count"))

# Compute FMEs for a numerical feature:
effects = fme(model = forest, data = bikes, features = list("temp" = 1), ep.method = "envelope")

# Analyze results:
summary(effects)
plot(effects)

# Extract results:
effects$results

# Compute the AME for a categorical feature:
fme(model = forest, data = bikes, features = list("weather" = "rain"))$ame
The FME is a forward difference in prediction due to a specified change in feature values.
feature: vector of features
predictor: Predictor object
step.size: vector of step sizes for features specified by "feature"
data.step: the data.table with the data matrix after the step
ep.method: string specifying extrapolation detection method
compute.nlm: logical specifying if NLM should be computed
nlm.intervals: number of intervals for computing NLMs
step.type: "numerical" or "categorical"
extrapolation.ids: vector of observation ids classified as extrapolation points
results: data.table with FMEs and NLMs computed
ame: Average Marginal Effect (AME) of observations in results
anlm: Average Non-linearity Measure (ANLM) of observations in results
computed: logical specifying if compute() has been run
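After compute() has been run (or after using the fme() wrapper), these fields can be accessed directly. A short sketch, assuming an `effects` object as created in the fme() examples:

effects$ame                  # average marginal effect
effects$extrapolation.ids    # observations excluded as extrapolation points
head(effects$results)        # per-observation FMEs (and NLMs, if computed)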
new()
Create a new ForwardMarginalEffect object.
ForwardMarginalEffect$new(predictor, features, ep.method = "none", compute.nlm = FALSE, nlm.intervals = 1)
predictor: Predictor object.
features: A named list with the feature name(s) and step size(s).
ep.method: String specifying extrapolation detection method.
compute.nlm: Compute NLM with FMEs? Defaults to FALSE.
nlm.intervals: How many intervals for NLM computation. Defaults to 1.
A new ForwardMarginalEffect object.

# Train a model:
library(mlr3verse)
library(ranger)
data(bikes, package = "fmeffects")
forest = lrn("regr.ranger")$train(as_task_regr(x = bikes, target = "count"))

# Create a `ForwardMarginalEffect` object:
effects = ForwardMarginalEffect$new(makePredictor(forest, bikes),
    features = list("temp" = 1, "humidity" = 0.01),
    ep.method = "envelope")
compute()
Computes results, i.e., FMEs (and NLMs) for non-extrapolation points, for a ForwardMarginalEffect object.
ForwardMarginalEffect$compute()
A ForwardMarginalEffect object with results.

# Compute results:
effects$compute()
plot()
Plots results, i.e., FMEs (and NLMs) for non-extrapolation points, for a ForwardMarginalEffect object.
ForwardMarginalEffect$plot(with.nlm = FALSE, bins = 40, binwidth = NULL)

with.nlm: Plots NLMs if computed, defaults to FALSE.
bins: Numeric vector giving the number of bins in both vertical and horizontal directions. Applies only to univariate or bivariate numeric effects. See ggplot2::stat_summary_hex() for details.
binwidth: Numeric vector giving the bin width in both vertical and horizontal directions. Overrides bins if both are set. Applies only to univariate or bivariate numeric effects. See ggplot2::stat_summary_hex() for details.

# Plot results:
effects$plot()
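The binning and NLM arguments can be adjusted as in the following sketch; the values are illustrative assumptions, and with.nlm = TRUE requires that NLMs were computed.

# Plot with a coarser hexagonal binning:
effects$plot(bins = 30)
# Additionally show NLMs (only if compute.nlm = TRUE was used):
# effects$plot(with.nlm = TRUE)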
clone()
The objects of this class are cloneable with this method.
ForwardMarginalEffect$clone(deep = FALSE)
deep
Whether to make a deep clone.
A wrapper function that automatically creates the correct subclass of Predictor from the model. Can be passed to the constructor of ForwardMarginalEffect.
makePredictor(model, data)
model: the (trained) model, with the ability to predict on new data.
data: the data used for computing FMEs, must be data.frame or data.table.
# Train a model:
library(mlr3verse)
data(bikes, package = "fmeffects")
task = as_task_regr(x = bikes, id = "bikes", target = "count")
forest = lrn("regr.ranger")$train(task)

# Create the predictor:
predictor = makePredictor(forest, bikes)

# This instantiated an object of the correct subclass of `Predictor`:
class(predictor)
This is the abstract superclass for partitioning objects like PartitioningCtree and PartitioningRpart.
A Partitioning contains information about feature subspaces with conditional average marginal effects (cAME) computed for ForwardMarginalEffect objects.
object: a ForwardMarginalEffect object with results computed
method: the method for finding feature subspaces
value: the value of method
results: descriptive statistics of the resulting feature subspaces
tree: the tree representing the partitioning, a party object
tree.control: control parameters for the RP algorithm
computed: logical specifying if compute() has been run
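A short sketch for inspecting these fields, assuming a `subspaces` object created with came() as in its examples:

subspaces$results   # descriptive statistics per feature subspace (cAMEs)
subspaces$tree      # the underlying party tree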
new()
Create a Partitioning object
Partitioning$new(...)
...
Partitioning cannot be initialized, only its subclasses
compute()
Computes the partitioning, i.e., feature subspaces with more homogeneous FMEs, for a ForwardMarginalEffect object.
Partitioning$compute()
A Partitioning object with results.

# Compute results for an arbitrary partitioning:
# subspaces$compute()
plot()
Plots results, i.e., a decision tree and summary statistics of the feature subspaces, for a Partitioning object after $compute() has been run.
Partitioning$plot()

# Plot an arbitrary partitioning:
# subspaces$plot()
clone()
The objects of this class are cloneable with this method.
Partitioning$clone(deep = FALSE)
deep
Whether to make a deep clone.
This task specializes Partitioning for the ctree algorithm for recursive partitioning. It is recommended to use came() for construction of Partitioning objects.

fmeffects::Partitioning -> PartitioningCtree
new()
Create a new PartitioningCtree object.
PartitioningCtree$new(object, method, value, tree.control = NULL)
object: an FME object with results computed.
method: the method for finding feature subspaces.
value: the value of method.
tree.control: control parameters for the RP algorithm.
clone()
The objects of this class are cloneable with this method.
PartitioningCtree$clone(deep = FALSE)
deep
Whether to make a deep clone.
This task specializes Partitioning for the rpart algorithm for recursive partitioning. It is recommended to use came() for construction of Partitioning objects.

fmeffects::Partitioning -> PartitioningRpart
new()
Create a new PartitioningRpart object.
PartitioningRpart$new(object, method, value, tree.control = NULL)
object: An FME object with results computed.
method: The method for finding feature subspaces.
value: The value of method.
tree.control: Control parameters for the RP algorithm.
clone()
The objects of this class are cloneable with this method.
PartitioningRpart$clone(deep = FALSE)
deep
Whether to make a deep clone.
Plots a ForwardMarginalEffect object.

## S3 method for class 'ForwardMarginalEffect' plot(x, ...)

x: object of class ForwardMarginalEffect.
...: additional arguments affecting the plot produced.
Plots an FME Partitioning.
## S3 method for class 'Partitioning' plot(x, ...)

x: object of class Partitioning.
...: additional arguments affecting the plot produced.
This is the abstract superclass for predictor objects like PredictorMLR3 and PredictorCaret. A Predictor contains information about an ML model's prediction function and training data.
model: The (trained) model, with the ability to predict on new data.
target: A character vector with the name of the target variable.
X: A data.table with feature and target variables.
feature.names: A character vector with the names of the features in X.
feature.types: A character vector with the types (numerical or categorical) of the features in X.
new()
Create a Predictor object
Predictor$new(...)
...
Predictor cannot be initialized, only its subclasses
clone()
The objects of this class are cloneable with this method.
Predictor$clone(deep = FALSE)
deep
Whether to make a deep clone.
This task specializes Predictor for caret regression models. The model is assumed to be a c("train", "train.formula") object. It is recommended to use makePredictor() for construction of Predictor objects.

fmeffects::Predictor -> PredictorCaret
new()
Create a new PredictorCaret object.
PredictorCaret$new(model, data)
model: train, train.formula object.
data: The data used for computing FMEs, must be data.frame or data.table.

predict()
Predicts on an observation "newdata".
PredictorCaret$predict(newdata)
newdata: The feature vector for which the target should be predicted.
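A minimal sketch of using a caret model with fmeffects; this is an illustration, not part of the package examples, and assumes the caret and randomForest packages are installed.

library(caret)
data(bikes, package = "fmeffects")
# Train a caret regression model:
mod = train(count ~ ., data = bikes, method = "rf",
    trControl = trainControl(method = "cv", number = 2))
# makePredictor() should dispatch to PredictorCaret:
predictor = makePredictor(mod, bikes)
class(predictor)
predictor$predict(bikes[1, ])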
clone()
The objects of this class are cloneable with this method.
PredictorCaret$clone(deep = FALSE)
deep
Whether to make a deep clone.
This task specializes Predictor for lm and lm-type models. The model is assumed to be an lm object. It is recommended to use makePredictor() for construction of Predictor objects.

fmeffects::Predictor -> PredictorLM
new()
Create a new PredictorLM object.
PredictorLM$new(model, data)
model: lm object.
data: The data used for computing FMEs, must be data.frame or data.table.

predict()
Predicts on an observation "newdata".
PredictorLM$predict(newdata)
newdata: The feature vector for which the target should be predicted.
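A minimal sketch with a linear model; an illustration, not part of the package examples.

data(bikes, package = "fmeffects")
# Fit a linear model on a few numeric features:
mod = lm(count ~ temp + humidity + windspeed, data = bikes)
# makePredictor() should dispatch to PredictorLM:
predictor = makePredictor(mod, bikes)
class(predictor)
predictor$predict(bikes[1, ])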
clone()
The objects of this class are cloneable with this method.
PredictorLM$clone(deep = FALSE)
deep
Whether to make a deep clone.
This task specializes Predictor for mlr3 models. The model is assumed to be a LearnerRegr or LearnerClassif. It is recommended to use makePredictor() for construction of Predictor objects.

fmeffects::Predictor -> PredictorMLR3
new()
Create a new PredictorMLR3 object.
PredictorMLR3$new(model, data)
model: LearnerRegr or LearnerClassif object.
data: The data used for computing FMEs, must be data.frame or data.table.

predict()
Predicts on an observation "newdata".
PredictorMLR3$predict(newdata)
newdata: The feature vector for which the target should be predicted.
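A short sketch, reusing the `forest` learner and `bikes` data from the earlier examples:

# makePredictor() should dispatch to PredictorMLR3 for mlr3 learners:
predictor = makePredictor(forest, bikes)
class(predictor)
predictor$predict(bikes[1, ])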
clone()
The objects of this class are cloneable with this method.
PredictorMLR3$clone(deep = FALSE)
deep
Whether to make a deep clone.
This task specializes Predictor for parsnip models. The model is assumed to be a model_fit object. It is recommended to use makePredictor() for construction of Predictor objects.

fmeffects::Predictor -> PredictorParsnip
new()
Create a new PredictorParsnip object.
PredictorParsnip$new(model, data)
model: model_fit object.
data: The data used for computing FMEs, must be data.frame or data.table.

predict()
Predicts on an observation "newdata".
PredictorParsnip$predict(newdata)
newdata: The feature vector for which the target should be predicted.
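A minimal sketch with a parsnip model; an illustration, not part of the package examples, assuming the parsnip and ranger packages are installed.

library(parsnip)
data(bikes, package = "fmeffects")
# Fit a parsnip random forest (a model_fit object):
mod = rand_forest(mode = "regression") |>
  set_engine("ranger") |>
  fit(count ~ ., data = bikes)
# makePredictor() should dispatch to PredictorParsnip:
predictor = makePredictor(mod, bikes)
class(predictor)
predictor$predict(bikes[1, ])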
clone()
The objects of this class are cloneable with this method.
PredictorParsnip$clone(deep = FALSE)
deep
Whether to make a deep clone.
Prints a ForwardMarginalEffect object.

## S3 method for class 'ForwardMarginalEffect' print(x, ...)

x: object of class ForwardMarginalEffect.
...: additional arguments affecting the output produced.
Prints an FME Partitioning.
## S3 method for class 'Partitioning' print(x, ...)

x: object of class Partitioning.
...: additional arguments affecting the output produced.
Prints summary of an AverageMarginalEffects object.
## S3 method for class 'AverageMarginalEffects' summary(object, ...)

object: object of class AverageMarginalEffects.
...: additional arguments affecting the summary produced.
Prints summary of a ForwardMarginalEffect object.

## S3 method for class 'ForwardMarginalEffect' summary(object, ...)

object: object of class ForwardMarginalEffect.
...: additional arguments affecting the summary produced.
Prints summary of an FME Partitioning.
## S3 method for class 'Partitioning' summary(object, ...)

object: object of class Partitioning.
...: additional arguments affecting the summary produced.