Regression - Driver Analysis

A Driver Analysis models the relationship between a dependent variable and one or more independent variables, and quantifies the relative importance of each independent variable in predicting the dependent variable.

Interpretation

Driver analysis computes an estimate of the importance of various independent variables in predicting a dependent variable. Most commonly, the dependent variable measures preference or usage of a particular brand (or brands), and the independent variables measure characteristics of this brand (or brands). For example, the dependent variable may be a measure of overall satisfaction and the independent variables may be measurements of satisfaction with bank fees, efficiency, friendliness, wait times, etc.

Variable statistics

Importance score the magnitude of the importance coefficient indicates the contribution each independent variable makes to explaining the outcome variable, relative to the other independent variables in the model. Importance scores are scaled so that they sum to 100, making them easier to compare.

Raw score the magnitude of the raw importance contribution the independent variable makes to the outcome variable. This raw importance is the independent variable's contribution to the model R-squared, relative to the other variables.
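
As a minimal illustration of this scaling (using made-up raw scores rather than output from an actual model), the importance scores are simply the raw contributions rescaled to sum to 100:

# Illustrative only: hypothetical raw importance scores (each predictor's
# contribution to the model R-squared)
raw <- c(fees = 0.10, efficiency = 0.25, friendliness = 0.05, wait_times = 0.08)

# Rescale so that the importance scores sum to 100
importance <- 100 * raw / sum(raw)
round(importance, 1)  # the four scores now sum to 100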

The coefficient is colored if the variable is statistically significant at the 5% level.

Standard Error measures the accuracy of an estimate. The smaller the standard error, the more accurate the predictions.

t-statistic the estimate divided by the standard error. The larger the magnitude (whether positive or negative), the stronger the evidence that the variable is significant. The values are highlighted based on their magnitude.

p-value expresses the t-statistic as a probability. A p-value under 0.05 means that the variable is statistically significant at the 5% level; a p-value under 0.01 means that the variable is statistically significant at the 1% level. P-values under 0.05 are shown in bold.
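
For example, with hypothetical values for the estimate, standard error and residual degrees of freedom, the t-statistic and its two-sided p-value can be computed in R as:

# Hypothetical values; not taken from any particular model
estimate <- 0.42
std.error <- 0.15
df <- 297  # residual degrees of freedom

t.stat <- estimate / std.error
p.value <- 2 * pt(abs(t.stat), df = df, lower.tail = FALSE)
c(t = t.stat, p = p.value)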

Overall statistics

n the sample size of the model.

R-squared assesses the goodness of fit of the model. A larger value indicates that the model captures more of the variation in the dependent variable.

See also Regression Diagnostics.

Create a Linear Regression Model in Displayr

With unstacked data the process is similar to a standard Regression model.

1. Go to Anything > Advanced Analysis > Regression > Driver Analysis
2. Under Inputs > Outcome, select your dependent variable
3. Under Inputs > Predictor(s), select your independent variables

Stacked data can be handled as follows:

1. Go to Anything > Advanced Analysis > Regression > Driver Analysis
2. Check the Stack data option to allow stacked data.
3. Under Inputs > Outcome, select a single dependent variable; when stacked, this should have a Multi structure.
4. Under Inputs > Predictor(s), select your independent variable set; this should have a Grid structure that matches the Outcome variable above.

See Question Types for more information on grid and multi type structures.

Object Inspector Options

Outcome The variable to be predicted by the predictor variables.

Predictors The variable(s) to predict the outcome.

Algorithm The fitting algorithm. Defaults to Regression but may be changed to other machine learning methods.

Type Use this option to toggle between different types of regression model; note that some types are only appropriate for particular kinds of outcome variable.

Linear Appropriate for a continuous outcome variable. See Regression - Linear Regression.
Binary Logit Appropriate if the outcome is binary (i.e. falls in one of two categories). See Regression - Binary Logit.
Ordered Logit Appropriate for a discrete outcome where the categories have a natural order (e.g. Low, Medium, High). See Regression - Ordered Logit.
Multinomial Logit Appropriate for a discrete outcome with unordered categories. See Regression - Multinomial Logit.
Poisson Appropriate for count outcomes (i.e. outcomes that take only positive integer values). See Regression - Poisson Regression.
Quasi-Poisson Appropriate for count outcomes. See Regression - Quasi-Poisson Regression.
NBD Appropriate for count outcomes. See Regression - NBD Regression.

Robust standard errors Computes standard errors that are robust to violations of the assumption of constant variance (i.e., heteroscedasticity). See Robust Standard Errors. This is only available when Type is Linear.
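
As a generic illustration of heteroscedasticity-robust standard errors for a linear model in R (using the sandwich and lmtest packages and the built-in mtcars data; this is not necessarily the exact estimator used internally):

# Robust (heteroscedasticity-consistent) standard errors for a linear model
library(sandwich)
library(lmtest)

fit <- lm(mpg ~ wt + hp, data = mtcars)
coeftest(fit, vcov = vcovHC(fit, type = "HC1"))  # coefficient tests with robust SEs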

Missing data See Missing Data Options.

Output

Summary The default; as shown in the example above.
Detail Typical R output; this contains some additional information compared to Summary, but without the pretty formatting.
ANOVA Analysis of variance table containing the results of Chi-squared likelihood ratio tests for each predictor.
Relative Importance Analysis The results of a relative importance analysis (also known as Johnson's relative weights). See here and the references for more information. This option is not available for Multinomial Logit. Note that categorical predictors are not converted to be numeric, unlike in Driver (Importance) Analysis - Relative Importance Analysis.
Shapley Regression See here and the references for more information. This option is only available for Linear Regression. Note that categorical predictors are not converted to be numeric, unlike in Driver (Importance) Analysis - Shapley.
Jaccard Coefficient Computes the relative importance of the predictor variables against the outcome variable using Jaccard coefficients. See Driver (Importance) Analysis - Jaccard Coefficient. This option requires both the outcome variable and the predictor variables to be binary. A small sketch of the calculation follows this list.
Correlation Computes the relative importance of the predictor variables against the outcome variable via the bivariate Pearson product moment correlations. See Driver (Importance) Analysis - Correlation and references therein for more information.
Effects Plot Plots the relationship between each of the Predictors and the Outcome. Not available for Multinomial Logit.
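
The Jaccard coefficient mentioned above is the share of cases positive on both variables out of the cases positive on either. A minimal sketch, using simulated binary data rather than real survey variables:

# Jaccard coefficient between two binary variables
jaccard <- function(x, y) sum(x & y, na.rm = TRUE) / sum(x | y, na.rm = TRUE)

set.seed(123)
outcome   <- rbinom(100, 1, 0.4)  # simulated binary outcome
predictor <- rbinom(100, 1, 0.5)  # simulated binary predictor
jaccard(outcome, predictor)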

Correction The multiple comparisons correction applied when computing the p-values of the post-hoc comparisons.

Variable names Displays Variable Names in the output instead of labels.

Absolute importance scores Whether the absolute value of Relative Importance Analysis scores should be displayed.

Auxiliary variables Variables to be used when imputing missing values (in addition to all the other variables in the model).

Weight Where a weight has been set for the R Output, it will automatically be applied when the model is estimated. By default, the weight is assumed to be a sampling weight, and the standard errors are estimated using Taylor series linearization (by contrast, in the Legacy Regression, weight calibration is used). See Weights, Effective Sample Size and Design Effects.
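
The general approach of sampling weights with linearization-based standard errors can be illustrated in R with the survey package (hypothetical weights on the built-in mtcars data; an illustration of the idea, not the internal implementation):

# Weighted estimation with Taylor-series-linearization standard errors
library(survey)

set.seed(1)
d <- mtcars
d$w <- runif(nrow(d), 0.5, 1.5)  # hypothetical sampling weights

des <- svydesign(ids = ~1, weights = ~w, data = d)
summary(svyglm(mpg ~ wt + hp, design = des))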

Filter The data is automatically filtered using any filters prior to estimating the model.

Crosstab Interaction Optional variable to test for interaction with other variables in the model. The interaction variable is treated as a categorical variable. Coefficients in the table are computed by creating separate regressions for each level of the interaction variable. To evaluate whether a coefficient is significantly higher (blue) or lower (red), we perform a t-test comparing the coefficient to the coefficient estimated from the remaining data, as described in Driver Analysis. P-values are corrected for multiple comparisons across the whole table (excluding the NET column). The p-value in the sub-title is calculated using the likelihood ratio test between the pooled model with no interaction variable and a model where all predictors interact with the interaction variable.
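
A simplified sketch of this coefficient comparison (using the built-in mtcars data and a made-up grouping variable; the production code may differ in detail, e.g. in the multiple-comparison correction):

# Compare a coefficient estimated within one level of the interaction variable
# against the same coefficient estimated from the remaining data
d <- mtcars
d$group <- factor(ifelse(d$am == 1, "Manual", "Automatic"))

in.level  <- summary(lm(mpg ~ wt, data = d[d$group == "Manual", ]))$coefficients
remaining <- summary(lm(mpg ~ wt, data = d[d$group != "Manual", ]))$coefficients

b1 <- in.level["wt", "Estimate"];  se1 <- in.level["wt", "Std. Error"]
b2 <- remaining["wt", "Estimate"]; se2 <- remaining["wt", "Std. Error"]

t.stat  <- (b1 - b2) / sqrt(se1^2 + se2^2)
p.value <- 2 * pnorm(abs(t.stat), lower.tail = FALSE)  # normal approximation
c(t = t.stat, p = p.value)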

Automated outlier removal percentage A numeric value between 0 and 50 (including 0 but not 50) specifying the percentage of the data that is removed from the analysis as outliers. All regression types except Multinomial Logit support this feature. If a value of zero is selected then no outlier removal is performed and a standard regression output for the entire (possibly filtered) dataset is returned. If a non-zero value is selected then the regression model is fitted twice. The first model uses the entire dataset (after filters have been applied) and identifies the observations that generate the largest residuals. The user-specified percentage of cases with the largest residuals is then removed, and the regression model is refitted on this reduced dataset and its output returned. The specific residual used varies depending on the regression Type, as listed below; a simplified sketch of the Linear case is given below.

Linear: The studentized residual in an unweighted regression and the Pearson residual in a weighted regression. The Pearson residual in the weighted case adjusts appropriately for the provided survey weights.
Binary Logit and Ordered Logit: A type of surrogate residual from the sure R package (see Greenwell, McCarthy, Boehmke and Liu (2018)[1] for more details). In Binary Logit it uses the resids function with the jitter parametrization. In Ordered Logit it uses the resids function with the latent parametrization to exploit the ordered logit structure.
NBD Regression, Poisson Regression: A studentized deviance residual in an unweighted regression and the Pearson residual in a weighted regression.
Quasi-Poisson Regression: A type of quasi-deviance residual via the rstudent function in an unweighted regression and the Pearson residual in a weighted regression.

The studentized residual computes the distance between the observed and fitted value for each point and standardizes (adjusts) it based on the influence of the point and an externally adjusted variance calculation. The studentized deviance residual computes the contribution the fitted point makes to the likelihood and standardizes (adjusts) it based on the influence of the point and an externally adjusted variance calculation. The Pearson residual in the weighted case computes the distance between the observed and fitted value and adjusts appropriately for the provided survey weights. See the rstudent function in R and Davison and Snell (1991)[2] for details of the specific calculations.

Note that this feature is not supported when using the Multiple imputation option for handling Missing data.
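
As a simplified sketch of the procedure for the Linear case (built-in mtcars data, ignoring weights and the finer details of the production implementation):

# Fit on all (filtered) data, drop the specified percentage of cases with the
# largest absolute studentized residuals, then refit
outlier.percent <- 5

fit.all <- lm(mpg ~ wt + hp, data = mtcars)
resids  <- abs(rstudent(fit.all))      # studentized residuals

n.remove <- floor(nrow(mtcars) * outlier.percent / 100)
keep     <- rank(-resids) > n.remove   # drop the n.remove largest residuals

fit.trimmed <- lm(mpg ~ wt + hp, data = mtcars[keep, ])
summary(fit.trimmed)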

Stack data Whether the input data should be stacked before analysis. Stacking can be desirable when each individual in the data set has multiple cases and an aggregate model is desired. More information is available at Stacking Data Files. If this option is chosen then the Outcome needs to be a single Question with a Multi type structure suitable for regression (such as a Pick One - Multi, Pick Any or Number - Multi), or equivalently a Variable Set with a Multi type structure (such as a Binary - Multi, Nominal - Multi, Ordinal - Multi or Numeric - Multi). Similarly, the Predictor(s) need to be a single Question with a Grid type structure (such as a Pick Any - Grid or a Number - Grid), or a Variable Set with a Grid type structure (such as a Binary - Grid or a Numeric - Grid). In the process of stacking, the data reduction is inspected. Any constructed NETs are removed unless they are comprised of source values that are mutually exclusive of other codes, such as the result of merging two categories.
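
Conceptually, stacking reshapes the data so that each respondent-item (e.g. respondent-brand) combination becomes one case. A minimal sketch with made-up column names:

# Reshape a respondent-by-brand outcome (multi) and matching predictors (grid)
# so that each respondent-brand combination is one row
wide <- data.frame(
  id = 1:3,
  satisfaction_A = c(5, 3, 4), satisfaction_B = c(2, 4, 5),
  fees_A = c(4, 2, 3),         fees_B = c(1, 5, 4)
)

long <- reshape(wide, direction = "long",
                varying = c("satisfaction_A", "satisfaction_B", "fees_A", "fees_B"),
                timevar = "brand", idvar = "id", sep = "_")
long  # one row per respondent-brand, with columns satisfaction and fees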

Random seed Seed used to initialize the (pseudo)random number generator for the model fitting algorithm. Different seeds may lead to slightly different answers, but should normally not make a large difference.

Increase allowed output size Check this box if you encounter a warning message "The R output had size XXX MB, exceeding the 128 MB limit..." and you need to reference the output elsewhere in your document; e.g., to save predicted values to a Data Set or examine diagnostics.

Maximum allowed size for output (MB). This control only appears if Increase allowed output size is checked. Use it to set the maximum allowed size for the regression output in Megabytes. The warning referred to above about the R output size will state the minimum size you need to set in order to return the full output. Note that having very many large outputs in one document or page may slow down the performance of your document and increase load times.

Additional options are available by editing the code.

DIAGNOSTICS

Plot - Cook's Distance Creates a line/rug plot showing Cook's Distance for each observation.

Plot - Cook's Distance vs Leverage Creates a scatterplot showing Cook's distance vs leverage for each observation.

Plot - Influence Index Creates index plots of studentized residuals, hat values, and Cook's distance.

Multicollinearity Table (VIF) Creates a table containing variance inflation factors (VIF) to diagnose multicollinearity.

Plot - Normal Q-Q Creates a normal Quantile-Quantile (QQ) plot to reveal departures of the residuals from normality.

Prediction-Accuracy Table Creates a table showing the observed and predicted values, as a heatmap.

Test Residual Heteroscedasticity Conducts a heteroscedasticity test on the residuals.

Test Residual Normality (Shapiro-Wilk) Conducts a Shapiro-Wilk test of normality on the (deviance) residuals.

Plot - Residuals vs Fitted Creates a scatterplot of residuals versus fitted values.

Plot - Residuals vs Leverage Creates a plot of residuals versus leverage values.

Plot - Scale-Location Creates a plot of the square root of the absolute standardized residuals by fitted values.

Test Residual Serial Correlation (Durbin-Watson) Conducts a Durbin-Watson test of serial correlation (auto-correlation) on the residuals.
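
These diagnostics broadly correspond to standard R regression diagnostics. As a rough illustration of the underlying ideas for a linear model (built-in mtcars data and the car and lmtest packages; not the exact code behind the menu items):

# Approximate base/contributed-package equivalents of some diagnostics above
fit <- lm(mpg ~ wt + hp, data = mtcars)

plot(fit, which = 4)          # Cook's distance
plot(fit, which = 2)          # normal Q-Q plot of residuals
plot(fit, which = 1)          # residuals vs fitted
plot(fit, which = 3)          # scale-location
car::vif(fit)                 # variance inflation factors (multicollinearity)
lmtest::bptest(fit)           # Breusch-Pagan test of residual heteroscedasticity
shapiro.test(rstandard(fit))  # Shapiro-Wilk test of residual normality
lmtest::dwtest(fit)           # Durbin-Watson test of serial correlation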

SAVE VARIABLE(S)

Fitted Values Creates a new variable containing fitted values for each case in the data.

Predicted Values Creates a new variable containing predicted values for each case in the data.

Residuals Creates a new variable containing residual values for each case in the data.

Additional Properties

When using this feature you can obtain additional information that is stored by the R code which produces the output.

  1. To do so, select Create > R Output.
  2. In the R CODE, paste: item = YourReferenceName
  3. Replace YourReferenceName with the reference name of your item. Find this in the Report tree or by selecting the item and then going to Properties > General > Name from the object inspector on the right.
  4. Below the first line of code, you can paste in snippets from below or type in str(item) to see a list of available information.

For a more in-depth discussion on extracting information from objects in R, check out our blog post here.

Properties which may be of interest are:

  • Summary outputs from the regression model:
item$summary$coefficients # summary regression outputs

More information

What is Linear Regression?

Acknowledgements

See Regression - Generalized Linear Model.

Technical Details

There are two main approaches offered to determine the importance of variables in a Driver Analysis: Shapley regression and Relative Importance Analysis. Both techniques decompose the contribution each predictor variable (driver) makes towards the outcome variable. Shapley regression[3] is only available for Linear regression and uses an exhaustive search of all possible linear regression models to compute the contribution each predictor variable makes to the R-squared statistic. Relative Importance Analysis[4] takes a different approach, using an orthogonal representation of the predictor variables and reconciling this with the original variables to determine their contribution. An overview comparing the two approaches is given in the two posts [5] and [6].
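
A compact sketch of the Shapley decomposition of R-squared described above, for a linear model (built-in mtcars data; written for clarity rather than speed, and not the production implementation):

# Shapley value regression: each predictor's importance is its marginal
# contribution to R-squared, averaged over all subsets of the other predictors
shapley.r2 <- function(data, outcome, predictors) {
    r2 <- function(vars) {
        if (length(vars) == 0) return(0)
        summary(lm(reformulate(vars, response = outcome), data = data))$r.squared
    }
    p <- length(predictors)
    importance <- setNames(numeric(p), predictors)
    for (x in predictors) {
        others <- setdiff(predictors, x)
        for (k in 0:length(others)) {
            subsets <- if (k == 0) list(character(0)) else combn(others, k, simplify = FALSE)
            w <- factorial(k) * factorial(p - k - 1) / factorial(p)  # Shapley weight
            for (s in subsets)
                importance[x] <- importance[x] + w * (r2(c(s, x)) - r2(s))
        }
    }
    importance
}

shapley.r2(mtcars, "mpg", c("wt", "hp", "qsec"))  # contributions sum to the full-model R-squared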

In the case of Relative Importance Analysis, the original predictor variables are first standardised to have mean zero and variance one, and then transformed to be orthogonal. The orthogonal transformation is based on the singular value decomposition and takes the following form. Assume the standardised predictor variables are represented by a matrix [math]\displaystyle{ X }[/math] with singular value decomposition [math]\displaystyle{ X = U\Sigma V^T }[/math]. The orthogonal representation of [math]\displaystyle{ X }[/math] is defined as [math]\displaystyle{ Z = UV^T }[/math]. The importance of these orthogonal predictors can be measured, in the case of linear regression, by regressing the outcome variable [math]\displaystyle{ Y }[/math] on [math]\displaystyle{ Z }[/math], giving the estimates [math]\displaystyle{ \beta^* = (Z^TZ)^{-1}Z^TY }[/math]. Because [math]\displaystyle{ Z }[/math] is orthogonal and thus uncorrelated, the [math]\displaystyle{ \beta^* }[/math] values represent the relative importance of each of the predictors in the [math]\displaystyle{ Z }[/math] representation. To link the orthogonal [math]\displaystyle{ Z }[/math] variables back to the original [math]\displaystyle{ X }[/math] variables, define [math]\displaystyle{ \Lambda = (Z^TZ)^{-1}Z^TX }[/math]. This is used to determine the relative importance of the original predictors via the quantity [math]\displaystyle{ \varepsilon = \Lambda^2 {\beta^*}^2 }[/math], where the square operator is applied elementwise. The significance of the relative importance values is examined with a t-test by computing the standard error of the [math]\displaystyle{ \varepsilon }[/math] statistic. This is estimated with [math]\displaystyle{ \widehat{\sigma_\varepsilon} }[/math], where the standard error of the [math]\displaystyle{ i^{\rm th} }[/math] element of [math]\displaystyle{ \varepsilon }[/math] is [math]\displaystyle{ \widehat{\sigma_{\varepsilon, i}}= \sqrt{\sum_{j = 1}^p\left( \Lambda_{ij}^4 \sigma_j^4\left(2 + 4\frac{{\beta_j^*}^2}{\sigma_j^2}\right) \right)} }[/math] and [math]\displaystyle{ \sigma_j }[/math] denotes the standard error of the [math]\displaystyle{ j^{\rm th} }[/math] regression coefficient. The above approach assumes a standard multiple regression. In the case of a Generalized Linear Model, the same approach is used, with the [math]\displaystyle{ \beta^* }[/math] coefficients estimated by the appropriate GLM regressed on the orthogonal [math]\displaystyle{ Z }[/math] variables.
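
A minimal sketch of this calculation for the linear case (built-in mtcars data; it follows the formulas above for the point estimates only, omits the standard errors, and is not the production implementation):

# Relative importance (Johnson's relative weights) via the SVD
relative.weights <- function(X, y) {
    X <- scale(X)                        # standardize predictors: mean 0, variance 1
    s <- svd(X)                          # X = U %*% diag(d) %*% t(V)
    Z <- s$u %*% t(s$v)                  # orthogonal representation of X
    beta.star <- solve(crossprod(Z), crossprod(Z, y))   # regress y on Z
    Lambda    <- solve(crossprod(Z), crossprod(Z, X))   # link Z back to X
    raw <- as.vector(t(Lambda^2) %*% beta.star^2)       # epsilon = Lambda^2 beta*^2
    setNames(100 * raw / sum(raw), colnames(X))         # scale to sum to 100
}

relative.weights(as.matrix(mtcars[, c("wt", "hp", "qsec")]), mtcars$mpg)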

References

  1. Greenwell, B. M., McCarthy, A. J., Boehmke, B. C., & Liu, D. (2018). Residuals and Diagnostics for Binary and Ordinal Regression Models: An Introduction to the sure Package. The R Journal, 10(1), 381. https://doi.org/10.32614/rj-2018-004
  2. Davison, A. C. and Snell, E. J. (1991) Residuals and diagnostics. In: Statistical Theory and Modelling. In Honour of Sir David Cox, FRS, eds. Hinkley, D. V., Reid, N. and Snell, E. J., Chapman & Hall.
  3. Bock, T., "What is Shapley Value Regression?" [Blog post]. Accessed from [1]
  4. Johnson J. W. (2000). A Heuristic Method for Estimating the Relative Weight of Predictor Variables in Multiple Regression. Multivariate behavioral research, 35(1), 1–19. https://doi.org/10.1207/S15327906MBR3501_1
  5. Yap, J., "The Difference Between Shapley Regression and Relative Weights" [Blog post]. Accessed from [2]
  6. Yap, J., "When to Use Relative Weights Over Shapley" [Blog post]. Accessed from [3]

Code

var controls = [];

// ALGORITHM
var algorithm = form.comboBox({label: "Algorithm",
                               alternatives: ["CART", "Deep Learning", "Gradient Boosting", "Linear Discriminant Analysis",
                                              "Random Forest", "Regression", "Support Vector Machine"],
                               name: "formAlgorithm", default_value: "Regression",
                               prompt: "Machine learning or regression algorithm for fitting the model"});

controls.push(algorithm);
algorithm = algorithm.getValue();

var regressionType = "";
if (algorithm == "Regression")
{
    var regressionTypeControl = form.comboBox({label: "Regression type", 
                                           alternatives: ["Linear", "Binary Logit", "Ordered Logit", "Multinomial Logit", "Poisson",
                                                          "Quasi-Poisson", "NBD"], 
                                           name: "formRegressionType", default_value: "Linear",
                                           prompt: "Select type according to outcome variable type"});
    regressionType = regressionTypeControl.getValue();
    controls.push(regressionTypeControl);
}

// DEFAULT CONTROLS
missing_data_options = ["Error if missing data", "Exclude cases with missing data", "Imputation (replace missing values with estimates)"];

// AMEND DEFAULT CONTROLS PER ALGORITHM
if (algorithm == "Support Vector Machine")
    output_options = ["Accuracy", "Prediction-Accuracy Table", "Detail"];
if (algorithm == "Gradient Boosting") 
    output_options = ["Accuracy", "Importance", "Prediction-Accuracy Table", "Detail"];
if (algorithm == "Random Forest")
    output_options = ["Importance", "Prediction-Accuracy Table", "Detail"];
if (algorithm == "Deep Learning")
    output_options = ["Accuracy", "Prediction-Accuracy Table", "Cross Validation", "Network Layers"];
if (algorithm == "Linear Discriminant Analysis")
    output_options = ["Means", "Detail", "Prediction-Accuracy Table", "Scatterplot", "Moonplot"];

if (algorithm == "CART") {
    output_options = ["Sankey", "Tree", "Text", "Prediction-Accuracy Table", "Cross Validation"];
    missing_data_options = ["Error if missing data", "Exclude cases with missing data",
                             "Use partial data", "Imputation (replace missing values with estimates)"]
}
if (algorithm == "Regression") {
    if (regressionType == "Multinomial Logit")
        output_options = ["Summary", "Detail", "ANOVA"];
    else if (regressionType == "Linear")
        output_options = ["Summary", "Detail", "ANOVA", "Relative Importance Analysis", "Shapley Regression", "Jaccard Coefficient", "Correlation", "Effects Plot"];
    else
        output_options = ["Summary", "Detail", "ANOVA", "Relative Importance Analysis", "Effects Plot"];
}

// COMMON CONTROLS FOR ALL ALGORITHMS
var outputControl = form.comboBox({label: "Output", prompt: "The type of output used to show the results",
                                   alternatives: output_options, name: "formOutput",
                                   default_value: algorithm === "Regression" ? "Relative Importance Analysis": output_options[0]});
controls.push(outputControl);
var output = outputControl.getValue();

if (algorithm == "Regression") {
    if (regressionType == "Linear") {
        if (output == "Jaccard Coefficient" || output == "Correlation")
            missing_data_options = ["Error if missing data", "Exclude cases with missing data", "Use partial data (pairwise correlations)"];
        else
            missing_data_options = ["Error if missing data", "Exclude cases with missing data", "Dummy variable adjustment", "Use partial data (pairwise correlations)", "Multiple imputation"];
    }        
    else
        missing_data_options = ["Error if missing data", "Exclude cases with missing data", "Dummy variable adjustment", "Multiple imputation"];
}

var missingControl = form.comboBox({label: "Missing data", 
                                    alternatives: missing_data_options, name: "formMissing", default_value: "Exclude cases with missing data",
                                    prompt: "Options for handling cases with missing data"});
var missing = missingControl.getValue();
controls.push(missingControl);
controls.push(form.checkBox({label: "Variable names", name: "formNames", default_value: false, prompt: "Display names instead of labels"}));


// CONTROLS FOR SPECIFIC ALGORITHMS

if (algorithm == "Support Vector Machine")
    controls.push(form.textBox({label: "Cost", name: "formCost", default_value: 1, type: "number",
                                prompt: "High cost produces a complex model with risk of overfitting, low cost produces a simpler mode with risk of underfitting"}));

if (algorithm == "Gradient Boosting") {
    controls.push(form.comboBox({label: "Booster", 
                                 alternatives: ["gbtree", "gblinear"], name: "formBooster", default_value: "gbtree",
                                 prompt: "Boost tree or linear underlying models"}));
    controls.push(form.checkBox({label: "Grid search", name: "formSearch", default_value: false,
                                 prompt: "Search for optimal hyperparameters"}));
}

if (algorithm == "Random Forest")
    if (output == "Importance")
        controls.push(form.checkBox({label: "Sort by importance", name: "formImportance", default_value: true}));

if (algorithm == "Deep Learning") {
    controls.push(form.numericUpDown({name:"formEpochs", label:"Maximum epochs", default_value: 10, minimum: 1, maximum: Number.MAX_SAFE_INTEGER,
                                      prompt: "Number of rounds of training"}));
    controls.push(form.textBox({name: "formHiddenLayers", label: "Hidden layers", prompt: "Comma delimited list of the number of nodes in each hidden layer", required: true}));
    controls.push(form.checkBox({label: "Normalize predictors", name: "formNormalize", default_value: true,
                                 prompt: "Normalize to zero mean and unit variance"}));
}

if (algorithm == "Linear Discriminant Analysis") {
    if (output == "Scatterplot")
    {
        controls.push(form.colorPicker({label: "Outcome color", name: "formOutColor", default_value:"#5B9BD5"}));
        controls.push(form.colorPicker({label: "Predictors color", name: "formPredColor", default_value:"#ED7D31"}));
    }
    controls.push(form.comboBox({label: "Prior", alternatives: ["Equal", "Observed",], name: "formPrior", default_value: "Observed",
                                 prompt: "Probabilities of group membership"}));
}

if (algorithm == "CART") {
    controls.push(form.comboBox({label: "Pruning", alternatives: ["Minimum error", "Smallest tree", "None"], 
                                 name: "formPruning", default_value: "Minimum error",
                                 prompt: "Remove nodes after tree has been built"}));
    controls.push(form.checkBox({label: "Early stopping", name: "formStopping", default_value: false,
                                 prompt: "Stop building tree when fit does not improve"}));
    controls.push(form.comboBox({label: "Predictor category labels", alternatives: ["Full labels", "Abbreviated labels", "Letters"],
                                 name: "formPredictorCategoryLabels", default_value: "Abbreviated labels",
                                 prompt: "Labelling of predictor categories in the tree"}));
    controls.push(form.comboBox({label: "Outcome category labels", alternatives: ["Full labels", "Abbreviated labels", "Letters"],
                                 name: "formOutcomeCategoryLabels", default_value: "Full labels",
                                 prompt: "Labelling of outcome categories in the tree"}));
    controls.push(form.checkBox({label: "Allow long-running calculations", name: "formLongRunningCalculations", default_value: false,
                                 prompt: "Allow predictors with more than 30 categories"}));
}

var stacked_check = false;
if (algorithm == "Regression") {
    if (missing == "Multiple imputation")
        controls.push(form.dropBox({label: "Auxiliary variables",
                                    types:["Variable: Numeric, Date, Money, Categorical, OrderedCategorical"], 
                                    name: "formAuxiliaryVariables", required: false, multi:true,
                                    prompt: "Additional variables to use when imputing missing values"}));
    controls.push(form.comboBox({label: "Correction", alternatives: ["None", "False Discovery Rate", "Bonferroni"], name: "formCorrection",
                                 default_value: "None", prompt: "Multiple comparisons correction applied when computing p-values of post-hoc comparisons"}));
    var is_RIA_or_shapley = output == "Relative Importance Analysis" || output == "Shapley Regression";
    var is_Jaccard_or_Correlation = output == "Jaccard Coefficient" || output == "Correlation";
    if (regressionType == "Linear" && missing != "Use partial data (pairwise correlations)" && missing != "Multiple imputation")
        controls.push(form.checkBox({label: "Robust standard errors", name: "formRobustSE", default_value: false,
                                     prompt: "Standard errors are robust to violations of assumption of constant variance"}));
    if (is_RIA_or_shapley)
        controls.push(form.checkBox({label: "Absolute importance scores", name: "formAbsoluteImportance", default_value: false,
                                     prompt: "Show absolute instead of signed importances"}));
    if (regressionType != "Multinomial Logit" && (is_RIA_or_shapley || is_Jaccard_or_Correlation || output == "Summary"))
        controls.push(form.dropBox({label: "Crosstab interaction", name: "formInteraction", types:["Variable: Numeric, Date, Money, Categorical, OrderedCategorical"],
                                    required: false, prompt: "Categorical variable to test for interaction with other variables"}));
    if (regressionType !== "Multinomial Logit")
        controls.push(form.numericUpDown({name : "formOutlierProportion", label:"Automated outlier removal percentage", default_value: 0, 
                                          minimum:0, maximum:49.9, increment:0.1,
                                          prompt: "Data points removed and model refitted based on the residual values in the model using the full dataset"}));
    stacked_check_box = form.checkBox({label: "Stack data", name: "formStackedData", default_value: false,
                                       prompt: "Allow input into the Outcome control to be a single multi variable and Predictors to be a single grid variable"})
    stacked_check = stacked_check_box.getValue();
    controls.push(stacked_check_box);
}

controls.push(form.numericUpDown({name:"formSeed", label:"Random seed", default_value: 12321, minimum: 1, maximum: Number.MAX_SAFE_INTEGER,
                                  prompt: "Initializes randomization for imputation and certain algorithms"}));

let allowLargeOutputsCtrl = form.checkBox({label: "Increase allowed output size",
					   name: "formAllowLargeOutputs", default_value: false,
					   prompt: "Increase the limit on the maximum size allowed for the output to fix warnings about it being too large"});
controls.push(allowLargeOutputsCtrl);
if (allowLargeOutputsCtrl.getValue())
    controls.push(form.numericUpDown({name:"formMaxOutputSize", label:"Maximum allowed size for output (MB)", default_value: 128, minimum: 1, maximum: Number.MAX_SAFE_INTEGER,
                                  prompt: "The maximum allowed size for the returned output in MB. Very large outputs may impact document performance"}));

var outcome = form.dropBox({label: "Outcome", 
                            types: [ stacked_check ? "VariableSet: BinaryMulti, NominalMulti, OrdinalMulti, NumericMulti" : "Variable: Numeric, Date, Money, Categorical, OrderedCategorical"], 
                            multi: false,
                            name: "formOutcomeVariable",
                            prompt: "Independent target variable to be predicted"});
var predictors = form.dropBox({label: "Predictor(s)",
                               types:[ stacked_check ? "VariableSet: BinaryGrid, NumericGrid" : "Variable: Numeric, Date, Money, Categorical, OrderedCategorical"], 
                               name: "formPredictorVariables", multi: stacked_check ? false : true,
                               prompt: "Dependent input variables"});
controls.unshift(predictors);
controls.unshift(outcome);

form.setInputControls(controls);
if (regressionType == "") {
    form.setHeading(algorithm);
    if (form.setObjectInspectorTitle)
	form.setObjectInspectorTitle(algorithm, algorithm + " outputs");
} else {
    form.setHeading(regressionType + " " + algorithm);
    if (form.setObjectInspectorTitle)
	form.setObjectInspectorTitle(algorithm);
}
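# R CODE (the JavaScript above builds the input controls; the R code below estimates the model)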
library(flipMultivariates)
if (get0("formAllowLargeOutputs", ifnotfound = FALSE))
    QAllowLargeResultObject(1e6*get0("formMaxOutputSize"))

WarnIfVariablesSelectedFromMultipleDataSets()

model <- MachineLearning(formula = if (isTRUE(get0("formStackedData"))) as.formula(NULL) else QFormula(formOutcomeVariable ~ formPredictorVariables),
                         algorithm = formAlgorithm,
                         weights = QPopulationWeight, subset = QFilter,
                         missing = formMissing,
                         output = if (formOutput == "Shapley Regression") "Shapley regression" else formOutput,
                         show.labels = !formNames,
                         seed = get0("formSeed"),
                         cost = get0("formCost"),
                         booster = get0("formBooster"),
                         grid.search = get0("formSearch"),
                         sort.by.importance = get0("formImportance"),
                         hidden.nodes = get0("formHiddenLayers"),
                         max.epochs = get0("formEpochs"),
                         normalize = get0("formNormalize"),
                         outcome.color = get0("formOutColor"),
                         predictors.color = get0("formPredColor"),
                         prior = get0("formPrior"),
                         prune = get0("formPruning"),
                         early.stopping = get0("formStopping"),
                         predictor.level.treatment = get0("formPredictorCategoryLabels"),
                         outcome.level.treatment = get0("formOutcomeCategoryLabels"),
                         long.running.calculations = get0("formLongRunningCalculations"),
                         type = get0("formRegressionType"),
                         auxiliary.data = get0("formAuxiliaryVariables"),
                         correction = get0("formCorrection"),
                         robust.se = get0("formRobustSE", ifnotfound = FALSE),
                         importance.absolute = get0("formAbsoluteImportance"),
                         interaction = get0("formInteraction"),
                         outlier.prop.to.remove = if (get0("formRegressionType", ifnotfound = "") != "Multinomial Logit") get0("formOutlierProportion")/100 else NULL,
                         stacked.data.check = get0("formStackedData"),
                         unstacked.data = if (isTRUE(get0("formStackedData"))) list(Y = get0("formOutcomeVariable"), X = get0("formPredictorVariables")) else NULL,
                         use.combined.scatter = TRUE)