# Regression - Linear Regression

*Linear Regression* models the linear relationship between a dependent variable and one or more independent variables. Linear regression is most commonly used when the dependent variable is continuous.

## Interpretation

Variable statistics measure the impact and significance of individual variables within a model, while overall statistics apply to the model as a whole. Both are shown in the regression output.

### Variable statistics

**Estimate** The magnitude of the coefficient indicates the size of the change in the dependent variable for a one-unit change in the independent variable. A positive number indicates a direct relationship (y increases as x increases), and a negative number indicates an inverse relationship (y decreases as x increases).

The coefficient is colored if the variable is statistically significant at the 5% level.

**Standard Error** measures the precision of an estimate. The smaller the standard error, the more precise the estimate.

**t-statistic** The estimate divided by the standard error. A larger magnitude (either positive or negative) indicates a more significant variable. The values are highlighted based on their magnitude.

**p-value** expresses the t-statistic as a probability. A p-value under 0.05 means that the variable is statistically significant at the 5% level; a p-value under 0.01 means that the variable is statistically significant at the 1% level. P-values under 0.05 are shown in bold.
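These variable statistics are connected by simple formulas: the t-statistic is the estimate divided by its standard error, and the p-value converts the t-statistic into a probability using the t distribution. The following is a minimal sketch of those computations in Python with numpy and scipy on hypothetical synthetic data (not Displayr's internal code):

```python
import numpy as np
from scipy import stats

# Hypothetical data: true intercept 2.0, true slope 1.5
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), x])          # design matrix with intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # coefficient estimates
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof                  # residual variance
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))  # standard errors
t_stat = beta / se                            # estimate / standard error
p_val = 2 * stats.t.sf(np.abs(t_stat), dof)   # two-sided p-value
```

With data this strongly related, the slope's p-value falls well below 0.05, so the variable would be flagged as significant in the output.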

### Overall statistics

**n** The sample size of the model.

**R-squared** Assesses the goodness of fit of the model. A larger number indicates that the model captures more of the variation in the dependent variable.

**AIC** Akaike Information Criterion is a measure of the quality of the model. When comparing similar models, the AIC can be used to identify the superior model.
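Both overall statistics can be computed from the residual sum of squares of the fitted model. The sketch below uses Python with numpy on synthetic data; the AIC formula follows the Gaussian log-likelihood convention (as used by R's `AIC`), which counts the error variance as an extra parameter. This is illustrative only, not Displayr's internal code:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
rss = resid @ resid                      # residual sum of squares
tss = ((y - y.mean()) ** 2).sum()        # total sum of squares

r_squared = 1 - rss / tss
k = X.shape[1] + 1                       # coefficients plus the error variance
aic = n * (np.log(2 * np.pi) + np.log(rss / n) + 1) + 2 * k
```

A lower AIC indicates a better trade-off between fit and complexity when comparing similar models on the same data.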

## Example

The example below is a model that predicts a survey respondent’s average monthly spending on fast-food products based on characteristics like age, gender, and work status.

### Create a Linear Regression Model in Displayr

1. Go to **Insert > Regression > Linear Regression**
2. Under **Inputs > Outcome**, select your dependent variable
3. Under **Inputs > Predictor(s)**, select your independent variables

## Object Inspector Options

**Outcome** The variable to be predicted by the *predictor variables*.

**Predictors** The variable(s) used to predict the *outcome*.

**Algorithm** The fitting algorithm. Defaults to *Regression* but may be changed to other machine learning methods.

**Type** Use this option to toggle between different types of regression models. Note that each type suits a particular kind of outcome variable; only *Linear* is appropriate for a continuous outcome variable.

- **Linear** The default.
- **Binary Logit** See Regression - Binary Logit.
- **Ordered Logit** See Regression - Ordered Logit.
- **Multinomial Logit** See Regression - Multinomial Logit.
- **Poisson** See Regression - Poisson Regression.
- **Quasi-Poisson** See Regression - Quasi-Poisson Regression.
- **NBD** See Regression - NBD Regression.

**Robust standard errors** Computes standard errors that are robust to violations of the assumption of constant variance (i.e., heteroscedasticity). See Robust Standard Errors. This is only available when **Type** is **Linear**.
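Robust standard errors replace the constant-variance formula s²(X'X)⁻¹ with a "sandwich" estimate built from the individual squared residuals. A rough numpy sketch of the HC0 variant, the simplest sandwich estimator, is shown below on hypothetical heteroscedastic data (Displayr's exact variant may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
# Heteroscedastic errors: variance grows with |x|
y = 1.0 + 2.0 * x + rng.normal(size=n) * (0.5 + np.abs(x))

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta

XtX_inv = np.linalg.inv(X.T @ X)
# Classical (constant-variance) standard errors
se_classic = np.sqrt(np.diag(XtX_inv * (e @ e / (n - 2))))
# HC0 sandwich: (X'X)^-1 X' diag(e^2) X (X'X)^-1
meat = X.T @ (X * (e ** 2)[:, None])
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
```

When the constant-variance assumption holds, the two sets of standard errors are similar; under heteroscedasticity like this, they diverge and the robust version is the more trustworthy.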

**Missing data** See Missing Data Options.

**Output**

- **Summary** The default; as shown in the example above.
- **Detail** Typical R output; some additional information compared to **Summary**, but without the pretty formatting.
- **ANOVA** Analysis of variance table containing the results of Chi-squared likelihood ratio tests for each predictor.
- **Relative Importance Analysis** The results of a relative importance analysis. See here and the references for more information. This option is not available for Multinomial Logit. Note that categorical predictors are not converted to be numeric, unlike in Driver (Importance) Analysis - Relative Importance Analysis.
- **Shapley Regression** See here and the references for more information. This option is only available for Linear Regression. Note that categorical predictors are not converted to be numeric, unlike in Driver (Importance) Analysis - Shapley.
- **Effects Plot** Plots the relationship between each of the *Predictors* and the *Outcome*. Not available for Multinomial Logit.

**Correction** The multiple comparisons correction applied when computing the *p*-values of the *post-hoc* comparisons.
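These corrections adjust raw p-values for the number of comparisons being made. The following is a small self-contained sketch of the Bonferroni and Benjamini-Hochberg (False Discovery Rate) adjustments applied to hypothetical p-values; these are the generic textbook formulas, not Displayr's internal code:

```python
import numpy as np

p = np.array([0.001, 0.012, 0.030, 0.040, 0.200])  # hypothetical raw p-values
m = len(p)

# Bonferroni: multiply each p-value by the number of tests, cap at 1
bonferroni = np.minimum(p * m, 1.0)

# Benjamini-Hochberg (False Discovery Rate) step-up adjustment
order = np.argsort(p)
ranked = p[order] * m / np.arange(1, m + 1)        # p_(i) * m / i
fdr_sorted = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
fdr = np.empty(m)
fdr[order] = np.minimum(fdr_sorted, 1.0)
```

Bonferroni is the more conservative of the two: here it leaves only the smallest p-value significant at the 5% level, while the FDR adjustment retains several.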

**Variable names** Displays Variable Names in the output instead of labels.

**Absolute importance scores** Whether the absolute value of Relative Importance Analysis scores should be displayed.

**Auxiliary variables** Variables to be used when imputing missing values (in addition to all the other variables in the model).

**Weight** Where a weight has been set for the R Output, it will automatically be applied when the model is estimated. By default, the weight is assumed to be a *sampling weight*, and the standard errors are estimated using *Taylor series linearization* (by contrast, in the Legacy Regression, *weight calibration* is used). See Weights, Effective Sample Size and Design Effects.

**Filter** The data is automatically filtered using any filters prior to estimating the model.

**Crosstab Interaction** Optional variable to test for interaction with other variables in the model. The interaction variable is treated as a categorical variable. Coefficients in the table are computed by creating separate regressions for each level of the interaction variable. To evaluate whether a coefficient is significantly higher (blue) or lower (red), we perform a t-test of the coefficient compared to the coefficient estimated using the remaining data, as described in Driver Analysis. P-values are corrected for multiple comparisons across the whole table (excluding the NET column). The p-value in the subtitle is calculated using a likelihood ratio test between the pooled model with no interaction variable and a model where all predictors interact with the interaction variable.

**Automated outlier removal percentage** A numeric value between 0 and 50 (including 0 but not 50) specifying the percentage of the data to remove from the analysis. If zero, no outlier removal is performed and a standard regression is fitted to the entire (possibly filtered) dataset. If non-zero, the regression model is fitted twice. The first fit uses the entire dataset (after filters have been applied) and identifies the observations that generate the largest residuals; the user-specified percentage of cases with the largest residuals is then removed, and the model is refitted on the reduced dataset. The specific residual used in linear regression is the studentized residual in an unweighted regression and the Pearson residual in a weighted regression. The studentized residual computes the distance between the observed and fitted value for each point and standardizes (adjusts) it based on the influence of the point and an externally adjusted variance calculation (see the `rstudent` function in R and Davison and Snell (1991) for more details). The Pearson residual in the weighted case adjusts appropriately for the provided survey weights.
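The fit-remove-refit procedure described above can be sketched outside Displayr. The following Python/numpy example uses hypothetical synthetic data with injected outliers; the externally studentized residual formula follows R's `rstudent`:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
y[:5] += 8.0                                 # inject five gross outliers

X = np.column_stack([np.ones(n), x])
p = X.shape[1]

def fit(Xm, ym):
    return np.linalg.lstsq(Xm, ym, rcond=None)[0]

# First fit on the full dataset
beta1 = fit(X, y)
e = y - X @ beta1
h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(X.T @ X), X)  # leverages
rss = e @ e
# Externally studentized residuals (cf. R's rstudent)
s2_i = (rss - e**2 / (1 - h)) / (n - p - 1)
student = e / np.sqrt(s2_i * (1 - h))

# Remove the specified percentage with the largest |residual|, then refit
pct = 10.0
keep = np.argsort(np.abs(student))[: n - int(n * pct / 100)]
beta2 = fit(X[keep], y[keep])
```

After the outliers are removed, the refitted coefficients move back toward the true values used to generate the clean data.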

**Random seed** Seed used to initialize the (pseudo)random number generator for the model fitting algorithm. Different seeds may lead to slightly different answers, but should normally not make a large difference.

Additional options are available by editing the code.

## Additional Properties

When using this feature you can obtain additional information that is stored by the R code which produces the output.

1. Select **Create > R Output**.
2. In the **R CODE**, paste: `item = YourReferenceName`
3. Replace `YourReferenceName` with the reference name of your item. Find this in the **Report tree**, or by selecting the item and then going to **Properties > General > Name** in the object inspector on the right.
4. Below the first line of code, paste in snippets from below or type `str(item)` to see a list of available information.

For a more in-depth discussion on extracting information from objects in R, check out our blog post here.

**Properties which may be of interest are:**

- Summary outputs from the regression model:

`item$summary$coefficients # summary regression outputs`

## Diagnostics

## More information

## Acknowledgements

See Regression - Generalized Linear Model.

## References

For residual definitions and information: Davison, A. C. and Snell, E. J. (1991) Residuals and diagnostics. In: Statistical Theory and Modelling. In Honour of Sir David Cox, FRS, eds. Hinkley, D. V., Reid, N. and Snell, E. J., Chapman & Hall.

For relative importance analysis: Johnson, J. W. (2000). "A heuristic method for estimating the relative weight of predictor variables in multiple regression". Multivariate behavioral research, 35(1), 1-19.

For Shapley:

Bock, T., "What is Shapley Value Regression?" [Blog post]. Accessed from [1]

Yap, J., "When to Use Relative Weights Over Shapley" [Blog post]. Accessed from [2]

Yap, J., "The Difference Between Shapley Regression and Relative Weights" [Blog post]. Accessed from [3]

## Code

```
form.dropBox({label: "Outcome",
types:["Variable: Numeric, Date, Money, Categorical, OrderedCategorical"],
name: "formOutcomeVariable",
prompt: "Dependent target variable to be predicted"});
form.dropBox({label: "Predictor(s)",
types:["Variable: Numeric, Date, Money, Categorical, OrderedCategorical"],
name: "formPredictorVariables", multi:true,
prompt: "Independent input variables"});
// ALGORITHM
var algorithm = form.comboBox({label: "Algorithm",
alternatives: ["CART", "Deep Learning", "Gradient Boosting", "Linear Discriminant Analysis",
"Random Forest", "Regression", "Support Vector Machine"],
name: "formAlgorithm", default_value: "Regression",
prompt: "Machine learning or regression algorithm for fitting the model"}).getValue();
var regressionType = "";
if (algorithm == "Regression")
regressionType = form.comboBox({label: "Regression type",
alternatives: ["Linear", "Binary Logit", "Ordered Logit", "Multinomial Logit", "Poisson",
"Quasi-Poisson", "NBD"],
name: "formRegressionType", default_value: "Linear",
prompt: "Select type according to outcome variable type"}).getValue();
form.setHeading((regressionType == "" ? "" : (regressionType + " ")) + algorithm);
// DEFAULT CONTROLS
missing_data_options = ["Error if missing data", "Exclude cases with missing data", "Imputation (replace missing values with estimates)"];
// AMEND DEFAULT CONTROLS PER ALGORITHM
if (algorithm == "Support Vector Machine")
output_options = ["Accuracy", "Prediction-Accuracy Table", "Detail"];
if (algorithm == "Gradient Boosting")
output_options = ["Accuracy", "Importance", "Prediction-Accuracy Table", "Detail"];
if (algorithm == "Random Forest")
output_options = ["Importance", "Prediction-Accuracy Table", "Detail"];
if (algorithm == "Deep Learning")
output_options = ["Accuracy", "Prediction-Accuracy Table", "Cross Validation", "Network Layers"];
if (algorithm == "Linear Discriminant Analysis")
output_options = ["Means", "Detail", "Prediction-Accuracy Table", "Scatterplot", "Moonplot"];
if (algorithm == "CART") {
output_options = ["Sankey", "Tree", "Text", "Prediction-Accuracy Table", "Cross Validation"];
missing_data_options = ["Error if missing data", "Exclude cases with missing data",
"Use partial data", "Imputation (replace missing values with estimates)"]
}
if (algorithm == "Regression") {
if (regressionType == "Multinomial Logit")
output_options = ["Summary", "Detail", "ANOVA"];
else if (regressionType == "Linear")
output_options = ["Summary", "Detail", "ANOVA", "Relative Importance Analysis", "Shapley Regression", "Effects Plot"];
else
output_options = ["Summary", "Detail", "ANOVA", "Relative Importance Analysis", "Effects Plot"];
if (regressionType == "Linear")
missing_data_options = ["Error if missing data", "Exclude cases with missing data", "Use partial data (pairwise correlations)", "Multiple imputation"];
else
missing_data_options = ["Error if missing data", "Exclude cases with missing data", "Multiple imputation"];
}
// COMMON CONTROLS FOR ALL ALGORITHMS
var output = form.comboBox({label: "Output", prompt: "The type of output used to show the results",
alternatives: output_options, name: "formOutput", default_value: output_options[0]}).getValue();
var missing = form.comboBox({label: "Missing data",
alternatives: missing_data_options, name: "formMissing", default_value: "Exclude cases with missing data",
prompt: "Options for handling cases with missing data"}).getValue();
form.checkBox({label: "Variable names", name: "formNames", default_value: false, prompt: "Display names instead of labels"});
// CONTROLS FOR SPECIFIC ALGORITHMS
if (algorithm == "Support Vector Machine")
form.textBox({label: "Cost", name: "formCost", default_value: 1, type: "number",
prompt: "High cost produces a complex model with risk of overfitting, low cost produces a simpler model with risk of underfitting"});
if (algorithm == "Gradient Boosting") {
form.comboBox({label: "Booster",
alternatives: ["gbtree", "gblinear"], name: "formBooster", default_value: "gbtree",
prompt: "Boost tree or linear underlying models"})
form.checkBox({label: "Grid search", name: "formSearch", default_value: false,
prompt: "Search for optimal hyperparameters"});
}
if (algorithm == "Random Forest")
if (output == "Importance")
form.checkBox({label: "Sort by importance", name: "formImportance", default_value: true});
if (algorithm == "Deep Learning") {
form.numericUpDown({name:"formEpochs", label:"Maximum epochs", default_value: 10, minimum: 1, maximum: 1000000,
prompt: "Number of rounds of training"});
form.textBox({name: "formHiddenLayers", label: "Hidden layers", prompt: "Comma delimited list of the number of nodes in each hidden layer", required: true});
form.checkBox({label: "Normalize predictors", name: "formNormalize", default_value: true,
prompt: "Normalize to zero mean and unit variance"});
}
if (algorithm == "Linear Discriminant Analysis") {
if (output == "Scatterplot")
{
form.colorPicker({label: "Outcome color", name: "formOutColor", default_value:"#5B9BD5"});
form.colorPicker({label: "Predictors color", name: "formPredColor", default_value:"#ED7D31"});
}
form.comboBox({label: "Prior", alternatives: ["Equal", "Observed"], name: "formPrior", default_value: "Observed",
prompt: "Probabilities of group membership"})
}
if (algorithm == "CART") {
form.comboBox({label: "Pruning", alternatives: ["Minimum error", "Smallest tree", "None"],
name: "formPruning", default_value: "Minimum error",
prompt: "Remove nodes after tree has been built"})
form.checkBox({label: "Early stopping", name: "formStopping", default_value: false,
prompt: "Stop building tree when fit does not improve"});
form.comboBox({label: "Predictor category labels", alternatives: ["Full labels", "Abbreviated labels", "Letters"],
name: "formPredictorCategoryLabels", default_value: "Abbreviated labels",
prompt: "Labelling of predictor categories in the tree"})
form.comboBox({label: "Outcome category labels", alternatives: ["Full labels", "Abbreviated labels", "Letters"],
name: "formOutcomeCategoryLabels", default_value: "Full labels",
prompt: "Labelling of outcome categories in the tree"})
form.checkBox({label: "Allow long-running calculations", name: "formLongRunningCalculations", default_value: false,
prompt: "Allow predictors with more than 30 categories"});
}
if (algorithm == "Regression") {
if (missing == "Multiple imputation")
form.dropBox({label: "Auxiliary variables",
types:["Variable: Numeric, Date, Money, Categorical, OrderedCategorical"],
name: "formAuxiliaryVariables", required: false, multi:true,
prompt: "Additional variables to use when imputing missing values"});
form.comboBox({label: "Correction", alternatives: ["None", "False Discovery Rate", "Bonferroni"], name: "formCorrection",
default_value: "None", prompt: "Multiple comparisons correction applied when computing p-values of post-hoc comparisons"});
var is_RIA_or_shapley = output == "Relative Importance Analysis" || output == "Shapley Regression";
if (regressionType == "Linear" && missing != "Use partial data (pairwise correlations)" && missing != "Multiple imputation")
form.checkBox({label: "Robust standard errors", name: "formRobustSE", default_value: false,
prompt: "Standard errors are robust to violations of assumption of constant variance"});
if (is_RIA_or_shapley)
form.checkBox({label: "Absolute importance scores", name: "formAbsoluteImportance", default_value: false,
prompt: "Show absolute instead of signed importances"});
if (regressionType != "Multinomial Logit" && (is_RIA_or_shapley || output == "Summary"))
form.dropBox({label: "Crosstab interaction", name: "formInteraction", types:["Variable: Numeric, Date, Money, Categorical, OrderedCategorical"],
required: false, prompt: "Categorical variable to test for interaction with other variables"});
if (regressionType !== "Multinomial Logit")
form.numericUpDown({name : "formOutlierProportion", label:"Automated outlier removal percentage", default_value: 0,
minimum:0, maximum:49.9, increment:0.1,
prompt: "Data points removed and model refitted based on the residual values in the model using the full dataset"})
}
form.numericUpDown({name:"formSeed", label:"Random seed", default_value: 12321, minimum: 1, maximum: 1000000,
prompt: "Initializes randomization for imputation and certain algorithms"});
```

```
library(flipMultivariates)
model <- MachineLearning(formula = QFormula(formOutcomeVariable ~ formPredictorVariables),
algorithm = formAlgorithm,
weights = QPopulationWeight, subset = QFilter,
missing = formMissing,
output = if (formOutput == "Shapley Regression") "Shapley regression" else formOutput,
show.labels = !formNames,
seed = get0("formSeed"),
cost = get0("formCost"),
booster = get0("formBooster"),
grid.search = get0("formSearch"),
sort.by.importance = get0("formImportance"),
hidden.nodes = get0("formHiddenLayers"),
max.epochs = get0("formEpochs"),
normalize = get0("formNormalize"),
outcome.color = get0("formOutColor"),
predictors.color = get0("formPredColor"),
prior = get0("formPrior"),
prune = get0("formPruning"),
early.stopping = get0("formStopping"),
predictor.level.treatment = get0("formPredictorCategoryLabels"),
outcome.level.treatment = get0("formOutcomeCategoryLabels"),
long.running.calculations = get0("formLongRunningCalculations"),
type = get0("formRegressionType"),
auxiliary.data = get0("formAuxiliaryVariables"),
correction = get0("formCorrection"),
robust.se = get0("formRobustSE", ifnotfound = FALSE),
importance.absolute = get0("formAbsoluteImportance"),
interaction = get0("formInteraction"),
outlier.prop.to.remove = if (formRegressionType != "Multinomial Logit") get0("formOutlierProportion")/100 else NULL)
```