# Regression - Generalized Linear Model

The Generalized Linear Model feature models the relationship between a dependent variable and one or more independent variables. There are seven types of regression analysis to choose from. The linear regression model is the default.

## Regression Types

Linear

The Linear Regression models the linear relationship between a dependent variable and one or more independent variables. The linear regression option is most commonly used when the dependent variable is continuous. See Regression - Linear Regression.

Binary Logit

The Binary Logit is a form of regression analysis that models a binary dependent variable (e.g. yes/no, pass/fail, win/lose). It is also known as logistic regression or binomial regression. See Regression - Binary Logit.

Ordered Logit

The Ordered Logit is a form of regression analysis that models a discrete and ordinal dependent variable with more than two outcomes (e.g. Net Promoter Score, customer satisfaction rating). It is also known as ordinal logistic regression or a cumulative link model. See Regression - Ordered Logit.

Multinomial Logit

The Multinomial Logit is a form of regression analysis that models a discrete and nominal dependent variable with more than two outcomes (Yes/No/Maybe, Red/Green/Blue, Brand A/Brand B/Brand C, etc.). It is also known as a multinomial logistic regression and multinomial logistic discriminant analysis. See Regression - Multinomial Logit.

Poisson

The Poisson Regression is used to model count data with the assumption that the dependent variable has a Poisson distribution, where the mean is equal to the variance. If there is a high level of variance (overdispersion), the Quasi-Poisson or NBD may be a better option. See Regression - Poisson Regression.
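The mean-equals-variance assumption can be checked before settling on Poisson. A minimal sketch in R, using simulated overdispersed counts (not Displayr-specific; the variable names are illustrative): a Pearson dispersion statistic well above 1 suggests the Quasi-Poisson or NBD option instead.

```r
# Simulate overdispersed count data (negative binomial, so variance > mean).
set.seed(123)
x <- rnorm(200)
y <- rnbinom(200, mu = exp(1 + 0.5 * x), size = 1)

# Fit a Poisson GLM and compute the Pearson dispersion statistic.
fit <- glm(y ~ x, family = poisson)
dispersion <- sum(residuals(fit, type = "pearson")^2) / fit$df.residual

# A value near 1 is consistent with the Poisson assumption;
# a much larger value indicates overdispersion.
dispersion
```
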

Quasi-Poisson

The Quasi-Poisson Regression is a generalization of the Poisson regression and is used when modeling an overdispersed count variable. The Quasi-Poisson model assumes that the variance is a linear function of the mean. See Regression - Quasi-Poisson Regression.

NBD

The Negative Binomial Distribution (NBD) Regression is a generalization of the Poisson regression and is used when modeling an overdispersed count variable. The NBD model assumes that the variance is a quadratic function of the mean. See Regression - NBD Regression.
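The difference between the two overdispersion models can be sketched by fitting both to the same simulated counts. This uses `glm.nb` from the MASS package (cited in the acknowledgements below); the data and variable names are illustrative, not from Displayr.

```r
library(MASS)  # provides glm.nb for negative binomial regression

# Simulated overdispersed counts.
set.seed(123)
x <- rnorm(200)
y <- rnbinom(200, mu = exp(1 + 0.5 * x), size = 1)

# Quasi-Poisson: Var(y) = phi * mu, i.e. variance is a linear function of the mean.
qp <- glm(y ~ x, family = quasipoisson)

# NBD: Var(y) = mu + mu^2 / theta, i.e. variance is a quadratic function of the mean.
nb <- glm.nb(y ~ x)
```

The coefficient estimates are typically similar; the models differ mainly in how standard errors grow with the mean.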

## Create a Generalized Linear Model in Displayr

1. Go to Insert > Regression > Generalized Linear Model
2. Under Inputs > Outcome, select your dependent variable
3. Under Inputs > Predictor(s), select your independent variables
4. Under Inputs > Regression, select the model you want to use

## Object Inspector Options

Outcome The variable to be predicted by the predictor variables.

Predictors The variable(s) to predict the outcome.

Type:

Linear See Regression - Linear Regression.
Binary Logit See Regression - Binary Logit.
Ordered Logit See Regression - Ordered Logit.
Multinomial Logit See Regression - Multinomial Logit.
Poisson See Regression - Poisson Regression.
Quasi-Poisson See Regression - Quasi-Poisson Regression.
NBD See Regression - NBD Regression.

Robust standard errors Computes standard errors that are robust to violations of the assumption of constant variance (i.e., heteroscedasticity). See Robust Standard Errors. This is only available when Type is Linear.
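Displayr computes these internally; the underlying idea can be sketched in R with the sandwich and lmtest packages (an assumption for illustration — the doc does not specify which packages Displayr uses), here on the built-in mtcars data.

```r
library(sandwich)  # heteroscedasticity-consistent covariance estimators
library(lmtest)    # coeftest for re-testing coefficients with a given vcov

# Ordinary linear model.
fit <- lm(mpg ~ wt + hp, data = mtcars)

# Coefficient tests using HC1 robust standard errors instead of the
# usual constant-variance standard errors.
coeftest(fit, vcov = vcovHC(fit, type = "HC1"))
```

The point estimates are unchanged; only the standard errors (and hence p-values) differ from the classical output.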

Missing data See Missing Data Options.

Output:

Summary The default output.
Detail Typical R output; includes some additional information compared to Summary, but without the formatting.
ANOVA Analysis of variance table containing the results of Chi-squared likelihood-ratio tests for each predictor.
Relative Importance Analysis The results of a relative importance analysis. See the references for more information. This option is not available for Multinomial Logit. Note that categorical predictors are not converted to numeric, unlike in Driver (Importance) Analysis - Relative Importance Analysis.
Effects Plot Plots the relationship between each of the Predictors and the Outcome. Not available for Multinomial Logit.

Correction The multiple comparisons correction applied when computing the p-values of the post-hoc comparisons.

Variable names Displays Variable Names in the output instead of labels.

Absolute importance scores Whether the absolute value of Relative Importance Analysis scores should be displayed.

Auxiliary variables Variables to be used when imputing missing values (in addition to all the other variables in the model).

Weight Where a weight has been set for the R Output, it will automatically be applied when the model is estimated. By default, the weight is assumed to be a sampling weight, and the standard errors are estimated using Taylor series linearization (by contrast, the Legacy Regression uses weight calibration). See Weights, Effective Sample Size and Design Effects.

Filter The data is automatically filtered using any filters applied prior to estimating the model.

Crosstab Interaction Optional variable to test for interaction with other variables in the model. See Linear Regression for more details.

Automated outlier removal percentage A numeric value between 0 and 50 (including 0 but not 50) specifying the percentage of the data that is removed from the analysis. If zero is selected, no outlier removal is performed and a standard regression output for the entire (possibly filtered) dataset is returned. If a non-zero value is selected, the regression model is fitted twice. The first model uses the entire dataset (after filters have been applied) and identifies the observations that generate the largest residuals. The user-specified percentage of cases with the largest residuals is then removed, and the regression model is refitted on the reduced dataset and its output returned. The specific residual used depends on the type of GLM: typically a studentized deviance residual in an unweighted GLM and a Pearson residual in a weighted GLM, although surrogate residuals are sometimes used. See the Wiki pages for each model type above for more details.
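The two-pass procedure can be sketched in R. This is an illustrative approximation using studentized residuals on built-in data, not Displayr's exact implementation; the 10% threshold stands in for the user-specified percentage.

```r
# Pass 1: fit on the full dataset and compute studentized residuals.
fit1 <- glm(mpg ~ wt + hp, data = mtcars, family = gaussian)
res <- rstudent(fit1)

# Remove the user-specified percentage of cases with the largest
# absolute residuals (here 10%).
pct <- 0.10
n_drop <- ceiling(pct * nrow(mtcars))
keep <- rank(-abs(res)) > n_drop

# Pass 2: refit on the reduced dataset.
fit2 <- glm(mpg ~ wt + hp, data = mtcars[keep, ], family = gaussian)
```
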

Random seed Seed used to initialize the (pseudo)random number generator for the model fitting algorithm. Different seeds may lead to slightly different answers, but should normally not make a large difference.

Additional options are available by editing the code.

### DIAGNOSTICS

Cook's distance plot Creates a line/rug plot showing Cook's Distance for each observation.

Cook's distance vs leverage plot Creates a scatterplot showing Cook's distance vs leverage for each observation.

Influence index plot Creates index plots of studentized residuals, hat values, and Cook's distance.

Multicollinearity (VIF) table Creates a table containing variance inflation factors (VIF) to diagnose multicollinearity.

Normal Q-Q plot Creates a normal Quantile-Quantile (QQ) plot to reveal departures of the residuals from normality.

Prediction-accuracy table Creates a table showing the observed and predicted values, as a heatmap.

Residual heteroscedasticity test Conducts a heteroscedasticity test on the residuals.

Residual normality (Shapiro-Wilk) test Conducts a Shapiro-Wilk test of normality on the (deviance) residuals.

Residuals vs fitted plot Creates a scatterplot of residuals versus fitted values.

Residuals vs leverage plot Creates a plot of residuals versus leverage values.

Scale-location plot Creates a plot of the square root of the absolute standardized residuals by fitted values.

Serial correlation (Durbin-Watson) test Conducts a Durbin-Watson test of serial correlation (auto-correlation) on the residuals.
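Several of these diagnostics correspond to standard R functions and can be reproduced directly. A sketch using the car and lmtest packages on built-in data; the doc does not specify which heteroscedasticity test Displayr runs, so the Breusch-Pagan test below is an assumption for illustration.

```r
library(car)     # vif, durbinWatsonTest
library(lmtest)  # bptest (Breusch-Pagan heteroscedasticity test)

fit <- lm(mpg ~ wt + hp, data = mtcars)

vif(fit)                       # multicollinearity (VIF) table
shapiro.test(residuals(fit))   # residual normality (Shapiro-Wilk) test
bptest(fit)                    # residual heteroscedasticity test
durbinWatsonTest(fit)          # serial correlation (Durbin-Watson) test
```
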

### SAVE VARIABLE(S)

Save fitted values Creates a new variable containing fitted values for each case in the data.

Save predicted values Creates a new variable containing predicted values for each case in the data.

Save residuals Creates a new variable containing residual values for each case in the data.

When using this feature you can obtain additional information that is stored by the R code which produces the output.

1. To do so, select Create > R Output.
2. In the R CODE, paste: item = YourReferenceName
3. Replace YourReferenceName with the reference name of your item. Find this in the Report tree or by selecting the item and then going to Properties > General > Name from the object inspector on the right.
4. Below the first line of code, you can paste in snippets from below or type in str(item) to see a list of available information.
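A self-contained analog of steps 2-4, runnable outside Displayr (in Displayr, `item` would instead be assigned the reference name of your regression output):

```r
# Stand-in for "item = YourReferenceName": a summary object from a local GLM.
item <- summary(glm(mpg ~ wt + hp, data = mtcars))

str(item)            # list the available information
item$coefficients    # the coefficient table
```

In Displayr the corresponding path is `item$summary$coefficients`, as shown in the snippet below.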

For a more in-depth discussion of extracting information from objects in R, check out our blog post here.

Properties which may be of interest are:

• Summary outputs from the regression model:
item$summary$coefficients # summary regression outputs

## Acknowledgements

Estimated using:

• R (R Core Team 2016).
• survey (Lumley 2014)
• car (Fox and Weisberg 2011)
• MASS (Venables and Ripley 2002)

See How to Read a Standard R Table for acknowledgements regarding the outputs.