Interpretation

Interpreting model outputs.
Sleep Disordered Breathing

Analysis of nocturnal paediatric oximetry uses trained Support Vector Machines (SVMs). If Regression is selected, a Support Vector Regressor (SVR) is used to predict the apnoea-hypopnoea index (AHI). If Classification is selected, a Support Vector Classifier (SVC) produces a risk classification and a raw, unscaled output.
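
An illustrative sketch of the two modes, assuming a scikit-learn-style workflow and synthetic placeholder data (the features, labels, and kernel settings below are illustrative assumptions, not taken from the trained models):

    # Minimal sketch of the two analysis modes with synthetic placeholder data.
    import numpy as np
    from sklearn.svm import SVR, SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))             # hypothetical oximetry features
    ahi = np.abs(rng.normal(5, 4, size=200))   # hypothetical AHI values
    label = (ahi >= 5).astype(int)             # hypothetical AHI >= 5 labels

    # Regression mode: an SVR predicts the AHI directly.
    svr = SVR(kernel="rbf").fit(X, ahi)
    predicted_ahi = svr.predict(X[:5])

    # Classification mode: an SVC returns a risk class and a raw, unscaled score.
    svc = SVC(kernel="rbf").fit(X, label)
    risk_class = svc.predict(X[:5])
    raw_score = svc.decision_function(X[:5])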

Regression

Predicted AHI

If Regression is selected, an SVR produces a point estimate of the AHI. Uncertainty is quantified by a Laplace distribution that is computed and embedded within the SVR shortly after training. This distribution is invoked during prediction to provide a confidence interval around the AHI point estimate.
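
A minimal sketch of one way such an interval could be constructed, assuming the Laplace scale parameter is fitted to out-of-sample residuals (the exact fitting procedure embedded in the SVR is not specified here):

    # Sketch: fit a zero-mean Laplace distribution to cross-validated residuals
    # and use its quantiles to form an interval around the SVR point estimate.
    import numpy as np
    from scipy.stats import laplace
    from sklearn.model_selection import cross_val_predict
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))             # hypothetical features
    ahi = np.abs(rng.normal(5, 4, size=200))   # hypothetical AHI values

    svr = SVR(kernel="rbf").fit(X, ahi)

    # Cross-validated predictions approximate out-of-sample error.
    residuals = ahi - cross_val_predict(SVR(kernel="rbf"), X, ahi, cv=5)
    scale = np.mean(np.abs(residuals))         # Laplace scale (zero-mean MLE)

    # 95% confidence interval around a new point estimate.
    point = svr.predict(X[:1])[0]
    lower, upper = point + laplace.ppf([0.025, 0.975], loc=0, scale=scale)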

Classification

Score

If Classification is selected, an SVC calculates the Score: the signed, unitless distance of the input from the decision boundary (range: −∞ to +∞, with 0 representing the boundary). The sign matches the final prediction: negative values represent a negative result, and positive values a positive result. The magnitude of the Score indicates how strongly the input belongs to the assigned class.
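
A minimal sketch of reading such a Score from an SVC, assuming a scikit-learn-style decision_function and synthetic placeholder data:

    # Sketch: the decision_function value is the signed distance to the boundary.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    svc = SVC(kernel="rbf").fit(X, y)
    score = svc.decision_function(X[:5])   # signed distance to the boundary
    prediction = svc.predict(X[:5])        # sign of the score gives the class
    # score > 0 corresponds to the positive class; a larger |score| means the
    # input lies further from the boundary.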

Calibrated Probability

Reporting a calibrated probability allows clinicians to relate the Score to the probability of the real-world outcome. The calibrated probability of an AHI ≥ 5 is calculated using a Ridge (L2-regularised) Logistic Regression model that is trained shortly after the SVC on cross-validation data. The model is embedded within the SVC and is invoked during prediction to provide a calibrated probability point estimate.
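
A minimal sketch of this calibration step, assuming the L2-penalised logistic regression is fitted to cross-validated SVC scores (the calibration data and penalty strength below are placeholder assumptions):

    # Sketch: map raw SVC scores to calibrated probabilities of AHI >= 5.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))
    y = (X[:, 0] + rng.normal(scale=0.5, size=300) > 0).astype(int)  # AHI >= 5 proxy

    svc = SVC(kernel="rbf").fit(X, y)

    # Out-of-fold scores avoid calibrating on the SVC's own training fit.
    cv_scores = cross_val_predict(SVC(kernel="rbf"), X, y,
                                  cv=5, method="decision_function")
    calibrator = LogisticRegression(penalty="l2", C=1.0)
    calibrator.fit(cv_scores.reshape(-1, 1), y)

    # Calibrated probability of AHI >= 5 for a new recording.
    new_score = svc.decision_function(X[:1]).reshape(-1, 1)
    probability = calibrator.predict_proba(new_score)[0, 1]

Fitting the calibrator on out-of-fold scores, rather than on scores from the SVC's own training data, avoids the optimistic bias that would otherwise distort the reported probabilities.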

Model Explanations

Feature importance values, or explanations, estimate the impact of each feature on the classification decision.

Global Explanations

To estimate the overall importance of each feature to the prediction, Global Explanations are approximated using Monte Carlo resampling of support vectors. Each Global Explanation is the mean of the marginal contributions of that feature across the resamples. The magnitude indicates the overall importance of the feature, and the sign matches the direction of its influence on the prediction.
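
A minimal sketch of estimating mean marginal contributions by Monte Carlo resampling of support vectors; the coalition-sampling scheme below is an illustrative assumption and may differ from the software's implementation:

    # Sketch: Monte Carlo estimate of each feature's mean marginal contribution,
    # using the SVC's support vectors as the background distribution.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 6))
    y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)
    svc = SVC(kernel="rbf").fit(X, y)

    background = svc.support_vectors_
    n_features = X.shape[1]
    n_draws = 200
    global_explanations = np.zeros(n_features)

    for j in range(n_features):
        contribs = []
        for _ in range(n_draws):
            x = X[rng.integers(len(X))]                     # instance to explain
            z = background[rng.integers(len(background))]   # resampled support vector
            mask = rng.random(n_features) < 0.5             # random coalition of features
            mask[j] = False                                 # feature j absent...
            hybrid_without = np.where(mask, x, z)
            hybrid_with = hybrid_without.copy()
            hybrid_with[j] = x[j]                           # ...then present
            contribs.append(
                svc.decision_function(hybrid_with.reshape(1, -1))[0]
                - svc.decision_function(hybrid_without.reshape(1, -1))[0]
            )
        global_explanations[j] = np.mean(contribs)          # signed mean contribution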

Local Explanations

To understand the behaviour of the SVM in the neighbourhood of a given classification, a local linear model is trained. Local Explanation values are the beta coefficients of this linear model, and estimate how the prediction changes with small, localised fluctuations in feature values.
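
A minimal sketch of fitting such a local linear surrogate around a single prediction, assuming a simple Gaussian perturbation scheme (an illustration, not the software's exact procedure):

    # Sketch: fit a linear surrogate to the SVC's scores in a small region
    # around one instance; its coefficients are the Local Explanations.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 6))
    y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)
    svc = SVC(kernel="rbf").fit(X, y)

    x0 = X[0]                                          # prediction being explained
    perturbations = x0 + rng.normal(scale=0.1, size=(500, x0.size))
    local_scores = svc.decision_function(perturbations)

    surrogate = LinearRegression().fit(perturbations, local_scores)
    local_explanations = surrogate.coef_               # beta coefficients: change in
                                                       # score per small change in
                                                       # each feature near x0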