Short bio
I am a senior research scientist at the Norwegian Computing Center. I hold a PhD in Biostatistics and previously held a postdoctoral position in the FocuStat project at the Department of Mathematics, University of Oslo.
My research interests include high-dimensional statistics, supervised learning, dimension reduction techniques, and model selection. I also enjoy teaching and have taught various courses in statistics and machine learning.
Recent publications
**K. H. Hellton**, N. L. Hjort (2018). Fridge: Focused fine-tuning of ridge regression for personalized predictions. Statistics in Medicine, 37(8), 1290–1303.
Abstract: Statistical prediction methods typically require some form of fine-tuning of tuning parameter(s), with \(k\)-fold cross-validation as the canonical procedure. For ridge regression, there exist numerous procedures, but common to all, including cross-validation, is that one single parameter is chosen for all future predictions. We propose instead to calculate a unique tuning parameter for each individual for which we wish to predict an outcome. This generates an individualized prediction by focusing on the covariate vector of a specific individual. The focused ridge (fridge) procedure is introduced with a two-part contribution: first, we define an oracle tuning parameter minimizing the mean squared prediction error of a specific covariate vector; second, we propose to estimate this tuning parameter by using plug-in estimates of the regression coefficients and the error variance. The procedure is extended to logistic ridge regression by using a parametric bootstrap. For high-dimensional data, we propose to use ridge regression with cross-validation as the plug-in estimate, and simulations show that fridge gives a smaller average prediction error than ridge with cross-validation for both simulated and real data. We illustrate the new concept for both linear and logistic regression models in two applications of personalized medicine: predicting individual risk and treatment response based on gene expression data. The method is implemented in the R package fridge.
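To make the idea concrete, here is a minimal numpy sketch of the individualized tuning step described above: a plug-in estimate of the mean squared prediction error at a given covariate vector \(x_0\), minimized over a grid of tuning parameters. This is my own illustration based on standard ridge regression theory, not the fridge package itself; the function name and arguments are hypothetical.

```python
import numpy as np

def fridge_lambda(X, x0, beta_plugin, sigma2_plugin, lambdas):
    """Pick an individualized ridge tuning parameter for the covariate
    vector x0 by minimizing a plug-in estimate of the mean squared
    prediction error (a sketch of the idea, not the fridge package)."""
    n, p = X.shape
    XtX = X.T @ X
    I = np.eye(p)
    best_lam, best_mspe = None, np.inf
    for lam in lambdas:
        M = np.linalg.inv(XtX + lam * I)                  # ridge resolvent (X'X + lam I)^{-1}
        bias = x0 @ (M @ XtX - I) @ beta_plugin           # shrinkage bias of x0' beta_hat(lam)
        var = sigma2_plugin * np.sum((X @ M @ x0) ** 2)   # variance of x0' beta_hat(lam)
        mspe = bias ** 2 + var
        if mspe < best_mspe:
            best_lam, best_mspe = lam, mspe
    return best_lam
```

In the high-dimensional setting, beta_plugin could for instance be a cross-validated ridge estimate, as the abstract suggests, with sigma2_plugin taken from the corresponding residuals.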
L. M. Giil, D. Aarsland, K. H. Hellton, A. Lund, H. Heidecke, K. Schulze-Forster, G. Riemekasten, A. O. Vik-Mo, E. K. Kristoffersen, C. A. Vedeler, J. E. Nordrehaug (2018). Antibodies to multiple receptors are associated with neuropsychiatric symptoms and mortality in Alzheimer’s disease: A longitudinal study. Journal of Alzheimer’s Disease, 64(3), 761–774.
Background: Endogenous antibodies to signaling molecules and receptors (Abs) are associated with Alzheimer’s disease (AD). Objectives: To investigate the association of 33 Abs to dopaminergic, serotonergic, muscarinic, adrenergic, vascular, and immune receptors with cognitive, neuropsychiatric, and mortality outcomes. Methods: Ninety-one patients with mild AD were followed annually for 5 years with the Mini-Mental State Examination (MMSE) and the Neuropsychiatric Inventory (NPI; composite outcomes: “psychosis” (items 1 + 2), “mood” (items 4 + 5 + 7), and “agitation” (items 3 + 8 + 9)). Abs were quantified in sera obtained at baseline by ELISA and reduced to principal components (PCs). Associations between Abs and outcomes were assessed by a mixed model (MMSE decline), zero-inflated fixed effects count models (composite NPI scores), and Cox regression (mortality). The resulting p-values were adjusted for multiple testing according to a false discovery rate of 0.05 (Benjamini-Hochberg). Results: The measured levels of the 33 Abs formed four PCs. PC1 (dopaminergic and serotonergic Abs) was associated with increased mortality (hazard ratio 2.57, p < 0.001), PC2 (serotonergic, immune, and vascular Abs) with decreased agitation symptoms (\(\beta = -0.19\), p < 0.001), and PC3 (cholinergic receptor Abs) with increased mood symptoms (\(\beta = 0.04\), p = 0.002), over time. There were no associations between Abs and MMSE decline. Conclusion: The associations between Abs, mortality, and neuropsychiatric symptoms reported in this cohort are intriguing. They cannot, however, be generalized. Validation in independent sample sets is required.
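The false discovery rate adjustment mentioned above is the standard Benjamini-Hochberg step-up procedure; as a reminder of how it works, here is a compact sketch (my own illustration, not code from the study):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level alpha
    using the Benjamini-Hochberg step-up procedure."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    # find the largest k with p_(k) <= (k / m) * alpha ...
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        kmax = np.max(np.where(below)[0])
        # ... and reject the k smallest p-values
        reject[order[: kmax + 1]] = True
    return reject
```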
Ø. Sørensen, K. H. Hellton, A. Frigessi, M. Thoresen (2018). Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error. Journal of Computational and Graphical Statistics, 27(4), 739–749.
Abstract: In many problems involving generalized linear models, the covariates are subject to measurement error. When the number of covariates p exceeds the sample size n, regularized methods like the lasso or Dantzig selector are required. Several recent papers have studied methods which correct for measurement error in the lasso or Dantzig selector for linear models in the \(p > n\) setting. We study a correction for generalized linear models, based on Rosenbaum and Tsybakov’s matrix uncertainty selector. By not requiring an estimate of the measurement error covariance matrix, this generalized matrix uncertainty selector has a great practical advantage in problems involving high-dimensional data. We further derive an alternative method based on the lasso, and develop efficient algorithms for both methods. In our simulation studies of logistic and Poisson regression with measurement error, the proposed methods outperform the standard lasso and Dantzig selector with respect to covariate selection, by reducing the number of false positives considerably. We also consider classification of patients on the basis of gene expression data with noisy measurements. Supplementary materials for this article are available online.
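As background, the linear-model matrix uncertainty selector of Rosenbaum and Tsybakov, which the generalized version builds on, takes (up to notation) the form

\[
\hat{\beta} \in \operatorname{argmin}\left\{ \|\beta\|_1 \;:\; \left\| \tfrac{1}{n} W^\top (y - W\beta) \right\|_\infty \le \lambda + \delta \|\beta\|_1 \right\},
\]

where \(W\) is the noisy design matrix and \(\delta\) bounds the magnitude of the measurement error. Note that no estimate of the measurement error covariance matrix enters the criterion, which is the practical advantage highlighted in the abstract; see the paper for the generalized (GLM) version.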
T. L. Thorarinsdottir, K. H. Hellton, G. H. Steinbakk, L. Schlichting, K. Engeland (2018). Bayesian regional flood frequency analysis for large catchments. Water Resources Research, 54(9), 6929–6947.
K. H. Hellton, M. Thoresen (2017). When and Why are Principal Component Scores a Good Tool for Visualizing High-dimensional Data? Scandinavian Journal of Statistics, 44(1), 581–597.
Abstract: Principal component analysis is a popular dimension reduction technique often used to visualize high-dimensional data structures. In genomics, this can involve millions of variables, but only tens to hundreds of observations. Theoretically, such extreme high dimensionality will cause biased or inconsistent eigenvector estimates, but in practice, the principal component scores are used for visualization with great success. In this paper, we explore when and why the classical principal component scores can be used to visualize structures in high-dimensional data, even when there are few observations compared with the number of variables. Our argument is twofold: First, we argue that eigenvectors related to pervasive signals will have eigenvalues scaling linearly with the number of variables. Second, we prove that for linearly increasing eigenvalues, the sample component scores will be scaled and rotated versions of the population scores, asymptotically. Thus, the visual information of the sample scores will be unchanged, even though the sample eigenvectors are biased. In the case of pervasive signals, the principal component scores can be used to visualize the population structures, even in extreme high-dimensional situations.
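A small simulation illustrates the pervasive-signal setting: the group difference below loads on every variable, so the leading eigenvalue grows linearly with the number of variables, and the first component scores separate the groups even with \(p\) far larger than \(n\). This is my own toy example, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 10_000                       # far more variables than observations
labels = np.repeat([0, 1], n // 2)
# pervasive signal: the group difference loads on every variable,
# so the leading eigenvalue scales linearly with p
signal = np.outer(2 * labels - 1, np.ones(p))
X = signal + rng.standard_normal((n, p))
Xc = X - X.mean(axis=0)
# sample principal component scores via SVD of the centered data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, 0] * s[0]
# the first component scores separate the two groups cleanly, even though
# the p-dimensional eigenvector itself is a biased estimate
print(scores[labels == 0].mean(), scores[labels == 1].mean())
```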
K. H. Hellton, M. Thoresen (2016). Integrative clustering of high-dimensional data with joint and individual clusters. Biostatistics, 17(3), 537–548.