
Sparse Covariance Estimation, Variable Selection and Testing for Large Datasets

Yiannis Dendramis
University of Cyprus, Department of Accounting and Finance

Yiannis Dendramis acknowledges support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 708501.

Dendramis () Regularization-Estimation-Model Selection 1 / 78


Introduction: Big Datasets in Economics

Information extracted from large datasets is key to many scientific discoveries, so it is important to have the right tools to handle such data.

Nowadays computers mediate most economic transactions. These "computer-mediated transactions" generate huge amounts of data that can be used to extract information relevant for economic policy.

The main problem with large datasets is that conventional statistical and econometric techniques, such as regression, fail to work consistently due to the dimensionality of the examined economic relationships.

E.g. in a linear economic relationship we frequently have T observations of a dependent variable (y) as a function of many potential predictors. When the number of predictors (p) is as large as or larger than the temporal dimension (T), a regression with all available covariates becomes problematic, if not impossible.

This generates the need for techniques specialized for analyzing huge loads of information.


The main problem that we consider

A linear regression model with p possible covariates,

y_t = \sum_{i=1}^{p} \beta_i x_{it} + u_t, \quad u_t \sim N(0, \sigma_y^2), \quad t = 1, \dots, T   (1)

and a sparse parameter vector (w.l.o.g.)

\{\beta_i\}_{i=1}^{k} \neq 0 \quad \text{and} \quad \{\beta_i\}_{i=k+1}^{p} = 0   (2)

with k finite, or at least k < T,

i.e. there are k true regressors and p - k noise variables.

In the high-dimensional setting that we study, T and p are large and it is possible to have T < p.

Our interest: in this setup, we wish to estimate the true parameter vector β by
- selecting the true regressors (signals) that explain the dependent variable (y)
- discarding the noise variables
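To fix ideas, the DGP in (1)-(2) can be simulated directly; a minimal sketch, where the function and parameter names are illustrative and not from the paper:

```python
import numpy as np

# Minimal sketch of the DGP (1)-(2): k true regressors with beta_i = 1 and
# p - k noise variables; possibly p > T. All names are illustrative.
def simulate_sparse_model(T=100, p=200, k=4, sigma_y=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((T, p))        # candidate covariates
    beta = np.zeros(p)
    beta[:k] = 1.0                         # sparse parameter vector, eq. (2)
    y = x @ beta + sigma_y * rng.standard_normal(T)   # eq. (1)
    return y, x, beta

y, x, beta = simulate_sparse_model()
```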


Some Literature on Model Selection and Estimation

There are two basic strands in the literature:

• Penalized regression methods: use all available covariates at the same time,

\hat{\beta} = \arg\min_{\beta} \left\{ \sum_{t=1}^{T} \left( y_t - \sum_{i=1}^{p} \beta_i x_{it} \right)^2 + P_\lambda(\beta) \right\}   (3)

P_λ(β) depends on the tuning parameter vector λ and penalizes the complexity of β:
- Lasso: P_λ(β) = λ‖β‖_1
- main theoretical contributors (among others): Tibshirani (1996), Fan and Li (2001), Zou and Hastie (2005), Lv and Fan (2009), Bickel, Ritov, and Tsybakov (2009), Fan and Lv (2013)
- main drawbacks: the choice of penalty function and the choice of the tuning parameter
- both significantly alter the theoretical and small-sample performance of the methods
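As an illustration of (3) with the Lasso penalty P_λ(β) = λ‖β‖_1, a bare-bones coordinate-descent solver; this is a sketch, not the tuned implementations used in the literature:

```python
import numpy as np

# Bare-bones coordinate descent for (3) with the Lasso penalty
# P_lambda(beta) = lam * ||beta||_1. Illustrative sketch only.
def lasso_cd(y, x, lam, n_iter=200):
    T, p = x.shape
    beta = np.zeros(p)
    col_ss = (x ** 2).sum(axis=0)                    # per-column sum of squares
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - x @ beta + x[:, j] * beta[j]   # partial residual
            rho = x[:, j] @ r_j
            # soft-thresholding update for coordinate j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 2))
y = 3.0 * x[:, 0]                                    # noiseless toy example
beta_hat = lasso_cd(y, x, lam=5.0)
```

The soft-thresholding step is what produces exact zeros, i.e. the variable selection behavior discussed above.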


Some Literature on Model Selection and Estimation

• Greedy methods: sequential procedures that focus on the predictive power of individual regressors
- do not consider all p covariates together
- regressors are chosen based on their individual ability to explain the dependent variable y
- machine learning methods such as boosting, regression trees, and stepwise regressions are cast in this category
- most of these methods lack theoretical foundations

• Combining greedy and penalized regressions: "Sure Screening" (Fan and Lv 2008)
- step 1: rank absolute correlations between regressors and the dependent variable and select a fixed proportion of regressors
- step 2: penalized regression on the variables selected in the first step
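The screening step above can be sketched as follows (first stage only; the second-stage penalized regression is omitted, and the names are illustrative):

```python
import numpy as np

# Sketch of the sure-screening step: rank covariates by absolute marginal
# correlation with y and keep the indices of the top d.
def sis_screen(y, x, d):
    yc = y - y.mean()
    xc = x - x.mean(axis=0)
    corr = np.abs(xc.T @ yc) / (np.linalg.norm(xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(corr)[::-1][:d]                # indices of top-d covariates

rng = np.random.default_rng(0)
x = rng.standard_normal((200, 50))
y = 2.0 * x[:, 0] + 2.0 * x[:, 1] + 0.5 * rng.standard_normal(200)
top = sis_screen(y, x, d=10)
```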


Some Literature on Model Selection and Estimation

• OCMT method: Chudik, Kapetanios and Pesaran (2018)
- an inferential approach to model selection
- the initial regressions are common to boosting and sure screening
- proposes a multiple testing method for controlling the probability of selecting the true model
- advantage: it does not rely on tuning parameters


Basic idea of what we propose

In a sparse linear model like (1) with p, T large and possibly p > T:

- assume a sparse covariance structure for z_t = (y_t, x_t), i.e. Σ_z is sparse
- then the true parameter is

\beta = \Sigma_x^{-1} \Sigma_{xy}   (4)

with Σ_x the submatrix of Σ_z that corresponds to the covariance of x_t, and Σ_xy the partition of Σ_z that corresponds to cov(y_t, x_t)
- provide a consistent estimator β̂ as a function of this covariance estimate, which can extract the true signals and discard the noise variables
- we modify OLS estimation by using threshold estimates of covariances to enable application to large datasets
- crucially, all the available variables are considered in a single regression-type step
- provide extensions and enhancements to this basic estimator
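At the sample level, identity (4) with plain (unthresholded) sample covariances reduces to centered OLS; a quick numerical check, with illustrative dimensions and seed:

```python
import numpy as np

# Identity (4): beta = Sigma_x^{-1} Sigma_xy. With plain sample covariances
# of z_t = (y_t, x_t) and no thresholding this reproduces centered OLS.
rng = np.random.default_rng(1)
T, p = 500, 3
x = rng.standard_normal((T, p))
beta_true = np.array([1.0, -2.0, 0.0])
y = x @ beta_true + 0.1 * rng.standard_normal(T)

z = np.column_stack([y, x])
sigma_z = np.cov(z, rowvar=False)   # (p+1) x (p+1) covariance of z_t
sigma_x = sigma_z[1:, 1:]           # block for the regressors
sigma_xy = sigma_z[1:, 0]           # block for cov(y_t, x_t)
beta_hat = np.linalg.solve(sigma_x, sigma_xy)
```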


Basic idea of what we propose

To the best of our knowledge, what we propose is a new method that:

- can base the selection/estimation procedure directly on the estimate of β, even when p is large

and we can do this with:

- no need to rely on and choose a penalty function from the large set of proposals available in the literature
- no need to examine each regressor separately
- no need to standardize possible covariates, as in penalized methods
- it can be extended to an inferential procedure that links directly to classical statistical analysis
- it has nice theoretical properties


The Regularization approach

Define the vector z_t = (y_t, x_t), with x_t = (x_{1t}, x_{2t}, ..., x_{pt}).

Σ_z = covariance matrix of z_t, of dimension p_z × p_z (p_z = p + 1).

When Σ_z is sparse, i.e.

\Sigma_z \in Q(n_{p_z}, K) = \left\{ \Sigma : \sigma_{ii} \leq K, \ \max_i \sum_{j} 1\{\sigma_{ij} \neq 0\} \leq n_{p_z} \right\}   (5)

or equivalently, it has at most n_{p_z} non-zero elements in a row, with n_{p_z} < p_z.


The Regularization approach

Bickel and Levina (2008): when z_t is an iid Gaussian process and Σ_z is sparse, then for sufficiently large κ, when n_{p_z} λ = o(1),

\| T_\lambda(\hat{\Sigma}_z) - \Sigma_z \| = O_p(n_{p_z} \lambda)   (6)

\| T_\lambda(\hat{\Sigma}_z)^{-1} - \Sigma_z^{-1} \| = O_p(n_{p_z} \lambda) \quad \text{for } \min \operatorname{eig}(\Sigma_z) > 0   (7)

where T_λ(Σ̂_z) is the regularized covariance matrix estimator.


The Regularization approach

E.g. for hard thresholding,

[T_\lambda(\hat{\Sigma}_z)]_{ij} = \hat{\sigma}_{ij} \, 1\{ |\hat{\sigma}_{ij}| > \lambda \}   (8)

\hat{\sigma}_{ij} = [\hat{\Sigma}_z]_{ij}, \quad \hat{\Sigma}_z = T^{-1} \sum_{t=1}^{T} (z_t - \bar{z})'(z_t - \bar{z}), \quad \bar{z} = T^{-1} \sum_{t=1}^{T} z_t   (9)

\lambda = \kappa \sqrt{\frac{\log p_z}{T}}, \quad \kappa > 0   (10)

The λ parameter sets a universal lower bound for the elements retained in the covariance estimate Σ̂_z.
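A direct implementation of (8)-(10); leaving the diagonal (the variances) untouched is a common convention and is an assumption here:

```python
import numpy as np

# Hard-thresholding estimator (8)-(10): keep sigma_hat_ij only when it
# exceeds the universal threshold lam = kappa * sqrt(log(p_z) / T).
def hard_threshold_cov(z, kappa=2.0):
    T, pz = z.shape
    sigma_hat = np.cov(z, rowvar=False, bias=True)  # T^{-1} sum (z_t - zbar)'(z_t - zbar)
    lam = kappa * np.sqrt(np.log(pz) / T)           # eq. (10)
    keep = np.abs(sigma_hat) > lam                  # eq. (8)
    np.fill_diagonal(keep, True)                    # convention: keep variances
    return sigma_hat * keep, lam

rng = np.random.default_rng(0)
z = rng.standard_normal((200, 50))                  # independent columns
sigma_t, lam = hard_threshold_cov(z)
off_diag = sigma_t - np.diag(np.diag(sigma_t))
```

With independent columns, essentially all off-diagonal entries fall below λ and are zeroed out.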


The Regularization approach

Cai and Liu (2011): for independent samples of z_t from more general (heavy-tailed) distributions
- instead of setting a universal bound, they propose a threshold procedure that is adaptive to the variability of the individual entries
- this estimator is consistent over a larger class of sparse covariance matrices under the spectral norm

\lambda_{ij} = \kappa \sqrt{\frac{\hat{\theta}_{ij} \log p_z}{T}}, \quad \kappa > 0   (11)

\hat{\theta}_{ij} = \frac{1}{T} \sum_{t=1}^{T} \left[ (z_{i,t} - \bar{z}_i)(z_{j,t} - \bar{z}_j) - \hat{\sigma}_{ij} \right]^2   (12)
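A sketch of the entry-adaptive thresholds (11)-(12); the diagonal convention is again an assumption, and the dense T × p_z × p_z intermediate is only for clarity, not efficiency:

```python
import numpy as np

# Entry-adaptive thresholds (11)-(12): lambda_ij scales with theta_hat_ij,
# the estimated variance of the (i,j) entry of the sample covariance.
def adaptive_threshold_cov(z, kappa=2.0):
    T, pz = z.shape
    zc = z - z.mean(axis=0)
    sigma_hat = (zc.T @ zc) / T
    prods = zc[:, :, None] * zc[:, None, :]               # T x pz x pz products
    theta_hat = ((prods - sigma_hat) ** 2).mean(axis=0)   # eq. (12)
    lam = kappa * np.sqrt(theta_hat * np.log(pz) / T)     # eq. (11)
    keep = np.abs(sigma_hat) > lam
    np.fill_diagonal(keep, True)                          # convention: keep variances
    return sigma_hat * keep

rng = np.random.default_rng(0)
z = rng.standard_normal((200, 20))
sigma_t = adaptive_threshold_cov(z)
off_diag = sigma_t - np.diag(np.diag(sigma_t))
```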


The Regularization approach

Dendramis, Giraitis and Kapetanios (2018) showed that (6), (7) hold under more general assumptions. Let c > 0, ε > 0 and suppose that T ≥ c p^ε (practically, this means that p can be much larger than T):

1) z_t is a stationary p_z-dimensional α-mixing process
2) z_t can be a heavy-tailed random variable (a departure from Gaussianity), at the cost of a smaller p

Moreover:
3) when z_t is heteroscedastic (Σ_z becomes Σ_{z,t}), a time-varying thresholding estimator can be introduced, with good theoretical and small-sample properties


The Regularization approach

For the DGP (1) we consider the following assumptions:

1) The error term u_t is an MDS with respect to the filtration F^u_{t-1} = σ(u_{t-1}, u_{t-2}, ...), with zero mean and constant variance 0 < σ² < C < ∞.
2) There exist sufficiently large positive constants C_0, C_1, C_2, C_3 and s_u, s_z > 0 such that z_{it} and u_t satisfy the following conditions:

\Pr(|z_{it}| \geq \zeta) \leq C_0 \exp(-C_1 \zeta^{s_z})   (13)

\Pr(|u_t| \geq \zeta) \leq C_2 \exp(-C_3 \zeta^{s_u})

This can be weakened to a polynomial rate, following Dendramis, Giraitis and Kapetanios (2018), but at the cost of a smaller p.
3) Regressors are uncorrelated with the errors, E(x_{it} u_t | F_{t-1}) = 0 for all t = 1, 2, ..., T, i = 1, ..., p, with

F_{t-1} = F^u_{t-1} \cup F^x_{t-1} = \sigma(u_{t-1}, u_{t-2}, ...) \cup \left\{ \cup_{j=1}^{p} \sigma(x_{jt-1}, x_{jt-2}, ...) \right\}.


The Regularization approach

4) The x_{it}, for t = 1, ..., T, i = 1, ..., p, are either non-stochastic or random with finite 8th-order moments.
5) The number of true regressors k is finite, or at least k < T.
6) E( x_{it} x_{jt} - E(x_{it} x_{jt}) | F_{t-1} ) = 0, for all i, j.


The Regularization approach

Comments on the assumptions:
1) Assumption 1 allows for some dependence, e.g. through an ARCH process, but no serial correlation.
2) Assumption 2 can be relaxed to allow the vector z_t (regressors and dependent variable) to be heavy-tailed. This is a departure from the exponentially declining bound for the probability tails that is usual in the literature. For the error we adopt the exponentially declining bound, which can be simplified to a normality assumption.
3) Assumption 3 is an MDS assumption on x_{it} u_t with respect to F_{t-1}.
6) Assumption 6 states that x_{it} x_{jt} - E(x_{it} x_{jt}) is an MDS.


The Regularization approach

Define the regularized estimator as

\hat{\beta}_\lambda = \left[ T_\lambda(\hat{\Sigma}_x) \right]^{-1} T_\lambda(\hat{\Sigma}_{xy})   (14)

with T_λ(Σ̂_x), T_λ(Σ̂_xy) being the submatrix/vector of T_λ(Σ̂_z) that correspond to the regressors' covariance matrix and to the covariance of y_t with x_t, respectively, and

[T_\lambda(\hat{\Sigma}_z)]_{ij} = \hat{\sigma}_{ij} \, 1\{ |\hat{\sigma}_{ij}| > \lambda \}   (15)

\hat{\sigma}_{ij} = [\hat{\Sigma}_z]_{ij}, \quad \lambda = \kappa \sqrt{\frac{\log p_z}{T}}, \quad p_z = p + 1

\hat{\Sigma}_z = T^{-1} \sum_{t=1}^{T} (z_t - \bar{z})'(z_t - \bar{z}), \quad \bar{z} = T^{-1} \sum_{t=1}^{T} z_t
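A simplified sketch of the estimator (14) using the universal threshold (10); keeping the variances on the diagonal is an assumption of this sketch:

```python
import numpy as np

# Sketch of the regularized estimator (14): threshold the covariance of
# z_t = (y_t, x_t), then solve T_lam(Sigma_x) beta = T_lam(Sigma_xy).
def regularized_beta(y, x, kappa=2.0):
    z = np.column_stack([y, x])
    T, pz = z.shape
    zc = z - z.mean(axis=0)
    sigma_z = (zc.T @ zc) / T
    lam = kappa * np.sqrt(np.log(pz) / T)            # eq. (10)
    sigma_t = np.where(np.abs(sigma_z) > lam, sigma_z, 0.0)
    np.fill_diagonal(sigma_t, np.diag(sigma_z))      # keep variances (assumption)
    sigma_x = sigma_t[1:, 1:]                        # T_lam(Sigma_x)
    sigma_xy = sigma_t[1:, 0]                        # T_lam(Sigma_xy)
    return np.linalg.solve(sigma_x, sigma_xy)

rng = np.random.default_rng(0)
x = rng.standard_normal((300, 30))
y = x[:, 0] + x[:, 1] + rng.standard_normal(300)
beta_hat = regularized_beta(y, x)
```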


The Regularization approach

Theorem 1: Consider the DGP defined in (1), and suppose that assumptions 1-7 hold. Then for sufficiently large κ, the regularized estimator β̂_λ satisfies

\| \hat{\beta}_\lambda - \beta \| = O_p\!\left( n_{p_z}^{3/2} \lambda_{p_z} \right)   (16)

Also, for the probability bound of ‖β̂_λ − β‖ and constants C_1, C_2, C_3, C_4 we have that

\Pr\!\left( \| \hat{\beta}_\lambda - \beta \| > n_{p_z}^{3/2} \lambda_{p_z} \right) \leq p_z^2 C_1 \exp\!\left( -C_2 T \lambda_{p_z}^2 \right) + p_z C_3 \exp\!\left( -C_4 T \lambda_{p_z} \right)   (17)

- The proposed estimator converges to the true β, which means that asymptotically we are able to extract the true signals (β_i ≠ 0).


The Regularization approach

We then introduce a testing procedure to combine with the basic estimator:
- define the following ratio for each covariate i ∈ {1, ..., p}

\xi_{\lambda,i} = \frac{\hat{\beta}_{\lambda,i}}{\hat{\gamma}_{\lambda,i}}, \quad \text{with} \quad \hat{\gamma}_{\lambda,i} = \sqrt{ s^2 \left[ \left( T \cdot T_\lambda(\hat{\Sigma}_{xx}) \right)^{-1} \right]_{ii} }   (18)

s^2 = e'e/T, \quad e_t = y_t - x_t \hat{\beta}_\lambda, \quad i = 1, \dots, p   (19)

- γ̂_{λ,i} is a normalization parameter and is the vehicle that allows us to introduce the testing procedure

I\left( \widehat{\beta_{\lambda,i} \neq 0} \right) = \begin{cases} 1 & \text{when } |\xi_{\lambda,i}| > c_p \\ 0 & \text{otherwise} \end{cases}   (20)

\text{with} \quad c_p = \Phi^{-1}\!\left( 1 - \frac{\alpha}{c \, p^{\pi}} \right)   (21)

\text{and} \quad c_p = O\!\left( (\pi \log p)^{1/2} \right) = o(T^{c_0}), \ \forall c_0 > 0   (22)

where Φ^{-1} is the quantile function of the standard normal distribution, α is the nominal size of the individual test, to be set by the researcher (e.g. α = 1%), and 0 < π, c < ∞.
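The cutoff (21) is easy to compute; a sketch with c = π = 1 (the practical choice suggested below) and illustrative function names:

```python
import numpy as np
from statistics import NormalDist

# Multiple-testing cutoff (21): c_p = Phi^{-1}(1 - alpha / (c * p^pi)),
# here with c = pi_exp = 1.
def critical_value(p, alpha=0.01, c=1.0, pi_exp=1.0):
    return NormalDist().inv_cdf(1.0 - alpha / (c * p ** pi_exp))

def select_signals(xi, p, alpha=0.01):
    return np.abs(xi) > critical_value(p, alpha)     # indicator (20)

cp_100 = critical_value(100)                         # stricter than a single 1% test
flags = select_signals(np.array([5.0, 0.5]), p=100)
```

Note how the cutoff grows (slowly, like √log p) with the number of candidate covariates, which is what controls the multiple-testing problem.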


The Regularization approach

c_p is a Bonferroni-type critical value that corrects for the multiple testing problem, controlling the family-wise error rate at level α (rejecting at least one H_0 when H_0 is true, a type I error).

The γ̂_i can be the s.e. of β̂_i when p is small relative to T (e.g. when p < T^{1/2}).

The c, π do not affect the asymptotic results, but in small samples performance can be affected by these parameters. In practice one can set them both to 1, or choose them through cross-validation.


The Regularization approach: Basic idea of the testing procedure

The testing works as follows:
- if ξ_{λ,i} is bounded in T, then x_{it} is a noise variable
- if ξ_{λ,i} diverges with T, then x_{it} has power in explaining y_t, i.e. x_{it} is a signal variable


The Regularization approach

True positive rate and false positive rate:

TPR_{p,T} = \frac{ \#\{ i : \hat{\beta}_{\lambda,i} \neq 0 \text{ and } \beta_i \neq 0 \} }{ \#\{ i : \beta_i \neq 0 \} }   (23)

FPR_{p,T} = \frac{ \#\{ i : \hat{\beta}_{\lambda,i} \neq 0 \text{ and } \beta_i = 0 \} }{ \#\{ i : \beta_i = 0 \} }   (24)

TPR_{p,T} measures the percentage of true signals that we correctly identify (it should converge to 1).
FPR_{p,T} measures the percentage of noise variables that we incorrectly identify as true signals (it should converge to zero).
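Definitions (23)-(24) translate directly into code:

```python
import numpy as np

# Direct implementation of (23)-(24): TPR is the share of true signals
# recovered, FPR the share of noise variables wrongly selected.
def tpr_fpr(beta_hat, beta_true):
    sel = beta_hat != 0
    signal = beta_true != 0
    tpr = (sel & signal).sum() / signal.sum()
    fpr = (sel & ~signal).sum() / (~signal).sum()
    return tpr, fpr

beta_true = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
beta_hat = np.array([0.9, 0.0, 0.2, 0.0, 0.0])   # one hit, one miss, one false positive
tpr, fpr = tpr_fpr(beta_hat, beta_true)
```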


The Regularization approach

To enhance the model selection performance of the proposed β̂_λ, we propose the following:
1. Estimate the parameter of model (1) by

\hat{\beta}_\lambda = \left[ T_\lambda(\hat{\Sigma}_x) \right]^{-1} T_\lambda(\hat{\Sigma}_{xy})   (25)

with T_λ(Σ̂_x), T_λ(Σ̂_xy) being the submatrix/vector of T_λ(Σ̂_z) that correspond to the regressors' covariance matrix and to the covariance of y_t with x_t, respectively,

[T_\lambda(\hat{\Sigma}_z)]_{ij} = \hat{\sigma}_{ij} \, 1\{ |\hat{\sigma}_{ij}| > \lambda \}   (26)

\hat{\sigma}_{ij} = [\hat{\Sigma}_z]_{ij}, \quad \lambda = \kappa \sqrt{\frac{\log p_z}{T}}, \quad p_z = p + 1

\hat{\Sigma}_z = T^{-1} \sum_{t=1}^{T} (z_t - \bar{z})'(z_t - \bar{z}), \quad \bar{z} = T^{-1} \sum_{t=1}^{T} z_t


The Regularization approach

2. Filter the variables that truly explain y_t by

I\left( \widehat{\beta_{\lambda,i} \neq 0} \right) = \begin{cases} 1 & \text{when } |\xi_{\lambda,i}| > c_p \\ 0 & \text{otherwise} \end{cases}

\xi_{\lambda,i} = \frac{\hat{\beta}_{\lambda,i}}{\hat{\gamma}_{\lambda,i}}, \quad \text{with} \quad \hat{\gamma}_{\lambda,i} = \sqrt{ s^2 \left[ \left( T \cdot T_\lambda(\hat{\Sigma}_{xx}) \right)^{-1} \right]_{ii} }

s^2 = e'e/T, \quad e_t = y_t - x_t \hat{\beta}_\lambda, \quad c_p = \Phi^{-1}\!\left( 1 - \frac{\alpha}{c \, p^{\pi}} \right), \quad i = 1, \dots, p


The Regularization approach

Theorem 2: Consider the DGP defined in (1), and suppose that assumptions 1-7 hold. Then the previously defined 2-step procedure can identify the true model; equivalently, for the true positive rate we have that

E[TPR_{p,T}] = k^{-1} \sum_{i=1}^{k} \Pr\!\left( |\xi_{\lambda,i}| > c_p \mid \beta_i \neq 0 \right)
\geq 1 - O\!\left( p_z^2 C_1 \exp\!\left( -C_2 T \lambda_{p_z}^2 \right) + p_z C_3 \exp\!\left( -C_4 T \lambda_{p_z} \right) \right)

or equivalently TPR_{p,T} \to_p 1,

and for the false positive rate

E[FPR_{p,T}] = (p-k)^{-1} \sum_{i=k+1}^{p} \Pr\!\left( |\xi_{\lambda,i}| > c_p \mid \beta_i = 0 \right)   (27)
= \frac{1}{p-k} O\!\left( p_z^2 \exp\!\left( -C_2^* T \lambda_{p_z}^2 \right) + p_z \exp\!\left( -C_4^* T \lambda_{p_z} \right) \right)   (28)
= O\!\left( p_z \exp\!\left( -C_2^* T \lambda_{p_z}^2 \right) + \exp\!\left( -C_4^* T \lambda_{p_z} \right) \right)   (29)

or equivalently FPR_{p,T} \to_p 0.   (30)


The Regularization approach

To further enhance the selection procedure we propose the following:

1. Estimate the parameter of model (1) by

\hat{\beta}_\lambda = \left[ T_\lambda(\hat{\Sigma}_x) \right]^{-1} T_\lambda(\hat{\Sigma}_{xy})   (31)

with T_λ(Σ̂_x), T_λ(Σ̂_xy) being the submatrix/vector of T_λ(Σ̂_z) that correspond to the regressors' covariance matrix and to the covariance of y_t with x_t, respectively,

[T_\lambda(\hat{\Sigma}_z)]_{ij} = \hat{\sigma}_{ij} \, 1\{ |\hat{\sigma}_{ij}| > \lambda \}   (32)

\hat{\sigma}_{ij} = [\hat{\Sigma}_z]_{ij}, \quad \lambda = \kappa \sqrt{\frac{\log p_z}{T}}, \quad p_z = p + 1

\hat{\Sigma}_z = T^{-1} \sum_{t=1}^{T} (z_t - \bar{z})'(z_t - \bar{z}), \quad \bar{z} = T^{-1} \sum_{t=1}^{T} z_t


The Regularization approach

2. Filter out the unimportant variables by

I\left( \widehat{\beta_{\lambda,i} \neq 0} \right) = \begin{cases} 1 & \text{when } |\xi_{\lambda,i}| > c_p \\ 0 & \text{otherwise} \end{cases}

\xi_{\lambda,i} = \frac{\hat{\beta}_{\lambda,i}}{\hat{\gamma}_{\lambda,i}}, \quad \text{with} \quad \hat{\gamma}_{\lambda,i} = \sqrt{ s^2 \left[ \left( T \cdot T_\lambda(\hat{\Sigma}_{xx}) \right)^{-1} \right]_{ii} }

s^2 = e'e/T, \quad e_t = y_t - x_t \hat{\beta}_\lambda, \quad c_p = \Phi^{-1}\!\left( 1 - \frac{\alpha}{c \, p^{\pi}} \right), \quad i = 1, \dots, p

3. On the important variables, run a final OLS regression

\hat{\beta}_{\lambda,f} = \left( \tilde{x}_{T,-J}' \tilde{x}_{T,-J} \right)^{-1} \tilde{x}_{T,-J}' y_T

where \tilde{x}_{T,-J} is x_T excluding the columns in J = \{ j : I( \widehat{\beta_{\lambda,j} \neq 0} ) = 0 \}.
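Step 3 in code form; `selected` is an illustrative stand-in for the index set retained by the testing step:

```python
import numpy as np

# Sketch of step 3: rerun OLS using only the covariates retained by the
# testing step ('selected' is illustrative, produced by the filter above).
def final_ols(y, x, selected):
    xs = x[:, selected]                              # drop the columns in J
    coef, *_ = np.linalg.lstsq(xs, y, rcond=None)
    beta_f = np.zeros(x.shape[1])
    beta_f[selected] = coef
    return beta_f

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 3))
y = 2.0 * x[:, 0] - 1.0 * x[:, 2]                    # noiseless toy data
beta_f = final_ols(y, x, selected=np.array([0, 2]))
```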


The Regularization approach

Theorem 3: Consider the DGP defined in (1), and suppose that assumptions 1-9 hold. Then for sufficiently large constants C_1, C_2, C_3, C_4, the previously defined 3-step procedure provides consistent estimates:

\| \hat{\beta}_{\lambda,f} - \beta \| = O_p\!\left( \frac{1}{\sqrt{T}} \right) + O_p\!\left( p_z^2 C_1 \exp\!\left( -C_2 T \lambda_{p_z}^2 \right) + p_z C_3 \exp\!\left( -C_4 T \lambda_{p_z} \right) \right)


The Regularization approach

Sure Independence Screening (Fan and Lv 2008)
- step 1: rank absolute correlations between regressors and the dependent variable and select a fixed proportion of regressors
- step 2: penalized regression on the variables selected in the first step

This implies a significant decrease in the dimensionality of the problem, speeding up the estimators.

We can combine a sure screening idea in the following two-step procedure:
- step 1: from the estimated covariance, use T_λ(Σ̂_xy) to drop the zero-correlation noise variables
- step 2: apply all the proposed methods


Simulation Experiments

Generate data from

y_t = \beta_0 + \sum_{i=1}^{4} \beta_i x_{it} + \varepsilon_t, \quad \varepsilon_t \sim NIID(0, \sigma_y^2), \quad \beta_i = 1, \ \forall i = 1, \dots, 4   (33)

For g_t ~ N(0,1), ε_{it} ~ N(0,1), the T × p matrix x is generated by:

• DGP 1: uncorrelated covariates
x_t ~ N(0, I_p)

• DGP 52: pseudo-signal variables
x_{it} = (ε_{it} + g_t)/√2, for i = 1, ..., 4
x_{5t} = ε_{5t} + κ x_{1t}, for κ = 1.33
x_{6t} = ε_{6t} + κ x_{2t}, for κ = 1.33; x_{it} = (ε_{it} + ε_{i-1,t})/√2, for i > 6

• DGP 43: temporally uncorrelated, weakly collinear covariates
x_{it} = (ε_{it} + g_t)/√2, for i = 1, ..., 4; x_{5t} = ε_{5t}
x_{it} = (ε_{it} + ε_{i-1,t})/√2, for i > 5


Simulation Experiments

• DGP 44: strongly collinear noise variables due to a persistent unobserved common factor
x_{it} = (ε_{it} + g_t)/√2, for i = 1, ..., 4; x_{5t} = (ε_{5t} + b_i f_t)/√3
x_{it} = ((ε_{it} + ε_{i-1,t})/√2 + b_i f_t)/√3, for i > 5, b_i ~ N(1,1)
f_t = 0.95 f_{t-1} + √(1 - 0.95²) v_t, v_t ~ N(0,1)

• DGP 45: temporally correlated and weakly collinear covariates
x_{it} = (ε_{it} + g_t)/√2, for i = 1, ..., 4; x_{5t} = ε_{5t}
x_{it} = (ε_{it} + ε_{i-1,t})/√2, for i > 5
ε_{it} = ρ ε_{i,t-1} + √(1 - ρ²) v_{it}, v_{it} ~ N(0,1), ρ = 0.5

• DGP 51: all variables collinear with the true signals
x_t ~ N(0, Σ_p), σ_{ij} = 0.5^{|i-j|}, 1 ≤ i, j ≤ p
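DGP 51 can be generated with a Cholesky factor of the Toeplitz covariance; a sketch with illustrative names:

```python
import numpy as np

# Sketch of DGP 51: x_t ~ N(0, Sigma_p) with Toeplitz covariance
# sigma_ij = 0.5^{|i-j|}, so every covariate is collinear with the signals.
def dgp51(T, p, seed=0):
    rng = np.random.default_rng(seed)
    sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    chol = np.linalg.cholesky(sigma)                 # Sigma = L L'
    return rng.standard_normal((T, p)) @ chol.T      # rows are draws of x_t

x = dgp51(T=5000, p=4)
sample_cov = np.cov(x, rowvar=False)
```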


Simulation Experiments

E.g. a 10 × 10 covariance from DGP 52 (pseudo-signal variables):

Σ_{10,(52)} =
[ 1    0.5  0.5  0.5  1.3  0.7  0    0    0    0   ]
[ 0.5  1    0.5  0.5  0.7  1.3  0    0    0    0   ]
[ 0.5  0.5  1    0.5  0.7  0.7  0    0    0    0   ]
[ 0.5  0.5  0.5  1    0.7  0.7  0    0    0    0   ]
[ 1.3  0.7  0.7  0.7  2.8  0.9  0    0    0    0   ]
[ 0.7  1.3  0.7  0.7  0.9  2.8  0.7  0    0    0   ]
[ 0    0    0    0    0    0.7  1    0.5  0    0   ]
[ 0    0    0    0    0    0    0.5  1    0.5  0   ]
[ 0    0    0    0    0    0    0    0.5  1    0.5 ]
[ 0    0    0    0    0    0    0    0    0.5  1   ]


Simulation Experiments

E.g. a 10 × 10 covariance from DGP 43 (temporally uncorrelated, weakly collinear covariates):

Σ_{10,(43)} =
[ 1    0.5  0.5  0.5  0    0    0    0    0    0   ]
[ 0.5  1    0.5  0.5  0    0    0    0    0    0   ]
[ 0.5  0.5  1    0.5  0    0    0    0    0    0   ]
[ 0.5  0.5  0.5  1    0    0    0    0    0    0   ]
[ 0    0    0    0    1    0.7  0    0    0    0   ]
[ 0    0    0    0    0.7  1    0.5  0    0    0   ]
[ 0    0    0    0    0    0.5  1    0.5  0    0   ]
[ 0    0    0    0    0    0    0.5  1    0.5  0   ]
[ 0    0    0    0    0    0    0    0.5  1    0.5 ]
[ 0    0    0    0    0    0    0    0    0.5  1   ]


Simulation Experiments

E.g. a 7 × 7 covariance from DGP 51 (all variables collinear with the true signals):

Σ_{7,(51)} =
[ 1      0.5    0.25   0.125  0.0625 0.0313 0.0156 ]
[ 0.5    1      0.5    0.25   0.125  0.0625 0.0313 ]
[ 0.25   0.5    1      0.5    0.25   0.125  0.0625 ]
[ 0.125  0.25   0.5    1      0.5    0.25   0.125  ]
[ 0.0625 0.125  0.25   0.5    1      0.5    0.25   ]
[ 0.0313 0.0625 0.125  0.25   0.5    1      0.5    ]
[ 0.0156 0.0313 0.0625 0.125  0.25   0.5    1      ]


Simulation Experiments

• Adaptive Lasso
- applied as described in Section 2.8.4 of Buhlmann and van de Geer (2011)

• Comparison
- we focus on the Lasso/adaptive Lasso, as these tend to perform better than boosting and the other penalized regression methods

• Tuning parameters
- Lasso/adaptive Lasso: 10-fold cross-validation, as this is the benchmark in the literature
- regularized methods: κ through leave-one-out cross-validation over the available sample
- significance level: α is set to 1%, but results do not change significantly for other values (e.g. 0.5% or 5%)


Simulation Experiments

• Evaluation
- RMSE of the parameter vector
- FPR, TPR
- true model: I\left( \left\{ \sum_{i=1}^{k} I(\hat{\beta}_i \neq 0) = k \right\} \cap \left\{ \sum_{i=k+1}^{p} I(\hat{\beta}_i \neq 0) = 0 \right\} \right)
- out-of-sample RMSFE
- RMSE and RMSFE are reported as ratios relative to the Lasso
- the total number of simulations is 100


Simulation Experiments

• Method names

Letter  Explanation
S       SIS
R-BL    Regularized estimator with Bickel and Levina thresholding
R-CL    Regularized estimator with Cai and Liu thresholding
t       testing procedure
r       final regression on important covariates


Simulation Experiments

DGP 1: uncorrelated covariates

[Figure: DGP=1, T=200, p=200. Boxplots of rmse, tpr, fpr, truth, and rmsfe by method (Lasso, Ad. Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, R-BL-t, R-BL-t-r, R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-CL-t, R-CL-t-r, R-CL) for R² = 0.7, 0.5, 0.3.]


Simulation Experiments

DGP 1: uncorrelated covariates

DGP=1, T=200, p=200, nsim=100

Columns: Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared = 0.7
TPR         1     1     1     1     1     1     1     1     1     1     1     1     1     1
FPR         0.08  0.08  0     0     0     0     0.01  0     0     0     0     0     0     0.07
RMSE(beta)  1     1.31  0.63  0.52  0.81  0.62  0.66  0.64  0.64  0.49  0.81  0.66  0.53  0.68
TrueModel   0.01  0     1     0.71  0.73  0.97  0.35  1     1     0.8   0.73  0.94  0.53  0.84
RMSFE       1     1.06  0.94  0.92  1.04  0.95  1     0.98  0.97  0.97  1.04  0.99  0.91  1.01

R squared = 0.5
TPR         1     1     0.99  1     0.97  1     1     1     1     1     0.97  1     1     0.99
FPR         0.08  0.08  0     0     0     0     0.01  0     0     0     0     0     0     0.31
RMSE(beta)  1     1.28  0.65  0.55  0.84  0.7   0.67  0.67  0.65  0.52  0.84  0.6   0.55  0.71
TrueModel   0.01  0     0.83  0.65  0.55  0.66  0.36  0.77  0.85  0.71  0.55  0.88  0.66  0.49
RMSFE       1     1.02  0.98  0.94  1.02  0.96  0.96  0.96  0.99  0.96  1.02  0.98  0.97  0.98

R squared = 0.3
TPR         0.99  0.99  0.9   0.92  0.88  0.91  0.92  0.9   0.82  0.83  0.88  0.82  0.83  0.89
FPR         0.08  0.08  0     0.01  0     0     0.01  0.01  0     0     0     0     0     0.68
RMSE(beta)  1     1.3   0.95  0.86  1     0.96  0.86  0.99  0.97  0.93  1     1     0.93  1.06
TrueModel   0     0     0.28  0.21  0.21  0.22  0.2   0.22  0.26  0.23  0.21  0.26  0.22  0.03
RMSFE       1     1.05  1.07  0.99  1.11  1.07  1     1.06  1.08  1.08  1.11  1.1   1.02  1.05


Simulation Experiments

DGP 1: uncorrelated covariates

[Figure: DGP=1, T=300, p=200. Boxplots of rmse, tpr, fpr, truth, and rmsfe by method (Lasso, Ad. Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, R-BL-t, R-BL-t-r, R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-CL-t, R-CL-t-r, R-CL) for R² = 0.7, 0.5, 0.3.]


Simulation Experiments

DGP 1: uncorrelated covariates

DGP=1, T=300, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 1 1 1 1 1 1 1 1 1 1 1 1
FPR:        0.07 0.07 0 0.01 0 0 0.01 0 0 0 0 0 0 0.03
RMSE(beta): 1 1.33 0.62 0.69 0.69 0.69 0.71 0.66 0.64 0.56 0.69 0.65 0.59 0.67
TrueModel:  0 0 1 0.26 0.94 0.88 0.35 1 1 0.59 0.94 1 0.48 0.9
RMFSE:      1 1.07 0.98 1.05 1.01 1 1.04 0.99 1 1.01 1.01 1 1.03 0.97

R squared=0.5
TPR:        1 1 1 1 1 1 1 1 1 1 1 1 1 1
FPR:        0.08 0.07 0 0 0 0 0.01 0 0 0 0 0 0 0.13
RMSE(beta): 1 1.37 0.68 0.65 0.6 0.81 0.77 0.71 0.6 0.57 0.6 0.62 0.59 0.62
TrueModel:  0 0 0.84 0.45 0.87 0.59 0.27 0.78 0.92 0.6 0.87 0.9 0.56 0.78
RMFSE:      1 1.01 0.98 0.97 1 1 1.03 0.97 0.94 1 1 0.94 1 0.99

R squared=0.3
TPR:        1 1 0.98 0.99 0.95 0.99 0.99 0.98 0.97 0.97 0.95 0.97 0.98 0.98
FPR:        0.07 0.07 0 0 0 0 0.01 0 0 0 0 0 0 0.49
RMSE(beta): 1 1.33 0.74 0.7 0.86 0.8 0.74 0.84 0.69 0.66 0.86 0.7 0.67 0.79
TrueModel:  0 0 0.56 0.42 0.44 0.43 0.33 0.38 0.7 0.6 0.44 0.66 0.53 0.29
RMFSE:      1 1 0.93 0.97 0.91 0.92 0.93 0.87 0.96 0.97 0.91 0.94 0.99 0.95
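The selection diagnostics reported in these tables can be computed from an estimated coefficient vector and the true one. A minimal Python sketch, assuming the standard definitions (TPR = share of true signals selected, FPR = share of noise variables selected, TrueModel = exact support recovery); the function name and example data are illustrative, not part of the original code:

```python
import numpy as np

def selection_metrics(beta_hat, beta_true):
    """TPR, FPR and exact-support recovery for one simulation draw."""
    sel = beta_hat != 0           # variables selected by the method
    truth = beta_true != 0        # true signal variables
    tpr = sel[truth].mean()       # share of true signals that were selected
    fpr = sel[~truth].mean()      # share of noise variables wrongly selected
    true_model = float(np.array_equal(sel, truth))  # 1 if supports coincide
    return tpr, fpr, true_model

# Toy example: 10 covariates, 3 true signals; one signal missed, one noise kept
beta_true = np.array([1.0, 0.5, -0.8] + [0.0] * 7)
beta_hat = np.array([0.9, 0.0, -0.7, 0.2] + [0.0] * 6)
tpr, fpr, tm = selection_metrics(beta_hat, beta_true)
```

Averaging these quantities over the nsim=100 replications gives the TPR, FPR and TrueModel entries shown in the tables.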


Simulation Experiments

DGP 1: uncorrelated covariates

[Figure: DGP=1, T=150, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 1: uncorrelated covariates

DGP=1, T=150, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 1 1 1 1 1 1 1 1 1 1 1 1
FPR:        0.09 0.09 0 0 0 0 0.01 0 0 0 0 0 0 0.17
RMSE(beta): 1 1.32 0.72 0.54 1 0.69 0.66 0.73 0.71 0.52 1 0.72 0.52 0.78
TrueModel:  0 0 0.94 0.65 0.44 0.86 0.33 0.87 0.93 0.69 0.44 0.9 0.59 0.72
RMFSE:      1 1.13 0.95 0.91 0.98 0.9 0.89 0.9 0.91 0.88 0.98 0.92 0.89 0.92

R squared=0.5
TPR:        1 1 0.96 0.98 0.95 0.98 0.99 0.98 0.95 0.96 0.95 0.96 0.97 0.94
FPR:        0.08 0.08 0 0.01 0 0 0.01 0.01 0 0 0 0 0 0.51
RMSE(beta): 1 1.3 0.91 0.76 1.07 0.89 0.75 0.94 0.92 0.74 1.07 0.9 0.7 1.07
TrueModel:  0 0 0.49 0.3 0.3 0.43 0.29 0.41 0.45 0.37 0.3 0.51 0.38 0.12
RMFSE:      1 1.03 1.07 1 1.02 1.03 0.98 1.03 0.98 0.97 1.02 1.03 0.94 1.04

R squared=0.3
TPR:        0.96 0.95 0.72 0.77 0.76 0.74 0.77 0.77 0.63 0.64 0.76 0.63 0.66 0.75
FPR:        0.07 0.07 0 0.01 0.01 0 0.01 0.01 0 0 0.01 0 0 0.46
RMSE(beta): 1 1.26 1.13 1.03 1.19 1.12 1.03 1.16 1.14 1.1 1.19 1.17 1.08 1.17
TrueModel:  0 0 0.03 0 0.01 0.05 0.02 0.01 0.01 0.01 0.01 0.01 0 0
RMFSE:      1 1.02 1.03 1.02 1.05 1.06 1.01 1.02 1.07 1.08 1.05 1.09 1.09 1.06


Simulation Experiments

DGP 43: Temporally uncorrelated-weakly collinear covariates

[Figure: DGP=43, T=200, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 43: Temporally uncorrelated-weakly collinear covariates

DGP=43, T=200, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 0.69 1 0.84 1 1 1 0.65 1 0.78 1 1 1
FPR:        0.03 0.03 0 0 0.06 0 0 0 0 0 0 0 0 0
RMSE(beta): 1 1.32 2.7 0.77 4.79 0.78 0.85 0.77 5.04 0.77 5.35 0.78 0.77 0.78
TrueModel:  0.07 0.05 0 0.99 0 1 0.8 1 0 1 0.19 1 1 1
RMFSE:      1 1.1 1.15 0.98 1.58 0.95 0.99 0.98 1.54 0.98 1.83 0.97 0.98 0.99

R squared=0.5
TPR:        1 1 0.53 1 0.75 0.83 0.83 1 0.64 1 0.73 0.84 0.86 1
FPR:        0.03 0.03 0 0 0.04 0 0 0 0 0 0 0 0 0
RMSE(beta): 1 1.4 2.73 0.78 3.77 1.35 1.46 0.76 3.61 0.74 3.75 1.26 1.34 0.74
TrueModel:  0.1 0.08 0 0.98 0.02 0.3 0.29 1 0.02 1 0.16 0.31 0.36 1
RMFSE:      1 0.92 1.2 0.95 1.29 0.98 0.95 0.96 1.27 0.96 1.39 0.99 0.96 0.96

R squared=0.3
TPR:        0.96 0.96 0.46 0.97 0.74 0.5 0.52 0.93 0.53 0.97 0.74 0.46 0.65 1
FPR:        0.03 0.03 0 0 0.01 0 0 0 0 0 0 0 0 0
RMSE(beta): 1 1.38 2.14 0.82 2.61 1.56 1.78 0.86 2.31 0.78 2.54 1.61 1.5 0.77
TrueModel:  0.05 0.05 0 0.83 0.09 0 0 0.71 0.01 0.83 0.14 0 0.22 1
RMFSE:      1 1 1.07 0.94 1.05 1.05 1.02 0.93 1.07 0.95 1.06 1.03 0.97 0.95
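In the RMSE(beta) and RMFSE rows the Lasso column is identically 1, which suggests that estimation and forecast errors are reported relative to the Lasso benchmark. A hedged sketch of that normalisation (the function and toy data are illustrative; the exact convention is assumed, not documented in the slides):

```python
import numpy as np

def relative_rmse(errors, benchmark="Lasso"):
    """RMSE of each method's errors, divided by the benchmark's RMSE."""
    rmse = {m: float(np.sqrt(np.mean(np.asarray(e) ** 2)))
            for m, e in errors.items()}
    return {m: v / rmse[benchmark] for m, v in rmse.items()}

# Toy example: two methods, two simulation draws of estimation error each
errors = {"Lasso": [1.0, -1.0], "S-R-BL": [0.5, -0.5]}
rel = relative_rmse(errors)   # the Lasso entry equals 1 by construction
```

Values below 1 in the tables then indicate a method that estimates (or forecasts) more accurately than the Lasso.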


Simulation Experiments

DGP 43: Temporally uncorrelated-weakly collinear covariates

[Figure: DGP=43, T=300, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 43: Temporally uncorrelated-weakly collinear covariates

DGP=43, T=300, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 0.97 1 1 1 1 1 0.67 1 0.75 1 1 1
FPR:        0.04 0.03 0 0 0.07 0 0 0 0 0 0.01 0 0 0
RMSE(beta): 1 1.37 1.15 0.9 3.72 0.78 0.95 0.76 6.07 0.78 6.63 0.76 0.76 0.76
TrueModel:  0.07 0.02 0.72 0.67 0 0.99 0.6 1 0 0.94 0.03 1 1 1
RMFSE:      1 1.02 1.03 0.99 1.2 0.98 1.01 1 1.36 0.99 1.52 1 1 1

R squared=0.5
TPR:        1 1 0.79 1 0.96 0.93 0.92 1 0.65 1 0.74 0.93 0.94 1
FPR:        0.04 0.03 0 0 0.11 0 0 0 0 0 0 0 0 0
RMSE(beta): 1 1.38 1.71 0.8 3.37 1.08 1.15 0.75 3.86 0.76 4.1 0.99 1.01 0.74
TrueModel:  0.08 0.05 0.05 0.92 0 0.57 0.49 1 0.02 0.97 0.12 0.65 0.7 1
RMFSE:      1 1.03 1.06 0.94 1.12 0.92 0.92 0.93 1.23 0.89 1.33 0.92 0.96 0.88

R squared=0.3
TPR:        0.98 0.98 0.47 0.98 0.86 0.66 0.67 0.97 0.64 0.99 0.72 0.63 0.73 1
FPR:        0.03 0.03 0 0 0.15 0 0 0 0 0 0 0 0 0
RMSE(beta): 1 1.37 2.07 0.83 3.36 1.54 1.73 0.87 2.87 0.8 2.99 1.54 1.55 0.79
TrueModel:  0.06 0.05 0 0.87 0 0 0 0.79 0.01 0.94 0.1 0 0.19 1
RMFSE:      1 0.98 1.03 0.96 1.17 0.97 0.94 0.97 1.08 0.96 1.17 0.97 0.95 0.95


Simulation Experiments

DGP 43: Temporally uncorrelated-weakly collinear covariates

[Figure: DGP=43, T=150, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 43: Temporally uncorrelated-weakly collinear covariates

DGP=43, T=150, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 0.58 1 0.77 0.99 0.98 1 0.63 1 0.77 0.98 0.98 1
FPR:        0.03 0.03 0 0 0.03 0 0 0 0 0 0 0 0 0
RMSE(beta): 1 1.33 4.05 0.74 4.71 0.81 0.86 0.74 4.46 0.74 4.73 0.84 0.85 0.76
TrueModel:  0.03 0.01 0 1 0.02 0.92 0.86 1 0 1 0.13 0.9 0.91 1
RMFSE:      1 1.17 1.61 1.02 1.72 1.04 1.05 1.02 1.54 1.02 1.82 1.06 1.05 1.04

R squared=0.5
TPR:        0.99 0.99 0.54 0.99 0.76 0.71 0.77 1 0.52 1 0.76 0.7 0.8 1
FPR:        0.04 0.03 0 0 0.02 0 0 0 0 0 0 0 0 0
RMSE(beta): 1 1.35 2.94 0.76 3.33 1.55 1.51 0.76 2.96 0.76 3.28 1.49 1.42 0.74
TrueModel:  0.05 0.01 0 0.97 0.1 0 0.12 0.98 0 0.98 0.15 0 0.15 1
RMFSE:      1 1.05 1.19 1.01 1.56 1.1 1.02 1.01 1.37 1.02 1.67 1.1 0.99 1

R squared=0.3
TPR:        0.91 0.9 0.43 0.93 0.76 0.44 0.46 0.92 0.44 0.85 0.76 0.43 0.56 1
FPR:        0.03 0.03 0 0 0 0 0 0 0 0 0 0 0 0.06
RMSE(beta): 1 1.36 2.02 0.86 2.3 1.5 1.68 0.89 2.01 0.94 2.3 1.72 1.5 1.13
TrueModel:  0.03 0.02 0 0.65 0.14 0 0 0.65 0 0.43 0.13 0.01 0.03 0.94
RMFSE:      1 1.03 1.02 1.05 1.31 1.06 1.08 1.04 1.09 1.02 1.33 1.03 1.08 1.01


Simulation Experiments

DGP 52: pseudo signal variables

[Figure: DGP=52, T=200, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 52: pseudo signal variables

DGP=52, T=200, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 0.54 0.99 0.76 0.9 0.92 1 0.41 0.98 0.42 0.64 0.83 1
FPR:        0.04 0.04 0 0.01 0.08 0 0 0.01 0.01 0 0.01 0 0.01 0.72
RMSE(beta): 1 1.33 3.21 0.88 4.46 1.45 1.23 1 4.74 0.89 4.86 2.89 1.8 3.41
TrueModel:  0.06 0.01 0.01 0.24 0 0.49 0.45 0 0.02 0.56 0 0.02 0.08 0
RMFSE:      1 1.01 1.26 0.9 1.43 1 1 0.94 1.3 0.96 1.4 1.29 1 0.99

R squared=0.5
TPR:        1 0.99 0.41 0.97 0.47 0.64 0.72 1 0.51 0.9 0.35 0.41 0.71 1
FPR:        0.04 0.04 0 0 0.05 0 0 0.01 0 0 0.01 0 0.01 0.82
RMSE(beta): 1 1.33 2.78 0.92 3.5 1.92 1.71 1.05 3.15 1.09 3.4 2.51 1.86 2.78
TrueModel:  0.03 0.01 0.01 0.56 0 0 0.03 0.01 0.01 0.43 0 0 0 0
RMFSE:      1 0.99 1.22 0.92 1.2 1.07 1.03 0.96 1.03 0.93 1.21 1.14 0.97 0.94

R squared=0.3
TPR:        0.96 0.95 0.46 0.85 0.32 0.45 0.46 0.95 0.55 0.68 0.32 0.28 0.53 1
FPR:        0.03 0.03 0 0 0.01 0 0 0.01 0 0 0.01 0 0.01 0.72
RMSE(beta): 1 1.35 2.06 1.05 2.43 1.62 1.81 0.96 2.18 1.37 2.43 2.35 1.7 3.81
TrueModel:  0.01 0 0 0.23 0 0 0 0.07 0 0.01 0 0 0 0
RMFSE:      1 1.02 1.1 0.99 1.11 1.07 1.03 0.91 1.04 0.96 1.11 1.15 1.05 1.13


Simulation Experiments

DGP 52: pseudo signal variables

[Figure: DGP=52, T=300, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 52: pseudo signal variables

DGP=52, T=300, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 0.87 1 1 0.98 0.99 1 0.41 1 0.44 0.76 0.94 0.99
FPR:        0.03 0.03 0.01 0.01 0.08 0 0 0.01 0.01 0.01 0.01 0 0.01 0.9
RMSE(beta): 1 1.32 2.45 0.93 3.54 0.97 0.9 0.93 5.88 0.9 5.94 3.13 1.38 4.61
TrueModel:  0.07 0.05 0.09 0.09 0 0.88 0.57 0 0 0.08 0 0.08 0.1 0
RMFSE:      1 0.98 1.1 0.98 1.28 0.92 0.93 0.96 1.45 0.98 1.48 1.09 0.94 1.07

R squared=0.5
TPR:        1 1 0.6 1 1 0.77 0.84 1 0.43 1 0.4 0.52 0.83 1
FPR:        0.03 0.03 0 0.01 0.14 0 0 0.01 0.01 0 0.01 0 0.01 0.88
RMSE(beta): 1 1.41 2.4 0.86 3.57 1.83 1.58 0.93 3.99 0.88 4.16 2.98 1.73 4
TrueModel:  0.06 0.03 0.01 0.19 0 0.14 0.14 0 0.06 0.34 0 0 0.01 0
RMFSE:      1 0.97 1.1 0.99 1.34 1.07 1.03 0.98 1.17 0.95 1.27 1.15 1.03 0.97

R squared=0.3
TPR:        0.97 0.97 0.41 0.96 0.6 0.54 0.52 0.97 0.55 0.92 0.29 0.29 0.63 1
FPR:        0.04 0.04 0 0 0.12 0 0 0.01 0 0 0.01 0 0.01 0.85
RMSE(beta): 1 1.29 2.11 0.93 3.08 1.61 1.88 1 2.5 0.94 2.72 2.25 1.64 3.57
TrueModel:  0.03 0.01 0 0.36 0 0 0 0.14 0.02 0.45 0 0 0.01 0
RMFSE:      1 0.93 1.06 0.97 1.1 1.02 1.07 0.96 1.05 0.97 1.08 1.13 1 1.01


Simulation Experiments

DGP 52: pseudo signal variables

[Figure: DGP=52, T=150, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 52: pseudo signal variables

DGP=52, T=150, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 0.4 0.98 0.35 0.77 0.87 1 0.51 0.84 0.33 0.51 0.8 0.98
FPR:        0.04 0.04 0.01 0 0.01 0 0 0.01 0 0 0.01 0 0.01 0.53
RMSE(beta): 1 1.3 3.8 0.93 4.14 1.98 1.45 1.36 3.88 1.55 4.14 3.17 1.76 4.23
TrueModel:  0.06 0.03 0 0.49 0 0.07 0.24 0 0 0.3 0 0.01 0.05 0
RMFSE:      1 1.09 1.49 0.92 1.57 1.07 0.99 0.96 1.36 1.06 1.58 1.45 1.09 1.32

R squared=0.5
TPR:        0.99 0.98 0.43 0.93 0.34 0.53 0.63 1 0.53 0.62 0.34 0.38 0.68 1
FPR:        0.04 0.04 0 0 0.01 0 0 0.01 0 0 0.01 0 0.01 0.56
RMSE(beta): 1 1.32 2.59 0.97 2.89 1.87 1.69 1.18 2.57 1.62 2.89 2.49 1.68 3.43
TrueModel:  0.01 0 0 0.36 0 0 0 0.01 0 0.05 0 0 0 0
RMFSE:      1 1 1.29 1.03 1.35 1.21 1.03 1.06 1.31 1.13 1.37 1.48 1.09 1.25

R squared=0.3
TPR:        0.9 0.88 0.49 0.83 0.33 0.43 0.41 0.93 0.37 0.37 0.33 0.31 0.54 1
FPR:        0.03 0.03 0 0 0.01 0 0 0.01 0 0 0.01 0 0.01 0.53
RMSE(beta): 1 1.35 1.98 1.07 2.16 1.53 1.69 1.13 1.88 1.57 2.16 2.17 1.6 4.63
TrueModel:  0.05 0.01 0 0.14 0 0 0 0.08 0 0 0 0 0 0
RMFSE:      1 1.07 1.06 0.99 1.09 1.05 1.04 1.01 1.07 1.07 1.09 1.23 1.04 1.23


Simulation Experiments

DGP 51: all variables collinear with the true signals

[Figure: DGP=51, T=200, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 51: all variables collinear with the true signals

DGP=51, T=200, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 0.69 1 0.8 0.99 1 1 0.69 1 0.74 0.81 0.97 0.89
FPR:        0.04 0.04 0 0 0.02 0 0 0 0 0 0 0 0 0.45
RMSE(beta): 1 1.37 3.25 0.72 4.47 1.26 0.84 1.28 4.49 0.69 4.67 2.83 1.01 2.92
TrueModel:  0.01 0 0 0.67 0.02 0.66 0.58 0.48 0.01 0.9 0.07 0.2 0.49 0.19
RMFSE:      1 1.06 1.21 1.08 1.62 1.12 1.06 1.12 1.77 1.05 1.78 1.15 1.07 1.18

R squared=0.5
TPR:        1 1 0.61 1 0.75 0.85 0.86 1 0.6 1 0.75 0.6 0.84 0.83
FPR:        0.04 0.04 0 0 0 0 0 0 0 0 0 0 0 0.45
RMSE(beta): 1 1.32 2.37 0.71 2.86 1.38 1.29 0.99 2.76 0.66 2.82 2.43 1.33 2.26
TrueModel:  0.07 0.03 0 0.74 0.12 0.3 0.29 0.56 0.01 0.95 0.16 0.01 0.2 0.07
RMFSE:      1 1.03 1.03 0.97 1.11 0.97 0.98 0.99 1.06 0.95 1.11 1 0.98 1.01

R squared=0.3
TPR:        0.97 0.96 0.53 0.96 0.73 0.59 0.63 0.96 0.53 0.93 0.73 0.43 0.72 0.93
FPR:        0.04 0.04 0 0 0 0 0 0 0 0 0 0 0 0.77
RMSE(beta): 1 1.35 1.96 0.77 2.08 1.57 1.55 1.11 2.02 0.8 2.08 2.08 1.3 2.48
TrueModel:  0.06 0.02 0 0.67 0.07 0 0 0.47 0 0.64 0.07 0 0.08 0.01
RMFSE:      1 1.01 0.99 0.91 0.98 0.98 0.96 0.91 1.02 0.89 0.98 1.02 0.97 0.99


Simulation Experiments

DGP 51: all variables collinear with the true signals

[Figure: DGP=51, T=300, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 51: all variables collinear with the true signals

DGP=51, T=300, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 0.93 1 0.99 1 1 1 0.81 1 0.78 0.97 1 1
FPR:        0.04 0.04 0 0.01 0.04 0 0.01 0.01 0.01 0 0 0 0 0.86
RMSE(beta): 1 1.37 1.71 0.85 4.08 1.11 0.95 1.22 3.51 0.75 5.45 1.89 0.79 2.35
TrueModel:  0.07 0.05 0.58 0.28 0 0.71 0.34 0.22 0.03 0.66 0.05 0.58 0.53 0.12
RMFSE:      1 0.99 1.04 0.93 1.2 0.97 0.96 0.97 1.23 0.93 1.43 0.98 0.93 0.98

R squared=0.5
TPR:        1 1 0.77 1 0.93 0.96 0.97 1 0.63 1 0.73 0.77 0.91 0.99
FPR:        0.04 0.04 0 0 0.06 0 0 0.01 0 0 0 0 0 0.88
RMSE(beta): 1 1.34 2.17 0.82 3.48 1.21 1.09 1.11 2.95 0.76 3.68 2.22 1.33 2.08
TrueModel:  0.06 0.02 0.03 0.45 0 0.52 0.37 0.36 0.02 0.72 0.07 0.1 0.24 0.08
RMFSE:      1 1.04 1.04 1.02 1.18 1.04 1.03 1.04 1.02 1.01 1.26 1.03 0.94 1.03

R squared=0.3
TPR:        0.99 0.98 0.59 0.97 0.78 0.72 0.72 0.99 0.52 0.98 0.74 0.57 0.75 1
FPR:        0.04 0.04 0 0 0.02 0 0 0.01 0 0 0 0 0 0.99
RMSE(beta): 1 1.33 1.91 0.83 2.5 1.53 1.55 1.05 2.19 0.77 2.41 2.16 1.45 1.78
TrueModel:  0.02 0 0 0.62 0.06 0 0 0.38 0.01 0.79 0.07 0 0.06 0
RMFSE:      1 1.05 1.1 1.11 1.23 1.13 1.09 1.1 1.16 1.09 1.23 1.08 1.05 1.09


Simulation Experiments

DGP 51: all variables collinear with the true signals

[Figure: DGP=51, T=150, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 51: all variables collinear with the true signals

DGP=51, T=150, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 0.66 1 0.75 0.97 0.99 1 0.66 1 0.74 0.76 0.93 0.83
FPR:        0.05 0.04 0 0 0 0 0 0 0 0 0 0 0 0.21
RMSE(beta): 1 1.33 3.27 0.69 3.89 1.28 0.84 1.22 3.78 0.65 3.89 2.76 1.15 2.73
TrueModel:  0.01 0.01 0.01 0.84 0.08 0.7 0.71 0.65 0 0.98 0.08 0.15 0.44 0.21
RMFSE:      1 1.04 1.32 0.98 1.77 1.03 0.95 1.05 1.69 0.97 1.77 1.23 1.02 1.17

R squared=0.5
TPR:        1 0.99 0.54 0.99 0.75 0.75 0.79 1 0.52 0.98 0.75 0.56 0.77 0.7
FPR:        0.04 0.04 0 0 0 0 0 0 0 0 0 0 0 0.15
RMSE(beta): 1 1.28 2.37 0.73 2.51 1.6 1.44 1.09 2.51 0.7 2.51 2.26 1.46 2.29
TrueModel:  0.06 0.05 0 0.74 0.14 0 0.08 0.63 0.01 0.9 0.14 0 0.08 0.08
RMFSE:      1 1.08 1.2 0.99 1.3 1.02 1.01 1.02 1.13 1 1.3 1.08 1.04 1.09

R squared=0.3
TPR:        0.92 0.93 0.46 0.92 0.77 0.54 0.55 0.86 0.47 0.8 0.77 0.36 0.61 0.87
FPR:        0.03 0.03 0 0 0 0 0 0 0 0 0 0 0 0.26
RMSE(beta): 1 1.31 1.78 0.84 1.84 1.56 1.52 1.29 1.79 0.98 1.84 1.92 1.34 3.29
TrueModel:  0.05 0 0 0.55 0.14 0 0 0.29 0 0.27 0.14 0 0.03 0.06
RMFSE:      1 1.03 1.11 0.98 1.06 1.02 1.05 0.99 1.06 0.99 1.06 1.09 1.04 1.17


Simulation Experiments

DGP 44: Strongly collinear noise variables due to a persistent unobserved common factor

[Figure: DGP=44, T=200, p=200. Panels of rmse, tpr, fpr, truth and rmsfe by method (Lasso, Ad. Lasso, and the S-R-BL, R-BL, S-R-CL and R-CL variants) for R2 = 0.7, 0.5 and 0.3.]


Simulation Experiments

DGP 44: Strongly collinear noise variables due to a persistent unobserved common factor

DGP=44, T=200, p=200, nsim=100
Columns (left to right): Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared=0.7
TPR:        1 1 0.82 0.99 0.88 0.97 0.98 1 0.65 0.99 0.74 0.67 0.98 0.69
FPR:        0.03 0.03 0 0 0.02 0 0.01 0.02 0 0 0 0 0 0
RMSE(beta): 1 1.43 3.2 0.86 3.58 1.67 1.12 2.05 4.47 0.76 4.69 4.39 0.8 4.41
TrueModel:  0.08 0.05 0.08 0.7 0.01 0.34 0.26 0.01 0 0.93 0.16 0.07 0.91 0.1
RMFSE:      1 0.99 1.18 0.92 1.21 0.93 0.88 0.93 1.46 0.96 1.86 1.42 0.94 1.47

R squared=0.5
TPR:        1 1 0.64 0.99 0.81 0.77 0.81 1 0.64 0.96 0.73 0.68 0.84 0.82
FPR:        0.03 0.03 0 0 0.02 0 0 0.02 0 0 0 0 0 0
RMSE(beta): 1 1.43 2.86 0.74 3.17 1.8 1.52 1.94 3.15 0.78 3.3 3.09 1.28 3.22
TrueModel:  0.15 0.12 0 0.9 0.08 0.03 0.16 0.01 0.02 0.9 0.16 0.14 0.35 0.44
RMFSE:      1 1.01 1.1 0.93 1.21 1.03 1.03 1.03 1.22 0.97 1.37 1.13 0.94 1.2

R squared=0.3
TPR:        0.96 0.95 0.6 0.98 0.72 0.49 0.48 0.98 0.66 0.95 0.72 0.49 0.59 0.99
FPR:        0.02 0.02 0 0 0 0 0 0.01 0 0 0 0 0 0
RMSE(beta): 1 1.47 2.27 0.76 2.4 1.51 1.84 1.34 2.36 0.82 2.39 1.88 1.56 1.09
TrueModel:  0.1 0.07 0 0.91 0.06 0 0 0.47 0 0.78 0.1 0.07 0.05 0.94
RMFSE:      1 1.03 1.24 0.94 1.22 1.01 1.04 0.98 1.18 0.99 1.21 1.1 1 1

Dendramis () Regularization-Estimation-Model Selection 63 / 78


Simulation Experiments

DGP 44: Strongly collinear noise variables due to a persistent unobserved common factor

[Figure: DGP=44, T=300, p=200 — rmse, tpr, fpr, truth and rmsfe panels for R squared = 0.7, 0.5, 0.3, comparing Lasso, Ad. Lasso and the (S-)R-BL/CL variants.]

Dendramis () Regularization-Estimation-Model Selection 64 / 78


Simulation Experiments

DGP 44: Strongly collinear noise variables due to a persistent unobserved common factor

DGP=44, T=300, p=200, nsim=100

Columns: Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared = 0.7
TPR         1 1 0.98 1 0.98 1 1 1 0.67 1 0.74 0.64 1 0.64
FPR         0.02 0.02 0 0 0.04 0 0 0.02 0 0 0.01 0 0 0
RMSE(beta)  1 1.37 1.98 0.82 3.04 1.47 1.02 2.3 6.12 0.73 6.38 6.04 0.73 6.11
TrueModel   0.16 0.13 0.43 0.74 0 0.52 0.41 0.02 0 0.94 0.15 0.03 1 0.01
RMSFE       1 0.97 1.08 0.94 1.1 1.01 0.98 1.1 1.62 0.93 1.7 1.4 0.97 1.42

R squared = 0.5
TPR         1 1 0.84 0.99 0.95 0.9 0.91 1 0.63 0.99 0.73 0.74 0.87 0.79
FPR         0.02 0.02 0 0 0.05 0 0 0.02 0 0 0.01 0 0 0
RMSE(beta)  1 1.45 2 0.85 2.97 1.67 1.4 2.16 3.99 0.8 4.17 4.06 1.35 4.1
TrueModel   0.07 0.03 0.24 0.79 0.01 0.24 0.2 0.02 0 0.91 0.09 0.21 0.44 0.36
RMSFE       1 1.02 1.08 0.99 1.24 1.04 1.04 1.02 1.24 0.97 1.34 1.24 0.98 1.29

R squared = 0.3
TPR         0.99 0.99 0.57 0.97 0.83 0.6 0.61 1 0.63 0.94 0.72 0.6 0.7 1
FPR         0.02 0.02 0 0 0.03 0 0 0.01 0 0 0.03 0 0 0
RMSE(beta)  1 1.48 2.34 0.82 2.93 1.56 1.79 1.54 2.66 0.85 2.76 1.72 1.49 0.98
TrueModel   0.07 0.06 0.01 0.84 0.02 0 0 0.34 0 0.79 0.08 0.05 0.13 0.99
RMSFE       1 1.02 1.09 1.01 1.13 0.97 0.98 1.01 1.1 0.99 1.19 1 0.96 1.01

Dendramis () Regularization-Estimation-Model Selection 65 / 78


Simulation Experiments

DGP 44: Strongly collinear noise variables due to a persistent unobserved common factor

[Figure: DGP=44, T=150, p=200 — rmse, tpr, fpr, truth and rmsfe panels for R squared = 0.7, 0.5, 0.3, comparing Lasso, Ad. Lasso and the (S-)R-BL/CL variants.]

Dendramis () Regularization-Estimation-Model Selection 66 / 78


Simulation Experiments

DGP 44: Strongly collinear noise variables due to a persistent unobserved common factor

DGP=44, T=150, p=200, nsim=100

Columns: Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared = 0.7
TPR         1 1 0.7 1 0.77 0.94 0.95 1 0.66 1 0.74 0.68 0.97 0.73
FPR         0.03 0.03 0 0 0 0 0 0.01 0 0 0 0 0 0
RMSE(beta)  1 1.45 3.81 0.68 4.11 1.84 1.22 2.09 4.18 0.68 4.39 4.15 0.88 4.17
TrueModel   0.09 0.07 0.01 0.94 0.05 0.21 0.16 0 0.02 1 0.12 0.06 0.84 0.16
RMSFE       1 1.01 1.47 0.97 1.95 1.06 1.03 1.13 1.73 0.99 2.06 1.82 1.01 1.91

R squared = 0.5
TPR         0.99 0.99 0.64 1 0.77 0.66 0.71 1 0.66 0.97 0.77 0.66 0.78 0.84
FPR         0.02 0.03 0 0 0 0 0 0.01 0 0 0.02 0 0 0.01
RMSE(beta)  1 1.45 2.69 0.72 2.92 1.95 1.67 1.86 2.8 0.79 2.94 2.83 1.34 2.82
TrueModel   0.07 0.06 0.01 0.98 0.13 0 0.05 0.02 0.01 0.88 0.17 0.06 0.22 0.47
RMSFE       1 1 1.11 1.02 1.34 1.11 1 1.05 1.13 0.99 1.34 1.13 1.02 1.16

R squared = 0.3
TPR         0.92 0.92 0.58 0.96 0.73 0.42 0.42 0.98 0.64 0.86 0.74 0.53 0.56 0.98
FPR         0.03 0.03 0 0 0 0 0 0.01 0 0 0.05 0 0 0.01
RMSE(beta)  1 1.53 1.88 0.77 1.99 1.36 1.58 1.17 1.91 0.83 1.97 2.03 1.28 1.27
TrueModel   0.02 0.01 0 0.8 0.12 0 0 0.52 0 0.49 0.13 0.12 0.06 0.93
RMSFE       1 1.05 1.08 0.99 1.13 1.01 1.04 0.95 1.08 0.99 1.12 1.13 1.06 1.01

Dendramis () Regularization-Estimation-Model Selection 67 / 78


Simulation Experiments

DGP 45: Temporally correlated and weakly collinear covariates

[Figure: DGP=45, T=200, p=200 — rmse, tpr, fpr, truth and rmsfe panels for R squared = 0.7, 0.5, 0.3, comparing Lasso, Ad. Lasso and the (S-)R-BL/CL variants.]

Dendramis () Regularization-Estimation-Model Selection 68 / 78


Simulation Experiments

DGP 45: Temporally correlated and weakly collinear covariates

DGP=45, T=200, p=200, nsim=100

Columns: Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared = 0.7
TPR         1 1 0.61 1 0.77 1 1 1 0.66 1 0.78 1 1 1
FPR         0.04 0.04 0 0 0.02 0 0 0 0 0 0 0 0 0
MSE(beta)   1 1.41 3.77 0.74 5.23 0.76 0.81 0.75 5.13 0.74 5.41 0.78 0.76 0.75
TrueModel   0.02 0.02 0 0.99 0.06 0.98 0.83 0.98 0 1 0.19 0.98 0.99 1
RMSFE       1 1 1.18 0.91 1.37 0.91 0.96 0.92 1.35 0.92 1.62 0.91 0.92 0.91

R squared = 0.5
TPR         1 1 0.52 1 0.71 0.8 0.8 1 0.62 1 0.71 0.8 0.83 1
FPR         0.04 0.03 0 0 0.01 0 0 0 0 0 0 0 0 0
MSE(beta)   1 1.29 2.77 0.75 3.37 1.31 1.45 0.73 3.21 0.73 3.37 1.27 1.33 0.74
TrueModel   0.09 0.06 0 1 0.06 0.22 0.22 0.99 0 1 0.1 0.24 0.3 1
RMSFE       1 1 1.1 0.95 1.46 0.96 1.02 0.94 1.17 0.95 1.45 0.96 0.98 0.93

R squared = 0.3
TPR         0.96 0.95 0.49 0.96 0.75 0.49 0.51 0.95 0.52 0.96 0.75 0.47 0.66 1
FPR         0.03 0.03 0 0 0 0 0 0 0 0 0 0 0 0.01
MSE(beta)   1 1.34 2.35 0.82 2.53 1.59 1.84 0.84 2.34 0.78 2.53 1.81 1.49 1.04
TrueModel   0.03 0.01 0 0.79 0.15 0 0 0.78 0.01 0.79 0.15 0.03 0.16 0.99
RMSFE       1 1.05 1.06 1 1.28 1.05 0.98 1.02 1.04 1.02 1.28 1.08 1.01 1.03

Dendramis () Regularization-Estimation-Model Selection 69 / 78


Simulation Experiments

DGP 45: Temporally correlated and weakly collinear covariates

[Figure: DGP=45, T=300, p=200 — rmse, tpr, fpr, truth and rmsfe panels for R squared = 0.7, 0.5, 0.3, comparing Lasso, Ad. Lasso and the (S-)R-BL/CL variants.]

Dendramis () Regularization-Estimation-Model Selection 70 / 78


Simulation Experiments

DGP 45: Temporally correlated and weakly collinear covariates

DGP=45, T=300, p=200, nsim=100

Columns: Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared = 0.7
TPR         1 1 0.97 1 0.98 1 1 1 0.67 1 0.75 1 1 1
FPR         0.03 0.03 0 0 0.07 0 0 0 0 0 0.01 0 0 0
MSE(beta)   1 1.33 1.41 0.84 4.36 0.72 0.83 0.72 6.42 0.76 6.73 0.72 0.72 0.72
TrueModel   0.09 0.05 0.56 0.69 0 1 0.74 1 0.01 0.88 0.06 1 1 1
RMSFE       1 0.98 0.94 0.98 1.25 0.99 0.96 0.99 1.47 0.99 1.65 0.99 0.98 0.99

R squared = 0.5
TPR         1 1 0.72 1 0.85 0.94 0.94 1 0.64 1 0.74 0.95 0.93 1
FPR         0.04 0.04 0 0 0.07 0 0 0 0 0 0 0 0 0
MSE(beta)   1 1.42 2.11 0.8 3.85 1.02 1.08 0.73 4.06 0.74 4.27 0.9 1.06 0.72
TrueModel   0.06 0.07 0 0.87 0 0.64 0.55 1 0 0.97 0.1 0.76 0.67 1
RMSFE       1 0.99 1.03 1.03 1.21 1.04 1.04 1.03 1.27 1.03 1.45 1.04 1.03 1.03

R squared = 0.3
TPR         0.99 0.98 0.5 0.99 0.75 0.64 0.65 0.98 0.62 1 0.71 0.64 0.76 1
FPR         0.03 0.03 0 0 0.04 0 0 0 0 0 0 0 0 0
MSE(beta)   1 1.38 2.25 0.81 3.04 1.56 1.76 0.79 2.8 0.79 2.95 1.51 1.43 0.75
TrueModel   0.05 0.02 0 0.88 0.06 0 0 0.88 0.01 0.95 0.12 0 0.23 1
RMSFE       1 1.02 0.93 0.98 1.18 0.99 1 0.97 1.09 0.95 1.11 1 0.96 0.95

Dendramis () Regularization-Estimation-Model Selection 71 / 78


Simulation Experiments

DGP 45: Temporally correlated and weakly collinear covariates

[Figure: DGP=45, T=150, p=200 — rmse, tpr, fpr, truth and rmsfe panels for R squared = 0.7, 0.5, 0.3, comparing Lasso, Ad. Lasso and the (S-)R-BL/CL variants.]

Dendramis () Regularization-Estimation-Model Selection 72 / 78


Simulation Experiments

DGP 45: Temporally correlated and weakly collinear covariates

DGP=45, T=150, p=200, nsim=100

Columns: Lasso, A-Lasso, S-R-BL-t, S-R-BL-t-r, S-R-BL, S-R-CL-t, S-R-CL-t-r, S-R-CL, R-BL-t, R-BL-t-r, R-BL, R-CL-t, R-CL-t-r, R-CL

R squared = 0.7
TPR         1 1 0.56 1 0.78 0.97 0.97 1 0.63 1 0.77 0.93 0.97 1
FPR         0.04 0.04 0 0 0.01 0 0 0 0 0 0 0 0 0.01
MSE(beta)   1 1.35 4.42 0.75 4.85 0.94 0.93 0.81 4.55 0.75 4.89 1.19 0.94 1.08
TrueModel   0.02 0.01 0 1 0.13 0.8 0.79 0.95 0 1 0.2 0.74 0.85 0.98
RMSFE       1 1.01 1.69 1 2.06 1.04 1.02 1.05 1.73 1 2.27 1.05 1.07 1.08

R squared = 0.5
TPR         0.99 0.99 0.51 0.99 0.75 0.66 0.7 0.99 0.54 0.99 0.75 0.65 0.77 1
FPR         0.04 0.04 0 0 0 0 0 0 0 0 0 0 0 0.03
MSE(beta)   1 1.32 2.82 0.81 3.12 1.6 1.71 0.87 2.84 0.79 3.12 1.6 1.46 0.93
TrueModel   0.06 0.06 0 0.91 0.16 0 0.02 0.9 0 0.94 0.16 0.01 0.17 0.95
RMSFE       1 0.95 1.16 0.99 1.29 1.04 1.09 0.98 1.16 0.98 1.29 1.02 1 0.97

R squared = 0.3
TPR         0.9 0.89 0.41 0.94 0.74 0.44 0.47 0.85 0.44 0.83 0.74 0.46 0.61 1
FPR         0.03 0.03 0 0 0 0 0 0 0 0 0 0 0 0.02
MSE(beta)   1 1.28 1.91 0.84 2.16 1.5 1.64 0.97 1.94 0.93 2.16 1.98 1.34 1.79
TrueModel   0.07 0.02 0 0.69 0.14 0 0 0.49 0 0.38 0.14 0.05 0.12 0.97
RMSFE       1 1.02 1.11 0.96 1.23 0.97 0.93 0.99 1.12 0.99 1.23 1.06 0.95 1

Dendramis () Regularization-Estimation-Model Selection 73 / 78


Simulation Experiments

• Comments on the simulation results

- In the vanilla case of uncorrelated covariates, the methods that we propose overall outperform the Lasso/Adaptive Lasso. The larger T and R2 are, the better the performance.

- Overall, the (S-)R-BL-t-r seems to be the most reliable method for estimation and model selection. The larger T and R2 are, the better the performance.
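For concreteness, the selection metrics reported in the tables (TPR, FPR, RMSE(beta) and the TrueModel frequency) can be computed per replication as in the following sketch. The function name and the toy vectors are ours, purely for illustration; note also that in the tables RMSE(beta) and RMSFE are reported relative to the Lasso/benchmark, which this sketch omits.

```python
import numpy as np

def selection_metrics(beta_true, beta_hat):
    """Per-replication variable-selection metrics: TPR, FPR, RMSE(beta)
    and whether the exact true model was recovered ("TrueModel")."""
    true_set = beta_true != 0
    sel_set = beta_hat != 0
    tpr = (sel_set & true_set).sum() / max(true_set.sum(), 1)
    fpr = (sel_set & ~true_set).sum() / max((~true_set).sum(), 1)
    rmse = float(np.sqrt(np.mean((beta_hat - beta_true) ** 2)))
    exact = bool(np.array_equal(sel_set, true_set))
    return tpr, fpr, rmse, exact

# Toy example: 3 true signals among p = 6; one miss and one false positive.
beta_true = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
beta_hat = np.array([0.9, 1.1, 0.0, 0.2, 0.0, 0.0])
tpr, fpr, rmse, exact = selection_metrics(beta_true, beta_hat)
# tpr = 2/3, fpr = 1/3, exact = False
```

Averaging these quantities over the nsim replications gives the entries reported in the tables.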

Dendramis () Regularization-Estimation-Model Selection 74 / 78


Empirical Exercise - (preliminary)

• Out-of-sample forecasting of the Nondurable goods Price Index and the Household operation Price Index
- a rolling estimation window of 120 observations
- quarterly data from Stock and Watson (2012), a total of 108 series
- one-step-ahead rolling forecasts
- data from 08-Jan-1961 until 02-Jan-2009; forecasts start at 08-Jan-1991
- the tuning parameter is estimated using leave-one-out cross-validation

• Methods for comparison
- Lasso/Adaptive Lasso
- we compare results with the 3-step procedure, as it is more likely to work satisfactorily on less sparse datasets such as the one examined here
- AR(p), with p selected using BIC (max p = 4), in a rolling sense
- factor-augmented AR(p), with p selected using BIC and the number of factors using the Bai and Ng (2002) information criterion (max 8 factors), in a rolling sense
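The AR(p) benchmark with rolling BIC selection can be sketched as follows. This is an illustrative implementation under assumed details (plain OLS estimation of the AR(p), hypothetical function names, numpy only), not the exact code behind the exercise.

```python
import numpy as np

def ar_bic_forecast(y, pmax=4):
    """Fit AR(p) by OLS for p = 1..pmax, pick p by BIC,
    and return the one-step-ahead forecast."""
    T = len(y)
    best_bic, best_coef, best_p = np.inf, None, None
    for p in range(1, pmax + 1):
        # Regressors: intercept plus lags 1..p of y.
        X = np.column_stack(
            [np.ones(T - p)] + [y[p - j - 1:T - j - 1] for j in range(p)])
        yy = y[p:]
        coef, *_ = np.linalg.lstsq(X, yy, rcond=None)
        sigma2 = np.mean((yy - X @ coef) ** 2)
        bic = len(yy) * np.log(sigma2) + (p + 1) * np.log(len(yy))
        if bic < best_bic:
            best_bic, best_coef, best_p = bic, coef, p
    x_next = np.concatenate(([1.0], y[-1:-best_p - 1:-1]))  # most recent lags
    return x_next @ best_coef

def rolling_forecasts(y, window=120, pmax=4):
    """One-step-ahead rolling forecasts, re-selecting p in every window."""
    return np.array([ar_bic_forecast(y[t - window:t], pmax)
                     for t in range(window, len(y))])
```

The factor-augmented variant would simply append estimated factors to the regressor matrix X, with the number of factors re-selected in each window.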

Dendramis () Regularization-Estimation-Model Selection 75 / 78


Empirical Exercise - (preliminary)

The percentage of times that each regressor is included in the forecasting equation over the forecast evaluation interval

[Figure: frequency of regressors over the forecasting interval for the Nondurable goods Price Index (top panel) and the Household operation Price Index (bottom panel), comparing Lasso, Ad. Lasso, S-R-BL-t-r, S-R-CL-t-r, R-BL-t-r and R-CL-t-r across the 108 candidate regressors.]

- With Lasso/Adaptive Lasso, larger forecasting models are chosen
- The methods select different sets of important covariates

Dendramis () Regularization-Estimation-Model Selection 76 / 78


Empirical Exercise - (preliminary)

Forecasting exercise: from 08-Jan-1961 until 02-Jan-2009; rolling window (120 obs); forecasts start at 08-Jan-1991

RMSFE ratio relative to AR(p):

Method        Nondurable goods Price Index   Household operation Price Index
Lasso         0.92                           1.06
A-Lasso       0.97                           1.16
ar(p)-factor  0.99                           1.0
ar(p)         1                              1
S-R-BL-t-r    0.92                           0.97
S-R-CL-t-r    0.91                           1.02
R-BL-t-r      0.96                           0.96
R-CL-t-r      0.92                           0.89

- There are forecasting gains from using the proposed methods
- Nondurable goods Price Index: Lasso is almost equivalent to the proposed methods
- Household operation Price Index: the proposed methods perform better
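The RMSFE ratios above divide each method's root mean squared forecast error by that of the AR(p) benchmark, so values below one indicate a forecasting gain; a minimal sketch (the function name is ours):

```python
import numpy as np

def rmsfe_ratio(model_errors, benchmark_errors):
    """Root mean squared forecast error of a model divided by that of
    the benchmark; a ratio below 1 means the model forecasts better."""
    m = np.sqrt(np.mean(np.square(model_errors)))
    b = np.sqrt(np.mean(np.square(benchmark_errors)))
    return m / b

# e.g. a method whose forecast errors are half the benchmark's:
ratio = rmsfe_ratio([1.0, -1.0, 1.0], [2.0, -2.0, 2.0])
# ratio = 0.5
```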

Dendramis () Regularization-Estimation-Model Selection 77 / 78


Conclusion

- We have considered the problem of model specification and estimation in large-dimensional settings
- Our aim was to provide an alternative to penalized regression methods and greedy methods, one that can be related to classical statistical analysis
- A fundamental step for this was the sparse covariance matrix assumption between the dependent variable and the full set of possible covariates
- Our theoretical, simulation and empirical results provide evidence that the methods we propose offer gains over the methods currently applied in the literature

Dendramis () Regularization-Estimation-Model Selection 78 / 78