
2014-11-08 Investigating Raw vs ICA vs CSP for MVAR GPDC

Suspected a priori that this should be a very useful feature, having found exceptionally high separation on the iso-plots. Expected ICA to outperform CSP, and both to outperform raw; in fact CSP outperformed ICA, though only very narrowly!
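For orientation, a rough sketch of what the three preprocessing variants mean, using scikit-learn's FastICA and MNE's CSP purely as stand-ins for whatever the actual pipeline does before the MVAR GPDC features are computed (the function name, component count, and reshaping are assumptions, not the real code):

```python
import numpy as np
from sklearn.decomposition import FastICA
from mne.decoding import CSP

def spatially_filter(segments, labels, method="csp", n_components=8):
    """Return spatially filtered time series for one subject.

    segments: (n_segments, n_channels, n_samples) array of EEG windows.
    labels:   0/1 per segment (interictal/preictal), only needed for CSP.
    """
    if method == "csp":
        # CSP is supervised: filters maximise variance difference between classes.
        csp = CSP(n_components=n_components, transform_into="csp_space")
        return csp.fit_transform(segments, labels)
    elif method == "ica":
        # ICA is unsupervised: unmix channels into independent sources.
        n_seg, n_ch, n_samp = segments.shape
        flat = segments.transpose(0, 2, 1).reshape(-1, n_ch)  # samples x channels
        sources = FastICA(n_components=n_components, random_state=0).fit_transform(flat)
        return sources.reshape(n_seg, n_samp, n_components).transpose(0, 2, 1)
    # "raw": no spatial filtering at all
    return segments
```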

The per-subject columns below are the expected (cross-validated) AUC values for each feature set; Public AUROC is the leaderboard score.

| Feature set | Public AUROC | Dog_1 | Dog_2 | Dog_3 | Dog_4 | Dog_5 | Patient_1 | Patient_2 | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mvar_raw | 0.68743 | 0.450020833333 | 0.777798564477 | 0.769036265432 | 0.753514200268 | 0.881382632633 | 0.814868731309 | 0.360648148148 | 0.769611833552 |
| mvar_ica | 0.74684 | 0.476815972222 | 0.953311776156 | 0.809512731481 | 0.779338422456 | 0.929014014014 | 0.869317201518 | 0.425092592593 | 0.830230271653 |
| mvar_csp | 0.74837 | 0.426253472222 | 0.919725826918 | 0.825064236111 | 0.776440593067 | 0.930520520521 | 0.853518907563 | 0.432222222222 | 0.819721454993 |
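For reference, a minimal sketch of one way such per-subject expected values could be produced with within-subject cross-validation, plus a pooled overall score; `load_features` and the SVM classifier are placeholders for whatever the real pipeline uses:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

SUBJECTS = ["Dog_1", "Dog_2", "Dog_3", "Dog_4", "Dog_5", "Patient_1", "Patient_2"]

def expected_aucs(feature_prefix):
    """Per-subject cross-validated AUC plus a pooled 'overall' AUC."""
    per_subject, all_true, all_scores = {}, [], []
    for subject in SUBJECTS:
        X, y = load_features(feature_prefix, subject)  # hypothetical loader
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        clf = SVC(probability=True)
        # out-of-fold predicted probabilities for the positive (preictal) class
        scores = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
        per_subject[subject] = roc_auc_score(y, scores)
        all_true.append(y)
        all_scores.append(scores)
    overall = roc_auc_score(np.concatenate(all_true), np.concatenate(all_scores))
    return per_subject, overall
```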

probablygoodplusraw

Thought that combining the features used in our previous best classifier with those used in our current best submission would probably work well. The following features were used (a sketch of how they might be combined into one design matrix follows the list):

"FEATURES": ["cln,csp,dwn_feat_lmom-3_",
        "cln,ica,dwn_feat_xcorr-ypeak_",
        "cln,csp,dwn_feat_pib_ratioBB_",
        "cln,ica,dwn_feat_mvar-GPDC_",
        "cln,ica,dwn_feat_PSDlogfcorrcoef_",
        "cln,ica,dwn_feat_pwling1_",
        "raw_feat_corrcoef_",
        "raw_feat_cov_",
        "raw_feat_pib_",
        "raw_feat_var_",
        "raw_feat_xcorr_"],

And the predicted performance was:

| Subject | Predicted AUC |
| --- | --- |
| Dog_1 | 0.53 |
| Dog_2 | 0.95 |
| Dog_3 | 0.82 |
| Dog_4 | 0.77 |
| Dog_5 | 0.94 |
| Patient_1 | 0.77 |
| Patient_2 | 0.45 |
| Overall | 0.83 |

Then, submitted and got 0.76012.

Only slightly worse than the current best, so the score probably relies on the features from the current best submission, and these raw features aren't adding anything useful.

Feature selection

Using a simple variance threshold and then also filtering by F-scores. With both applied, the predicted AUC was 0.86, but I had been fiddling with the cross-validation code, so that won't map directly onto other results. Full AUC results can be found here.
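A minimal sketch of the two-stage selection, assuming scikit-learn; the threshold value, `k`, and the downstream scaler/SVM are arbitrary assumptions, and `X`, `y` are whichever feature matrix and labels are in use:

```python
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

clf = Pipeline([
    ("variance", VarianceThreshold(threshold=1e-5)),  # drop near-constant features
    ("fscore", SelectKBest(f_classif, k=500)),        # keep the k best by ANOVA F-score
    ("scale", StandardScaler()),
    ("svm", SVC(probability=True)),
])

predicted_auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
```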

Submitted and got 0.77171, moving up the leaderboard 5 places.

Now running with just the variance threshold to see what its contribution is on its own.

Also got a predicted AUC of 0.86. Submitted and got 0.77171, exactly the same, which doesn't make a lot of sense. Going to check that the run actually took out the F-score selector.
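One way to sanity-check this is to count how many features survive each stage: if the F-score stage ends up keeping essentially everything that passes the variance threshold, the two runs could genuinely produce identical predictions. A sketch, reusing the same assumed `X`, `y`, threshold, and `k` as above:

```python
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif

X_var = VarianceThreshold(threshold=1e-5).fit_transform(X)       # variance filter only
X_f = SelectKBest(f_classif, k=500).fit_transform(X_var, y)      # then F-score filter

print("total features:           ", X.shape[1])
print("after variance threshold: ", X_var.shape[1])
print("after F-score selection:  ", X_f.shape[1])
```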

Will run the F-score selection on its own tomorrow.
