Apply scaling before first-level GLM, to be able to compare results at group level #9
According to Eklund et al. (Front. Neuroinform., 2014), the permutation tests via RandomiseGroupLevel use the same functions as the first-level analysis. Are the subject-level spatial maps that are input to this function also scaled? I am asking because I performed group-level analysis (along with first-level analysis) in AFNI and want to evaluate significance through permutation tests with BROCCOLI. However, the t-statistics at each voxel from the group analysis evaluated by BROCCOLI and AFNI are different, and I am trying to trace where this difference arose. It's not clear whether I have incorrectly input data into BROCCOLI or whether there are divergences in group-analysis methodology between AFNI and BROCCOLI. Thank you. |
There is no scaling.
Did you use 3dttest or 3dMEMA in AFNI?
--
Anders Eklund, PhD
|
I used 3dMVM with a univariate response model of the form ~ a0 + a1*age_i + a2*sex_i + a3*Z_i, where age_i, sex_i, and Z_i are subject-specific predictors. I'm testing the null hypothesis a3 = 0 over a sample of, say, n = 500. The contrast file looks like:
NumRegressors 4
NumContrasts 1
0.0 0.0 0.0 1.0
The head of the design matrix file looks like:
NumRegressors 4
NumSubjects 500
1 25 0 4.52
1 28 1 7.22
1 21 1 2.94
1 32 0 1.42
In both files, the columns are tab-delimited. The input spatial maps are a NIfTI file consisting of 500 spatial maps, one per subject, in the same order as the rows of the design matrix. There are big differences in the t-statistic maps generated by AFNI's 3dMVM and BROCCOLI. I verified AFNI's output by performing the same regression at some random voxels in R and obtained identical results. Is there anything specific to be aware of that may explain the different results with BROCCOLI? Thank you. |
I don't think you can compare BROCCOLI to 3dMVM, as that function does more
advanced modelling. You need to compare to 3dttest++, which does simple OLS.
|
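For reference, the "simple OLS" t-statistic that 3dttest++ (and, per the reply above, BROCCOLI's group-level GLM) computes at each voxel can be sketched with numpy alone. This is only an illustration: the function name `ols_t` is my own, the design matrix mirrors the four-regressor example in the comment, and the data and coefficients are simulated, not from the thread.

```python
import numpy as np

def ols_t(X, y, c):
    """OLS t-statistic for the null hypothesis c'beta = 0 at one voxel."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                   # least-squares estimate
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)           # unbiased residual variance
    return (c @ beta) / np.sqrt(sigma2 * c @ XtX_inv @ c)

# Design matrix laid out like the example: intercept, age, sex, Z.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n),
                     rng.integers(20, 35, n).astype(float),   # age
                     rng.integers(0, 2, n).astype(float),     # sex
                     rng.normal(5.0, 2.0, n)])                # Z
y = X @ np.array([1.0, 0.02, 0.1, 0.3]) + rng.normal(size=n)  # simulated voxel
t = ols_t(X, y, np.array([0.0, 0.0, 0.0, 1.0]))               # tests a3 = 0
print("t for a3 = 0:", t)
```

Applied to the same spatial maps and the same design matrix, any correct OLS implementation must reproduce this number up to rounding error, which is why identical inputs are the thing to verify first.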
3dMVM certainly has advanced modeling, but I'm restricting it to a simple univariate linear regression model and a simple hypothesis that is equivalent to 3dttest++. I confirmed this by re-running my analysis with 3dttest++; its results were identical to those from 3dMVM. To give an idea of the analysis pipeline I'm using, I generated random data for 10 subjects to compare the t-statistics computed by BROCCOLI and 3dttest++. The t-statistics were again different with the random data, and I've attached this in compare_afni_broccoli.tar.gz
For 3dttest++, the input files are
For BROCCOLI, they generate
You can reproduce all of the input files by running the R file and then the bash file sequentially, with everything in the same folder. Do you think that the way I'm creating the group-level volumes file (volumes.nii) using the 3dTcat function, as given in compare_afni_broccoli.sh, is leading to differences in how BROCCOLI is evaluating the t-statistics? EDIT: In the R file, I had set the working directory to "~/scratch60/random_data", which would need to be adjusted for each user. The R file also requires the "oro.nifti" package to be installed for reading and writing NIfTI files. |
Please pardon my comment if it is off, but I am under the impression that permutation testing should be performed on non-smoothed data (to be comparable to any GLM performed on smoothed fMRI data), and that even so, the two methods are expected to produce different values which come closer to agreement only for the most significant voxels, i.e. the overlap between the two results after correction for multiple comparisons, if you like. Perhaps this could account for part of the discrepancy? |
@Metasoliton, what you say about permutation testing in this context makes complete sense to me. However, the t-statistics observed at individual voxels are not related to clusters, and it's not clear to me why there would be a discrepancy in the t-statistic at a given voxel, whichever method computed it. I am unaware that either 3dttest++ or BROCCOLI applies spatial smoothing to t-statistics after computing them. |
@cmehta126, just to clarify what I meant:
- the smooth (for GLM) vs. non-smooth (for permutation) point referred to preprocessing procedures. That is, you might have to use different versions of the data as input to 3dttest++ (smoothed data) vs. BROCCOLI (unsmoothed data) to get comparable t-maps.
- assuming that is done, the "clusters" point was meant to suggest comparing the values only in the voxels that turned out to be significant via both methods. That is, you can expect equivalent results in the voxels where a "true effect" was identified by both methods, but not in voxels that happened to be more sensitive to outliers, noise, etc. (and hence were false positives or false negatives in the GLM).
I hope it makes more sense now : ) |
The t-values (generated from the unpermuted data) from BROCCOLI should be very similar to the t-values from 3dttest++, for smoothed and unsmoothed data, unless I'm missing something. The only difference I can think of right now is how covariates are handled.
|
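A minimal numpy sketch of the covariate-handling point above. If one tool demeans a covariate and the other does not (3dttest++ centers covariates by default, as far as I know), the covariate's own t-statistic is mathematically unchanged, but any test involving the intercept changes meaning. The data here are simulated and `ols_t` is my own helper, not an API from either tool.

```python
import numpy as np

def ols_t(X, y, c):
    """OLS t-statistic for the null hypothesis c'beta = 0."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)           # unbiased residual variance
    return (c @ beta) / np.sqrt(sigma2 * c @ XtX_inv @ c)

rng = np.random.default_rng(1)
age = rng.uniform(20.0, 60.0, 100)             # made-up covariate
y = 0.5 + 0.03 * age + rng.normal(size=100)    # simulated voxel values

X_raw = np.column_stack([np.ones(100), age])
X_ctr = np.column_stack([np.ones(100), age - age.mean()])  # demeaned covariate

t_slope_raw = ols_t(X_raw, y, np.array([0.0, 1.0]))
t_slope_ctr = ols_t(X_ctr, y, np.array([0.0, 1.0]))
t_icpt_raw = ols_t(X_raw, y, np.array([1.0, 0.0]))
t_icpt_ctr = ols_t(X_ctr, y, np.array([1.0, 0.0]))
print(t_slope_raw, t_slope_ctr)   # identical: slope test unaffected by centering
print(t_icpt_raw, t_icpt_ctr)     # different: the intercept test changes meaning
```

So centering cannot explain a discrepancy in the covariate's t-map itself, only in contrasts that touch the intercept.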
@wanderine, thank you. Is it right that the intercept must be represented by a column of ones in the design matrix input to BROCCOLI? Also, would you confirm whether the following contrast file is correctly constructed to obtain a t-statistic for the null hypothesis that the coefficient of the second predictor is zero?
Thank you for your guidance. |
@Metasoliton, thank you for the insight. I performed all pre-processing and subject-level analysis in AFNI, and the resulting subject spatial maps input to 3dttest++ and BROCCOLI were identical (there were no differences in smoothing). I would not expect BROCCOLI and anything from AFNI to assess the same or even similar significance (i.e. p-values) for clusters, because AFNI does not have support for permutation testing (AFAIK). However, the t-statistics from the unpermuted data must be the same in both programs (up to rounding error) given identical inputs (both spatial maps and covariates), because t-statistics from OLS are a standard computation. |
@wanderine I think I figured out the reason for the discrepancy. Thank you. |
What is the reason for the difference?
AFNI has just recently added a non-parametric option in 3dttest++; the -Clustsim flag will run a permutation test to generate non-parametric cluster p-values.
|
I believe the issue was in how I was concatenating subject-level spatial maps into the "volumes.nii" input file for group analysis with BROCCOLI. I used AFNI's 3dTcat function to do this, but I'm not confident that this is the right way. Specifically, there may have been issues in keeping the subject ordering consistent with the rows of the design matrix, along with inconsistencies in the header information. As I'm new to fMRI analysis, I cannot be sure I did this right. Thank you for pointing out the -Clustsim flag in 3dttest++, which avoids the aforementioned issue. |
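To see why concatenation order matters, here is a small numpy sketch (simulated data, a single hypothetical "voxel" value per subject, and a helper `ols_t` of my own): regressing the same values against the same design matrix yields a different t-statistic as soon as the volume order no longer matches the design-matrix rows.

```python
import numpy as np

def ols_t(X, y, c):
    """OLS t-statistic for the null hypothesis c'beta = 0."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)
    return (c @ beta) / np.sqrt(sigma2 * c @ XtX_inv @ c)

rng = np.random.default_rng(2)
n = 50
age = rng.uniform(20.0, 60.0, n)               # covariate, in design-matrix order
X = np.column_stack([np.ones(n), age])
voxel = 0.1 * age + rng.normal(size=n)         # one simulated value per subject
c = np.array([0.0, 1.0])

t_matched = ols_t(X, voxel, c)                 # volumes in the right order
t_mismatched = ols_t(X, voxel[::-1], c)        # volumes concatenated in reverse
print(t_matched, t_mismatched)                 # silently different results
```

Nothing errors out in the mismatched case, which is what makes this kind of ordering bug easy to miss.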
I figured out my error: incorrect use of 3dTcat for combining subjects' spatial maps. In case anyone else has a similar problem, here is a link to a post I just made on the AFNI message board describing one way of using AFNI functions to create the input sample volumes file for BROCCOLI: https://afni.nimh.nih.gov/afni/community/board/read.php?1,154364,154382#msg-154382. This ensures that the ordering of the subject volumes matches the rows of the design matrix (i.e. no differences in the t-statistics computed by 3dttest++ and BROCCOLI). I successfully used BROCCOLI to verify the significance of one result, because the -Clustsim flag for 3dttest++ that you referred me to isn't working on my system. If you have a moment, could you help me determine the precise way to interpret BROCCOLI's output? Say there were 2 clusters comprised of voxels taking values between 0.995 and 1.0 in BROCCOLI's p-value maps, and the final printout from BROCCOLI states:
Permutation threshold for contrast 1 for a significance level of 0.050000 is X
where inferencemode was set to "cluster-extent" and significance to 0.05. Is it appropriate to claim these two regions attain cluster-extent significance at level 0.05? Is X the 95th percentile of the maximum cluster sizes over the permutations? If so, what was the primary threshold for designating clusters during each permutation? Thank you again for your kind assistance. Please forgive me if these are novice questions. |
If you run @update.afni.binaries, that should give you the latest AFNI version, which has support for -Clustsim.
The p-values from BROCCOLI are inverted (as in randomise in FSL, for visualization purposes), meaning that any voxel/cluster with a value higher than 0.95 is significant, so yes, these clusters are significant at p = 0.05 (corrected for multiple comparisons).
In each permutation the size of the largest cluster over the entire brain is saved, to form the maximum null distribution. These values are then sorted, and X corresponds to the value that is larger than 95% of the values.
The default primary threshold for obtaining the clusters is a t-score of 2.5; this can be changed with the option -cdt, e.g. use -cdt 3.0 if you instead want to use a t-score of 3.0.
|
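The cluster-extent procedure described above can be sketched in 1-D with numpy: sign-flipping permutations for a one-sample test, a cluster-defining threshold (cdt) of 2.5, and the 95th percentile of the max-cluster-size null distribution as the threshold "X". Real tools operate on 3-D volumes with proper spatial connectivity and a full GLM, so this is only illustrative; all names and data here are my own.

```python
import numpy as np

def max_cluster_extent(tmap, cdt):
    """Size of the largest run of contiguous voxels with t > cdt (1-D)."""
    best = cur = 0
    for above in tmap > cdt:
        cur = cur + 1 if above else 0
        best = max(best, cur)
    return best

def one_sample_t(d):
    """Voxelwise one-sample t-statistics for a (subjects x voxels) array."""
    return d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(d.shape[0]))

rng = np.random.default_rng(3)
n_subj, n_vox, cdt = 20, 200, 2.5
data = rng.normal(size=(n_subj, n_vox))        # simulated subject effect maps

null = []
for _ in range(1000):                          # sign-flipping permutations
    signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))
    null.append(max_cluster_extent(one_sample_t(signs * data), cdt))
threshold = np.percentile(null, 95)            # the "X" in the printout
print("cluster-extent threshold:", threshold)
```

Any observed cluster larger than this threshold is then significant at p = 0.05, corrected for multiple comparisons, because the null distribution is built from the maximum cluster size over the whole brain.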