Is comparing correlation coefficients trivial?



Hi everyone,

My problem is relatively simple, but not trivial. I've already read some topics about this, but I really need straightforward answers and/or opinions.

I have data divided by one fixed effect (Emotion, with 2 levels: negative and neutral), and I have a covariate (reaction time) for 20 subjects. My aim is to 1) evaluate the correlation between the data and the covariate;
2) determine whether the correlation between the data in the negative condition and negative reaction times is significantly larger than the correlation between the data in the neutral condition and neutral reaction times (or vice versa).
Analytically, that means comparing regression slopes.

So far, I have evaluated three different strategies:

a) use of Cohen's q. That gives no significance test (no p-value), but an 'effect size' (small/medium/large) based on the difference between r values (transformed into z values using Fisher's procedure).
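Strategy (a) can be sketched in a few lines of Python (a sketch for illustration, not the actual analysis pipeline; the example r values are made up):

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform: z = atanh(r)."""
    return math.atanh(r)

def cohens_q(r1, r2):
    """Cohen's q: absolute difference between Fisher-transformed
    correlation coefficients."""
    return abs(fisher_z(r1) - fisher_z(r2))

# Conventional benchmarks (Cohen): q < 0.1 negligible,
# 0.1-0.3 small, 0.3-0.5 medium, > 0.5 large.
q = cohens_q(0.70, 0.40)  # hypothetical r values for the two conditions
```

Note that q depends only on the two r values, not on how many subjects produced them.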

b) use of Fisher's method (z-test). That takes into account the sample sizes, in addition to the transformed r values.
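Strategy (b) can be sketched as follows. This is the textbook z-test for two correlations from independent samples (with the same 20 subjects measured in both conditions the correlations are not strictly independent, so treat this as an approximation); the r and n values are hypothetical:

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """z-test for the difference between two correlations from
    independent samples: Fisher-transform each r, then compare on
    the z scale using standard errors 1/sqrt(n - 3)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    # Two-sided p-value from the standard normal distribution.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

z, p = fisher_z_test(0.70, 20, 0.40, 20)  # hypothetical values
```

With n = 20 per condition, even a difference like r = 0.70 vs. r = 0.40 does not reach significance here, which illustrates the small-sample issue discussed below.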

c) use of ANCOVA. In particular, an analysis of covariance that doesn't force the slopes to be the same. MATLAB's 'aoctool' can do that work.
To be clear:
Standard ANCOVA regression -> y = (a1 + a2) + b*X + e
ANCOVA with separate slopes -> y = (a1 + a2) + (b1 + b2)*X + e
In this case, not only the sample sizes are taken into account, but also the variability within each group.
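The separate-slopes model above amounts to an OLS regression with a group-by-covariate interaction term, where testing the interaction coefficient tests the slope difference. A Python sketch (using NumPy/SciPy rather than aoctool; the data below are simulated, with a deliberately steeper slope in one condition):

```python
import numpy as np
from scipy import stats

def separate_slopes_test(x1, y1, x2, y2):
    """Fit y = b0 + b1*group + b2*x + b3*(group*x) by OLS and test
    whether the slope difference b3 is zero (the separate-slopes
    ANCOVA interaction test)."""
    x = np.concatenate([x1, x2])
    y = np.concatenate([y1, y2])
    g = np.concatenate([np.zeros(len(x1)), np.ones(len(x2))])
    X = np.column_stack([np.ones_like(x), g, x, g * x])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    s2 = resid @ resid / dof                 # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)        # coefficient covariance
    t = beta[3] / np.sqrt(cov[3, 3])         # t-stat for slope diff
    p = 2.0 * stats.t.sf(abs(t), dof)
    return beta[3], t, p

# Simulated example: 20 reaction times, with slope 2 in the
# "negative" condition and slope 1 in the "neutral" condition.
rng = np.random.default_rng(0)
rt = rng.uniform(300, 700, 20)             # reaction times (ms)
y_neg = 2.0 * rt + rng.normal(0, 50, 20)   # negative condition
y_neu = 1.0 * rt + rng.normal(0, 50, 20)   # neutral condition
slope_diff, t, p = separate_slopes_test(rt, y_neg, rt, y_neu)
```

Because the test uses the residual variance around each fitted line, noisy data widen the standard error of b3, which is why this approach is conservative with small samples.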

Cohen's q is based only on the correlations. Fisher's method also takes the sample sizes into account, and the ANCOVA approach additionally accounts for the variability within each group; that means that the latter two methods, with a small sample size, will tend to fail to detect a difference between slopes.
In neuroimaging studies, it is common to have 15-25 subjects. Therefore Cohen's q appears useful as an estimate of the difference between correlations. Anyway, I'd like to hear comments about that.
What would you recommend for testing the difference between correlation values?

Thanks in advance, Simone