Composite nodes re-normalize values from 0 to 100, basically stretching them out. This doesn’t work well for something like a Piotroski F-Score, where there are 8 on/off conditions and stocks bunch up in the same rank in very different quantities. What you’d like to see in an F-Score rank is an interval of roughly 12.5 percentile points between ranks:
100 rank for stocks that pass all 8 conditions
87.5 for stocks that pass any 7 conditions
75 for stocks that pass any 6 conditions
etc.
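To make that concrete, here is a rough sketch of the even-spacing idea (made-up tickers and pass counts, plain Python): map the raw count of passed conditions straight onto 0-100, instead of letting a composite node re-rank the bunched-up scores.

```python
# Sketch: map a raw pass-count (0-8 conditions) straight onto a 0-100 scale
# so each extra condition is worth 12.5 rank points.
# The tickers and pass counts below are made-up illustration data.

def fscore_rank(passed: int, total_conditions: int = 8) -> float:
    """Evenly spaced rank: 8/8 -> 100, 7/8 -> 87.5, 6/8 -> 75, ..."""
    return 100.0 * passed / total_conditions

stocks = {"AAA": 8, "BBB": 7, "CCC": 7, "DDD": 4, "EEE": 0}

for ticker, passed in stocks.items():
    print(f"{ticker}: passed {passed}/8 -> rank {fscore_rank(passed):.1f}")

# BBB and CCC share the same rank (87.5) no matter how many stocks land in
# each bucket, whereas a percentile re-normalization of the same scores would
# stretch them across 0-100 based on how crowded each bucket happens to be.
```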
Cool. I get how that might work for boolean use cases, such as a traditional Piotroski analysis. I seem to be getting better results when I use summation at a low level within a ranking and then re-normalize as a last step to get that even distribution of quantiles.
I think summation behaves a little better in some cases because less information is lost through repeated re-normalizations. For example, even though a company could look “average” (i.e., it sits in the median quantile) according to some metrics, the company can still be a dog. Normalizing metrics upon metrics obscures the true data, causing this dog of a company to look average. However, if we just average the raw metrics, some of the “suck” might still shine through.
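Here’s a toy illustration of that point (made-up numbers, pandas only for convenience; not tied to any particular ranking engine): company E is middle-of-the-pack on three metrics but blows up on the fourth. Ranking each metric first and then averaging leaves E looking median; summing the raw values first and ranking only at the end puts E dead last.

```python
import pandas as pd

# Made-up metrics for five companies; "E" is the dog: average on three
# metrics but catastrophically bad on the fourth.
df = pd.DataFrame(
    {
        "m1": [0.20, 0.05, 0.15, 0.02, 0.10],
        "m2": [0.05, 0.20, 0.12, 0.08, 0.10],
        "m3": [0.15, 0.02, 0.08, 0.20, 0.10],
        "m4": [0.08, 0.05, 0.10, 0.02, -3.00],
    },
    index=["A", "B", "C", "D", "E"],
)

# Normalize metric upon metric: percentile-rank each column, then average.
# E's blow-up on m4 is capped at "worst in the column" and washes out,
# leaving E tied for the middle of the pack.
rank_then_average = df.rank(pct=True).mean(axis=1)

# Sum the raw values first, and rank only once at the end.
# E's -3.00 drags its total down and it finishes dead last.
sum_then_rank = df.sum(axis=1).rank(pct=True)

print(pd.DataFrame({"rank_then_average": rank_then_average,
                    "sum_then_rank": sum_then_rank}))
```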