In light of a previous "wrong Olympics results thread", I have started a thread to rescore 2010 Olympics and see how interesting it would be.
I see you still don't have enough judges... I can send you my numbers today
great, welcome Judge Number 8
Another thing I forgot to point out is that Weir's combination spin in the SP was only called Level 3 when it is clearly Level 4 - he does a difficult sit position (feature 1), holds it for at least 8 rotations (feature 2), does a very clean edge change held for at least 2 rotations (feature 3), and a difficult upright position (feature 4). I'm really not sure how they messed that call up; there's nothing even questionable about it. It's the same combo spin he did in the LP, and the same combo spin Lysacek did in both programs, all of which were called Level 4.
Hihihi... I think I made a totally unexpected skater the winner, whatever the others decide in their discussions. To be fair, I'm really not as good as some of us at determining whether there's a call or not. But my winner didn't even get a medal in the end. Sorry, guys! That's exactly why I don't usually dispute the outcome.
Just confirming - the highest and lowest still get thrown out in the final GOE/PCS calculations, correct?
Just in case certain judge(s) go a bit too extreme in their generosity/criticism?
It would be interesting to see the deviation from the rest of the panel and where the outliers are.
I was thinking about it too. I'm not sure 'extremes' were thrown away back then.
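For what it's worth, the "throw out the extremes" rule being discussed here is just a trimmed mean. A minimal sketch in Python, with the panel scores, the panel size, and the exact trimming rule all being illustrative assumptions rather than the actual 2010 protocol:

```python
def trimmed_mean(scores):
    """Panel average after dropping the single highest and single lowest score.

    Sketch of the 'throw out the extremes' idea discussed above; the exact
    ISU calculation for any given season may differ.
    """
    if len(scores) < 3:
        raise ValueError("need at least 3 scores to trim both extremes")
    trimmed = sorted(scores)[1:-1]  # drop one lowest and one highest
    return sum(trimmed) / len(trimmed)

# Hypothetical 8-judge panel for one component (numbers made up):
panel = [6.0, 7.25, 7.5, 7.5, 7.75, 8.0, 8.25, 8.5]
mean = trimmed_mean(panel)

# Each judge's deviation from the trimmed panel mean shows where the
# outliers are -- here the 6.0 sits far below everyone else.
deviations = [round(s - mean, 2) for s in panel]
print(round(mean, 2), deviations)
```

This also gives a quick way to see the deviation from the rest of the panel that was mentioned above: trimming contains an extreme mark like the 6, but the deviation list still makes the outlier obvious.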
So that's why judges could score however they wanted?! I was looking at the judges' scores from Vancouver, and that kind of discrepancy between them is really odd to me. It's as if we were judging the competition, not the actual judges. I'm OK with differences between judges' scores, but a 2-point difference in a component is an abnormality to me. Or maybe I'm just accustomed to today's way of judging, and all the judging in the past looks weird to me :confused2:
Also remember that for most of the IJS years (plus, under 6.0, the "interim system" of 2003 and 2004), the judges were anonymized by reporting their scores in random order rather than Judge 1 in column 1, etc. At first the same judge's scores stayed in the same column for the whole competition segment, but that made it relatively easy to figure out which judge was in which column, so for a while the columns were shuffled randomly for each skater, not just once for the whole event.
I don't remember whether that started before or after 2010.
When that was in effect, you couldn't look down a single column and compare how the same judge scored more than one skater. Column 1 for Skater A would have been marks from a different judge than in Column 1 for Skater B.
Thanks. I understand that. I just don't understand how one judge could give a skater a 6 in some component while another judge gave that same skater an 8.5 in the same component at the same competition. It's like they were watching totally different things at the same moment :confused2:
Possible explanations, without positing any ill intent on anyone's part:
- They really did see some important parts of the program somewhat differently, sitting at opposite ends of the panel and glancing away for a few seconds here and there to enter scores or take notes at different moments.
- The judge with the 6 was using a low scoring range for all skaters, and the judge with the 8.5 was using a higher range for all skaters.
- One judge tended to keep all their component scores pretty close together for each skater, while another made a concerted effort to show clear distinctions between the best and worst aspects of the performance whenever that judge perceived a big difference.
- They each put the most weight on different criteria of that component. E.g., one judge paid most attention to overall speed/power/flow in the Skating Skills component, another judge put more emphasis on the difficulty of the skating, particularly multidirectional and one-foot skating... and a third judge was all about balance and edge quality. Or one judge is a stickler for Carriage & Clarity of Movement when judging Performance/Execution (and penalizes heavily for any obvious errors like falls or stumbles), whereas another is most strongly impressed by Individuality/Personality (and would reward a skater for laughing off a mistake within the character of the program).
- One judge just really enjoys the music or the choreographic style of a particular program and is therefore unconsciously influenced to score that skater higher than a strictly objective analysis would demand, while another judge just doesn't "get" what the skater is trying to do, or really hates that kind of music/costume/choreography, and is therefore unconsciously predisposed to underscore the program.
- Etc.
Assuming all those kinds of differences might be in play to different degrees depending on how each skater skates, for some programs the differences between judges' approaches might cancel each other out, leaving them with similar scores; but for other programs the little influences might all trend in the same direction, each of the above reasons bumping Judge A's scores up another 0.25 or 0.5 and doing the opposite for Judge B, so for that skater they'd end up further apart than usual.
Giggle... if that's the case and I'm the outlier all the time, you may even bash me for being a terrible judge. I probably am!