Judges & Referees Webinar for GOE +5 to -5 | Page 4 | Golden Skate


Nymphy

On the Ice
Joined
Aug 26, 2017
Can someone please explain this system for singles? I finished the video and still have no clue how it works.
PS: glad we got to see Hanyu and Chan in the demo.
 

schizoanalyst

Medalist
Joined
Oct 26, 2016

This is interesting enough, I guess. Trained judges and coaches should already be capable of making these assessments without this guidance if they took the time, but it's informative enough. It won't matter in practice, though, since no judge counts bullets for this.

I find it a bit unseemly that they use current skaters to either praise or criticize in this type of official setting. They have enough footage of retired skaters.
 

Eclair

Medalist
Joined
Dec 10, 2012
Why are the bullet points in ice dance mandatory features that add to or subtract from your final GOE, while in singles and pairs skating the bullet points are parts of a bicycle that can add up to more or less than its sum?
 

gkelly

Record Breaker
Joined
Jul 26, 2003
Because there are/have been different people on the ice dance vs. singles and pairs technical committees and so the approaches to refining the rules over the past 15 years have taken different paths?
 

cohen-esque

Final Flight
Joined
Jan 27, 2014
Because there are/have been different people on the ice dance vs. singles and pairs technical committees and so the approaches to refining the rules over the past 15 years have taken different paths?
Well in this case it certainly seems like the ice dance committee has a much, much clearer idea of what path they wanted to take.
 

DorYiu

Let’s go crazy
On the Ice
Joined
Jun 13, 2017
That webinar was rather underwhelming. I stupidly thought these would be really in-depth and clear up the questions I had about the system. What it did show reaffirmed what I already suspected, so at least I feel like I'm on the right track.

.....I'm still confused about the "element matches the music" bullet. Medvedeva both jumps on an accent in the music and uses an arm flourish on the jump. I still don't know if skaters are supposed to get credit as long as the element is placed somewhere that makes sense for the music, or if they need to do something special with the element itself to get credit. The former means almost everything from top skaters gets credit, and the latter means almost nothing gets credit. It would be helpful if they also discussed exactly why the examples they showed were chosen to represent that bullet.

Agree, I am deeply confused about all the "element matches the music" examples they show in the singles webinar; it seems like they want the skaters to just make a move when the musical note drops :scratch3:
They only read the bullet out and play the example; they didn't even try to explain why they think it represents that particular bullet... :mad: Knowing the ISU, I know I am asking for too much, but this is not clear enough.
I am moving on to the ice dance one; hopefully that's a better one, as others mentioned.
 

yelyoh

Medalist
Joined
Jul 26, 2003
Country
United-States
Oh great!! ... the judges will have as much training and qualification as anybody here who bothers to take these webinars!

Will there be an examination at the end, where the judges are subject to a +5 to -5 Grade on Examination?


Made my morning!!!!
 

Eclair

Medalist
Joined
Dec 10, 2012
Because there are/have been different people on the ice dance vs. singles and pairs technical committees and so the approaches to refining the rules over the past 15 years have taken different paths?

Well, I'm curious to see which approach will turn out to create conditions that skaters, coaches, and judges alike will be more content with.
 

draqq

FigureSkatingPhenom
Record Breaker
Joined
May 10, 2010
Agree, I am deeply confused about all the "element matches the music" examples they show in the singles webinar; it seems like they want the skaters to just make a move when the musical note drops :scratch3:
They only read the bullet out and play the example; they didn't even try to explain why they think it represents that particular bullet... :mad: Knowing the ISU, I know I am asking for too much, but this is not clear enough.
I am moving on to the ice dance one; hopefully that's a better one, as others mentioned.

Indeed, the examples for "element matches the music" are a bit boggling. While I found Giada Russo's footwork a spot-on example, Brezina's spin and Zagitova's choreo sequence were not the best choices to show that particular feature. Brezina's spin was just fast and centered, and Zagitova's choreo sequence was just intricate with good ice coverage, but neither of them highlighted the rhythm of the music, in my opinion.

A better example would be Jason Brown's "Hamilton" ending spin here: https://youtu.be/Nq_otujYUdY?t=2m39s
...and Osmond's choreo sequence in her "La Boheme": https://youtu.be/8Sc7N5syUzw?t=48m57s
 

Mrs. P

Uno, Dos, twizzle!
Record Breaker
Joined
Dec 27, 2009
A better example would be Jason Brown's "Hamilton" ending spin here: https://youtu.be/Nq_otujYUdY?t=2m39s
...and Osmond's choreo sequence in her "La Boheme": https://youtu.be/8Sc7N5syUzw?t=48m57s

They could have selected any number of Jason's spins for that bullet (or any of them), lol. It was kinda weird that they didn't include him in any of the spin segments. And I'm not just saying that cause I am a fan. He consistently scores higher than pretty much anyone in GOE.
 

draqq

FigureSkatingPhenom
Record Breaker
Joined
May 10, 2010
Having watched the singles skating webinar, I'm dubious about the language. Over and over, they emphasize that these bullet points are just "guidelines," "general recommendations," and "depend on the judges' evaluation" - that these bullet points are merely a "possibility to increase the value of the element". There's this sense of imprecise precision throughout the explanation of this system which is inherent to IJS since the beginning and only exaggerated here. It's essential, of course, to give judges leeway to mark as they see fit, but there's room for clarity.

It makes me wish that the ISU would incorporate and invest in technology that will help ascertain some of the more quantitative bullet points. The technology would not replace judges but aid them in their overall assessment. For instance, set a standard for what "good height" on a jump actually means by actually assessing how much air a skater gets. The same goes for "good ice coverage" - just calculate how far the skater goes across the ice on a jump or footwork sequence. Measure the speed of rotation in a spin, measure the speed of a jump landing's flow-out, measure how centered a spin actually is. Technology exists in other sports that would easily translate over to this one and would greatly aid both the viewer and the judges in grading with better precision, especially now that the first three bullet points of each element type are stressed.
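To make the idea concrete, here is a minimal sketch (not anything the ISU actually uses) of how two of the measurable bullets could be computed from raw tracking data: peak jump height from hang time via simple projectile motion, and ice coverage from takeoff and landing coordinates. The cutoff values are invented purely for illustration:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def jump_height(hang_time_s: float) -> float:
    """Peak height (m) of a jump from its measured hang time,
    assuming simple projectile motion: h = g * (t/2)^2 / 2."""
    return G * (hang_time_s / 2) ** 2 / 2


def ice_coverage(takeoff_xy: tuple, landing_xy: tuple) -> float:
    """Horizontal distance (m) covered between takeoff and landing."""
    dx = landing_xy[0] - takeoff_xy[0]
    dy = landing_xy[1] - takeoff_xy[1]
    return math.hypot(dx, dy)


# Hypothetical cutoffs -- the ISU defines no such numbers.
GOOD_HEIGHT_M = 0.55
GOOD_DISTANCE_M = 3.0


def quantitative_bullets(hang_time_s, takeoff_xy, landing_xy):
    """Which of the two measurable bullets would a jump earn?"""
    return {
        "good height": jump_height(hang_time_s) >= GOOD_HEIGHT_M,
        "good distance": ice_coverage(takeoff_xy, landing_xy) >= GOOD_DISTANCE_M,
    }
```

A 0.70 s hang time works out to roughly 0.60 m of air under this model, which shows how a raw sensor number could be turned into a yes/no bullet, whatever thresholds one actually picks.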
 

Tavi...

Record Breaker
Joined
Feb 10, 2014
They could have selected any number of Jason's spins for that bullet (or any of them), lol. It was kinda weird that they didn't include him in any of the spin segments. And I'm not just saying that cause I am a fan. He consistently scores higher than pretty much anyone in GOE.

It looks to me like they took all of the examples from the Olympics and 2018 Worlds.
 

Mrs. P

Uno, Dos, twizzle!
Record Breaker
Joined
Dec 27, 2009
It looks to me like they took all of the examples from the Olympics and 2018 Worlds.

They had a few examples from both 2017 4CC and Worlds too... (the ones from Zhenya and Wakaba come to mind) so :scratch3:
 

halulupu

Final Flight
Joined
Oct 21, 2017
So in the guidelines, mandatory bullets are included, and they can be added up like a bicycle or something else :reye::noshake::agree::confused2::laugh2::rofl:
 

gkelly

Record Breaker
Joined
Jul 26, 2003
There's this sense of imprecise precision throughout the explanation of this system which is inherent to IJS since the beginning and only exaggerated here.

Well, it's inherent to evaluation of figure skating since the beginning, made slightly more precise and certainly more granular since the beginning of IJS, but not to the point of matching the precision of scoring in less complex or less qualitative sports.

6.0 short program judging had mandatory deductions (many of which included a numerical range, as do many of the GOE reductions), but other than that the guidelines for scoring programs under 6.0 were much less specific than they are under IJS. Judges go through years of trial judging and judging lower level events as they work their way up to international and if possible ISU judging appointments. During that process -- and before, if they were skaters themselves -- they get a sense of what is "average" or "good" or "very good" for each kind of element and for each overall program-wide quality or criterion (program component). What to expect as average for novices or juniors vs. seniors. For men vs. women.

That standard is always going to be a work in progress as judges get more experience watching skaters at all levels, and as the field of skaters as a whole tends to improve dramatically at some kinds of skills while letting other skills, no longer considered as important, fall by the wayside.

And then they're supposed to consider all the good points and all the bad points in coming up with their scores.

Under 6.0 there was one technical score for everything in the whole program -- judges had to weigh the good qualities and the errors or weaknesses and the difficulty of the content and balance it all out to come up with one score. There were no explicit guidelines about what was more important and what was worth less. Each judge had to decide that for themselves, influenced by what they learned from more experienced judges through the trialing and early career judging process.

With IJS, it's just the good qualities and errors or weaknesses of one element at a time -- with the tech panel and scale of values taking care of the difficulty determination. So that's a lot more precise than 6.0 judging. But there are still caveats: the range of quality from very poor to outstanding is continuous while the scores available for GOEs are discrete integers; errors can occur with varying degrees of severity (which is reflected in the rules, where a range of deductions/reductions is suggested); and positive qualities can also occur with a range of quality from above average to good, very good, excellent, and outstanding. (The +GOE bullet point rules, under either the +3 or the +5 scale, do not explicitly allow for rewarding one point for "good" and more for "excellent" execution of the same quality, as the earliest GOE guidelines ca. 2003-05 did -- perhaps calling the bullet points "guidelines" is a way to allow for that flexibility.)

It's essential, of course, to give judges leeway to mark as they see fit, but there's room for clarity.


Clarity in the form of spelling out ranges of positives, or giving video examples of poor, fair, average, good, excellent? Stating explicitly where judges have room to exercise judgment or to balance out strong positives, mild positives, strong weaknesses/errors, and mild weaknesses, vs. where there is only one correct final GOE?


Up to a few years ago there used to be a number of errors that required negative GOE or required -3 GOE especially in short programs, but then the rules/guidelines were explicitly loosened to allow judges to offset reductions for errors with positive qualities.


Now the only mandatory GOE is for doing only one valid jump in the SP jump combination slot.


It makes me wish that the ISU would incorporate and invest in technology that will help ascertain some of the more quantitative bullet points. The technology would not replace judges but aid them in their overall assessment. For instance, set a standard for what "good height" on a jump actually means by actually assessing how much air a skater gets. The same goes for "good ice coverage" - just calculate how far the skater goes across the ice on a jump or footwork sequence. Measure the speed of rotation in a spin, measure the speed of a jump landing's flow-out, measure how centered a spin actually is.


If you're going to measure height and distance on jumps or rotational speed on spins, should the cutoff between "good enough to earn this bullet point" or not be the same for a 4'10" woman or a 6'2" man? Or even skaters of the same sex and skill level but with a full foot of height difference?


Or should the technology just tell judges the number of inches/centimeters or the number of RPMs and let judges draw their own mental guidelines, so that they could at least be more consistent in where they draw the line for awarding the bullet point?


Spin centering is less dependent on the size of the skater, so that could be more absolute. But could the technology account for the inevitable change of circle size that occurs with a switch to a forward edge while spinning, so that change-edge spins aren’t penalized for a slight change of centering that would be penalized in a backward-edge-only spin?


Similarly, could the technology have different settings to assess the rideout on the first jump landing of a combination depending whether the subsequent jump is a loop or a toe loop?


Etc.


I’m sure it would theoretically be possible to account for all these variables. But at what point does the gain in precision justify the cost of developing and implementing the systems and technology?


Would it make sense just to determine certain aspects of a performance that can be measured absolutely and just plug those numbers directly into the scoring with no consideration of the size of the skater or the exact element being performed, and let the judges concentrate on judging all the qualities that can’t be measured objectively at all?
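One way to resolve the sizing question above would be to normalize each measurement by the skater's own body height before applying a cutoff. A sketch of that idea, where the 35% ratio is invented for illustration and is not an ISU figure:

```python
def relative_jump_height(peak_height_m: float, skater_height_m: float) -> float:
    """Express peak jump height as a fraction of the skater's own
    height, so one threshold can serve skaters of different sizes."""
    return peak_height_m / skater_height_m


# Hypothetical threshold: "good height" = peak of at least 35% of body height.
GOOD_HEIGHT_RATIO = 0.35


def earns_height_bullet(peak_height_m: float, skater_height_m: float) -> bool:
    """Would this jump earn the 'good height' bullet under the
    size-normalized (and entirely hypothetical) cutoff?"""
    return relative_jump_height(peak_height_m, skater_height_m) >= GOOD_HEIGHT_RATIO
```

Under this scheme, a 0.60 m jump clears the bar for a 1.60 m skater but not for a 1.88 m one, which is exactly the kind of value call that an absolute cutoff would hide.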
 

cohen-esque

Final Flight
Joined
Jan 27, 2014
Clarity in the form of spelling out ranges of positives, or giving video examples of poor, fair, average, good, excellent? Stating explicitly where judges have room to exercise judgment or to balance out strong positives, mild positives, strong weaknesses/errors, and mild weaknesses, vs. where there is only one correct final GOE?
Yes.

With IJS, it's just the good qualities and errors or weaknesses of one element at a time -- with the tech panel and scale of values taking care of the difficulty determination. So that's a lot more precise than 6.0 judging. But there are still caveats: the range of quality from very poor to outstanding is continuous while the scores available for GOEs are discrete integers; errors can occur with varying degrees of severity (which is reflected in the rules, where a range of deductions/reductions is suggested); and positive qualities can also occur with a range of quality from above average to good, very good, excellent, and outstanding. (The +GOE bullet point rules, under either the +3 or the +5 scale, do not explicitly allow for rewarding one point for "good" and more for "excellent" execution of the same quality, as the earliest GOE guidelines ca. 2003-05 did -- perhaps calling the bullet points "guidelines" is a way to allow for that flexibility.)

But the singles webinar didn’t really address any of this. Literally all they did was go through examples of individual bullets, apparently the gold standards.

They did not explain why those examples were the gold standards, at all. I don’t even agree with some, but I don’t know where we differ or what they saw that they thought should be so highly valued in their examples, because they didn’t explain it at all.

Well, if we have gold standards, then what are examples of varying quality of individual GOE bullets, outside of whole elements? They gave none. How should the bullets generally be applied in cases where a bullet isn't achieved as well as it could be, or where multiple bullets aren't? They didn't give any example — not one — of an element with a GOE grade, or why they thought it deserved that grade. What about elements where all the same bullets were achieved, but to different extents? How did they handle that sort of instance?

So the element is more than the sum of a mathematical calculation of bullets, but meanwhile we don't even really know how the basic calculation is supposed to work before getting beyond that point.

If the element can be graded "as is" rather than "according to just the achieved bullets," then what is even the point of having the GOE bullets? If we can't expect some actual numerical standard, even a lenient one, then it's probably simpler, and even more accurate, to just give the judges a +/-5 range and let them award elements of varying quality as they see fit, without all the fuss over whether something meets the bullets or the compulsory bullets.

And on that point, if an element can be evaluated as more than the sum of its bullets, can it be evaluated as more than the sum of its compulsory bullets? If it meets only 2/3 of them but as a whole, it’s an excellent, breathtaking execution, can it earn +4 or +5? Are they actually compulsory? I feel like they implied that they aren’t really, in which case what’s the point of including them, but who even knows? They never actually bothered to address that point in any detail after they threw it out there. (“...first three are compulsory. However...”)
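For what it's worth, the most literal reading of the scheme I can construct -- one point per achieved positive bullet, with the first three "compulsory" bullets acting as a gate on +4 and +5 -- would look like the sketch below. The gate is my guess at what "compulsory" means, since the webinar never pinned it down:

```python
def naive_positive_goe(achieved_bullets: list) -> int:
    """Naive bullet-counting GOE: one point per achieved positive
    bullet (a list of booleans, in rulebook order), capped at +5.
    +4/+5 are allowed only if the first three ("compulsory") bullets
    are all achieved -- this gate is a guess at what the webinar
    meant, not a confirmed ISU rule."""
    count = sum(achieved_bullets)
    goe = min(count, 5)
    if not all(achieved_bullets[:3]):
        goe = min(goe, 3)
    return goe
```

On this reading, an element achieving five bullets but missing one of the first three would cap at +3 no matter how breathtaking it was, which is precisely the interpretation question they left hanging.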

They could have given some actual, you know, guidelines, still without forcing anything in stone. But they did not. As was, they gave a list of bullets, examples of elements that apparently are the gold standard of those individual bullets for unknown reasons, and then concluded that the bullet scheme doesn’t really matter anyway and they were actually just wasting all our time.

We just got some IKEA bicycle analogy instead. This webinar, in its brief 24-minute span, succeeded only in leaving me legitimately puzzled as to why the current system it's supposed to be explaining even exists.

Or to sum up, to me it all came across as:
—ISU: Make GOE bullets for a +/-5 system.
—Committee: We don’t really care, but sure, yeah, GOE bullets, whatever.


Compare to the Ice Dance seminar, where they did just about all of the above: acknowledging that there are going to be value calls as to what counts as being fulfilled or is to be rewarded (the entrance of one of the dance spin examples, right off the bat), but also explaining their thoughts on those value calls, and showing examples of elements able to earn higher GOE and why, or why they capped something, or why they went low. It was informative and conveyed an actual idea of how the new GOE standards should work and what they envisioned when designing them.

I understood the point of the Ice Dance system they designed and how they imagined it should be applied to the sport after they finished. Which was rather the point of these webinars.


All that said, I commend the ISU for making these webinars publicly available to fans, rather than holding only private seminars for coaches and judges.
 

gkelly

Record Breaker
Joined
Jul 26, 2003
Yes, I agree that the singles and pairs webinars are not sufficient to teach judges (or fans) from scratch how to judge GOEs, nor even to guide experienced judges in the subtleties of how to apply the new values and the mandatory bullets.

There's only so much that can be put online, for reasons of time and privacy -- it's not appropriate to call out current young skaters as examples of bad technique. But it would have been nice if they had erred further in the direction of more detail.
 

TryMeLater

On the Ice
Joined
Mar 31, 2013
Yes.



All that said, I commend the ISU for making these webinars publicly available, to the fans and not holding the private seminars for only coaches and judges.

They didn't do this for the fans. They did it for the judges, coaches and skaters to save money on holding meetings.
 

cohen-esque

Final Flight
Joined
Jan 27, 2014
They didn't do this for the fans. They did it for the judges, coaches and skaters to save money on holding meetings.
Beautifully cynical outlook, except that they're still holding the private, ISU-only seminars and meetings. Hopefully in more detail.
 