2017_18 ISU Judging Anomalies

OS

Sedated by Modonium
Record Breaker
Joined
Mar 23, 2010
Everybody does this... Federations like Canada, China, and the US cannot afford not to do it to counter the inherent slant of the European bloc, which is dominated by Russia/France/the Eastern bloc, the old guard of the sport. The only way to get rid of it is to get rid of federation-appointed judges altogether and to get rid of ISU management under Lakernik or people like him. No senior manager should serve more than a four-year term; they need to rotate. Any position of power held long enough leaves room for self-interest and corruption.

There should be real consequences for judges caught cheating, not just a six-month suspension (which means nothing in a figure skating season, since the suspension downtime just happens to match the off-season and the cheating judge can still make next year's elite events).

Love this rant https://www.youtube.com/watch?v=iMoRf-RPssQ

Figure skating is an individual sport. They should shift the power back to the figure skaters, who alone should be in charge of their own fate, not the federations. Too much power is given to the federations, and it has made the sport murky.
 

Eclair

Medalist
Joined
Dec 10, 2012
Everybody does this... Federations like Canada, China, and the US cannot afford not to do it to counter the biases of the European bloc, which is dominated by Russia/France/the Eastern bloc, the old guard of the sport. The only way to get rid of it is to get rid of federation-appointed judges altogether and to get rid of ISU management under Lakernik or people like him. No senior manager should serve more than a four-year term; they need to rotate. Any position of power held long enough leaves room for self-interest and corruption.

There should be real consequences for judges caught cheating, not just a six-month suspension (which means nothing in a figure skating season, since the suspension downtime just happens to match the off-season and the cheating judge can still make next year's elite events).

Love this rant https://www.youtube.com/watch?v=iMoRf-RPssQ

Figure skating is an individual sport. They should shift the power back to the figure skaters, who alone should be in charge of their own fate, not the federations. Too much power is given to the federations, and it has made the sport murky.

Dick Button's rant is the best!
 

Metis

Shepherdess of the Teal Deer
Record Breaker
Joined
Feb 14, 2018
I don't think that it can be manipulated that easily. A judge doesn't have access to the other judges' marks, which makes it quite difficult to control (unless you look at another judge's scores). It's quite a hard system to manipulate if you work at it alone (because, unlike under 6.0, the high and the low get thrown out and there are too many numbers to keep track of). The judge that was sanctioned had the protocol of the SP in the LP and tried to use those marks as a benchmark.
Well, they do, unfortunately, see each other’s marks, but let’s assume each judge is completely walled off from the others and no one has shared their opinions or preferences regarding the skaters prior to judging. If you think your scores for a given skater may be high enough to be the marks that are thrown out, but you aren’t certain of that outcome, you have two choices: roll the dice and hope your marks make the average (in which case either someone skated phenomenally or another judge just went for max possible GOEs and PCS inflation), or try to remove as much risk as possible and adjust your scores so they come in just high enough to avoid looking utterly crazy while maximizing the chance that yours are the marks removed as the high. The latter is the safer move; even with peeking, you don’t know exactly what everyone else is doing, and despite your own inflation, you may still come in under the highest score if someone else adds 20+ points to a skater’s average PCS while you went with a more conservative 18.5. On the GOEs, you’re trying to give the max possible value whenever possible, along with the lowest deductions stipulated, and throwing in an extra +3 or two makes it likelier that a quite generous scoresheet goes into the average (mandatory deductions only, the highest positive GOEs the skater could have pretended to earn).

Why is it optimal? Because if you dice-roll but you make the average, the higher scores removed from the average might have been more advantageous to keep (say, 2-5 points higher in total), but because you played it “safe,” you removed marks that could have easily passed as legit, and you brought down the average. (This is the nightmare when scores are very close for the top skaters; if you wanted a given skater to win, losing any points is at minimum sub-optimal and potentially changes podium order.) On the other hand, if you deliberately go for the highest marks you can “get away” with (or even not get away with — who cares about perceptions of bias?), you can predict with a much greater degree of certainty what you’d need to mark to get your scores thrown out as the high. (Do you know the skater’s SB and PB? Congrats, you’re basically done.) And if they somehow aren’t the high? (This is the case for not inflating to the point of lunacy; it’s also why none of this takes coordination between two or more parties, just common sense by one.) You achieved the best possible outcome. A winner is... uh, not “you,” our hypothetical judge, but the skater they favor.

This strategy is obviously not optimal in an environment where each judge trusts the others to score each element appropriately and according to the rules, but there’s a lot going on, including humans being frankly terrible with anchor numbers (the 7.7 problem when 5 is “average”), positive GOEs being awarded for basic execution rather than demonstration of mastery, etc. As an individual, you can’t control the average, but by deliberately removing yourself from it, you can “uncap” the high end of the score distribution, which is huge. Submitting conservative, honest-with-some-inflation scores risks making someone else’s merely generous scores the discarded high, throwing away marks that would have helped the average; that’s why it’s “optimal” (from a purely rationalist standpoint) to adjust your marks so you can be reasonably certain yours are the highest in the set, as that preserves every score that falls closer to your end of the distribution for the average. (You could also try to be the “low” if you suspect the scores are going to cluster toward that end; in that situation, trying to “stop the bleeding” by preserving all scores above yours for the average may do more for the overall outcome if the highest score is going to be a true outlier due to a horrific skate.)
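
To make that concrete, here is a toy sketch in Python (purely illustrative, not ISU software; the nine-judge panel and every number are invented): with an honest mark, the panel's genuinely generous 9.75 gets trimmed as the high, while a deliberate overshoot takes the trim itself and the 9.75 survives into the average.

# Toy illustration of the trimmed mean: drop one high and one low, average the rest.
# All numbers are invented; this is not how any real protocol was computed, just the
# arithmetic behind the "be the discarded high" strategy described above.

def trimmed_mean(scores):
    s = sorted(scores)
    middle = s[1:-1]  # drop the single highest and single lowest mark
    return sum(middle) / len(middle)

# Hypothetical marks for one component from the eight other judges on a nine-judge panel.
other_judges = [8.75, 9.00, 9.00, 9.25, 9.25, 9.25, 9.50, 9.75]

honest   = other_judges + [9.25]  # our judge marks in line with the panel
inflated = other_judges + [9.90]  # our judge overshoots just enough to become the trimmed high

print(round(trimmed_mean(honest), 3))    # 9.214 -> the generous 9.75 is trimmed away
print(round(trimmed_mean(inflated), 3))  # 9.286 -> the 9.75 stays in; only the 9.90 is dropped

The gap is small for one component, but multiplied across five components and a dozen elements, that is the "uncapping" effect described above.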

You’re right that this is hard to manipulate, at least in theory, due to what game theory calls “private information” and issues of “concealed preference.” But I’m not talking about “guaranteeing” an outcome — just about what the most rational, optimal choice is for an individual judge given how the system is currently implemented. (Rational as in “rational choice,” as in game theory; the whole system is utterly irrational on a macro level.) There’s also the fact that judges aren’t exactly closed books (they have left plenty of data behind in the form of prior scores), that the composition of the judging panel is known to the judges (including nationality, which an economist has already analyzed in terms of how it affects marks for skaters from a judge’s own country), and that scoring itself is an “iterated game” — the short and the free programs take place over two days. Even if a given judge knew nothing about anyone else on the panel, it only takes the first flight or two to figure out where the group naturally averages, who the sticklers are, etc. By the free skate, judges have a ludicrous amount of data just from observation (seeing how scoring went in the short).

Again: this isn’t about guaranteeing podium order, judges conspiring with one another, or anything so involved. (Though it doesn’t take a genius for the average judge to figure out, just by observation, that some of their fellows are going to undermark certain skaters; if you’re the only person giving a +1 GOE to Hanyu’s 3A, you really aren’t even trying to blend in.) It’s about making the most of the terrible system we’re stuck with and the fact that improper use of GOEs and PCS as a whole has made PCS marks functionally ordinal rankings (would love to test to see if I could find various correlations, but yeah). Because GOEs aren’t being assigned uniformly across competitions or even within competitions (let alone accurately per the ISU’s own rules), we can all read the scores and see the inflation and general “what fresh hell is this” in the numbers... as a judge, it would be irrational to go by the book unless you actually know the other judges and have solid cause to believe the majority are going to assign points as the system intended. Absent that, if you think a skater deserves to win... you could give them the score they earned, or you could all but ensure your score is dropped from the average, leaving space for all the other ridiculous values to go in.

As a US resident, I half want to believe that’s what the American judge was doing with Chen’s scores, but let’s be honest: “rational choice,” “probabilistic forecasts,” “Pareto equilibrium” — these are not words found in the language of far too many of our people. The guy was likely throwing darts at a chalkboard, though on Chen’s score, those darts landed exactly where he needed them to... it’s either proof of concept or a broken clock being right.

As a final note on optimization, if every single judge goes for “try to be high enough to be thrown out of the average without being conspicuous” for their preferred skaters, I wonder what happens to PCS values.... My point being: if enough judges are rational actors, a +20 PCS jump from one competition to another may not seem quite so insane. I don’t think that’s the whole explanation, but it’s a hypothesis that has yet to be disproven, and it may well account for some of the variance. It’s a thought!

I rather think it shows that the Chinese judge is doing his job the dumbest way possible and has no clue how the judging system works. Just over-inflating will lead to an investigation, with other judges looking at the scoresheet and seeing the obvious bias. He/she should have scored like the Canadian/American or Russian judges do - slightly overscore on PCS and overscore the GOE on only about 3/4 of the elements, so that it's not as glaring but still elevates the total score.

Well, yeah, this is what would fall under “irrational” and “not optimal play.” The point of inflating a score to deliberately be the highest in the average is to preserve the higher marks behind yours, not to signal LOOK AT ME. He could have been attempting optimization (based on his history: LOLNO), and this is what happens when your read on the judging panel’s average is way off, but the hilarious level of PCS inflation is a dead giveaway that no real thought was put into those marks. Though, ironically, they probably don’t rise to my “top 5 worst” from this OWG so far.

Rizzo’s PCS marks in the team event free skate were truly a thing of wonder when he skated immediately after Kolyada (and he and Tanaka wound up with an identical PCS score). I actually don’t have an issue with some of Kolyada’s PCS values, but the transition marks were the classic “7.7” issue.
 

drivingmissdaisy

Record Breaker
Joined
Feb 17, 2010
Maybe Lorrie could have separated Nathan and Yuzu’s PCS a bit, but GOE is tied into the BV, and Nathan getting a bunch of +2s seems about right to me.

For me, 2's seem high for a lot of Nathan's quads. If they are giving a bit of a bonus for difficulty, then that's one thing. But if he were doing triples like this and getting +2, it would be excessive. The 4Lz, 4F-2T, and 4T-3T had no steps in, no steps out, no air position variation, wide-swinging free leg on the landings of the quads, and little flow out of the jumps. He did those jumps cleanly, but no aspect of them warranted +2 or +3.
 

Sam-Skwantch

“I solemnly swear I’m up to no good”
Record Breaker
Joined
Dec 29, 2013
Country
United-States
For me, 2's seem high for a lot of Nathan's quads. If they are giving a bit of a bonus for difficulty, then that's one thing. But if he were doing triples like this and getting +2, it would be excessive. The 4Lz, 4F-2T, and 4T-3T had no steps in, no steps out, no air position variation, wide-swinging free leg on the landings of the quads, and little flow out of the jumps. He did those jumps cleanly, but no aspect of them warranted +2 or +3.

Maybe instead of increasing the GOE corridor for quads they should just go to a -1, 0, or +1 for the GOE and let the BV be reward enough?

-1 for yucky jumps

0 for good jumps

+1 for excellent jumps

Heck I’d even take -3 to +1 and be alright :agree:
 

Metis

Shepherdess of the Teal Deer
Record Breaker
Joined
Feb 14, 2018
Maybe instead of increasing the GOE corridor for quads they should just go to a -1, 0, or +1 for the GOE and let the BV be reward enough?

Apparently, they’re lowering BV but expanding GOE to -5/+5, because we all want more math.

The only way I can see the expansion of GOEs working is if it’s incremental, in a point-per-bullet sense, so that basic execution is 0, difficult entry +1, exit +1, height in air +1, etc. That would also make the GOE padding relatively less subjective, as if it’s one point for each added element of mastery, you can’t get a split table of +2s and +3s. So that will never, ever happen.
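
For what it's worth, a minimal sketch of how a point-per-bullet GOE could be wired up (the bullet names and the cap are my own placeholders, not actual ISU criteria):

# Sketch of a "one point per bullet" GOE: basic execution is 0, and each mastery
# bullet the judge checks off adds +1, up to a cap. The bullet names and the +5
# cap are illustrative placeholders, not the real ISU list.

BULLETS = {"difficult entry", "difficult exit", "height and distance",
           "effortless flow out", "matches the music"}

def goe_from_bullets(awarded, cap=5):
    return min(sum(1 for bullet in awarded if bullet in BULLETS), cap)

print(goe_from_bullets({"difficult entry", "height and distance"}))  # +2
print(goe_from_bullets(set()))                                       # 0, basic execution

Two judges could still disagree on whether a bullet was earned, but they couldn't land on +2 and +3 while checking off the same bullets.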

Chen’s GOEs are... odd, to put it mildly. I find it difficult to award him +1 on most jumps, let alone +2, though his spins are excellent and actually do deserve at least a +2 if not a +3 (occasional traveling aside).
 

CanadianSkaterGuy

Record Breaker
Joined
Jan 25, 2013
Yes and they're investigating him. But the US judges should be investigated too.

I don't think it's as egregious as the Chinese judge's. I'm assuming Vincent was marked lower on PCS than Fernandez/Hanyu/etc. (but not to the degree that people expect) and surpassed them on TES.

Zhou's BV was 18 points higher than Hanyu's and 26 points higher than Fernandez's.... and that partially contributed to that judge scoring him higher; obviously that judge marked Hanyu/Fernandez a bit more conservatively on GOE and Zhou a bit more generously. It was out-of-line judging, but given national bias and the fact that Zhou had much higher tech content than Hanyu/Fernandez, it's hardly unexpected.

Also, a judge giving a +2 instead of a +1 on a quad/3A means 1 more point added to that judge's "individual score". So if they add 1 or more extra GOE to each of their skater's elements, that adds up. For example, on the 4Lz+3T, the US judge gave 2.4 points of GOE versus someone like judge 6 giving 0 points of GOE. For the 4Tx, judge 2 gave 2.2 points of GOE versus judge 6 giving 0 points.

At least when judge 2 was an anomaly, it might be by 1 extra GOE. Compare that to judge 7 for Jin -- giving +3s across the board and a slew of 9.50s. Like, that Chinese judge didn't care - if Jin stayed on his feet, it was a +3.
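
As a rough back-of-the-envelope (the per-element values below are illustrative stand-ins, not the official SOV table), bumping every element by one GOE step adds up quickly on a single judge's card:

# One extra GOE step on every element of a hypothetical twelve-element layout.
# The point values are illustrative stand-ins; under the 2017-18 SOV one GOE step
# was worth roughly 0.3-1.0 points depending on the element.

one_step_values = [1.0, 1.0, 1.0, 0.7, 0.7, 0.7, 0.7, 0.5, 0.5, 0.5, 0.5, 0.5]

print(round(sum(one_step_values), 1))  # 8.3 extra points on that judge's individual tally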
 

charlotte14

Medalist
Joined
Aug 16, 2017
I don't think it's as egregious as the Chinese judge's. I'm assuming Vincent was marked lower on PCS than Fernandez/Hanyu/etc. (but not to the degree that people expect) and surpassed them on TES.

Zhou's BV was 18 points higher than Hanyu's and 26 points higher than Fernandez's.... and that partially contributed to that judge scoring him higher; obviously that judge marked Hanyu/Fernandez a bit more conservatively on GOE and Zhou a bit more generously. It was out-of-line judging, but given national bias and the fact that Zhou had much higher tech content than Hanyu/Fernandez, it's hardly unexpected.

Also, a judge giving a +2 instead of a +1 on a quad/3A means 1 more point added to that judge's "individual score". So if they add 1 or more extra GOE to each of their skater's elements, that adds up. For example, on the 4Lz+3T, the US judge gave 2.4 points of GOE versus someone like judge 6 giving 0 points of GOE. For the 4Tx, judge 2 gave 2.2 points of GOE versus judge 6 giving 0 points.

At least when judge 2 was an anomaly, it might be by 1 extra GOE. Compare that to judge 7 for Jin -- giving +3s across the board and a slew of 9.50s. Like, that Chinese judge didn't care - if Jin stayed on his feet, it was a +3.
So if the judge is being investigated, what will happen? Will they be allowed to judge again? Or will they be fined? How do they investigate? Like, isn’t just looking at the scores enough to know it’s ridiculous judging?
 
Joined
Dec 9, 2017
Well, this is sad. As long as it doesn't affect Team China's placements and reputation, though, it's fine. Does it?

What's the source on his investigation?
 

OS

Sedated by Modonium
Record Breaker
Joined
Mar 23, 2010
The thing is, any penalty should apply not just to the cheating judges themselves but also to the federations that assign them. I bet that would go a long way toward solving the problem.

Otherwise, the federation can just sacrifice one cheating judge (while the cheated result still stands, since the 30-minute challenge window is not enough time and is rarely used) and replace them with another one.

Or put rules in place so that a cheating judge's marks do not count toward the final result, just as doping athletes can have their medals taken away, and so that any competition result can be overturned. The 30-minute challenge window should be extended to three weeks to leave time for investigation.
 
Joined
Dec 9, 2017
So if the judge is being investigated, what will happen? Will they be allowed to judge again? Or will they be fined? How do they investigate? Like, isn’t just looking at the scores enough to know it’s ridiculous judging?

The Golden Spin judge was suspended for 6 months.
 

gkelly

Record Breaker
Joined
Jul 26, 2003
Well, they do, unfortunately, see each other’s marks,
Not on a regular basis. The average GOEs aren't published during the performance, nor immediately afterward while the judges are still on the stand, let alone the individual judges' marks.

The scoretracker box shown on TV is not visible to the judges.

It is often physically possible for judges to peek at scores entered or written down by the judges immediately next to them. So if any judges in the history of IJS have ever peeked, then yes, it is true that some of them do see some other judges' marks sometimes.

even with peeking, you don’t know exactly what everyone else is doing,

Right.

and despite your own inflation, you may still come in under the highest score if someone else adds 20+ points to a skater’s average PCS while you went with a more conservative 18.5.

The high and low scores are discarded on an element-by-element and component-by-component basis. And if two or more judges give the same highest or lowest score for an element or component, it's hard to say whose was thrown out and whose stayed in.

So it's generally not all that meaningful to say that one judge added 20+ or 18.5 points to the average. They're always part of the average.
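
A small sketch of that per-element trimming (invented GOE marks from a nine-judge panel): with tied highs, the panel value comes out the same no matter which judge's mark is nominally the one dropped.

# Per-element trimming: the high and low are dropped for each element or component
# separately, so no judge's whole card is ever "in" or "out" as a unit.

def panel_value(marks):
    s = sorted(marks)
    kept = s[1:-1]  # drop one high, one low
    return sum(kept) / len(kept)

element_a = [1, 2, 2, 2, 3, 3, 3, 3, 3]  # several judges tied at +3
element_b = [0, 1, 1, 1, 1, 2, 2, 2, 3]

print(round(panel_value(element_a), 2))  # 2.57 -- identical whichever +3 was "the" trimmed one
print(round(panel_value(element_b), 2))  # 1.43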

the fact that improper use of GOEs and PCS as a whole has made PCS marks functionally ordinal rankings (would love to test to see if I could find various correlations, but yeah)

Yes, you'd need to define what kinds of patterns would qualify as using PCS as functionally ordinal rankings vs. scoring the PCS independently of TES but with connections to each other vs. scoring each component completely independently.

Apparently, they’re lowering BV but expanding GOE to -5/+5, because we all want more math.

The only way I can see the expansion of GOEs working is if it’s incremental, in a point-per-bullet sense, so that basic execution is 0, difficult entry +1, exit +1, height in air +1, etc. That would also make the GOE padding relatively less subjective,

Yes, that would make the GOE scoring somewhat more consistent.

as if it’s one point for each added element of mastery, you can’t get a split table of +2s and +3s. So that will never, ever happen.

You would very much still be able to have split panels of +2 and +3 for the same element, because for any given element some judges might think that it achieved a given bullet point and other judges would think it fell short. Most of the bullet points are continuous variables, not binary conditions. Each judge needs to decide for him- or herself whether the element met the standard to earn that bullet. And even with the most intensive, most specific training possible, there would still be room for disagreement. Skating is largely a qualitative sport, and qualitative judgment is inherently subjective even with no bias toward specific skaters or types of skating factored in.
 

Metis

Shepherdess of the Teal Deer
Record Breaker
Joined
Feb 14, 2018
It is often physically possible for judges to peek at scores entered or written down by the judges immediately next to them. So if any judges in the history of IJS have ever peeked, then yes, it is true that some of them do see some other judges' marks sometimes.

That’s what I’m referring to, not anything shown to the viewer. Again, the composition of the judging panel and any observations made over time count for a lot here. I’m also being somewhat academic and applying game theory to figure skating scoring, and the “rational actor” model that underpins game theory isn’t one I subscribe to (people are, in fact, predictably irrational, or to take a more famous unresolved quandary: war is always irrational, yet it occurs). And figure skating marks are a special kind of hell.

Judging panel composition does matter, as that was one of the main arguments in the Sotnikova-Yuna controversy (if two of the Russian judges went high, it didn’t matter if one of them had their marks tossed), although it’s somewhat less relevant now that judging isn’t anonymous... in theory.

It’s a thought exercise, to an extent, on how screwed up the system is: while the intent of using a “trimmed mean” (toss the low and high, then take the average) makes sense, it’s also easy to see how you arrive at a situation in which the “best” option (best being optimal from a rational-choice perspective) isn’t to actually judge the skater but to line up your marks in such a way as to have the best chance of inflating the average. And it’s not like there’s an overpopulation crisis in the world of figure skating judges.

The high and low scores are discarded on an element-by-element and component-by-component basis. And if two or more judges give the same highest or lowest score for an element or component, it's hard to say whose was thrown out and whose stayed in.
It incentivizes going for a +3 (or the highest mark available, assuming you’re forced to give a deduction and/or a +3 can’t be justified AT ALL) under the current GOE rules, since if there’s at least one other +3, at worst yours is thrown out and the other kept (or vice versa), especially in cases where a 2.5 would actually be useful, but this is the system we have. It also encourages rounding up/padding in PCS; if I had the energy to build an actual database, one thing I’d look for is the rate at which certain values appear, simply because of how the human brain processes numbers on a 10.0 scale when told 5 is average. (Which is why “Rate X on a scale of 1 to 10 but you can’t use 7” or “Rate X on a scale of 1 to 5 but you can’t use 3” forces people to actually reveal their preferences; 3 on a 5 scale and 7 on a 10 are essentially meaningless, whereas 2/4 and 6/8 give actual information.)

So it's generally not all that meaningful to say that one judge added 20+ or 18.5 points to the average. They're always part of the average.
Yes. To belatedly clarify, I deliberately chose relatively small variance in the numbers, as a 20-point gap in total PCS marks between judges is... the point of this thread.

Yes, you'd need to define what kinds of patterns would qualify as using PCS as functionally ordinal rankings vs. scoring the PCS independently of TES but with connections to each other vs. scoring each component completely independently.
I’m less interested in the women, as there’s frankly less to explain there, but for the men, there are a couple of hypotheses that would need to be tested, although I suspect multiple variables are working together, so it’s not as simple as “70% of the scoring effect can be explained by the skater and judge being from the same nation.”

When I say PCS (and I mean PCS only, not GOEs) is being used as ordinals with a new coat of paint, this is what I mean: whatever the intention of the marks and the inherent difficulty of turning the subjective nature of FS into numbers, the PCS numbers are, at best, tangentially related to the concepts they’re meant to be evaluating and instead are better understood as the judges ranking the skaters of each group, with PCS rising as the groups are seen as more “competitive.” The values themselves mean absolutely nothing: Transitions is the category where your numbers drop because the judges feel safe docking skaters there, for example, whereas how much you can get out of Interpretation/Choreo/SS may very well be reputation-based (which would suggest it’s better to start with a reputation as an artistic skater and then learn to jump, rather than as a jumper who becomes artistic — I can’t stand Boyang’s free program, but his PCS scores are comically low, and, yes, Chen still beats him there; this is something that would be interesting to look at over time with a huge data set and with skaters ID’d as “jumpers”).

GOEs are just inconsistent across competitions and even within them, but they’re not what I’m referencing when I say you’d get as much mileage out of PCS as you would with ordinals, as the judges aren’t even trying to cover their tracks there. I don’t think Nathan Chen is devoid of artistry (I actually think he could really open up men’s skating, as he’s at his best when he isn’t trying to do classical music and check the box of Traditional Figure Skater), but his free skate program is an awful fit for him, and, on the artistic side, it was far from his average at the OWG. He was, however, punching above his weight in his flight, his BV was frankly insane, and BV alone can hit 100 while PCS never will, because no panel is going to hand out tens across the board.

I do suspect BV inflates PCS for skaters who aren’t considered to be completely lacking in artistry, as a one-sided score looks bad and/or technical content causes PCS to rise (although I don’t think there’s a tidy ratio). (I suspect the latter is definitely going on at a non-random rate, but in order to measure that, you have to check programs that “should” be lopsided in scoring, which is a subjective judgment, as otherwise you capture “well-performed and technically challenging program scores well,” which gives you, for example, Hanyu’s numbers at 2017 Worlds.) Skaters are definitely being marked relative to each other in PCS, not on their own merits, so skating order is as much of a problem as it was before. PCS just puts an opaque layer of seeming objectivity over “we like this person and are putting them in first,” in which case... put us out of your misery, judges, and just return to ordinals. Which had the benefit of being transparent, especially along Eastern versus Western bloc lines.
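
If anyone ever does build that database, a first pass at the BV-vs-PCS question could be as simple as the sketch below; the CSV name and its columns ("bv", "pcs") are hypothetical placeholders for whatever actually gets scraped.

# First-pass check of the BV/PCS relationship discussed above. The file and its
# columns are hypothetical placeholders for a scraped protocol database.

import pandas as pd
from scipy.stats import pearsonr, spearmanr

df = pd.read_csv("mens_protocols.csv")  # hypothetical: one row per skater per segment

r, p_r = pearsonr(df["bv"], df["pcs"])       # linear association
rho, p_rho = spearmanr(df["bv"], df["pcs"])  # rank-based, closer to the "ordinals" framing

print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); Spearman rho = {rho:.2f} (p = {p_rho:.3f})")

# A positive correlation alone would not separate "BV inflates PCS" from "strong
# skaters tend to have both," which is why the lopsided-program check above matters.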

Alternatively, force the judges to sign off on their scores by making them review them before submitting and by reminding them that these marks are to represent a given performance, not a comparative review of two skaters. This would likely do nothing, but it’s the thought that counts. It’s just clearly a failed system. I have no dog in this fight, as I’m not especially invested in Chen or Rippon, but Chen did beat Rippon in the PCS, which defies logic and what the PCS allegedly measures.


You would very much still be able to have split panels of +2 and +3 for the same element, because for any given element some judges might think that it achieved a given bullet point and other judges would think it fell short. Most of the bullet points are continuous variables, not binary conditions. Each judge needs to decide for him- or herself whether the element met the standard to earn that bullet. And even with the most intensive, most specific training possible, there would still be room for disagreement. Skating is largely a qualitative sport, and qualitative judgment is inherently subjective even with no bias toward specific skaters or types of skating factored in.
Yeah, I was being optimistic and also thinking of how they might bracket the various GOE criteria. If the earlier levels are the least ambiguous/most cleanly delineated, that would actually go a long way, I think, as +1 to +3(?) should be basic to intermediate mastery. +4 and +5, especially +5, should be reserved for a “true” +3 in today’s system.

I also wonder what the negative gradations will be. If a +4 or even +5 on a triple jump is suddenly in play, then even at current BV, that may open up options beyond “rotated quad >> fall, eat the deduction.” Though I half-expect and half-hope the GOEs are going to go after low-quality quads (well, jumps, period) by giving -1/-2 GOE for reasons beyond two-foots... -1 for hammering? Free leg not clear on the landing?

PS: I was a student of mixed methodology (quant and qual) a very long time ago. (Figure skating is basically... a lot like my school days: ERROR. Cannot return valid conclusions using only one form of analysis. Please brain harder.) I don’t have much hope that any scoring changes will fix much, but if I ever hear “figure skating has become a math problem” again, I’m going to cry. It’s creating some uninspired layouts as well, as while the programs may change, where the elements are placed never does, because of min-maxing.
 

CanadianSkaterGuy

Record Breaker
Joined
Jan 25, 2013
So if the judge is being investigated, what will happen? Will they be allowed to judge again? Or will they be fined? How do they investigate? Like, isn’t just looking at the scores enough to know it’s ridiculous judging?

Given the ISU's history of punitive measures, I'm guessing this judge will get a light slap on the wrist and be awarded head judge of the Beijing 2022 judging panel. :rolleye: :sarcasm:
 

CanadianSkaterGuy

Record Breaker
Joined
Jan 25, 2013
Well, this is sad. As long as it doesn't affect Team China's placements and reputation, though, it's fine. Does it?

What's the source on his investigation?

I don't think it will. I mean, Chen could come 4th instead of 5th, which only slightly affects ranking points at this point in time.

Even though Chen absolutely destroyed everyone in the free skate - including all the guys on the podium - I think 5th is a suitable place for him overall after that SP. Honestly, I think Jin deserved 4th and the placements shouldn't change.
 