Celtic Clash

Can't say I have an answer about the best method. But, by rule, the course should be measured along the shortest possible route. This was a rule change a couple of years ago; it used to be measured down the middle of the course.
This is exactly correct, and it is the most important thing to remember. In particular, for courses marked with only one line it is important to measure as straight a line as possible from turn to turn, because it is often difficult to walk along and know where the inside edge of the course is. If you go look at a single-line course the evening after a race (or even the following morning), you can see how far athletes strayed from the line. Stakes or poles at the turns really help, so long as they stay in place. (Last year we hosted Olentangy Orange for a dual meet. Some of our kids warming up knocked over one of the poles and moved it, shortening the loop by about 40 meters; the loop was run twice. It took many of our kids until late in the season to beat those times.)
 
After looking at the Berlin Bear Den Dash results (also an incredibly fast course) I have a better handle on how to compare the Jerome course to other courses. My regression model excluded really big drops or jumps in athletes' season-best times, because those are usually the result of getting sick/injured or recovering from being sick/injured. It also struggles because I only have three years of data for this course at Jerome, and two years ago it was really muddy, so times were not at all like last year and this year. I also changed the baseline to be the average of the four regional courses, which effectively works out to be about 5-8 seconds slower than Troy, 21-24 seconds slower than Tiffin, 5-8 seconds faster than Pickerington, and 14-17 seconds slower than Boardman. This was to make it easier to interpret for people who haven't run primarily central and SW Ohio courses over the last half decade or so.

So I forced it to accept all of the too-fast outliers, and with the Bear Den Dash results I think I have a reasonable handle for comparison. Please note that this is going to be a little conservative in terms of predicting the effect of the course that day, because of the changes. For a 17:00 athlete (baseline), Jerome was 4.02% (.0402) faster than the baseline, meaning a 17:00 baseline is 16:19 at Jerome. For a 21:00 athlete the course was 4.37% (.0437) faster, meaning a 21:00 baseline is 20:05 at Jerome. If you look at the data, it is clear that some athletes and teams really ran fast even relative to the fast course generally. For what it is worth, the conversions for the Bear Den Dash are about .0301 and .0337 respectively. So it is also blazing fast.
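
To make the arithmetic concrete, here is a minimal sketch of the conversion described above. The adjustment factors are the ones quoted in this post; interpolating between the 17:00 and 21:00 anchor points is my own assumption, not part of the regression model itself.

```python
# Minimal sketch of the baseline-to-course conversion described above.
# The two adjustment factors are from the post; the linear interpolation
# between the anchor points is my own assumption.

def to_seconds(mmss: str) -> float:
    m, s = mmss.split(":")
    return int(m) * 60 + float(s)

def to_mmss(seconds: float) -> str:
    m, s = divmod(round(seconds), 60)
    return f"{m}:{s:02d}"

# (baseline time in seconds, fraction faster than baseline) anchor points
JEROME_ANCHORS = [(to_seconds("17:00"), 0.0402), (to_seconds("21:00"), 0.0437)]

def course_factor(baseline_sec: float, anchors=JEROME_ANCHORS) -> float:
    """Interpolate the 'fraction faster' factor between the two anchors (assumption)."""
    (t1, f1), (t2, f2) = anchors
    return f1 + (f2 - f1) * (baseline_sec - t1) / (t2 - t1)

def baseline_to_course(baseline: str) -> str:
    sec = to_seconds(baseline)
    return to_mmss(sec * (1 - course_factor(sec)))

print(baseline_to_course("17:00"))  # -> 16:19
print(baseline_to_course("21:00"))  # -> 20:05
```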

I am a believer in getting accurate measurements for the course the athletes actually ran, because it makes it easier to judge whether a performance was good or bad for any given athlete. For the "times don't matter, places do" folks, we have to remember that athletes are not always running against common opponents, or opponents who are close to them in ability. It is completely possible to get 3rd place one week running a bad race and get 10th the next running a great race, without any of the athletes finishing immediately ahead of or behind you being the same. Or to get first or last two meets in a row and run well one time and badly the other. Times help us determine whether a race was good or bad, but they need to be comparable from race to race in order to do that.

So this brings us back to the whole "it's short/it's not short/times don't matter/those times are bogus" debate. (In general, not just about the Celtic Clash in 2021.) I get about 90% of the predictive accuracy of the model I use from the first 10% of the work, which means a simpler, and more accessible, model for comparing courses is probably a better choice. Maybe this is even something that a group of coaches could hash out together and get posted someplace, so we could more easily compare courses and wouldn't need to have the arguments quite as often. Or at least we could have more nuanced and productive arguments. (Since I genuinely don't like making people angry if it isn't necessary, and this past week was exhausting, this idea may be a little self-serving.)
 
For the "times don't matter, places do" folks, we have to remember that athletes are not always running against common opponents, or opponents who are close to them in ability. It is completely possible to get 3rd place one week running a bad race and get 10th the next running a great race, without any of the athletes finishing immediately ahead or behind you being the same. Or to get first or last two meets in a row and run well one time and badly the other. Times help us determine whether a race was good or bad, but they need to be comparable from race to race in order to do that.
The need for comparable times is sort of why I feel it is still valuable for teams to occasionally run dual meets or maybe triangulars. They help athletes develop the ability to "race" and build those skills. All too often in large invitationals runners get complacent and run against the clock instead of focusing on running against and beating an opponent. Dual meets force runners to concentrate on beating opponents.

Dual meets are largely a thing of the past, having given way to large invitationals. One positive that did come out of COVID is that smaller meets, including duals and triangulars, started to be contested again due to the need to limit field sizes.
 
I would be very curious to know what the top 10 fastest and top 10 slowest courses/meets in Ohio are based on your model.
 
I don't actually have a convenient way to figure that out. First, because my model really only has courses my teams have raced (so it is heavily central and to some extent SW Ohio) and the courses our opponents have run. Second, because it doesn't have a convenient way to list the courses. Third, there is a tendency for some courses to be more affected by adverse conditions, chiefly mud and heat. With that in mind, my non-comprehensive list of fastest courses, from fastest to slowest:
1. Celtic Clash
2. Groveport
3. Berlin
4. Galion
5. Darby
6. Lebanon
7. Tiffin
8. Central Ohio (Three Creeks Course)
9. Troy
10. Eisenhart

The slowest courses are much harder to gauge. The weather plays such a big role in the really slow courses. Of the ones I track regularly, CVNP is clearly the slowest, then Lancaster, followed by Pickerington North and Boardman. Boardman is weird: just looking at the invitationals, Boardman is slower than Pick North, but on average Pick North rates a little slower for the regional meet. This seems to be a weather thing. Boardman is more likely to be a complete mess than Pick North, conditions-wise.
 
I honestly think that using some sort of "adjusted for the speed of the course" modifier option for team rankings would go a long way toward making people less angry about the really fast/short courses. What seems to set people off is when, say, the Dublin Coffman boys team is "ranked" #2 in the state by MileSplit when they are clearly not the #2 team in the state, because they ran super fast times on the Berlin course that will not be replicated at other courses. If there were an option that took into account the speed of the courses for virtual meet/ranking purposes, it would probably make people less angry.
 
There is a PhD student at Caltech, Bijan Mazaheri, who is doing something like that for collegiate XC. He looks at runners and teams, and if you play around on his website you can simulate races. I am not sure what his methodology is or how he crunches his data, but it is interesting to look at.

The name of his website is LACCTiC, and it can be found here. This is how Mazaheri describes his work:

"LACCTiC stands for Logarithmically Adjusted Cross Country Time Comparisons. This site converts performances from varying course difficulties to their track 5k equivalents. The results are used to provide sophisticated rankings and race simulations."

 
Isn't that what speed ratings are intended to do, adjust for the speed of the course? It would be interesting to see a ranking based on those, although I realize not every race is speed rated.

As for fast/short courses, there is a difference between fast and short. Most teams run a relatively fast course at some point during the season; neither Mason nor St. X ran at Berlin, but if you look at MileSplit they're still #1 and #3, respectively (not that time rankings mean all that much, but that horse has been beaten to death). The issue is when 5 teams at a race suddenly have the top 5 team times in the state; that's when you know something's off.
 
I would not be surprised at all if MileSplit starts doing a speed-rating ranking. I'm sure with all their data they could pull it off.

But even then, as the Wood Report (check that out here if you love NCAA Cross-Country) likes to say:

"COMPUTERS DON'T RUN RACES!"
 
MileSplit did start a speed ranking years ago, and you can still find the data on some of the venue pages. It listed how fast or slow a course ran compared with an average course. They seem to have abandoned it when MileSplit Ohio changed management a few years back.
 
For grins I pulled the speed ratings for the Hot Summer Bash, Loveland Invitational, Tiffin Carnival and Boardman Spartan (these were the most easily accessible races that had a large number of Ohio teams, as opposed to pulling the McQuaid Invite in NY that HD just won). I capped each team at 5 runners, and for teams that ran in more than one of those races I took each runner's best speed rating. I then sorted all the runners and gave each one a score. The results are below, and a rough sketch of the scoring follows the list. I realize some teams may have held out runners and some of these races were the first of the season, but in general it's probably a more accurate ranking than times (and it follows somewhat closely with the OATCCC poll).

Mason 119
St. Ignatius 120
St. Xavier 149
Hil. Davidson 158
Loveland 173
St. Edward 220
Mas. Jackson 257
Unioto 264
Mass. Perry 284
Olen. Orange 287
Hoover 291
Solon 292
GlenOak 303
Dub. Jerome 306
Mentor 321
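
Here is a rough sketch of how I read the scoring described above: take each runner's best speed rating across those meets, keep only each team's top 5 runners, rank everyone together (highest rating first), and a team's score is the sum of its runners' places, lowest score winning. The ratings in the example are made-up placeholders, not the actual data from those meets.

```python
# Sketch of the cross-country-style scoring described above (my reading of it).
# Ratings are hypothetical placeholders.

from collections import defaultdict

# {team: {runner: [speed ratings from the meets they ran]}}
ratings = {
    "Team A": {"A1": [175, 171], "A2": [173], "A3": [170, 168], "A4": [166], "A5": [165], "A6": [160]},
    "Team B": {"B1": [174], "B2": [172, 169], "B3": [167], "B4": [166], "B5": [164]},
}

# Best rating per runner, capped at each team's top 5 runners.
entries = []  # (team, runner, best_rating)
for team, runners in ratings.items():
    best = sorted(((max(r), name) for name, r in runners.items()), reverse=True)[:5]
    entries += [(team, name, rating) for rating, name in best]

# Rank the pooled field: highest rating gets place 1.
entries.sort(key=lambda e: e[2], reverse=True)
team_scores = defaultdict(int)
for place, (team, runner, rating) in enumerate(entries, start=1):
    team_scores[team] += place

# Lowest total wins, like a normal XC team score.
for team, score in sorted(team_scores.items(), key=lambda kv: kv[1]):
    print(team, score)
```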
 
Yes, speed ratings are supposed to be a way to compare. If we had a way to rank based on times scaled for the course, it would make some people happier.

In practice it is difficult to know exactly how much of a fast course is due to length and how much is due to conditions and layout. Several times in my career we have run the course at Darby twice, two weeks apart, racing exactly the same course each time, with one race being screaming fast and the other being kind of meh because of different conditions. For my part, I try not to worry about the fast vs. short debate so much. It is hard, because I have a cross country coach's natural inclination to get fired up over inaccurately posted distances. But I also know that hills, mud, smoothness and other factors can make as much difference. Also, when you think about it, how many people come on Yappi and complain when a course is too long? That is technically against the rules of the sport, while being short of 5000 meters is not. But long courses do not fire up our righteous indignation like short courses do.

When I am looking at my own athletes' performances, I am comparing them to other athletes they have raced this season or in past seasons, both individually and in the aggregate. If you beat some athletes you have not previously beaten, or even finished much closer to athletes who usually beat you by more, chances are you had a good race. If the average athlete in a race is running a 40-second season's best and you run a 1:15 season's best, you probably had a relatively better race.
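
As a quick sketch of that last comparison, here is one way to work the arithmetic: compute each athlete's improvement on their season best in a given race and compare it to the field average. All of the numbers below are hypothetical, just to illustrate the idea.

```python
# Hypothetical illustration of "your season-best improvement vs. the field's".

def improvement(prev_sb_sec: float, race_sec: float) -> float:
    """Seconds faster than the previous season best (negative = slower)."""
    return prev_sb_sec - race_sec

# (previous season best, time in this race), in seconds -- made-up field
field = [(1085, 1045), (1102, 1060), (1130, 1095), (1150, 1108), (1200, 1162)]
avg_improvement = sum(improvement(p, r) for p, r in field) / len(field)

my_prev_sb, my_race = 1140, 1065  # hypothetical athlete: 19:00 SB, ran 17:45
my_improvement = improvement(my_prev_sb, my_race)

print(f"field average improvement: {avg_improvement:.0f} s")
print(f"this athlete's improvement: {my_improvement:.0f} s")
print(f"relative to the field: {my_improvement - avg_improvement:+.0f} s")
```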
 
Not sure the HD faithful would be happy omitting McQuaid. Those are by far the best Ohio boys speed ratings of the season.
 
There were some huge jumps for HD at McQuaid: 2 runners that were in the low/mid 150s at the Hot Summer Bash and MSU Spartan posted 168 and 166. If you take St. X from Trinity Valkyrie, and Mason from the Loveland, Mason, and Culver meets, you get this:

HD: 192, 181, 169, 168, 166 (McQuaid)
St. X: 173, 173, 169, 165, 164 (Trinity Valkyrie)
Mason: 175, 173, 171, 170, 168 (Super-scored from all 3 meets)

But as Supertramp noted above, computers don't run races. St. X and HD raced once and St. X beat them; we get to see a rematch this weekend. Personally I think St. X will beat them again. And Mason didn't have the same top 5 in any of those meets, so good luck figuring them out.
 
This is so stupid. Teams develop on a timeline and get better or worse; the smart teams lay low early and then progress, and that is what you will see soon. As the weather changes and the ground gets softer, the real teams keep getting faster while the ones with great times early fall back. I just love watching this every year.....
 
What does that even mean, and what teams in particular do this and end up more successful than the teams that don't?
 
I'm not sure what softer ground has to do with teams running faster? 🤷‍♂️
 
I think he was talking about the Woodridge 2020 boys team. They were a unanimous #1 in the preseason poll, but laid low by winning a bunch of meets in the regular season before sneaking up on everyone and winning the state title out of nowhere.
 
Not sure if many people noticed, but on that Woodridge team from last season, the #2 returning state placer did not run. "Played football." He came back and helped the track team big time, but did not run CC as a senior. He could have been the individual state champion and chose not to run. He is a tough competitor who did not really love CC and thought the team would win without him and that he wasn't needed. (Which they did.) It was an odd and awkward situation.
 
Why was it awkward? Would you want your son to play a sport he didn’t think was fun just because you knew he was very good at it? One of my sons was one of the best soccer players in his class. He gave it up to give full commitment to CC. The school soccer team would have been better with him. Nobody involved thought it was awkward. Granted, it didn’t cost the school a state title. Sounds like the expectations at Woodridge are putting kids in awkward positions that are not fair.
 
Yeah, like none of that is true. It was just awkward because it was a kid who could be state champ, and even football parents would ask what was going on, with questions about COVID when it had nothing to do with it. A lot had to do with not being able to go for anything like NTN. He's a big fish in a small pond, and CC is sort of a big deal at Woodridge. He seemed happy and his teammates seemed happy. I found it funny that nobody really asked about it on here or even at meets. (Although they might have. I only went to 2 duals, CVNP and STATE.) He performed in the spring. Big time. I think after he won the 800m, a little regret crept in about not competing for that state title in CC. He is a competitor. He has the ability to get uncomfortable and stay uncomfortable for a long time in races. It's one thing to be able to be uncomfortable for 120 meters. A whole other thing to be uncomfortable for 799m.
 
Might as well dig this thread up from the depths of page 5.

I only have familiarity with 1 team that competed at this meet both last year and this year. Their times from Saturday night appeared to be reasonable compared with other meets they've attended this year. In fact, many of them ran faster on their home course last weekend - a course that I have also run on before and found to be a legitimate distance. Their ground, which includes a wooded section of hard-packed dirt, feels great underneath my feet compared to most of the other courses I've run (it was once a Sunday hobby of mine to go to nearby courses and run them the day after an invite when the markings were still intact). I captured the spirit of it but could never get Woodridge's course at CVNP 100% right, since they can't apply many paint markings on the ground because it's national park land.

I was told a few weeks ago that Jerome all but admitted the 2021 course was short due to some construction on campus. I found a course map from 2021 but could not find one from tonight to see if anything changed. I've been to Jerome's course for the MS meet, and I know it's nearly pan flat (the only hill I saw was no higher than my waist). I imagine the final 350m on the track would be a welcome sight for most runners.

Would anyone who was there both last year and last night care to weigh in on any differences?
 
This year's course was completely different, although it utilized roughly the same flat terrain. It was certified by a third party. We had some personal bests, but nothing like last year. Last year's course was short by my wheel, but not as short as alleged. Last year, The Clash was on the first day of the season that wasn't blazing hot, and I think a significant portion of the big improvements over previous weeks was weather-related.

If anything, this incident led to meet directors being extra careful with measurements. I noticed some of the courses that were eyebrow-raisingly fast in previous seasons ran slower than usual.
 
Good.
 

Shouldn't being careful with measurements be the norm if you are hosting a large invitational??
 