Calpreps Strength of Schedule Ratings

concha

I thought of Dad when I saw this. Of the Calpreps Top 1000, here are the top 5 SoS ratings:

1) St. Xavier (OH) 43.4

2) LaSalle (OH) 41.1

3) St. Edward (OH) 40.4

4) Moeller (OH) 38.7

5) Elder (OH) 38.2


All but St. Edward are in the same league.
 
 
It has only been a week, so I am going to guess that you would still be a fool for believing in this SOS ranking.

Because I am lazy, can you tell me how many of these teams St. Eds has played?
 
Their SOS ratings and overall ratings are all related. And those ratings have zero Florida teams (MNW included) in the top 48 nationally! Which can be traced to the way the computer calculates SOS... which is undoubtedly flawed. In other words, their SOS ratings are no more reliable than their overall ratings.
 
And, as a result of having no teams from Florida rated highly, there is no way for any of them to climb up the ladder the rest of the season. They'll get credit for merely beating lower rated teams. So, it's a vicious cycle: teams from Ohio, etc., will always be rated highly (even if they lose, it will be to a highly rated team), and teams from Florida will be rated lower (even if they win, it will be over a lower rated team). What happens if a team from Florida beats a team from a more highly regarded state like Texas? The ratings for Texas go down!
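The feedback loop described here can be sketched with a simple Massey-style least-squares rating, where each game says "winner's rating minus loser's rating should equal the margin." Calpreps' actual formula is not public and is surely more elaborate; the team names and margins below are invented purely for illustration. But even this toy version shows the two effects at issue: with no cross-state games the gap between state clusters is undetermined, and a single cross-state upset drags one whole cluster down relative to the other.

```python
import numpy as np

def massey_ratings(teams, games):
    """Least-squares ratings: for each (winner, loser, margin) game,
    rating[winner] - rating[loser] should approximate the margin."""
    idx = {t: i for i, t in enumerate(teams)}
    rows, margins = [], []
    for winner, loser, margin in games:
        row = np.zeros(len(teams))
        row[idx[winner]], row[idx[loser]] = 1.0, -1.0
        rows.append(row)
        margins.append(float(margin))
    # Minimum-norm least-squares solution; with a disconnected schedule the
    # gap between components is underdetermined, so each is centered at zero.
    ratings, *_ = np.linalg.lstsq(np.array(rows), np.array(margins), rcond=None)
    return dict(zip(teams, ratings))

teams = ["OH1", "OH2", "FL1", "FL2"]  # hypothetical teams
# Two isolated clusters: Ohio teams only play Ohio, Florida only Florida.
isolated = [("OH1", "OH2", 14), ("FL1", "FL2", 14)]
print(massey_ratings(teams, isolated))

# One cross-state upset: FL1 beats OH1 by 3. The whole Ohio cluster now
# drops relative to Florida, the effect described above.
bridged = isolated + [("FL1", "OH1", 3)]
print(massey_ratings(teams, bridged))
```

With the isolated schedule, the model literally cannot say whether OH1 is better than FL1; after the single bridging upset, every Ohio rating falls relative to every Florida rating.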
 
Are you saying that somehow Ohio teams get an artificial bonus for being from Ohio?

I think one reason for the high SoS for Ohio teams is that many of the D1 private schools play far more interstate powers than schools in any other state (e.g., the Herbie), and Ohio powers play one another far more in the regular season than powers in any other state do.

Ex. X played Glenville, Ignatius, Elder, Moeller, DeMatha, Trinity

I don't think there is an equivalent situation in Florida. You don't have, for example, MNW, Lakeland, BTW, STA etc all playing each other in the regular season.
 
I haven't figured out exactly why there is the error that favors teams from some states. I know that Calpreps has said the computer program has no concept of 'state'. So, a state having a certain record against teams from other states is, at face value, of no consequence. What the computer would calculate is X beating DeMatha, who beat so-and-so, who beat so-and-so, and on and on.
It's not just Ohio which has been overvalued. Check the top 1000 ratings. Many obscure states have teams showing up well above where they should ever be. The fact that Marion Local is rated 13th best in the country and second in Ohio overall lends insight into the nature of the flaw in the system.
 
I haven't figured out exactly why there is the error that favors teams from some states.

Think of it this way:

Say you are trying to put together a ranking for teams from four states--Ohio, West Virginia, Kentucky, and Virginia. Now, teams from Ohio never play teams from Virginia, but since Ohio teams play Kentucky and West Virginia teams, and Virginia teams play Kentucky and West Virginia teams, you can rank them against each other, right?

The problem is that this is similar to looking at DLS 35 Mission Viejo 7, Mission Viejo 26 Moeller 22, and St. X 28 Moeller 14, adding up the margins of victory, and concluding that DLS is 18 points better than St. X--it's not a very accurate way of comparing teams, or states. And the more steps in the chain of games you're using to compare teams, or states, the less accurate it becomes.

If you have enough of these "connections", and you don't have to go through too many steps to make them, things will probably average out to a reasonable result. But when you're comparing states from different ends of the country, you've got a lot of steps and not many connections, leading to poor results.
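The chain-of-margins arithmetic above, using the scores quoted in this thread, can be written out directly. The per-game noise figure below is an assumption for illustration (a single high school game's margin plausibly varies by a touchdown or two); the point is only that independent errors add in quadrature, so longer chains carry bigger error bars.

```python
import math

def chain_margin(margins):
    """Net margin implied by chaining head-to-head results end to end."""
    return sum(margins)

def chain_uncertainty(n_steps, per_game_sigma=10.0):
    # Assumed: each single-game margin is noisy with std dev per_game_sigma.
    # Independent errors add in quadrature, so the chain's error bar grows
    # with the square root of its length.
    return per_game_sigma * math.sqrt(n_steps)

# DLS beat MV by 28; MV beat Moeller by 4; St. X beat Moeller by 14
# (so Moeller enters the chain at -14 against St. X).
print(chain_margin([28, 4, -14]))      # 18: "DLS is 18 points better than St. X"
print(round(chain_uncertainty(3), 1))  # ~17.3: an error bar as big as the figure itself
```

Under that assumed noise level, the three-game chain's uncertainty roughly swamps the 18-point conclusion, which is the post's point in numbers.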
 
Perhaps they should just join the conference.

They probably would if it weren't for the fact that they are located in opposite corners of the state, with St. Ed being in Lakewood (near Cleveland) and the GCL-S being located in Cincy.
 

I agree. Though I would suggest (as you may be) that comparing teams from Ohio with those from Virginia in the example you gave would also be too many steps.
 

Here is the post that started the thread:

I hate to do this ... But

St Xavier 28, Moeller 14

Mission Viejo 26, Moeller 22 (MV travels 2,000 miles)

De La Salle 35, Mission Viejo 7 (DLS travels 400 miles)

St X is 14 points better than Moeller
MV is 4 points better than Moeller
DLS is 28 points better than MV

based on these scores, would St X beat MV by 28 as DLS did?
would DLS beat Moeller by 14 as St X did

we would have to assume the answer to the first question is no: St X beat a team (Moeller) by 14 that MV beat by only 4, so it stands to reason they would beat MV by less than 28

we assume DLS, who beat MV by 28 (and MV beat Moeller by 4), would certainly beat Moeller by more than the 14 that St X did

St X dropped several rungs in some of the polls last week

will that continue this week


can you please show me where I concluded DLS would beat ST X by 18????

that is the problem, I posed a question, And St X wannabes jumped to conclusions THAT WERE NOT IN THE POST!!!!!!!!!!!!!!!!!!
 
I agree. Though I would suggest (as you may be) that comparing teams from Ohio with those from Virginia in the example you gave would also be too many steps.

Yes, it probably is. Comparing Ohio Division 1 teams with Division 5 teams is probably too many steps. So you can imagine what happens when you compare Ohio teams with Oregon teams.
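The "too many steps" idea can be made concrete by treating teams as nodes and games as edges, then counting the shortest chain of games linking two teams. The schedule below is invented for illustration (it borrows the team names from this thread plus hypothetical Oregon teams); the further apart two teams sit in this graph, the longer and shakier any margin chain between them must be.

```python
from collections import deque

def steps_between(games, a, b):
    """Shortest chain of games linking team a to team b (BFS),
    or None if no chain exists and the teams can't be compared at all."""
    graph = {}
    for t1, t2 in games:
        graph.setdefault(t1, set()).add(t2)
        graph.setdefault(t2, set()).add(t1)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        team, dist = queue.popleft()
        if team == b:
            return dist
        for nxt in graph.get(team, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # disconnected: no basis for comparison

games = [("StX", "Moeller"), ("MV", "Moeller"), ("DLS", "MV"),
         ("DLS", "OregonA"), ("OregonA", "OregonB")]
print(steps_between(games, "StX", "MV"))       # 2 games in the chain
print(steps_between(games, "StX", "OregonB"))  # 5: a much shakier comparison
```

Two steps (St. X to MV through Moeller) is already dubious; five steps out to a hypothetical Oregon team is the kind of comparison the posts above argue is meaningless.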
 
Can you show me where I accused you of concluding that? I was just using a set of games recently mentioned here to make the point.

since it was MY thread and MY post

I was the one who mentioned them

as you can see, NOWHERE ON THE POST was the conclusion that DLS was 18 points better than St X

you injected that (unadvisedly) on your own
 
Yes, it probably is. Comparing Ohio Division 1 teams with Division 5 teams is probably too many steps. So you can imagine what happens when you compare Ohio teams with Oregon teams.

Yup. Truthfully, you'd be a fool if you said the best team from Oregon is better than the best from Ohio. You'd be equally a fool if you said the best from Ohio is better than the best from Oregon. We have no basis to draw a conclusion on that either way. We can make subjective judgements, and we all know which team would be more highly regarded. But we don't really know. And, certainly, there is no way to numerically, statistically link the teams in a reliable enough manner to meaningfully compare them. We'd like to do so. That's why when somebody publishes a rankings list like that, we take a peek. And, if it says somewhat what we want it to say, we try to justify why it is correct.
 