Measuring SOS of playoff teams

skyway28

New member
For the longest time, people have preached the importance of evaluating teams' strength of schedule as part of the process of ranking them. I agree strongly with those people. The difficulty lies in how one measures SOS. Traditionally, and by default if you are running a computer program, we have looked at the records of opponents and then the records of opponents of opponents. This is a perfectly sound method for a league, such as the NFL, where the teams being measured are part of a relatively small group that plays most of the other teams within the group. When you expand the group being measured 100-fold, and the overwhelming majority of teams do not play the overwhelming majority of the other teams in the group, the above method is far less than reliable. It is almost useless.
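In code, the traditional measure described here is just an average of opponents' win percentages (with a second pass over opponents' opponents if you like). A toy sketch, with made-up teams and records:

```python
# Toy sketch of the traditional SOS measure: average the win percentage
# of a team's opponents. Teams and records here are made up.

def win_pct(record):
    wins, losses = record
    games = wins + losses
    return wins / games if games else 0.0

def naive_sos(opponents, records):
    """Average win percentage of a team's opponents."""
    return sum(win_pct(records[o]) for o in opponents) / len(opponents)

records = {"A": (9, 1), "B": (8, 2), "C": (2, 8)}
print(round(naive_sos(["A", "B", "C"], records), 3))  # 0.633
```

The flaw identified above is that in a pool of hundreds of mostly disconnected teams, those win percentages were earned against completely different competition, so averaging them compares incomparable numbers.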


Looking through the thread on the number of undefeated teams entering the Texas and Ohio playoffs, you get a picture of how this can affect prep football. Individual states group teams together differently and in differing group sizes. During the regular season, the way schools' schedules are set up differs from area to area. The collection of factors in some states and areas simply lends itself to more or fewer teams with sterling records, due to things such as lack of overlap of common opponents. (We can go into this in more depth later.)

In Florida, Class 6A had a grand, aggregate total of one (1) undefeated team entering the playoffs last year. Northwestern had zero chance to play even a single undefeated team, mostly because many of the teams within the 6A tourney had played others within the 6A tourney during the season. Somebody had to lose. This year, they have what may be a recent record of four unbeaten teams.

Class 5A had a total of two unbeaten teams last year. This year they have one.

Point is, the opportunity to play teams with no losses just does not exist for the elite teams in Florida. There's something in the way teams are classified or grouped, or in their scheduling, that makes it mathematically impossible for there to be as many teams with unblemished records in Florida as in many other states. This is at the root of why people, and computers, often attack the strength of schedule of Florida teams. The reality is not that these teams with losses aren't good; they just play each other and beat each other up. Measuring SOS in prep football is far more complex than just looking at opponents' records.
 
 

The West Catholic League in California has 7 teams.
They are 27-2-1 in non-league games:
1 loss to national power DLS
1 loss to a 9-1 team, 20-13
and a tie vs. a 9-0-1 team, 3-3

But they beat up on each other in league games.

The last-place team went 0-5-1 in league but 4-0 out of league.
The second-place team beat the first- and third-place teams but lost to the fifth- and sixth-place teams.
 
Florida 6A has 83 teams.

Ohio Division 1 has 116.

Texas 5A has something like 256.

Does that help you any?

I fully realize that. Ultimately, what I'm getting at is this:

Many teams are credited for playing tough schedules solely on the basis of the record of their opponents. Take SLC in 2005, for instance. They beat however many undefeated teams in the playoffs and as such were lauded as playing the toughest schedule ever, etc. Fact is, there were just more teams who racked up undefeated records against weak schedules for SLC to play. Ya know, if Prep Team X were to play the Dolphins, Raiders and Rams in the playoffs, their opponents would have a combined record of 3-24. If you didn't account, in your mind or in the programming of a computer, for the different level in quality of opposition, you'd conclude that Team X played a weak schedule.

Or, for those who still don't get it, look at college football. When Hawaii plays Boise State Nov. 23, they'll be playing a team that will be 10-1 and ranked no worse than #17 in the country. Clearly, many will credit Hawaii's schedule strength for that particular game. Yet Boise State, in 2007, has beaten NOBODY of significance and actually has two poor showings. They lost by two touchdowns at a Washington team that is 1-6 in the Pac-10. They needed a gazillion overtimes to win a home game against a Nevada team that lost 52-10 to Nebraska (no, not the '95 Huskers). Beating Boise State should not carry much weight on the basis of BSU being 10-1.

Records alone can't tell you how strong a schedule is.
 
LOL, I agree and understand where you're coming from. But, you must not have read some of the posts from others over the years. MANY people do not get it.
 

Calpreps' model, despite any shortcomings, addresses this issue to a large degree. It does not simply look at wins and losses. It analyzes margins of victory based on your opponents' results (which are based on their opponents' results etc etc).

As such, if you play a 10-0 team whose victories came against opponents with poor records, it won't do much for your SoS. However, if that team beat a bunch of teams with respectable records, your SoS would improve.
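A minimal sketch of a margin-based iterative rating in the spirit described here (NOT Calpreps' actual code, which isn't public as far as I know): each team's rating is repeatedly recomputed from its opponents' ratings plus its margins against them, so beating a strong opponent helps more than beating a weak one.

```python
# Iterative margin-based power rating (illustrative sketch only).
# Each pass sets a team's rating to the average of (opponent rating +
# margin vs. that opponent), then re-centers so the mean rating stays 0.
from collections import defaultdict

def power_ratings(games, iterations=200):
    """games: list of (team, opponent, margin) from the first team's view."""
    schedule = defaultdict(list)
    for team, opp, margin in games:
        schedule[team].append((opp, margin))
        schedule[opp].append((team, -margin))
    ratings = {t: 0.0 for t in schedule}
    for _ in range(iterations):
        new = {t: sum(ratings[o] + m for o, m in played) / len(played)
               for t, played in schedule.items()}
        mean = sum(new.values()) / len(new)
        ratings = {t: r - mean for t, r in new.items()}
    return ratings

# Hypothetical round-robin: A beats B by 10, B beats C by 10, A beats C by 14.
r = power_ratings([("A", "B", 10), ("B", "C", 10), ("A", "C", 14)])
print(r["A"] > r["B"] > r["C"])  # True: A rates highest, C lowest
```

Note how this captures the point above: the value of a win depends on the opponent's rating, which in turn depends on that opponent's results, and so on through the whole graph of games.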
 
CalPreps SOS is average power rating of opponents, not combined record of opponents.

Yes. As I mentioned, it factors in the record/margins of victory of teams and their opponents, not just records. I believe the power rating is derived from margins of victory.
 
The power ratings are derived 100% from the combination of opponents' records and margin of victory.

Which means that, if the NFL teams were included in the prep rankings, all of the NFL teams would rank towards the bottom of the ratings. Their opponents' records and margins of victory couldn't compare with those of a Marion Local, for instance. OK, maybe the Patriots could crack the top 100 or so, but that's it. And perhaps you guys feel that would be accurate, lol. Laugh now, but that's to some degree how teams like Marion Local, OH and Gilmer, TX are ranked so highly. If all you're looking at is records and margin of victory, with no value assigned to the difficulty of the level of competition, you can get results like this. And never mind teams like Garden Plain, Kansas.
 

Which, if true, would knock DLS, who are famous for playing third- and fourth-stringers in the second half and seeing 34-0 halftime leads end up 40-22,

or 48-0 leads end up 48-20,

or 49-18 leads end up 56-38.
Anyone who thinks Elder belonged on the same field with DLS is on drugs.
 

True indeed. There are MANY factors which can influence margin of victory yet tell us nothing of how the two teams really matched up. Humans, if they do their homework, can go through and more accurately identify these things and account for them. The computer simply sees DLS as 18 points better than Elder. It gave maybe 1.5 points for home field, which we know was worth more than that for Elder, and then the 18-point margin. The reality is DLS traveled 2,000 miles to play in front of a hostile sea of purple and obliterated Elder.
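The two adjustments discussed in this thread can be sketched in a couple of lines. The 1.5-point home-field value comes from the post itself; the 21-point cap on blowout margins is purely my illustrative assumption, not a documented Calpreps rule:

```python
# Hedged sketch of two common margin adjustments (illustrative numbers).
HOME_EDGE = 1.5   # home-field value in points, as estimated in the post
MARGIN_CAP = 21   # assumed cap on how much of a blowout counts

def predicted_margin(visitor_rating, home_rating, home_edge=HOME_EDGE):
    """Expected visitor margin; negative means the home team is favored."""
    return visitor_rating - home_rating - home_edge

def capped_margin(points_for, points_against, cap=MARGIN_CAP):
    """Clamp the observed margin to +/- cap before it feeds the ratings."""
    margin = points_for - points_against
    return max(-cap, min(cap, margin))

# DLS (rated ~18 points better) visiting Elder, as in the example above:
print(predicted_margin(18.0, 0.0))  # 16.5
print(capped_margin(56, 38))        # 18
print(capped_margin(48, 0))         # 21 (capped from 48)
```

A cap like this is one mechanical answer to the DLS complaint: a team that empties its bench at 48-0 and wins 48-20 is not punished much relative to a team that keeps its starters in, because only the capped portion of the margin counts.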
 
I am pretty sure they attempt to create standings that best "predict" results that have already happened.

In their latest system, if the NFL and high schools were being taken together, the NFL would be artificially ranked higher at the beginning of the season, and once all of the teams were linked, the effects of the preseason artificial rankings would be eliminated.

Unless a high school team beat an NFL team, the NFL teams would be ranked higher.

If the teams were never linked, the relative rankings would be meaningless (but I believe the CalPreps site addresses this).
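The "linked" idea can be made concrete: two teams are linked if some chain of games connects them in the graph of everyone's schedules. A sketch with hypothetical team names:

```python
# Two teams are "linked" if a chain of games connects them.
# A breadth-first search over the game graph finds such a chain.
from collections import deque, defaultdict

def are_linked(games, team_a, team_b):
    """games: list of (team, team) pairs that played each other."""
    graph = defaultdict(set)
    for x, y in games:
        graph[x].add(y)
        graph[y].add(x)
    seen, queue = {team_a}, deque([team_a])
    while queue:
        t = queue.popleft()
        if t == team_b:
            return True
        for nxt in graph[t] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

games = [("KS1", "KS2"), ("KS2", "MO1"), ("FL1", "FL2")]
print(are_linked(games, "KS1", "MO1"))  # True (via KS2)
print(are_linked(games, "KS1", "FL1"))  # False (separate components)
```

A long chain is still a link, which gets at the later point about Garden Plain and Northwestern: the link can technically exist yet be far too tenuous to support a meaningful comparison.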
 

The post-game comments by the Elder coach said it all.
 
"if the NFL and high schools were being taken together, the NFL would be artificially ranked higher at the beginning of the season"

As I read it on the site, the ratings used before the season are influenced only by past ratings. If the NFL were newly introduced, there would be no past ratings to go off of. And the Calpreps site, and the guy who posts messages there, is adamant that there is no subjective valuing or weighting going on in the programming of the computer. The ratings would then just look at records and margins of victory.

Is there really a "link" between Garden Plain, KS and Miami Northwestern? Garden Plain is rated 24 spots higher than MNW. When you begin to try to trace the "linkage" between the two teams, it doesn't take long before you realize it is so far-fetched it's silly.
 
It looks like the preseason ratings now have a subjective component, which is eventually eliminated.

Yes, those teams are probably linked. It may be a weak link, and it may not result in accurate ratings (however you define that), but there is probably a link.
 

Few if any teams belonged on the field with DLS in 2006.

Elder shouldn't have stepped on the field in nearly half of its games that year.

Hey, aren't you the guy always begging and pleading for people to say that Elder was a really great Ohio team that year?
 