
ratings

  • D. Atkinson
    Message 1 of 50, Mar 27, 2001
      I was surprised to see all the messages supporting a traditional disk after the
      online poll had a crushing 11-2 victory for the extended ratings the last time
      I looked.
      My goal is to do a 1-5 disk again. Jeff hasn't signed back onto this list
      since recently getting a new email address, but my last emails from him
      yesterday pretty much confirm that he doesn't want to mess with 1-9
      either.
      With that said, there are a few things that can happen:
      1) the email group splits and goes its separate ways, one group
      doing the 1-5 and the other group doing the 1-9;
      2) a 1-5 disk gets done as in the past, and those who want extended
      ratings lead the charge in an organized manner to modify the disk
      that way;
      3) a 1-5 disk gets done as in the past, and those who want extended
      ratings all run off and do their own thing.

      Although I won't have the time or energy to go beyond the 1-5 and
      help with extended ratings, I really hope that scenario 3 doesn't happen.
      If you want to do extended ratings right, you'll want a good group
      together, with EXTENSIVE testing. If everyone does their own extended
      ratings, most of them will be crap. I'd also hate to see scenario 1
      happen, as it would dilute the rating talent on both sides.
      I thought that perhaps providing the fractional ratings to an extended
      ratings team would help, but I realized that well over 100 skaters get
      tweaked, often in multiple areas, due to testing results. What the
      extended people would be dealing with, ratings-wise, would be very
      different from where the 1-5 disk ended up. So, the extended ratings
      people will have to decide if:
      1) they want to change the final 1-5 disk, or
      2) they want to use the early pure fractional ratings, without the
      regular tweaking, and build from there.

      Those of you really set on doing extended ratings should help answer
      some of these items, so that a good dialog can get going and get
      everyone what they want from the disk. I guess I'll informally serve as
      the 1-5 contact. Anyone want to lead the decision-making for the
      extended team, so that we can make sure our activities are mutually
      beneficial?

      Also, both sides will need some extra grunt work this year getting the
      stats from the nhl.com page. They changed software systems, and a
      full clean dump doesn't seem possible anymore. I'd love to get the full
      stat set again, but we may end up piecemealing downloads of individual
      stats. If we do it as a team, we can get quite a bit, but I don't know if
      we will be able to get everything. It is vital that we get the important
      stats that aren't available on normal stats services (like shorthanded ice
      time and faceoff data), since those are the ones we really need for
      making the disk. We can get games/goals/assists/shots anywhere. I guess
      we'll be looking for volunteers shortly to start the grunt process of
      inputting a stat category, and cutting/pasting the listings off the page.
      We should try to
      be organized about it, so that we avoid duplication of effort. Anyone
      have a better idea of how/where to get full stats without pulling it off
      the nhl.com site one stat group at a time?
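
      Something like this could split up the piecemeal pulls -- a rough
      sketch only, since the real nhl.com page layout would have to be
      checked first; the URL pattern and category names below are made-up
      placeholders:

          # Rough sketch of a piecemeal stat pull, one category at a time.
          # BASE and CATEGORIES are hypothetical placeholders.
          import urllib.request

          BASE = "http://www.nhl.com/stats/{cat}.html"  # made-up pattern

          # Categories we can't get from the normal stat services; each
          # volunteer claims one so we avoid duplication of effort.
          CATEGORIES = ["sh_ice_time", "faceoffs"]

          def fetch_category(cat):
              # Download the raw listing page for one stat category.
              with urllib.request.urlopen(BASE.format(cat=cat)) as resp:
                  return resp.read().decode("utf-8", errors="replace")

          for cat in CATEGORIES:
              with open(cat + ".html", "w", encoding="utf-8") as f:
                  f.write(fetch_category(cat))
              print("saved", cat)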

      One other thing: to those of you keeping scouting reports,
      you have my sincere thanks.
      We will want to use them once we get the stats issue
      straightened out and get the review process going.

               Dave

    • D. Atkinson
      Message 50 of 50, Oct 22, 2010
        Great description Herb. And thanks for all the hard work. There would be no disk without you either, as the entire package is definitely too much for one person. I appreciate your efforts. It was fun to hang out with you in Toronto and see that Leafs game....I had been to the old Maple Leaf Gardens, but not the new place. The seats were much more comfortable in the new building :)

        One thing to add below....not only did all the teams come in strong on SOG and SOG allowed, but also on GF and GA. All less than 5% out. Individual players also were all less than 5% out on SOG/min played (well, all but 18 of the 750 or so skaters were in, and those 18 were all low-use players that are hard to get in line due to low minutes played). The variance on the numbers was smaller this year after testing than last year, or any other year that I can remember. It was a good set. There may be a few ratings in there that could go either way, and even a few that one may argue are off, but they were the borderline reviews that needed to be where they were to get the disk performance in line. Consider them players that under- or overplayed last season. The disk is pretty solid based on the approximately 800,000 test games played.
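
        In concrete terms, that tolerance check boils down to something like
        this (a rough sketch with made-up numbers, not the actual test
        harness):

            # Sketch of the 5% check; the totals below are invented.
            def pct_off(simmed, actual):
                # Percent deviation of a simulated total from the real one.
                return abs(simmed - actual) / actual * 100.0

            # Team level: GF, GA, SOG and SOG allowed all under 5% out.
            actual = {"GF": 236, "GA": 221, "SOG": 2610, "SOGA": 2488}
            simmed = {"GF": 229, "GA": 228, "SOG": 2654, "SOGA": 2431}
            for stat in actual:
                off = pct_off(simmed[stat], actual[stat])
                print(stat, "%.1f%% out" % off, "" if off < 5.0 else "<- review")

            # The player-level check is the same idea on SOG per minute
            # played, which is why low-minute players are hard to get in line.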

        There are some issues with the disk that were unfixable, as in past seasons. First and foremost, players with a fairly high assists/min (> 0.015 ast/min) run very low in assist totals in tests, especially if they took quite a few shots. This is a game engine issue, and has been present forever in the game.
        There are a few goons that get too many penalty minutes. I have all ratings set to zero on a few players (except for fights), and they still get way, way too many PIM. This is also a known issue. The good thing is, this is mainly with players that have nearly as many PIM as minutes played, so they have little impact (especially while spending substantial time in the box). So, if you end up playing Bissonnette 10-12 minutes per game, he may end up with 500 PIM.

        There are also a few goalies that get too many PIM. I still don't know why this happens with a couple of goalies every season. This is the old Jason Muzzatti issue (for those of you that remember that one). So, there are 3 or 4 goalies with 2 or 4 PIM in real life that got 15-20 consistently in the test seasons. The penalty rates are all set to zero, so I'm not sure what drives this. The impact should be small overall.

        Other than those things, the disk is pretty solid, and leagues should be pretty satisfied with it.

        If anyone finds any more errors, please report them ASAP so that they can get fixed before leagues start using the files.

        Thanks, everyone, for your patience this year. It was a challenging year time-wise for both Herb and me.

                  d

        Herb Garbutt wrote:

        Hey everyone,

        Meant to post something earlier but was busy setting up my league and revamping our website. Anyway, I see the ratings questions starting to pop up so I thought I would explain this year’s process.

        First, I sent out a base set to those who expressed an interest in rating (last year’s ratings, plus my own ratings for rookies).

        Next, once the raters reported back, I went through each team. If the majority of people rating a team had a change for a certain rating, I made the change. At the same time I began doing checks for consistency (basically this entails calculating the average rating on each team for each skill). This is important because, with different people rating different teams, they all have different standards as to what is a 2, 3, 4, etc. The purpose isn’t to get everyone the same but to identify teams that are much higher or lower in a particular rating. Forecheck seemed to have the biggest variance.
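
        As a rough sketch, that consistency check amounts to the following
        (the data layout and the 0.5-point cutoff are illustrative
        assumptions, not the actual method):

            from statistics import mean

            # ratings[team][skill] -> that team's player ratings (1-5).
            # Tiny made-up sample; the real set covers 30 teams.
            ratings = {
                "TOR": {"forecheck": [3, 4, 2, 3, 3]},
                "MON": {"forecheck": [4, 5, 4, 4, 5]},
            }

            def flag_outliers(skill, cutoff=0.5):
                # Flag teams whose average sits far from the league mean.
                avgs = {t: mean(r[skill]) for t, r in ratings.items()}
                league = mean(avgs.values())
                return {t: round(a, 2) for t, a in avgs.items()
                        if abs(a - league) > cutoff}

            print(flag_outliers("forecheck"))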

        Then Dave ran the first set of tests.

        At this point two things happened. First, when the results came back they identified teams that needed changes, so I went back to the raters’ submissions and made further changes (where a suggestion was not a majority but suited the purpose of bringing the team in line). Second, I began doing individual player reviews to check for ratings that deserved changes but may have got by the raters. My individual reviews were for the most part on players chosen at random, so as not to show any bias toward certain players or teams.
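
        The random selection part can be as simple as it sounds -- a sketch,
        with a made-up list standing in for the full roster file:

            import random

            # Stand-in for the full skater list from the disk.
            all_players = ["Redden", "Gunnarsson", "Bissonnette", "Kaberle"]

            # An unbiased batch for individual review between test runs;
            # the batch size here is an arbitrary illustrative choice.
            review_batch = random.sample(all_players, k=2)
            print(review_batch)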

        Then Dave tested again.

        By this point, most of the rating changes suggested by the reviewers were exhausted, so I began concentrating my individual reviews on the teams that were still not performing as expected, although between test runs I continued to do reviews on random players throughout the league. This process continued through about eight or nine rounds of testing.

        Last year, I did individual reviews on every player. This year, because of the computer problems, I got a late start. (In the end there were only three teams where I reviewed every player -- NJ, Mon, Ott -- and there were some where I did as few as 3 -- Det.) I hadn’t updated the scouting reports, so as I did an individual review on a player, I updated his report. It was a good chance to update the scouting report as well as check ratings with the most current information.

        In the end, I only did individual player reviews on a little under half of the players (386 of 814). So even now, as I’m setting up my own league, I’m seeing some that I don’t quite agree with (Wade Redden as a 3 comes to mind….wish I’d reviewed him). If a team was performing within the parameters (which Toronto was throughout testing), the only reason a rating would change was if I selected that player for an individual review.

        No process is perfect. As I’ve said in past years, we could probably have pro scouts do the ratings and there would still be ratings we don’t agree with.  In terms of testing, Dave said this year’s set produced the best results we’ve ever had. Each year the goal is to bring teams within 5% of shots for and against. This year we got 29 of 30 teams within 4% (20 of 30 inside 3%). That may not seem like a big deal but considering some teams start out as far out as 12%, it’s pretty good.
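
        In arithmetic terms, that year-end summary boils down to counting
        teams inside each tolerance band (a sketch with invented per-team
        deviations, not the real test output):

            # Percent off on shots for/against, one value per team (invented).
            deviations = [1.2, 2.8, 3.9, 0.7, 4.6, 2.1]

            for band in (3, 4, 5):
                inside = sum(1 for d in deviations if d < band)
                print("%d of %d teams inside %d%%" % (inside, len(deviations), band))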

        I’m not even sure we got everyone within 5% last year. There are always one or two problem teams each year, and I think in the past couple of years we’ve completely exhausted every change we could make on at least one team and basically had to accept them as close as we could get them. In some years we’ve even tried making changes on their division rivals, hoping that would have enough effect to bring them in line. We didn’t get to that point this year.

        So we did well, but are all 8,140 ratings (10 for each of the 814 players) going to be correct? No. If we get 95% right I’d say we’ve done a remarkable job considering we’re not pro scouts…. But even at that rate, we’d have 407 incorrect ratings. We will never get it perfect. All we can do is the best we can.

        I’d just like to say thanks to Dave for everything he’s done for APBA hockey leagues all over North America. Without him, we wouldn’t even be discussing whether Gunnarsson is a 3 or 4. I finally got the chance to meet Dave this year and as we were talking we had the realization that we’d been communicating by e-mail for more than a decade. He’s been producing these disks for all of us for 13 years (or is it 14 now?)….using his vacation time, giving up time with his family and running test season after test season while the sun is shining outside. Without him, this game (IMO the best there is and even better since Dave started doing the season disks) would have died a long, long time ago.

        Enjoy your seasons and for all of you with Gunnarsson on your team, enjoy your gift.

        Herb
