Clarification: Expectations for Filling Out Evaluations

If people don’t write evals they personally promised to write, the problem is not that the requirement wasn’t explicit enough.

And in an earlier comment you write that you expect this clarification to affect 10-15% of games. So unless there is a significant correlation between an event being a keystone in an official’s career and that event having a CH who falls under your criteria, it seems you’d only save people from having to ask for these helpful evals in 10-15% of cases.

Based on my experience, the important part of how someone provides feedback is what is provided (or not provided) in person - what, when, and how. Mandatory evals won’t tell you this.

It seems to me that your proposed solution doesn’t solve the problems you want to solve. But I can see some things that might be more helpful while creating less work:

  • For items 2 and 3: Spell out the criteria that make an event likely to produce helpful evals and encourage people to ask for evals for these events. The community can assist in removing barriers here by making “I’d like an eval” a default checkbox on application forms as suggested by Twixxi above.
  • For item 4: Make it clearer that how a CH gives feedback is important info to include in OOSes and evals. Maybe even encourage CHs to ask for evals from the people they are giving feedback to. Or allow one OOS to be from a mentee when someone is applying for a high level. (One OOS is already required from someone who often sees you while they are in a head role - when you are usually the head yourself, maybe that one could instead come from someone who often sees you when you are in a head role.)
2 Likes

There are many good points above as to why making this a requirement is burdensome not only for Head Officials and specific demographics, but, I would think, for Cert as well, which would need to cross-reference staffing against opt-outs and against game/tourney-specific opt-outs. It just sounds like a lot of work for everyone.

So in an effort to not rehash those points, I’ve got a few questions/thoughts.

  1. Is the issue that there are not enough evals being submitted, or that the evals that are submitted don’t contain enough information, or the right information? Those are two different things, and flooding the system with more evals doesn’t solve not having enough information.

I would hope that Certification views a lack of information in a positive way and assumes, because there is nothing to the contrary, that the Official is performing at the expected level.

  2. Has Cert determined the root cause of the lack of evals, or the lack of specific information on an eval? Is it the form? Is it the questions on the form? Is it the format of the form, or the way it has to be submitted? Is it accessibility issues? Is it an “expectation” of the form (i.e. time, amount of information, etc.)? What is holding people back from submitting on time with the information Cert needs? And is THIS requirement the answer - does it really fix that issue, or is there a better solution?

  3. Will the same requirement be made of teams participating in a sanctioned game/tournament? Way, way back in the day, a specific function of CERT was to provide skaters/teams with a way to quantify an Official’s level of experience and abilities. Skaters/teams were also a more prominent part of the process and system. If we flood the system with more and more evals from Officials, will that drown out the skater voice? Is there a reason this requirement isn’t also happening on that side of things?

8 Likes

But this Clarification doesn’t actually solve these issues. There’s a very significant portion of officials who rarely or never work with “higher level” Certified Officials at all, especially ones who are focusing on the Head Official role. This Clarification does not help them at all; instead it will put them at an even greater disadvantage, because they’ll have significantly fewer evals than those who happen to work with the Head Officials this Clarification applies to. So those individuals will be under even more pressure to request evals, meaning they’ll have to try even more frequently to find ways to overcome the very barriers this Clarification seeks to remedy.

It doesn’t solve this issue either, as much of the best feedback does not come from the evaluation process. Not being good at writing evaluations does not mean you are bad at providing feedback, and honestly, being able to write good evaluations doesn’t mean you’re good at providing feedback. It will also disproportionately impact those who are dyslexic or have other reasons why that kind of writing is hard. The way to resolve item 4 seems to be to encourage HOs to request evaluations from their crews.

I have other points too, but I’ll save them for after other points made previously have been addressed.

6 Likes

How often do this and the other reasons you’ve given come up for you in total? 20 times a month? I’m just guessing.

How many officials are there in the system? 1000? I’m just guessing.

How often do those officials work with each other in a way that would generate an expected eval? Hard to say. I’ll throw out a number. Let’s say 24 per official per year, or two per month per official.

How many evals in theory would your expectation result in if fully implemented? Doing the math, that’s 2,000 evals per month that you’re saying is the “expectation.” If each of those evals took 10 minutes, that’s 333 labor hours to write those 2,000 evals per month, if my math is correct.
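For anyone who wants to check my math, here it is spelled out. All three inputs are just my guesses from above, not real figures:

```python
# Back-of-envelope check, using the guessed numbers above.
officials = 1000        # guessed size of the officiating pool
evals_per_month = 2     # guessed evals each official would owe per month
minutes_per_eval = 10   # guessed time to write one eval

total_evals = officials * evals_per_month           # 2,000 evals per month
labor_hours = total_evals * minutes_per_eval / 60   # ~333 labor hours per month

print(f"{total_evals} evals/month -> {labor_hours:.0f} labor hours/month")
# 2000 evals/month -> 333 labor hours/month
```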

Do you feel that the “extra” work you’re doing as a committee, following up or writing reviews with less-than-perfect packets of info, would be reduced by so much that it’s worth 333 labor hours per month from the whole community - all to get you the equivalent of the 20 instances per month where you, Cert, wish you had a little more info, which you say you get when you reach out to the head officials directly? Something tells me you’re not putting in 333 extra hours per month (the equivalent of two full-time jobs) due to the issues you’ve mentioned. Maybe you are.

Why is everyone else doing extra labor every month (labor that may never get used for any review) the solution, instead of following up on those 20 specific instances? How many people signed up for the system but will never apply for cert? 50%? 80%? So you’re asking us to put labor into a void for people who’ll never go up for cert - evals you’ll never read, time we’ll never be compensated for - to save you the time on those 20 instances per month (or whatever the number is where you say missing evals are the critical piece).

I’ll do the labor if you’re telling me that that is what makes sense, instead of changing the OOS questions or something else to target the real issue, whatever it is. But tell me why me doing the extra labor, along with the 999 other officials in the system, is the best solution to the real problem.

Is the real problem… people aren’t applying to cert? Ok, and? So they’re not applying. Maybe they don’t care. Maybe there’s no benefit to it. Maybe they don’t have mentors helping them down the path, and that’s the real problem. Me writing more evals doesn’t fix that.

Is the real problem… people are under-awarded levels? Ok, well, Cert is a subjective system, and this is always going to be a problem in a subjective ranking system. Can it improve? Sure. Why are you convinced more evals will help this fundamental flaw of a subjective ranking system?

So people aren’t awarded a high enough level. Ok. And? What’s the consequence? Cert will thus seem insufficient to help skaters understand who to staff? THs don’t have enough info to staff events? I’ve helped staff quite a bit this year, and that hasn’t been my experience. Maybe others think this particular situation is BAD ENOUGH that the logical solution is more evals that may or may not ever be used? If that’s the case, you’re going to have a hard time drawing a one-to-one line for me that ties these together.

Should I be better about turning in timely evals? Sure. OOSes, recommendations for playoffs and other events, providing feedback in-game and one-on-one after games… yes, all that, and it’s not enough. I can always do more. We all can. It just sucks that we switched to OOSes to avoid all these evals we used to have to write and read, and yet now we’re pushing to add evals back in, so people have to do OOSes AND evals as an “expectation.” It didn’t fix things in the old system, and something tells me it’s not the key component that’s going to “fix” things now.

Though I think we should be defining the problem better, and making it clear to me how published “expectations” are the best attempt we have at addressing the real problem in a way that’s sustainable in a volunteer model for a subjective ranking system.

7 Likes

The number of people I have asked for evals who have told me “I don’t really do them” - or who said yes but, like bullseye, have left me still waiting six months later for my postseason game evals…

Would certification be okay with all my evals coming from the same 3 people? And those also being the people who probably do my OOSes?
Because there literally aren’t that many people who head official in Australia, I feel like I’m gonna be stuck, or forced to travel beyond my financial means, to be able to get different people to eval me.

1 Like

I have a few things I’d love to say here, but I want to start by getting Cert’s answer to one question: how does Cert identify who is “focussing on the Head Official role”?

For example, if I apply for Level 3 can I email Cert and say that I am not focusing on HNSO, and therefore be exempt from these Eval expectations?

Is it that Cert assumes an Official’s focus role is the one they’ve done the most? If so, is there a window of “last X months/years” that Cert uses?

Does Cert allow Officials to focus on multiple roles, or is there only one at a time?

The concept of “focus role” has been used repeatedly in this thread, and I still don’t know what it means.

4 Likes

Obviously first, congratulations to Certification on your first post.

The headline here is that I don’t like the additional requirements these clarifications will create, and I think it’s clear from people’s responses that, despite Cert’s stance that these evals were already required - that these are not new requirements but clarified ones - most everyone feels they are new.

But also, I know I’m bad at writing evals. But also but also, I don’t think the current system helps. After regionals I sat down and made notes for evals for my entire crew. Then I went in and wrote the evals for everyone I could find on the form. I was surprised how many people weren’t listed! Then I found out months later that I missed at least one high-level Official because, for whatever reason, I didn’t find their name in the drop-down menu. I feel terrible, and I hope I didn’t miss others, but I didn’t skip that eval because I thought it didn’t matter or because I didn’t want to write it. The form confused me. So that’s one reason.

My biggest issue though, by far, as others have mentioned, is that for as long as certification has been around, it has been about performance on the track (politics aside, that was the intent). This adds an additional off-the-track component and says “if you’re not good at writing feedback down and submitting it in a timely fashion, you’re not a good leader of a crew,” and I’m sorry, but that’s just not true. Those are two different skill sets. This requirement will reward those who are poor communicators in person, poor communicators in the moment, and poor at leading in a game, but good at writing down what they were trying to convey. And it will penalize those who run the game well with quick tidbits of needed feedback and then forget about it because the game moves on. It creates the possibility that our best crew heads will be penalized.

We used to require evals from captains. Skaters rightly pointed out that the skills to be a good captain did not include evaluating officials. I will now point out that the skills required in being a good official do not include evaluating officials. Even as a leader.

What certification clearly wants are evaluators. And they should want that, we all should. Consistent voices who are good at the evaluation and then the writing down of that evaluation and the submission in a timely fashion of that evaluation.

But I don’t think this is how we get that. This feels like introducing Certification Level 4 and saying “we’ve always been a volunteer organization, so obviously the best officials volunteer in the organization, so if you want to be a Level 4 official you need to have been on a committee for two years or be serving as the chair of one now.” It’s as true that we should all volunteer on committees as it’s true that we should all write evals, but neither one makes someone a better official.

I also worry that this will get people writing evals for their own certification evaluation, not the subject’s. As in “I better put that I corrected them on this and that, so I can get my level three”.

10 Likes

I think we should expect our crew heads and tournament heads to be evaluating their officials without being explicitly asked, because most of us are doing that constantly. If we’re giving feedback after an event, that feedback is already based on our evaluation of a person, and we could take the extra couple of steps to capture it on an eval form. For our part, we can look at how we reduce the burden of doing that form - for example, we’re moving to a docs-based OOS which has inline guidance and supports speech-to-text and translation, and we already accept feedback in any language.

We overwhelmingly see the best quality and quantity of evidence to certify from people with greater privilege - whether in access to games, protected characteristics, or any other axis of privilege. That means it is easier for us to make decisions about those officials, which further entrenches systemic inequalities (the rich get richer, or the old boys’ club).

There are also other ways you can demonstrate that you’re giving great, actionable feedback: through OOSes, and evals from peer officials and those you’ve mentored. We know that not everyone is able to write evals, or has the writing skills, capacity, or time.

As Cert, we have limited levers to improve this, and we’ve been told time and again that the biggest barrier to getting evals is feeling you have to ask, and not having the confidence to do so, or feeling it’s an imposition (I have nearly 400 games and don’t have cert, in part because as an indie official I don’t have any athlete participants I have the confidence to ask for an OOS).

Are we going to check whether you’ve done evals for every game ever (especially when you’re working with the same people over and over), or get out the calculator to check your percentages? No. We’re going to look at the numbers over the next 6 months, and we’ll also keep open minds as to what this looks like.

But we are going to be asking questions if as an L3 applicant you’ve been a THO or CHO 20 times this year and have never written an eval. You benefitted from the support and mentorship of others, who have invested time in you down the years (and for this application), so we want to see that you’re doing the same and not pulling the ladder up behind you.

2 Likes

"if as an L3 applicant you’ve been a THO or CHO 20 times this year and have never written an eval. "

How will Cert know whether or not the officials opted out, even though they’re eligible?

4 Likes

I agree - this statement is very poorly thought out. It’s made on the assumption that everyone has the same density of certified officials available to them. Whereas in geographically isolated areas (taking Australia as an example) we have fewer than 50 in the entire country, and while a tournament is unlikely to ever have the THO/CHO as its only certified officials, it’s entirely plausible that a CHO is the only certified official on their crew.
I’ve quite regularly been to games where there are only 2 certified officials at the whole event (and neither of us in a Head Official role, or remotely in a position to give evals to each other)…

4 Likes

@wishbonebreaker this is pretty hypothetical, but someone who only has a couple of games as Head Official probably isn’t focusing on that role. Someone with a bunch of games but very few eligible crewmembers faces low demand for evals, so we would expect at least some to come in. But someone with many games for whom every qualified official is opting out of an eval… that sounds like there is more than meets the eye. The message here isn’t “our system will be fine;” it’s that we’re willing to dig in and see what the real message is in cases like this. Edge scenarios get explored more carefully.

@bullseye and @muggles it sounds like you’re saying there are few officials in your area to whom this clarification of expectations would apply. We hear you and acknowledge that there are many problems with evals as they exist, beyond the ones we hope to address. Please stay tuned for more about evals this week.

@revroit I trust that @sticksandstoner’s response addresses your concerns – you can show good leadership and feedback in other ways, but if you choose to engage in Certification for yourself you will need to contribute to it as well for those coming up behind you.

@blind.io_he.him When someone applies for certification, we evaluate them across roles. To meet Level 2 or Level 3 standards, you must be excellent or exemplary at at least one role. We call that the “focus role.” The naming is not perfect so thank you for asking. This clarification is that, to be considered excellent or exemplary in the Head Official role, you need to show that you provide feedback to your crew. And to be part of the Certification system at those higher levels, you need to do so in a way that supports your crew’s Certification goals. That’s evals.

So, if you’re applying for Level 3, you don’t need to email Cert to tell us your focus roles. We will look at all roles based on all of your feedback. It will be harder to reach an L2 or L3 standard in the Head Official role if you are not filling out evals for your crew, but you could reach that standard in other roles instead, and still get Level 3.

@ everyone – thank you again for your comments. Cert Oversight is continuing to read and watch these threads; the goal is for us to respond to questions and factual inaccuracies, but mostly to just read what folks are sharing. Over the next months we hope to integrate these threads with the survey results, report back on the contents, and then start thinking about ways to address concerns for the future. Please continue to share your reactions and thoughts!

1 Like

I’m not a fan of this clarification.

I feel like Cert is slowly drifting and changing expectations. From the first paragraph on evals on the how to get certified page:

“Evaluations are available to Certified Officials, and also to Uncertified officials who opt in. Evaluations are not required as part of the certification process, but are strongly encouraged, especially for Level 2 and 3, to be able to illustrate the breadth of knowledge and performance.”

So the public page indicates that evals are not required - even bolding it - and are just strongly encouraged. Then we “clarify” here that actually, it’s a core expectation. I think that makes it not a clarification but a new expectation. I feel like the definition of an eval on that page is clearer than the “long stated” expectations of Level 2 and 3 you mentioned in the first post, as it doesn’t need the bracketed additions to connect things like “formalized feedback must equal eval must equal paperwork,” which I feel is not an obvious connection to make.

It’s also an expectation that I’d only know about if I’m keeping up on the forums, which is not an expectation (or is it?) for every official going through cert.
I’ll concede maybe not every official going through cert needs to stay up to date on the forums, since this only applies to a subset of officials, but if this is how we are normalizing our change management, we are definitely leaving the non-forum-readers behind. I suspect this also hurts remote areas (like mine) where no one can nudge you to read the forums when things like this come up, and it likely also hurts people with weaker English skills.

I will summarize my opinions here with:

  • We have a cert system that many areas are already not engaging with due to a perceived poor value-to-work ratio (e.g. areas of Europe, Canada, Australia).
  • Cert seems to be steadily moving toward making MORE work for everyone involved in cert, without changing the value that comes out of that work.
  • Areas I think I’m strong in (giving feedback) are being devalued because I don’t provide feedback in the way cert requires - I do it in person.
13 Likes

For me this raises questions about a more general topic, which is how well Cert is functioning as a system in recent months, and the role that evaluations play in that system. Before I talk about that though, I want to make it clear that while I identify issues with the system, I believe that the people working on Cert are doing very important work, and I am grateful to all of them. Your work is greatly appreciated - I don’t know if you hear that enough. Thank you.

I want to look at two aspects: the volume of certification applications being processed, and the work required. I look at these two aspects in comparison with the old system, which stopped in 2016 because it had become unsustainable.

  • Volume of applications: we are nowhere near the old volume. In the last two months of the old system there were 61 SO applications, see the old thread here for those who still have access. Recent information about the volume of applications for 2024, presented here, mentions 55 SOs for the whole year.
  • Work required: I think we have all seen that list getting longer and longer last year in that thread. My certification review took 3 months last year. In the old system, my certification reviews took 1 month each. We cannot compare things 1-to-1 because the system is different and the workforce is different; nonetheless, from an applicant’s perspective, the new system is less efficient than the old unsustainable system. I believe we also generally expect the new system to function better, i.e. to make better-informed decisions, and of course this requires work.

Now, the way I remember things, one important aspect of the “new” (2018) system was to make the workload more manageable for the unsung (or not sung enough) heroes who work for Cert. And, still according to my memory, one major innovation was to reduce the number of evaluations and make them optional, while introducing OOSes, which contain more meaningful information. Sure, an OOS represents more work than an eval, but we write far fewer of them, meaning less work for HOs; and instead of having many data points to process and make sense of, Cert receives information that is already somewhat structured. Importantly, OOSes already take care of informing Cert of the evolution of the applicant over time, which was sometimes an issue with the old system - I think some of us can relate.

Putting all these bits of information together, it looks to me like we might be reversing course on evals and forgetting important lessons from the past. I have to say that I find the general direction on “the role and importance of evals” to have been quite fuzzy in the past year or two; I would like a clear direction. On a more personal level, I would like this direction to be “evals are valuable, but we don’t incentivise people to collect as many as possible,” because otherwise - well, see above, we’ve been there, until we stopped Cert for two years to figure out a new system. I see value in evals in general, and I also see value in having two sources of information (OOS and eval). But I am not excited at the idea of generating more work for everyone and ultimately burning people out, both HOs and Cert heroes.

I don’t want to offer only caution and criticism, so here is also a suggestion: could we make evals by traceable request only? Imagine this: I want an eval from my CHR after an event, so I request it using a form that informs said CHR that the eval has been requested; they then have 2-3 months to submit it. This way, only the work that is actually required gets generated. And if Cert wants to evaluate how diligent people are at filling out evals, that can then be done a lot more accurately than by just looking at arbitrary numbers without context.
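To make the idea concrete, here is a rough sketch of what such request tracking could look like - purely illustrative, with made-up names and a made-up 90-day window standing in for the 2-3 months:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Purely illustrative: a hypothetical record for one traceable eval request.
@dataclass
class EvalRequest:
    requester: str                # official asking for the eval
    evaluator: str                # the CHR/THR being asked
    event: str                    # game or tournament the eval would cover
    requested_on: date
    submitted_on: date | None = None
    window: timedelta = timedelta(days=90)  # stand-in for the 2-3 month window

    @property
    def due_by(self) -> date:
        return self.requested_on + self.window

    def is_overdue(self, today: date) -> bool:
        return self.submitted_on is None and today > self.due_by

# Diligence becomes measurable against explicit requests,
# rather than against raw eval counts with no context.
def diligence(requests: list[EvalRequest]) -> float:
    if not requests:
        return 1.0  # no requests means nothing was left undone
    done = sum(1 for r in requests if r.submitted_on is not None)
    return done / len(requests)
```

The point of the sketch is just that, with explicit requests, “diligence” is the fraction of requested evals actually delivered, not a raw count.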

12 Likes