For me this topic raises questions about a more general subject, which is how well Cert has been functioning as a system in recent months, and the role that evaluations play in that system. Before I get into that, I want to make it clear that while I identify issues with the system, I believe the people working on Cert are doing very important work, and I am grateful to all of them. Your work is greatly appreciated, and I don’t know if you hear it enough. Thank you.
I want to look at two aspects: the volume of certification applications being processed, and the work each application requires. I compare both with the old system, which stopped in 2016 because it had become unsustainable.
- Volume of applications: we are nowhere near the old volume. In the last two months of the old system there were 61 SO applications; see the old thread here, for those who still have access. Recent information about the volume of applications for 2024, presented here, mentions 55 SOs for the whole year.
- Work required: I think we have all seen that list getting steadily longer last year in that thread. My certification review took 3 months last year; in the old system, my certification reviews took 1 month each. We cannot compare things 1-to-1 because the system is different and the workforce is different; nonetheless, from an applicant’s perspective, the new system is less efficient than the old, unsustainable one. I believe we also generally expect the new system to function better, i.e. to make better-informed decisions, and of course that requires work.
Now, the way I remember things, one important aspect of the “new” (2018) system was to make the workload more manageable for the unsung (or not sung enough) heroes who work for Cert. And, still according to my memory, one major innovation was to reduce the number of evaluations and make them optional, while introducing OOSes, which contain more meaningful information. Sure, an OOS represents more work than an eval, but we write far fewer of them, meaning less work for HOs, and instead of having many data points to process and make sense of, Cert receives information that is already somewhat structured. Importantly, OOSes already take care of informing Cert of the applicant’s evolution over time, which was sometimes an issue in the old system; I think some of us can relate.
Putting all these bits of information together, it looks to me like we might be reversing course on evals and forgetting important lessons from the past. I have to say that I find the general direction on “the role and importance of evals” quite fuzzy over the past year or two, and I would like a clear direction. On a more personal level, I would like that direction to be “evals are valuable, but we don’t incentivise people to collect as many as possible”, because otherwise, well, see above: we’ve been there, until we stopped Cert for two years to figure out a new system. I see value in evals in general, and I also see value in having two sources of information (OOS and eval). But I am not excited by the idea of generating more work for everyone, and ultimately burning people out, both HOs and Cert heroes.
I don’t want to offer only caution and criticism, so here is also a suggestion: could we make evals available by traceable request only? Imagine this: I want an eval from my CHR after an event, so I request it using a form that notifies said CHR that the eval has been requested; they then have 2-3 months to submit it. This way, only the work that is actually required gets generated. And if Cert wants to evaluate how diligent people are at submitting evals, that can then be done far more accurately than by just looking at arbitrary numbers without context.