The issues I have can be summarized as:
- There is a disconnect between what wins awards, and what people buy (at least when it comes to small batch craft chocolate); and
- There is a disconnect between what the awards are supposed to recognize and how the awards are used by winners and consumers.
These disconnects are complicated by a very important (in my estimation) distinction:
- Are these awards competitions; or
- Are they recognition programs?
The distinction between the two speaks directly to concerns about prize inflation. If the awards are competitions, a reasonable expectation would be that there would be a limited number of prizes – one each for first, second, and third – awarded per category. If the awards are recognition programs, then there is no reason to limit the amount of recognition that can be given.
It has been suggested elsewhere that the reasonable response to unhappiness with the current situation is to go and set up a competing awards program. However, it is my belief that this is one situation where more is not better; more is simply more confusing for everyone involved.
Awards program organizers have displayed reluctance to talk in detail about their awards as businesses. This is understandable.
However, I propose that awards organizers think of this in a customer service context. From a strict customer service perspective it makes sense for awards organizers to provide more information so that all their customer constituencies can have confidence in the process.
As a member of the press, I am a customer of awards programs. I use the information provided to let others know about the awards programs and the winners. However, I do not wish to simply parrot the information given to me. I want to comment on the awards intelligently, and in order to do so I need more information about the process and the program.
Entrants are customers of awards programs. They want to know more about the competition they are entering: how many entries there were, how many judges were involved, how much time was allotted to judging their entries, how many entries received awards and of what type, and more. This helps them understand their chances of winning an award and evaluate the return on their investment in entering an awards program.
Distributors and retailers are customers of awards programs, too. They want to have confidence in the process so they can make good purchasing decisions and good recommendations to their customers. Their information requirements are similar to those of entrants.
Consumers are the most obvious customers of the awards. Unless they are award nerds, however, all they are likely to know is what they see on a package. One thing they cannot be sure of – though neither the organizers nor the entrants appear willing to communicate this to consumers voluntarily – is whether what is in the package is what was actually given the award. In the case of a small batch bean-to-bar maker, what consumers purchase is almost certainly not the batch that was given the award.
I am using the International Chocolate Awards as an example here not to single them out, but only because I have more recent public information and experience with and about them. My thoughts can (and should) be applied to any chocolate awards programs. I have also been a judge at the Good Food Awards, for which I was a member of the committees for Chocolate and Confectionery. While I have been asked to judge the Academy of Chocolate Awards, I have never done so. I was the head judge responsible for developing entry criteria, judging criteria, and processes, for Curtis Vreeland’s Next Generation Chocolatier Competition back in 2008.
At the ICA Worlds presentation in London this past October, Martin Christy mentioned from the stage that there were over 2500 entries in the awards. About two weeks later, in Paris at the cocoa flavor assessment workshop organized by the Cocoa of Excellence program, Martin’s presentation indicated that the ICAs held 17 awards programs (16 semi-finals and 1 final?) in 2017 and there were 2500 entries (overall?).
The numbers as presented are inconsistent. Clarification is, I believe, important in order to gain a true understanding of the scale and impact of the ICAs. For example, if we go with the London numbers, 10% of all entries in the Worlds round were recognized in some fashion. The Paris numbers make no sense whatsoever when the average number of entries per round is compared with the number of certificates presented in London.
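A quick back-of-the-envelope check shows why the two accounts are hard to reconcile. The figures below are the ones cited in the two presentations; how they relate to each other is exactly what needs clarifying:

```python
# Figures as cited in the London and Paris presentations.
# Which reading is correct is unclear -- that is the point.
total_entries = 2500   # cited in both presentations
programs = 17          # cited in Paris (16 semi-finals and 1 final?)

# Paris reading: 2500 entries spread across all 17 programs.
avg_entries_per_program = total_entries / programs
print(f"Average entries per program: {avg_entries_per_program:.0f}")

# London reading: all 2500 entries were in the Worlds round alone,
# with roughly 10% receiving some form of recognition.
recognized = round(total_entries * 0.10)
print(f"Entries recognized at 10%: {recognized}")
```

Under the Paris reading, the final round would average fewer than 150 entries, which cannot produce the number of certificates handed out in London; under the London reading, the 2,500 figure cannot also describe all 17 programs combined.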
Given the discrepancy in reporting cited above, I don’t think it’s unreasonable to ask the ICAs for a clarification on this point, among others. I will go further and propose that the ICAs *and all other awards programs* be proactive in providing this information on at least an annual basis.
Information I think awards organizers should provide, at the very minimum, includes:
- The number of competitions in the reporting period; and
- The number of judging categories/sub-categories; and
- The number of entrants in *each* competition; and
- The total of *unique* entrants across all competitions; and
- The number of entries in *each* competition; and
- The number of entries *judged* in each competition; and
- A table of costs for entering as a part of the report.
Each awards program publishes a list of prizes, so with the above information a customer who is interested can do some simple math to determine the percentage of entrants who were awarded prizes as well as their distribution at least with respect to position.
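As a sketch of that "simple math," using entirely hypothetical entry and prize counts (no real program's figures are implied):

```python
# All numbers here are hypothetical, for illustration only.
entries = 500
prizes = {"gold": 15, "silver": 30, "bronze": 55}

# Overall recognition rate.
total_prizes = sum(prizes.values())
print(f"{total_prizes}/{entries} entries recognized "
      f"({total_prizes / entries:.1%})")

# Distribution by prize tier.
for tier, count in prizes.items():
    print(f"  {tier}: {count} ({count / entries:.1%})")
```

The same few lines answer the competition-versus-recognition-program question at a glance: a 2% recognition rate looks like a competition, a 40% rate looks like a recognition program.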
Speaking to the sixth point above (entries judged), I would request organizers publish their respective policies on how they handle samples that arrive damaged and cannot be judged. How are the entrants notified (if at all)? What is the refund policy (if there is one)?
More information I think customers deserve to know includes:
- The number of judges in each round of competition; and
- The number of judging sessions in each competition; and
- Over how many days the judging in each competition was conducted; and
- If the judging was split (i.e., a preliminary followed by a secondary judging). In the case of split judging, I think the information for each segment should be reported separately, along with an explanation of any differences in the judging criteria and process for each segment.
With the above information an interested customer can do some math – in conjunction with the information about the number of entries – to figure out how much time, on average, judges have to evaluate entries.
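That calculation is equally simple. A minimal sketch, with hypothetical figures and the simplifying assumption that every judged entry passes through every session:

```python
# Hypothetical figures; real values would come from a transparency report.
entries_judged = 300
sessions = 4
hours_per_session = 3

# Assumes all judging time is spread evenly across all judged entries.
total_minutes = sessions * hours_per_session * 60
per_entry = total_minutes / entries_judged
print(f"{per_entry:.1f} minutes of judging time per entry, on average")
```

If the average works out to a minute or two per entry, customers can reasonably ask how carefully each entry could have been evaluated.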
Finally, I think customers also deserve to know:
- What, if any, procedures are in place to identify and deal with potential recognition bias on the part of judges; and
- What, if any, procedures are in place to identify and deal with potential conflicts of interest on the part of judges; and
- What, if any, procedures are in place to identify entrants who break entry rules; and
- Disclosure of all sponsoring entities; and
- Disclosure of any contractual agreements that involve payment for organizing an awards program.
As a judge for different awards programs over the past five+ years, I can tell you that I recognize a lot of the pieces I am asked to judge from the mold and/or decoration or other subtle clues. While judges are not supposed to let this influence their decisions, how can we – the customers of the awards – know that it does not?
Similarly, if a judge acts in an importer/distributor/retailer capacity for a product they recognize they have a financial interest in the outcome. While judges are not supposed to let this influence their decisions, how can we – the customers of the awards – *know* that it does not?
With respect to the third point above, it would be interesting to know whether awards organizers have a formal dispute resolution and disciplinary process for when it becomes known that an entrant has broken one of the entry rules.
Finally, some awards competitions exist because the organizers have a contractual agreement to run them. As a customer of the awards, knowing that an agreement is in place would let me evaluate those awards from a different perspective and help make sense of different patterns in the way awards are given out in those cases, when compared with competitions that do not have contractual relationships. I advocate for disclosure of these agreements if they exist.
No awards program is above criticism. None are perfect, and all can be tweaked to address the issues their customers raise and become more responsive to market demands. Going forward, it's important that all of the customers of the awards have confidence that the awards are being conducted fairly, and that the organizers are committed to addressing shortcomings in the process that customers raise. In the long run, lack of customer confidence in any one awards program has the potential to undermine the value of all awards programs.
I’d appreciate hearing your thoughts on these ideas. Do you see any value in transparency reporting? Have I missed anything that should be included? Are there things I mention that you think are unreasonable?
In the ICAs I know the scoring by the judges is done on a numerical scale across various metrics that should result in a score in the 0~5 range. There is some mechanism during the grand jury where these scores get converted into bronze, silver, and gold. It's clear that judging is done on a strict numerical basis, but it's less clear how ties are resolved. It could be that when two or more entries tie for the gold, the grand jury decides which one will get the gold and the other(s) will get a silver – but I have never been made privy to the process, so I can't tell you for sure. I can tell you that the one time I participated in a grand jury session this was not the case. That said, I do need to make it clear that the ICA grand jury session I participated in was unusual in that the organizers ran out of time and the category did not get judged in two rounds – it went straight to the grand jury. In the end, eight of the nine entries in the category were given prizes, which, IMO, was not representative of the quality of the entries.
As I have not been a judge at the Academy of Chocolate Awards, I don't know how the scoring and selection are done.
At the Good Food Awards – when I was involved – I know that different systems were used for Confectionery and Chocolate. It’s been several years since I was last involved so I don’t know how the scoring and selection are done now. Chocolate used a pretty conventional numerical system and Confectionery used a ranking system where judges would rank order their favorites in each flight and the collective rankings were used to determine which ones made it to the next round. If there was no clear winner, the judges would discuss their reasoning and make a selection.
One of the reasons I would advocate against using a numerical scale in this context is psychological. There are only two points of difference between 87 and 89, and between 89 and 91. However, the 91 rating has more weight psychologically – a 91 seems better than an 89 – even though the distance between the scores is the same as between 87 and 89.
An example of a specific rule would be one where a product that was entered was not available for retail sale. This can happen several ways, but is likely more common in the micro-batch categories.