Standardisation Of Brew Comps

It may have already been highlighted, but IMHO we face two major problems (I've run two comps, judged at one state comp and hold recognised rank):

1. Number of categories/classes offered at a competition.
2. Resources and management of judging day.

1. We continue (self included) to try to be all things to all brewers when it comes to the range of classes offered at comps. This can result in judges having only one to three beers per class, let alone category. With this limited range of options it's very easy to develop a cellar palate for the style, and this may add to the variability between comps. My guess is that the Mash Paddle sees much more consistent results in judging (within the comp, not between comps). Perhaps the time has come to start restricting the number of classes available at competitions and provide an "evaluation" service for those who want a beer reviewed.

2. IMHO we simply don't have enough people willing to get involved to enable experienced judges to oversee the results (and train novice judges in the process) as they are being assessed. From an organisational point of view it's very easy for things to become a mad scramble in the back room and, with the best will in the world, miss what's going on at the judging tables. David's comment that he wanted to have a well-run but smaller comp is right on the money.

I think it's time to bite the bullet and pull back on the number of categories offered for competition as opposed to evaluation.
 
Judges should be honing their skills at a local level and not at a state-level comp.

But there just aren't that many comps being run at a local level. In NSW this year, there's been the Bathurst comp, the NSW comp and the Top Twisted comp (as well as the nationals coming up). With the last two held on the same weekend, there's really only one local comp that a judge could hone their skills on, and that's a long way from where the majority of those in NSW live.
 
I agree with Gatgodzilla. Clubs can organise comps and the members judge the beers according to BJCP or whatever the guidelines of their state comps are. The IBUs (God bless non-formal clubs) had a one-style comp in the lead-up to the NSW comp. About 6 of us sat around a table and tasted the beers. We checked our scores against each other's and discussed them with a little more time than you get at a state comp. At least three of the judges around that table judged at the NSW comp and I reckon we were pretty close in scoring on that day. However, scoring across styles and flights will be difficult to calibrate, which is why I strongly support a separate best of show round. The best of each class should be judged together by the same panel to take out the unavoidable drift in scores across numerous panels throughout a weekend of judging.

Newguy, that scoresheet is pretty ordinary. When I judge, I try to put as much feedback as possible on scoresheets, but that isn't always possible if flights are long and time is short. The focus on the day seems to be getting through all the beers and scoring them to get definite placements. Smaller comps should be able to give you better feedback - alternatively, larger comps with enough judges for the number of entries.
 
But there just aren't that many comps being run at a local level. . . . there's really only one local comp that a judge could hone their skills on . . .

I don't want to sound too gung-ho, but the answer is that more local competitions are required. Now I know that is a simplistic statement, but it's the only obvious one. Let evolution take its course - it may take (quite) a few years to change this situation, but it can happen. The obvious problem is the time constraints that we all have in terms of travelling to venues, but we have some pretty talented people out there. All resources must be canvassed, including the professional craft brewers (those boys know who their best customers are!!!), but start the circuit simple and see how it goes. It only takes one dedicated person (fool) to start off the coercion process. Because I don't know the various boards we have, I'll leave it at that, but I'll keep my ear to the ground. No doubt we will discuss this further at the Nationals.
 
I don't want to sound too gung-ho, but the answer is that more local competitions are required.

I totally agree with you. I think Ray's idea of an inter-club comp is a good start, and hopefully more people can get involved in other competitions as well.

It only takes one dedicated person (fool) to start off the coercion process.

I'd say somebody on the South Coast would be perfect for this role. :lol:
 
I'd say somebody on the South Coast would be perfect for this role. :lol:

I reckon a South Coast accountant would be even better :ph34r: B)
 
Interesting thread. Just a quick one on consistency in judging and how to improve parity between scores and comments. The way I judge is that before I even look at the beer, I regard it as having full marks in every evaluation category. Then, rather than award points, I deduct points for things that should be there but aren't, or that shouldn't be there but are. There is still some subjectivity and experience required in deciding how much weight to give different faults, but one of the key points is that it forces you to think about the reasons why your score ends up where it does. And having decided that, you can easily put those reasons as feedback on the judging sheet. Using this approach I find I have very little need to think about how a given beer compares to others in its flight. Each one stands or falls on its own merits.

The fundamental requirement is having a good knowledge of the style requirements and archetype beers for the style. When you get yourself into unfamiliar territory, using this approach or any other, you can come unstuck very quickly. Unfortunately not all judges are brutally honest with themselves about this point and will stick their hands up for any flight, even when there are styles in there that they are not good at.
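If it helps to picture it, here's a rough Python sketch of that deduction approach. The category maxima follow the standard BJCP 50-point scoresheet, but the fault names and point deductions are just examples I've made up, not official values:

```python
# Deduction-based scoring sketch: start every category at full marks and
# subtract for each fault noted. Category maxima are the standard BJCP
# 50-point breakdown; the faults and weightings below are invented.

MAX_POINTS = {"aroma": 12, "appearance": 3, "flavour": 20, "mouthfeel": 5, "overall": 10}

def score_beer(faults):
    """faults maps category -> list of (reason, points_off) pairs."""
    scores = {}
    for category, maximum in MAX_POINTS.items():
        deducted = sum(points for _, points in faults.get(category, []))
        scores[category] = max(0, maximum - deducted)
    return scores

# Example: a diacetyl fault costs points wherever it shows up, and every
# point lost has a reason attached that can go straight onto the sheet.
faults = {
    "aroma": [("diacetyl evident", 3)],
    "flavour": [("diacetyl carries through", 4), ("bitterness low for style", 2)],
}
scores = score_beer(faults)
print(scores, "total:", sum(scores.values()))  # total: 41
```

The nice side effect is that the feedback writes itself: every point deducted already has a reason attached.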
:icon_cheers:
 
. . . . . About 6 of us sat around a table and tasted the beers. We checked our scores against each other's and discussed them with a little more time than you get at a state comp. . . .

I think Po-Mo's comment here contains a potential answer to the issue.

Just to come at it from a slightly different angle. I've heard talk (mainly on US sites/forums) of changing the way in which beers are judged completely. Rather than two or three judges scoring individually and averaging the results, a single consensus score from all the judges on the panel. Maybe a quick individual score, then compare notes, discuss the beer and come up with scores and comments the whole panel can agree on.

As has been pointed out, the same sort of thing happens when the individual score sheets are too drastically different... why not just make it the norm? From a learning perspective for new judges, the conversation that takes place around the table would be a fantastic learning experience. Shuffle the judges around on a few different panels over a day or two of judging and everybody gets to work with and calibrate themselves against everybody else, ironing out a chunk of the variability in results. Panels would include a variety of palates sensitive to different tastes and flaws, etc., and I think that with collaboration between those palates, the beers would be more thoroughly evaluated. In my informal beer evaluating experience, multiple palates working together always add up to a greater result than the sum of their parts.
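To make the idea concrete, here's a loose Python sketch of that workflow. The 5-point discussion threshold and the data shapes are assumptions purely for illustration, and obviously the real reconciliation step is a conversation around the table, not a calculation:

```python
from statistics import mean

# Consensus-judging sketch: quick individual scores first, then a flag
# for discussion whenever the panel is too far apart. The threshold and
# data shapes are illustrative assumptions only.

def panel_quick_pass(quick_scores, discussion_threshold=5):
    """quick_scores maps judge name -> individual score out of 50.
    Returns (needs_discussion, starting_point_for_consensus)."""
    spread = max(quick_scores.values()) - min(quick_scores.values())
    needs_discussion = spread > discussion_threshold
    # The mean is only a starting point; the agreed score and shared
    # comments come out of the table discussion, not this function.
    return needs_discussion, round(mean(quick_scores.values()), 1)

needs_talk, start = panel_quick_pass({"judge_a": 34, "judge_b": 41, "judge_c": 37})
print("discuss first?", needs_talk, "- starting point:", start)
```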

Sure, there would be a bit of an issue with "strong personalities" dominating a panel... but those personalities would stand out pretty fast and the comp organisers would be able to take them into account when arranging the individual panels. And at least any tendency to stamp a personal opinion all over a judging sheet would have to be done out in the open.

It would undoubtedly take longer per beer... but I think that, in combination with the notion of reducing the number of styles per competition, it wouldn't be too big an ask. I think that best of show rounds, where they are separately conducted (I agree that they should be), are usually done by discussion and consensus. So the idea isn't even all that foreign a format for beer judging.

It's an idea anyway. Might be worth a try.

Thirsty
 
This is exactly how we judge beers in our club's (BABBS) bi-monthly mini comp. One style is selected that allows several sub-classes, & we sit at tables of 6 to 10 tasters. We have our scoring sheets, which describe what we are looking for, & we get a general consensus from the table, which consists of experienced & novice judges. I'd recommend that more brewers support their local clubs, & if you don't have one, get it happening. Gold Coast, Sunshine Coast, you have the numbers...
Clubs are a wonderful place to hone your skills & get valuable feedback on your brews...

Cheers Ross
 
At the recent Bitter & Twisted comp we had a 3-point spread requirement, meaning that the judges needed to be within 3 points of each other. This was a 2-judge panel, but I've seen 3-judge panels use a 5-point spread. Each judge would evaluate the beer silently and come up with a score. If the scores were within 3 points of each other (BJCP-style judging out of 50), then no further conversation was necessary, although we usually had a discussion about some of the good/bad points of each particular beer. If the score difference was wider than 3 points, then the judges would need to come to a compromise and one or both change their scores. The discussion is then about particular faults which one judge may have missed, or one judge advocating for a beer which he/she thinks is good. The end result is that you should get overall consistency between your judges, with a little bit of room for personal preference to play a part. This system seemed to work well on the couple of occasions I've seen it happen.
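For what it's worth, that rule boils down to something like this little Python sketch. The 2-judge/3-point and 3-judge/5-point figures are straight from the comp as described; the reconciliation itself is of course done by talking, not by code:

```python
# Spread-rule sketch: 2-judge panels must land within 3 points, 3-judge
# panels within 5 (scores out of 50). Thresholds as described above.

ALLOWED_SPREAD = {2: 3, 3: 5}

def panel_result(scores):
    """Return the averaged score if the spread is acceptable, else None
    to signal that the judges need to discuss and re-score."""
    spread = max(scores) - min(scores)
    if spread > ALLOWED_SPREAD[len(scores)]:
        return None  # judges compromise, one or both adjust, then retry
    return sum(scores) / len(scores)

for scores in ([38, 36], [38, 33]):
    result = panel_result(scores)
    print(scores, "->", result if result is not None else "spread too wide, talk it out")
```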
 
At the recent Bitter & Twisted comp we had a 3-point spread requirement . . . If the scores were within 3 points of each other (BJCP-style judging out of 50), then no further conversation was necessary . . .


I see what you mean there, and I think that someone mentioned that the Vicbrew comp had a seven-point spread. But where you say "If the scores were within 3 points of each other (BJCP-style judging out of 50), then no further conversation was necessary" is where I think there is room for improvement. The way it stands, the comp results are going to be pretty consistent... but the exchange of knowledge and skill between judges is minimised. For mine, the conversation about the good and bad points of the beer, and a consensus on what to say about the beer, is at least as important as a consensus on what score to give it.

I mean, scoring well and/or winning a comp is a lovely thing... but realistically, most people aren't even going to place, and I think that most homebrewers genuinely do enter comps mainly for the feedback... so I think the feedback should be the main focus, with scoring flowing on as almost a side-effect of a conversation about the beer. Great feedback includes the brewer in that conversation, and the scoring that follows has to make more sense to them as a result.

I think that officially makes me a hippy, small-l liberal, latte drinker... but that's how I feel, dammit :p
 