Judging Standards And Fake Entries

My point is simple, BJCP is untested and as such deserves to be viewed with a critical eye. It's hard to claim the high ground with an unrecognised qualification. An in-house training scheme run by a competition committee would have just as much validity.

Cheers
MAH

Two possible points of reference - the BJCP is recognised by the AHA and MCAB for running America's peak amateur comps, and the Australian International Beer Awards uses the BJCP style guidelines.

I don't think that your in-house training scheme will get the same level of recognition.

David
 

No it wouldn't have the same level of recognition, but it would have the same level of validity, as both are untested. Neither would be independently verified as actually producing beer judges with the ability to critically analyse beers. This is the big question mark that hangs over the BJCP, particularly when you consider the accrediting exam gives only 30% of its marks to sensory perception testing and you only have to taste 4 beers, with the emphasis on a written exam.

Further, it's not surprising that the AHA recognises the BJCP; they were one of the founders, and other significant users have most likely adopted it due to its accessibility. But you can't get away from the point that the BJCP is an untested, self-accrediting organisation. Someone has already mentioned hanging a shingle on the door, and the BJCP is open to this criticism.

I'll say it again, the BJCP has a lot going for it, but it also has a lot of negatives, not least of which is that it can't point to any real authorities that recognise it, other than interest groups like the AHA. And I suppose this gets to the heart of the issue. You're not going to get anything other than interest groups organising things like training programs for HB judges, because it's just amateurs (meaning non-professional, not crappy) competing against each other. So claiming things like being qualified under interest group X does little for increasing your credibility IMHO.

HB comps are not for cattle stations, they're for bragging rights. It's only because people get OTT about such things that we get pseudo-qualifications and a false sense of the importance of it all. It's supposed to be just a bit of fun.....isn't it?

Cheers
MAH
 
Yeah, fun is what it's all about - isn't that why we homebrew?
I think you would only need intensive training if you were judging a flight of very good beers that were very close. In general, though, you need to be able to identify faults to give feedback, because mostly the good beers in a comp really stand out.
Unless there is some kind of spectrometer that can analyse a beer scientifically for stylistic departures and faults, we are stuck with human senses and opinion.
 
HB comps are not for cattle stations, they're for bragging rights. It's only because people get OTT about such things that we get pseudo-qualifications and a false sense of the importance of it all. It's supposed to be just a bit of fun.....isn't it?

MAH, I find a couple of things in here worth commenting on. On the one hand, earlier you were criticizing the BJCP because it does not have some overriding, autonomous, independent, professional body to certify its validity, yet you also say that comps are just for a bit of fun and bragging rights and people shouldn't get so uptight about qualifications. Surely these two positions are not entirely consistent?

I regard the BJCP as simply an attempt at a process whereby a brewer who enters a comp can know that the judges have not been plucked off the street; that they have some level of training. Yes, your club/mentor based process can produce just as good a judge, but how open to scrutiny is it? At least the BJCP process is transparent and you know the minimum level of knowledge/experience that a BJCP certified judge must have. That's a minimum level. Without this, poor old comp organizers are continually getting hammered because the "process is too subjective" and the judges "don't know their arse from their elbow", etc, etc. -- despite the fact that it is all just for a bit of fun and bragging rights. So what do organizers do to try and address these criticisms? How about implementing a bit of a system that can provide some rigour and quality assurance? So they do that, and then the process is criticized for not being robust enough...I mean, you are damned if you do and damned if you don't.

I think I would dispute your assertion that "An in-house training scheme run by a competition committee would have just as much validity." I mean, it might well be just as good, but how does it have as much validity? Your claim that the BJCP is not valid also stands questioning. The validity of the BJCP comes from the fact that it was developed for a very large body of enthusiastic members (ever read the HBD?), has been applied and refined over many years, and its procedures and standards are widely disseminated and open to scrutiny, as you have done. Can you say the same of the in-house training scheme? Sure the BJCP can be improved. But is that a reason to reject it out of hand?

The other point I'd like to make is this, and it is purely speculative, but you had a shot at the BJCP titles like Grand Master or whatever they are and then say in the quote above that people get OTT about pseudo-qualifications. But has it occurred to anybody that these grand-sounding BJCP senior titles are in fact tongue-in-cheek? And they say Americans have no sense of irony! These guys are home brewers, remember, and are less inclined than most to take themselves seriously (I may be wrong on this, but it is my deep suspicion that the titles are a bit of a piss-take). Surely the intent of having these classifications is simply to provide the very thing you say the BJCP basic certification doesn't, i.e., a mechanism for recognizing superior levels of practical experience and peer-recognized expertise in beer evaluation. It's not for cattle stations, but comps and judging are still reasonably important aspects of the hobby for a lot of people. And some people have a lot of experience and are a lot better at judging and sensory evaluation than others. Is it so wrong to try to implement a mechanism for peer-recognition of that?

Remember, it's not for cattle stations; all it is doing is providing some kind of framework for those within the hobby to have the confidence that if a comp is labelled "BJCP certified" then a certain level of discipline and quality control will be undertaken. I for one would prefer to be a part of comps run under BJCP guidelines and with BJCP-certified judges than those without. Period. The sad reality is that outside the US I think this is the exception rather than the rule, and what you in fact do is place your faith in the comp having enough competent judges for your entries to get a fair chance of a good evaluation.

In short, I find your criticism that the BJCP is not serious enough (can't be validated) to be completely at odds with your claim that comps are all just for a bit of fun and bragging rights. And please understand, I have no personal hangups about any of this, comps are not even all that important to me and my participation in the hobby. I am just looking at your lines of reasoning. But if I have mis-represented your position in any way, MAH, I am happy to be stood corrected.

Regards
Steve
 
But if I have mis-represented your position in any way, MAH, I am happy to be stood corrected.

Hi Steve

Nope, you haven't misrepresented what I have said. I've made two points. A) I don't think the high ground can be claimed based on BJCP accreditation, because IMHO the actual accreditation component, the exam, is not a good basis for determining an individual's ability to assess the quality of beer. B) Beer comps seem to be taken too seriously, it's just a hobby.

Now I don't see these points of view as mutually exclusive. It's because people are OTT about comps that we see people insisting that we have to go down this path of accreditation. Well, if you are going to put yourself out there as accredited then it should be a valid process. But we can also go one step further back and question whether it's even needed.

One more time, I'll give praise where it's due and say that the journey of gaining BJCP accreditation is a worthy one. The actual exam and the shingle I personally don't give much credit to, but the journey taken, the effort made to go and try a wide variety of beers and expand an individual's knowledge, well, a big thumbs up.

Cheers
MAH
 
MAH, I was relieved to see you didn't take my comments personally. Happens so easily on forums when one attempts to discuss ideas and opinions. Generally I am not an argumentative fellow and definitely don't go out of my way to make trouble, so it is good we can have a civil conversation about this.

Nope, you haven't misrepresented what I have said. I've made two points. A) I don't think the high ground can be claimed based on BJCP accreditation, because IMHO the actual accreditation component, the exam, is not a good basis for determining an individual's ability to assess the quality of beer.

I agree that it is not a panacea and that there is no justification for taking any high and mighty position about having certification or following the comp guidelines. What I think it does more than anything else is provide a referent to a whole framework of meaning, such that once you say the words "the comp follows BJCP format and every panel will have at least one BJCP certified judge", prospective entrants can understand what that means, and so it is an efficient way of communicating the comp structure and that it has some level of competency. Otherwise it is a comp-by-comp process of carefully reading the classes and style guidelines etc and wondering whether the judges are as qualified as the bozo mentioned on this thread. So, this is not to say the BJCP has any greater authority than the sheer numbers of amateur brewing enthusiasts involved in honing it over the years, but it does provide some form of standard, and having been involved in comp organizing in Australia in the late 1990s and 2000, I can only say that that is a good thing.

B) Beer comps seem to be taken too seriously, it's just a hobby.

Yes, I couldn't agree more. The sad fact is that as soon as you put the word "competition" into anything, it seems to bring out the worst in some people. I'm afraid I am talking about competitors here (i.e., not all, just a noisy minority who spoil it for others), and this inevitably forces organizers into being increasingly transparent and having to guard themselves against attacks from all kinds of angles -- as soon as egos are involved, blech, you are going to get people taking it too seriously and being precious about the results and process. I'm sure that with the size of the hobby in the States, which was built on the philosophy of Charlie Papazian's RDWHAHB, it was this kind of thing, plus the desire to get away from re-inventing the wheel for every comp, that led to the BJCP standardization approach. So, really, I see the BJCP as a form of inferring some kind of credibility (albeit with the limitations you point out) while cutting down the work of organizers and providing a sound basis for comparing and assessing beers.

Well, if you are going to put yourself out there as accredited then it should be a valid process.

I'm sure this is a legitimate point. But it's one I'd like to see addressed to the BJCP Grand Privy (or whatever the overseeing body is called). As in, I'm sure they would vigorously defend their process and claim that it is valid, or perhaps, if you have some very concrete ideas on what they could do to improve the validity, perhaps they would listen, or perhaps they could assure you that such steps are unnecessary, I don't know. Wouldn't it be better, though, to build on this very substantial base and improve what we have rather than focus on what it is not and ditch it to start something anew? Which brings us to...

But we can also go one step further back and question whether it's even needed.

True. To a large extent, though, this is exactly the conversation and process that has taken place within and by many comp organizers in Australia, dating back, as I mentioned, at least 8 years or so. Trying to hold a national comp with that comp and every state comp having different sets of classes and guidelines was a nightmare -- even given that you were just trying to do it for a bit of fun. Judging standards were very inconsistent within and between comps. This is an on-going but, it would seem, ever-diminishing problem (not because of the BJCP but because of the growth in the hobby and better flow of information on forums like this). Perhaps you haven't been part of the conversation, but I can assure you it has taken place. Definitely something was needed for the reasons I outlined above in answer to your point B), and given that the BJCP already exists, and given the small size of the hobby in Australia, and given that there is enough universality with beer styles to make it mostly as applicable here as anywhere...people gravitated towards adopting BJCP guidelines. The judge certification process was a natural follower, I suppose. So, yes, ask the question by all means, but it is the community of competition organizers, as nebulous as that sounds, that is best positioned to decide this.

One more time, I'll give praise where it's due and say that the journey of gaining BJCP accreditation is a worthy one. The actual exam and the shingle I personally don't give much credit to, but the journey taken, the effort made to go and try a wide variety of beers and expand an individual's knowledge, well, a big thumbs up.

Then it would seem your main problem is with the exam format. So let's acknowledge everything that the BJCP program is, and perhaps raise the exam format issues with the US administrators and find out what they have to say about it. Or have someone involved in administering the program in Oz respond to your specific criticisms (note, I am not qualified to speak about that). BTW, I agree that the shingle should be regarded for exactly what it is: verification that you have been through the above-mentioned worthy process and thus are assured of having the basic knowledge required to judge a homebrew competition. It doesn't necessarily make you an expert or necessarily superior to others who have acquired their credentials through a less formal process.

For me, it boils down to this: if you are going to have competitions, some kind of standardization and transparency and quality assurance is highly desirable. At the moment, the options for that are BJCP and ...

I guess that's about all I have to say. It's been a pleasure discussing the topic. Shame it was done here and not over a :beer:

Steve
Disclaimer for those who do not know me: I have no affiliation or vested interest in the BJCP or AHA or anything related to them. I've just been involved in a few comps in the past.
 
I am going to chime in here on discrepancies in the judging process. This is for the AABC vs the other competitions that were run. Now I could have put this into the AABC results thread, but it's better not to cloud it.

The specific example is the Pale Ale category. The BJCP defines separate classes here, which AABC/Vicbrew decided to lump into one - Amber Hybrid (including Steam/CC), American Pale Ale, India Pale Ale and Australian Pale Ale.

Now, were these categories separate I would have picked up 2nd place as an APA, but anyway, the specifics here are how different the scores are. This is a list of the beers and their scores at AABC compared with the state comp they qualified in.

1. Asher - APA - 122 AABC, 111.9 SABSOSA
2. Ross M - IPA - 105.5 AABC, ??? ACT comp
3. Trent Maier - AIPA - 111 AABC, 102.5 NSW
4. Tony W - IPA - 87.5 AABC, 119 Vicbrew
5. Ben S - APA - 84.5 AABC, 106.5 NSW
=6. Rick A - APA - 83 AABC, 107.5 NSW
=6. ? - Steam Beer - 83 AABC, 122 Vicbrew

Notice how different the scores are? If this is SERIOUSLY the judging standard that we are going to expect, with as much as a 39 point difference, what is the point of entering at all? You might as well throw darts at a board to decide the winner.
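For anyone who wants to check the gaps themselves, here's a rough Python sketch that just tabulates the differences quoted above (the ACT entry is left out because its state score wasn't posted):

[code]
# Rough sketch only: the AABC vs state-comp scores quoted in this post.
entries = [
    ("Asher - APA",        122.0, 111.9),  # AABC, SABSOSA
    ("Trent Maier - AIPA", 111.0, 102.5),  # AABC, NSW
    ("Tony W - IPA",        87.5, 119.0),  # AABC, Vicbrew
    ("Ben S - APA",         84.5, 106.5),  # AABC, NSW
    ("Rick A - APA",        83.0, 107.5),  # AABC, NSW
    ("Steam Beer",          83.0, 122.0),  # AABC, Vicbrew
]

for name, aabc, state in entries:
    print(f"{name:20s} AABC {aabc:6.1f}  state {state:6.1f}  diff {aabc - state:+6.1f}")

# Largest swing either way - works out to the 39 points mentioned above.
worst = max(entries, key=lambda e: abs(e[1] - e[2]))
print("Largest gap:", worst[0], abs(worst[1] - worst[2]), "points")
[/code]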

I shudder to think of what would happen if Ashley H had entered his APA that qualified at SABSOSA!
 
I think what you just said is the root of the problem here. I like the idea of the BJCP, as it allows for a standardised method of judging under a standardised set of guidelines. Let's face it... judging Australian Pale Ales next to India Pale Ales is, quite simply, absurd. I guess it is the extremely ordered, black and white life that I live, but I have yet to see a competition alternative to the BJCP that I think holds a candle to it, and therefore refuse to bother entering a competition that does not at least structure its categories off the BJCP (it need not have the full 23 categories, but blending all pale ales together into the one category, as I have already said, is pitiful and smacks of mediocrity).

Cheers,
TSD (hastily dons flame-suit)
 
We tried as best we could to keep both camps happy in NSW (esp. with the wide adoption by NSW brewers of the exams) and were shot like ducks for doing this.

Seems there's no tolerance for updating or change...

Scotty
 

TSD,

Sometimes it is unavoidable to lump styles together for the award of places etc due to low entry numbers, but in NSW we try to have separate panels judge 'compatible' styles and then re-combine the scores. While this does keep the judges fresh, since each panel only does 9-12 beers in a session, it means that you need to have some consistency between the different panels.

We achieve this by having each sub-panel come together to compare notes and retaste the top 3 beers from each panel and jointly agree on the placings.

We also have all judges (25 in the case of NSW) score a standardising beer before the comp starts. The scoreroom then averages these and announces the average value to the judging hall. Judges can then adjust their approach if they scored the standard high or low etc.
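If it helps to picture that step, here's a minimal sketch of the averaging (the judge names and scores below are made up purely for illustration):

[code]
# Minimal sketch of the standardising-beer step: every judge scores the same
# beer, the score room averages them, and each judge can see where they sit.
calibration_scores = {
    "judge_01": 31.0,   # hypothetical scores out of 50
    "judge_02": 36.5,
    "judge_03": 28.0,
    "judge_04": 33.5,
}

average = sum(calibration_scores.values()) / len(calibration_scores)
print(f"Announced average for the standardising beer: {average:.1f}")

for judge, score in calibration_scores.items():
    print(f"{judge}: {score:.1f} ({score - average:+.1f} vs the room)")
[/code]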

David
 
There is no doubt that competition entrants should expect reasonable treatment from judges and you are quite right to raise your concerns.

However, drawing between-comp comparisons is, I think, a very knotty problem statistically. My guess is that the confidence intervals on judging would be enormous, if for no other reason than that we are talking about a small number of judges on a panel. I'm guessing that it would need some sort of non-parametric ANOVA to begin to test the differences between comps and panels within comps.
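(Roughly the sort of thing I mean is scipy's Kruskal-Wallis test across panels -- the score lists below are invented, just to show the shape of the calculation:)

[code]
# Sketch only: a non-parametric comparison of score distributions across
# judging panels. The numbers are invented for illustration.
from scipy.stats import kruskal

panel_a = [31, 34, 28, 40, 36]   # hypothetical scores from one panel
panel_b = [25, 27, 30, 24, 29]   # another panel
panel_c = [33, 35, 38, 30, 32]

h_stat, p_value = kruskal(panel_a, panel_b, panel_c)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")

# A small p-value suggests the panels aren't scoring on the same scale, but
# with only a handful of judges per panel the test has very little power -
# which is the enormous-confidence-interval problem in a nutshell.
[/code]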

For my money, in the first instance, consistency within the comp should be the principal concern. We run calibration beers to bring judges to an awareness of scoring differences, and maybe we should be doing a better job with that information on the day. (I include myself in that.)

Comps that receive entries that have already gone through a qualification process, e.g. state to national, should receive a more consistent set of beers than the feeder comp, and this difference in reference set may affect scoring. There are behavioural issues at play as well. Judges at comps that receive entries that have already been qualified may be more critical than those at the feeder comp, who have to deal with a bigger range from fantastic to terrible. There will always be issues on the day that will affect results.

What is more important is not that there is a difference between state and national comps, but that differences are as consistent as possible. Feedback to feeder comps will be an important part of this process.

T
 
What is more important is not that there is a difference between state and national comps, but that differences are as consistent as possible. Feedback to feeder comps will be an important part of this process.

T

Agree, so as DJR has already noted....

1. Asher - APA - 122 AABC, 111.9 SABSOSA
2. Ross M - IPA - 105.5 AABC, ??? ACT comp
3. Trent Maier - AIPA - 111 AABC, 102.5 NSW

4. Tony W - IPA - 87.5 AABC, 119 Vicbrew
5. Ben S - APA - 84.5 AABC, 106.5 NSW
=6. Rick A - APA - 83 AABC, 107.5 NSW
=6. ? - Steam Beer - 83 AABC, 122 Vicbrew

What the!

Scotty
 
Sounds like some of you blokes are just oozing sour grapes personally. <_<

Would it have been better had your APA finished 1st/2nd in a 4 horse race? Just appears to me that one set of judges has differing opinions to another set of judges. Nobody is right or wrong, just different on the given day. Also, bundling all the beers seems sane to me given the fact there were only a dozen finalists. Also, do we just jam the sole Australian PA in its own category?

Be honest with yourselves... The best beer won on the day. Well done BTW Asher. ;)

Sometimes comps and the various levels of negative feedback give me the shites. Some people either need to have fun or take a course in learning how to build a bridge.

There's always next year.

Warren -
 
Notice how different the scores are? If this is SERIOUSLY the judging standard that we are going to expect, with as much as a 39 point difference, what is the point of entering at all? You might as well throw darts at a board to decide the winner.

Also try and bear in mind DJR it's better to wait for your judging sheets before pointing fingers. Is there any chance you could have sent a bottle that was say... Less perfect than your state comp bottle? Wouldn't be the first person who has. ;)

Edit: Personally I don't enter them anymore either DJR. I find them much akin to comparing dick sizes. :lol: OTOH I never begrudge those who enjoy doing so. Different strokes for different folks.

Warren -
 
What is more important is not that there is a difference between state and national comps, but that differences are as consistent as possible. Feedback to feeder comps will be an important part of this process.

T

5. Ben S - APA - 84.5 AABC, 106.5 NSW
=6. Rick A - APA - 83 AABC, 107.5 NSW

Scotty

I am not "defending or condeming' the results, but without a lot of work, the results above could well be consistent and with reasonsable confidence limits. I just think that the answer isn't simple and that there could be measures that could be taken in the short term that could help. Why not brew a "standard " beer and distruibute it to all comps for cross checking ?

T
 
Notice how different the scores are? If this is SERIOUSLY the judging standard that we are going to expect, with as much as a 39 point difference, what is the point of entering at all? You might as well throw darts at a board to decide the winner.

Also try and bear in mind DJR it's better to wait for your judging sheets before pointing fingers. Is there any chance you could have sent a bottle that was say... Less perfect than your state comp bottle? Wouldn't be the first person who has. ;)

Edit: Personally I don't enter them anymore either DJR. I find them much akin to comparing dick sizes. :lol: OTOH I never begrudge those who enjoy doing so. Different strokes for different folks.

Warren -

Well at least we agree on something then!

I don't think the "bad bottle" clause applies here, as there are 4 entries with a difference of between 20 and 39 points, so yes, I'll be waiting for the tasting notes with bated breath!

In other news, the Castle Hill show comp did actually seem to go well for me, and I didn't have to spend $22 getting a single bottle into it! (entry fee + postage + packaging)
 
Why not brew a "standard" beer and distribute it to all comps for cross-checking?

No need to brew it, just buy it. We always use a characterful commercial beer for the standardising beer. That way it should be a reasonable standard without major faults or wow factors, i.e. middle of the road.

But this would also need the different comps to want to co-operate and raise standards. At the moment there is a fair bit of 'our way is better'. As Warren said "build the bridges" but at the moment they are either crumbling or the chasm is too wide to span.

David
 
"I find them much akin to comparing dick sizes. "



Hey Wazza......Spoken like a true Paco Vale boy

Now who's talking sour grapes...... :D
 
Notice how different the scores are? If this is SERIOUSLY the judging standard that we are going to expect, with as much as a 39 point difference, what is the point of entering at all? You might as well throw darts at a board to decide the winner.

Also try and bear in mind DJR it's better to wait for your judging sheets before pointing fingers. Is there any chance you could have sent a bottle that was say... Less perfect than your state comp bottle? Wouldn't be the first person who has. ;)

Edit: Personally I don't enter them anymore either DJR. I find them much akin to comparing dick sizes. :lol: OTOH I never begrudge those who enjoy doing so. Different strokes for different folks.

Warren -

You're right that it could be a dodgy sample that was sent down, so I'm waiting to see the judges' comments. But there do seem to be quite a few entries with a 30+ point difference, which looks a little bizarre.

And I don't think any of this is about sour grapes.
 