Wyeast to White Labs and Back Again

The Subjects Under Question (courtesy Bob) For our very first experiment we asked our IGORs to tackle a fairly simple question: can tasters detect a difference between the same wort fermented with the classics Wyeast 1056 American Ale (née Chico) and White Labs WLP001 California Ale? See the link above for the full writeup on the parameters of the experiment.

The Experiment

Here are the basics - IGORs brewed and split a batch of our Magnum Blonde ale, chilled it, and then pitched one part with a pack of Wyeast 1056 and the other with a vial/pack of WLP001. We asked the IGORs to grab yeast samples of roughly the same manufacture date and to pitch without making starters to reduce possible variations. (More on that towards the end!) After fermentation, the IGORs were instructed to package the beers in the same way and run a basic triangle test to see if tasters could reliably detect the different beer. We gave no instruction on weighting the samples in favor of Wyeast or White Labs.
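The whole point of the triangle test setup is the baseline: a taster who genuinely can't tell the beers apart is just picking one of three glasses at random, so blind guessing succeeds about a third of the time. A quick simulation (purely illustrative, plain Python) shows a panel of pure guessers converging on that 1-in-3 rate:

```python
import random

def simulate_guessers(n_tasters, seed=42):
    """Simulate a triangle test where every taster guesses blindly.

    Each taster faces three glasses, one holding the odd beer; with no
    real ability to discriminate, they pick a glass at random.
    """
    rng = random.Random(seed)
    correct = sum(1 for _ in range(n_tasters)
                  if rng.randrange(3) == 0)  # say the odd beer is glass 0
    return correct / n_tasters

# With enough guessers the hit rate settles near 1/3 (~33%) -
# the chance baseline any real tasting result has to beat.
print(simulate_guessers(10_000))
```

That 33% figure is why a panel needs noticeably more than a third of correct picks before the result means anything.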

The Experimenters

Seven IGORs conducted the experiment in time for our recap episode and this report. We'd like to thank Andy Turlington, Bob_In_So_CA, Casey Toll, Jason Click, Jason Mundy, Nicki Forster and The Mossy Owl for tackling this effort! (You can always see how many experiments people have participated in here).

The Brews

Mike's Mash Ingredients (note the small amount of hops there!) (courtesy of Mike O'Toole) What's the story with the Magnum Blonde? It's one of my favorite beers. That's right, the guy known for putting clams in a beer really loves stupidly simple beer. In this case, the recipe was originally called "California Magnum" because I used it to test Great Western's California State Select 2 Row Malt. It's tasty and super cheap to make so you don't have to turn to those tall boys of PBR! Anyhoo.. back to what should be a really easy mash day! For proof, Andy Turlington wrote up his brew day right here, so go check it out - http://gallowspolebrewing.com/igor-smash-blonde-ale/ This is what it's all about, right? (courtesy Mike O'Toole)

Looking through the brewing notes from our reporters, nothing seems very awry about their brew days. Every brewer who reported their gravities reported original gravities pretty much dead on target of 1.047. Every batch reported final gravities pretty much in line between strains. In other words, each brewer's Wyeast 1056 batch fermented to the same final gravity as the White Labs WLP001 batch (or within a point). The Gravity of Jason's Situation (courtesy Jason) Interestingly, the range of final gravities was pretty broad. Of those that were reported, we had one batch come in at 1.012 on the high side (The Mossy Owl) with the low side being repped by a pretty dry 1.003 (Jason Click). Bob's Fermenting Buckets - neatly labelled - sure beats my labelling methods! (courtesy Bob) Fermenters Under Way (courtesy Jason) Mike's Fermenters Under Way (courtesy Mike O'Toole)

When reported, IGORs noted pretty consistently that the Wyeast 1056 batches started showing krausen faster than the WLP001 batches. Otherwise, everything looked and acted the same - one tester did report that the WLP001 batch threw a larger krausen (Bob).
Could the Wyeast "speed" be from that last minute smack giving the Wyeast cells a bit of a leg up? Basically, with a straight pitch of White Labs without a starter, you're completely at the mercy of your yeast's viability in the storage medium. With the smack pack, you get a boost of yeast vitality - aka the yeast are primed and ready for fermentation. Could be that Wyeast has an advantage here, but that's probably negated by the practice of making a starter. You can read up on Ray Found's experiment about short term starters designed to maximize vitality at Brulosophy.com. You should know that neither Denny nor I perceive a value in the fine macho art of treating your lag times like they're quarter mile launch times. In our experience, it feels like effort without effect.

And Now For Nikki's Musical Interlude Our testers then packaged their beers up in a mix of bottles (with corn sugar) and kegs. They got down to the hard business of tasting beer!

The Tastings

Side by Side Samples (note - not how they were poured for the tasters) (courtesy Jason) Here's where we really think our farming out process works like a charm. Denny and I could do these experiments, but we'd only be getting the one data point, and since we're sloppy process controllers (even Denny and the average "uptight" homebrewer is sloppy in comparison to an honest science experiment), having multiple teams tackling the project can help smooth out some of the experimental wrinkles that might creep in. After all, we all can't screw this up in the same way! (or can we - perhaps we can - sounds like a challenge!) Our seven IGORs ran a total of 12 tasting sessions. The smallest group had 5 tasters, which is our minimum for these crowd sourced experiments. The largest panels had 15 and 16 participants, which feels like a great time. In all, the panels averaged out to 6.25 tasters. (There were a total of 75 tasters.) A number of IGORs took advantage of their local homebrew club to serve as a source of tasters. We love it and think that's a great thing. You can fully expect to see members of the Maltose Falcons and the Cascade Brewers Society in the mix for some of our future experiments! The experience level of the tasters was reported as a healthy mix of experienced brewers and beer geeks along with the beer curious. We asked testers to keep the question in question under wraps - but naturally there are people who listen to the podcast that know what's going on. Expect that to be a question of much debate in the near future! So how did we do... Well first..

Outliers - A Matter of Science

People tend to think of science as a machine. Execute an experiment, get results, feed results into an algorithm, spit out the answer - voila - 42. But... we are human beings and humans share one fantastic super power - the power to mess things up. Things go wrong - something doesn't ferment right, we don't hit the calendar time right, someone does something to make the tasting go all pear shaped. Scientists have debated for years, because scientists have been screwing up for years, what to do when something goes wrong or the data you collect is just so far out of whack as to make absolutely zero sense. From a "pure" perspective - shouldn't the data get factored in? After all, the universe did provide it to you and it could possibly be valid. From a "practical" perspective - a mis-execution in an experiment means you're no longer testing your question (e.g. "Do Wyeast 1056 and WLP001 produce detectably different beers?"), you're testing a new one ("Can tasters detect a flawed beer?") and therefore your data isn't for the right question. Naturally, this is a very sensitive question. Get too cavalier with tossing results that don't meet your expectations and you're not really looking for answers. You're looking for a way to confirm your beliefs. This means you're not performing science, you're performing politics! Get too stringent with including all the results and you run the risk of getting the right answer to the wrong question, or at least muddying the waters sufficiently. There are accepted methodologies to reject "outlier" data. The first two I can think of are Chauvenet's Criterion and Peirce's Criterion. Both employ standard deviations and statistical analysis to provide firm mathematical underpinnings for rejecting data. For tasting results like ours, there are iterative tests like Grubbs' that can help as well.
(To see Grubbs' in Action) For this experiment though, I don't think we need to worry about the math because it's pretty clear one test had a misfire. The batch produced by Andy Turlington, who was brewing in a hurry, developed a rather noticeable phenol character in the WLP001 portion. When presented to tasters, all 11 correctly picked out the different beer. In this particular instance, that seems pretty clearly a non-standard test result, and I (and Denny and Marshall) all agree that the tasting panel results are answering the wrong question. Bummer - it happens - and we know how to deal with it. Don't worry, we will always be good little scientists and reveal when we decide to strike data from the record, and if we have the time, we'll show you the results with and without the outlier data.
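For the curious, here's roughly what one of those rejection rules looks like in code. This is a sketch of Chauvenet's Criterion in plain Python, with each panel boiled down to its fraction of correct answers (a simplification of mine - it ignores panel size): a point gets tossed when, given the mean and standard deviation of the whole set, you'd expect fewer than half an observation that extreme.

```python
import math

def chauvenet_flags(data):
    """Chauvenet's Criterion: flag x as an outlier when the expected
    number of samples at least that far from the mean is below 0.5."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    flags = []
    for x in data:
        z = abs(x - mean) / sd
        tail = math.erfc(z / math.sqrt(2))  # two-sided normal tail probability
        flags.append(n * tail < 0.5)        # True means "reject this point"
    return flags

# Each panel's fraction of correct IDs, Andy's 11-for-11 included:
rates = [2/8, 11/11, 3/5, 4/10, 5/10, 9/16, 6/15]
print(chauvenet_flags(rates))  # only the 11/11 panel gets flagged
```

Reassuringly, even the formal rule agrees with the eyeball call: Andy's perfect panel is the only point it rejects.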

The Results

Executive Summary. Ok, here's what you really want, you info junkies - what did our tasters find? Can testers reliably detect that one beer is done with 1056 and one with WLP001? Crunch the numbers on our 64 tasters given the non-anomalous samples and we find that 29 of them correctly identified the odd sample. In other words, 45% of the time, a taster could correctly choose the beer made with the different yeast. This is right over the line of what a p-value calculation would tell you is significant (28 out of 64). Compared to the expectations of random chance (e.g. 33%), that seems pretty interesting! Looking at the calculated p-value, we get a value of 0.021, well below the normal threshold of 0.05 to be considered significant. (When you include the anomalous results, that drops even further to 0.000001 thanks to the pool being 40 out of 75, or 53%.) For openness about the numbers, we're following the cue of our good friends over at Brulosophy.com and using a single tailed t-test function. Just to keep everything on a level playing field, we're using the same calculator as well. The calculator was provided by Justin Angevaare and can be found here.

In other words - tasters were reliably able to tell which beer was different, but does that mean they could tell which beer was 1056 or WLP001? Could they all agree on common differences, or just "hey, these are different!"? The Details: Here's a listing of the results we see from our individual panels. We've included the thoughts and observations of both the successful tasters and the experimenters. Let's see what they say! (N.B. As number nerds will tell you - the actual magnitude of the p-value deviation from 0.05 is, in theory, meaningless.)
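For anyone who wants to replicate the arithmetic: the calculator's numbers are consistent with a one-tailed test of the observed proportion of correct picks against the 1/3 chance rate using a normal approximation. That's my assumption about its internals - the function below is a sketch, not the actual calculator code - but it reproduces the 0.021 figure for 29 correct out of 64:

```python
import math

def triangle_p_value(correct, tasters, chance=1/3):
    """One-tailed test: probability of seeing at least this fraction of
    correct picks if every taster were guessing at the chance rate."""
    p_hat = correct / tasters
    se = math.sqrt(chance * (1 - chance) / tasters)  # std error under the null
    z = (p_hat - chance) / se
    return 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal probability

print(round(triangle_p_value(29, 64), 3))  # 0.021 - the pooled result
print(round(triangle_p_value(9, 16), 3))   # 0.026 - Jason Mundy's panel
```

The same function reproduces the per-panel values in the table below, so whatever the calculator calls itself, this appears to be the math it's doing.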

Tasting Panel Numeric Data

IGOR Tasters Successful IDs p-Value
Jason Click 8 2 0.691 (Not significant)
Andy Turlington 11 11 0.00 (VERY significant - but also flawed - see above)
Casey Price 5 3 0.103 (Not significant)
Nicki Forster 10 4 0.327 (Not significant)
The Mossy Owl 10 5 0.132 (Not significant)
Jason Mundy 16 9 0.026 (Significant)
Bob In So Cal 15 6 0.292 (Not significant)

Now the interesting thing to me - on a panel by panel basis, what we see is a p-value returned that says "Not Significant", but when the analysis is applied across the whole data set (e.g. 29/64), we get a return that gives us a significant finding. The question is - is this sort of data stacking correct, or are we skewing the numbers by putting the trials together this way? Hopefully a real scientist type can help us out here and tell us we're ok, or that we're horribly messing things up and should feel shame at our efforts. In discussing it amongst the team (Denny, Marshall and I), there's a few ways to look at this:

Aggregate Results Are Good: There's value in the larger data pool, as the more results, the less sensitive the numbers are to the whims/abilities of a few tasters. With some of the smaller tasting panels, you're looking at a one vote difference swinging the p-value around a fair amount.

Aggregate Results Are Bad And We Are Bad People: On the other hand, the dyed in the wool number fiend could easily argue that our trials aren't rigorous enough to provide repeatability. That same thing we claimed earlier as an advantage to having multiple teams (smoothing out individual "unknown" variances) makes it easy to dismiss the aggregate results, because you can't say everything tested was the same. We admit, this is sloppy Citizen Science. We're not looking to win the Nobel Prize for Beerology with our experiments, but instead to point out things we think are interesting and keep trying new things!
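One cheap sanity check on the pooled number is to swap the normal approximation for an exact one-tailed binomial test, which involves no approximation at these sample sizes (a stdlib-only sketch, not part of our official analysis):

```python
from math import comb

def exact_binomial_p(correct, tasters, chance=1/3):
    """Exact one-tailed binomial test: probability of at least `correct`
    right answers out of `tasters` if everyone guessed at `chance`."""
    return sum(comb(tasters, k) * chance**k * (1 - chance)**(tasters - k)
               for k in range(correct, tasters + 1))

# The pooled result stays significant under the exact test...
print(exact_binomial_p(29, 64) < 0.05)  # True
# ...while a typical individual panel does not come close:
print(exact_binomial_p(4, 10))          # well above 0.05
```

So at least the aggregate finding isn't an artifact of the approximation - though that still leaves the stacking question itself to a real statistician.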

Tasting Panels Qualitative Data

IGOR Beer Thoughts Experiment Thoughts
Jason Click WLP001 drier; fruitier and more bitter - "I find that the 1056 has a little more flavor... 001 is more muted. Also the 1056 seem to drop clearer." "All in all both yeasts are almost identical. I believe I like the flavor and clarity of the WY1056 a little more."
Andy Turlington All tasters were successful. "The WLP001 batch had a phenol that I have never experienced with this yeast before. I believe it is because I ran the experiment at an accelerated pace." "Tough to say. The WLP001 had a phenol that I haven't experienced with that yeast before. It was way too easy to identify the odd beer because of this."
Casey Price No difference in aroma. The Wyeast (WY) beer was hazier; the White Labs (WL) beers had thicker mouthfeel. The WY beer had more head retention. The WY beer had a thinner mouthfeel. The WY beer had more bitterness in the back of the throat.
Nicki Forster - WYEAST 1056 sample is softer, milder and less crisp than the WLP 001. Sample WLP 001 was light, crisp and had more of a lager flavor, slightly sweeter up front, lager on the backside. Preferred the WYEAST sample. - WYEAST 1056 was slightly cloudier than the WLP 001 sample. Preferred WLP 001: little sweeter, crisp, nice & clear color, mild after taste. WYEAST 1056 was tangy, a little bitter by comparison. - WYEAST 1056 sample was mellow, drinkable. WLP 001 had a slightly different scent which helped it stand out from the other two samples. Winner, winner chicken dinner. - Good carbonation. WLP 001 was slightly fruitier and sweeter than WYEAST 1056. Also was slightly smoother and has a mildly fuller mouthfeel. Preferred WLP 001. "I personally enjoyed the flavor profile better with the WLP 001 and thought the overall recipe was a good design and taste success."
The Mossy Owl Reactions weren't confident. They stated the beers were very similar. Some said 1056 seemed a bit more bitter, possibly brighter.
Jason Mundy 1056 more malt flavor, 001 rubbery, 001 butterscotch, 1056 clean biscuit flavor, 1056 more malt flavor, 001 a little dryer "I think that the order of tasting can play a role in this. But I think that all these were really close and made great beers. Even though it appears that we can tell a difference between the yeasts, I will freely substitute one for the other."
Bob In So Cal Lighter in flavor, More malt flavor (Wyeast 1056) "There is a slight difference between the two yeast strains, but not enough for this test to determine that they are different strains. 1056 produced a larger amount of yeast slurry, started faster and was highly active before the 001 started to get going."

Looking through these taster comments (which are only the comments from successful tasters and the IGOR experimenter), do we see any consistent trends to Wyeast 1056's character vs. WLP001's? Here's what I see in the comments: Wyeast 1056

  • Samples tended to be hazier
  • Samples tended to emphasize malt character more than WLP001

White Labs WLP001

  • Drier and crisper, more lager-like
  • Dropped clearer than 1056

Now here's the rub though - I've picked those reactions from the more common reactions out of the limited tasting data returned from our panels. Other comments from tasters seem to contradict them - "1056 seem to drop clearer", for instance. Or "WLP 001 was slightly fruitier and sweeter than WYEAST 1056." So who's right about those yeast characteristics? I think we can't safely say without more data, so keep brewing and keep getting us results! In the meanwhile, our general brewing recommendation stands - you can treat Wyeast 1056 and WLP001 as interchangeable - until you put them side by side! What do you think, experimenters and brewers? Did we call this correctly? Does it match with your experience? Did we screw something up horribly? Let us know below or at [email protected]


Source: https://www.experimentalbrew.com/experiments/writeups/writeup-yeast-comparison-same-strain-wyeast-1056-wlp001
