Thursday, December 4, 2008

Conditionalities of the Unseen

I was just talking to a friend about how we're smack dab in the middle of cold and flu season. Some rather famous personages, including Rush Limbaugh, have managed to catch the flu, colds, or bronchitis so far this season despite the medications and precautions they take. Not that I think colds are bad; on the contrary, I think that periodic infections are good. Note that most of the major pandemics break out in civilized countries. As virologist Gerald Lancz put it, we're scrubbing our kids down so much that we can't help but expect them to get sick.

By contrast, I cannot honestly remember the last time I "caught a cold". This might seem odd, or perhaps even alarming by my own reasoning, if you don't consider what I do for a living. As part of the microbiology lab in which I'm involved here, I obtain, maintain, and retain a series of strains of infectious microorganisms, including Biosafety Level 2 (BSL-2) microbes (E. coli and the causative agents of diphtheria and cholera, among others). The practical upshot, I suppose, is that periodic subclinical infections keep my immune system regularly challenged, leaving little opportunity for opportunistic infections to take hold.

Without this information, it might seem that I'm more resilient against infection or that I'm "due" to catch something. Reality indicates, however, that I'm regularly and constantly under assault, but that my body is always ready to turn back the tide of invaders.

Some of the people I know who get flu shots complain that they don't work, and many people complain about how the plants they care for respond even after I dispense advice. What they do not realize is that the microbial world remains largely invisible to us. In the first case, what they probably catch after a flu shot is the common cold, bronchitis, or even pneumonia, not the flu; the shot merely starts the body's primary response to influenza (which is a virus and MUST ultimately be fought by a secondary, antibody-mediated immune response), and it does nothing against those other pathogens. In the second, they assume that plants don't catch "colds" per se. Tobacco mosaic virus is basically leprosy for a plant, but we don't pay it much mind.

Many things in science are not paid much mind. Touching plants induces gene expression. A failure in my building's climate control system shuts off laminar hood air flow (which was bad when I was mixing acetone and petroleum ether). However, I do not believe there are any coincidences. The problem for researchers is that when people discover unexpected things in science, they often never come up with explanations, and rarely publish them when they do. More often than not, I observed scientists omit "anomalous" outliers in order to publish, even when the outliers more accurately reflected the truth about the phenomenon.

When I studied diterpenes and volatiles in Vitis vinifera, we found, much to the chagrin of the funding agency, that resveratrol increased only twofold in the berries but fiftyfold in the leaves under abiotic stress. That makes perfect sense biochemically now that I think about it, but if you're selling wine as an herbal supplement, it doesn't help your marketing. If you're making tea out of grape leaves, it's tantamount to a breakthrough. It wasn't what they wanted, but it was still useful.

One other thing that threw off our calculations was the alfalfa field adjacent to the vineyard. Overflow runoff from irrigation of that field influenced the grapes immediately adjacent, so once we identified the source of the error, we omitted that one block out of the six total blocks of data available (a loss of 15 plants per subset out of 90 total biological replicates) until the situation rectified itself. Arbitrary omission or fanciful inclusion would have rendered our data irrelevant, even if correct, because it would have been founded on bad science.

Much goes on that we cannot see. The kind of scientific investigations in which I engaged as a graduate student involved ppb measurements, far below the detection threshold of almost any human sense except taste and smell (which aren't quantitative). Just because we cannot see it doesn't mean it doesn't exist.

Yet my colleagues by and large also remain skeptics and atheists, except when they want me to buy their conclusions in peer review.

Friday, October 17, 2008

Consider Classic Experiments

As part of the class I teach, the students get grades and complete some assignments on WebCT. One of them complained to me the other day about not being able to do something on the system and how much they loathe it. I'm inclined to agree.

When I went to college, I didn't actually want internet access. I didn't want to sit for hours at a time in front of the monitor surfing pointlessly, playing LAN games, or getting into trouble. Since I also knew about Echelon, I balked at the prospect of being tracked by the government whenever I opened a site, even if I ended up there accidentally. Before too long, however, in one lab course I took in biochemistry, we were required, as my students are now, to complete assignments on WebCT.

If not for that, I probably would never have started a blog or wasted as much time as I have playing MMORPGs or in IM conversations. When it's up to me, I now prefer to sit in my chair and read or play my guitar, now that it's fixed. I find those things far more fulfilling, probably because I control them. You control very little on the internet.

Besides, old technology has its advantages. So many thieves have switched to online theft for easy money that it's almost become safe to send checks in the mail. Plus, nobody steals a website password from me when I transact business via the USPS. It's a lot of work to steal money through the mail, and you have to be physically present to do it. Online, people can steal money from your account from the comfort of their home in Estonia, sitting nude and smoking crack if they like.

Scientific technology likewise has come so far that you can get publications for simpler things. For demonstrating that a gene has incomplete dominance or only a single allele (something they spend one whole lecture on in genetics classes), you can get into a journal, because nobody does that anymore. A researcher in the lab where I worked earned his MS using chlorophyll fluorescence as a marker of stress. Nobody uses it, but it's indisputable and very, very easy if you own the equipment. Don't abandon classics like Southern blotting in favor of ELISA or Y2H assays. Besides being simple and cheap, a Southern blot leaves tangible results, not just readouts of electrons on a screen.

Tuesday, October 14, 2008

Federally Funded Fishing Expeditions

As a biochemist by training, I grow increasingly tired of claims that scientific media "proves" environmental decay, a genetic basis for deviant behavior, evolution, or that medication causes disease states. What most people, sadly including many scientists, don't realize is that much of science is no longer hypothesis-driven, leading to false presumptions and unwarranted conclusions from the data we collect. Rather, in order to maximize publications and fame, scientists sacrifice the quest to solve problems and embark on federally funded fishing expeditions in hopes of collecting mass amounts of data and finding something therein with which to wow the public. Nobody seems interested in following through on a project that has end-user application, because those quests take a lifetime without promise of any return on the investment. Worse, nobody will fund those who maintain this ethical problem-solving strategy, because society demands results.

Science doesn't prove anything. Science disproves all other possibilities until only the truth presumably remains. In a hypothesis-driven endeavor, one collects data and tries to refute the hypothesis. Evidence either refutes the belief or proves insufficient to disprove the hypothesis. In this way, no matter how overwhelming the data, the truth is never really proved; we are merely unable to disprove it. This phenomenon is easily illustrated by physics, which is highly context-specific: all that we know about resistance, gravity, acceleration, and "constants" applies only in the context of the earth. Although the principles remain the same, all the parameters change when we leave the planet, and some forces change depending on our latitude on this one.
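To make the distinction concrete, here is a minimal sketch in Python, with invented numbers, of how a statistical test only ever rejects a hypothesis or fails to reject it; nothing in the output ever says "proved":

```python
# Hypothetical example: do stressed plants differ from controls?
# All numbers are invented purely for illustration.
from scipy import stats

control  = [2.1, 2.3, 1.9, 2.2, 2.0]   # e.g., a metabolite, mg/kg
stressed = [2.8, 3.1, 2.6, 3.0, 2.9]

t, p = stats.ttest_ind(control, stressed)

# We test the null hypothesis of "no difference". A small p-value lets
# us REJECT that null; a large one means we FAILED to reject it. In
# neither case have we proved the alternative to be true.
if p < 0.05:
    print(f"p = {p:.4f}: reject the null hypothesis of no difference")
else:
    print(f"p = {p:.4f}: insufficient evidence to reject the null")
```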

Non-scientists refuse to accept this fundamental truth of science: that we cannot "prove" much by experimentation. Data at best provides evidence that A and B are related or that A and B may be causative agents of C. Alec Guinness had a good line in "Return of the Jedi", when he said that many of the truths we cling to depend greatly on our own point of view. This is especially important to consider in light of rogue scientists who will obscure or fabricate data, ignore variables, or withhold information to prevent others from subverting their personal agendas. They cannot prove what they believe, so they fit the data to their preconceived notions.

Despite the deception and delusions, the truth is not offended. Some day, we may come to know it, and then those who sowed lies will be refuted and lose whatever glory they thought they had.

Wednesday, October 1, 2008

Contact/Submissions

Option 1: My email address

[email address displayed as an image, not reproduced here]

(the email is not cut-and-paste, to discourage spam)

Option 2: Click on the link below to send a message (the link will open in a new window; disable your popup blocker to continue)


Contact Webmaster

Unaccounted Variables

One of the first lab exercises I taught this semester dealt with simple use of a metric scale, but I found in that part of our lab experience the opportunity to teach a much more powerful lesson. When scientists plan experiments, we hope we have taken into account all the things that need to be controlled so that only one variable remains: the thing we intend to test. Due to ignorance, willful or innocent, however, things sometimes surprise us that we did not account for.

I asked the students to record their values on the board so that we could analyze the variance in the data from person to person. I found another phenomenon that was easy to account for but not necessarily apparent. The jar of pennies to be weighed contained pennies of different mintages. In 1982, the US Mint stopped coining pennies in solid copper alloy and started wrapping copper around a zinc core, changing the weight. Another student, apparently unable to read, weighed a 1000 ml beaker instead of a 500 ml one. This resulted in some widely varied numbers.

By and large, the students obtained weights where the only value that varied was in the last significant figure. That is common in science: it's the figure we're not EXACTLY certain of. For the different pennies (1997 and 1978), there was roughly a 35% difference in weight (2.23 g vs. 3.41 g). If you didn't know about the change in mintage and assumed that a penny was a penny, you might factor in both weights and end up with skewed data. Or you might throw out the aberrant values because they skewed the results, but then you'd have to say "using pennies minted in the 1990s" in your description of the objects weighed. As for the beakers, it was easy to throw out the one value, because the student knew what he'd done wrong and could explain why discarding it was justified.
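Here is a toy sketch in Python of how pooling the two mintages skews a summary statistic; the two class weights above are real, but the replicate values around them are invented for illustration:

```python
# Toy illustration of mixed-mintage pennies skewing a mean.
# Only 3.41 g and 2.23 g come from class; the rest are invented.
import statistics

pre_1982  = [3.41, 3.40, 3.42]   # solid copper-alloy pennies
post_1982 = [2.23, 2.24, 2.22]   # copper-plated zinc pennies

pooled = pre_1982 + post_1982

print(f"pre-1982 mean:  {statistics.mean(pre_1982):.2f} g")
print(f"post-1982 mean: {statistics.mean(post_1982):.2f} g")
# The pooled mean (~2.82 g) describes no penny you could actually
# pick out of the jar, and its spread dwarfs the balance's error:
print(f"pooled mean:    {statistics.mean(pooled):.2f} g "
      f"(stdev {statistics.stdev(pooled):.2f} g)")
```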

Outliers cannot be removed unless they are properly identified. That doesn't seem to stop many of my colleagues from deleting, losing, or omitting data that counters the conclusion at which they wish to arrive. I have actually been TOLD by PhDs to omit data for various reasons. I must thank Genevieve Pont-Kingdon at ARUP for NEVER having given me the impression that doing so was acceptable.


In my last post, I briefly mentioned variation within a species. Our lab studied 18 different cultivars of the species Vitis vinifera (wine grape), which is far from all of the cultivars available. There are dozens of species in the genus Vitis, and many other fruiting vines besides. To assume that the behavior of Vitis vinifera Cabernet Sauvignon explains that of Vitis riparia (a North American native vine) or that of watermelon would be silly, yet that's exactly what scientists sometimes try to tell us.

When I made conclusions, I said things like this:
For Vitis vinifera cultivar Gewürztraminer under water deficit stress in greenhouse conditions, we observed a 10-fold reduction of resveratrol in the leaves and a 2-fold reduction in the berries. A total of three biological samples were tested on three separate occasions to arrive at this figure.

I did not try to say that water deficit stress will affect other wine grapes grown in Africa or Iceland in the same way, or that resveratrol was affected the same way systemically. You must restrict your conclusions to the limits you define, or else you start running into other variables. Even then, they sometimes show up when you least expect it, even in something as simple as a penny.

Tuesday, September 30, 2008

Scale and Sample Size

I made myself quite unpopular at conferences by asking questions about the statistical significance of findings. All scientists want to prove some sweeping new concept or cure a disease, but depending on the scale and sample size, their efforts may not be relevant or useful to the world at large. The vast array of possible SNPs accounts, by and large, for the frequency with which pharmaceuticals cause severe complications, including death: the drugs are designed for the many and do not often take into account minute deviations from "normal".

Many researchers came equipped with graphs and charts displaying vast arrays of data, presumably meant to awe us with the enormity of their conclusions. However, I noticed with alarming frequency an absence of statistics validating the fit of those conclusions (not that statistics are always a guarantee, depending on the frequency and severity of outliers for which we cannot account; more on that later). I used my time to ask them questions on statistical relevance in order to determine how useful their science might be to me. After all, if I intend to springboard from their conclusions, I want to make sure the claims that appeal to me stand on solid ground.

For my own research, we considered both biological and technical replicates. I learned that lesson in industry at ARUP Laboratories in Salt Lake City. I would sample at least three different biological samples three different times each, for a total of nine measurements, before plotting the data. This tripartite replication in both biological and technical capacity helped me better assess the normality of the data and isolate aberrations, which were usually due to operator error (mine).

After that, I performed ANOVA, χ², and other tests. Please note that the R² value in the graph from the last entry is 0.99. You need not do that much; I was willing to accept a simple set of standard deviation bars and an n representing sample size.
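As a minimal sketch of what that looks like in practice (in Python, with invented numbers), here is the three-by-three replicate scheme followed by a one-way ANOVA; note that technical replicates get averaged first, so n counts plants, not injections:

```python
# Sketch: 3 biological x 3 technical replicates, then one-way ANOVA.
# All values are invented purely for illustration.
from scipy import stats

# Each inner list holds 3 technical replicates of one biological sample.
control  = [[4.9, 5.1, 5.0], [5.3, 5.2, 5.4], [4.8, 5.0, 4.9]]
stressed = [[7.9, 8.1, 8.0], [8.4, 8.3, 8.5], [7.6, 7.8, 7.7]]

# Average the technical replicates so each plant contributes one value;
# otherwise n is inflated and the p-value is misleadingly small.
ctrl_means   = [sum(t) / len(t) for t in control]
stress_means = [sum(t) / len(t) for t in stressed]

f, p = stats.f_oneway(ctrl_means, stress_means)
print(f"n = {len(ctrl_means)} per group, F = {f:.1f}, p = {p:.4f}")
```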

If you test one plant and then tell me you were able to raise its resveratrol levels to 50x the normal level under enhanced CO2 concentrations, and then ask me to believe that this will hold for every individual grape of every cultivar of every species in the genus, I won't buy it. Congress might, or maybe the ASEV, and they may give you money, but it won't be useful to anyone if it was a fluke. Utility, after all, is what we seek.

Anything worth doing at all is worth doing well. Plus, many an FDA warning, Pfizer settlement, or GSK recall, ad infinitum, could have been prevented if a few scientists had taken the time to test a few more samples, especially when their sample size was one. Come on, people.

Thursday, September 25, 2008

Projections and Prognostication


While teaching undergraduate labs in graduate school, we reinforced the principle of being able to make a relevant comparison. The students were asked to analyze the protein content of tissues using various biochemical measurements, in this particular example against an external standard.

After diluting a 1 mg/ml protein standard, they performed the analysis and calculations needed to produce a standard curve. The following represents a typical standard curve:

[figure: typical protein standard curve]

The students would then test their unknowns and compare them to this standard curve.


As luck would have it, the samples to be tested all had protein contents WAY outside the range of the curve the students created. Some students foolishly extrapolated, assuming the curve to be linear out to whatever value they obtained, ignoring the possibility that the curvilinear relationship observed might not hold true outside this range. We forced them to dilute their samples until the readings fell within the range covered by the standard curve, report an accurate number, and then scale it back up using their dilution factor.
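A minimal sketch of that fit-then-check logic, in Python with invented absorbance readings (and assuming, for simplicity, that the curve is approximately linear within its range):

```python
# Sketch: fit a standard curve, then refuse to report unknowns that
# fall outside its measured range. All numbers are invented.
import numpy as np

conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # mg/ml standards
absb = np.array([0.02, 0.21, 0.39, 0.61, 0.78, 1.01])  # absorbance readings

# Fit concentration as a function of absorbance (inverse calibration).
slope, intercept = np.polyfit(absb, conc, 1)

def protein_conc(reading, dilution_factor=1):
    """Return protein concentration in mg/ml; refuse to extrapolate."""
    if not absb.min() <= reading <= absb.max():
        raise ValueError("Reading outside the standard curve: "
                         "dilute the sample and measure again.")
    return (slope * reading + intercept) * dilution_factor

print(protein_conc(0.55))                       # within range: fine
print(protein_conc(0.52, dilution_factor=10))   # 1:10 dilution, scaled back
# protein_conc(2.40) would raise, exactly as we forced the students to do.
```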

When you project behavior beyond the measured realm, you lose all scientific credibility. In economics, they continually remind you that "past performance does not guarantee future results", and in science we can only say what we have observed, not what we expect. An expectation is a hypothesis, not data for a conclusion, so any projection represents what we THINK will be rather than what we know. It is not something we have actually measured; it represents not fact but conjecture and assumption. (This serves as a relevant point in politics that I may address later on my other blog.)

Global warming advocates do this all the time. They project outside the range and extrapolate over a wide range of time and scale, the particulars of their experiments notwithstanding. Other scientists likewise apply measurements to things outside their scale and scope and make sweeping generalizations which are not necessarily true.

When working on volatile compounds in grapes, the first thing I did was establish the linear detection limits of the GC/MS protocol I used. For most compounds, I could detect linearly down to 4 ppb, which matters because the absolute amount of a volatile is not necessarily proportional to its importance or potency. Sometimes it is the RELATIVE amount that makes all the difference. Even on a linear scale, then, it might take 4000 ppb of one compound to register the same difference as 4 ppb of another.
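For what it's worth, here is a hedged sketch in Python of how one might check that a calibration series is linear over its range; the 4 ppb floor echoes my own work above, but the peak areas are invented:

```python
# Sketch: checking the linear range of a GC/MS calibration series.
# The concentrations echo the 4 ppb floor; peak areas are invented.
import numpy as np

ppb  = np.array([4, 40, 400, 4000])             # spiked standard, ppb
area = np.array([1.1e3, 1.0e4, 9.8e4, 9.9e5])   # detector response

# If response is proportional to concentration, a log-log fit has
# slope 1; deviations flag the edge of the linear detection range.
slope, _ = np.polyfit(np.log10(ppb), np.log10(area), 1)
r = np.corrcoef(np.log10(ppb), np.log10(area))[0, 1]
print(f"log-log slope = {slope:.2f}, r^2 = {r**2:.3f}")
```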

The proof is in the data. When scientists make broad, sweeping claims across a vast array of possibilities in clime, scale, age, time, etc., I raise an eyebrow, and my hand, to inquire. More often than not, this error is accompanied by another error, and my next subject: economy of scale.

Sunday, September 21, 2008

Why Start This Blog?

Many years ago in graduate school, I wasted quite a few months pursuing a project that would never, ever work. What was even more frustrating was that other labs knew it wouldn't work, but they didn't bother to share their findings with us, because science journals don't publish things that don't work.

In his book Climate Confusion, climatologist Roy Spencer makes the following observation. Speaking about why global warming alarmists and their complicit media counterparts sensationalize the Armageddon scenario of world destruction, he points out that:

In science, if you want to keep getting funded, you should find something earth-shaking.

This phenomenon provides most of the impetus for hasty and poorly drawn conclusions in science. To get published in a prestigious journal, many scientists will project their findings to astronomically irrational levels and claim, as a crude example, that "our research on abiotic stress in creosote will one day provide all the rubber the world needs without any cost, because these bushes grow wild throughout Nevada, so everyone who owns any of these shrubs on barren lots will one day be a multi-millionaire". The truth is that, much as I like the guy, Dr. David Shintani's lab isn't remotely close to bringing any kind of alternative rubber source to market, nor will it get there by itself in our lifetime without some kind of corporate sponsorship and investment.

In their haste to publish, graduate, and tack a series of unintelligible vowels and consonants onto the ends of their names (mine are, incidentally, MSBMB, SSRAII, APB, whatever the heck that means), colleagues of mine have falsified data, omitted or deleted information, thrown out abnormal results without good reason, and made inaccurate claims based on statistically insignificant numbers of biological and technical replicates. If you then try to piggyback on their research, you may well find their data and their conclusions faulty, meaning that you waste a lot of time and resources duplicating their efforts. What consequences do they face? None. I don't personally know of anyone stripped of an MS or PhD for having had their thesis or dissertation disproven.

The true tragedy is the cost to you, the taxpayer and consumer. How much duplicated effort in time and money exists because people are only able to, or interested in, publishing breakthroughs that will exalt their own personal self-interest? Scientific journals as presently constituted concern themselves only with publishing what did work, to the exclusion of everything else we tried that didn't work, along with its accompanying data and explanations.

Enter the Journal of Negative Results. Would you like to know if someone has already tried to solve a particular scientific question with a particular technique? How did they fare? Why did they fail? Why can't they get credit for all that hard work with a publication? I think such a journal adds value to the system of science and may save people a lot of time.

Now, I lack the financial means to fund such an endeavor, but I do have this: a searchable blog dedicated to any and all who would like to let the rest of us know what they have been able to disprove by their work. I may not offer a prestigious journal in which to publish, but I offer you another chance to get yourself into the Google or Yahoo web results for work you did, and to give credit where credit is due. I ask no compensation for this, and I will publish any and all information on the techniques, organisms, variants, equipment, and personnel that netted you abnormal results and didn't get you what you were aiming at. Maybe someone can serendipitously segue from your efforts and get an idea they hadn't thought of before, all while saving everyone else time and money.

Now accepting manuscripts.