[Cytometry] Scaling problem with FACSDiva
Dave.Dunaway at nationwidechildrens.org
Fri Feb 7 10:40:55 EST 2014
That makes sense. I guess I have been "old school," as Cris put it, in that I was originally trained to set voltages based on an unstained control, and that it was important to see the unstained sample graphically without piling the events up on the left axis.
From: cytometry-bounces at lists.purdue.edu [mailto:cytometry-bounces at lists.purdue.edu] On Behalf Of Mario Roederer
Sent: Friday, February 07, 2014 7:28 AM
To: Purdue Flow (cytometry at lists.purdue.edu)
Subject: Re: [Cytometry] Scaling problem with FACSDiva
I guess I don't follow your basic premise. Unless you are using software that does something insanely ridiculous, ALL events are visible. Anything "completely offscale" on the low side would be squashed up on the left axis, and while it may appear unsightly in a graphic representation, any numerical report (gate percentage) would reflect those events. You could do an MFI calculation and show a net difference in fluorescence intensity. There are any number of statistical reports you could do to demonstrate positivity. But in this case, even a graph should convince -- because you would see all the events on the low axis (and here's a classic case of graphs misleading: from the graphic it may be impossible to even estimate the fraction of events piled up on the axis -- but they are not "missing" or invisible, unless there's a major bug in your software).
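The point that axis-piled events still show up in every numerical report can be sketched with simulated data (all numbers, thresholds, and distributions below are illustrative, not from any real instrument):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated fluorescence intensities (arbitrary units).
# The unstained control sits near zero; baseline restore pushes some events negative.
unstained = rng.normal(loc=50, scale=40, size=10_000)
stained = np.concatenate([
    rng.normal(loc=50, scale=40, size=6_000),       # negative fraction
    rng.lognormal(mean=7.0, sigma=0.4, size=4_000)  # positive fraction
])

# Events "piled up on the left axis" of a log display are still in the array,
# so any numerical report includes them.
threshold = np.percentile(unstained, 99.5)          # illustrative positivity cutoff
pct_positive = 100.0 * np.mean(stained > threshold)

# A net-MFI comparison likewise reflects every event, on scale or not.
delta_mfi = np.median(stained) - np.median(unstained)

print(f"% positive: {pct_positive:.1f}")
print(f"delta MFI (median): {delta_mfi:.1f}")
```

The gate percentage and the MFI difference both come out the same whether or not the low-end events are visible in a plot, which is the point: nothing is "missing," it just isn't pretty.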
Fundamentally, you can never use one sample to conclude positivity anyway; this can only be inferred by comparing a stained sample to a control. So "visibly demonstrable" on a single sample is inconclusive. Two examples of this: First, by altering the transformation of the intensity axis (e.g., linear vs. log vs. biexponential, etc.), you can make a relatively uniform unstained population split into two apparent peaks. This could lead to an incorrect conclusion about "positivity" based solely on a single-sample graphical analysis. Second, it is possible that your sample has heterogeneous autofluorescence (that you were unaware of), such that you suddenly have two "real" peaks even in the absence of staining. Given no other information, most people would look at the two-peaked distribution, conclude there are visibly demonstrable "higher" and "lower" intensity populations, and incorrectly conclude positivity. (Finally, I note that if your sample is 100% positive, then you will again have a uniform peak -- and only by comparing to a separate control would you establish positivity.)
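The first example -- a single population near zero appearing split by the display transform -- can be sketched with simulated data (a crude clipped-log display stands in for an actual logicle transform here; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# One homogeneous unstained population centered at zero
# (baseline restore leaves roughly half the events slightly negative).
events = rng.normal(loc=0.0, scale=100.0, size=10_000)

# Naive log display: clip at a small floor, then take log10.
floor = 1.0
displayed = np.log10(np.clip(events, floor, None))

# The displayed histogram now shows a spike at the floor (the clipped events)
# plus a broad peak from the positive tail -- two apparent "peaks" from what
# is really a single uniform population.
counts, edges = np.histogram(displayed, bins=40)
frac_clipped = np.mean(events <= floor)

print(f"events clipped to the axis: {frac_clipped:.0%}")
```

About half the events land in the first bin and the rest form a second mode well to the right, so a viewer who trusts the picture alone would "see" two populations where there is one.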
So you are correct that positivity can only be defined in context of a negative control, but this must be a separate sample. And it doesn't really matter if all of the control events are squashed up at the low end of the scale (which I presume is what you mean by "off-scale").
On Feb 6, 2014, at 9:20 AM, Dunaway, Dave wrote:
> Dr. Roederer,
> I understand what you are saying here, but in the case of the original example put forth by Cris, how does one know, if the negatives are completely off scale, that the positives are legitimate? If you were to review a manuscript for publication, are you telling me that you would accept such a case? That is what I mean by visibly demonstrative. How do I know that what one says is positive truly is, if I don't see it in concert with the negative?
> -----Original Message-----
> From: cytometry-bounces at lists.purdue.edu [mailto:cytometry-bounces at lists.purdue.edu] On Behalf Of Mario Roederer
> Sent: Wednesday, February 05, 2014 9:29 PM
> To: Purdue list
> Subject: Re: [Cytometry] Scaling problem with FACSDiva
> Dave Dunaway wrote:
>> I find this to be dubious advice unless all you are interested in is crunching numbers.
> But, in fact, that's ALL we should be interested in. Graphs are for illustration purposes, for data exploration purposes, and for pretty pictures. We should not draw conclusions from graphs, we can only do so from numbers, properly controlled, properly analyzed, but numbers nonetheless.
>> The data may be there but if it's not visibly demonstrative then your data is questionable when it comes to presentation.
> "Visibly demonstrative"? I'm not even sure what you mean by this. But it illustrates a basic mistake we all make -- we put too much emphasis on visualizations, on graphs. Data is questionable based on evaluations of its reliability and reproducibility (statistics). We can look at graphs to decide on what questions to ask of the data, but we must be cautious about drawing conclusions from graphs.
> I can show you examples of graphs which look great but are meaningless. And I can show you graphs of data that look questionable but are good.
> Cris's basic advice was correct, but brief. If your positives are within a half decade of the top end of the dynamic range, then reduce your PMT voltage (sensitivity). Inserting an ND filter is not necessarily an equivalent (or good) solution -- it accomplishes roughly the same thing but at a cost of increased noise from decreased photoelectron counts.
> Reducing the PMT voltage won't make negatives "invisible" -- it will only squash them near the zero value. Nothing is hidden.
> BTW, when you see your negative population go into the "below-zero" measurement range of uncompensated parameters (i.e., towards the left edge on the transformed logicle scale), this is a consequence of the baseline restore in the Diva electronics. It's a pain... but it's not typically a problem.
> Just make sure that you consider ALL events below the upper end of an unstained sample (wherever it ends up on your settings) to be equivalent. I.e., if your unstained cells go to 100, then EVERYTHING below 100, whether it's +10 or -300, is the same -- don't gate those events separately, even if they look like distinct populations in your graph.
> Because graphics can lie, but numbers always tell the truth.
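The gating advice above (one cutoff at the upper end of the unstained sample; everything below it, +10 or -300, counts as the same negative population) can be sketched as follows. The distributions and the choice of the sample maximum as the gate are illustrative assumptions, not a prescribed method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unstained control: baseline restore scatters events on both sides of zero,
# sometimes into what looks like two sub-populations on a logicle display.
unstained = np.concatenate([
    rng.normal(loc=-300, scale=80, size=3_000),   # "below-zero" pile
    rng.normal(loc=60, scale=50, size=7_000),
])

stained = np.concatenate([
    rng.normal(loc=-300, scale=80, size=2_000),
    rng.normal(loc=60, scale=50, size=3_000),
    rng.lognormal(mean=7.0, sigma=0.5, size=5_000),  # truly stained events
])

# One gate at the upper end of the unstained sample; everything below it is
# treated as one negative population, regardless of any apparent sub-peaks.
gate = unstained.max()   # a high percentile would also be a reasonable choice
pct_positive = 100.0 * np.mean(stained > gate)

print(f"gate at {gate:.0f}; % positive = {pct_positive:.1f}")
```

The recovered positive fraction matches the simulated one even though the negatives span a wide, visually "structured" range below the gate.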
Cytometry mailing list
Cytometry at lists.purdue.edu
Search the list archive at http://tinyurl.com/cytometry