What if experiment

Following some tinkering with the apparatus, Morpurgo found that if he separated the capacitor plates he obtained only integral values of charge. Pickering goes on to note that Morpurgo did not tinker with the two competing theories of the phenomena then on offer, those of integral and fractional charge. He concludes: "Achieving such relations of mutual support is, I suggest, the defining characteristic of the successful experiment."

Pickering has made several important and valid points concerning experiment. Most importantly, he has emphasized that an experimental apparatus is rarely capable initially of producing a valid experimental result and that some adjustment, or tinkering, is required before it does. He has also recognized that both the theory of the apparatus and the theory of the phenomena can enter into the production of a valid experimental result. What one may question, however, is the emphasis he places on these theoretical components.

From Millikan onwards, experiments had strongly supported the existence of a fundamental unit of charge and charge quantization.

It was the failure to produce measurements in agreement with what was already known (i.e., integral values of charge) that led Morpurgo to adjust his apparatus. This was true regardless of the theoretical models available, or those that Morpurgo was willing to accept. To be sure, Pickering has allowed a role for the natural world in the production of the experimental result, but it does not seem to be decisive. He suggests that the experimental apparatus itself is a less plastic resource than either the theoretical model of the apparatus or that of the phenomenon.

Hacking suggests that the results of mature laboratory science achieve stability and are self-vindicating when the elements of laboratory science are brought into mutual consistency and support. These elements are (1) ideas: questions, background knowledge, systematic theory, topical hypotheses, and modeling of the apparatus; (2) things: target, source of modification, detectors, tools, and data generators; and (3) marks and the manipulation of marks: data, data assessment, data reduction, data analysis, and interpretation.

We invent devices that produce data and isolate or create phenomena, and a network of different levels of theory is true to these phenomena. Conversely, we may in the end count them as phenomena only when the data can be interpreted by theory. One might ask whether such mutual adjustment between theory and experimental results can always be achieved. What happens when an experimental result is produced by an apparatus on which several of the epistemological strategies, discussed earlier, have been successfully applied, and the result is in disagreement with our theory of the phenomenon?

Accepted theories can be refuted. Several examples will be presented below. Hacking himself worries about what happens when a laboratory science that is true to the phenomena generated in the laboratory, thanks to mutual adjustment and self-vindication, is successfully applied to the world outside the laboratory.

Does this argue for the truth of the science? Recently Pickering has offered a somewhat revised account of science. Scientists are human agents in a field of material agency which they struggle to capture in machines (Pickering).

That being done, integral charges were observed and the result was stabilized by the mutual agreement of the apparatus, the theory of the apparatus, and the theory of the phenomenon. "My analysis thus displays an intimate and responsive engagement between scientific knowledge and the material world that is integral to scientific practice," Pickering writes.

Nor does the natural world seem to have much efficacy. As we have seen, Morpurgo reported that he did not observe fractional electrical charges. On the other hand, in the late 1970s and early 1980s, Fairbank and his collaborators published a series of papers in which they claimed to have observed fractional charges (see, for example, LaRue, Phillips et al.). Faced with this discord, Pickering concludes only that each experimenter achieved his own stabilization of the phenomenon.

There is a real question here as to whether or not fractional charges exist in nature. The conclusions reached by Fairbank and by Morpurgo about their existence cannot both be correct. It seems insufficient to merely state, as Pickering does, that Fairbank and Morpurgo achieved their individual stabilizations and to leave the conflict unresolved.

Pickering does comment that one could follow the subsequent history and see how the conflict was resolved, and he does give some brief statements about it, but its resolution is not important for him. At the very least one should consider the actions of the scientific community.

Scientific knowledge is not determined individually, but communally. Pickering seems to acknowledge this: "I can see nothing wrong with thinking this way…." These are questions about the natural world that can be resolved.

Another issue neglected by Pickering is the question of whether a particular mutual adjustment of theory (of the apparatus or of the phenomenon) and the experimental apparatus and evidence is justified.

Pickering seems to believe that any such adjustment that provides stabilization, either for an individual or for the community, is acceptable. Others disagree. They note that experimenters sometimes exclude data and engage in selective analysis procedures in producing experimental results.

These practices are, at the very least, questionable, as is the use of the results produced by such practices in science. There are, in fact, procedures in the normal practice of science that provide safeguards against them (for details see Franklin, Section 1). Franklin remarks that it is insufficient simply to say that the resolution is socially stabilized. The important question is how that resolution was achieved and what reasons were offered for it.

If we are faced with discordant experimental results and both experimenters have offered reasonable arguments for their correctness, then clearly more work is needed. It seems reasonable, in such cases, for the physics community to search for an error in one, or both, of the experiments. Pickering discusses yet another difference between his view and that of Franklin. This concerns what Pickering calls a police function, which "relates specifically to theory choice in science, which … is usually discussed in terms of the rational rules or methods responsible for closure in theoretical debate."

For further discussion see Franklin. Franklin regards them as a set of strategies from which physicists choose in order to argue for the correctness of their results. As noted above, the strategies offered are neither exclusive nor exhaustive.

There is another point of disagreement between Pickering and Franklin. Pickering claims to be dealing with the practice of science, and yet he excludes certain practices from his discussions. One such practice is the application of the epistemological strategies outlined above to argue for the correctness of an experimental result.

In fact, one of the essential features of an experimental paper is the presentation of such arguments. Writing such papers, a performative act, is also a scientific practice and it would seem reasonable to examine both the structure and content of those papers.

Recently Ian Hacking (chapter 3) has provided an incisive and interesting discussion of the issues that divide the constructivists (Collins, Pickering, etc.) from the rationalists. He sets out three sticking points between the two views: (1) contingency, (2) nominalism, and (3) external explanations of stability.

Contingency is the idea that science is not predetermined, that it could have developed in any one of several successful ways. This is the view adopted by constructivists (see Pickering). Such an alternative science would be not logically incompatible with the current one, just different.

The constructionist about the idea of quarks thus claims that the upshot of this process of accommodation and resistance is not fully predetermined. Laboratory work requires that we get a robust fit between apparatus, beliefs about the apparatus, interpretations and analyses of data, and theories. Before a robust fit has been achieved, it is not determined what that fit will be. Not determined by how the world is, not determined by technology now in existence, not determined by the social practices of scientists, not determined by interests or networks, not determined by genius, not determined by anything.

It is doubtful that the world, or more properly, what we can learn about it, entails a unique theory. If instead, as seems more plausible, Hacking means that the way the world is places no restrictions on successful science, then the rationalists disagree strongly.

They want to argue that the way the world is restricts the kinds of theories that will fit the phenomena, the kinds of apparatus we can build, and the results we can obtain with such apparatuses. To think otherwise seems silly. Consider a homey example. It seems highly unlikely that someone can come up with a successful theory in which objects whose density is greater than that of air fall upwards.

This is not a caricature of the view Hacking describes. Consider the speed of light in vacuum, approximately 3 × 10^8 m/s. That is determined by the way the world is. Any successful theory of light must give that value for its speed. Another difference between Pickering and Franklin on contingency concerns the question not of whether an alternative is possible, but rather of whether there are reasons why that alternative should be pursued. Pickering seems to identify "can" with "ought."

In the late 1970s there was a disagreement between the results of low-energy experiments on atomic parity violation (the violation of left-right symmetry) performed at the University of Washington and at Oxford University and the result of a high-energy experiment on the scattering of polarized electrons from deuterium (the SLAC experiment). The atomic parity-violation experiments failed to observe the parity-violating effects predicted by the Weinberg-Salam (W-S) unified theory of electroweak interactions, whereas the SLAC experiment observed the predicted effect.

These early atomic-physics results were quite uncertain in themselves, and that uncertainty was increased by positive results obtained in similar experiments at Berkeley and Novosibirsk. At the time the theory had other evidential support, but was not universally accepted. Pickering and Franklin differ dramatically in their discussions of these experiments.

Their difference on contingency concerns a particular theoretical alternative that was proposed at the time to explain the discrepancy between the experimental results. Pickering asked why a theorist might not have attempted to find a variant of electroweak gauge theory that reconciled the Washington-Oxford atomic parity results with the positive SLAC result.

What such a theorist was supposed to do with the supportive atomic parity results later provided by experiments at Berkeley and at Novosibirsk is never mentioned. Pickering notes that open-ended recipes for constructing such variants had already been written down. It would have been possible to do so, but one may ask whether a scientist would have wished to do so. This is not to suggest that scientists do not, or should not, engage in speculation, but rather that there was no necessity to do so in this case.

Theorists often do propose alternatives to existing, well-confirmed theories. Constructivist case studies, however, always seem to result in support of the existing, accepted theory (Pickering; Collins; Collins and Pinch). One criticism implied in such cases is that alternatives are not considered, that the hypothesis space of acceptable alternatives is either very small or empty.

One may seriously question this. Thus, when the experiment of Christenson et al. appeared to show that CP symmetry was violated, a considerable number of alternative explanations were offered and pursued. As one can see, the limits placed on alternatives were not very stringent. Within a few years all of the alternatives had been tested and found wanting, leaving CP symmetry unprotected. Here the differing judgments of the scientific community about what was worth proposing and pursuing led to a wide variety of alternatives being tested.

Hacking's second sticking point is nominalism, the view that our names and classifications are human creations rather than features the world itself dictates. Opponents contend that good names, or good accounts of nature, tell us something correct about the world. This is related to the realism-antirealism debate concerning the status of unobservable entities that has plagued philosophers for millennia. For example, Bas van Fraassen, an antirealist, holds that we have no grounds for belief in unobservable entities such as the electron and that accepting theories about the electron means only that we believe that the things the theory says about observables are true.

A nominalist further believes that the structures we conceive of are properties of our representations of the world and not of the world itself. Hacking refers to opponents of that view as inherent structuralists. Andrew Pickering entitled his history of the quark model Constructing Quarks (Pickering). Physicists argue that this demeans their work.

For Weinberg, quarks and Mount Everest have the same ontological status. They are both facts about the world. Hacking argues that constructivists do not, despite appearances, believe that facts do not exist, or that there is no such thing as reality. One might add, however, that the reasons Hacking cites as supporting that belief are given to us by valid experimental evidence and not by the social and personal interests of scientists.

Latour and Woolgar might not agree. Franklin argues that we have good reasons to believe in facts, and in the entities involved in our theories, always remembering, of course, that science is fallible.

Rationalists think that most science proceeds as it does in the light of good reasons produced by research. Some bodies of knowledge become stable because of the wealth of good theoretical and experimental reasons that can be adduced for them. Constructivists think that the reasons are not decisive for the course of science.

Nelson concludes that this issue will never be decided. Rationalists, at least retrospectively, can always adduce reasons that satisfy them. Constructivists, with equal ingenuity, can always find to their own satisfaction an openness where the upshot of research is settled by something other than reason. Something external.

Thus, there is a rather severe disagreement on the reasons for the acceptance of experimental results. For some, like Staley, Galison and Franklin, it is because of epistemological arguments.

For others, like Pickering, the reasons are utility for future practice and agreement with existing theoretical commitments.

Although the history of science shows that the overthrow of a well-accepted theory leads to an enormous amount of theoretical and experimental work, proponents of this view seem to take it as unproblematic that agreement with existing theory always has more future utility. Hacking and Pickering also suggest that experimental results are accepted on the basis of the mutual adjustment of elements, which includes the theory of the phenomenon.

Authors like Thomas Kuhn and Paul Feyerabend put forward the view that evidence does not confirm or refute a scientific theory, since it is laden with the theory itself. Evidence is not a set of observational sentences autonomous from theoretical ones, as the logical positivists believed. Each new theory, or theoretical paradigm, as Kuhn labeled larger theoretical frameworks, produces, as it were, its evidence anew. Thus, theoretical concepts infect the entire experimental process, from the stage of design and preparation to the production and analysis of data.

A simple example that is supposed to convincingly illustrate this view is the measurement of temperature with a mercury thermometer in order to test whether objects expand when their temperature increases. Note that in such a case one tests the hypothesis by relying on the very assumption that the expansion of mercury indicates an increase in temperature.

There may be a fairly simple way out of the vicious circle in which theory and experiment are caught in this particular case of theory-ladenness.

It may suffice to calibrate the mercury thermometer with a constant-volume gas thermometer, for example, whose use does not rely on the tested hypothesis but on the proportionality of the pressure of the gas and its absolute temperature (Franklin et al.).
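
To make the escape from the circle concrete, here is a minimal sketch of the calibration idea in Python; the readings, reference values, and function names are invented for illustration and are not taken from Franklin et al.

```python
# A constant-volume gas thermometer gives temperature directly from pressure
# (T = T_ref * P / P_ref, from the proportionality of pressure and absolute
# temperature). The mercury thermometer's column heights are then mapped onto
# that independent scale, so the hypothesis under test is not presupposed.

def gas_thermometer_temp(p_kpa, p_ref_kpa=101.3, t_ref_k=273.15):
    """Temperature from a constant-volume gas thermometer reading."""
    return t_ref_k * p_kpa / p_ref_kpa

# Simultaneous readings: gas-thermometer pressure vs. mercury column height (made up).
calibration = [(101.3, 10.0), (108.7, 12.4), (116.1, 14.8)]  # (kPa, mm)

temps = [gas_thermometer_temp(p) for p, _ in calibration]
heights = [h for _, h in calibration]
slope = (temps[-1] - temps[0]) / (heights[-1] - heights[0])  # simple linear calibration

def mercury_temp(height_mm):
    """Temperature inferred from the calibrated mercury thermometer."""
    return temps[0] + slope * (height_mm - heights[0])

# The mercury thermometer can now test whether other bodies expand when heated.
print(round(mercury_temp(13.0), 1))
```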

Although most experiments are far more complex than this toy example, one could certainly approach the view that experimental results are theory-laden on a case-by-case basis. Yet there may be a more general problem with the view. Bogen and Woodward argued that debate on the relationship between theory and observation overlooks a key ingredient in the production of experimental evidence, namely the experimental phenomena.

The experimentalists distill experimental phenomena from raw experimental data (e.g., recorded counts or photographs). The identification of an experimental phenomenon as significant (e.g., a peak at a particular energy) can thus be made independently of the theory that the phenomenon may eventually support or refute.

Only when a significant phenomenon has been identified can the stage of data analysis begin in which the phenomenon is deemed either to support or to refute a theory. Thus, the theory-ladenness-of-evidence thesis fails at least in some experiments in physics. The authors substantiate their argument in part through an analysis of the experiments that led to the breakthrough discovery of weak neutral currents.

The weak neutral current is a type of force produced by so-called bosons — short-lived particles responsible for energy transfer between other particles such as hadrons and leptons. The relevant peaks were recognized as significant via statistical analysis of the data, and only later interpreted as evidence for the existence of the bosons.
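
As a schematic illustration of how a peak can be singled out statistically before any theoretical interpretation, consider the following sketch; the counts, background level, and three-sigma criterion are invented for illustration and are not taken from the neutral-current experiments.

```python
import math

# Hypothetical binned counts around a candidate peak, with an estimated smooth
# background level. The question "is this excess significant?" is answered
# statistically, before any interpretation in terms of a particular theory.

observed = [102, 98, 141, 150, 137, 97, 103]   # counts per bin (made up)
background = 100.0                              # expected background per bin

def excess_significance(n_obs, n_bkg):
    """Rough Gaussian significance of an excess over a Poisson background."""
    return (n_obs - n_bkg) / math.sqrt(n_bkg)

for i, n in enumerate(observed):
    z = excess_significance(n, background)
    flag = "  <-- candidate peak bin" if z > 3 else ""
    print(f"bin {i}: {n} counts, z = {z:+.1f}{flag}")
```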

This view and the case study have recently been challenged by Schindler. He argues that the tested theory was critical in the assessment of the reliability of data in the experiments with weak neutral currents. He also points out that, on occasion, experimental data can even be ignored if they are deemed irrelevant from a theoretical perspective that physicists find particularly compelling.

This was the case in experiments with so-called zebra-pattern magnetic anomalies on the ocean floor. The readings of new apparatuses used to scan the ocean floor produced intriguing signals, which were nevertheless set aside for some time because the prevailing theoretical perspective deemed them irrelevant. Karaca points out that a crude theory-observation distinction is particularly unhelpful in understanding high-energy physics experiments.

It fails to capture the complexity of relevant theoretical structures and their relation to experimental data. Theoretical structures can be composed of background, model, and phenomenological theories. Background theories are very general theories, models are specific instances of background theories that define particular particles and their properties, and phenomenological theories develop testable predictions based on these models.

Now, each of these theoretical segments stands in a different relationship to experimental data: the experiments can be laden by a different segment to a different extent. This requires a nuanced categorization of theory-ladenness, from weak to strong.

Thus, an experimental apparatus can be designed to test a very specific theoretical model. In contrast, exploratory experiments approach phenomena without relying on a particular theoretical model, so that sometimes the theoretical framework for an experiment consists of phenomenological theory alone. Karaca argues that the experiments with deep-inelastic electron-proton scattering in the late 1960s and early 1970s are examples of such weakly theory-laden experiments.

The application of merely phenomenological parameters in the experiment resulted in the very important discovery of the composite rather than point-like structure of hadrons (protons and neutrons), or the so-called scaling law. And this eventually led to a successful theoretical model of the composition of hadrons, namely quantum chromodynamics, or the quark model of strong interactions.
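
For readers who want the scaling law in symbols, its standard textbook statement is reproduced below; the notation is the conventional one and is not taken from the text above.

```latex
% Bjorken scaling in its textbook form: the deep-inelastic structure function
% becomes independent of the momentum transfer $Q^2$ at fixed Bjorken $x$,
\[
  F_2(x, Q^2) \longrightarrow F_2(x)
  \quad (Q^2 \to \infty,\ x \text{ fixed}),
  \qquad x = \frac{Q^2}{2 M \nu},
\]
% a behavior expected if the proton contains point-like constituents (partons).
```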

Although experiment often takes its importance from its relation to theory, Hacking pointed out that it often has a life of its own, independent of theory. He cited several such episodes, and in none of these cases did the experimenter have any theory of the phenomenon under investigation.

One may also note the nineteenth-century measurements of atomic spectra and the work on the masses and properties of elementary particles during the 1960s. Both of these sequences were conducted without any guidance from theory. In deciding what experimental investigation to pursue, scientists may very well be influenced by the equipment available and their own ability to use that equipment (McKinney). One group, for example, performed a long sequence of experiments with basically the same experimental apparatus, with relatively minor modifications for each particular experiment.

By the end of the sequence the experimenters had become quite expert in the use of the apparatus and knowledgeable about the backgrounds and experimental problems. This allowed the group to successfully perform the technically more difficult experiments later in the sequence. Scientists, both theorists and experimentalists, tend to pursue experiments and problems in which their training and expertise can be used.

Experimenters who noticed such unexpected phenomena saw what they saw because they were curious, inquisitive, reflective people. They were attempting to form theories. In all of these cases we may say that these were observations waiting for, or perhaps even calling for, a theory. The discovery of any unexpected phenomenon calls for a theoretical explanation. Nevertheless, several of the important roles of experiment involve its relation to theory. Experiment may confirm a theory, refute a theory, or give hints to the mathematical structure of a theory.

Let us consider first an episode in which the relation between theory and experiment was clear and straightforward: the discovery that parity, mirror-reflection or left-right symmetry, is not conserved in the weak interactions (for details of this episode see Franklin). Experiments showed that in the beta decay of nuclei the number of electrons emitted in the same direction as the nuclear spin was different from the number emitted opposite to the spin direction.
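
The logic of such a measurement can be shown with a minimal sketch; the counts below are invented, and the simple Poisson error estimate is a standard approximation, not the analysis actually used in the beta-decay experiments.

```python
import math

# Compare electrons emitted along the nuclear spin direction with those emitted
# opposite to it. Under parity conservation the two rates would agree within statistics.

n_along = 4200     # electrons detected along the spin direction (made up)
n_opposite = 5800  # electrons detected opposite the spin direction (made up)

asymmetry = (n_along - n_opposite) / (n_along + n_opposite)
# Independent Poisson counts give Var(A) ~ 4ab/(a+b)^3 for A = (a-b)/(a+b).
sigma = math.sqrt(4 * n_along * n_opposite / (n_along + n_opposite) ** 3)

print(f"A = {asymmetry:.3f} +/- {sigma:.3f}")
print("inconsistent with parity conservation?", abs(asymmetry) > 5 * sigma)
```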

This was a clear demonstration of parity violation in the weak interactions. After the discovery of parity and charge-conjugation nonconservation, and following a suggestion by Landau, physicists considered CP (combined parity and particle-antiparticle symmetry), which was still conserved in the experiments, as the appropriate symmetry. The decay of the neutral K meson into two pions, forbidden if CP symmetry holds, was then observed by a group at Princeton University. Although several alternative explanations were offered, experiments eliminated each of the alternatives, leaving only CP violation as an explanation of the experimental result.

In both of the episodes discussed previously, those of parity nonconservation and of CP violation, we saw a decision between two competing classes of theories. The next episode, the discovery of Bose-Einstein condensation (BEC), illustrates the confirmation of a specific theoretical prediction 70 years after that prediction was first made. Bose and Einstein predicted that a gas of noninteracting bosonic atoms will, below a certain temperature, suddenly develop a macroscopic population in the lowest energy quantum state.
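
The "certain temperature" has a standard ideal-gas expression, reproduced below for orientation; it is the textbook formula for a uniform noninteracting Bose gas, not a detail taken from the episode discussed here.

```latex
% Ideal-gas estimate of the condensation temperature for number density $n$
% and atomic mass $m$:
\[
  k_B T_c = \frac{2\pi \hbar^2}{m}
            \left( \frac{n}{\zeta(3/2)} \right)^{2/3},
  \qquad \zeta(3/2) \approx 2.612,
\]
% below which a macroscopic fraction of the atoms occupies the lowest quantum state.
```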

In the three episodes discussed in the previous section, the relation between experiment and theory was clear. The experiments gave unequivocal results and there was no ambiguity about what theory was predicting. None of the conclusions reached has since been questioned. Parity and CP symmetry are violated in the weak interactions and Bose-Einstein condensation is an accepted phenomenon. In the practice of science things are often more complex.

Experimental results may be in conflict, or may even be incorrect. Theoretical calculations may also be in error or a correct theory may be incorrectly applied. There are even cases in which both experiment and theory are wrong. As noted earlier, science is fallible. In this section I will discuss several episodes which illustrate these complexities.

The episode of the fifth force is the case of a refutation of an hypothesis, but only after a disagreement between experimental results was resolved. The initial experiments gave conflicting results: one supported the existence of the Fifth Force whereas the other argued against it.

After numerous repetitions of the experiment, the discord was resolved and a consensus reached that the Fifth Force did not exist.

For details of this episode see Appendix 4.

Consider also the Stern-Gerlach experiment, which at the time appeared both to confirm one theory and to refute another. In the light of later work, however, the refutation stood, but the confirmation was questionable. In fact, the experimental result posed problems for the theory it had seemingly confirmed. A new theory was proposed, and although the Stern-Gerlach result initially also posed problems for that new theory, after a modification of the new theory the result confirmed it.

In a sense, it was crucial after all. It just took some time. The Stern-Gerlach experiment provides evidence for the existence of electron spin. One might say that electron spin was discovered before it was invented. For details of this episode see Appendix 5. In the last section we saw some of the difficulty inherent in experiment-theory comparison. One is sometimes faced with the question of whether the experimental apparatus satisfies the conditions required by theory, or conversely, whether the appropriate theory is being compared to the experimental result.

In one such episode, after more than a decade of work, both experimental and theoretical, it was realized that there was a background effect in the experiments that masked the predicted effect. When the background was eliminated, experiment and theory agreed. For details see Appendix 6.

The sheer quantity of data raises further issues. Ever vaster amounts of data have been produced by particle colliders as they have grown from room-sized apparatuses to mega-labs with accelerators tens of kilometers long.

Vast numbers of background interactions that are well understood and theoretically uninteresting occur in the detector. These have to be combed in order to identify interactions of potential interest. Protons that collide in the LHC and similar hadron colliders are composed of more elementary particles, collectively labeled partons.

Partons mutually interact, exponentially increasing the number of background interactions. In fact, a minuscule number of interactions are selected from the overwhelming number that occur in the detector. In contrast, lepton collisions, such as collisions of electrons and positrons, produce much lower backgrounds, since leptons are not composed of more elementary particles.

Thus, a successful search for new elementary particles critically depends on successfully crafting selection criteria and techniques at the stage of data collection and at the stage of data analysis.

But the gradual development of and changes in data-selection procedures at the colliders raise an important epistemological concern: how does one decide which interactions, out of the multitude, to detect and analyze, so as to minimize the possibility of throwing out novel and unexplored ones?

One way of searching through vast amounts of data that are already in, i.e., already recorded, is the technique of data cuts. Physicists cut out data that may be unreliable—when, for instance, a data set may be an artefact rather than the genuine particle interaction the experimenters expect. Thus, if a result remains stable under various data cuts, it is increasingly likely to be correct and to represent the genuine phenomenon the physicists think it represents.
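
A minimal sketch of this stability check, with an invented event list and an invented quality variable, might look as follows; real analyses use far larger samples and judge stability against statistical uncertainties.

```python
# Tighten a quality requirement and see whether the extracted signal fraction
# stays roughly the same. With a toy sample this small the fractions fluctuate;
# in practice stability is judged relative to the statistical errors.

events = [
    # (quality_score, reconstructed_mass_gev) -- entirely made-up toy data
    (0.91, 125.2), (0.55, 124.8), (0.73, 91.0), (0.95, 125.4),
    (0.42, 88.7),  (0.81, 125.1), (0.67, 90.3), (0.88, 124.9),
]

def signal_fraction(data, quality_cut, window=(124.0, 126.0)):
    """Fraction of events passing the cut that fall in the signal mass window."""
    passing = [m for q, m in data if q > quality_cut]
    if not passing:
        return 0.0
    in_window = [m for m in passing if window[0] <= m <= window[1]]
    return len(in_window) / len(passing)

for cut in (0.5, 0.7, 0.9):
    print(f"quality > {cut}: signal fraction = {signal_fraction(events, cut):.2f}")
```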

The robustness of the result under various data cuts minimizes the possibility that the detected phenomenon only mimics the genuine one (Franklin). At the data-acquisition stage, however, this strategy does not seem applicable.

As Panofsky suggests, one does not know with certainty which of the vast number of events in the detector may be of interest. Yet Karaca [13] argues that a form of robustness is in play even at the acquisition stage. This experimental approach amalgamates theoretical expectations and empirical results, as the example of the hypothesis of specific heavy particles is supposed to illustrate.

Along with the Standard Model of particle physics, a number of alternative models have been proposed. Their predictions of how elementary particles should behave often differ substantially. Yet in contrast to the Standard Model, they all share the hypothesis that there exist heavy particles that decay into particles with high transverse momentum. Physicists apply a robustness analysis in testing this hypothesis, the argument goes. First, they check whether the apparatus can detect known particles similar to those predicted.

Second, guided by the hypothesis, they establish various trigger algorithms, which are necessary because the frequency and the number of interactions far exceed the limited recording capacity. And, finally, they observe whether any results remain stable across the triggers, as sketched below. One way around the problem of overlooking unexpected interactions is for physicists to produce as many alternative models as possible, including those that may even seem implausible at the time.
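
A toy sketch of the trigger strategy just described is given below; the trigger names, thresholds, and event records are invented and stand in for the far more elaborate trigger menus of real experiments.

```python
# Several independently motivated trigger conditions are applied to the same
# event stream; a candidate signal that survives under more than one of them
# is less likely to be an artifact of any single selection criterion.

events = [
    # each event: leading transverse momentum (GeV), number of jets, missing energy (GeV)
    {"pt": 210.0, "njets": 3, "met": 35.0},
    {"pt": 45.0,  "njets": 1, "met": 8.0},
    {"pt": 180.0, "njets": 4, "met": 120.0},
    {"pt": 30.0,  "njets": 2, "met": 15.0},
]

triggers = {
    "high_pt":    lambda e: e["pt"] > 150.0,   # single high-pT object
    "multijet":   lambda e: e["njets"] >= 3,   # several jets
    "missing_et": lambda e: e["met"] > 100.0,  # large missing energy
}

for name, condition in triggers.items():
    selected = [e for e in events if condition(e)]
    print(f"{name}: {len(selected)} event(s) recorded")

# Here the third event fires all three triggers, which strengthens the case
# that it is not merely a quirk of one selection rule.
```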

Perovic suggests that such a potential failure, namely the failure to spot potentially relevant events occurring in the detector, may also be a consequence of the gradual automation of the detection process.

The early days of experimentation in particle physics, around WWII, saw the direct involvement of the experimenters in the process. Experimental particle physics was a decentralized discipline where experimenters running individual labs had full control over the triggers and analysis.

The experimenters could also control the goals and the design of experiments. Fixed target accelerators, where the beam hits the detector instead of another beam, produced a number of particle interactions that was manageable for such labs. The chance of missing an anomalous event not predicted by the current theory was not a major concern in such an environment.

Yet such labs could process a comparatively small amount of data. This has gradually become an obstacle, with the advent of hadron colliders. They work at ever-higher energies and produce an ever-vaster number of background interactions. That is why the experimental process has become increasingly automated and much more indirect.

Trained technicians, instead of the experimenters themselves, at some point started to scan the recordings. Eventually, these human scanners were replaced by computers, and the full automation of detection in hadron colliders has enabled the processing of vast numbers of interactions.

This was the first significant change in the transition from small individual labs to mega-labs. The second significant change concerned the organization and goals of the labs. The mega-detectors and the amounts of data they produced required exponentially more staff and scientists. This in turn led to even more centralized and hierarchical labs and even longer periods of design and performance of the experiments.

As a result, focusing on confirming existing dominant hypotheses rather than on exploratory particle searches was the least risky way of achieving results that would justify unprecedented investments. Now, an indirect detection process combined with mostly confirmatory goals is conducive to overlooking unexpected interactions.

As such, it may impede potentially crucial theoretical advances stemming from missed interactions. This possibility, which physicists such as Panofsky have acknowledged, is not mere speculation. In fact, the use of semi-automated, rather than fully automated, regimes of detection turned out to be essential for a number of surprising discoveries that led to theoretical breakthroughs. In those experiments, physicists were able to perform exploratory detection and visual analysis of practically individual interactions, due to the low number of background interactions in the linear electron-positron collider.

And they could afford to do this in an energy range that the existing theory did not recognize as significant, which led them to the discovery. None of this could have been done in the fully automated detecting regime of the hadron colliders that are indispensable when dealing with an environment that contains huge numbers of background interactions. And in some cases, such as the Fermilab experiments that aimed to discover weak neutral currents, an automated and confirmatory regime of data analysis contributed to the failure to detect particles that were readily produced in the apparatus.

The complexity of the discovery process in particle physics does not end with concerns about what exact data should be chosen out of the sea of interactions. The so-called look-elsewhere effect results in a tantalizing dilemma at the stage of data analysis. Suppose that our theory tells us that we will find a particle in an energy range.

And suppose we find a significant signal in a section of that very range. Perhaps we should keep looking elsewhere within the range to make sure it is not another particle altogether we have discovered.

It may be a particle that left other undetected traces in the range that our theory does not predict, along with the trace we found. The question is to what extent we should look elsewhere before we reach a satisfying level of certainty that it is the predicted particle we have discovered.

The Higgs boson is the particle responsible for the mass of other particles: its associated field pulls on them, and this pull, which we call mass, is different for different particles.

It is predicted by the Standard Model, whereas alternative models predict somewhat similar Higgs-like particles. A prediction based on the Standard Model tells us with high probability that we will find the Higgs particle in a particular energy range. Yet the simple and inevitable fact of finding it in a particular section of that range may prompt us to doubt whether we have truly found the exact particle our theory predicted.

Our initial excitement may vanish when we realize that we are much more likely to find a particle of any sort—not just the predicted particle—somewhere within the entire range than in a particular section of that range. In fact, the likelihood of finding it in any one particular bin of the range is about a hundred times lower. In other words, the fact that a signal of that size could have turned up in any of the many bins, not just the one in which it was found, decreases the certainty that it was the Higgs we found.
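
The arithmetic behind this "trials factor" can be sketched in a few lines; the bin count and local significance below are illustrative choices, not the numbers used in the actual Higgs analyses.

```python
from math import erf, sqrt

def local_p_value(z_sigma):
    """One-sided tail probability of a z-sigma Gaussian fluctuation in a single bin."""
    return 0.5 * (1.0 - erf(z_sigma / sqrt(2.0)))

n_bins = 80    # number of independent places the peak could have appeared (assumed)
z_local = 3.5  # local significance of the observed excess, in sigma (assumed)

p_local = local_p_value(z_local)
# Probability of at least one such fluctuation somewhere in the whole range.
p_global = 1.0 - (1.0 - p_local) ** n_bins

print(f"local  p-value: {p_local:.2e}")
print(f"global p-value: {p_global:.2e}  (roughly {p_global / p_local:.0f}x larger)")
```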

Given this fact alone, we should keep looking elsewhere for other possible traces in the range once we find a significant signal in a bin. We should not proclaim the discovery of a particle predicted by the Standard Model, or any model for that matter, too soon. But for how long should we keep looking elsewhere? And what level of certainty do we need to achieve before we proclaim discovery?

The answer boils down to the weight one gives the theory and its predictions. The theoreticians were confident that a finding anywhere within the range (in any of its eighty bins) of standard reliability (three or four sigma), coupled with the theoretical expectation that the Higgs would be found, would be sufficient.

In contrast, the experimentalists argued that at no point in the data analysis should the pertinence of the look-elsewhere effect be reduced, and the search proclaimed successful, with the help of theoretical expectations concerning the Higgs. One needs to be as careful in combing the range as one practically can, holding out for a more stringent standard, such as the conventional five-sigma threshold, under which very few findings have turned out to be fluctuations in the past. Dawid argues that the question of the appropriate statistical analysis of the data is at the heart of the dispute.

The reasoning of the experimentalists relied on a frequentist approach, which does not specify the probability of the tested hypothesis; it isolates the statistical analysis of the data from any prior probabilities. The theoreticians, however, relied on Bayesian analysis, which starts with the prior probabilities of the initial assumptions and ends with an assessment of the probability of the tested hypothesis in the light of the collected evidence. The prior expectations that the theoreticians included in their analysis had, after all, already been empirically corroborated by previous experiments.
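
The contrast can be caricatured in a few lines of code; the p-value, Bayes factor, and prior below are invented for illustration and do not reproduce Dawid's reconstruction of the actual analyses.

```python
# The frequentist report is just the p-value of the data under the background-only
# hypothesis; the Bayesian report folds in a prior probability for the signal hypothesis.

p_value = 2.3e-4      # probability of so large a fluctuation if there is no signal (assumed)
bayes_factor = 400.0  # how much more probable the data are under signal than background (assumed)
prior_signal = 0.5    # prior confidence that the predicted particle exists at all (assumed)

posterior_odds = bayes_factor * (prior_signal / (1.0 - prior_signal))
posterior_signal = posterior_odds / (1.0 + posterior_odds)

print(f"frequentist: p-value = {p_value:.1e} (no statement about P(hypothesis))")
print(f"bayesian:    P(signal | data) = {posterior_signal:.3f} given prior {prior_signal}")

# A theorist with strong prior confidence (prior_signal near 1) reaches a high posterior
# sooner; an experimentalist who declines to use the prior keeps reporting only the
# p-value until the global significance passes a fixed discovery threshold.
```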

Experiment can also provide us with evidence for the existence of the entities involved in our theories. For details of this episode see Appendix 7. Experiment can also help to articulate a theory. For details of this episode see Appendix 8.

Rather, she believed they contained important lessons, lessons that perfectly bookend my Ph.D. My time in the lab began with ignorance—not the wide-eyed, first-year graduate student variety, but the rigorous brand that embraces an open question.

Now, consider that every cell in your body contains the exact same complement of DNA. Yet a heart cell looks and acts completely different from a brain cell, which looks and acts completely different from a skin cell. So how did a heart cell, a brain cell, and a skin cell arrive at such different biological fates when given the exact same set of molecular blueprints? The answer lies largely in which parts of that blueprint each cell reads out as RNA. So understanding the dynamics of RNA, smack at the front lines of cellular activity, can help us understand how diversity emerges from the same DNA blueprint.

RNA is written in a four-letter chemical alphabet. That alphabet can be expanded upon with a large library of chemical tweaks that fine-tune RNA function—a small M added to an A or a chemical S to a U. Here, ignorance comes into play. We do, however, have some clues, one of which particularly piqued my interest.

All four methods were based on the same principle, so their results should have overlapped well with one another. But they did not. And here enters failure. I was genuinely surprised by the result.

So I hunkered down and thought through a host of technical and biological caveats that were not detailed in the original publications. But, try as I might, I could not get the method to work. And so, more failure. But while the practice of science is riddled with failures—from the banal failures of day-to-day life at the bench to the heroic, paradigm-shifting failures that populate the book called Failure—many scientists are uncomfortable with the idea.

We publish our innovations, the stories of how our ignorance led to success. So there is little incentive to replicate the work of others or report experimental failure. In fact, there is barely a medium to publish these sorts of efforts, which are relegated to the bottom of the file drawer. But the scientific method hinges on self-correction, which requires transparent reporting of positive or negative data and corroboration or contradiction of previous experiments.

What-if analysis has its limitations:
- It is only useful if you ask the right questions.
- It relies on the intuition of team members.
- It is more subjective than other methods.
- There is greater potential for reviewer bias.
- It is more difficult to translate results into convincing arguments for change.

Sample What-if Questions

Following is a list of sample what-if questions to get your group thinking in the right directions.

Human Factors: human errors occur regardless of training and experience.
- What if the material used is too concentrated or too diluted?

- What if the valve(s) are opened or closed in the wrong sequence?
- What if inert gas is omitted?
- What if unintended materials are mixed together?
- What if readings are missed or ignored?

- What if warnings are missed or ignored?
- What if there are errors in diagnosis?

Utility: the following questions concern utilities, which are key to the support of any experiment or process.
- What if power is lost? Consider: automatic shutoffs and emergency power.
- What if power is restored automatically after loss?

  Consider: manual restarts.
- What if laboratory ventilation is lost? Consider: automatic shutoffs, emergency power, and redundant mechanical exhaust fans.

Experimental or Ancillary Equipment: consideration of failure of materials or components may result in decisions for additional controls or changes to higher-rated or alternative types of materials and components. Consider: pressure relief devices and barriers; personal protective equipment (PPE).
- What if glassware breaks during the reaction? Consider: alarms, automatic shutoffs, and emergency shut-off procedures.

Personal Protection: this should be included since, despite best efforts with hazard reviews and training, incidents will occur.

- What if someone is struck by liquids or solids? Consider: physical barriers.
- What if someone is exposed to vapors or gases? Consider: PPE; ventilation.
- What if someone is exposed to respirable particles?

  Consider: use of wet contamination-control methods, ventilation controls, and respiratory protection.

This collection of methods and tools for assessing hazards in research laboratories is based on the publication Identifying and Evaluating Hazards in Research Laboratories [PDF].
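
One practical way to keep track of the answers is a simple worksheet; the sketch below uses field names chosen for illustration rather than anything prescribed by the ACS publication, and shows the kind of record that makes follow-up on recommended controls easy.

```python
# A minimal, illustrative record format for what-if review findings so they can
# be tracked to closure. Field names are hypothetical, not from the publication.

from dataclasses import dataclass

@dataclass
class WhatIfEntry:
    question: str           # the what-if question posed by the team
    consequence: str        # what could plausibly happen
    existing_controls: str  # safeguards already in place
    recommendation: str     # additional control measure, if any

worksheet = [
    WhatIfEntry(
        question="What if laboratory ventilation is lost?",
        consequence="Vapors accumulate in the room",
        existing_controls="Low-flow alarm on the fume hood",
        recommendation="Automatic shutoff of the experiment; emergency power for exhaust",
    ),
    WhatIfEntry(
        question="What if the valve(s) are opened in the wrong sequence?",
        consequence="Unintended materials are mixed together",
        existing_controls="Written procedure",
        recommendation="Label the valves and add an interlock",
    ),
]

for entry in worksheet:
    print(f"- {entry.question} -> recommend: {entry.recommendation}")
```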


