Models for Experiments

The current problems in high energy physics are summarized perfectly in the paper An empirical study of knowledge production at the LHC by Arianna Borrelli. Essential results from this paper:
  • “Following the most popular models in philosophy of science, working hypothesis at the start of the investigation was that HE-physicists, especially theorists, would be divided in many larger or smaller communities supporting a specific BSM-model (or model class) and presenting arguments in its favour and against other models. However, the analysis of research preprints soon showed that almost no disputes of this kind took place and that the same arguments could be used in support of different models (naturalness, dark matter candidate, unification…). ”
  • “The two runs of the online-survey confirmed how little commitment HE-physicists have in the BSM-models at present on the marked.”
  • “Moreover, most model-builders were seen to work on different kinds of models at the same time, without necessarily committing to one of them. ”
  • “The results of our investigation suggest that traditional philosophical models in which scientists choose among rival theories on the grounds of specific criteria do not fit the situation in HEP, where a large number of models remain on the market despite lack of empirical evidence in their favour, while only a minority of researchers appear to have a strong committment for or against one or the other of them. ”

Falsifiability alone doesn’t make a theory scientific.

If the only argument that speaks for your idea is that it’s compatible with present data and makes a testable prediction, that’s not enough. […] Because you can effortlessly produce some million similar prophecies.

[…]

Granted, it will take you a decade. But after this you know all the contemporary techniques to mass-produce “theories” that are compatible with the established theories and make eternally amendable predictions for future experiments.

I refer to these techniques as “the hidden rules of physics.”

These hidden rules tell you how to add particles to the standard model and then make it difficult to measure them, or add fields to general relativity and then explain why we can’t see them, and so on. Once you know how to do that, you’ll jump the bar every time. All you have to do then is twiddle the details so that your predictions are just about to become measurable in the next, say, 5 years. And if the predictions don’t work out, you’ll fiddle again.

And that’s what most theorists and phenomenologists in high energy physics live from today.

There are so many of these made-up theories now that the chances any one of them is correct are basically zero. There are infinitely many “hidden sectors” of particles and fields that you can invent and then couple so lightly that you can’t measure them or make them so heavy that you need a larger collider to produce them. The quality criteria are incredibly low, getting lower by the day. It’s a race to the bottom. And the bottom might be at asymptotically minus infinity.

This overproduction of worthless predictions is the theoreticians’ version of p-value hacking. To get away with it, you just never tell anyone how many models you tried that didn’t work as desired. You fumble things together until everything looks nice and then the community will approve. It’ll get published. You can give talks about it. That’s because you have met the current quality standard. You see this happen both in particle physics and in cosmology and, more recently, also in quantum gravity.

This nonsense has been going on for so long, no one sees anything wrong with it.

http://backreaction.blogspot.de/2017/11/how-popper-killed-particle-physics.html
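To make the p-value-hacking analogy above concrete, here is a deliberately crude toy simulation. All masses, couplings and “exclusion limits” in it are invented for illustration and do not correspond to any real experiment; the only point is the selection effect: invent many hidden-sector models, silently discard everything that is already excluded or too weakly coupled to ever be seen, and keep only those that happen to sit just below the current bound, i.e. are “testable in the next run”.

  # Toy illustration (not a real physics calculation) of the selection effect
  # described above. All numbers are made up.
  import random

  random.seed(1)

  def current_limit(mass_gev):
      """Pretend exclusion limit on the coupling: heavier particles are less constrained."""
      return 1e-3 * (mass_gev / 100.0)

  def next_run_reach(mass_gev):
      """Pretend sensitivity of the next run: a factor of a few below the current limit."""
      return current_limit(mass_gev) / 3.0

  n_tried, published = 0, []
  for _ in range(100000):
      n_tried += 1
      mass = random.uniform(10.0, 2000.0)          # hypothetical particle mass in GeV
      coupling = 10 ** random.uniform(-6.0, -1.0)  # log-uniform "model choice"

      if coupling >= current_limit(mass):
          continue  # already excluded -> quietly dropped, never reported
      if coupling < next_run_reach(mass):
          continue  # unobservably weak -> not "testable soon", also dropped

      published.append((mass, coupling))           # survives -> becomes a paper

  print("models tried:", n_tried, "| 'publishable' models:", len(published))
  # The discarded majority never appears anywhere -- the analogue of not
  # reporting how many hypotheses you tested before one gave p < 0.05.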

Most modern “model building” resembles the drunk person searching for his keys under a street lamp even though he lost them somewhere else in the dark. Everyone searches where the light currently is, although the correct theories probably lie in the dark. That does not mean it will stay dark there forever: a new street lamp may be built in the far future, a friend with a torch may come along, or, at some point, the night is simply over.

In an ideal world:

  1. A theoretical physicist comes up with a theory that addresses an open question.
  2. A phenomenologist works out how this theory can be tested in an experiment.
  3. An experimentalist builds the proposed experiment and checks if the theory is correct.

Unfortunately, this is not how physics works nowadays. Instead, everyone tries to come up with “models” that can be tested at experiments that are currently running, like the LHC, or that are currently being built.

No one expects that any of these models is correct. However, this is still common practice, because these models “can be tested now or in the near future”.

Examples

Randall who spoke about how dark matter killed the dinosaurs. Srsly. […]

Randall, you see, has a theory for particle dark matter with some interaction that allows the dark matter to clump within galaxies and form disks similar to normal matter. Our solar system, so the idea, periodically passes through the dark matter disk, which then causes extinction events. Or something like that.

Frankly I can’t recall the details, but they’re not so relevant. I’m just telling you this because someone asked “Why these dark matter particles? Why this interaction?” To which Randall’s answer was (I paraphrase) I don’t know but you can test it.

I don’t mean to pick on her specifically, it just so happens that this talk was the moment I understood what’s wrong with the argument. Falsifiability alone doesn’t make a theory scientific.

[…]

This nonsense is caused by pressure to publish and everyone knows it.

http://backreaction.blogspot.de/2017/11/how-popper-killed-particle-physics.html

An extreme example is the class of “xenophobic” dark matter models. We can't really expect the LHC to find dark matter, because the constraints from direct detection experiments are already far too strong. However, people still propose dozens of dark matter models that can be tested at the LHC. The “trick” they employ to make this possible is that their dark matter candidates do not couple to the xenon atoms used by almost all direct detection experiments. Thus, the direct detection constraints do not apply to such xenophobic dark matter candidates. Of course, none of these xenophobic dark matter candidates will be found, because there is absolutely no motivation for such particles besides the fact that they aren't ruled out yet. Nevertheless, proposing models with xenophobic dark matter is completely accepted in the physics community.
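One common way this “trick” is realised in the literature is to tune the ratio of the dark matter couplings to neutrons and protons so that the coherent coupling to xenon nearly cancels. The toy calculation below only illustrates this scaling; the isotope choices and the coupling ratio are rough, illustrative numbers, not a reproduction of any published analysis.

  # Toy sketch of the "xenophobic" tuning: for spin-independent scattering the
  # coherent coupling to a nucleus scales like Z*f_p + (A-Z)*f_n. Choosing
  # f_n/f_p close to -Z/(A-Z) for xenon drives the xenon coupling towards zero
  # while other target nuclei keep a sizeable coupling.

  targets = {            # (Z, A) of a representative isotope per target material
      "xenon":     (54, 131),
      "germanium": (32, 73),
      "argon":     (18, 40),
      "silicon":   (14, 28),
  }

  def relative_cross_section(Z, A, fn_over_fp):
      """Coherent amplitude squared per nucleon^2, normalised to f_p = 1."""
      return (Z + (A - Z) * fn_over_fp) ** 2 / A ** 2

  for fn_over_fp in (1.0, -0.70):   # 1.0 = isospin-conserving, -0.70 ~ "xenophobic"
      print(f"f_n/f_p = {fn_over_fp:+.2f}")
      for name, (Z, A) in targets.items():
          print(f"  {name:10s} relative SI cross section: "
                f"{relative_cross_section(Z, A, fn_over_fp):.3e}")
  # For f_n/f_p near -0.7 the xenon rate is suppressed by many orders of
  # magnitude, while other targets are suppressed far less.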

Exactly the same comments apply to “minimal flavour violating” models. One could not really expect anything beyond the Standard Model to be discovered at the LHC, because the flavour bounds were already too strong long before the LHC started. Thus everyone concentrated on models that do not contribute to the strongly constrained flavour observables. Such models rarely arise in a natural way, but, well, everything else within reach of the LHC is already ruled out. Again, no one expects that any such model is actually correct. However, by giving such an idea a “cool” name like “minimal flavour violation” or “xenophobic dark matter”, these ridiculous ideas somehow become acceptable.
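For a rough sense of why the minimal-flavour-violation structure evades the flavour bounds: if the Standard Model Yukawa couplings are the only source of flavour violation, any new flavour-changing coupling inherits the same CKM suppression as the Standard Model itself. The sketch below just multiplies out approximate CKM magnitudes to show the resulting orders of magnitude; the numbers are illustrative, not a fit.

  # Rough sketch: in minimal flavour violation the flavour-changing spurion in
  # the down-quark basis is (Y_u Y_u^dag)_{ij} ~ y_t^2 |V_ti| |V_tj| for i != j,
  # so new flavour-changing couplings are CKM-suppressed just like in the SM.
  # CKM magnitudes below are approximate, used only for orders of magnitude.

  y_t = 1.0                                      # top Yukawa coupling, roughly 1
  V = {"td": 0.0087, "ts": 0.041, "tb": 0.999}   # approximate |V_ti|

  spurion = {
      ("d", "s"): y_t**2 * V["td"] * V["ts"],    # drives K mixing / kaon decays
      ("d", "b"): y_t**2 * V["td"] * V["tb"],    # drives B_d mixing
      ("s", "b"): y_t**2 * V["ts"] * V["tb"],    # drives B_s mixing, b -> s transitions
  }

  for (i, j), value in spurion.items():
      print(f"flavour-changing coupling {i}{j}: ~{value:.1e}")
  # A generic ("anarchic") model would have O(1) couplings here instead, which
  # is why its new particles are pushed far beyond the LHC's reach by flavour data.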

Further Reading
