Falsifiability alone doesn’t make a theory scientific.
If the only argument that speaks for your idea is that it’s compatible with present data and makes a testable prediction, that’s not enough. […] Because you can effortlessly produce some million similar prophecies.
Granted, it will take you a decade. But after this you know all the contemporary techniques to mass-produce “theories” that are compatible with the established theories and make eternally amendable predictions for future experiments.
I refer to these techniques as “the hidden rules of physics.”
These hidden rules tell you how to add particles to the standard model and then make it difficult to measure them, or add fields to general relativity and then explain why we can’t see them, and so on. Once you know how to do that, you’ll jump the bar every time. All you have to do then is twiddle the details so that your predictions are just about to become measurable in the next, say, 5 years. And if the predictions don’t work out, you’ll fiddle again.
And that’s what most theorists and phenomenologists in high energy physics make a living from today.
There are so many of these made-up theories now that the chances any one of them is correct are basically zero. There are infinitely many “hidden sectors” of particles and fields that you can invent and then couple so weakly that you can’t measure them, or make so heavy that you need a larger collider to produce them. The quality criteria are incredibly low, getting lower by the day. It’s a race to the bottom. And the bottom might be at asymptotically minus infinity.
This overproduction of worthless predictions is the theoreticians’ version of p-value hacking. To get away with it, you just never tell anyone how many models you tried that didn’t work as desired. You fumble things together until everything looks nice and then the community will approve. It’ll get published. You can give talks about it. That’s because you have met the current quality standard. You see this happen both in particle physics and in cosmology and, more recently, also in quantum gravity.
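The p-value-hacking analogy can be made concrete with a toy Monte Carlo (my own illustrative sketch, not from the post; all names and numbers are made up): simulate many “models” whose predictions are tested against pure noise, and count how many look “significant” purely by chance. Report only those, stay silent about the rest, and the literature fills up with spurious testable predictions.

```python
import random

random.seed(42)

N_MODELS = 1000   # hypothetical "models" tried behind the scenes
N_OBS = 30        # measurements per test
SIGMA_CUT = 2.0   # two-sided 2-sigma cut, ~5% false-positive rate per test

def looks_significant():
    """One 'test': average N_OBS pure-noise measurements and check
    whether the mean deviates from zero by more than SIGMA_CUT
    standard errors. There is no real signal anywhere, so every
    'detection' is a statistical fluke."""
    data = [random.gauss(0.0, 1.0) for _ in range(N_OBS)]
    mean = sum(data) / N_OBS
    stderr = 1.0 / N_OBS ** 0.5
    return abs(mean) > SIGMA_CUT * stderr

hits = sum(looks_significant() for _ in range(N_MODELS))
print(f"{hits} of {N_MODELS} pure-noise models pass the 2-sigma cut")
```

Roughly 5% of the null models survive the cut. If you only ever publish the survivors and never mention the other ~950 attempts, the selection effect is invisible to the reader, which is exactly the mechanism of p-hacking.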
This nonsense has been going on for so long, no one sees anything wrong with it. http://backreaction.blogspot.de/2017/11/how-popper-killed-particle-physics.html
In an ideal world:
Unfortunately, this is not how physics works nowadays. Instead, everyone tries to come up with “models” that can be tested at the experiments that currently run, like the LHC, or that are currently being built.
No one expects any of these models to be correct. Nevertheless, this remains common practice, because these models “can be tested now or in the near future”.
Randall spoke about how dark matter killed the dinosaurs. Srsly. […]
Randall, you see, has a theory for particle dark matter with some interaction that allows the dark matter to clump within galaxies and form disks similar to normal matter. Our solar system, so the idea goes, periodically passes through the dark matter disk, which then causes extinction events. Or something like that.
Frankly I can’t recall the details, but they’re not so relevant. I’m just telling you this because someone asked “Why these dark matter particles? Why this interaction?” To which Randall’s answer was (I paraphrase): I don’t know, but you can test it.
I don’t mean to pick on her specifically, it just so happens that this talk was the moment I understood what’s wrong with the argument. Falsifiability alone doesn’t make a theory scientific.
This nonsense is caused by pressure to publish and everyone knows it.
An extreme example is “Xenophobic” dark matter models. We can’t really expect the LHC to find dark matter, because the constraints from direct detection experiments are already far too strong. Nevertheless, people still propose dozens of dark matter models that can be tested at the LHC. The “trick” they employ to make this possible is that their dark matter candidates do not couple to xenon atoms, which are used in almost all direct detection experiments. Thus, the direct detection constraints do not apply to such Xenophobic dark matter candidates. Of course, none of these Xenophobic dark matter candidates will be found, because there is absolutely no motivation for such particles besides the fact that they aren’t ruled out yet. Nevertheless, proposing models with Xenophobic dark matter is perfectly acceptable in the physics community.
Exactly the same comments apply to “minimal flavor violating” models. We can’t really expect anything beyond the standard model to be discovered at the LHC, because flavor bounds were already too strong long before the LHC started. Thus everyone concentrated on models that do not contribute to the strongly constrained flavor observables. These models are never particularly natural, but, well, everything else within reach of the LHC is already ruled out. Again, no one expects that any such model is actually correct. However, by giving such an idea a “cool” name like “minimal flavor violation” or “Xenophobic dark matter”, somehow these ridiculous ideas become acceptable.