Naturalness and JFK conspiracy theories

Posted by Unknown, Sunday, November 17, 2013
Among the 89 episodes of the classic show Penn & Teller: Bullshit!, the 29th was dedicated to conspiracy theories, namely 9/11 trutherism, moon-landing denialism, and JFK conspiracy theories.

I recommend that you find all the episodes and watch them – it will be 45 hours of intelligent fun!

Just to be sure, JFK was assassinated in Dallas on November 22nd, 1963; it will have been 50 years next Friday. The apparent sniper was Lee Harvey Oswald, an American commie (believed to be a "lone gunman") who loved Cuba and who had emigrated to the Soviet Union. Yesterday, CNN listed a dozen conspiracy theories about the assassination and suggested that one of them could be right, although I didn't quite understand which scenario they endorsed.

In their show, Penn and Teller have primarily been making fun of many kinds of nutcases. And as the number of episodes, 89, suggests, even the number of basic types of nuts is really, really large, and all of them have many subtypes as well as several billion human examples.

Equally importantly, they present the actual evidence that the conspiracy theories (and the other crazy beliefs discussed in other episodes) are wrong – mundane, plausible, or outright demonstrated explanations that easily defeat the contrived interpretations of the evidence used by the conspiracy theorists.

The show is insightful and entertaining, but sometimes it discusses deeper points. Why do some people – in some cases people who are intelligent according to other benchmarks – love to believe such stuff?

A lady (12:12) proposes an explanation (see also a man at 23:10). People want to see "a big overriding story", a story with sufficiently far-reaching philosophical or moral implications, as the explanation of every big enough event. (It's possible that I am improving her quote a little bit, but I won't claim the whole credit.) People want the explanations and the events that they explain to be commensurable, comparable in magnitude.

They just don't want to believe that something as grand as JFK, the most powerful man on the planet, or the World Trade Center could be terminated by something or someone as tiny, stinky, generic, and irrelevant as an angry Arab man or a mediocre American communist who preferred paperback trash over Marx's tirades.

(Even if some other commies had been helping Oswald, e.g. some folks in the USSR, I wouldn't be stunned. I don't really care how many commies participated in a crime, and I don't think that the Soviet commies were "qualitatively different" from some of their Western counterparts. If the USSR had participated, it would still have limited consequences for relations with today's Russia, which isn't responsible for everything that was ever done by a Russian national.)

But that's how the world often works. Many great people died of some infection, i.e. of petty, stupid microorganisms that were much less sophisticated than the humans they killed. And many other events or phenomena in Nature have seemingly mundane, low-key, disappointing (for a conspiracy theorist expecting a great story) explanations. The commensurability of the demolished buildings or terminated human lives with those of the killers isn't something implied by actual logic or by the actual laws of physics and society. But some people incorrectly believe that this commensurability is a part of rational reasoning.

Because of our Friday and Saturday discussions on naturalness, especially with Giotis, I couldn't overlook the apparent similarity between the sentiment of the conspiracy theorists and that of the people who take the naturalness arguments too seriously or too strictly. Why are those attitudes similar?

Well, because the strict naturalness fans identify a pattern in Nature – the lightness of the Higgs boson is the most important example – and they expect or demand some far-reaching, paradigm-shifting, philosophically deep explanation, perhaps one with huge moral consequences, or at least consequences for the character of future research. (I generally agree with almost everything that Nima Arkani-Hamed says about physics, but yes, I am talking about him a little bit in this case, too, and at the very least our "accents" were very different when we debated these issues.)

But let me tell you something. Just like in the case of JFK, seemingly "clear patterns" may have convoluted or uninteresting explanations. I believe there's really no solid evidence that the explanation of why the Higgs mass is so much smaller than the GUT scale has to be a "grand idea". More precisely, the explanation of this hierarchy probably is a grand idea, namely supersymmetry, but what I wanted to say is that the explanation of why the superpartners are 10 times heavier than the Higgs boson doesn't have to be yet another "grand idea".

Don't get me wrong. I do use reasoning based on naturalness. After all, all reasoning in science is ultimately probabilistic. See e.g. Why naturalness should be expected for the most pro-naturalness perspective from your humble correspondent. However, what I do not believe is the idea that probability distributions on the spaces of parameters are the most important or most rock-solid considerations we have in science. I do not believe that such references to naturalness have dictated, or will determine, most of the insights of science. I don't believe such considerations have, or should have, the last word, either. There are much "harder", more reliable theoretical arguments, and I think that the experimental evidence (if checked not to be flawed) always beats philosophical arguments such as those based on naturalness.

I am somewhat open-minded about whether the "existence of life" (or something like that) could be used as a "part of the explanation" of why the Higgs boson is so light – and why other features of the vacuum surrounding us have the qualitative properties we know, properties that seem necessary for life of our type. And this open-mindedness – again, I prefer explanations that are non-anthropic, but I am not 100% certain that those will be found for every question – isn't really changing qualitatively once the lower bound on the scale of new physics gets doubled, for example.

Supersymmetry seems to be the only major physics paradigm we know that is capable of explaining the apparently weakly self-interacting, moderately light Higgs boson. The cancellations resulting from SUSY guarantee that the expected residual Higgs boson mass is comparable to the mass of the top squark, the higgsinos, and perhaps the gauginos. Those may be below a \(\TeV\) or at several \(\TeV\) etc., so the degree of fine-tuning of \(m_h^2\) (it's the squared mass that appears in the Lagrangian and that naturally receives "almost additive contributions") gets improved from \(1\) in \(10^{30}\) to \(1\) in \(100\) or \(1,000\) or so in the SUSY models that remain viable.
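
To make that arithmetic explicit, here is a back-of-the-envelope sketch in Python. It is a crude illustration only, not a rigorous fine-tuning measure (such as Barbieri–Giudice): the squared Higgs mass receives additive corrections of order \(M^2\) from states of mass \(M\), so the naive tuning scales like \(m_h^2/M^2\).

```python
# Naive fine-tuning of the squared Higgs mass: the corrections are additive
# and of order M^2, so the tuning is roughly m_h^2 / M^2.  This is a crude
# back-of-the-envelope illustration, not a rigorous measure.
m_h = 125.0  # Higgs boson mass in GeV

for M, label in [(1e16, "GUT scale, no cancellations"),
                 (1.25e3, "superpartners near 1 TeV"),
                 (4e3, "superpartners near 4 TeV")]:
    tuning = (M / m_h) ** 2
    print(f"{label:>28}: fine-tuned to ~ 1 part in {tuning:.0e}")
```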

But what does it "exactly" mean that the Higgs mass is predicted "not to be much smaller"? How much smaller may it be? Well, there is clearly no "exact" answer. It depends on how much tuning or fine-tuning you're ready to tolerate – effectively, how unlikely an event or selection you're ready to allow in the foundations of physics. I am perfectly OK with \(1\) in \(100\) and even \(1\) in \(1,000\). I believe that the number of questions in physics comparably important to the Higgs boson's lightness is of order 100, so it is totally normal to expect that roughly one of these questions, apparently, has an answer that is 1-in-100 fine-tuned. But such questions may exist even if the chances are a bit lower.

It's important to notice that the degree of fine-tuning isn't necessarily a simple function of the mass ratios. Some models with new fields and interactions may reduce the amount of fine-tuning even if the mass ratios are much larger. For example, models with \(5\TeV\) Dirac gluinos may actually be highly natural. Because we don't know the field content and the list of interaction terms, we can't "calculate" the degree of fine-tuning with any precision.

But even if we could, the absence of new physics at the LHC (even in the \(13-14\TeV\) run) would still be a weak argument against naturalness. It wouldn't settle the question one way or the other. Why?

Imagine that the LHC establishes, sometime in the foreseeable future, that there is no gluino etc. up to \(5\TeV\). Imagine that this means that \(m_h^2\) is fine-tuned to \(1\) part in \(1,000\). So the existence of the world as we know it, with the parameters we have measured, has depended on a "good luck" that only had a probability of \(1/1,000\) to proceed in the right way. Is that unacceptable?

I don't think so. Well, I would kindly argue that because of the results that keep on agreeing with the Standard Model, the LHC has already excluded the idea that a \(1\) in \(10\), and perhaps a \(1\) in \(100\), fine-tuning is "unacceptable". Even if you view this \(1/1,000\) fine-tuning of the squared mass as a probability, as a \(p\)-value, its magnitude is still \(1/1,000\). That's not extremely tiny. In fact, using the maths of the normal distribution, we commonly translate this \(p\)-value to something slightly more than 3 standard deviations.

Even if you view this absence of new particles near the Higgs mass scale as evidence falsifying the "null hypothesis which is naturalness", and even if you ignore the aforementioned disclaimers that a modified particle content may render much heavier superpartners natural, the null hypothesis has only been contradicted by a 3-sigma bump or so! In the case of other 3-sigma bumps, we would say that it fails to reach the usual standard of particle physics for a discovery. We know why we use these standards: 3-sigma bumps may be, and often are, due to chance. They often go away.

For a normal proper discovery, particle physicists demand 5 sigma, which is equivalent to a \(p\)-value comparable to \(1\) part in \(1,000,000\). In the counting (or analogy) above, this would occur if the new particles (stop, higgsino etc.) responsible for the Higgs boson's lightness were roughly \(1,000\) times heavier than the Higgs boson, i.e. around \(100\TeV\). Only if you excluded superpartners up to \(100\TeV\) or so, something that even the SSC would have been incapable of achieving, could you claim to have the equivalent of 5-sigma evidence against the null hypothesis (naturalness).
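
For concreteness, here is a small Python sketch of the \(p\)-value-to-sigma dictionary used in the past few paragraphs, assuming the one-sided convention that is customary for particle-physics bumps (the exact outputs, 3.09 and 4.75 sigma, are close to but not exactly the round values quoted in the text, which is fine at this level of rigor):

```python
from scipy.stats import norm

# Translate p-values into equivalent numbers of standard deviations
# (one-sided convention, as is customary for particle-physics "bumps").
for p in (1e-3, 1e-6):
    print(f"p = {p:g}  ->  {norm.isf(p):.2f} sigma")
# p = 0.001 -> 3.09 sigma ("slightly more than 3 standard deviations")
# p = 1e-06 -> 4.75 sigma (roughly the 5-sigma discovery threshold)

# The reverse direction: the p-value of an exact 5-sigma fluctuation.
print(f"5 sigma  ->  p = {norm.sf(5.0):.1e}")  # ~ 2.9e-07, i.e. ~ 1 in 3.5 million
```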

Because naturalness is such a natural thing to believe, at least to a certain extent, I would argue that the claim that it is completely wrong is so extraordinary that we should demand extraordinary evidence i.e. an even higher confidence level than 5 standard deviations. And again, let me repeat that because some non-minimal adjustments to the physics may tolerate even larger gaps and keep them natural, the tolerable gap increases further.

If you summarize the arguments and views outlined above, it's very clear that I won't qualitatively change my mind about the "big questions" such as the "relevance of the counting of intelligent observers" even after the \(13-14\TeV\) LHC run, regardless of its results. The LHC may be expensive, but from the viewpoint of "all of physics", it's just another minor step, an improvement of the energy scale by an order of magnitude. There are still approximately 15 orders of magnitude that separate us from the GUT or Planck scale.

So the reasons why superpartners are 10 times, and perhaps 100 or 1,000 times, heavier than the Higgs boson may be "a bit convoluted". The collection of reasons may be composed of some issues that are studied in some unknown papers today – or that are being completely overlooked. The neutron's lifetime (10 minutes) is vastly longer than the lifetime you would naively expect – the nuclear time scale of around \(10^{-22}\,{\rm seconds}\). We sort of understand why today. But we couldn't have understood those things before the neutron's interior was sufficiently understood. Our order-of-magnitude estimate of the neutron's lifetime could have been wrong by 25 orders of magnitude if we were sufficiently naive.
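
Those 25 orders of magnitude are just the following trivial arithmetic (with the free neutron's lifetime rounded to the 10 minutes quoted above):

```python
import math

# Gap between the naive strong-interaction time scale and the measured
# free-neutron lifetime (rounded to the "10 minutes" quoted above).
nuclear_time = 1e-22      # seconds, a typical nuclear time scale
neutron_lifetime = 600.0  # seconds, roughly 10 minutes

gap = math.log10(neutron_lifetime / nuclear_time)
print(f"gap: about {gap:.0f} orders of magnitude")  # -> about 25
```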

(Incidentally, would you say, with the hindsight we have today, that the failure of dimensional analysis to estimate the neutron's lifetime – or, more physically, the unexpected length of the neutron's lifetime – was due to anthropic considerations? Is a long-lived neutron really needed for life etc.? I don't think we organize our explanations of the neutron's longevity in this way. In the same way, I don't think it's guaranteed that the explanation of the lightness of the Higgs believed in 2100 AD will employ some anthropic ideas. It's just not necessary, even if the ideas about naturalness from a particular era are shown to be wrong.)

If someone has a particular idea of how (and how strictly) naturalness should work and this idea has just been falsified by the experiment, he shouldn't claim that he has everything he needs to say all the right things about naturalness in Nature. Instead, he should be more humble because he has just lost a battle with the experiments. You don't want to believe such a person if he tells you that he knows what the "only other alternative" must be. There are lots of possible alternatives. Only when the more complete theory is understood more fully will we understand why the superpartners (or whatever new particles exist) are \(X\) times heavier than the Higgs boson – much like we need some precision knowledge and arguments to understand why the neutron's decay rate is 25 orders of magnitude smaller than the most naive nuclear-physics estimates.

In the text above, I discussed the conspiracy theorists' belief in the "commensurability" of the big events and patterns on one side and the big stories or far-reaching theories that explain them on the other side. Proper, hard-scientific reasoning just doesn't imply that this commensurability is a general law. The belief in commensurability is clearly not justifiable by solid mathematical or scientific evidence; it is partly ideological in character. I believe that this commensurability is intrinsically a left-wing belief, a form of ideological egalitarianism.

But there's one more aspect or interpretation of the egalitarian ideology that leads some people (and I really mean Nima in this case) to say that null results from the LHC high-energy run would be a great discovery (because they would falsify naturalness as a general tool – and would even perhaps prove the anthropic bullshitting). What is it? It's the implicit assumption that an experiment adds the same amount of information per unit time regardless of the results. I don't claim that this is really the reason why Nima says the things about the "two roads" that he does, but I do think that many other physicists implicitly want to impose this "quota".

But this "equivalence" is completely wrong. Of course that the importance of an experiment does depend on what it actually discovered – the importance of an experiment always partially depends on luck. If an experiment finds "nothing new" and only improves some lower bounds on masses or upper bounds on probabilities or interaction constants, it's naturally disappointing for the experimenters (and others).

That doesn't mean we're learning nothing from an experiment that continues to produce null results. We're learning something. Every time the experimental bounds are improved, and even when some previous bounds are confirmed by a somewhat independent method, we're learning something, or at least getting more confident about something. We may exclude some models, and parts of the parameter spaces of other models, too. But the information we're gaining is far less groundbreaking than a positive discovery! That's just how it works. It is silly to deny it.

We don't know what the LHC will see in the \(13-14\TeV\) run. I still tend to bet that the likelihood that new physics will be discovered is comparable to 50% (it doesn't make sense to try to quantify such subjective probabilities more accurately than that because there's nothing objective or high-precision about Bayesian probabilities). But of course I find it conceivable that no new physics will be found, too. It wasn't found in 2012, either (unless some not-yet-released paper stuns us).

It's my feeling that some people try to get a "verbal insurance" that would guarantee that regardless of what the LHC finds, it will be viewed as an important experiment. An equally important experiment. They want some ultimate hedge. But nothing like that exists because the importance of the LHC will clearly be greater if some new physics (aside from the Higgs boson that was already found) is discovered. It makes no sense to question this correlation between the importance and the positive discoveries.

Of course the discovery of some new physics would open a completely new chapter in physics. It would be exciting. The continuation of the null results would move physics in the "opposite direction", so to say, but this shift would be much smaller, anyway. The continuation of negative results would really change nothing about the qualitative framework of physics. You may invent New Year's resolutions for yourself – that if nothing new is found before some artificial deadline, you will stop doing A and spend more time with B. But the fact that people may invent New Year's resolutions doesn't imply that the resolutions are good science, not even if the people are employed as scientists, not even if they're top scientists.

Even in the "most pro-naturalness" counting above, the one in which I ignored the dependence of the "degree of fine-tuning" on the (unknown) BSM particle spectrum, it was argued that the absence of any new particles up to \(5\TeV\) would only be equivalent to a single "3-sigma bump" mildly contradicting naturalness. That's too little. If the LHC discovers new particles, on the other hand, it will rather quickly be able to pump those 5-sigma "positive bumps" up to 10 sigma, discover new, equally strong signals in other channels, and so on.

Positive discoveries at the LHC would bring us far more information and would be far more groundbreaking than the continuation of the null results. It's just wrong to invent ideologies and hype that would attempt to contradict these self-evident facts.

And that's the memo.

Bonus: naturalness vs renormalizability

A comment about the cutoffs by Giotis unmasked something in the "strict naturalness beliefs" that I consider not just "not sharply right" but, in fact, more wrong than right. The proponents want to say that one should expect the cutoff scale to be "naturally" of the same order as the characteristic scale of the phenomena in your effective theory.

I would say that this question cannot have a universally valid answer, but if I had to pick an answer, I would surely pick exactly the opposite one! On the contrary, it's natural to consider or demand theories that allow a vastly greater cutoff scale than the scales of their characteristic phenomena (e.g. the masses of the particles they predict). These theories are nothing other than the renormalizable theories! Renormalizable theories are exactly those that allow us to set the cutoff scale vastly above the characteristic energy scale.

In my opinion, there is formidable evidence, both of the "aesthetic" and the empirical kind, in favor of the dominance of renormalizable theories. Whenever we were living in a jungle of chaotic, seemingly strongly coupled phenomena – e.g. the chaotic zoo of hadrons in the 1960s – it was just a temporary situation that would soon be replaced by a renormalizable theory – QCD with quarks, or a weakly coupled elementary Higgs scalar field. And renormalizable theories may be extrapolated to much higher cutoffs. (If they're just perturbatively renormalizable, like the electroweak theory, they may be extended up to an exponentially high cutoff scale near the Landau pole.)
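
To illustrate how "exponentially high" that cutoff is, here is a minimal estimate of the Landau pole from the textbook one-loop running of QED with a single Dirac fermion of unit charge – an order-of-magnitude sketch, nothing more:

```python
import math

# One-loop QED running with a single Dirac fermion of unit charge:
#   alpha(Q) = alpha(mu) / (1 - (2*alpha(mu)/(3*pi)) * ln(Q/mu)),
# which blows up (the Landau pole) at ln(Lambda/mu) = 3*pi / (2*alpha(mu)).
alpha = 1 / 137.036  # fine-structure constant near the electron mass scale
m_e = 0.511e-3       # electron mass in GeV

log_gap = 3 * math.pi / (2 * alpha)                # ~ 646 e-folds above m_e
exponent = math.log10(m_e) + log_gap / math.log(10)
print(f"QED Landau pole ~ 10^{exponent:.0f} GeV")  # ~ 10^277 GeV
```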

The accumulated empirical evidence in favor of the proclamation "renormalizable theories (= theories that allow the extrapolation to vastly higher energies) are more natural to expect than the non-renormalizable ones" is much stronger than the evidence for naturalness in the sense of "everything is of the same order", I believe! Hadrons and the electroweak symmetry breaking didn't have to admit renormalizable descriptions, and many people actually expected the right explanation to be some strongly coupled mess. But the right explanation was renormalizable in the end, it seems. For many questions, these two beliefs (naturalness vs renormalizability) almost directly contradict one another.

Of course, we may get to another scale of new physics that will look like a "strongly coupled chaotic zoo" to us for a while. (The string scale or the Planck scale makes such an impression inevitable.) But once the dust settles, the resulting winning theory will be able to make big leaps to higher energies again. In the case of perturbative string theory, once we get past the initial floors of the Hagedorn tower and their inner organization, we will be able to extrapolate the theory to "all energies comparable to the string scale", which may mean up to the Planck scale – another multiplicative gap of order \(1/g_s\) or \(1/g_s^2\) or another power.

There's no reason to expect "lots of physics at every scale". That would be a sort of fine-tuning, too. Gaps are bound to occur, and if we look at the energy scales involved in the Standard Model (and its effective theories at even lower energies), we know that they do occur. We empirically know that they exist. So at most, I would be ready to adopt a more balanced yin-and-yang philosophy: the everything-at-the-same-scale mushy reasoning linked to dogmatic naturalness has to co-exist with the boldly-extrapolate-your-theories-as-far-as-you-can paradigm favoring renormalizable field theories and favoring the values of parameters that actually do create such deserts.

The final theory surely must allow the existence of gaps and dimensionless numbers that are "substantially" different from one, because we know with certainty that those occur in Nature. So I would surely say that those who decide to believe that "everything must be of the same order" are making an empirically indefensible assumption about Nature. And if they "derive" this philosophy from the effective field theory framework, they're using the framework beyond its domain of validity to derive a skewed assumption that the full theory simply cannot back up. Only the full theory (and I don't have to provoke anyone with the phrase "string theory", even though I believe it's the same thing, because none of these claims of mine depends on its "stringiness" in any technical way) may decide where the whole framework of "effective field theory" breaks down – and be sure that it does break down somewhere.

Any particular effective field theory is fine for studying the "effective phenomena", and it knows about the limits where this particular effective field theory ceases to hold. But it doesn't know about the place where all effective field theories cease to hold!

