
Pure Derivation of the Exact Fine-Structure Constant, and As a Ratio of Two Inexact Metric Constants

Theorists at the July 2000 String Conference were asked what mysteries remain to be revealed in the 21st century. Participants were invited to help formulate the ten most important unsolved problems in fundamental physics, which were ultimately selected and ranked by a distinguished panel of David Gross, Edward Witten, and Michael Duff. No questions were deemed more important than the first two, posed respectively by Gross and Witten: #1: Are all the (measurable) dimensionless parameters that characterize the physical universe calculable in principle, or are some simply determined by historical or quantum-mechanical accident and incalculable? #2: How can quantum gravity help explain the origin of the universe?

A newspaper article about these millennial mysteries made some interesting comments about question #1. Perhaps Einstein, in fact, “put it more bluntly: Did God have a choice in creating the universe?” – which also sums up dilemma #2. While the Eternal certainly ‘may’ have had a ‘choice’ in Creation, the following arguments will conclude that the answer to Einstein’s question is an emphatic “No”: precise fundamental parameters are demonstrably calculable within a single dimensionless universal system that naturally comprehends a literal “Monolith”.

The article also went on to ask whether the speed of light, Planck’s constant, and the electric charge are determined arbitrarily – “or do the values have to be what they are due to some deep and hidden logic”. Questions of this kind come to a head with a puzzle involving a mysterious number called alpha. If you square the charge of the electron and then divide it by the speed of light times Planck’s (‘reduced’) constant times 4π times the permittivity of the vacuum, all the metric dimensions (of mass, time, and distance) cancel out, producing the so-called “pure number”: alpha, which is just over 1/137. But why isn’t it precisely 1/137 or some other value altogether? Even mystics have tried in vain to explain why.
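The cancellation just described is easy to check numerically. The sketch below (in Python, using the 2006 CODATA values tabulated later in this article; the variable names are mine) forms the ratio and shows that a pure number close to 1/137 falls out:

```python
import math

# CODATA 2006 values (illustrative; experimental uncertainties omitted)
e    = 1.602176487e-19    # elementary charge, in coulombs
hbar = 1.054571628e-34    # reduced Planck constant, in joule-seconds
c    = 299_792_458        # speed of light, in m/s (exact by definition)
eps0 = 8.854187817e-12    # permittivity of the vacuum, in farads per meter

# square the electron charge and divide by (4*pi*eps0) * hbar * c:
# every metric dimension of mass, time, and distance cancels out
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)          # 1/alpha is approximately 137.036
```

Since all the SI units cancel, the result is the same in any coherent system of units – which is exactly why alpha is called a pure number.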

Which means that, although constants such as the mass of a fundamental particle can be expressed as a dimensionless ratio relative to the Planck scale, or to a known or available unit mass of somewhat better precision, the inverse of the electromagnetic coupling constant alpha is uniquely dimensionless – a pure ‘fine-structure number’ of about 137.036. On the other hand, assuming a single, invariably discrete or exact fine-structure number exists as a “literal constant”, its value has yet to be confirmed empirically as a ratio of two inexactly determinable ‘metric constants’: h-bar and the electric charge e (the speed of light c having been defined exactly, in the 1983 adoption of the SI convention, as an integer number of meters per second).

So while this puzzle has been deeply perplexing almost from its inception, my impression on reading this article in a morning paper was one of utter astonishment that a numerological problem of invariance should receive such distinction from eminent modern authorities. For I had been obliquely obsessed with the fs number in the context of my colleague A. J. Meyer’s model for several years, but had come to accept its experimental determination in practice, periodically pondering the dimensionless issue to no avail. Gross’s question served as a catalyst to my complacency; I recognized a unique position as the only one who could provide a categorically complete and consistent answer in the context of Meyer’s main fundamental parameter. Still, my pretentious instincts led me through two months of mindless intellectual posturing until I sensibly repeated a simple procedure explored a few years earlier. I simply looked at the result using the ’98-’00 CODATA value of alpha, and the following solution immediately hit with full heuristic force.

For the fine-structure relation effectively quantifies (via h-bar) the electromagnetic coupling between a discrete unit of electric charge (e) and a photon of light, in the same sense that the integer 241 is discretely ‘quantized’ compared to the ‘fractional continuum’ between it and 240 or 242. One can easily see what this means by considering another integer, 203, from which we subtract the base-2 logarithm of the square of 2π. Now add the inverse of 241 to the resulting number, and multiply the total by the natural logarithm of 2. It follows that this pure calculation of the fine-structure number is exactly equal to 137.0359996502301… – given here to 16 digits, though it is calculable to any number of decimal places.
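Spelled out in code, the recipe above is a one-liner (a minimal Python sketch; the variable name is mine):

```python
import math

# 203, minus the base-2 logarithm of (2*pi) squared, plus 1/241,
# all multiplied by the natural logarithm of 2
pure_fs = (203 - math.log2((2 * math.pi) ** 2) + 1 / 241) * math.log(2)
print(f"{pure_fs:.10f}")  # approximately 137.0359996502
```

Note that floating-point doubles carry only about 16 significant digits; to verify further decimal places one would switch to an arbitrary-precision library such as Python’s `mpmath`.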

By comparison, given the experimental uncertainty in h-bar and e, the NIST assessment wanders up or down in the neighborhood of the ‘965’ in the invariant sequence defined above. The following table provides the values of h-bar, e, their ratio calculated as a fine-structure number, and NIST’s actual choice of fine-structure number for each year of their files, as well as the 1973 CODATA, where the ± standard experimental uncertainty in the trailing digits is given in parentheses.

year: h-bar = N_h × 10^-34 J·s | e = N_e × 10^-19 C | h-bar/e^2 → fs number | NIST value ±(SD)

2006: 1.054 571 628(053) | 1.602 176 487(040) | 137.035 999 661 | 137.035 999 679(094)
2002: 1.054 571 680(18x) | 1.602 176 53x(14x) | 137.035 999 062 | 137.035 999 11(46)
1998: 1.054 571 596(082) | 1.602 176 462(063) | 137.035 999 779 | 137.035 999 76(50)
1986: 1.054 572 66x(63x) | 1.602 177 33x(49x) | 137.035 989 558 | 137.035 989 5xx(61xx)
1973: 1.054 588 7xx(57xx) | 1.602 189 2xx(46xx) | 137.036 043 335 | 137.036 04x(11x)
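The calculated-ratio column of the table can be reproduced directly from the tabulated h-bar and e values (a Python sketch; the dictionary simply transcribes the table, and ε₀ is derived from the exact pre-2019 SI definitions):

```python
import math

c = 299_792_458           # speed of light, m/s (exact since 1983)
mu0 = 4e-7 * math.pi      # vacuum permeability (exact in pre-2019 SI)
eps0 = 1 / (mu0 * c**2)   # vacuum permittivity, F/m

codata = {                # year: (h-bar in J s, e in C), from the table above
    2006: (1.054571628e-34, 1.602176487e-19),
    2002: (1.054571680e-34, 1.602176530e-19),
    1998: (1.054571596e-34, 1.602176462e-19),
    1986: (1.05457266e-34,  1.60217733e-19),
    1973: (1.0545887e-34,   1.6021892e-19),
}

# fine-structure number (inverse alpha) = 4*pi*eps0*hbar*c / e^2
fs = {year: 4 * math.pi * eps0 * hbar * c / e**2
      for year, (hbar, e) in codata.items()}

for year in sorted(fs, reverse=True):
    print(f"{year}: {fs[year]:.6f}")
```

Running this shows each year’s ratio agreeing with the table to the digits quoted, confirming that NIST’s listed fs number tracks the measured h-bar and e.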

So it seems that NIST’s choice is roughly determined by the measured values for h-bar and e alone. However, as explained at http://physics.nist.gov/cuu/Constants/alpha.html, in the 1980s interest shifted to a new approach that provides a direct determination of the fs number by exploiting the quantum Hall effect, independently corroborated by both theory and experiment on the electron magnetic moment anomaly, thus reducing its already finer uncertainty. It nevertheless took 20 years before an improved measurement of the magnetic moment g/2-factor was published in mid-2006, where the first estimate from this group (led by Gabrielse at Harvard) for the fs number was (A:) 137.035999710(096) – explaining the much-reduced uncertainty in the new NIST listing compared to that of h-bar and e. More recently, however, a numerical error was discovered in the initial (A:) QED calculation, which (in a second paper, B:) changed the value to (B:) 137.035999070(098).

Although it reflects an almost identically small uncertainty, this assessment falls clearly outside the NIST value, which is in good agreement with the estimates for h-bar and the elementary charge that are independently determined by several experiments. NIST has three years to sort this out, but meanwhile faces an embarrassing irony in that at least the ’06 choices for h-bar and e appear to be slightly biased toward the expected fit for the fs number! For example, fitting the last three digits of the ’06 data for h-bar and e to our pure fs number leaves only a negligible misfit in the ratio h628/e487. Had the QED error been corrected before the actual 2007 NIST release, the data could easily have been smoothly adjusted to h626/e489 – though that would call into question the consistency of the last three digits of the fs number with respect to the comparable ’02 and ’98 data. In any case, much larger improvements across multiple experimental designs will be required before a comparable reduction in the error of h-bar and e can settle this issue definitively.

But again, even then, no matter how ‘precisely’ the metric measurement is refined, it still falls infinitely short of ‘literal exactness’, whereas our pure fs number fits the current h628/e487 values to full available precision. In the former sense, I recently discovered that a mathematician named James Gilson (see http://www.maths.qmul.ac.uk/%7Ejgg/page5.html) has also produced a pure numerical value, 137.0359997867…, closer to the revised ’98-’01 standard. Gilson further argues that he has calculated numerous parameters of the standard model, such as the dimensionless ratio between the masses of the weak gauge bosons Z and W. But I know that he could never construct a single proof using equivalences capable of deriving the Z and/or W masses per se from the precisely confirmed masses of the heavy quarks and Higgs fields (see the essay referenced in the resource box), which in turn result from a single primordial dimensionless tautology. For it is the numerical discreteness of the fraction 1/241 that allows the construction of physically significant dimensionless equations. Substituting Gilson’s numerology, or the refined empirical value of Gabrielse et al., for the fs number would destroy this discreteness, that precise self-consistency, and the ability to even write a meaningful dimensionless equation! By contrast, it is perhaps not too surprising that after I literally ‘found’ the integer 241 and obtained the exact fine-structure number from the resulting ‘Monolith Number’, it took only about two weeks to calculate the six quark masses using real dimensionless analysis and various fine-structure relations.

But since we are now talking not really about the fine-structure number per se but about the integer 137, the result definitively answers Gross’s question. For those “dimensionless parameters characterizing the physical universe” (including alpha) are ratios between selected metric parameters that lack a single unified system of dimensionless mapping from which metric parameters such as particle masses could be calculated by established equations. The ‘standard model’ provides a single set of parameters, but no means to calculate or predict any or all of them within a single system; the experimental parameters are instead entered arbitrarily by hand.

A final irony: I am doomed to be dismissed as a ‘numerologist’ by ‘experimentalists’ who continually fail to recognize strong empirical evidence that the masses of the quarks, Higgs, or hadrons can be used to calculate exactly the current standard for the best-known precision and heaviest mass in high-energy physics (the Z). So, to the contrary, dumb devils: empirical confirmation is just the final cherry the chef puts on top before presenting a “proof pudding” that no sentient being could resist merely because he didn’t put it together himself, and so serves up in its place a mimicking mess bearing no resemblance to the real deal. For the base of this pudding is made from melons I call Mumbers, which are really just numbers, pure and simple!
