Why Feynman Diagrams Are So Important (quantamagazine.org)
255 points by MaysonL on July 7, 2016 | 67 comments


John Baez's Prehistory of n-Categorical Physics [1] has some interesting nuggets about Feynman diagrams, in the "Feynman (1947)" section.

In particular, "The mathematics necessary for [interesting Feynman diagrams] was formalized later, in Mac Lane's 1963 paper on monoidal categories (see below) and Joyal and Street's 1980s work on 'string diagrams.'"

In other words, Feynman diagrams were (or at least can be taken as) an early precursor of diagrammatic approaches in category-theoretic mathematical physics.

The paper is really great for getting some context around these diagrams and a sense of the underlying mathematics, even for a category-theory ignoramus like myself, with a little "suspension of comprehension."

1: http://math.ucr.edu/home/baez/history.pdf


Maybe someone reading this can enlighten me on the utility of thinking about virtual particles in the vacuum. This is not related that directly to the article, but it is mentioned in it.

Feynman diagrams are a perturbation theory expanding around a problem we can solve - non-interacting free fields. The lines in the diagram represent the particles in the free theory, not the interacting theory. Renormalization does allow us to make some correspondence between the two, but that can only be taken so far, at least to my understanding. (And perhaps this is my limitation.)

There are lots of vacuum diagrams, for example. These represent the difference between the vacuum in the free theory and the interacting theory. That does not mean there is frothing going on in the interacting theory that is not in the free theory. The vacuum wave function is still a combination of field configurations each with an associated amplitude density. (And, relating to the article, of course, this creates an associated non-zero vacuum energy.) It looks just like the vacuum in a particle Schroedinger equation model, but just a lot more degrees of freedom.

What are the virtual particles? I'd appreciate if someone could explain what people mean and why when they talk about these.


The external legs of Feynman diagrams represent real (not virtual) particles, whose interactions we want to study. Virtual particles run along the internal lines (propagators) of Feynman diagrams, i.e. they can be thought of as actual particles (of the same or of other types) created by the collision of the real particles, which then recombine back into the real particles.

So say two real particles collide and scatter off each other: the collision creates all kinds of "virtual particles" that are allowed by the interaction terms in the theory's Lagrangian, and they recombine back into the same two real particles because of the same interaction terms. Using Feynman's famous path integral, you integrate over all possible virtual particles and their states (with different contributions to the integral), normalize those contributions, and then you get the probability (or amplitude) of the real particles in a certain state getting scattered (or transitioning to "another [quantum] state") - the external incoming and outgoing "legs" of the diagram.


There are many ways to think about virtual particles. From a particle point of view, one can think of it as quantum mechanics allowing you to create a high-energy particle for a short amount of time, even if that would not be allowed classically. From a field theory point of view, you can have excitations in the fields that are not sufficient to be a particle, but can still interact locally.


I was looking for an explanation of why the lines of a Feynman diagram are called virtual particles when they are really particles in the unperturbed (or free) model. So why do you think of this as creation of a particle for a short amount of time?


Generally if you look at the Feynman diagram for an interaction, you'll have some number of external legs which one thinks of as physical particles, as well as a number of interactions. The lines between the interactions may be particles whose mass exceeds the total amount of energy of the incoming particles, so they obviously can't be real. They are thus generally called virtual (or off-shell) particles. How one can think of this is what I tried to address in the original response.


I appreciate you saying the external legs can be thought of as physical particles, since they are not technically physical particles. The renormalization group tells us that those look like physical particles. Internal to the interaction this is not true, which is why they would be called virtual particles. But I see them as a pure mathematical relic of the perturbation calculation. Perhaps I am just not able to extend my intuition here. And likely a more in-depth description from you or someone else is not really practical here. I just come back to the fact that in the end, particles are just different excitation states of the system. There is a vacuum state, a bunch of states we would consider one-particle states, a bunch more we would consider two-particle states, and so on. Doesn't this cover all configurations of the system, so there are no "in-between" or what you would call off-shell states? I think these only make sense when we are making an approximation around the non-interacting model.


Well for one thing without them we wouldn't have Penguin diagrams! (https://en.wikipedia.org/wiki/Penguin_diagram)


Another cool use of diagrams in abstract algebra:

https://graphicallinearalgebra.net


This is a fantastic resource! The writing is really enjoyable.


A good read on the historical development and use of Feynman diagrams is Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics https://www.amazon.com/Drawing-Theories-Apart-Dispersion-Dia...


Maybe a strange question, but could one of the stumbling blocks be that we still have a lot of work to do on the mathematics of infinities? I mean, saying that a positive and negative infinity might "cancel" sounds like that's where the maths gets murky.


Full disclosure: I don't work in particles but have solved Feynman integrals for class work.

Renormalization is indeed 'black magic'. But I don't think it is due to our limited mathematical knowledge.

In my experience in mathematical physics, singularities appear when you have an incomplete modeling ansatz, and so in this sense renormalization would just be a 'hack' that is necessary because of writing down imperfect equations. In other words, formulating the problem a different (and closer to the truth) way would eliminate these infinities.

I know some extensions of the standard model (string theory, in particular) do not have cutoff scales. But that is largely beyond my knowledge base.


It sounds murky because we were being silly the first time around. The physically relevant quantity was always the sum of all those contributions, and we decided to (arbitrarily, in the grand scheme of physics) split things up under some scheme. So, in hindsight, the positive and negative infinities were artificially manufactured by our segregation scheme... there's not much to be bothered about in that regard.

That might still mean that there's more to be understood regarding notions of infinity or the convergence of amplitudes in quantum field theory.


As an example (not sure if applicable), you could do some calculation and get an alternating series as the result:

R = 1 - 1/2 + 1/3 - 1/4 ...

If you "explain" the positive part as being contribution from field A, and the negative part as a contribution from field B, both would be infinite, but the reality might be that splitting them is nonsensical.
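A quick numerical sketch of this point (purely illustrative): the alternating harmonic series converges to ln 2, but the "field A" and "field B" parts each diverge on their own, so assigning separate meanings to them is indeed suspect.

```python
import math

def partial_sums(n_terms):
    """Partial sums of R = 1 - 1/2 + 1/3 - 1/4 + ..."""
    total, pos, neg = 0.0, 0.0, 0.0
    for k in range(1, n_terms + 1):
        term = 1.0 / k
        if k % 2 == 1:
            pos += term    # "field A" part: 1 + 1/3 + 1/5 + ... (diverges)
            total += term
        else:
            neg += term    # "field B" part: 1/2 + 1/4 + 1/6 + ... (diverges)
            total -= term
    return total, pos, neg

total, pos, neg = partial_sums(1_000_000)
print(total, math.log(2))  # the full sum converges to ln 2 ≈ 0.6931
print(pos, neg)            # each "part" alone just keeps growing with n_terms
```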


Well... one infinity that pops up is solved by regulators. This might be one that you get when you try to understand the Casimir force. In this case, you impose a cutoff at some scale to make the expression finite, but a function of the cutoff. You send the cutoff to infinity and the expression goes to infinity.

Now the way you calculate with this is that most of the time when you try to calculate a quantity that you can measure with an instrument (in the lab or whatever), you find that the cutoff cancels. In other words, the value the cutoff took never actually mattered to begin with.

When people say that the infinities cancel or whatever, it's a bit misleading. In nature we don't really know what this cutoff is. We think it should be there, though! It's just sort of where the theory breaks down.

Tony Zee explains this using a bedspring as an analogy. If you probe the springiness of a bedspring with a bowling ball, it works fine. It behaves as a continuous sheet. If you throw a ball bearing at it, though, the bearing might fall through or hit a wire and shoot off at a weird angle. The cutoff in this case is the distance between springs or the spring size or something. Our models basically make this continuum analogy. This is NOT to say that the fabric of spacetime is pixelated or something like that. We have NO idea what the structure is. We just know that if we don't look too close, it works like this.

The wonderful thing, though, is that this model is useful. There are numerous ways to build experiments that measure quantities independent of this cutoff to verify the theory. Understanding what goes on at higher energies/shorter length scales (which are effectively the same thing) is precisely why particle physics is called high energy physics.
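A toy numerical version of the regulator story (the exponential damping here is an illustrative choice, not the regulator of any particular Casimir calculation): the divergent mode sum 1 + 2 + 3 + ... becomes finite once regulated, the divergence shows up as a term depending only on the cutoff, and subtracting it leaves a cutoff-independent remainder.

```python
import math

def regulated_mode_sum(a):
    """Sum of mode numbers n, damped by an exponential regulator e^(-n*a).
    Without the regulator, 1 + 2 + 3 + ... diverges; with it, the sum is
    finite but blows up like 1/a^2 as the cutoff is removed (a -> 0)."""
    total, n = 0.0, 1
    while True:
        term = n * math.exp(-n * a)
        if term < 1e-18 and n * a > 1:
            break
        total += term
        n += 1
    return total

a = 0.01  # regulator scale; smaller a means a higher cutoff
s = regulated_mode_sum(a)
# Subtracting the cutoff-dependent divergence 1/a^2 leaves a finite
# remainder that approaches -1/12 as a -> 0.
print(s - 1 / a**2)  # ≈ -0.08333 ≈ -1/12
```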


One simplistic thing I heard about Feynman diagrams: Feynman diagrams are important because they have 3 lines - not 4, 5, 6, 20, etc. The fact that they have just 3 lines says something very, very important about nature and the world.

Is the above correct?


In QED, each vertex has 3 lines attached to it: 2 electron/positron lines and one photon line.

But in other quantum field theories, there are vertices with more edges. In QCD, for example, there are vertices where 4 gluons meet. In the Standard Model, there are vertices where 4 Higgs particles meet, and vertices where 4 electroweak bosons meet, and so on.

The numbers 3 & 4 aren't really special. What's special is the numbers we assign to the edges. In 4 spacetime dimensions, any gauge boson (like the photon or gluon) gets the number 1. So does any scalar field like the Higgs. Fermions like electrons and quarks get the number 3/2.

These numbers are important because you should never meet a Feynman diagram in a particle physics computation where the sum of these numbers at any vertex is larger than the spacetime dimension 4. This is a fairly deep fact about physics. Diagrams that violate these conditions are strongly suppressed at "low" energies (such as those accessible at the LHC). They contribute almost nothing to the Feynman diagram sum.

Edit: This is Wilson's explanation of renormalizability, adapted to Feynman sums. It's the explanation for why physics is possible: The details of short distance physics (think Planck scale) can be very complicated, maybe involving lots of complicated Feynman diagrams. But at low energies, almost all of that complication averages out, and we can adequately approximate the laws of physics using simple ingredients.
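The counting rule above can be sketched in a few lines of Python; the dimension table and field names here are filled in as illustrative assumptions based on the numbers quoted (gauge bosons and scalars get 1, fermions get 3/2, in 4 spacetime dimensions).

```python
# Scaling dimensions in 4 spacetime dimensions, per the comment above.
DIM = {"photon": 1.0, "gluon": 1.0, "higgs": 1.0,
       "electron": 1.5, "quark": 1.5}

SPACETIME_DIM = 4

def vertex_is_renormalizable(legs):
    """A vertex passes the counting rule if the summed scaling dimensions
    of the attached fields do not exceed the spacetime dimension."""
    return sum(DIM[f] for f in legs) <= SPACETIME_DIM

# The QED vertex: two fermion lines and one photon line (1.5 + 1.5 + 1 = 4).
print(vertex_is_renormalizable(["electron", "electron", "photon"]))  # True
# The four-gluon vertex in QCD (4 * 1 = 4).
print(vertex_is_renormalizable(["gluon"] * 4))  # True
# A hypothetical four-fermion vertex (4 * 1.5 = 6 > 4): suppressed.
print(vertex_is_renormalizable(["electron"] * 4))  # False
```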


Those labels seem related to the spin, is that right?


Indirectly right. Those numbers are the scaling dimensions of the field operators. (Rescale your ruler by a factor f, and the fields will get multiplied by f to the scaling dimension.) These numbers are fixed by requiring that the kinetic energy density transform correctly under scaling. The form of the kinetic energy, in turn, depends on the spin of the field quanta (i.e., the particles).


Feynman diagrams are a series expansion, not unlike a Taylor series. We do it for things which cannot be computed exactly. (I don't want to go into whether it is more "Nature", "our description of Nature", or "a box of numerical tricks we have mastered"... and frankly, I don't believe there is a strong distinction between these three.)

3 lines stand for the lowest non-trivial order (like 'x' in a Taylor series; '1' would be a line) - more complicated things can be made out of these.


I recommend the first two chapters of Griffiths's Introduction to Elementary Particles, which give a very readable approach, showing which Feynman diagrams occur and what they mean. After that, the book gets more technical, but the first two chapters give historical background and make it feel like playing with tinker toys, where you can just snap together various basic diagrams.


I heard a good rule of thumb to follow when drawing Feynman diagrams is that the fewer lines there are, the more probable the occurrence of the event represented by the diagram. So in the simplest case, a single line representing an unchanging particle is far, far more likely than any two-particle interaction (3 lines), and so on.


This is true if the system you're trying to describe is weakly coupled.

If the theory is strongly coupled, then in fact more and more complicated diagrams count more and more. In this case the method of Feynman diagrams becomes basically useless. In the case you describe, you know you can get most of the answer from a simple calculation (of the simplest diagrams). But if more complicated diagrams count more (as in strong coupling), then you don't have a place to start, because whatever diagram you pick to start at, I can make it more complicated and be confident that my new diagram counts more than the pieces you've computed.

In that case we need a different approach, the most generic of which is lattice field theory:

https://en.wikipedia.org/wiki/Coupling_constant#Weak_and_str...

https://en.wikipedia.org/wiki/Lattice_field_theory
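A toy illustration of the difference (ignoring all the combinatorial and kinematic factors that real amplitudes carry): a diagram with n interaction vertices contributes roughly g^n, so truncating the series is only sensible when the coupling g is small.

```python
def order_contributions(g, max_order=8):
    """Toy model: a diagram with n vertices contributes ~ g**n.
    Real amplitudes also carry combinatorial factors; this shows only
    the coupling-constant scaling discussed above."""
    return [g ** n for n in range(1, max_order + 1)]

weak = order_contributions(0.1)    # weakly coupled: each vertex suppresses
strong = order_contributions(3.0)  # strongly coupled: each vertex enhances
print(weak)    # terms shrink rapidly; the simplest diagrams dominate
print(strong)  # terms grow; no finite set of diagrams is a good start
```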


Roughly, though it is actually the number of vertices that matters, and it depends on the particles involved (obviously a diagram representing something very unlikely is less likely than some normal process).


Feynman diagrams are just an abstraction that manages to abstract away some elements of nature into a simplified version.

Fetishizing them is like fetishizing the way Ancient Egyptians wrote their fractional numbers.


That doesn't say much though.

Something that is "just abstraction" can still be world-changing...

In fact those are the most world-changing things (abstractions), the concrete stuff mostly derives from there...


If you focus too much on the current abstraction, you lose sight of better ones - that was the point of my previous post.

I also don't agree that concrete stuff derives from abstractions. You can make concrete stuff using certain models/abstractions, but its behavior will always differ from the model. Because models aren't reality.


What exactly is it that it says about nature?


QED!


QED as in Quantum Electrodynamics? :)


Being used to living in a world of cause and effect, I'm having a really hard time getting my head around “random fluctuations” that seem to have an effect but no cause. That's about as illogical to me as the idea of a magnetic monopole. But then,

> ...particles are merely bubbles of froth, kicked up by underlying fields. Photons, for example, are disturbances in electromagnetic fields.

I can get behind this, even if only by way of misunderstanding. All deviation from nonexistence (Void) is a disturbance. All existence is one giant software bug in the fabric of Nothingness (if it could have something like fabric).

That's why I've got to love Physics. To me, it's the most exquisite collection of brain-racking mysteries I can imagine (modulo the narrow bounds of my imagination, of course).


> I'm having a really hard time of getting my head around “random fluctuations” that seem to have an effect but no cause

So you're saying God does not play dice with the universe? :)


Nope. Just Craps.


Perhaps “random fluctuations” are no more random than Brownian motion. I.e. there is cause and effect on a finer level (not yet possible to probe with current techniques), but the large number of interacting particles renders analytical prediction an infinitely complex task.


Physics (as far as we know) doesn't care about the direction of time. Cause and effect depends on a fixed direction of time.


Physics does clearly "care" about the direction of time, because physics is a description of the physical universe. Dissipation, entropy, quantum measurement... etc. all have time as a key concept.

Math may not care (and it's true that most of the fundamental physics equations are often time reversible). But the universe (and any experiment to date) clearly shows that time has a direction and the equations work in that direction.


That's only because one time end of the universe has a very low entropy. If you have a region with maximum entropy, there is no way to tell.


Physics does care about the direction of time: https://en.wikipedia.org/wiki/T-symmetry


https://en.wikipedia.org/wiki/CPT_symmetry

Charge, Parity, and Time Reversal Symmetry is a fundamental symmetry of physical laws under the simultaneous transformations of charge conjugation (C), parity transformation (P), and time reversal (T). CPT is the only combination of C, P and T that is observed to be an exact symmetry of nature at the fundamental level. The CPT theorem says that CPT symmetry holds for all physical phenomena, or more precisely, that any Lorentz invariant local quantum field theory with a Hermitian Hamiltonian must have CPT symmetry.


Is there a list somewhere of all of the basic particle interactions we know of, represented as Feynman diagrams? My understanding is that there is a small, finite number of them; it would be neat to see them enumerated.


Honest question: why does it take a supercomputer to come up with the animated simulation in the article? It doesn't seem like there are that many variables.


The simulation in question was created using Lattice QCD (which many people in the HPC world mostly know as a benchmark to optimize their compilers/systems on, but it's also a useful physics technique).

Basically what you end up doing is discretizing spacetime on some grid, maybe e.g. 100x100x100x100 (not much larger than that, last I checked, or the problem becomes intractable even on very large computers) and then sampling field configurations Monte Carlo style (the way to do it is actually really clever - it turns out that there is a mathematical trick that can take you from the description of your theory right to a distribution from which you can draw your field configurations). The number of fields is decently large (some discretizations introduce extra auxiliary fields for every physical one) and in general you need to consider the entire field configuration to compute an observable. Then do that a few billion times (to bring down statistical error) at a couple of different lattice spacings (to be able to extrapolate to the continuum limit) and you quickly end up with a very large computation.

There are a number of clever ways to reduce the amount of computation required, but in general it's a very different problem.

Hope that makes some sense. I had looked into this in detail about two years ago, but looking back, I feel like I only know every other word in my write-up ;).


I do lattice QCD, and this is a pretty decent explanation.

A few minor things:

People typically use lattice sizes that have lots of factors of 2, because they fit on the supercomputers in a better way. So rather than 100, people do 96 or 128. Only some calculations require lattices this large.

The field content is usually a few fermions, and the gluons. On each site there is (usually, depends on the discretization as you say) a Dirac spinor (2 spins * 2 particle/antiparticle * 3 spins = 12 numbers) for each fermion and on each link (ie. the edges) there is an SU(3) matrix (3x3 complex doubles) for the glue.

The field configuration is simply the glue. But this already is a large amount of data. volume * (9 complex doubles per link) * (1 link per spacetime direction) * (4D spacetime) ~= 9 gigs for a (32^3 * 64 spacetime volume). This is effectively a giant (sparse) matrix, which we want to invert.

The mathematical trick is called importance sampling, https://en.wikipedia.org/wiki/Importance_sampling which people accomplish with the metropolis algorithm https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_al... . If only we could do a few billion samples! People usually make a few thousand to a few hundred thousand samples.
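For a feel of how the sampling works, here is a minimal Metropolis sweep for a 1D scalar (phi^4) lattice theory. This is a toy stand-in, not lattice QCD (where the degrees of freedom are SU(3) matrices on links and Dirac spinors on sites), and all parameters are made up for illustration, but the accept/reject logic is the same in spirit.

```python
import math
import random

def metropolis_phi4_1d(n_sites=32, mass2=1.0, lam=0.5,
                       sweeps=200, step=0.5, seed=0):
    """Minimal Metropolis sampler for a 1D lattice phi^4 theory with
    periodic boundary conditions. Returns the final field configuration
    and the overall acceptance rate."""
    rng = random.Random(seed)
    phi = [0.0] * n_sites

    def local_action(i, value):
        # Kinetic term couples site i to its two periodic neighbours,
        # plus mass and quartic self-interaction terms.
        left = phi[(i - 1) % n_sites]
        right = phi[(i + 1) % n_sites]
        kinetic = (value - left) ** 2 / 2 + (right - value) ** 2 / 2
        return kinetic + mass2 * value ** 2 / 2 + lam * value ** 4

    accepted = 0
    for _ in range(sweeps):
        for i in range(n_sites):
            old, new = phi[i], phi[i] + rng.uniform(-step, step)
            delta_s = local_action(i, new) - local_action(i, old)
            # Metropolis rule: always accept if the action decreases,
            # otherwise accept with probability exp(-delta_S).
            if delta_s <= 0 or rng.random() < math.exp(-delta_s):
                phi[i] = new
                accepted += 1
    return phi, accepted / (sweeps * n_sites)

field, acceptance = metropolis_phi4_1d()
print(acceptance)  # a healthy acceptance rate sits well away from 0 and 1
```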


Having done some QFT, now doing hydrodynamics, I never knew that lattice people use a staggered grid for the fermions and gluons. Do you know if it's for the same reason as staggering velocity and pressure in CFD; to avoid checkerboarding of the spinors (pressure)?


KenoFischer is right---it comes down to gauge invariance.

Gauge symmetry is a local symmetry that dictates a lot of the structure of QCD. If your discretization breaks it (and say, only recovers it in the continuum limit) you will have very bad renormalization and fine-tuning problems. Putting the gauge on the links automatically guarantees that gauge symmetry is respected, even at the discretized level, and jibes very well with the fact that the gauge field / gauge connection describes, in some way, how to perform parallel transport. So, it's very natural for the gauge to connect the sites together in that you need to know the value of the link in order to compare quantities on two adjacent spacetime sites.

https://en.wikipedia.org/wiki/Gauge_theory https://en.wikipedia.org/wiki/Connection_form https://en.wikipedia.org/wiki/Parallel_transport


If I remember correctly, there is no way to obtain a gauge invariant discretization if you try to put the gluons at the lattice sites. I can try to find a proper reference.


Thanks for the correction! I never did any actual Lattice QCD calculations, just looked at the theory, so I was taking a bit of an educated guess on the feasible problem sizes.


Minor erratum: 's/3 spins/3 colors/'.


Great question! There is more to it than you can see in the animation. Because this is a quantum mechanical problem, it's not enough to take into account a single given field configuration at a given point in time, but we have to work with all possible field configurations at the same time. Because the number of configurations grows exponentially with the complexity of the system, we have to invest enormous amounts of memory and computing power into simulating it.

Feynman actually wrote this pretty amazing paper on the topic of simulating quantum mechanical systems with computers: http://link.springer.com/article/10.1007/BF02650179. You should have a look, it's quite readable!

(If the paywall is a problem for you, you should be able to find the paper elsewhere)


Thanks. Here it is without a paywall. https://people.eecs.berkeley.edu/~christos/classics/Feynman.... Such a simple title ;)


Complexity classes of algorithms.

There are only 3 variables in a discrete logarithm problem, but it takes supercomputers to solve. Same with prime number factorization.

The number of inputs is completely unrelated to the time to solve; it depends on the algorithm being used to solve the problem.
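To make the point concrete, here is a brute-force discrete log (illustrative only, with a made-up toy instance): the problem statement has just three numbers, but the search space grows with the modulus, which is why large instances need enormous computation.

```python
def discrete_log_bruteforce(g, h, p):
    """Find x with g**x ≡ h (mod p) by trial multiplication.
    Only three inputs, but the running time grows with p; real-world
    moduli are hundreds of digits, putting this far out of reach."""
    value = 1
    for x in range(p):
        if value == h:
            return x
        value = (value * g) % p
    raise ValueError("no solution")

# A toy instance: find x with 3**x ≡ 12 (mod 17).
print(discrete_log_bruteforce(3, 12, 17))  # → 13
```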


If you have read any of the books like Surely You're Joking, the first thing that springs to mind is those diagrams he drew on napkins.


I'm kind of confused by Feynman diagrams where a stray gluon flies off. What happens to it then? Where does it go?


It goes into a vertex involving some other particle with the appropriate "color" charge. That part is not depicted because it is not relevant to the process under study, as in "and then it goes off to die somewhere, I don't care where or how".


That's just a part of a bigger diagram. Due to color confinement you can't just have one gluon.


They have a diagram that emits a single gluon at the head of every particle physics article on Wikipedia. Is that a make-believe diagram? I wouldn't expect that from them.


It is not a make-believe diagram. It is a completely valid diagram, but it is just part of something bigger. Also, the two quarks in the diagram need to ultimately couple to something else. The "in" and "out" final states must be colorless. QCD is an extremely complex theory; one usually computes a piece of the puzzle and tries to extract physical information from that.


Just to add to this: the part that comes after this initial gluon emission (hadronization) is not very well understood, since at this length scale the perturbative (i.e., Feynman diagram) approach to QCD breaks down. See e.g. http://www.quantumdiaries.org/2010/12/11/when-feynman-diagra...


What does he mean by "gravity responds to all kinds of energy"? What about photons?


You might be interested to read about the kugelblitz:

> In theoretical physics, a kugelblitz is a concentration of light so intense that it forms an event horizon and becomes self-trapped: according to general relativity, if enough radiation is aimed into a region, the concentration of energy can warp spacetime enough for the region to become a black hole.

https://en.wikipedia.org/wiki/Kugelblitz_(astrophysics)


Yes, those too. Light is influenced by gravitational fields!


Is it not more a case of space-time being influenced rather than light?


Yes, I would think. Gravity warps space, and light just follows a line in space, which is a curve in space-time. But since gravity can affect light, does light (energy) affect gravity?


Yes. The gravitational field couples to any energy; it doesn't care what the source is. Light carries energy, so the gravitational field responds to it.


More formally, a photon field contributes to the so-called stress-energy tensor, which acts as the 'source' of gravity in the Einstein equations, thus affecting gravity, as you say.


Yeah, in physics if A affects B then B always affects A. This idea was first formalised as Newton's Third Law.



