I don't know if this is exactly the same as what I learned in high school as "integration by substitution."
A number of years after I finished school, I was in a new town without a job, and got hired to teach a freshman algebra course at the nearby Big Ten university. About halfway into teaching the class, I was struck by the realization that virtually every problem was solved in the same way, by recognizing the "form" of a problem and applying an algorithm appropriate for that form, drawn from the most recent chapter.
In the TFA, the natural log in the integrand was a dead give-away because it only comes from one place in the standard order of topics in calculus class.
Is this what we call intuition?
The students called this the "trick." Many of them had come from high school math under the impression that math was subjective, and was a matter of guessing the teacher's preferred trick from among the many possible.
For instance, all of the class problems involving maxima and minima involved a quadratic equation, since it was the only form with an extremum that the students had learned. Every min/max problem culminated with completing the square. I taught my students a formula that they could just memorize.
The whole affair left me with a bad taste in my mouth.
I think the difference is something like Feynman’s trick simplifies a hard integral by introducing a parameter and differentiating the whole integral, while substitution simplifies an integral by changing variables to undo the chain rule. But it has been so long since I've done integration manually I'm not 100% sure that's an accurate description/the full story.
The thing I hated about integration was that figuring out which approach would work, and the best option within each approach, was much more "do a lot and see what's right", and I was too lazy :).
I think it's intuitive to assume what you are being tested on is what is being taught by the book or the teacher. It's unfair otherwise.
I actually think math and sciences should introduce what I call "synthesis" much earlier. i.e. I don't think it's unfair to give students all the ingredients and add in a question on the exam to see if they can take those ingredients and apply it to a problem type they haven't seen before. (This is a great differentiator between C students and A students.) Or for a science class, rather than perform an experiment, I think the students should have to actually DESIGN the experiment first. (I had one laboratory exam in 2nd semester undergrad chem class that did this and it was amazing! The students also performed pretty well at it too. It consisted of being told to figure out how much zinc was in a lozenge. We were also maybe given a handy reaction formula and that was it. You had to design your analysis procedure and figure out how to get the quantity you wanted out of it, and then actually perform your analysis all within the exam period.)
I think not doing this starting in like middle school is a big part of the reason why people think math/science is useless. Unless the exact scenario they have been taught pops up, they can very rarely see the application. But the real world NEVER works this way. A problem is NEVER formulated as a straight forward well-formed problem. Figuring out how to mold it into something that you can apply the tools you know to is in and of itself a REALLY important skill to practice, and sadly, we almost NEVER practice that. Only in grad school does that type of thing come up.
Depends on your sense of fairness. Math Olympiads don't test what's in the book, but they are also fair.
The fairness of a competition and the fairness of asking a particular question are different.
When things are just for fun, the impact of having unfair questions is much smaller than when unfair questions can cause people to fail a class or get a lower GPA. This is why these kinds of unfair questions sometimes get designated as extra credit, since it's unfair for them to actually count against you.
I just finished Mathematica by David Bessis and I wish this information was presented in the way he talks about math: using words and imagery to explain what is happening, and only using the equations to prove the words are true.
I just haven’t had to use integral calculus in so many years, I don’t recall what the symbols mean and I certainly don’t care about them. That doesn’t mean I wouldn’t find the problem domain interesting, if it was expressed as such. Instead, though, I get a strong dose of mathematical formalism disconnected from anything I can meaningfully reason about. Too bad.
My intuition for the Feynman's trick is that we construct a "morph" which produces the given function (the parameter t drives the morphing).
The key to the trick is that we construct the morph so that: a) we can tell the rate at which it increases the "area under curve", b) the rate is easier to integrate than the original function, and c) the starting function has a known integral.
a) is generally easier because differentiation under the integral sign lets us use the standard differentiation rules.
b) this is where the difficulty in constructing the morph lies.
So we start from a known value of the integral (from c above) and then just add whatever the morph adds, which is the integral of the rate from a) over the interval of the morph.
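The morph picture can be checked numerically. A minimal stdlib sketch, assuming the thread's example I(t) = ∫₀¹ (xᵗ − 1)/ln x dx: the morph starts at the known value I(0) = 0, and the rate is I'(s) = ∫₀¹ xˢ dx = 1/(s+1), so accumulating the rate over [0, t] gives ln(t+1).

```python
import math

def integrand(x, t):
    # (x^t - 1)/ln(x); removable singularity at x = 1, where the limit is t
    return t if abs(x - 1.0) < 1e-12 else (x**t - 1.0) / math.log(x)

def midpoint(f, a, b, n=100000):
    # Midpoint rule; midpoints conveniently avoid the endpoints x = 0 and x = 1
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

t = 3.0
# Direct numerical evaluation of I(t)
direct = midpoint(lambda x: integrand(x, t), 0.0, 1.0)
# Morph answer: start at I(0) = 0 and accumulate the rate 1/(s+1)
# over the morph interval [0, t], which integrates to ln(t+1)
via_morph = math.log(t + 1.0)

print(direct, via_morph)  # both ≈ 1.3863
```

The two numbers agree, which is exactly the a)/b)/c) story: a known starting value plus the integrated rate of the morph.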
That's one of the things I like best about https://betterexplained.com -- it focuses on ways to gain intuition about a given math concept, using visuals and metaphors as appropriate. If only math education were always presented like that....
When I was a student of physics and came across this paragraph in Feynman's book, I was curious if he really meant the simple technique explained in the article, a more general one (also described in the article with the integral bounds as functions of a parameter) or something else. I don't know, but this led me to read the text "Advanced Calculus" by Edwin Bidwell Wilson (1912), which includes a lot of examples and gems. If there is some young student out there who wants to go well beyond the basic techniques of calculus taught in analysis or mathematical physics courses, have a look at [0].
[0] https://archive.org/details/advancedcalculus031579mbp/mode/1...
My issue with both this and u-substitution is that you don't know what expression to use. There are a LOT of expressions that plausibly simplify the integral. But you have to do a bunch of algebra for each one (and not screw it up!), without really knowing whether it actually helps.
OTOH, if I'm given the expression, it's just mechanical and unrewarding.
I see your point, but as it is stated in the article, it is one of those techniques that require practice, and time to mature. And like it mentions, it's a bit like chess...when you're presented with some troubling integral, you can parametrize it in a number of ways. Most will bring you back to the beginning (like with the standard integration by parts), but the right one will make your life much easier.
It can be frustrating when math does not have any clear single path, but that's just the nature of the beast. In the beginning you'll just have to explore all the paths, but do that a couple of hundred times, and you'll start to notice patterns and what will work / what will not. Kind of like chess, where a good chess player can think N moves ahead in time.
That’s how most of math works past high school. It requires a lot of practice and intuition.
I don't know about this particular case though, I get the feeling there's a system to it that can be exploited by eg Wolfram. It's just that you're in the dark for a long time before you find the switch.
Your intuition is right. There is a general algorithm for finding antiderivatives: https://en.wikipedia.org/wiki/Risch_algorithm Its simplified form can solve pretty much all undergrad antidifferentiation problems.
I'm a math major, but I consider the time spent learning the tricks for antiderivation to be kinda useless.
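As a sketch of the point above: SymPy's `integrate` uses, among other heuristics, a partial implementation of the Risch algorithm, and (assuming SymPy is installed) its `risch=True` flag forces that routine.

```python
import sympy as sp

x = sp.symbols('x')

# An elementary antiderivative, found mechanically: log(exp(x) + 1)
f = sp.exp(x) / (1 + sp.exp(x))
F = sp.integrate(f, x)
print(F)

# Sanity check: differentiating the result recovers the integrand
assert sp.simplify(sp.diff(F, x) - f) == 0

# The Risch machinery can also decide non-elementarity: forcing it on
# exp(-x**2) reports a NonElementaryIntegral rather than failing silently
print(sp.integrate(sp.exp(-x**2), x, risch=True))
```

So the "tricks" really can be mechanized, at least for the class of elementary functions the algorithm covers.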
I think it just tokenizes everything and does pattern matching to find compositions it can exploit. It's not unlike compiler optimization.
Extraordinarily well done didactically, by the way:
First, a motivational anecdote, then some straightforward theory, a simple (yet impressive) example fully worked out, the general method, and further examples of increasing difficulty for practice with hints.
To people who find this stuff useful in practise today (and not merely fascinating or useful 50 years ago): what is your line of work?
I have needed to know the values of a few integrals in my job, but I have always ended up with a close enough answer using computational methods. What am I missing by not solving analytically?
Oftentimes the point of interest is not the numerical value of the integral but its behavior at different points in its domain. If I am able to figure out the symbolic expression, that becomes easier.
To give an example, consider the moment generating function or the Laplace transform. Their symbolic expressions can be very informative.
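For instance, here is a SymPy sketch (the symbol `lam` is just an illustrative decay rate): the symbolic transform makes the pole location visible at a glance, which a table of numbers would not.

```python
import sympy as sp

t, s, lam = sp.symbols('t s lam', positive=True)

# Symbolic Laplace transform of an exponential decay exp(-lam*t)
F = sp.laplace_transform(sp.exp(-lam * t), t, s, noconds=True)
print(F)  # 1/(lam + s): the pole at s = -lam is readable straight off
```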
Consider the Mercator projection. It was designed without any idea of the closed form of the required integral; it was mostly done by estimate and gut feel. Now that we know the actual form (an entirely serendipitous discovery), we feel more confident that we understand the transform. That confidence is largely psychological, but not entirely.
Note that when drawing a map in the Mercator projection we still have to fall back on numerical estimation. But it helps that parts of the transform are built from functions that have names: it means we have seen the same functions elsewhere, which instills a sense of familiarity and understanding.
There are way too many functions to name, so the ones we have given names to are a bit special.
In quantum mechanics, what you can measure experimentally (observables) are given by integrals. You can do the integrals computationally, but then you only have an empirical understanding of how the observables behave when you change some parameter of your experiment.
In our experiments, we need to know how the frequency of an electromagnetic resonator will change when we couple it to a quantum system. We calculate these frequency shifts with integrals. Being able to calculate these integrals analytically for some limiting cases helps us understand the dependence on the parameters. And usually you can patch the limiting cases together and not even have to compute the integrals numerically.
I spend a lot of time working with real-world electronics, where a good mathematical background is important to calculate things like component values for a desired behaviour.
But far better is developing a sense of what's "about right".
I have taught people who studied Electronic Engineering "properly" who calculate that the resistors need to be 20.7kΩ and 21.3kΩ for a given circuit and then will go mad scouring Farnell, Mouser et al for those values.
You or I would say "That needs to be a 22kΩ resistor and an 18kΩ resistor in series with a 4.7kΩ pot, because that is going to need adjusted on test because of the tolerances in everything else", wouldn't we?
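That "snap to a preferred value" habit is easy to mechanize. A toy sketch (the helper name is hypothetical; the E12 mantissas are the standard ±10% preferred-value series):

```python
import math

# Standard E12 preferred-value mantissas (the ±10% tolerance series)
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def nearest_e12(r_ohms):
    # Snap an ideal resistance to the closest E12 value in its decade,
    # also considering the bottom of the next decade up (e.g. 9.5k -> 10k)
    decade = 10.0 ** math.floor(math.log10(r_ohms))
    candidates = [m * decade for m in E12] + [10.0 * decade]
    return min(candidates, key=lambda c: abs(c - r_ohms))

print(nearest_e12(20700), nearest_e12(21300))  # 22000.0 22000.0
```

Both of the "calculated" values from the anecdote land on the same 22kΩ part, which is rather the point.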
Frustration. A lot of frustration: https://en.wikipedia.org/wiki/Path_integral_formulation
> So I got a great reputation for doing integrals, only because my box of tools was different from everybody else's
This is the most important lesson I learned in grad school. Methods are so important. I really think it is the core of what we call "critical thinking" - knowing how facts are made.
It’s interesting he mentions he doesn’t like contour integration since many integrals can be done either way.
Feynman’s trick is equivalent to extending it into a double integral and then switching the order of integration.
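Concretely, for the integral discussed elsewhere in the thread, since $\int_0^t x^s\,ds = (x^t - 1)/\ln x$:

```latex
\int_0^1 \frac{x^t - 1}{\ln x}\,dx
  = \int_0^1 \int_0^t x^s \,ds\,dx
  = \int_0^t \int_0^1 x^s \,dx\,ds
  = \int_0^t \frac{ds}{s+1}
  = \ln(t+1).
```

Switching the order of integration turns one hard integral into two easy ones.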
Don't forget to check for the necessary measurability & integrability of the sections (f(a, y), f(x, b)) before switching the order: https://en.wikipedia.org/wiki/Fubini%27s_theorem?useskin=vec....
This reminds me of the "snake oil method" for generating functions. It's been many years, but I remember it as adding another sigma and then swapping them.
It starts off with a pretty major error.
I'(t)=\int_0^1 \partial/(\partial t)((x^t - 1)/(ln x))dx = \int_0^1 x^t dx=1/(t+1), when it is actually equal to \int_0^1 x^{t-1}/ln(x)dx.
These two are definitely not always equal to each other.
No, it is correct. The integral is with respect to x, and the ordinary/partial derivatives are with respect to t. Written out fully, the derivative computation is
d/dt (x^t - 1)/ln(x) = d/dt [exp(ln(x)t) - 1]/ln(x) = ln(x)exp(ln(x)t)/ln(x) = exp(ln(x)t) = x^t.
Edit: d/dt exp(ln(x)t) = ln(x)exp(ln(x)t) by the chain rule, while d/dt (1/ln(x)) = 0 since the expression is constant with respect to t.
There are convergence considerations that were not discussed in the blog post, but the computations seem to be correct.
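A quick finite-difference check supports the correction: I'(t) numerically matches ∫₀¹ xᵗ dx = 1/(t+1). A pure-stdlib sketch:

```python
import math

def I(t, n=100000):
    # I(t) = ∫₀¹ (x^t − 1)/ln x dx by the midpoint rule
    # (midpoints sidestep the endpoints x = 0 and x = 1)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += (x**t - 1.0) / math.log(x)
    return total * h

t = 2.0
eps = 1e-4
fd = (I(t + eps) - I(t - eps)) / (2 * eps)  # central difference ≈ I'(t)
print(fd, 1.0 / (t + 1.0))  # both ≈ 0.3333
```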
Ah, yes. I don't understand how I differentiated with respect to x instead of t, but...
Back in college I stopped doing maths as a major in second year because of the way it was taught. I just hated it. Numerical methods in particular broke me. My main problem was that we never really got told how things fit together. Resources like 3blue1brown just didn't exist at that time, sadly. We just had dusty, expensive, and very dry textbooks to rely on. For example, we just got into ODEs and were told "just use e^at". We started doing contour integrals without really being told what was going on. Honestly, things like linearity were never really taught even for basic stuff like derivatives and integrals.
But I had always loved maths and went back to it much later. After having done some computer science, some concepts just made it click more for me. Like sets were a big one. Seeing functions as just a mapping between sets. Seeing functions as set elements. Seeing derivatives and integrals as simply the mapping between sets of functions.
What fascinates me is that differentiation is solved, basically. Don't come at me about known closed form expressions. But integration is not. Now this makes a certain amount of sense. Differentiation is non-injective after all. But what's more fascinating (and possibly really good evidence of my own neurodivergence) is that integration isn't just an algorithm. It requires some techniques to find, of which the Feynman technique is just one. I think I was introduced to it with the Basel problem. I have to confess I end up watching daily Tiktok integration problems. It scratches an itch.
I kinda wish I'd made it to complex analysis at least in college. I mean I kinda did. I do remember doing something with contour integrals. But it just wasn't structured well. By that I mean Laplace transforms, poles of a function in the S-plane and analytic continuations.
I'm not particularly proficient at the Feynman technique. Like I can't generally spot the alpha substitution that should be made. Maybe one day.
This seems like a bizarre comment that has almost nothing to do with the title.