Chop the interval on which a function f(x) is defined into n pieces at points x₀ < x₁ < … < xₙ. This is called a partition.
Call Mᵢ the maximum value (strictly, the supremum) of f(x) for xᵢ₋₁ ≤ x ≤ xᵢ.
Call mᵢ the minimum value (strictly, the infimum) of f(x) for xᵢ₋₁ ≤ x ≤ xᵢ.
Call the upper sum U the sum from 1 to n of (xᵢ−xᵢ₋₁) · Mᵢ. That is, for every piece multiply the x-size of the piece by the maximum value, giving the area of a rectangle of width xᵢ−xᵢ₋₁ and height Mᵢ, and sum them all together.
Call the lower sum L the sum from 1 to n of (xᵢ−xᵢ₋₁) · mᵢ. That is, for every piece multiply the x-size of the piece by the minimum value, giving the area of a rectangle of width xᵢ−xᵢ₋₁ and height mᵢ, and sum them all together.
Then use something similar to the delta-epsilon definition of a limit to show that for every ε > 0 there's a partition such that U−L < ε. When this is true (and you can show it's true for every continuous function), the common value that the upper and lower sums squeeze toward is called the integral, and you denote it with the ∫ symbol.
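To make the squeeze concrete, here's a rough Python sketch (the `upper_lower_sums` helper, the uniform partition, and the choice f(x) = x² on [0, 1] are all just illustrative assumptions; taking the max/min at the endpoints only works because f is monotone on each piece):

```python
# Upper and lower sums for f(x) = x^2 on [0, 1] with a uniform partition.
# f is increasing there, so on each piece the max is at the right endpoint
# and the min at the left endpoint.

def upper_lower_sums(f, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    U = sum((xs[i] - xs[i - 1]) * max(f(xs[i - 1]), f(xs[i])) for i in range(1, n + 1))
    L = sum((xs[i] - xs[i - 1]) * min(f(xs[i - 1]), f(xs[i])) for i in range(1, n + 1))
    return U, L

f = lambda x: x * x
for n in (10, 100, 1000):
    U, L = upper_lower_sums(f, 0.0, 1.0, n)
    print(n, U, L, U - L)   # U - L shrinks like 1/n; both squeeze toward 1/3
```

For any ε you name, a fine enough n makes U−L < ε, which is exactly the condition above.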
Then you can use this to prove that the ∫ from a to b plus the ∫ from b to c equals the ∫ from a to c (additivity over adjacent intervals, sometimes phrased as a kind of transitivity), and a few other things.
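That additivity is easy to check numerically; a small sketch (the `riemann` midpoint-sum helper and the choice f(x) = x² are mine, purely for illustration):

```python
# Interval additivity: for f(x) = x^2, the integral from 0 to 1 plus the
# integral from 1 to 2 should match the integral from 0 to 2.
# `riemann` is a crude midpoint Riemann sum standing in for the integral.

def riemann(f, a, b, n=10_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x * x
left = riemann(f, 0.0, 1.0)
right = riemann(f, 1.0, 2.0)
whole = riemann(f, 0.0, 2.0)
print(left + right, whole)   # both approximately 8/3
```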
Then you define integrals to infinity as the limit of the integral from a to x as x → ∞, and use the additivity rule to go from −∞ to ∞.
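A quick illustration of the improper-integral-as-limit idea, with ∫₀^∞ e^(−x) dx as my example (this just evaluates the known antiderivative at a growing upper bound b):

```python
import math

# An improper integral is defined as a limit: ∫₀^∞ e^(−x) dx is
# lim b→∞ of ∫₀^b e^(−x) dx, and the finite integral has the
# closed form 1 − e^(−b).

def finite_integral(b):
    return 1.0 - math.exp(-b)

for b in (1, 10, 100):
    print(b, finite_integral(b))   # approaches 1 as b grows
```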
Then you do this manual limit once for f(x) = x^a and use Taylor's theorem for all other functions: expand them as power series and integrate term by term.
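A sketch of that idea for f(x) = sin x (my choice of example): integrate its Taylor series term by term, using nothing but the power rule ∫₀¹ xⁿ dx = 1/(n+1):

```python
import math

# ∫₀¹ sin x dx via the Taylor series of sin:
#   sin x = sum over k of (−1)^k x^(2k+1) / (2k+1)!
# Integrating each power term from 0 to 1 with the power rule gives
#   sum over k of (−1)^k / ((2k+2) (2k+1)!)

def integral_sin_0_to_1(terms=10):
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1                       # exponent of the k-th Taylor term
        total += (-1) ** k / ((n + 1) * math.factorial(n))
    return total

print(integral_sin_0_to_1())   # matches the exact value 1 − cos(1) ≈ 0.4597
```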
It's more like
At no point are there infinities or infinitesimals; the key is the observation that you can get the difference between upper and lower sums arbitrarily close to 0.
I first learned it over 10 years ago now, but the idea of an epsilon-delta proof for limits is still so satisfying and clever. Showing equality (or, technically, convergence of a sequence) by proving that at some point the sequence gets closer than any positive number is such a radical shift in thinking about equality that it's kind of wild we were able to come up with it.
this guy maths
Lebesgue integration is so much more intuitive tbh
Really? I never really got Lebesgue, but I didn't put any effort into it.
the issue is it takes a lot more machinery to explain it to someone - so lots of prereqs. but if you have that prerequisite knowledge, it's much simpler, conceptually. Lagrangian mechanics vs Newtonian mechanics is another example of this phenomenon. we teach the objectively more complicated theory because we can teach it sooner, to students with less of a math background.
Half a joke: yes, Newtonian mechanics is indeed much more complicated compared to the beauty of Lagrangian mechanics, where it applies.
this but unironically
It's funny, I was just thinking about that maths paper by Marx that I can only summarize as "dividing by 0 or infinity is ridiculous (immaterial?), and should not be the basis of maths" and how that just didn't really connect to my understanding of calculus. Do you think he just did not see how limits work, or did he understand that and disagree that this is a "real" basis of any math? HOURS LATER EDITED: I fucked up the quote bad, so see my other comment
there's no way he understood this because at no point is anyone dividing by infinity or 0.
I really got Marx's claim wrong; I wrote this too early in the morning. His claim was that "using limits instead of dividing by 0 is ridiculous and immaterial". It was really about how 0/0 could be a better basis than using limits to get to a true basis. His claim was exactly what you say nobody made: that 0/0 is meaningful and should be understood as the material/real basis of change, or something like that.
dividing by zero doesn't make sense arithmetically - in arithmetic, division is about cutting things into parts: you can make no cuts and leave the singular whole, or you can make some number of cuts and divide the whole.

algebraically, division by zero has no sensible result, since depending on how you approach it, it can produce any value from 0 up to infinity (or down to negative infinity if we approach zero from the left). that is, it's not defined - to define division by zero as a function, we'd need to be able to assign it a particular value for any given input. if we abstract from numbers a little and look at the structure that encapsulates basic arithmetic - rings - we find that the additive identity, usually denoted 0, never has a multiplicative inverse.

the ill-definedness of division by zero is so fundamental that limits were introduced specifically so we could divide by zero rigorously - limits let us precisely choose a specific value out of the continuum that we expect a particular instance of zero division to evaluate to.
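One way to see what limits buy you here, as a toy Python sketch (my choice of function and point, purely illustrative): the derivative's difference quotient is literally 0/0 at h = 0, yet has a definite limit.

```python
# The "0/0" in a derivative is resolved by a limit: for f(x) = x^2,
# the difference quotient (f(x+h) - f(x)) / h is 0/0 at h = 0,
# but approaches one specific value as h shrinks toward 0.

def diff_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda t: t * t
for h in (1e-1, 1e-3, 1e-6):
    print(h, diff_quotient(f, 3.0, h))   # tends to 6, the derivative at x = 3
```

The limit picks out 6 from the continuum of values that "0/0" could otherwise stand for.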