Sounds like a bunch of made up Earthclan wolfling superstitions.
It's more like
- Chop the interval you're integrating f(x) over into n pieces at points x₀ < x₁ < … < xₙ, numbered 1, 2, …, n. This is called a partition.
- Call Mᵢ the maximum value of f(x) for xᵢ₋₁ ≤ x ≤ xᵢ.
- Call mᵢ the minimum value of f(x) for xᵢ₋₁ ≤ x ≤ xᵢ.
- Call upper sum U the sum from 1 to n of (xᵢ−xᵢ₋₁) · Mᵢ. That is, for every piece multiply the x-size of the piece by the maximum value, giving the area of a rectangle of width xᵢ−xᵢ₋₁ and height Mᵢ, and sum them all together.
- Call lower sum L the sum from 1 to n of (xᵢ−xᵢ₋₁) · mᵢ. That is, for every piece multiply the x-size of the piece by the minimum value, giving the area of a rectangle of width xᵢ−xᵢ₋₁ and height mᵢ, and sum them all together.
- Then use something similar to the delta-epsilon definition of a limit to show that for every ε > 0 there's a partition such that U−L < ε. When this holds, and you can show it holds for every continuous function, the upper and lower sums squeeze to a common value, which is called the integral and denoted with the ∫ symbol.
- Then you can use this to prove that ∫ from a to b + ∫ from b to c = ∫ from a to c (additivity of the integral over adjacent intervals), and a few other things.
- Then you define integrals to infinity as the limit of the integral from a to x as x → infinity, and use the additivity rule to go from −∞ to ∞.
- Then you do this manual limit once for f(x) = x^a and use Taylor's theorem for other functions.
At no point are there infinities or infinitesimals; the key is the observation that you can make the difference between the upper and lower sums arbitrarily close to 0.
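The upper/lower-sum construction above is easy to play with numerically. A minimal sketch, assuming a monotonically increasing f so the min and max of each piece sit at its left and right endpoints:

```python
def upper_lower_sums(f, a, b, n):
    """Upper and lower Riemann sums on a uniform n-piece partition.
    Assumes f is increasing, so the minimum of each piece is at its
    left endpoint and the maximum at its right endpoint."""
    dx = (b - a) / n
    xs = [a + i * dx for i in range(n + 1)]
    U = sum(dx * f(xs[i]) for i in range(1, n + 1))  # right endpoints: the M_i
    L = sum(dx * f(xs[i]) for i in range(n))         # left endpoints: the m_i
    return U, L

# For f(x) = x^2 on [0, 1], U - L = (f(1) - f(0)) / n, so any eps > 0
# is beaten by taking n > 1/eps; both sums squeeze toward 1/3.
for n in (10, 100, 1000):
    U, L = upper_lower_sums(lambda x: x * x, 0.0, 1.0, n)
    print(n, U, L, U - L)
```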
I first learned it over 10 years ago now, but the idea of an epsilon-delta proof for limits is still so satisfying and clever. Like, showing equality (or convergence of a sequence technically) by proving that at some point the sequence gets closer than any positive number is just such a radical shift in thinking about equality it's kind of wild we were able to come up with it.
Really? I never really got Lebesgue, but I didn't put any effort into it.
the issue is it takes a lot more machinery to explain it to someone - so lots of prereqs. but if you have that prerequisite knowledge, it's much simpler, conceptually. Lagrangian mechanics vs Newtonian mechanics is another example of this phenomenon. we teach the objectively more complicated theory because we can teach it sooner, to students with less of a math background.
Half a Joke: Yes, Newton is indeed much more complicated compared to the beauty of Lagrange where it applies.
It's funny, I was just thinking about that maths paper by Marx that I can only summarize as "dividing by 0 or infinity is ridiculous (immaterial?), and should not be the basis of maths" and how that just didn't really connect to my understanding of calculus. Do you think he just did not see the way that limits work or understood that and was disagreeing that this is a "real" basis of any math? HOURS LATER EDITED: I fucked up the quote bad so see my other comment
there's no way he understood this because at no point is anyone dividing by infinity or 0.
I really wrote wrong what Marx's claim was; wrote this too early in the morning. "using limits instead of dividing by 0 is ridiculous and immaterial" is what his claim was. Really about how 0/0 could be a better basis than utilizing limits to get to a true basis. His claim was exactly what you say that nobody did; that 0/0 is meaningful and should be understood as the material/real basis of change or so
dividing by zero doesn't make sense arithmetically - in arithmetic, division is about cutting things into parts, so you can make no cuts and leave the singular whole, or you can make some number of cuts and divide the whole. algebraically, division by zero has no sensible result: expressions like a/x blow up to +∞ or −∞ depending on which side you approach zero from, and 0/0 forms can come out to any value at all. that is, it's not defined - to define division by zero as a function, we'd need to assign it one particular value for any given input.

if we abstract from numbers a little bit, and look at the structure that encapsulates basic arithmetic - rings - we find that the additive identity, usually denoted 0 for the ring, never has a multiplicative inverse. the ill-definedness of division by zero is so fundamental that limits were introduced specifically so we could "divide by zero" rigorously - a limit lets us precisely choose the specific value out of the continuum that we expect a particular instance of zero division to evaluate to.
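That "limits choose the value" point is visible numerically. sin(x)/x is a 0/0 form at x = 0, but the limit singles out exactly one value, 1:

```python
import math

# sin(0)/0 is the undefined form 0/0, but as x -> 0 the ratio
# settles on a single value; the limit "chooses" 1 for us.
for x in (1.0, 0.1, 0.001, 1e-6):
    print(x, math.sin(x) / x)
```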
You know you can just stop after a couple million rectangles and you still get the right answer, the universe is only on single precision floating points for things like that
The Planck Length is the single greatest piece of evidence that we could be living in a simulation.
If I'm totally honest, I don't think the Planck length will be any sort of respected value in like 1000 years (if we survive with a high-tech/knowledge society that long). It's just derived by unit analysis from the gravitational constant, the speed of light, and Planck's constant. Like, it's literally just multiplying and dividing those values until their units simplify to a distance and naming that distance. It's a super useful analysis assuming these constants have REAL meaning in the most base sense of the word. But trusting that these values are ALL perfectly real and at the base of existence is a big step to then base any worldview on, especially considering gravity and the speed of light are not really reconciled at all outside of this equation.
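The unit analysis being described is a one-liner: √(ħG/c³) is the combination of those three constants whose units simplify to a length. A sketch with rounded constant values (the numbers here are approximate):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s

# hbar * G / c^3 has units of m^2, so its square root is a distance.
planck_length = math.sqrt(hbar * G / c**3)
print(planck_length)  # ~1.6e-35 m
```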
Long rant, but it COULD be evidence of a simulation if it were absolutely true; I just doubt its truth will hold up to future advancements.
I think the argument is based on the quantization of various phenomena? like you derive the value and then show that it pops up in quanta. the assumption is that because it does so, no smaller lengths can be measured, since you can't use anything like light to do it.
I guess I claimed it too strongly, but what I mean is: the view that this length is some meaningful limit will not be respected once we eventually resolve the differences between quantum-scale physics and macro-physics. Crossing the Planck-length barrier will be understood as a qualitative inflection point, just like the distance at which the strong and weak forces become entirely irrelevant is. The quanta will also come to be understood via the same underlying principle as the length, and therefore as interdependent with it, providing the explanation for its existence.
yeah that makes sense. people take physics theory a bit far in proposing various metaphysical theories. pop science leads people to think they understand physics, but the explanations don't always follow from the math. the "shut up and calculate" attitude (usually attributed to Feynman, though the phrase is actually David Mermin's) makes more and more sense to me the older I get.
The Planck length isn't really the grand slam for simulation theory it sounds like, position isn't actually quantized because of a thing called rotational invariance - basically, quantum mechanical systems (and really classical systems too) behave the same way if you rotate them x degrees and try again. If you did some kind of particle experiment, rotated your apparatus 10°, then did it again, you'd expect to see repeatably different results due to grid jank/snapping if everything truly operated on a square grid, but things act the same. Things can move around in distances smaller than the Planck length.
i'm a dumbass and don't know physics but I can use my noodle to reason with ideas. the alternative to the universe being "pixelated" at the planck length, if you will permit a crude analogy, is just things being allowed to be infinitely small or infinitely big, right? I don't see how things couldn't also be a simulation under those circumstances, as well. But if reality itself is a simulation, then the word simulation essentially loses its meaning, since its meaning is derived from its contrast with reality.
Speed of light too, also time dilation is just the server taking time to process the increased load.
Believe you me. When you learn the details, it's actually much more fucked than you probably think it is.
This reminds me of one of my favorite math paradoxes: Gabriel's Horn
The video is a bit long but the headline is: using calculus you can show that the shape has finite volume but infinite surface area. So we know how much paint it would take to fill it, but somehow that much paint couldn't cover the surface...
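You can watch both quantities behave exactly as claimed by integrating the horn (y = 1/x rotated about the x-axis, x ≥ 1) out to ever larger cutoffs. A rough midpoint-rule sketch:

```python
import math

def horn(cutoff, steps=100_000):
    """Approximate volume and surface area of Gabriel's Horn from x=1 to x=cutoff."""
    dx = (cutoff - 1) / steps
    vol = surf = 0.0
    for i in range(steps):
        x = 1 + (i + 0.5) * dx                                  # midpoint of each slice
        vol += math.pi / x**2 * dx                              # disc: pi * r^2
        surf += 2 * math.pi / x * math.sqrt(1 + 1 / x**4) * dx  # band: 2*pi*r*ds
    return vol, surf

for cutoff in (10, 100, 1000):
    v, s = horn(cutoff)
    print(cutoff, v, s)  # volume creeps up to pi; surface grows like 2*pi*ln(cutoff)
```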
My favourite one is the napkin ring problem. For any two spheres of any size, if they were cored like an apple to the same height, the volume of both rings would be exactly the same. A 5cm high napkin ring made out of a billiard cueball has exactly the same volume as a 5cm high napkin ring made out of a neutron star.
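This one is checkable with nothing but classic solid-geometry formulas: subtract the cylindrical core and the two spherical caps from the sphere, and everything involving the sphere's radius cancels, leaving πh³/6. A sketch (the 2.86 cm cue-ball radius is approximate):

```python
import math

def napkin_ring_volume(R, h):
    """Volume left after coring a sphere of radius R down to a ring of height h.
    Requires R >= h/2."""
    r = math.sqrt(R**2 - (h / 2)**2)   # radius of the drilled hole
    cap_h = R - h / 2                  # height of each removed spherical cap
    cap = math.pi * cap_h**2 * (3 * R - cap_h) / 3
    cylinder = math.pi * r**2 * h
    sphere = 4 / 3 * math.pi * R**3
    return sphere - cylinder - 2 * cap  # the algebra collapses to pi * h^3 / 6

# A 5 cm ring from a cue ball vs. from a 10 m sphere:
print(napkin_ring_volume(2.86, 5.0))
print(napkin_ring_volume(1000.0, 5.0))  # both ~65.45 cm^3 = pi * 5^3 / 6
```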
this one makes sense to me: if you core a small sphere the ring has a small radius but a fat cross-section, while a big sphere's ring is razor thin but has a huge circumference, and the two effects cancel exactly
Oh, yeah that's like fractals that have infinite perimeter, but finite area (i.e. Koch snowflake).
You CAN "cover the surface" with paint, if the thickness of the paint is very very small. If you fill the horn with paint, you are effectively painting it, the thickness of the paint being the radius.
It stops seeming so paradoxical when you realize it's not just the surface area of the horn that's infinite, the length of the horn is also infinite.
Mathematicians were really phoning it in when they invented integrals.
Marx please get back to writing Kapital, don't be like Martin with Winds of Winter
You telling me you use an infinite amount of infinitesimally small rectangles to calculate the finite area under a curve?
Weierstrass: no