I don't understand what you think the problem is. What do you mean infinities can't be differentiated from each other? Infinite cardinals are by definition equivalence classes of sets that can be placed in bijection with one another. You can compare one cardinal to another by showing there exists an injection from a representative set of the first one into a representative of the other. You can show equality by showing there is an injection both ways (the Cantor–Schröder–Bernstein theorem) or otherwise producing a bijection explicitly. Infinite ordinals may as well be copies of the natural numbers indexed by infinite cardinals, so of course we can distinguish them too.
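To make "same cardinality" concrete, here's a minimal sketch in Python (purely illustrative, not something from the thread) of an explicit bijection witnessing that the naturals and the integers are the same size:

    # Pair the naturals 0, 1, 2, 3, 4, ... with the integers 0, 1, -1, 2, -2, ...
    def nat_to_int(n: int) -> int:
        return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

    # The inverse map; having a two-sided inverse is exactly what "bijection" means.
    def int_to_nat(z: int) -> int:
        return 2 * z - 1 if z > 0 else -2 * z

    # Spot-check that the two maps really are mutually inverse.
    assert all(int_to_nat(nat_to_int(n)) == n for n in range(1000))
    assert all(nat_to_int(int_to_nat(z)) == z for z in range(-500, 500))

Exhibiting an injection in each direction and invoking Cantor–Schröder–Bernstein accomplishes the same thing when a single explicit two-sided map is awkward to write down.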
So far AFAIK we have two kinds of infinity: those that can be accommodated at the Grand Hilbert Hotel (e.g. integers, fractions, etc.) and those that cannot (the set of irrational numbers, the set of curves, the set of polytopes, etc.). This was why we had to differentiate orders of infinity, e.g. ℵ₀ (the Grand Hilbert Hotel set), ℵ₁ (the irrational set, the real set), ℵ₂ (???), ℵ₃ (?????), ℵₙ (??!??????????)
For infinities of higher order than ℵ₀, we can only tell whether they're equal to ℵ₁ or undetermined, which means their size is ℵ₁ or greater but still unknown.
Unless someone did some Nobel-prize-worthy work in mathematics that I haven't heard about, which is quite possible.
No, that's definitely not true. As I said, infinite cardinals (like the cardinality of the naturals, ℵ₀) are defined to be equivalence classes of sets that can be placed in bijection with one another. Whenever you have infinite sets that can't be placed in bijection, they represent different cardinals. For an infinite set X, the set of functions f : X --> X has the same cardinality as the power set 2^X, so e.g. there are more real-valued functions of real numbers than there are real numbers. You can use this technique to get an infinite sequence of distinct cardinals (via Cantor's theorem, which has a simple constructive proof). And once you have all of those, you can take their (infinite) union to get yet another greater cardinal, and continue that way. There are in fact more cardinalities obtainable this way than we could fit into a set: the (infinite) number of infinite cardinals is too big to be an infinite cardinal.
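Written out, the tower this produces is the one usually labelled with beth numbers (standard notation, not something from this thread): ℶ₀ = ℵ₀, ℶ₁ = 2^(ℶ₀) = |ℝ|, ℶ₂ = 2^(ℶ₁), and so on, then ℶ_ω = the union (supremum) of all the ℶₙ, then ℶ_(ω+1) = 2^(ℶ_ω), and onward through the ordinals.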
You might be thinking of the generalized continuum hypothesis, which says there are no cardinal numbers strictly in between the cardinality of an infinite set and the cardinality of its power set, i.e. that ℵ₁ = 2^(ℵ₀), ℵ₂ = 2^(ℵ₁), and so on.
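(For precision: the general form is 2^(ℵ_α) = ℵ_(α+1) for every ordinal α, with the α = 0 case being the ordinary continuum hypothesis 2^(ℵ₀) = ℵ₁; ZFC neither proves nor refutes either version.)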
oh this is a neat argument I'd never encountered before. I was also under the impression that we hadn't proved there were infinities with cardinality greater than ℵ₁.
How/why would you simultaneously hold this belief and reference the HoTT book?
because I was misinformed?
What are you doing with the HoTT book if you have never heard of Cantor's theorem??? You must understand there's a minimum of several years of intensive study in between these two things.
Self-study. It's been a decade since I was in school, and I kept encountering references to it, so I've been working through a lecture series and the book.
It's quite possible that what I'm encountering is a momentary failure to understand Cantor's theorem, or rather the mechanism it uses to enumerate the innumerable. So my math may just be old.
Cantor's theorem says the power set of X has a strictly larger cardinality than X.
When |X| is a natural number, the power set of X has cardinality 2^(|X|), since you can think of an element of the power set as a choice, for each element of X, of "in the subset" vs "not in the subset." Hence the notation 2^X for the power set of X.
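As a quick finite sanity check (Python, just illustrative), the power set of a 3-element set really does have 2^3 = 8 elements:

    from itertools import chain, combinations

    # Enumerate every subset of xs: for each size r, take all combinations of size r.
    def power_set(xs):
        xs = list(xs)
        return [set(c) for c in chain.from_iterable(
            combinations(xs, r) for r in range(len(xs) + 1))]

    subsets = power_set({'a', 'b', 'c'})
    print(len(subsets))   # 8, i.e. 2 ** 3
    print(subsets)        # from set() all the way up to {'a', 'b', 'c'}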
Cantor's theorem applies to all sets, not just finite ones. You can show this with a simple argument. Let X be a set and suppose there is a bijection f : X -> 2^(X). Let D be the set { x in X : x is not in f(x) }. (The fact that this is well defined is given by the axiom schema of separation in ZFC, i.e. restricted comprehension, so we aren't running into a Russell's-paradox issue.) Since f is a bijection, there is an element y of X so that f(y) = D. Now either:
y is in D. But then by definition y is not in f(y) = D, a contradiction.
y is not in D. But then by definition, since y is not in f(y), y is in D, again a contradiction.
Thus, there cannot exist such a bijection f, and |2^(X)| != |X|. And since x -> {x} is an injection of X into 2^(X), we have |X| <= |2^(X)|, so the inequality only goes one way: |2^(X)| > |X|.
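Here's the diagonal set in code on a finite stand-in (my own sketch, not part of the original argument): for any map f from X into its power set, D = { x in X : x not in f(x) } always falls outside the image of f, which is exactly what rules out a bijection in the infinite case as well.

    # Cantor's diagonal set: D differs from every f(x) at the element x itself.
    def diagonal_set(X, f):
        return {x for x in X if x not in f(x)}

    X = {0, 1, 2}
    f = {0: {1, 2}, 1: {1}, 2: set()}   # an arbitrary map from X into its power set
    D = diagonal_set(X, f.get)

    print(D)                            # {0, 2}
    print(any(f[x] == D for x in X))    # False: no x has f(x) == D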