Considering that mathematics is, at its core, a language for defining relationships between quantities, and then relationships between those relationships, and so on, I think it's fair to assume that the number of such possible relationships is infinite. Some of these relationships will obviously be useful in the real world, but they don't have to be. I also suspect that we can keep on building theorems on top of theorems with increasing complexity, until we reach a point where it becomes just too tedious (though not impossible) for a human being to work through the proof.
> we can keep on building theorems on top of theorems with increasing complexity
This is a somewhat bleak picture of math. There is also the opposite phenomenon of increasing simplicity: both statements and proofs become more straightforward and simple once one has access to deeper mathematical constructions.
For example: Bezout's theorem would like to state that two curves of degree m and degree n intersect in mn points. Except that two parallel lines intersect in 0 instead of 1·1 = 1 point, two disjoint circles intersect in 0 instead of 2·2 = 4 points, and a line tangent to a circle intersects it in 1 point instead of 1·2 = 2 points. These exceptions merge into a simple picture once one passes to projective space, complex numbers, and schemes. Complex numbers lead to lots of other instances of simplicity.
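The tangent and disjoint cases can be seen concretely in a small sketch (function names are illustrative): intersecting the unit circle x² + y² = 1 with the line y = c reduces to the quadratic x² = 1 − c², which always has deg(line) · deg(circle) = 1 · 2 = 2 roots once we count over the complex numbers and with multiplicity.

```python
import cmath

def intersections(c):
    """x-coordinates where the line y = c meets x^2 + y^2 = 1,
    i.e. the roots of x^2 - (1 - c^2) = 0, counted with multiplicity."""
    r = cmath.sqrt(1 - c * c)
    return [r, -r]  # equal roots (a tangency) when c = +1 or -1

print([len(intersections(c)) for c in (0.5, 1.0, 2.0)])  # [2, 2, 2]
# c = 0.5: two real points; c = 1.0: one point of multiplicity 2;
# c = 2.0: no real points, but two complex ones.
```

The parallel-lines exception needs the other ingredient, projective space, where the missing intersection point sits at infinity.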
Similarly, proofs can become simple where before one had complicated ad-hoc reasoning.
Feynman once made the same point about the laws of physics. Someone figuring out the rules of chess by watching games first learns the basic rules (how the pieces move) and then moves on to complex exceptions (en passant, pawn promotion). In physics, by contrast, what often happens is that different sets of rules for apparently distinct phenomena turn out to be aspects of a unity (e.g. heat, light, and sound were once seen as distinct things but are now all seen as movements of particles; the unification of electricity and magnetism).
Of course, this unification pursuit is never complete. Mathematics books/papers constantly seem to pull a rabbit out of a hat. This leads to 'motivation' questions for why such a construction/expression/definition was made. For a few of those questions, the answer only becomes clear after more research.
> the number of such possible relationships is infinite
I think you need to be careful talking about "infinite" in the context of math. If the number of quantities, relationships, etc. is finite, so are all their combinations. Even things like the infinitude of available numbers might have fixed patterns that render their relevant properties effectively finite, and lead to further distinctions, e.g. finite vs. countable, etc.
Personally, I feel like math has a bit of a legacy problem. It holds on to the conventions of an art that is very old, with very different initial assumptions at its conception, and this is now holding it back somehow.
I lack the background to effectively demonstrate this beyond "things I know/understand seem less intuitive in standard mathematical terms", e.g. generating functions and/or integrals feel easier to understand (to me) when you understand them to be software-like 'loops'.
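To give one concrete version of the 'loops' intuition (a hedged sketch; the function name is mine): the coin-change generating function ∏_c 1/(1 − x^c) becomes a nested loop that updates an array of coefficients in place.

```python
def coin_change_counts(coins, n):
    """coeffs[k] = number of ways to make total k from the given coin values."""
    coeffs = [1] + [0] * n           # the series 1 (empty product)
    for c in coins:                  # multiply by 1/(1 - x^c), i.e. 1 + x^c + x^2c + ...
        for k in range(c, n + 1):    # cumulative update = the series multiplication
            coeffs[k] += coeffs[k - c]
    return coeffs

print(coin_change_counts([1, 2, 5], 10)[10])  # 10 ways to make 10
```

Whether this is clearer than the algebraic definition is exactly the point being debated below.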
In fact, the idea of "constructivist math" seems (again, to me) to beg for a more algorithmic/computational approach.
The standard explanation of integrals as summing the areas of rectangles of decreasing width seems extremely intuitive to me without requiring the baggage of having to know some computer language. Generating functions in code are basically a rote repetition of the mathematical definitions, requiring that you also understand variables and functions and other things unrelated to the core idea.
But that "standard explanation" is a process, not a definition. Riemann sums can't be used with all integrals.
In any case, if we stick with Riemann sums, there should be a strong relationship to Generating Functions (which there is).
> Generating functions in code are basically a rote repetition of the mathematical definitions
Generating functions with a mathematical basis may have, for example, set-theoretic definitions that are not similar to, say, Turing machines. Any non-constructivist math is automatically not like code.
Just the ZFC axioms alone are already infinite: replacement is an axiom schema ranging over an infinite number of actual statements. That's just statements, without even considering symbols as you're saying.
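To spell out why a schema is infinitely many axioms, the separation schema (a sibling of replacement) contributes one axiom for every first-order formula φ:

```latex
% One separation axiom per formula \varphi(x, p):
\forall p\, \forall a\, \exists b\, \forall x\,
  \bigl( x \in b \leftrightarrow ( x \in a \wedge \varphi(x, p) ) \bigr)
```

Since there are infinitely many formulas φ, this single schema already yields infinitely many axioms, and ZFC is known not to be finitely axiomatizable.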