While it's true that mathematics cannot ever be completely formalized, per Gödel's incompleteness theorems, a huge amount of it obviously can, as is demonstrated by the fact that we can write it down in a reasonably rigorous way in a combination of conventional mathematical notation and plain English.
Nor is this in any way "a fundamental problem with using AI to do math".
First, writing something down in English is very different from formalizing it. Natural language interacts with human brains in all kinds of complicated ways that we do not fully understand, so just because we can make a compelling-sounding argument in English doesn't mean that argument doesn't have holes somewhere.
Second, the incompleteness theorems apply only to given formal systems, not to formality in general. Given a system, you can produce a Gödel sentence for that system. But any Gödel sentence for a given system has a proof in some other, more powerful system. This is trivially true because you can always add the Gödel sentence for a system as an axiom.
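To make the "add it as an axiom" move concrete, here is an informal LaTeX sketch (the notation is illustrative, not a rigorous statement of the theorem):

```latex
% For a consistent, effectively axiomatized system S that interprets
% enough arithmetic, the first incompleteness theorem gives a sentence
% G_S that S can neither prove nor refute:
S \nvdash G_S \quad\text{and}\quad S \nvdash \neg G_S
% But the extended system S' trivially proves it, since G_S is now an axiom:
S' := S \cup \{G_S\} \qquad S' \vdash G_S
% Of course S' has its own Gödel sentence G_{S'}, and so on -- the
% theorem limits each fixed system, not formalization as such.
```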
This is a very common misconception about math even among mathematicians. Math produces only conditional truths, not absolute ones. All formal reasoning has to start with a set of axioms and deduction rules. Some sets of axioms and rules turn out to be more useful and interesting than others, but none of them are True in a Platonic sense, not even the Peano axioms. Natural numbers just happen to be a good model of certain physical phenomena (and less-good models of other physical phenomena). Irrational numbers and complex numbers and quaternions etc. etc. turn out to be good models of other physical phenomena, and other mathematical constructs turn out not to be good models of anything physical but rather just exhibit interesting and useful behavior in their own right (elliptic curves come to mind). None of these things are True. At best they are Useful or Interesting. But all of it is formalizable.
The other end of the Church-Turing intuition is that if we couldn't possibly do it, then the machines can't do it either. What they're doing is the same thing we do: it's not magically less powerful, and it's not magically more powerful either.
You seem to be imagining that Gödel is just a small problem you can carve off and ignore, but it applies to any system powerful enough to do arithmetic.
Of course we can have the AI muddle along with intuition, but that's not actually an improvement: our mathematicians can already use intuition, and they have a lot more experience with it.
It's a fundamental problem with using AI to do "intuitive" math, but not a fundamental problem with AI to do formal math.
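"Formal math" here means the kind of thing a proof assistant can check mechanically. As a minimal sketch, a statement like the commutativity of addition on the naturals can be written and verified in Lean 4 (this particular proof is one illustrative way to do it; Lean's library also ships it as `Nat.add_comm`):

```lean
-- A machine-checkable proof that addition of natural numbers commutes.
-- The kernel verifies every step; no intuition is trusted anywhere.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp          -- base case: m + 0 = 0 + m
  | succ k ih =>          -- inductive step, using the hypothesis ih
    rw [Nat.add_succ, Nat.succ_add, ih]
```

An AI that emits proofs in this form can be wrong all day long without ever fooling anyone: the checker either accepts the proof or it doesn't.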
As you have stated, a lot of mathematics can be formalized. For me, the problem for AI with math is going to be the generative part. Validating a proof, or coming up with a proof for a given statement, may be within reach. Coming up with meaningful/interesting statements to prove is a completely different story.