I'm not sure the static type system benefit is real. Ultimately, for 90% of workflows you wind up reserializing everything into a very much non-statically-typed format anyway (JSON, for example).
I don't see how that matters -- it seems to indicate a misunderstanding of what type systems are. Type systems obviously have boundaries; whether you're interchanging with another type-safe system or not is irrelevant.
Every program has to deserialize data like JSON into types and enforce its assumptions about that data, whether you have static analysis tooling at compile time or not.
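Even in TypeScript the runtime check is on you before the static type means anything; here's a rough sketch of what that boundary code tends to look like (the Order shape and parseOrder are made up for illustration):

    // Hypothetical payload shape: the static type is only trustworthy because
    // we enforce it at runtime, at the point where the JSON enters the program.
    interface Order {
      id: number;
      items: string[];
    }

    function parseOrder(raw: string): Order {
      const data: unknown = JSON.parse(raw);
      if (
        typeof data === "object" &&
        data !== null &&
        typeof (data as { id?: unknown }).id === "number" &&
        Array.isArray((data as { items?: unknown }).items) &&
        (data as { items: unknown[] }).items.every((i) => typeof i === "string")
      ) {
        return data as Order;
      }
      throw new Error("malformed Order payload");
    }

    const order = parseOrder('{"id": 42, "items": ["book"]}');
    console.log(order.items.length); // from here on the compiler can hold you to the shape

The point is that the check exists either way; static typing just lets the rest of the program lean on it once it has been done.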
It absolutely matters. Type systems don't exist in a pure universe; they matter because we use them to serve human needs. A good type system pushes typing concerns out to the outer boundary of the system, and for that you don't need static typing, you need strong typing. Refactoring in a statically typed system can be an incredibly tedious process with tons of boilerplate. At the other extreme, if you allow duck typing and monkeypatching, you wind up with a codebase where debugging becomes a nightmare. There's a middle ground that a lot of very good and productive programming languages occupy.
You're not going to find many people who agree that static typing makes refactoring harder. The classic problem with large, coupled, dynamically typed systems is how difficult it is to refactor them without introducing runtime errors.
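The usual illustration is a field rename; a toy sketch (the User type and greet function are hypothetical) of what the checker does for you:

    interface User {
      fullName: string; // renamed from `name` during a refactor
    }

    function greet(u: User): string {
      // If any code still said `u.name`, tsc would reject it with
      // "Property 'name' does not exist on type 'User'" before anything ran.
      return `Hello, ${u.fullName}`;
    }

    console.log(greet({ fullName: "Ada Lovelace" }));

In a large dynamically typed codebase the stragglers surface at runtime, if a test or a user happens to hit them.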
You also have to be more precise when you hand-wave about the benefits of "strong typing"; you can read this very comment section to see that nobody agrees on what that means.
By 'typing concerns' I mean coercing the real world into the type system. You want that to happen at the edge.
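Roughly this kind of thing (all the names here are invented for the sake of the sketch): parse untrusted input once at the boundary, and let the core only ever see the already-coerced type.

    // A branded type so a plain string can't be passed where a parsed value is expected.
    type Email = string & { readonly __brand: "Email" };

    // Edge of the system: the only place that touches raw input.
    function parseEmail(raw: string): Email {
      if (!raw.includes("@")) throw new Error(`not an email: ${raw}`);
      return raw as Email;
    }

    // Core of the system: can only be handed an already-parsed Email.
    function sendWelcome(to: Email): void {
      console.log(`sending welcome mail to ${to}`);
    }

    sendWelcome(parseEmail("ada@example.com"));
    // sendWelcome("not-an-email") wouldn't compile: a plain string isn't an Email.

Whether the thing enforcing this runs before the program or while it runs is a separate question from keeping the coercion at the edge.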
IMO, treating a program as a proof is somewhat impractical, as you can see from the way very pure functional programming paradigms struggle with universal state monads.