If you believe in the epistemic power of "studies" to convey useful information in these domains, I have bad news for you - you are the one in a religious fervor.
The only semi-objective metric you've proposed is compiled code performance, which is a fairly small component of most people's utility function, especially when the difference disappears in all but numerical-computation-centric applications.
Seems easy to me.
If it's obviously beneficial, show me:
- Performance benchmarks showing how much faster the compiled code runs
- Case studies demonstrating faster development and/or less time spent debugging
- Studies showing consistently better maintainability
Absent this data, the claim comes across as religious fervor.