
The trouble is most projects are NOT employing the other reviews properly. The study made no effort to determine the quality of the test and review process, and instead assumed that they had been done by virtue of the bugs being merged to the master branch.

The study then finds that about 80% of the bugs they found were not simply ts-undetectable, but not type errors at all. Instead, they're things like the wrong URLs being used (string errors), wrong branching logic, wrong predicate logic, etc.

That means the maximum possible effect must be less than 20%, and the authors couldn't even detect that full 20%, despite knowing exactly what each bug and fix was, down to the line numbers and the exact code used to fix it.

Even being extremely generous and assuming they could fix 20% of remaining bugs, it's too far down the ladder of exponentially diminishing returns to make much difference at that stage.



> The study made no effort to determine the quality of the test and review process

Assuming you wrote the article, why did you cite that study then?

> That means the maximum effect must be less than 20%

But what effect are you trying to argue? Maximum of what? I feel this is so vague it's not useful.

In my opinion (I'm not going to try to back this up with a study that doesn't measure exactly this), types make an enormous difference while you're in the middle of development before you commit anything: they're capturing cases where you forget a variable could be undefined, when you mix up items and lists, when you want to aggressively refactor, when you forget to pass required parameters etc. Type annotations are rarely required and for the few minutes I take to write them I easily make that time back even in the short term. Only looking at the bugs that get committed (which is super hard to measure) is missing out on a big aspect of the benefits.
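To make that concrete, here's a contrived sketch (my own example, not from the article or the study) of the kinds of mistakes tsc flags in the editor before anything is committed, tested, or reviewed:

```typescript
// Hypothetical example of compile-time checks with strictNullChecks on.

interface User {
  id: number;
  email?: string; // optional: may be undefined
}

const sent: string[] = [];
function sendEmail(to: string, subject: string): void {
  sent.push(`${subject} -> ${to}`);
}

function notify(user: User): void {
  // Calling sendEmail(user.email, "welcome") directly is a compile error:
  // 'user.email' is possibly 'undefined'.
  if (user.email !== undefined) {
    sendEmail(user.email, "welcome"); // OK: narrowed to string
  }
}

function notifyAll(users: User[]): void {
  // notify(users)        <- compile error: 'User[]' is not assignable to 'User'
  // sendEmail("a@b.com") <- compile error: missing required argument 'subject'
  users.forEach(notify);
}

notifyAll([{ id: 1, email: "a@example.com" }, { id: 2 }]);
```

None of these mistakes would survive to a commit, so a study that only looks at merged bugs never sees them.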

You also need to consider that code reviews and writing TDD tests are time consuming to do as well.


I get the same real-time error detection and refactoring help from type inference, lint, and TDD. Lint and inference give me real-time editor feedback, and my TDD tests run automatically on file save.

I do agree TDD and code reviews are costly, but you can't skip them with TypeScript, because at least 80% of bugs are not detectable with TypeScript.


> because at least 80% of bugs are not detectable with TypeScript.

I give up. You're repeatedly using this figure in this thread in a misleading way to make it sound like what you're saying is more legitimate than just an opinion.

The abstract of the study you cite explicitly mentions this is a conservative estimate of effectiveness and ignores bugs detected in private:

> Evaluating static type systems against public bugs, which have survived testing and review, is conservative: it understates their effectiveness at detecting bugs during private development, not to mention their other benefits such as facilitating code search/completion and serving as documentation. Despite this uneven playing field, our central finding is that both static type systems find an important percentage of public bugs: both Flow 0.30 and TypeScript 2.0 successfully detect 15%!


The number they consider conservative is that TypeScript can address up to 15% of public bugs. That's a different number than the proportion of public bugs found to be ts-undetectable because they're not type errors at all.


There are so many important details about what this figure means and how accurate it is.

Would it make a difference if TypeScript was used before the code was committed? How does the subject domain of the project impact this figure? How do other testing and review approaches impact this number? Are public bugs of certain kinds more likely to be reported for certain projects?

It's highly misleading to quote this figure in such a simplistic manner without caveats.





