> I already listed a few and very specific classes of tests
We have a list of ostensible, undefined classes of tests that supposedly cannot be written before the implementation. But clearly all of those listed can be written before implementation, at least where we can find a common definition to apply to them. If there is an alternative definition in force, we're going to have to hear it.
> It's absurd to even imply that regressions aren't tracked.
Still no definition, but I imagine if one were to define "regression test" it would be a test you write when a bug is discovered. But, of course, you would write that test before you implement the fix, to make sure that it actually exploits the buggy behaviour. It is not clear why we are left to guess at the definitional intent but, using that definition, it is the shining example of why you need to write a test before turning to its implementation. As before, you would have no feedback to ensure that you actually tested what led to the original bug if you waited until after it was fixed.
Of course, if that's what you mean, that's just a test, same as all the others. It is not somehow a completely different kind of test because the author of the test is responding to a bug report instead of a feature request. If your teammate hadn't jumped to implementation before writing a test, the same test would have been written before the code ever shipped. The key point here is that "regression" adds nothing to the term. Another one to file under "they end up being all the same".
> Still no definition, but I imagine if one were to define "regression test"
Why do you need to "imagine" anything? Just google it. "Regression test" is a very standard thing.
Also, the first commenter was correct. Many, many, many kinds of tests are only useful after the code is written.
TDD works for some people doing some kinds of code, but I've never found that much value in it. With what I do, functional testing is highest impact, followed by targeted unit tests for any critical/complex library code, followed by integration or end to end or perf testing, depending on the project.
> Why do you need to "imagine" anything? Just google it.
Why not read the thread?
Perhaps the results are regional (in fact, we know they can be), but "regression test" literally returns results for "regression testing" instead, as said before. There is nothing out there to suggest anyone actually uses the term. Even the popular LLMs say the same thing Google does — that "regression test" is merely the act of running your tests after making changes — which is what we simply call "testing". So where do we go from here?
> Many, many, many kinds of tests are only useful after the code is written.
Are you referring to the entire codebase? Clearly once you've implemented the first test then all other tests are going to be dependent on code existing. However, that's not what we're talking about. "Implement" is in reference to the test, not the entire program.
> TDD works for some people doing some kinds of code
"Test first" isn't really TDD, although TDD suggests it too. The idea is way older than TDD. TDD is actually about testing behavioural stories instead of testing implementation details. "Test first" does help ensure that you don't accidentally test implementation details (can't when implementation doesn't yet exist), but it isn't some kind of strict requirement. Technically you can practice the spirit of TDD even if you write tests after.
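The behaviour-vs-implementation-detail distinction can be sketched like this (a contrived example, not from anyone's actual codebase): the first test pins an internal detail and breaks under harmless refactoring, while the second tests the behavioural story.

```python
class Cart:
    def __init__(self):
        self._items = []          # internal detail; free to change

    def add(self, price):
        self._items.append(price)

    def total(self):
        return sum(self._items)

def test_implementation_detail():
    # Brittle: breaks if _items is renamed or replaced with a running total,
    # even though the cart still behaves correctly.
    cart = Cart()
    cart.add(5)
    assert cart._items == [5]

def test_behaviour():
    # Robust: only cares about what the cart promises to do.
    cart = Cart()
    cart.add(5)
    cart.add(7)
    assert cart.total() == 12
```

Writing the second kind of test first is easy; writing the first kind first is impossible, since `_items` doesn't exist yet. That's the "can't accidentally test implementation details" point.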
But out of curiosity, if you ever use a language with static types, do you also defer defining the types until after the implementation is finished? I've never seen that before. In my experience, developers find it easier to specify a part of the program before proceeding with implementing what is specced.
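The analogy can be made concrete even in Python's optional typing (illustrative names only): the signature and types are the spec, written first; the body is the implementation, filled in against that spec.

```python
from typing import Optional

# Step 1: the specification -- signature and types, no body yet.
def parse_port(raw: str) -> Optional[int]:
    ...

# Step 2: the implementation, written to satisfy the spec above.
def parse_port(raw: str) -> Optional[int]:
    try:
        port = int(raw)
    except ValueError:
        return None
    return port if 0 < port < 65536 else None
```

Nobody writes the body first and then reverse-engineers the types from it; the spec-first order is just how people naturally work.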
> I've never found that much value in it.
I mean, to be fair, I don't either, because why would I ever make mistakes? I most definitely do find the value when others do it, though. But I get what you are saying. I too was once a junior developer with insular thinking. Now that I'm old and experienced, I have to worry about how groups of people interact. That changes your perspective.
> functional testing is highest impact, followed by targeted unit tests for any critical/complex library code, followed by integration or end to end or perf testing
What's the difference? Kent Beck, who is usually credited with coining "unit test", has said on numerous occasions that a unit test is a test that can run without affecting other tests. Which, in reality, is just a test. You would never purposefully write a test that can break another, surely? If only some (or none) of your tests are unit tests, I'd say you are doing something horribly wrong. Lump them in the "useless" category.
> I too was once a junior developer with insular thinking
My dude, I've been a professional SWE for more than ten years lol. I don't know where you've been working, but I've been in Silicon Valley companies and startups.
I have honestly never met an engineer -- other than interns or new grads -- who didn't know the difference between a unit test and a functional test lol. Or a regression test, either, for that matter.
I'm kind of impressed that someone could read so many sources and yet not take anything away from them.
Unit tests are not "tests that can be run without affecting other tests". Maybe that was true in the 90s, I don't know how code was written and tested back then. That is not how the term is used in modern parlance.
Beck still uses it that way, but I can appreciate that he is only the credited originator, not some kind of official authority. Just because he uses it one way does not mean you use it the same way. I only reach for his definition as it is the only one I am familiar with.
Language is certainly fluid. You are still fairly new to the industry by your own admission, so I can understand that the kids' lingo may have changed by the time you started learning about things. However, for better or worse, I cannot relive your life experience. Google, which models the user when picking results, doesn't help as it returns results that match my past experience. I fully expect your Google searches offer different results, but unless you're offering up your account for me to use... (don't do that)
> That is not how the term is used in modern parlance.
Right, as indicated in the original comment, along with those that followed, I don't know how you use it in modern parlance. What does it mean to you?
> Google "unit test definition". What do you get?
It says that it is a test that runs independently. Which is just another way to say the same as what Beck says.
That's a funny way to say "Actually, you're right. No matter what definition I try to come up with, they end up being either all the same thing or useless", but I'll accept it.