> I am not a developer, but I understand how generative AI leverages your work to make it easier for someone else.
To me this sounds like it's antithetical to open source software because the point of making software open source is so that other people can leverage your work. It shouldn't matter if it's done through generative AI or through a human's brain.
> the point of making software open source is so that other people can leverage your work
The point is that other people can leverage your work under the terms you distribute it under. For the vast majority of open source licenses, that means giving attribution and including the copyright notice and license when distributing the source code or its derivatives. For others, it means all of that plus releasing derivatives under the same license.
If developers wanted to distribute their code under licenses with different terms, they would have, but they didn't.
But generative models don't spit out existing code; they generate new code that (sometimes) happens to be the same as existing code. That's the same thing a human does, except that an AI is much better at drawing on a larger amount of existing work. There's no part of the model that stores a specific piece of code; it just happens to reproduce the same thing.
People often write code that resembles existing code they've seen, even when they're not aware of it, so the line is blurry. To me this amounts to banning AI from doing the same thing humans do, simply because it's better at it.
An argument could also be made that it's fair for an AI not to attribute the code it outputs. The human-to-human reason for attribution is "I wrote this code by doing X amount of work; since you're using it and it saves you time, I should fairly be given credit." But the AI is likewise writing out the code it's prompted for; it's just faster at doing it.
Why not create a tool that runs the AI-generated output through a check that provides proper attribution? That would also catch human-written code that fails to attribute the original author.
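A minimal sketch of what such a check might look like, assuming you have a local corpus of licensed snippets to compare against. All function names, the shingle size, and the similarity threshold are illustrative assumptions, not an existing tool:

```python
# Hypothetical attribution checker: flag generated code that closely
# matches snippets in a corpus of known licensed code.
# Shingle size and threshold are arbitrary illustrative choices.

def shingles(code, k=5):
    # Overlapping k-token windows of the code, as a set.
    tokens = code.split()
    return {tuple(tokens[i:i + k]) for i in range(max(1, len(tokens) - k + 1))}

def similarity(a, b):
    # Jaccard similarity between the two codes' shingle sets.
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def attribution_hits(generated, corpus, threshold=0.6):
    # corpus: list of (author, license, snippet) tuples.
    # Returns the (author, license) pairs the output likely derives from.
    return [(author, lic) for author, lic, snippet in corpus
            if similarity(generated, snippet) >= threshold]

corpus = [
    ("alice", "MIT", "def add(a, b):\n    return a + b"),
]
print(attribution_hits("def add(a, b):\n    return a + b", corpus))
```

A real tool would need tokenization that ignores renamed identifiers and a corpus on the scale of the model's training data, which is exactly the hard part; this only shows the shape of the check.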
Why shouldn't it matter? Many open source licenses require attribution, so it is reasonable to think one point of making software open source is to get attribution. Generative AI prevents attribution, so it does matter whether it is done through generative AI or a human brain.