Ok, I'm in an argumentative mood, and I think this is more true than not.
The first theoretical foundation of OOP is structural induction. If you design a class such that (1) the constructor enforces an invariant and (2) every public method maintains that invariant, then by induction it holds all the time. The access modifiers on methods help formalise and enforce that. You can do something similar in a functional language, or even in C if you're disciplined (especially with pointers), but it was an explicit design goal of the C++/Java/C# strand of OOP to anchor that in the language.
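To make the induction concrete, here's a minimal sketch; Fraction is a made-up example, not something from the original discussion. The constructor is the base case (it establishes the invariant: positive denominator, lowest terms), and every public method is an inductive step (it only produces new instances via the constructor, so the invariant is re-established).

```java
// Invariant: den > 0 and gcd(|num|, den) == 1, established by the
// constructor and preserved by every public method - so by structural
// induction it holds for every reachable Fraction instance.
public final class Fraction {
    private final int num;
    private final int den;

    public Fraction(int num, int den) {
        if (den == 0) throw new IllegalArgumentException("zero denominator");
        if (den < 0) { num = -num; den = -den; }   // normalise sign
        int g = gcd(Math.abs(num), den);            // reduce to lowest terms
        this.num = num / g;
        this.den = den / g;
    }

    public Fraction plus(Fraction other) {
        // Going back through the constructor re-establishes the invariant.
        return new Fraction(num * other.den + other.num * den, den * other.den);
    }

    public int numerator()   { return num; }
    public int denominator() { return den; }

    private static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }
}
```

The private fields and final class are doing the work here: because no outside code can bypass the constructor or mutate the fields, the induction has no holes.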
The second theoretical foundation is subtyping, i.e. Liskov substitution - a bit of simple category theory - which gets you things like covariance on return types (and contravariance on parameters) and various calculi depending on how your generics work. Unfortunately the C++ people decided to implement the idea with subclassing, which turned out to be a mess, whereas interface subtyping gets you what you probably wanted in the first place, and still gives you formalisms like Array[T] <= Iterable[S] for any S >= T (or even X[T] <= Y[S] for S >= T and X[_] <= Y[_] if you define subtyping on functors). In Java nowadays you have a Consumer<T> that acts as a (side-effectful) function (T => void) but composes with a Consumer<? super T> to get the type system right [1].
Whether most Java/OOP programmers realise the second point is another question.
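The Consumer composition mentioned above can be seen directly in java.util.function: andThen is declared as andThen(Consumer&lt;? super T&gt;), which is exactly the contravariance doing its job. A small self-contained illustration (the class and log format are mine, for demonstration):

```java
import java.util.function.Consumer;

public class ConsumerVariance {
    // Returns a log of what each consumer saw, to make the composition visible.
    static String demo() {
        StringBuilder log = new StringBuilder();
        // Consumer is contravariant in T: a Consumer<Object> accepts anything,
        // so andThen(Consumer<? super T>) lets it compose with the more
        // specific Consumer<Integer> without any casts.
        Consumer<Object>  logAny = o -> log.append("any:").append(o).append(" ");
        Consumer<Integer> logInt = i -> log.append("int:").append(i).append(" ");
        Consumer<Integer> both = logInt.andThen(logAny);
        both.accept(42);
        return log.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints: int:42 any:42
    }
}
```

Had andThen taken a plain Consumer&lt;T&gt;, logInt.andThen(logAny) would not compile even though it is obviously safe - which is the point about the type system getting it right.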
The article seems to suggest the openclaw on compromised developer machines had something like root rights - "full system access", "install itself as a persistent system daemon surviving reboots".
What am I missing here? I thought npm didn't run as root (unlike, say, apt-get)?
Full system access = it's not sandboxed, it has access to anything that the user can access, and it seems to use systemd user units which don't require root access.
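For anyone unfamiliar with user units: a sketch of what that persistence could look like, with a hypothetical unit name and payload path (not taken from the article). Nothing here needs root - the file lives in the user's own config directory.

```ini
# ~/.config/systemd/user/updater.service  (hypothetical example)
[Unit]
Description=Innocuous-looking background updater

[Service]
ExecStart=%h/.local/bin/payload

[Install]
WantedBy=default.target
```

Enabled with `systemctl --user enable --now updater.service`; combined with `loginctl enable-linger $USER`, the unit starts at boot rather than at login, so it survives reboots - all without ever touching sudo.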
The law is what a court says it is; there is precedent for decisions on human rewrites but LLM (assisted) code might still be fairly uncharted territory.
That is still true, but it was more relevant back when "user" meant "programmer at another university". The "end-user" for most software is not a programmer these days.
If I release blub 1.0.0 under GPL, you cannot fork it and add features and release that closed-source, but I can certainly do that as I have ownership. I can't stop others continuing to use 1.0.0 and develop it further under the GPL, but what happens to my own 1.1.0 onwards is up to me. I can even sell the rights to use it closed-source.
You can do whatever you want. And someone else can - and will, if it's worth it - now do a deep analysis and have it reimplemented with said analysis, all via LLM. And particularly, since the law is now that AI creations can't be copyrighted, that reimplementation will be firmly in public domain. Nobody will have to even look at 1.1.0. See Valkey as one of the more recent examples, and there isn't even any LLM involved there.
They have the right to use the code, and they have the right to use improvements that someone else made, and they have the right to get someone to make improvements for them.
They also have the guarantee that the code licensed under the GPL, and all future enhancements to it, will remain free software. The same is not true of a permissive licence like MIT.
As far as I know, all the (L)GPL does is make sure that if A releases some code under it, then B can't release a non-free enhancement without A's permission. A can still do whatever they want, including sell ownership to B.
Neither GPL nor MIT (or anything else) protects you against this.
(EDIT) scenario: I make a browser extension and release v1 under GPL, it becomes popular and I sell it to an adtech company. They can do whatever they want with v2.
By allowing them to benefit from the work of others who do. Directly or indirectly.
I’m not good at car maintenance but I would benefit from an environment where schematics are open and cars are easy to maintain by everyone: there would be more knowledge around it, more garages for me to choose from, etc.
Isn't the legal situation the opposite here? Car manufacturers don't release schematics because they believe in "free as in freedom". In fact, the interfaces that you as an end-user or an independent garage can use, and the schematics that are released (such as the protocol for the diagnostic port), are open primarily because governments made laws saying so.
I'm most familiar with the "right to repair" situation with John Deere, which occasionally pops up on HN. The spirit of someone who releases something under GPL seems the opposite of that?
The 25519 crypto package that's built into practically everything these days (SSH, TLS, most e2e messaging) was released as both a spec and a C reference implementation _in the public domain_.
Unicode detection is the kind of utility that language maintainers want in their package collection, if not in the standard library, and that programmers who have to do anything with "plain text" files might want to rely on.
Releasing a core library like this under a genuinely free licence (MIT) is a service to anyone working in the ecosystem.
I think they moved chardet from GPL to MIT? If the maintainer made future versions proprietary, it'd surely be forked and the proprietary versions kicked out of the Python package repo?
[1] https://docs.oracle.com/en/java/javase/21/docs/api/java.base...