How so? Genuine question. Duck typing is “try it and see if it supports an action”, where interface declaration is the opposite: declare what methods must be supported by what you interact with.
Sure, type checking in Python (Protocols or not) is done very differently and less strongly than in Go, but the semantic pattern of interface segregation seems to be equivalently possible in both languages—and very different from duck typing.
Duck typing is often equated with structural typing. You’re right that officially (at least according to Wikipedia) duck typing is dynamic, while structural is the same idea, but static.
Either way, the thing folks are contrasting with here is nominal typing of interfaces, where a type explicitly declares which interfaces it implements. In Go it’s “if it quacks like a duck, it’s a duck”, just statically checked.
I'm saying that at some point, declaring the minimal interface a caller uses (for example, Reader and Writer instead of a concrete FS type) starts to look like duck typing. In Python, a function's use of v.read() or v.write() defines what v should provide.
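A minimal sketch of that in Python (function and variable names are mine, not from the thread): the function never names a type; anything with a read() method for the source and a write() method for the destination works.

```python
import io


def copy_head(src, dst, n=1024):
    """Copy the first n bytes from src to dst.

    Duck typing: src only needs a .read() method and dst only needs
    a .write() method; no type is named, no interface is declared.
    """
    dst.write(src.read(n))


src = io.StringIO("hello world")
dst = io.StringIO()
copy_head(src, dst, 5)
print(dst.getvalue())  # -> hello
```

Any file object, socket wrapper, or in-memory buffer can be passed in, which is exactly the "minimal assumption by calling code" point above.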
In Go the check happens at compile time and in Python at runtime, but the idea is similar.
In Python you (often) don't care about the type of v, just that it provides v.write(); in an interface-based separation of API concerns, you declare that v.write() is provided by the interface.
The aim is the same, duck typing or interfaces. And the benefits are the same, whether enforced at runtime or at compile time.
Also, yes, Protocols can be used to type-check quacks, bringing it more in line with the Go examples in the blog.
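A sketch of that with typing.Protocol (the Writer name here is my own, not from the blog): a type checker like mypy verifies the quack statically, and with @runtime_checkable an isinstance check works at runtime too. Note that Logger never declares it implements Writer; it matches structurally, like a Go interface.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class Writer(Protocol):
    """Minimal structural interface: anything with write(str) -> int."""

    def write(self, s: str) -> int: ...


class Logger:
    # No "implements Writer" declaration anywhere; matching the
    # method shape is enough, exactly as with Go interfaces.
    def __init__(self):
        self.lines = []

    def write(self, s: str) -> int:
        self.lines.append(s)
        return len(s)


def greet(out: Writer) -> None:
    out.write("hello\n")


log = Logger()
greet(log)
print(isinstance(log, Writer))  # -> True (checks method presence only)
```

One caveat: runtime_checkable isinstance checks only verify that the methods exist, not their signatures; the full structural check is the static one.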
However my point is more from a SOLID perspective duck typing and minimal dependency interfaces sort of achieve similar ends... Minimal dependency and assumption by calling code.
Except that in Go you need a typed variable that implements the interface, or you need to assert an `any` into an interface type. If the `any` type implemented all interfaces it would be duck typing, but since the language enforces types at the call site, it is not.
I'm more concerned about the fact I have no idea if the article and the HN comments are all AI generated or not. Can you tell if this comment is AI or not?
What happens when social discourse is polluted by noise that is identical to signal?
If I reply yes, would you believe me? Am I even replying to a "you" right now, or was it a comment posted by a call to requests.get() by some AI agent?
Don't forget AI being used to replace friends. AI being used for validation in place of a varied social group is scarier than anything I see on the job market.
Asking ChatGPT if breaking up with your girlfriend is a good idea or not? Terrifying. People should be using human networks of friends as a sounding board and support network.
> The DX provided by front end frameworks/libs is just unrivaled
How? I spent 6 months exploring React, Vue, Node, Next,...
The DX for all of them sucks. The documentation sucks. Everything is wrappers of wrappers of npm scripts of wrappers of bootstrappers of boilerplate builders of...
Vite (build tool) + React is all you need, and a command line away from setting up React with TypeScript. If you don't like how "heavy" it is, you can use Preact. Similarly, for a Node (express) project, all you gotta do is `npm i express`, with additional setup if you want TS and other dev. niceties. Or just use a batteries included framework like Nest/Adonis and skip all that.
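For concreteness, the scaffolding really is one command each (template names per the current Vite docs; project names are placeholders):

```shell
# Scaffold a React + TypeScript app with Vite
npm create vite@latest my-app -- --template react-ts
cd my-app && npm install && npm run dev

# Or, for a minimal Express API
mkdir my-api && cd my-api
npm init -y
npm i express
```

Swap `react-ts` for `preact-ts` if you want the lighter runtime mentioned above.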
It's not that complicated: the frontend targets browsers, so you transpile if you want to ensure compatibility, deter source theft, or reduce bundle size. Or not, and you just wing it with the plain source code.
Again, it's the best for writing APPS. For an app with complex navigation, features, and UI states, I find it asinine that your biggest complaints are about the packages.
In this world where the LLM implementation has a bug that impacts a human negatively (the app could calculate a person's credit score, for example):
I couldn't even tell you who is liable right now for bugs that impact humans negatively. Can you? If I were an IC at an airplane manufacturer and a bug I wrote caused an airplane crash, who is legally responsible? Is it me? The QA team? The management team? Some 3rd-party auditor? Some insurance underwriter? I have a strong suspicion it is very complicated as it is, without even considering LLMs.
What I can tell you is that the last time I checked: laws are written in natural language, they are argued for/against and interpreted in natural language. I'm pretty confident that there is applicable precedent and the court system is well equipped to deal with autonomous systems already.
> If I was an IC at an airplane manufacturer and a bug I wrote caused an airplane crash - who is legally responsible?
I am not sure it is that complicated, from a legal perspective. It is the company hiring you that would be legally responsible. If you are an external consultant, things may get more complicated, but I am pretty sure that companies wouldn't use external consultants for mission-critical software (for this particular reason, but also many others).
I agree with this. There's so much snake oil at the moment. Coding isn't the hard part of software development, and we already have unambiguous languages for describing computation. Human language is a bad choice for it, as we already find when writing specs for other humans. Adding more humanness to the loop isn't a good thing IMHO.
At best an LLM is a new UI model for data. The push to get them writing code is bizarre.
> Coding isn't the hard part of software development
That's actually a relief, when after hours and days of attending meetings and writing documentations, I can eventually sit in front of my IDE and let my technical brain enjoy being pragmatic.
I don't have the internal monologue most people seem to have: with proper sentences, an accent, and so on. I mostly think by navigating a knowledge graph of sorts. Having to stop to translate this graph into sentences always feels kind of wasteful...
So I don't really get the fuss about this chain-of-thought idea. I feel like it would be better to just operate on the knowledge graph itself.
A lot of people don't have internal monologues. But chain of thought is about expanding capacity by externalising what you've understood so far, so you can work on ideas that exceed what you're capable of getting in one go.
That people seem to think it reflects internal state is a problem, because even for those of us with an internal monologue, we have no reason to think that monologue fully and accurately reflects our internal thought processes.
There are some famous experiments with split-brain patients, whose corpus callosum has been severed. Because the brain halves control different parts of the body, you can use this to "trick" one half of the brain into thinking that "the brain" has made a decision about something, such as choosing an object, while the researchers change the object. The "tricked" half of the brain will happily explain why "it" chose the object in question, expanding on thought processes that never happened.
In other words, our own verbalisation of our thought processes is woefully unreliable. It represents an idea of our thought processes that may or may not have any relation to the real ones at all, but that we have no basis for assuming is correct.
Right but the actual problem is that the marketing incentives are so very strongly set up to pretend that there isn’t any difference that it’s impossible to differentiate between extreme techno-optimist and charlatan. Exactly like the cryptocurrency bubble.
You can’t claim that “We don’t know how the brain works so I will claim it is this” and expect to be taken seriously.
The irony of all this is that, unlike humans, whom we have no evidence to suggest can directly introspect lower-level reasoning processes, LLMs could be given direct access to introspect their own internal state via tooling. So if we wanted to, we could make them able to understand and reason about their own thought processes at a level no human can.
The important property, that anyone can verify the untainted relationship between the binary and the source (provided we do the same for both toolchains, never relying on a blessed binary at any point), is only useful if people actually do verify outside the Debian sphere.
I hope they promote tools that enable easy verification on systems external to Debian's build machines.