Bullshit is the perfect term here. Even as AIs get much better and more capable, Brandolini's Law, aka the "bullshit asymmetry principle," always applies: the energy required to refute misinformation is an order of magnitude larger than that needed to produce it. Even using AIs effectively today requires a very good BS detector; some day in the future it won't.
Cool. I really like the music and composition with graphics.
I hadn't heard of the Revision demoparty before; from what I found, the constraints are generous. It seems to be more about programming the art than about impressive resource constraints.
Platform: Entries must run on standard Windows PC hardware (2026).
GPU: Usually a top-tier NVIDIA card (e.g., RTX 4080/4090 or the 50-series equivalent).
Resolution: It must be able to run at 1080p @ 60 Hz at a minimum; most entries are now expected to target 2160p.
Audio: Standard stereo output on the primary audio device (usually HDMI or a high-end DAC).
File Size: There is no specific file size limit for the PC Demo category. However...
Content: The demo must be a standalone executable; all visuals and audio must be generated in real time.
Originality: The entry must be "party-fresh," i.e., not released or shown publicly before.
Duration: While there is no hard cutoff, [...] between 3 and 8 minutes.
I had the same thought as I'm working with LLMs. Then I reached the same conclusion as I did without LLMs: you can get most of the benefits without many of the drawbacks using well-bounded 'modules' within a monolith. The article doesn't distinguish these:
> When coding in a monolith, you have to worry about implicit coupling. The order in which you do things, or the name of a cache key, might be implicitly relied-upon by another part of the monolith. It’s a lot easier to cross boundaries and subtly entangle parts of the application. Of course, you might not do such unmaintainable things, but your coworkers might not be so pious.
What it's saying could also apply to a monorepo with distinctly deployed artifacts. The reason many don't think about clear boundaries between modules is that popular interpreted languages don't support them. Using the Java ecosystem as an example, each module can be a separate .jar containing one or more package namespaces, and dependencies between modules must be declared explicitly (module X requires module Y).
The problem I see isn't so much that misuse is easy (though that's part of it); it's that there's no clear indication that boundaries are being crossed, since calling from one package to another is normal, and the fact that some packages belong to other modules isn't always obvious.
I would say don't buy one unless you are either (1) a researcher, or (2) planning to get multiple (up to 4) of them. A single one has 273 GB/s of memory bandwidth, and you'd be better off with a Mac M5 Pro/Max.
I haven't read this in detail, but I expect it to be the same kind of sealed type that many other languages have. It doesn't cover ad-hoc unions (composed on the fly from existing types) like those possible in F# (and in few non-FP languages, with TypeScript being the most notable that does).
The problem with ad-hoc unions is that without discipline, it invariably ends in a mess that is very, very hard to wrap your head around and often requires digging through several layers to understand the source types.
In TS codebases with heavy usage of utility types like `Pick`, `Omit`, or ad-hoc return types, it is often exceedingly difficult to know how to correctly work with a shape once you get closer to the boundary of the application (e.g. API or database interface since shapes must "materialize" at these layers). Where does this property come from? How do I get this value? I end up having to trace through several layers to understand how the shape I'm holding came to be because there's no discrete type to jump to.
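A contrived sketch of what this looks like in practice (all names here are hypothetical, not from any real codebase):

```typescript
// Hypothetical illustration: each layer reshapes the previous one ad hoc.
interface DbUser {
  id: string;
  email: string;
  passwordHash: string;
  createdAt: Date;
}

// Service layer: drop the sensitive field.
type SafeUser = Omit<DbUser, "passwordHash">;

// API layer: pick a subset and bolt on a computed field.
type ApiUser = Pick<SafeUser, "id" | "email"> & { displayName: string };

// At the boundary you hold an ApiUser. Where does `displayName` come from?
// There is no discrete type declaration to jump to; you have to unwind the
// Pick/Omit/& chain by hand to learn that `createdAt` was dropped a layer up.
const user: ApiUser = { id: "1", email: "a@b.c", displayName: "A" };
```

Two layers are already enough to make "where does this property come from?" a tracing exercise; real codebases often stack many more.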
This tends to lead to another problem: lack of documentation, because there's no discrete type to attach documentation to. There's a "behavioral slop trigger" with ad-hoc types, in my experience: the more they get used, the more they get abused, and the harder it is to understand the intent of the data structures, because much of that intent is now ad-hoc and lacking forethought, since (by its nature) the approach removes the requirement of forethought.
"I am here. I need this additional field or this additional type. I'll just add it."
This creates a kind of "type spaghetti" that makes code reuse very difficult.
So even when I write TS and I have the option of using ad-hoc types and utility types, I almost always explicitly define the type. Same with types for props in React, Vue, etc.; it is almost always better to just explicitly define the type, IME. You will thank yourself later; other devs will thank you.
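A minimal sketch of the explicit-definition approach (names hypothetical): the shape is declared once, with a place to hang documentation, instead of being derived from utility-type chains.

```typescript
// Explicitly defined, documented type instead of a Pick/Omit chain.
/** A user as exposed over the API; deliberately excludes credentials. */
interface ApiUser {
  /** Stable identifier from the users table. */
  id: string;
  /** Verified contact address. */
  email: string;
  /** Derived from profile settings at read time. */
  displayName: string;
}

const user: ApiUser = { id: "1", email: "a@b.c", displayName: "A" };
```

Every consumer can now jump straight to one declaration, and the doc comments travel with the fields in editor tooltips.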
> unions enable designs that traditional hierarchies can’t express, composing any combination of existing types into a single, compiler-verified contract.
To me that "compiler-verified" maps to "sealed", not "on the fly". Probably.
Their example is:
    public union Pet(Cat, Dog, Bird);
    Pet pet = new Cat("Whiskers");
- the union type is declared upfront, as is usually the case in C#, and the types that it contains are a fixed set in that declaration. Meaning "sealed"?
I mean that Cat, Dog and Bird don't have to inherit from the union; you can declare a union of completely arbitrary types, as opposed to saying "Animal has three subtypes, no more, no less," which is more or less what F# does.
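TypeScript (named upthread as the notable non-FP language with ad-hoc unions) allows exactly that: a union of pre-existing, unrelated types composed at the use site with no upfront declaration. A sketch with hypothetical types:

```typescript
// Two pre-existing, unrelated types; neither inherits from a common base.
interface Cat { meow(): void }
interface Dog { bark(): void }

// Ad-hoc union composed at the use site; no union declaration needed anywhere.
function speak(pet: Cat | Dog): string {
  // TS unions are untagged, so we narrow structurally with the `in` operator.
  return "meow" in pet ? "meow" : "woof";
}

console.log(speak({ meow() {} })); // "meow"
console.log(speak({ bark() {} })); // "woof"
```

Compare this with the C# proposal's `public union Pet(Cat, Dog, Bird);`, where the member types are fixed in one declaration.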
    var someUser = new { Name = "SideburnsOfDoom", CommentValue = 3 };
What type is `someUser`? Not one that you can reference by name in code; it is "anonymous" in that regard. But the compiler knows the type.
A type can be given at compile-time in a declaration, or generated at compile-time by the compiler like this. But it is still "Compiler-verified" and not ad-hoc or at runtime.
The type of `(Dog, Cat) pet` seems similar: it's known at compile time and won't change. A type without a usable name is still a type.
Is this "ad-hoc"? It depends entirely on what you mean by that.
=> named sum type implicitly tagged by its variant types
but not "sealed", as in no artificial constraints like that the variant types need to be defined in the "same place" or "as variant type", they can be arbitrary nameable types
Repeating that we don't have a definition isn't helping anything except giving vapid blog posts another thing to debate. I'll give one that I believe: the practical ability to use AI for most things that humans do, at human levels of competence, without being specifically trained for each. There is no requirement for AGI to actually think/reason beyond practical measures.
It's evidently fueled a healthy debate here and you've tossed your opinion in the ring because of it.
I think you're on to something: we need a measure of what counts as "human levels of competence," or some bar by which we say "OK, this is consistent enough."
I heard a similar analysis on an episode of Predictive History[0]. I've only watched the first 15 minutes so far, up to where it gets to how the US as a country/those in power (not the population) benefits.