It’s like working with the dumbest, most arrogant intern you could imagine. It has perfect recall of the docs but no understanding of them.
An example from last week:
Me: Do this.
AI: OK.
<Brings me code that looks like it accomplishes the task, but on closer inspection it’s accomplishing it in a monkey’s-paw/spiteful-genie kind of way.>
Me: Not quite, you didn’t take this into account. But I made the same mistake while learning, so I can pull it back on track.
AI: OK
<It’s worse, and why are all the values hardcoded now?>
…
40 minutes go by. The simplest, smallest bit of code is almost right.
Me: Alright, abstract it into a Sass mixin.
AI: OK.
<Has no idea how to do it. It installed Sass, but with no understanding of what it’s working on so the mixin implementation looks almost random. Why is that the argument? What is it even trying to accomplish here?>
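For contrast, the kind of abstraction I wanted is only a few lines. A hypothetical sketch (the selector and parameter names are invented for illustration, not the actual code from that session):

```scss
// Hypothetical mixin: hoist the repeated declarations behind parameters
@mixin card($padding: 1rem, $radius: 4px) {
  padding: $padding;
  border-radius: $radius;
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
}

.product-card {
  @include card;            // use the defaults
}

.hero-card {
  @include card(2rem, 8px); // override per call site
}
```

The whole point is that the arguments should be the values that vary between uses; pick them wrong and the mixin is worse than the duplication.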
At which point I just give up and hand code the thing in 10 minutes.
Don't underestimate how anti-AI the tabletop community is. This could have been titled: "Games Workshop elects not to experience multi-year headache. Will use AI when profitable."
I don't do much with crypto/NFTs/AI, because I don't find any of it useful yet. But I get so much "with us or against us" heat for not being zealously against the idea of them. It was NFTs, NFTs, NFTs at the table for months until it became AI, AI, AI. My preference is to talk about something else while playing board games.
One thing I've found when talking to non-technical board gamers about AI is that while they’re 100% against using AI to generate art or game design, when you ask them about using AI tools to build software or websites the response is almost always something like "Programmers are expensive, I can't afford that. If I can use AI to cut programmers out of the process I'm going to do it."
A minority are conflicted about this position.
When I talk to technical people at game nights we almost never talk about tech. The one time our programmers all played RoboRally the night kind of died because it felt too close to work for a Saturday night.
If GW was going to use AI they would probably start with sprue layouts. Maybe the AI could number the bits in sane way? I would be for that.
> while they’re 100% against using AI to generate art or game design, when you ask them about using AI tools to build software or websites the response is almost always something like "Programmers are expensive, I can't afford that. If I can use AI to cut programmers out of the process I'm going to do it."
Three things:
1. People simply don't respect programming as a creative, human endeavour. Replacing devs with AI is viewed in the same way as replacing assembly line workers with robots.
2. Somewhat informed people might know that for coding tasks, LLMs are broadly trained on code that was publicly shared to help people out on Reddit or SO or as part of open-source projects (the nuance of, say, the GPL will be lost). Whereas the art it was trained on is, broadly speaking, copyrighted.
3. And, related to point two: people feel great sympathy for artists, since artists generally struggle quite a bit to make a living. Whereas engineers have solid, high-paying white-collar jobs; thus, they're not considered entitled to any kind of sympathy or support.
I've been a professional artist, designer, and developer. Mostly a developer, and working in academia throughout the late 2010s meant being privy to the development of neural networks into what they've become. When I pointed out the vulnerability of developers to this technology, the "well, maybe for some developers, but I'm special" stance was nearly ubiquitous.
When the tech world realized their neato new invention inadvertently dropped a giant portion of the world's working artists into the toilet, they smashed that flusher before they could even say "oops." Endless justification, people saying artists were untalented and greedy and deserved to be financially ruined, with a heaping helping of "well, just 'pivot'."
And I did-- into manufacturing because I didn't see much of a future for tech industry careers. I'm lucky-- I came from a working class background so getting into a trade wasn't a total culture and environment shock. I think what this technology is going to do to job markets is a tragedy, but after all the shit I took as a working artist during this transition, I'm going to have to say "well, just pivot!" Better get in shape and toughen up-- your years of office work mean absolutely nothing when you've got to physically do something for a living. Most of the soft, maladroit, arrogant tech workers get absolutely spanked in that environment.
... although it's a bit unfair to the many tech people who never wanted to throw artists down the loo or indeed anyone else. E.g. when I was fiddling with language generation during my MSc it never occurred to me that someone would want to use it to replace writing, let alone coding. What would be the point in that?
> Whereas the art it was trained on is, broadly speaking, copyrighted
The overwhelmingly vast majority of the code you're talking about (basically, anything that doesn't explicitly disavow its copyright by being placed in the public domain, and there's some legal debate if that is even something that you can do proactively) is just as copyright protected as the art is.
Open Source does not mean copyright free.
"Free Software" certainly doesn't mean copyright free (the GPL only has any meaning at all because of copyright law).
> 1. People simply don't respect programming as a creative, human endeavour. Replacing devs with AI is viewed in the same way as replacing assembly line workers with robots.
It is about scarcity: art is a passion; there is a perpetual oversupply of talented game designers, visual artists, sculptors, manga artists, music composers, guitarists, etc. You can usually hire talent for cheap because there is a lot of talent.
Programmers are (or were?) expensive because, at least in recent times, talented ones are rare enough.
A good artist is just as expensive as a good programmer. Commissioning art is expensive. Outsourcing to third world countries is cheaper (just like programming!).
> A good artist is just as expensive as a good programmer.
Let's look at industry: go look at what video game artists make compared to programmers with a similar amount of experience. Now, are you claiming that they just aren't very good artists, so they aren't paid well? Because I've seen their work, and it's not shabby at all.
Video game companies are a special case (even for programmers). They work people to the bone for lower pay because people are passionate about video games, but the common denominator there is gamers wanting to get into the industry—not being an artist or programmer.
>> Programmers are (or were?) expensive because, at least in recent times, talented ones are expensive because they are rare enough.
In all the years I worked in the industry, I never knew anyone trying to hire "talented" programmers. Only trying to hire people, usually inexperienced juniors, willing to work twice the time they're paid for if you tell them how smart they are.
Ya, there is that also. But sane orgs will want to hire programmers with some level of talent, at least. Not just some kid out of bootcamp, they will have to show that they can actually program something first.
Most of the code that was publicly available to be trained on is written by people in their spare time, not directly making any money off of it though. Personally I think if you are fine with AI used to generate code you should also be fine with it being used to generate art. That doesn't mean that I think that big companies just scraping the entire internet and training on large amount of portfolio pieces from ArtStation or people making open source projects is good either.
> Most of the code that was publicly available to be trained on is written by people in their spare time, not directly making any money off of it though.
So what? The code is offered under specific licensing terms. Not adhering to those terms is just as wrong as training on a paid product.
There is the nuance that much code that is available publicly (which includes a GIGANTIC amount of that "written by people in their spare time" stuff) is put there for the explicit goal of showing other people all the details so they can read, reuse, and modify it. Open-source licenses in some form are incredibly popular, though the details vary, and seeing your side project in a product that 100k people use is usually just neat, not "you stole from me".
Artworks have their relatively-popular creative-commons stuff, and some of those follow a similar "do whatever" vibe, but I far more frequently see "attribution required" which generally requires it at the point of use, i.e. immediately along-side the art-piece. And if it's something where someone saw your work once and made something different separately, the license generally does not apply. LLMs have no way to do that kind of attribution though, and hammer out stuff that looks eerily familiar but isn't pixel-precise to the original, so it feels like and probably is an unapproved use of their work.
The code equivalent of this is usually "if you have source releases, include it there" or a very few have the equivalent of "please shove a mention somewhere deep in a settings screen that nobody will tap on". Using that code for training is I think relatively justifiable. The licenses matter (and have clearly been broadly ignored, which should not be allowed) but if it wasn't prohibited, it's generally allowed, and if you didn't want that you would need to choose a restrictive license or not publish it.
Plus, like, artists generally are their style, in practical terms. So copying their style is effectively impersonation. Coders on the other hand often intentionally lean heavily on style erasing tools like auto-formatters and common design patterns and whatnot, so their code blends cleanly in more places rather than sounding like exclusively "them".
---
I'm generally okay with permissive open source licensed code being ingested and spat back out in a useful product. That's kinda the point of those licenses. If it requires attribution, it gets murky and probably leans towards "no" - it's clearly not a black-box re-implementation, the LLMs are observing the internals and sometimes regurgitate it verbatim and that is generally not approved when humans do it.
Do I think the biggest LLM companies are staying within the more-obviously-acceptable licenses? Hell no. So I avoid them.
Do I trust any LLM business to actually stick to those licenses? ... probably not right now. But one could exist. Hopefully it'd still have enough training data to be useful.
> 1. People simply don't respect programming as a creative, human endeavour. Replacing devs with AI is viewed in the same way as replacing assembly line workers with robots.
Very reminiscent of the "software factory" bullshit peddled by grifters 15 or 20 years ago.
And I think, frankly, a lot of agile practice as I've seen it in industry doesn't respect software development as a creative endeavour either.
But fundamentally I, like a lot of programmers/developers/engineers, got into software because I wanted to make things, and I suspect the way I use AI to help with software development reflects that (tight leash, creative decision-making sits with me, not the machine, etc.).
> People simply don't respect programming as a creative, human endeavour.
Because it's not? Programmers' ethos is having low attachment to code. People work on code together, often with complete strangers, see it modified, sliced, merged and whatever. If you rename a variable in software or refactor a module, it's still the same software.
Meanwhile for art authorship, authenticity and detail are of utter importance.
That's no different from any art. It's like saying that woodworkers' ethos is having low attachment to screws, or guitarists' ethos is having low attachment to picks. Code is a tool; the creative, human endeavor is making an artifact that people can perceive and interact with.
People don't respect the salary premium software developers have received and expect relative to other creative, human endeavors.
You lay it out perfectly in your answer, and I'll add that the entire non-tech world generally feels that if tech jobs lose their shine due to AI, it's actually a welcome reversion to the mean. Software has likely depressed wage growth in many other jobs.
Which is unfortunate, because the thinking people should be having is that we should bring everyone else up to our level, and not trying to bring down the lucky few that are well compensated in this world full of leeches in the form of CEOs and middle managers.
It's not "anti-AI" to acknowledge the fact that when your job is to create work for hire in order to build up your employer's IP portfolio, being paid to use AI to create work that isn't IP isn't doing your job.
Your job is to create IP. As per the US Copyright Office, AI output cannot be copyrighted, so it is not anyone's IP, not yours, not your employer's.
That's not "anti-AI", that's AI and copyright reality. Games Workshop runs its business on IP; suddenly creating work that anyone can copy, sell or reproduce because it isn't anyone's IP is antithetical to the purpose of a for-profit company.
> As per the US Copyright Office, AI output cannot be copyrighted
I'm glad you mentioned this. It's true. But AI output as part of a larger pipeline of work to generate something is copyrightable. So I'm not sure how this is going to play out in a practical sense. I don't think we've tested this legally yet.
If a person holds a camera and clicks a button, the output can be copyrighted. But if I write a few pages worth of prompts and click enter, it cannot be copyrighted?
It's no different than spending years training a chimp to paint like Van Gogh: its output is not a human creation and thus cannot be copyrighted, no matter how much effort you put into training that chimp.
> Games Workshop elects not to experience multi-year headache. Will use AI when profitable.
Indeed, companies will always start using something if it makes financial sense for them.
> One thing I've found when talking to non-technical board gamers about AI is that while they're 100% against using AI to generate art or game design, when you ask them about using AI tools to build software or websites the response is almost always something like "Programmers are expensive, I can't afford that. If I can use AI to cut programmers out of the process I'm going to do it."
This is because they don't view programming as a "creative" form of labor. I think this is an incorrect view, but this knowledge is at least useful in weighting their opinions.
The most interesting observation is that regardless of how "anti-AI" most people seem to be, it isn't that deep of an opinion. Their stated preference is they don't want any AI anywhere, but their revealed preference is they'll continue to spend money as long as the product is good. Most products produced with AI, however, are still crap.
That’s the thing. One day everyone is going to just stop caring about being anti-AI. Already I’ve noticed that most people are only against other people’s use of AI. Their use is justified.
I actively don’t use AI because the results are unreliable or ugly. I’m just not against AI in principle. It’s funny that my position is considered contemptible by people who regularly use AI but are hardliners against it on moral grounds.
Remember when everything wasn’t a religious war? Actually, I don’t. It was always like this and it’s always going to be like this. Just one forever crusade after another.
I am going to sound cynical, but I strongly believe that everyone's view on AI is contaminated by ulterior motives, and a lot of people are not truthful with themselves about their positions on AI. For instance, I feel as though topics such as copyright, environmentalism, water use, etc., that have been thrust into the limelight are being pushed by people who didn't care about these issues 5-10 years ago, but decided to start clutching their pearls about it now. Particularly copyright; everyone was so okay with pirating movies, apps, music when it benefited them, but now they are the vanguard in enforcing other people's copyright on data they don’t even own.
> everyone was so okay with pirating movies, apps, music when it benefited them, but now they are the vanguard in enforcing other people's copyright on data they don’t even own.
You do not mention the perception of asymmetric legal and market power. Many people think that file sharing Disney movies is ok, but Google scraping the art of independent artists to create AI is not ok. That is not the same dynamic at all as not caring about copyright, and then suddenly caring about copyright.
Suddenly people change their tune, what gives? All we are talking about is the forced wealth transfer of trillions of dollars to the richest megacorps on the planet.
Most people didn't choose to be part of your moon shot death cult. Only the people at the tippy top of the pyramid get golden parachutes off Musk's exploding rocket.
They never changed their position, corpos shouldn't get any money! That's always been the position. They are inherently unethical meat grinders.
> Indeed, companies will always start using something if it makes financial sense for them.
I agree that this is often the case. I still see Games Workshop as an exception. They could have moved plastic production to a cheaper region (e.g. China), but they haven't done so. Financials are obviously important to them, but they're being very careful and thoughtful about their actions. This AI ban is just another showcase of that.
The UK production is mostly about speed (turnaround from 3d prototype, to mold, to finished sprue, and ‘Eavy Metal painted promo images) and quality control for the models. All of their paper and hard plastic products (books, dice, etc) are produced in China.
> The most interesting observation is that regardless of how "anti-AI" most people seem to be, it isn't that deep of an opinion. ... Most products produced with AI, however, are still crap.
How can you generalize about these people, calling them idiots (that's what "isn't that deep of an opinion" means, even if you don't say it), and then breathlessly engage in the exact same rhetoric?
Yes, anyone with an art-adjacent hobby like tabletop gaming is militantly anti-AI.
Shelling out to support artists is seen as virtuous, and AI is seen as the opposite of that - not merely replacing artists but stealing from them. There's also a general perception that every cost-saving measure is killing the quality of the product.
I play a lot of solo RPGs (4AD, Riftbreakers, Ker Nethalas, Kal-Arath, Al-Rathak, and just picked up Ironsworn: Starforged last night!) and I find AI to be amazing at filling in scenario and campaign details. I might roll and find out I'm investigating a burial ground, and I'd just left the shore where my boat ran aground. My local GPT-OSS 120B is fantastic at generating the scene, descriptors for the environment, and small details I can cue on and ask my oracle questions about. It's like an automated GM-lite that can embellish a scene.
It's also really good at suggesting complications to situations in games like The Sprawl (a Powered by the Apocalypse game), where, as a GM, I want to ratchet up the tension.
AI is super-cool, and has the potential to transform a lot of areas. I get that people are threatened by it, but letting that overshadow its utility seems... short-sighted? Not to be provocative, but how do folks think this will play out over the next 20 years? Doesn't it seem like AI could be used to make the gaming experience better, not just cheaper?
To be clear, I think the quality angle is secondary, and thirst for the approval of artists is primary.
Personally, I haven't used AI to generate prose like you, but I do appreciate delegating "remembering dusty corners of the rules" to Claude, especially in a narrative campaign where you really just want an answer vs settling a dispute.
Just naming things in a world consistent way would be an amazing tool. Naming things is one of the most difficult parts of programming and writing and world creation I guess.
> One thing I've found when talking to non-technical board gamers about AI is that while they’re 100% against using AI to generate art or game design, when you ask them about using AI tools to build software or websites the response is almost always something like "Programmers are expensive, I can't afford that. If I can use AI to cut programmers out of the process I'm going to do it."
I had a conversation with an artist friend some time back. He uses Squarespace for his portfolio website. He was a few drinks in, and ranting about how even if it's primarily artists using these tools professionally at the moment, it'll still lead to a consolidation of jobs, how it's built on the skillset and learning of a broader community than those that will profit, etc. How the little guy was going to get screwed over and over by this sort of thing.
I started out doing webdesign work before I moved more to the operations and infrastructure management side of things, but I saw the writing on the wall with CMS systems, WYSIWYG editors, etc. At the time building anything decent still took someone doing design and dev work, but I knew that they would get better over time, and figured I should make the change.
So I asked him about this. I spoke about how yeah, the people behind Squarespace had the expertise - just like the artists using AI now - but every website built with it or similar is a job that formerly would have gone to the same sort of little guy he was talking about. How it's a consolidation of all the learnings and practices built out by that larger community, where the financial benefits are heavily consolidated. I told him it doesn't much matter to the end web designer whether or not the job got eliminated by non-AI automation and software or an LLM, the work is still gone and the career less and less viable.
I've had similar conversations with artists before. They invariably maintain that it's different, somehow. I don't relish jobs disappearing, but it's nothing new. Someday, maybe enough careers will vanish that we'll need to figure out some sort of system that doesn't involve nearly every adult human working.
The idea of being anti-AI for art or game design vs pro-AI for software or websites is interesting because it presumably reflects the fact that those people value art and game design more than they do software or websites. Their view of AI is as a means to an end for stuff that's necessary but low value to them while preserving the human touch for stuff that matters more.
This actually doesn't seem that unreasonable or inconsistent with how most people treat technology or similar conveniences. Many if not most people value a human component for things they think are important, even if it costs more or has other tradeoffs.
> "Games Workshop elects not to experience multi-year headache. Will use AI when profitable."
They will definitely start using AI when their competitors do to the point that they gain a substantial competitive advantage. Then, at least in a free market, their only choices are to use AI or cease to exist. At that point, it is more survival bias (companies that used AI survived) rather than profit motive (companies used AI to make more money).
They've already shown they can survive:
* deprecating people's models so that they have to buy new ones
* making any number of rules changes that were widely hated
* making lore changes that were widely hated
They aren't going to lose customers because some other company is using AI. They effectively don't have any competition, because people love the Warhammer settings and want to play games set in them.
I can guarantee you that there are more than a few small producers in Guangzhou that can compete, and they are using whatever advantage they can leverage (including AI, like the rest of China's industry).
GW doesn't have competitors; it has an absolute monopoly on the 40k and Fantasy worlds it has built up. It's like saying there are competitors to LOTR or Star Wars or DnD.
Their worlds are their monopolies. Worlds that now have multi-decades worth of lore investment (almost 50 years now I think).
Just because someone else can make cheaper little plastic models doesn't affect GW in the slightest. Or pump out AI slop stories.
The Horus Heresy book series is like 64 books now. And that's just a spin-off, set 10,000 years before when 40k actually takes place.
With so much lore they need complicated archiving and tracking to keep on top of it all (I happen to know their chief archivist).
You can't replace that. I only say all this just to try and explain how off the mark you are on understanding what the actual value of the company is.
I live in Nottingham where GW is based, another of my friends happens to have a company on an industrial estate where there are like 3 other tabletop gaming companies. All ex-gw staff.
You could probably fit all their buildings in the pub that GW has on its colossal factory site.
You used to know people who worked at Boots, which used to be the big Nottingham employer. Nowadays, I know more people who work at GW.
BattleTech is somewhat of a competitor, and a variety of smaller games have some niches.
Plenty of people use proxies, too. There's places that do monthly packs of new STLs that could be an entire faction army, and there's long been places that sold "definitely not Space Marines and Sisters of Battle" minis too.
They don't have a threat of anyone overtaking them at current, but AI making alternatives in this vein even cheaper could eat away at portions of their bottom line.
As a Battletech lover, the phrase "somewhat of a competitor" is a bit vague. I see Battletech as a 3%er - one of a few 3%ers - compared to the near-monopoly of WH40K (and fantasy WH).
As an aside, I am somewhat disappointed that Battletech's appeal to the mainstream is largely down to the Mechwarrior games which have minimal lore.
There is so much more that could be done. But the current owners seem to be pretty poor at translating all their paperwork stories for the modern crowd.
Does GW have competitors? Feels like they own their niche (with the IP associated) completely with extreme amounts of content.
Similar to how Magic rules their segment of the market
Magic has competition in Yu-Gi-Oh and Pokemon. I think Pokemon outsells MTG now. Warhammer doesn't have anything else in their league. The other games are a very tiny percent of an already small niche.
> Then, at least in a free market, their only choices are to use AI or cease to exist.
That is a false dichotomy. Eschewing AI may actually provide a competitive advantage in some markets. In other words, the third choice is to pivot and differentiate.
Don't assume your experience is uniformly distributed. I know tabletop gamers addicted to AI and 3D printing their own game pieces.
I would describe them as anti-corporate IP/copyright cartel. They understand things like automobiles and personal computers require organized heavy lifting but laying claim to own our culture and entertainment, our emotional identity is a joke.
Just rich people controlling agency, indoctrinating kids with capitalist essentialism; by chance we were born before you and survived this long so neener neener! We own everything!
> ... while they’re 100% against using AI to generate art or game design, when you ask them about using AI tools to build software or websites ...
And this is not complicated at all. It's the quality of output.
Users appreciate vibecoded apps, but developers are universally unenthusiastic about vibecoded pull requests. Lots of those same devs use AI for "menial" tasks and business emails. And this is NOT a double standard: people are clearly OK when generative-AI output exists but isn't exposed to unsuspecting human eyes, and not OK when it is, because what AIs generate hasn't yet cleared some quality threshold. Maybe SAN values.
(also: IIUC, cults and ponzi scheme recruitment are endemic in tabletop game communities. so board game producers distancing from anything hot in those circles, even if it were slightly irrational to do so, also makes sense.)
I doubt a random internet commenter can persuade you, but LLMs and tools built around them are fundamentally different from NFT/crypto.
NFTs/Crypto are just ways to do crimes/speculate/evade regulations. They aren't useful outside of "financial engineering." You were right to dismiss them.
LLMs are extremely useful for real-world use cases. There are a lot of practical and ethical concerns with their use: energy usage, who owns them, who profits from them, slop generation, trust erosion... I mean, a lot. And there are indeed hucksters selling AI snake oil everywhere, which may be what set off your BS meter.
But fundamentally, LLMs are very useful, and comparing them to NFT/Crypto does a disservice to the utility of the tech.
> They aren't useful outside of "financial engineering."
Without disagreeing with your overall point in 99% of cases, we did actually have a good use for pinning things in the Bitcoin blockchain when I worked at Keybase. If you're trying to do peer-to-peer security, and you want to prove not only that the evil server hasn't forged anything (which you do with signatures) but also that it hasn't deleted anything legitimate, "throw a hash in the blockchain" really is the Right Way to solve that problem.
The property that makes the blockchain useful for this, though, is that it's widely-distributed. "Throw a classified in the national newspaper" is just as good. Nowadays, we have better solutions (appendable BitTorrent comes to mind), with most of the advantages of blockchain but few of the disadvantages.
It's important to think about the exact procedure you want to use for verifying something. Running with your thought experiment, let's say we publish "the root hash of the whole world" (not too far off from what Keybase did) each day in the Times. Now I open my phone to read some messages from Billy Bob, and my phone needs to get that hash somehow. This is just a thought experiment, so let's say for the sake of argument that it tells me to walk down to the convenience store, buy a copy of the day's paper, and scan a QR code on page 12. The problem with that arrangement (even in thought experiment land, where I'm happy to perform these steps every day) is that all the evil server needs to do to trick me is to put a doctored copy of the Times in that one newspaper stand. That's not the level of security we were hoping for. To get real security here, I'd need to do some sort of random sampling of newspaper stands distributed across the country, to build confidence that whatever QR code I'm seeing is the same one that everyone else is seeing. And the kicker is, everyone has to do this. We can't just pay one guy to sample the papers every day and tell us what the QR code was, because now our security depends on trusting that one guy, and the whole point of peer-to-peer security is avoiding that kind of centralized trust.
I think this is actually a great way to talk about the difficulty of the problem that Bitcoin solved, and why so many nerds were so interested in the whitepaper, long before all the real money got involved.
> The problem with that arrangement (even in thought experiment land, where I'm happy to perform these steps every day) is that all the evil server needs to do to trick me is to put a doctored copy of the Times in that one newspaper stand.
Your analogy is apt, and that's exactly the same problem as with the blockchain! Unless you're maintaining your own Bitcoin full node, your integrity comes from the provenance: "my lightweight client trusts this full node not to lie to me". This is the same as your "one guy to sample the papers".
All you need to do is grab a copy of the day's paper from your local convenience store, and compare your results with the Times website and two randomly-selected peers (selected from a distribution carefully chosen to ensure that each day's graph is connected). Any discrepancy will be obvious, and undeniable (since you have the physical artefact as a certificate of duplicity), so anyone who discovers a discrepancy can blow the whistle. If no whistle is blown, then either there was no discrepancy, or there is a big conspiracy (i.e., one large enough that blockchain wouldn't have saved you either).
The problem is not all that difficult. The main advantage of Bitcoin is that it's a good enough solution that many people don't feel the need to think about the problem any more – even though it's a marginal improvement over the prior art, with major downsides of its own.
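The sampling scheme above (compare your copy against the Times website plus a couple of random peers, blow the whistle on any mismatch) can be sketched like this. Names and hashes are invented, and uniform sampling stands in for the carefully chosen connected-graph distribution:

```python
import random

def check_consistency(my_hash: bytes, peer_hashes: dict[str, bytes]) -> list[str]:
    """Compare my observed daily hash against a random sample of peers.
    Returns the peers whose hash disagrees -- evidence for whistle-blowing."""
    sample = random.sample(list(peer_hashes), k=min(2, len(peer_hashes)))
    return [p for p in sample if peer_hashes[p] != my_hash]

# Everyone saw the same QR code:
observed = {"times_website": b"h1", "peer_a": b"h1", "peer_b": b"h1"}
assert check_consistency(b"h1", observed) == []  # no discrepancy, no whistle

# My newsstand copy was doctored -- any sampled peer exposes it:
mismatches = check_consistency(b"h2", observed)
assert mismatches
```

The point of sampling from a distribution that keeps the daily graph connected is that no doctored copy can survive unnoticed: some honest pair will always compare notes across the forgery boundary.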
> If you're trying to do peer-to-peer security, and you want to prove not only that the evil server hasn't forged anything (which you do with signatures) but also that it hasn't deleted anything legitimate, "throw a hash in the blockchain" really is the Right Way to solve that problem.
and it only requires the same electricity as a medium sized country to do it
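For what it's worth, the data structure in the quote (a hash committing to an append-only log, so deletions are detectable) needs no proof-of-work at all. A toy hash chain, not Bitcoin, with made-up entries:

```python
import hashlib

def extend_chain(prev_hash: bytes, entry: bytes) -> bytes:
    """Each link commits to the previous one, so deleting or reordering
    any earlier entry changes every later hash."""
    return hashlib.sha256(prev_hash + entry).digest()

GENESIS = b"\x00" * 32
head = GENESIS
for entry in [b"alice's key", b"bob's key", b"carol's key"]:
    head = extend_chain(head, entry)

# Publish `head`; a server that silently deletes bob's entry
# can no longer reproduce the published head:
forged = extend_chain(extend_chain(GENESIS, b"alice's key"), b"carol's key")
assert forged != head
```

The expensive part isn't the chain itself; it's getting everyone to agree on the same `head` without a trusted party, which is where the country-sized electricity bill comes in.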
If you want to learn CSS, and I mean REALLY learn it, buy "CSS: The Definitive Guide" (https://www.amazon.com/CSS-Definitive-Guide-Layout-Presentat...), read it cover to cover, and try every property in a playground as you go through it. I was a backend developer who hated CSS before it; now I love it.
Any professionally edited and published book is better than reading disjointed blog posts and online pages. I learned CSS in 2006 or 2007 from the then-new book Web Design in a Nutshell, and after a month I was already far more comfortable writing CSS than I'd been after three months of reading various blog posts.
I was about to argue that there can be no definitive guide to CSS, since it's a technology with no single manufacturer and a 20-year history. But then I saw the length of the book:
It's literally the definitive guide on CSS, and frankly, it's the gold standard for any book calling itself a definitive guide. An inordinate amount of work went into the book's 1126 pages. You will learn something every time you open the book. It will pay for itself the next time you wonder, "how do I do X with css?", because you don't have to search. It's right there in the book.
If you do software as a profession, is the book really THAT expensive at £45? Having a deep understanding of CSS could make you significantly more than that.
Simplistic analysis of whether CSS sucks: this definitive guide is 1,126 pages long. On the Amazon page it also suggests the "Definitive guide to JavaScript" - it's 704 pages long.
If you can fully explain JS (an inexplicable bodge built on a tower of inexplicable bodges) in fewer pages, then CSS almost definitely sucks.
Yeah, there was a years long debate that effectively ended with: “We held a vote that you weren’t aware of and decided that masonry was out. If you cared, you should have participated in the vote that you were not aware was happening. It’s too late to change it.”
> We held a vote that you weren’t aware of and decided that masonry was out. If you cared, you should have participated in the vote that you were not aware was happening. It’s too late to change it.
I think that’s an exceptionally uncharitable description of what happened. This is a decision the WebKit team has been repeatedly publicly asking people to participate in for over 18 months.
> Help us invent CSS Grid Level 3, aka “Masonry” layout
> P.S. About the name
> It’s likely masonry is not the best name for this new value. […] The CSSWG is debating this name in [this issue]. If you have ideas or preferences for a name, please join that discussion.
> Help us choose the final syntax for Masonry in CSS
> We also believe that the value masonry should be renamed.
> As described in our previous article, “masonry” is not an ideal name, since it represents a metaphor, and not a direct description of its purpose. It’s also not a universally used name for this kind of layout. Many developers call it “waterfall layout” instead, which is also a metaphor.
> Many of you have made suggestions for a better name. Two have stood out, collapse and pack as in — grid-template-rows: collapse or grid-template-rows: pack. Which do you like better? Or do you have another suggestion? Comment on [this issue] specifically about a new value name (for the Just Use grid option).
The debates went on for years and following it closely became a poor use of time. Even the subgrid conversation seemed completely stalled. I think a lot of people tuned out long before any vote was discussed. I did.
But if you were the one who tuned out, then isn’t it uncharitable to describe it as their failing to make you aware of the vote? Isn’t it on you to stay in the loop?
Surely they can’t start just pinging everyone who might have cared at some point during the time to get involved.
I get what you're saying, but making interminable arguments and keeping the "debate" going is a tactic. There's that CIA sabotage manual with its section about meetings and conferences; it can feel like that. The duration of these debates isn't usually measured in hours, days, or weeks, but in years. And the people dragging them out and staying in the fights are employed full-time to do exactly that.
It got to the point where I believed that subgrid was dead. FF implemented it but absolutely no one else did, for years.
Is it our fault for tuning out of the debate? Yep. But tactics were employed to achieve that exact outcome. I'm fine admitting that I tuned out. But it was a battle of attrition waged by people who were fine holding up progress indefinitely.
Is that how you want decisions to be made?
Ultimately I'm not too concerned about what you call the masonry feature. But the debate over what to call it was an extreme case of bikeshedding. I would rather have given up the fight over semantics, resolved the non-issues, and shipped the feature years ago. As it stands, we're still years away from actually being able to use the feature in production.
I've stopped waiting for companies, committees, or projects to change course. I don't have an incentive to build consensus within a group of people who fundamentally disagree that the thing I need should exist. Why bother? I have an incentive to spend my time building features that users will use.
There’s no incentive for the companies or their employees to draw out the discussion, especially over something so trivial. It’s far preferable to try to speed through things and get them done in a time frame that allows adoption.
And regardless, if you don’t feel it’s worth your time, then why cast aspersions that it was something clandestine and intentionally hidden? You could have shown up and kept up with it, just like everyone else involved presumably did.
I didn’t ascribe a motive to anyone. Their reasons are their own and it only makes sense that the people who stay in these fights do it because it’s part of their jobs.
There are people who, for whatever reason, keep debates going over small points of disagreement and prevent issues from being settled. Sometimes for years. Right?
The older I get, the more likely I am to recognize and route around or ignore interminable debates. Especially if it’s not for a company, project, or initiative under my direct control.
Remember, the question at the top of this thread was essentially “What happened to ‘masonry’?” Well, there were quibbles over the descriptors.
I don’t care about quibbles. “masonry”, “grid-lanes”, “grid-masonry”, pick one, they’re equivalent. I don’t like it when quibbles block progress.
Sometimes people and companies do want to block things. You’d have to ask them why. Like I said earlier:
> I don't have an incentive to build consensus within a group of people who fundamentally disagree that the thing I need should exist.
Pick your battles… Actually, no, it’s usually better to ignore the fights and just get what you need to get done so you can move on.
Masonry was never “in”, no? Mozilla proposed it and were the only ones to implement it, behind a feature flag. Then WebKit proposed an alternative that was discussed at length:
People have been dragging their feet on subgrid, masonry, etc for almost a decade. I followed it pretty closely for years but stopped when it started turning into a Christopher Guest mockumentary.
Masonry or grid-lanes, who cares? I’m just glad masonry (the feature, Baseline 20XX) and subgrid (Baseline 2023) are finally here.
The article is muddled; I wish he'd split it into two, one for UUID4 and another for UUID7.
I was using 64-bit snowflake PKs (timestamp+sequence+random+datacenter+node) previously and made the switch to UUID7 for sortable, user-facing PKs. I'm more than happy to let the DB handle a 128-bit int instead of a 64-bit int if it means not having to make sure that the latest version of my snowflake function has made it to every DB, or that my snowflake server never hiccups, ever.
Most of the data that's going to be keyed with a uuid7 is getting served straight out of Redis anyway.
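A UUIDv7 is just a 48-bit millisecond timestamp in front of random bits, which is what makes it sortable without any coordinating server. A rough hand-rolled sketch per the RFC 9562 bit layout (Python 3.14 ships `uuid.uuid7()` natively; this is for older versions, and ordering within the same millisecond is not guaranteed):

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """UUIDv7 sketch per RFC 9562: 48-bit unix-ms timestamp, then the
    version nibble, 12 random bits, the variant bits, and 62 random bits."""
    ts_ms = time.time_ns() // 1_000_000
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)
    value = (ts_ms & ((1 << 48) - 1)) << 80  # timestamp in the top 48 bits
    value |= 0x7 << 76                       # version = 7
    value |= rand_a << 64
    value |= 0b10 << 62                      # RFC variant bits
    value |= rand_b
    return uuid.UUID(int=value)

a = uuid7()
time.sleep(0.002)
b = uuid7()
assert a.version == 7
assert a < b  # time-ordered across millisecond boundaries
```

Because the timestamp leads, freshly generated keys land near each other in a B-tree index, which is the main practical win over random UUIDv4 keys.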
I've been tracking my work down to the minute for the past seven years (timer and log book).
I started out trying to get to five "hands on keyboard" coding hours a day, five days a week, and realized that it was unrealistic after the first few years.
Four hours of coding time, five days a week works for me. If I'm a little under I work Saturdays, if I'm a little over I take off early on Friday. Unless I'm sick I put in 20 hours of coding a week. You would not believe how nice this way of working is.
I lost a month to Bazel a few years ago. The documentation had so many holes, and what was there was either out of date or wildly inaccurate. You could not produce an Angular build using the tutorials as written. Everything was wrong. I'm sure Bazel is great if you have a team of people to write bespoke libraries on top of it for each of your targets. I ended up using turbo for the frontend and uv workspaces on the backend.