The real difference is that a "true professional" already has the software—purchased at full price by themselves or by their employer—and doesn't need a subscription in the first place.
The biggest distinction, in my experience, is that prosumers tend to be means-focused and professionals tend to be ends-focused, so there's less zealotry and evangelism in professional circles.
This is a weird one. I think their reasoning was that most people don't use Launchpad, so they integrated it into Spotlight to eliminate redundancy.
I much prefer the new app launcher in Tahoe, but it was created at the expense of Launchpad, which some people actually relied on. I don't know why they couldn't have kept both options.
I think that’s unnecessary waffling. Of course there are exceptions, but the prejudiced negative views that the old and the young hold of each other are generally wrong.
For example, the constantly recurring critique that the music of the young is not about musicality[1] is always wrong. It's as wrong today as it was about Elvis.
Tailwind is almost too simple to bother using an LLM for. There’s no reason to introduce high-level abstractions (your “real” CSS, I imagine) that make the code more complicated, unless you have some clever methodology.
AFAICT, Tailwind is largely (not entirely) a different, shorter syntax for writing inline styles. (E.g., "class: 'bg-white'" = "style: 'background-color: white'".)
If you've rejected structural CSS to begin with, I sort of get the point that it saves a lot of typing; otherwise I don't see how it helps all that much over SASS or just modern plain CSS.
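To make the comparison concrete, here is a rough sketch (assuming Tailwind's default config, where bg-white sets a white background and p-4 is 1rem of padding):

    <!-- Tailwind utility classes -->
    <div class="bg-white p-4">Hello</div>

    <!-- roughly equivalent inline styles -->
    <div style="background-color: white; padding: 1rem">Hello</div>

The main practical difference is that the utilities live in a generated stylesheet rather than in a style attribute, so they can still participate in media queries and pseudo-classes, but each class still maps to roughly one declaration.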
Tailwind is a dirty hack. Normally you are supposed to declare a class and apply it to items of the same concept; that is the reason CSS exists in the first place.
Front-end devs got lazy and started writing styles for each element: position: absolute; left: 3px; top: 6px; color: red; ...
You could just as well write <font color="red">Hello</font>; that would be a similar level of "cleanliness".
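For contrast, the class-per-concept approach described above would look something like this (the .error name is just an illustrative example):

    <style>
      /* one class per concept, reused by every element of that kind */
      .error { color: red; font-weight: bold; }
    </style>

    <p class="error">Something went wrong</p>
    <span class="error">Invalid email address</span>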
Do you have a cite for that? It's a hamburger menu, which is by far the more common menu paradigm in the modern world. Do you imagine some kind of "casual user" who... doesn't have a phone?
I repeat: the menu bar is a dying abstraction, preserved in a consistent form only on the Macintosh (even iOS has no equivalent), because its presence is unavoidable. Users of modern apps don't see it used consistently, so it's absolutely not surprising that Apple's designers aren't using it consistently either.
Humans are definitely not the same as in 1992 when it comes to their everyday knowledge of computer interactions.
And even if human cognition itself were unchanged, our understanding of HCI has evolved significantly since then, well beyond what merely “feels right.”
Most UX researchers today can back up their claims with empirical data.
The article goes on at great length about consistency, yet it then insists that text transformations require special treatment, with the HIG example looking outright unreadable.
Menu text should remain stable and not mirror or preview what’s happening to the selected text IMHO.
Also, some redundancy is not necessarily a bad thing in UI design, and not all users, for various reasons, have a reading vocabulary that covers the full breadth of what a system provides.
> Most UX researchers today can back up their claims with empirical data.
HCI work in 1992 was very heavily based on user research, famously so at Apple. They definitely had the data.
I find myself questioning that today (like, have these horrible Tahoe icons really been tested properly?) although maybe unfairly, as I'm not an HCI expert. It does feel like there are more bad UIs around today, but that doesn't necessarily mean techniques have regressed. Computers just do a hell of a lot more stuff these days, so maybe it's just impossible to avoid additional complexity.
One thing that has definitely changed is the use of automated A/B testing -- is that the "empirical data" you're thinking of? I do wonder if that mostly provides short-term gains while gradually messing up the overall coherency of the UI.
Also, micro-optimizing via A/B testing can lead to frequent UI churn, which is something that I and many others find very annoying and confusing.
Tognazzini and Norman already criticized Apple about this a decade ago. While they have many good points, I cannot shake the feeling that they simply feel they were used to brand Apple as user-friendly in the 90s, and that Apple never actually adopted their principles but just used them where it fit the company's marketing.
Hmmm, I don't quite see where that supports "Apple didn't do empirical validation"? Is it just that it doesn't mention empirical validation at all, instead focusing on designer-imposed UI consistency?
ISTR hearing a lot about how the Mac team did user research back in the 1980s, though I don't have a citation handy. Specific aspects like the one-button mouse and the menu bar at the top of the screen were derived by watching users try out different variations.
I take that to be "empirical validation", but maybe you have a different / stricter meaning in mind?
Admittedly the Apple designers tried to extract general principles from the user studies (like "UI elements should look and behave consistently across different contexts") and then imposed those as top-down design rules. But it's hard to see how you could realistically test those principles. What's the optimal level of consistency vs inconsistency across an entire OS? And is anyone actually testing that sort of thing today?
> I cannot shake the feeling that they simply feel they were used to brand Apple as user-friendly in the 90s, and that Apple never actually adopted their principles but just used them where it fit the company's marketing.
I personally think Apple did follow their own guidelines pretty closely in the 90s, but in the OS X era they've been gradually eroded. iOS 7 in particular was probably a big inflexion point -- I think that's when many formerly-crucial principles like borders around buttons were dropped.
The person I replied to said, "that interface designers or even the average computer user understands more than in 1992 is highly implausible on its face"
Think of computer users at the ages of 10, 20, 30, 40, 50, 60, 70, and 80 in 1992. For each group, estimate their computer knowledge when they sat down at a computer in 1992.
Now do the same exercise for the year 2026.
How is it highly implausible on its face that the average computer user in 2026 understands less than the average computer user in 1992?
> The person I replied to said, "that interface designers or even the average computer user understands more than in 1992 is highly implausible on its face"
Yes, I agree with this person.
> How is it highly implausible on its face that the average computer user in 2026 understands less than the average computer user in 1992?
I don't think it is. Particularly with the average user, the bar of understanding is lower now.
> Particularly with the average user, the bar of understanding is lower now.
Can you explain how this is true given that everyone using a computer today has had a lifetime of computer use whereas in 1992 many people were encountering computers for the first time?
1. Computer users in 1992 were generally well-educated, unlike today.
2. UX designers didn’t inherit any mess and could operate from first principles.
3. The “experience” of modern users—phones, tablets, and software that does everything for you—doesn’t translate the way you think. And it explains why Gen Z seems to have regressed in terms of tech knowledge.
> Can you explain how this is true given that everyone using a computer today has had a lifetime of computer use whereas in 1992 many people were encountering computers for the first time?
The userbase has been watered down with a larger proportion of individuals who are not highly technical.