
I don't think there's that much of a distinction.

The real difference is that a "true professional" already has the software—purchased at full price by themselves or by their employer—and doesn't need a subscription in the first place.


The biggest distinction, in my experience, is that prosumers tend to be means-focused and professionals tend to be ends-focused, so there's less zealotry and evangelism in professional circles.

Also in professional circles, there's usually one or two industry standards and you just use what everyone else is using.

Many people that use professional tools are genuinely doing hobbyist stuff. Especially if they haven't already bought their tools outright.

Besides, this subscription works with Family Sharing and is only $12, so it looks easy to get your money's worth.


This is a weird one. I think their reasoning was that most people don't use Launchpad, so they integrated it into Spotlight to eliminate redundancy.

I much prefer the new app launcher in Tahoe, but it was created at the expense of Launchpad, which some people actually relied on. I don't know why they couldn't have kept both options.


And yet, sometimes the criticism is warranted, and sometimes it's not. That's why it's good not to overgeneralize about patterns.

I think that’s unnecessary waffling. Of course there are exceptions, but the prejudiced negative views that the old and the young hold of each other are generally wrong.

For example, the constantly recurring critique that the music of the young is not about musicality[1] is always wrong. It's as wrong today as it was about Elvis.

[1] https://news.ycombinator.com/item?id=45637667#45639674


Shortcuts.app and AppleScript work for this.

Tailwind is almost too simple to bother using an LLM for. There’s no reason to introduce high-level abstractions (your “real” CSS, I imagine) that make the code more complicated, unless you have some clever methodology.

Can you explain? Tailwind massively reduces overhead for abstraction, classing, documentation, and maintenance.

AFAICT, Tailwind is largely (not entirely) a different, shorter syntax for writing inline styles. (E.g., class="bg-white" amounts to style="background-color: white".)

If you've rejected structural CSS to begin with, I sort of get the point that it saves a lot of typing; otherwise I don't see how it helps all that much over SASS or just modern plain CSS.
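To make that equivalence concrete, a rough sketch (the utility names are standard Tailwind; the generated CSS is from memory and may not match a given version exactly):

  <!-- Tailwind: utility classes in the markup -->
  <div class="bg-white p-4 rounded-lg">Hello</div>

  <!-- Near-equivalent inline styles, spelled out by hand -->
  <div style="background-color: #fff; padding: 1rem; border-radius: 0.5rem;">Hello</div>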


Tailwind is a dirty hack. Normally you declare a class and apply it to elements of the same concept; that's the reason CSS exists in the first place.

Front-end devs got lazy and started writing, for each element, position: absolute; left: 3px; top: 6px; color: red; ...

You could write <font color="red">Hello</font>; that would be a similar level of "cleanliness".
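To spell out the contrast, a sketch (the .error class name and its declarations are just illustrative):

  /* Declare the concept once... */
  .error { color: red; font-weight: bold; }

  <!-- ...and apply it wherever that concept appears -->
  <span class="error">Invalid email</span>
  <span class="error">Password too short</span>

  <!-- versus hand-styling each element -->
  <span style="color: red; font-weight: bold;">Invalid email</span>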


Can’t wait for Headwind CSS implemented as custom elements.

The top menu is not necessarily for common functions. Nor is it true that no one uses it.

Chrome’s unique buried menu breaks user expectations. Casual users have trouble finding it.


Do you have a cite for that? It's a hamburger menu, which is by far the more common menu paradigm in the modern world. Do you imagine some kind of "casual user" who... doesn't have a phone?

I repeat: the menu bar is a dying abstraction, preserved in a consistent form only on the Macintosh (even iOS has no equivalent), because its presence is unavoidable. Users of modern apps don't see it used consistently, so it's absolutely not surprising that Apple's designers aren't doing it either.


The author addresses this. Humans are the same in 2026 as in 1992.

Besides, that interface designers or even the average computer user understands more than in 1992 is highly implausible on its face.


Humans are definitely not the same as in 1992 when it comes to their everyday knowledge of computer interactions.

And even if human cognition itself were unchanged, our understanding of HCI has evolved significantly since then, well beyond what merely “feels right.”

Most UX researchers today can back up their claims with empirical data.

The article goes on at great length about consistency, yet then insists that text transformations require special treatment, with the HIG example looking outright unreadable.

Menu text should remain stable and not mirror or preview what’s happening to the selected text IMHO.

Also, some redundancy is not necessarily a bad thing in UI design, and not all users, for various reasons, can read with a vocabulary that covers the full breadth of what a system provides.


> Most UX researchers today can back up their claims with empirical data.

HCI work in 1992 was very heavily based on user research, famously so at Apple. They definitely had the data.

I find myself questioning that today (like, have these horrible Tahoe icons really been tested properly?) although maybe unfairly, as I'm not an HCI expert. It does feel like there are more bad UIs around today, but that doesn't necessarily mean techniques have regressed. Computers just do a hell of a lot more stuff these days, so maybe it's just impossible to avoid additional complexity.

One thing that has definitely changed is the use of automated A/B testing -- is that the "empirical data" you're thinking of? I do wonder if that mostly provides short-term gains while gradually messing up the overall coherency of the UI.

Also, micro-optimizing via A/B testing can lead to frequent UI churn, which is something that I and many others find very annoying and confusing.


There wasn't any user testing as we know it today; it was mostly top-down application of principles.

To my knowledge, this was all expert-driven at the time.

Empirical validation did not really take off until the late 00s.

https://hci.stanford.edu/publications/bds/4p-guidelines.html

Don Norman took an explicit expert-knowledge-first stance in 2006 and 2011; nothing inherently wrong with that, but it's definitely not research-driven.

"Always be researching. Always be acting."

https://jnd.org/act-first-do-the-research-later/

Tognazzini and Norman already criticized Apple about this a decade ago. While they have many good points, I cannot shake the feeling that they simply feel like they were used just to brand Apple as user-friendly in the 90s, and that Apple never actually adopted their principles and just used them as it fit the company's marketing.

https://www.fastcompany.com/3053406/how-apple-is-giving-desi...

There are a bunch of discussions on this:

https://news.ycombinator.com/item?id=10559387 [2015] https://news.ycombinator.com/item?id=19887519 [2019]


That's interesting, I hadn't heard that point of view before.

> Empirical validation did not really take off until the late 00s.

> https://hci.stanford.edu/publications/bds/4p-guidelines.html

Hmmm, I don't quite see where that supports "Apple didn't do empirical validation"? Is it just that it doesn't mention empirical validation at all, instead focusing on designer-imposed UI consistency?

ISTR hearing a lot about how the Mac team did user research back in the 1980s, though I don't have a citation handy. Specific aspects like the one-button mouse and the menu bar at the top of the screen were derived by watching users try out different variations.

I take that to be "empirical validation", but maybe you have a different / stricter meaning in mind?

Admittedly the Apple designers tried to extract general principles from the user studies (like "UI elements should look and behave consistently across different contexts") and then imposed those as top-down design rules. But it's hard to see how you could realistically test those principles. What's the optimal level of consistency vs inconsistency across an entire OS? And is anyone actually testing that sort of thing today?

> I cannot shake the feeling that they simply feel like they were used just to brand Apple as user-friendly in the 90s, and that Apple never actually adopted their principles and just used them as it fit the company's marketing.

I personally think Apple did follow their own guidelines pretty closely in the 90s, but in the OS X era they've been gradually eroded. iOS 7 in particular was probably a big inflexion point -- I think that's when many formerly-crucial principles like borders around buttons were dropped.


Like the whole recoverability paradigm: it seems more like a feature from a developer's perspective, looking for a reason to exist, than a true user demand.

You have state management for debugging purposes already, so why not expose it to the user?

As an example, in Photoshop no non-professional users care about non-destructive workflows; these things have to be learned as a skill.

Undo is nice to have in most situations, but you can really only trust your own saves and version management with anything serious.

Something as simple as a clipboard history is still nowhere to be found as a built-in feature in macOS, yet it somehow made its way into Windows.


Why is it highly implausible on its face other than the fact it makes arguing against him harder?


Why would UX be getting worse across the board if there is greater understanding now?


Did you mean to reply to me?

The person I replied to said, "that interface designers or even the average computer user understands more than in 1992 is highly implausible on its face"

Think of computer users at the ages of 10, 20, 30, 40, 50, 60, 70, and 80 in 1992. For each group, estimate their computer knowledge when they sat down at a computer in 1992.

Now do the same exercise for the year 2026.

How is it highly implausible on its face that the average computer user in 2026 understands less than the average computer user in 1992?


> Did you mean to reply to me?

I think so.

> The person I replied to said, "that interface designers or even the average computer user understands more than in 1992 is highly implausible on its face"

Yes, I agree with this person.

>How is it highly implausible on its face that the average computer user in 2026 understands less than the average computer user in 1992?

I don't think it is. Particularly with the average user, the bar of understanding is lower now.


> Particularly with the average user, the bar of understanding is lower now.

Can you explain how this is true given that everyone using a computer today has had a lifetime of computer use whereas in 1992 many people were encountering computers for the first time?


Here are a few perfectly acceptable explanations.

1. Computer users were generally well-educated, unlike today.

2. UX designers didn’t inherit any mess and could operate from first principles.

3. The “experience” of modern users—phones, tablets, and software that does everything for you—doesn’t translate the way you think. And it explains why Gen Z seems to have regressed in terms of tech knowledge.


> Can you explain how this is true given that everyone using a computer today has had a lifetime of computer use whereas in 1992 many people were encountering computers for the first time?

The userbase has been watered down with a larger proportion of individuals who are not highly technical.


Oh that statement is so 1992. Millions of people getting a Dell or a Gateway and annoying their techie friend “So now what do I do with this?”

Or 1982.

Users are always non-technical.


This does seem to be what many are arguing, even if the analogy is far from perfect.

