Hacker News | DharmaPolice's comments

>The more different the genetic material is, the less you care

This is sort of true, but it misses that we don't actually have DNA sensors built into our eyes. Instead we rely on heuristics like the Westermarck effect, where we will (normally) tend not to find someone we lived with as a child attractive, regardless of whether they're a blood relation or not.

We influence who (or what) is in our group through our behaviour, thoughts and associations. Look at the vast number of people who value their dog or cat over other human beings. It's unlikely their dog is closer to them, genetically speaking, than any single human on Earth, but they spend time and invest emotionally in their pet, so they form a bond despite the genetic distance.

If you see a child being hurt, it likely evokes a slightly stronger emotional response if the child reminds you of someone in your own life. Often this will be someone who looks like you/your family (i.e. is genetically similar to you), but it might be some other kid you've grown attached to who is not related at all.

So yes, we are driven by a calculating selfish-gene mechanism, but we're also burdened/gifted with a whole bunch of emotional and social instincts and rely on imperfect sensors, not tricorders. It's why people can form group identities over all sorts of non-genetic characteristics (e.g. religion, nation, neighbourhood, sports team affiliation, political ideology, vi vs emacs, etc).


That's completely true, because there are many aspects to what is "my group" and what isn't, but the key point is that people naturally care about their group more than they care about strangers. Thinking in terms of genetics provides a simple model that's good enough to explain a lot of phenomena. But yes, if you want to go deeper, you need to consider other factors - at first glance it seems like "culture" is the most important one.

B would require a fairly large shift in approach, since currently the primary way we interact with the cloud is via browsers, which are probably the biggest single consumers of client memory.

It probably would have the same effect, but the point is that a lot of people find it easier to stick to one meal a day (or similar) than multiple smaller meals.

I've always thought that you can lose weight on almost any diet, as long as it makes you think before you eat - and almost by definition, any diet will make you do that. For me, at least, most (probably all) of the time I eat it has nothing to do with hunger, and if I just stop for a second I'll probably not eat at all.

> a lot of people find it easier to stick to one meal a day (or similar) than multiple smaller meals

I am highly skeptical of this take... is there any science behind it?


I believe research has mixed results on dietary compliance.

I can only speak anecdotally - if I start doing something enjoyable, it's hard to stop while it's still enjoyable. One of the benefits of an IF diet is you can start eating your main meal and continue until you're full.


Records of cases involving children are already excluded so that's not a relevant risk.

I think a good rule of thumb is to default to assuming a question is asked in good faith (i.e. it's not a trick question). That goes for human beings and chat/AI models.

In fact, it's particularly true for AI models because the question could have been generated by some kind of automated process. e.g. I write my schedule out and then ask the model to plan my day. The "go 50 metres to car wash" bit might just be a step in my day.
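As a sketch of that scenario (the schedule entries and `build_prompt` helper are invented for illustration), a small script might feed schedule steps to a model verbatim, so no human "intent" sits behind the resulting question:

```python
# Hypothetical sketch: a script that turns a day's schedule into a prompt
# for a planning assistant. An odd-sounding step like "go 50 metres to
# car wash" reaches the model verbatim; there is no asker whose motives
# the model could usefully second-guess.
schedule = [
    "drop kids at school",
    "go 50 metres to car wash",
    "dentist at 14:00",
]

def build_prompt(steps):
    """Number the steps and wrap them in a planning request."""
    lines = [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    return "Plan travel between these stops:\n" + "\n".join(lines)
```

In a pipeline like this, treating an odd step as a trick question has no payoff: the only useful response is a literal answer.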


> I think a good rule of thumb is to default to assuming a question is asked in good faith (i.e. it's not a trick question).

Sure, as a default this is fine. But when things don't make sense, the first thing you do is toss those default assumptions (and probably we have some internal ranking of which ones to toss first).

The normal human response to this question would not be to take it as a genuine question. For most of us, this quickly trips into "this is a trick question".


Rule of thumb for who, humans or chatbots? For a human, who has their own wants and values, I think it makes perfect sense to wonder what on earth made the interlocutor ask that.

Rule of thumb for everyone (i.e. both). If I ask you a question, start by assuming I want the answer to the question as stated unless there is a good reason for you to think it's not meant literally. If you have a lot more context (e.g. you know I frequently ask you trick or rhetorical questions or this is a chit-chat scenario) then maybe you can do something differently.

I think being curious about the motivations behind a question is fine but it only really matters if it's going to affect your answer.

Certainly, when dealing with technical problem solving, I often find myself asking extremely simple questions, and it often wastes time when people don't answer directly, instead answering some completely different question or demanding explanations of why I'm asking for certain information when I'm just trying to help them.


> Rule of thumb for everyone (i.e. both).

That's never been how humans work. Going back to the specific example: the question is so nonsensical on its face that the only logical conclusion is that the asker is taking the piss out of you.

> Certainly when dealing with technical problem solving I often find myself asking extremely simple questions and it often wastes time when people don't answer directly

Context and the nature of the questions matters.

> demanding explanations why I'm asking for certain information when I'm just trying to help them.

Interestingly, they're giving you information with this. The person you're asking doesn't understand the link between your question and the help you're trying to offer. This is manifesting as a belief that you're wasting their time and they're reacting as such. Serious point: invest in communication skills to help draw the line between their needs and how your questions will help you meet them.


>Going back to the specific example: the question is so nonsensical on its face that the only logical conclusion is that the asker is taking the piss out of you.

I would dispute that that matters in 99.9% of scenarios.

>The person you're asking doesn't understand the link between your question and the help you're trying to offer.

Sure, I get that, and I do always explain why I need to know something, but it does add delays to the process (either before or after I ask). When I'm on the receiving end of a support call, I answer the questions I'm asked (and provide supplementary information if I think they might need it).


> That's never been how humans work. Going back to the specific example: the question is so nonsensical on its face that the only logical conclusion is that the asker is taking the piss out of you.

Or a typo, or changing one's mind part way through.

If someone asked me, I may well not be paying enough attention and say "walk"; but I may also say "Wa… hang on, did you say walk or drive your car to a car wash?"


Sure, in a context in which you're solving a technical problem for me, it's fair that I shouldn't worry too much about why you're asking - unless, for instance, I'm trying to learn to solve the question myself next time.

Which sounds like a very common, very understandable reason to think about motivations.

So even in that situation, it isn't simple.

This probably sucks for people who aren't good at theory-of-mind reasoning. But perhaps surprisingly, that isn't the case for chatbots. They can be creepily good at it, provided they have the context - they just aren't instruction-tuned to ask short clarifying questions in response to a question, which humans do, and which would solve most of these gotchas.


I don't mind people asking why I asked something, I'd just prefer they answer the question as well. In the original scenario, the chatbot could answer the question as written AND enquire if that's what they really meant. It's the StackOverflow syndrome where people answer a different question to the one posed. If someone asks "How can I do this on Windows?" - telling me that Windows sucks and here's how to do it on Linux is only slightly useful. Answer the question and feel free to mention how much easier it is in Linux by all means.

I personally love explaining to people who might want to solve the issue next time so I'm happy to bore them to tears if they want. But don't let us delay solving the problem this time.


Software (that is running on hardware) isn't a great example - you'd be better off going with something like prime numbers. They don't really "exist" in the same way a toaster does. Souls also don't exist (citation needed etc) but are a similarly useful (for some people) way of thinking about the world.


Currently, if someone posts here (or in similar forums elsewhere), there is a convention that they should disclose if they comment on a story related to where they work. It would be nice if the same convention existed for anyone who had more than, say, ten thousand dollars directly invested in a company/technology (outside of index funds/pensions/etc).


A browser plugin that showed the stock portfolios of the HN commenter (and article-flagger) next to each post would be absolutely amazing, and would probably not surprise us even a little.


The person was referring to gaming where most PC players are sitting closer than 3 metres from their screen.


This is speculation, but generally rules like this follow some sort of incident - e.g. someone responds to an FOI request and accidentally discloses more information than intended due to metadata. So a blanket rule is instituted not to use a particular format.
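As a hedged illustration of the kind of leak meant here (the `docx_metadata` helper is invented, but .docx files genuinely are ZIP archives containing a `docProps/core.xml` part), a few lines suffice to surface what a document quietly carries:

```python
# Office .docx files are ZIP archives; the docProps/core.xml part can
# carry author names, revision counts and other details the sender
# never meant to release alongside an FOI response.
# (docx_metadata is an illustrative helper, not a real library call.)
import zipfile

def docx_metadata(path_or_file):
    """Return the raw core-properties XML embedded in a .docx file."""
    with zipfile.ZipFile(path_or_file) as z:
        return z.read("docProps/core.xml").decode("utf-8")
```

Since a plain release process rarely inspects this part, a blanket "convert to PDF / plain text" rule is a blunt but cheap fix.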


Noise insulation.

