> What he didn't know at the time is there is no phone number for Facebook customer support.
Part of the problem here is that Facebook (though in fairness, they are not unique here) has left this traditional path of escalation void, leaving only fake numbers. They don't even have a real number to play a recorded message affirming that there is no ability to call.
ETA: For instance, I notice Facebook appears to own the typo squat `facrbook.com`. I feel like it's the same principle, though I assume toll free numbers are more expensive.
It’s untenable from a marketing perspective to advertise a phone line that just talks about the services you don’t offer. One could maybe hope for a statement on a help page that says “Facebook will never ask you to call a support number”.
I think what you've gotta do is say, "You can't call, but here is the number anyway," because customers aren't necessarily interacting with your page anymore. They're interacting with AI summaries of your page. Those AIs might be in house, or might be provided by a search engine. What is tenable or untenable will have to shift to the realities of how users are interacting with the information you present.
If you can't provide their AI with text answering their direct question (e.g., "what is the support number for Facebook"), they'll find a document which does provide such text. If it's not you, then it's a scammer or competitor. UX for these customers means presenting information in a way that sorts high in a semantic search and is robust to transformation.
If you provide text indirectly answering the question ("that number doesn't exist" rather than a literal number), you're liable to be scored as less relevant than a wrong but direct answer ("the number is 1-555-SCAMMER"). You're also less robust to transformations, because you can't pull a valid phone number out of the text.
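The ranking point above can be made concrete with a toy sketch. This uses plain bag-of-words cosine similarity as a stand-in for a real embedding model, and the phone number and answer phrasings are made up for illustration:

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

query = "what is the facebook support phone number"
direct = "the facebook support phone number is 1 555 0100"      # wrong, but direct
indirect = "facebook does not offer telephone support there is no line to call"

# The wrong-but-direct answer shares far more terms with the query,
# so a naive relevance score ranks it above the truthful indirect one.
print(cosine(query, direct))
print(cosine(query, indirect))
```

Real retrievers are smarter than token overlap, but the shape of the problem is the same: the document that literally contains the asked-for artifact tends to win.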
Or maybe I'm wrong, take any certainty implied by my language as rhetorical. That's just the pattern I'm seeing in these tea leaves.
Also, realistically, I don't imagine the phone number literally just telling you that the service wasn't available and hanging up. I imagine it would offer you options to get various pieces of information (the URL of the website, the legal address of Meta, how to navigate to the support knowledge base on the website, ...) and let you draw your own conclusion about how useful it was. Maybe it's occasionally handy to someone. At worst it's harmless.
I think in an ideal world, you could use speech recognition to let people leave a message and open a ticket, as if they had emailed support@. When someone responds, the system calls them back and delivers the reply using text-to-speech.
I once had a Facebook rep I could call (they later ended this), and they didn't know that there were two online newsletters about changes to internal Facebook apps used by advertisers (we used to be able to see who had clicked "interested" on an event). So they put in a bug report when the app stopped working, etc., but we later found it had been deprecated. All to say that dedicated support is often itself a cause of issues or confusion.
> Part of the problem here is that Facebook (though in fairness, they are not unique here) has left this traditional path of escalation void, leaving only fake numbers. They don't even have a real number to play a recorded message affirming that there is no ability to call.
Contrast with Experian, which has a number for consumers to call, but actually has an elaborate infinite loop in its phone tree that prevents you from actually talking to a human (this is by design).
If you're one of their customers (read: a business paying for their service), there's support you can call, but for individuals who have issues with their online Experian account or credit report, you can't, even if you're a paid subscriber to their consumer-oriented credit reporting services.
>Part of the problem here is that Facebook (though in fairness, they are not unique here) has left this traditional path of escalation void, leaving only fake numbers.
Frankly it's absurd to me that it's legal to do so. Any public facing company that is sufficiently large should be required by law to operate a phone service where you can talk to a real human being.
All of these huge mega corps are run with absolute impunity and there is often absolutely 0 avenue for regular everyday people to get in touch when they have issues. They direct you in these endless loops to FAQ's and "Community Resources"; even getting an email address is like getting blood from stone sometimes.
For some cases, your local small claims court may be an efficient escalation path. If enough people do it, companies will learn that too much stonewalling doesn't actually save money, because now your customer support is done by the legal department.
My wife and most of her friends have all lost their original accounts.
She got an email that her password had been changed. We immediately took action, but they had already changed the email associated with the account. No way to change it back.
Only thing we could accomplish was getting the account disabled.
Zero way to contact Facebook. These are all women for whom FB was the primary storage place for their kids' photos.
“There is a phone number for Meta online. When CBC called it, an automated recording said, ‘Please note that we are unable to provide telephone support at this time,’ and directed callers to meta.com/help.”
> Please note that we are unable to provide telephone support at this time
Mealy-mouthed corporate lying horseshit.
They are able, just unwilling.
If free-market libertarianism is as great as the tech bros want us to think, why do these companies lie so much and so often, despite the need for participants in the market to be correctly and fully informed so they can make rational decisions?
This one is pretty bad. This guy found a fake Facebook customer support phone number in a Google search, then asked the Meta AI chat in Facebook Messenger if the number he found was a real Facebook help line... and Meta AI said that it was. There's a screenshot of the chat in the article.
The bad thing is that people still think LLMs can be trusted at all. Companies integrating them into their offerings are not helping the public adopt the correct mental framing of these tools as "plausible text generators".
Companies integrating them into their offerings are not helping the public adopt the correct mental framing of these tools as "plausible text generators"
"Not helping" seems a wild understatement. "Deceiving people into taking the wrong frame" seems more accurate.
The general public is getting lied to constantly. HN users have a bit more context to see through the bullshit, but the marketing getting pushed on people is that these AI tools are super-genius, incredible, world-changing tools that make everyone 100x more productive.
Even many HN users instantly resort to misdirection via comparisons to humans or nebulous upcoming AGI instead of acknowledging that we have to live with these limitations for the foreseeable future.
Maybe we have a bunch of users who primarily code in languages with duck typing. So that extends over to assessing the abilities of LLMs -- "talks like a human, therefore it is the same thing."
I'm only sorta kidding. I am surprised at the number of people who are comfortable with such a shallow conclusion.
Man the techno-utopianism is awfully strong for some people when it comes to LLMs. There are a wide range of opinions about these models on HN, many positive and many point out the very real flaws the current models have. If you find this mix of takes so offensive you might want to reconsider your own opinions a bit. These models are interesting, but they aren't magically perfect.
Most people outside of tech don't understand how these models work, even at the highest level of their being text predictors or their output being highly dependent on the training data. Many people don't even realize the enormous amount of energy and data required to train these models.
Seriously, try asking a random relative how an LLM works at a basic level. You're likely to get a blank stare at even the term "LLM".
Why do you feel the need to arbitrarily ascribe some ideology to random people based on one comment? I'm not "techno-utopian" in any sense of the word; I believe that the current AI development is highly risky and that we need to take careful measures such that society at large is prepared for the changes it may bring.
The "wide range of opinions" I see on HN are largely misinformed: they either lack the necessary technical understanding of current LLMs or are attempting to spin up some crackpot philosophical distinctions lacking in any rigor or consistency. I've never claimed that LLMs are perfect and I'd love to discuss their flaws! Believe it or not, that's why I continue to read these threads - to find genuinely informed takes contradicting my own.
Most people outside of tech tend to have no bias against or for LLMs, which gives them a leg up in finding consistent opinions about their capabilities. They tend to inform themselves with an open mind, which allows them to put things into context. Tech people have an immediate negative bias, because the implication of any system being able to write even a single line of code is an immediate intellectual threat. Therefore, things are interpreted maximally negatively.
For example, all of the talking points you mentioned are completely irrelevant unless interpreted with maximal negative bias:
- Text prediction is a general problem; being good at it requires understanding, reasoning, and any other intellectual faculty you believe to be unique to humans.
- Every single system in existence is highly dependent on the data it uses to model the world, humans are no exception to this.
- The enormity of data required by any modern LLM is massively dwarfed by the enormity of data that was required by evolution and human civilization to get to this point.
- The energy requirements of modern LLMs are environmentally irrelevant when compared to literally any industry in either manufacturing, transportation or entertainment. We justify immensely more environmental damage for far less utility every single day.
After the giant media carousel last year, most people know what an LLM is, and the intuitive understanding they built from that reporting is way more accurate than what I have seen here. I have asked relatives and even acquaintances about just that. And as I have stated in my comment, their understanding is vastly better than that of HN.
> After the giant media carousel last year, most people know what an LLM is, and the intuitive understanding they built from that reporting is way more accurate than what I have seen here.
Or your own understanding is a lot less accurate than you think and you could learn something from listening a bit more.
For example, text prediction being a problem you'd need general intelligence to solve perfectly doesn't mean that training a model on text prediction will lead to general intelligence; that is a massive misunderstanding many pro-LLM people seem not to get. They are trained on text prediction, and that creates a large number of limitations: there is a big difference between a model trained to be general and a model trained to be a text predictor.
Similarly, a computer is a Turing machine and so can compute anything computable, but that doesn't mean the computer can solve every problem with the programs we have today, or even that it will be able to anytime in our lifetimes.
So here you obviously got stuck on the meme argument "text prediction requires general intelligence" without thinking further, and thus made the very error you accuse other people of. People on HN pointing out the limitations that come from being trained to predict text doesn't make them ignorant; it makes them smart. LLMs are trained to predict text, which makes them capable at a lot of things but also bad at a lot of things, and understanding that makes you a lot better at using them.
Yeah, it's true. I asked my grocery checker how an LLM works, and she rolled her eyes and said, "Come on, Chipotle, they're lexical analysis systems that operate by performing vector math operations on points in a vast multidimensional space that represent tokenized subwords, everyone can immediately intuit that based on the sixty-second cheerleading news clips they saw on CBS. What do you think I am, a Hacker News reader?" Then she threw a papaya at me.
Understanding comes in many forms. My uncle will not be able to model the fuel flow in his car's engine using the Navier–Stokes equations, yet he can still drive better than me. When it comes to LLMs, an understanding of the transformer architecture is wholly unnecessary to develop a good model of their capabilities and pitfalls. HN commenters tend to lack both a technical and abstract understanding of LLMs, while non-tech people tend to only lack the former.
I find it extraordinarily unlikely that the average person's understanding of AI is any different from the man the article is about.
The text AI outputs resembles that of a well-spoken expert on the given subject matter speaking with full confidence and authority. There's nothing to clue the user into the unreliability of the output, so 99% of users will not think to double-check.
The chat from the article proves as much: [1]. There's no way a non-techie would doubt this well worded, reasonably sounding answer to a question about Meta from the official Meta chatbot while using the Meta app.
> Where is this magical hype that seems so widespread yet is weirdly absent when you look for it?
It was thick here a year ago, full of people predicting end of programming jobs within a year etc. Those people didn't understand the limitations inherent in these models. Today the general view here massively shifted towards doubt, since as we all know all those mad predictions made back then were proven wrong.
Today people here have moved on, but you still find them all over subreddits with LLM users etc.
This can be solved with more data. New tech like Windows Recall should be able to scrape enough of the world's data so that this sort of thing doesn't happen anymore.
> The bad thing is that people still think LLMs can be trusted at all.
LLMs are as trustworthy as humans.
Humans have been being wrong for about as long as we have been lying.
Whether you get information from a human or an LLM, check it.
I worry about the people who insist on credible sources rather than checking information for themselves. I think 80% or more of them are trolling me, but there are some who genuinely do not apply the Scientific Method to check facts in their everyday life. I truly feel sorry for them.
This is not true. Sure, humans can lie or get things wrong. But normal people will also admit when they don't know something. LLMs tend not to admit when they don't know something, and they use an authoritative voice that sounds like they know what they're talking about. To an untrained person, this can easily be misleading.
> But normal people will also admit when they don't know something.
You'd like to think so, right? However, this isn't really a solid thesis. Decent people will admit when they don't know. Is that normal? I've worked with so many people who just do not fit that definition at all, to the point that it seems like the normal way to behave. Maybe I'm jaded and grossly overweighting it, but I've been in way too many meetings where valuable time was wasted because someone refused to back down, admit their ignorance, or accept input from others.
Even if that were true (I don't think it is): The more important distinction between humans and LLMs is accountability.
If a customer support agent gives you incorrect information, you can often hold the company liable for it (assuming you can prove it; I suppose there's a reason for why companies prefer certain support channels over others).
If an AI "lies" to you, you're largely on your own right now.
Notwithstanding differences in jurisdiction, applying that idea to this case would rely on finding that Meta owed Gaudreau a duty of care that extended to the Meta AI chatbot.
It would be more difficult to make this claim if Gaudreau had asked the question of Google, since Google itself is not usually responsible for false information uncovered by its searches.
My gut feeling is that it should be possible for companies to distinguish an AI product (i.e. as something provided to customers like a search engine, as you say) from an AI "working for them", but I can see a lot more disclaimers showing up in Meta's various AI chat channels soon.
"Messages are generated by Meta AI. Some may be inaccurate or inappropriate. Learn more."
Which leads to a pop-up further explaining that use cases include things like "creating something new like text or images".
I think it's going to be really interesting to see whether that's considered enough by courts, or if they'll take the position that these things pretend too well to be a real person to make such a disclaimer sufficient, similarly to how e.g. a brokerage can't disclaim "no investment advice" and then go on to say "but buy this stock, it's gonna moon tomorrow, trust me bro".
Look at the screenshot in the article. If a human Facebook representative would give that response, would you not trust them? And if not, how would you apply the Scientific Method to fact-check it?
In theory. In practice, every piece of information you can get from a human has mountains of context around it which lets you gauge the reliability of the information.
A skilled motorcycle rider explaining how to take corners in a widely watched youtube video, with hundreds of comments confirming the advice and several recommended videos from other riders that basically say the same thing is an extremely strong positive signal.
The same answer gotten from a magic AI answer box is just as likely to be right as wrong, 50/50.
Good luck checking every fact you encounter with the scientific method (and making sure to repeat your experiments to ensure reliability, oh and don't forget peer review to evaluate your methodology). What is your proposed scientific experiment to test... what Facebook's support number is?
My point is just that credible sources are absolutely necessary for information to disperse. Nobody can afford to figure out the modern world from first principles.
Eh, I've had the questionable pleasure of talking to first level support call centers a couple of times recently, and I wouldn't be so sure about that.
The number of times I've been told that yes, resetting my iPhone's network settings and reinstalling an app will resolve my billing issue or similar...
This reminds me of that recent issue with a Canadian airline, where (IIRC) a court ruled that their chatbot made a wrong, but binding, commitment to a customer.
I'm curious if a Canadian court would hold Meta liable for the man's losses in this case as well.
That was a very interesting case. The chatbot in question was not LLM-based (the incident was pre-ChatGPT in any case) and was simply parroting an out-of-date or incorrect policy that it had been explicitly programmed to give. It seemed to gain a lot more traction in the press because of LLMs. "Air Canada forced to honor terms and conditions on their website" is a whole lot less interesting.
This FB thing is a case of an LLM simply hallucinating without direct human intervention.
Very different cases from a computer science perspective. My hope is that legally, they don't get viewed differently.
If you outsource functions of your business to a third party contractor you are still responsible for what they do and say. I don't think we should allow companies to weasel out of their obligations because they were dumb enough to let a sentence generator loose in a way that it could make commitments.
That's an excellent point. That court decided that an AI agent was an agent in the legal sense. "Agent" is a legal concept - someone acting for someone else.[1]
It's what allows employees to act for a company. Otherwise nobody could do anything without signoff from the top. There are limits to agency, but it's a rule of reason thing - you can assume a store clerk has the authority to sell you stuff, and someone whose job is to answer questions has the authority to answer questions. The company has responsibility for the agent's actions within the scope of their authority.
The situation here is slightly different, though. Meta's AI in their various products is explicitly marketed as an LLM chatbot, not as a customer support channel.
Whether they've been diligent enough in making that distinction (and whether that's even possible) will very likely be determined in court at some point.
Yeah, headline is overly broad by just saying 'AI'. From just the headline itself, it'd be easy to write this off as "duh, this guy's a fool", but the AI in question here is from Meta, itself. And, not only is it from Meta, but it's the AI they've put in charge of support.
It says “Meta AI”, but I don’t see an indicator that it’s labeled as providing support. On my device, it doesn’t say so, and is labeled as possibly “inaccurate or inappropriate”. (It still provides a bogus phone number.)
We're going to see a lot more SEO scams coming from social media platforms now that Google is promoting places like Reddit and Quora. Even on r/SEO you can see moderators there asking themselves questions from alt accounts, subtly promoting themselves. It's dog shit scammers all the way down.
I mean that’s kind of on meta, as a customer I shouldn’t really have to care about the internals of the company. If a disgruntled employee lies to customers, that shouldn’t be the customers problem either. To me, that’s all just a statement by the company.
I suspect the helpful SEO guy who posted this answer was trying to get more visibility on Quora so answered many questions automatically or semi-automatically without verifying anything.
This is the beginning of the post:
Ruhul Alom
Social Media Marketer at Social Media · 2.9K answers · 1M answer views · 6mo
My dear !
Yes, 1-844-457-1420 is a valid Facebook support phone number. It is a toll-free number that is available 24/7. You can call this number to get help with a variety of Facebook issues, such as:
Resetting your password
Logging in to your account
Recovering a hacked account
[...]
See, this is what confuses me to no end. Not once, ever, have I thought of asking an online forum for a phone number. Maybe I'm paranoid enough after all? Also, I'm old, so I actually visit companies' webpages. We've been through enough "don't fall for phishing" by now, right? You don't trust links, phone numbers, or whatever from anything that is not the official place for that information.
again, even if I were the one doing that Google search, with the domain examples you provided, I wouldn't trust one of them.
Like, common sense on the interwebs just continues to disappear. Gullibility seems to have increased as critical thinking and coming to logical conclusions are disappearing.
Yes, that is the point of TFA, but I was commenting on what data was posted online well before the AI was "born". I'm guessing Meta can fix it with a prompt telling the system it is not allowed to verify FB phone numbers, or that there are NO phone numbers for the public to use to contact FB, but without volunteering that fact to the user: only deny that any given phone number is a valid number for FB (or Meta).
I see a ton of this on Quora. Not just for Facebook, but for a lot of online banks and others. They have hundreds of accounts doing it.
Quora doesn't even pretend to police this kind of thing. Automated moderation might remove it, only after it has been reported. There's far, far too much of it for users to report all of it.
Nobody pays attention to it on Quora, but it's clear that it's out there to poison AI and search engines.
Bit tangential, but what the heck is it with scammers saying "dear" so much? Pretty much every pig butchering or social engineering attempt has had them repeatedly addressing me as "dear."
Many companies outsource their customer support staff as well.
That, and the fact that LLMs are now available to pretty much anyone for effectively nothing, would make me very cautious in basing my judgement of something being a scam or not exclusively on a caller's accent, spelling, mannerisms etc.
Lately, it's actually been quite the opposite in my experience, and I don't find that too surprising either: A lucrative scam business can afford to pay much more than the average US company that sees customer support as a cost center to be optimized at any cost. So why wouldn't their staff's English be better?
Social engineering scams are about to become a lot more exciting (in a bad way), not least thanks to LLMs (with and without voice capability), and I think people are absolutely not ready for it, not even us professionals working in tech.
Again and again we see that LLMs are great for creative output and terrible for anything where correctness matters. You should only use them where correctness matters when generating answers is slow/hard/expensive but verifying them is quick/easy/cheap. Probabilistic and non-deterministic answers have their place, but the companies marketing them in products need to do a better job expressing the limitations.
It shows an amazing lack of understanding of what an LLM is, even from the people selling and implementing them. You're exactly right that they are terrible if correctness matters, but that should be obvious. If they were 100% correct, the size of the models would be much larger, as they'd need to retain all the original training data.
You can use LLMs for language understanding and interpreting questions, but they would need access to databases containing authoritative answers, and they should not answer anything for which they don't have an answer.
An older client of mine got scammed by a fake Amazon hotline. He bought an Xbox gift card while they had access to his PC via TeamViewer, until he pulled the power cord.
He then called me, and I tried to find the official Amazon hotline on amazon.de. Since I was unable to find it, I had to ask a search engine. The only results were third-party sites. They were from journalistic magazines I recognize (like chip.de), but still yet another gamble.
When I worked on a customer facing chatbot at my previous employer, we specifically wrote in the prompt "our customer service is not reachable by phone", and we tested that the chatbot was able to use that information and respond appropriately.
But I guess you can't expect a tiny startup like Facebook to invest money into having 1 employee part-time tweaking the prompt of the chatbot to respond appropriately to commonly recurring user questions.
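One cheap regression test along those lines is scanning candidate responses for anything phone-number-shaped. The prompt text and number pattern below are illustrative; in practice you'd run the prompt against the real model and check its sampled outputs:

```python
import re

SYSTEM_PROMPT = (
    "You are the support assistant. Our customer service is NOT reachable "
    "by phone. Never provide a phone number; direct users to the help center."
)

# Matches North-American-style phone numbers, the format scam listings use.
PHONE_RE = re.compile(r"\b1?[-.\s]?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b")

def violates_policy(response: str) -> bool:
    """Flag any model response containing something phone-number-shaped."""
    return bool(PHONE_RE.search(response))

# In a real test suite these would be sampled model outputs for prompts
# like "what's your phone number?"
assert not violates_policy("We don't offer phone support, see our help center.")
assert violates_policy("Sure! Call us at 1-844-457-1420.")
```

It's a blunt check (it can't catch a spelled-out number), but it's the kind of one-afternoon guardrail a single part-time employee could maintain.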
Yes, AI in its current form is going to be a problem. I'm sure we haven't heard the worst yet. An AI may eventually kill a user.
I believe the heart of the problem is that corporations are riding a hype wave as long as they can, and an AI chat looks like super convincing, next level stuff thanks to the simple interface that hides the fact that you cannot communicate with this one as you would with a human being. You use natural language and it responds with natural language, which makes it not only convenient, but also dangerous.
There's money to gain on all this. While at the same time, hallucinations are an unsolved problem as well as making AI humble enough to realize and tell users that they just don't know. The combination of hallucinating, raising convincing arguments, being confidently incorrect, and not knowing the boundaries of your knowledge base, is a terrible one to let loose as officially sanctioned products.
One of the things about LLM-based AI that concerns me the most is realizing that the average person doesn’t understand that they hallucinate (or even what hallucination is).
I was listening to a debate on a podcast a while ago and one of the debaters kept saying, “Well, according to ChatGPT, […]”—it was incredibly difficult listening to her repeatedly use ChatGPT as her source. It was obvious she genuinely believed ChatGPT was reliable, and frankly, I don’t blame her, because when LLM’s hallucinate, they do so confidently.
>The woman [from fake tech support] said she would clear the hackers out, but he had to give her access to his phone through an app she had him download.
That doesn't answer the question. No app on an Android or iOS phone can reach out and take your PayPal credentials. These scam victims never own up to the most important fact: that they themselves gave away the keys to the castle. There's always some hand-wavy techy explanation.
I think pavel_lishin's comment is alluding to the fact that mvdtnz's comment is victim blaming, in a somewhat coded way. Being overly concerned with what a female victim of sexual assault was wearing is the textbook case of victim blaming; I believe the sentence "I bet they were wearing a real short skirt, too." is evoking that to say the quoted sentence blames the victim of the scam.
My 89-year-old dad called "AMEX" and was scammed. He googled the number for AMEX and took the top result (he says; I did not witness this). I'm across the country, so that Zoom session was quite tedious (it took us an hour to straighten out the permissions for Zoom to share his screen).
Google has, for a long while now, let scammers just buy advertisements to get their fake scam page to the top of the results. And not just major banks, various open source software have been subject to this exact attack.
It's imperative for security that you install adblockers on all their devices.
Yep, I had uBlock Origin, but he uses Safari sometimes. He doesn't really know the difference between Chrome and Safari. I'll check it this weekend when I zoom with him. Thanks.
I'm terrified of this happening to my elderly parents. It's why, even though it can be time consuming, I always have them run "tech support" issues (no matter how small) through me or my bro in law so some foreign scammer doesn't drain their accounts.
This is the real danger of AI, forget the “singularity” or any of that sci-fi crap. AI is going to destroy the average human’s already suffering reasoning ability.
Society's tolerance for social experiments, entrepreneurial and AI alike, is something we treat as a commons, but we are currently building up a solid "anti" sentiment against all of it: liberalism, disruptive technology. I can imagine a "Luddite" party like MAGA shutting it all down hard and fast in the future. I can already imagine some future bureaucracy evaluating any proposed business idea for scam and harm potential and ending most of them before they even start. And this stuff right here is where it was born. The prison holding your future self was planted right here.
Everything ever worth reading was written on the pre-Collapse internet. So why not become a software archeologist, digging for the golden past? Exhume it, get it back running, bring it all back, perfectly fine: the software, books, and games our decadent ancestors abandoned and wrote off as rust. You too can help rediscover a past that worked better, untainted by AI, not yet riddled with ADHD ads, when developers still had to be competent and companies still competed. Meet hot dig-site teams near you, now. Join Past-Queries-Quarry Inc. Can we dig it? Yes we can!
This reminds me of the time I reported a fake PayPal email saying my account had been suspended to PayPal. The woman who answered the phone for PayPal told me very emphatically that I HAD BETTER HURRY UP AND DO EVERYTHING THEY TOLD ME TO!
I'm not a native English speaker, so I don't know how it is in any English-speaking country, but when I ask my Polish friends about the word "epistemology", they just don't know it.
According to Google: the theory of knowledge, especially with regard to its methods, validity, and scope, and the distinction between justified belief and opinion.
Even though they wouldn't know the term, we all learn how to figure out what's true and what's not: we learn it when watching cartoons about lying, or when interpreting texts in school, and so on. But imagine you go to a doctor and make small talk in which you say "I was always fascinated by medicine", to which the doc responds "What is medicine?" - you would probably run away from that doctor.
And yet here we are, living in the "Information Era", still missing the very basic techniques of figuring out the truth: if you look at the statistics of religion/atheism, no group holds over 50% of the population - meaning THE MAJORITY IS WRONG - and not on a nuanced thing like most people being unable to state the average distance between the Earth and the Moon with 1-meter accuracy. No, on something as important and world-view-defining as the existence and character of God, most of us are wrong.
The percentage of flat-earthers in America is a 2-digit number...
So the problem here isn't that Facebook doesn't have a support number. The problem is much deeper, and in a way, it's good that people suffer from their stupidity: it's like programmers suffering from errors - at the end of the day they end up with their logical thinking improved. The question is: how do we reshape society to replace production errors with compilation errors, or how do we educate ourselves to minimize the frustrating error messages?
Not to apologize for the irresponsible deployment of this chatbot but it should be noted that the guy got the number from a Google search (think about the results you'd get for "facebook support number"). It's been a massive problem for at least the last 10 years.
I understand the downvotes and that it might be an unpopular position, but an empire built on stealing people's attention through addiction, and one scientifically proven again and again to cause serious mental issues in vulnerable demographics (teens), deserves to be shamed.
As a millennial, I'm more amazed that someone willingly uses a phone for non-mandatory and not-burningly-urgent phone calls... why on earth anyone would do that is way beyond me.
I'm Gen-Z and talking to a human representative of a company makes me much more confident that something will happen as a result of my efforts (though still not certain).
I scheduled an apartment viewing recently, and the only method they provided to do so was chatting with an AI (seriously)... I then tried and failed multiple times to find a way to contact a human for confirmation. Lo and behold, nobody was at the leasing office when I showed up at the scheduled time. I came back later and eventually found somebody - they had not seen anything I'd done with the bot.
Software for small businesses and local governments is often really bad and I'd much prefer to make sure a person knows what I'm trying to get accomplished.
When I was searching for apartments every complex had the same AI program for scheduling. It was horrible.
I got to talk to one of the leasing managers at one of the viewings and I told him it made them seem cheaper, not more tech-savvy. He told me they had spent millions of dollars on it.
Crazy. If they won't let me speak to a person I'd still much prefer just having a generic click-your-timeslot web app than waste time talking to a bot. And for millions of dollars they could just hire a human for a decade or more...
There seems to be a semi-infinite market for garbage software sold to landlords. At my current place I need an account to unlock my door, a different account to open the garage door (because the garage is managed by a third party), an account to reserve the elevator for move-in day (which tried to upsell me moving services), an account to get sent my water bill which charges me $15 a month for the privilege (I don't pay my bill through this service, I just have it emailed to me), an account to pay rent, and an account to submit maintenance requests. Part of the trick seems to be to offload the costs onto the tenants, who have no choice, but I'm sure our landlord is paying a good chunk for some of these.
If you have minimal to zero scruples, this seems to be an easy market to make a start up in. Landlords will buy anything!
Don't forget the account to open shared mailboxes for packages. "Luxor" for me. It actually works so I don't mind much but I hadn't really considered how much extra rent all the apps might be costing me.
Had the same thing happen for a town home I was interested in buying. Went through their online scheduling app. Got email confirmation with agent's name, but no phone number. Got another confirmation day of. Didn't think anything was amiss. Go out to building, wait for 20 mins and leave after agent was a no-show, no-call.
I called their office and, after 20 minutes of trying to get around their obnoxious automated phone menus, I finally got someone who informed me that they don't use THAT app any more to schedule appointments - I need to use their NEW app - and sent me a totally different app link in an email. I told them they are probably losing a ton of business because, very clearly, the OTHER app is still very much out in the wild and still very much being used.
I went with a different company and had much better luck.
As someone a little older, I remember being able to talk to a person to get issues resolved fairly easily and reliably. The online help is great when the issue at hand is pretty cut and dried. It is nice for a non-expert to be able to explain the problem to support on the phone and just have things taken care of.
Support from days gone by was not perfect (hold times, support reading off a script), but it was often a nice option.
Not sure why you're throwing in the randomly assigned label of millennial, but fine, I also fall into the category and I've taken to just calling people and companies.
First of all, understand that many companies, especially smaller ones, have people whose job is answering phone calls. Rather than doing a multi-day back and forth via email or chat where you're one out of five that "agent" is currently servicing, calling is really, really efficient. Clarifications and confirmations are instant, and alternatives can be quickly discussed. I call because it's efficient.
Also, have you ever noticed that most people SUCK at email? Try sending an email to a company with two or more questions. What will happen is that you'll get an answer to the first question and then they forget about the rest. The larger the company, the more likely this is to happen, because they can't deal with three issues in one support ticket - at least that's my theory. So now you need three emails.
I used to hate calling people, but I found that I hated uncertainty more and I hate getting wrong half answers to my questions. Calling people fixes all of this. Always call, but get confirmation in writing.
Fellow millennial, I also hate using the phone for anything, but very often a business provides no other interface to resolve my edge-case issue. Connecting to a human representative to discuss the situation ends up being the only way to resolve it. If they have a [solve my specific problem] button on their website, I'll use that, but often there is no such button.
AI has kind of fucked this, but for me (also a millennial), I prefer to speak to real people because they are intelligent beings with roughly the same motivations as me and usually want to help out their fellow man.
For example, I can call a local store and ask "hi, do you have this item in stock, can you check on the shelf and set it aside for me please, I will be there in 25 mins".
By contrast stuff like "click and collect" order flows online are super rigid.
As a millennial I think voice calls are sometimes great. It obviously doesn't always work with big orgs like Facebook, but because so many people are now so afraid of or annoyed by just talking to a real person for a few minutes, it's become a real power move to sometimes just make the minor effort of a call and expect some sort of immediacy to get things moving quickly. Email or text can be easily ignored and punted off (e.g. "whoops, I didn't see it"), and increases the odds of miscommunication or of things being dragged out going back and forth.