Just because you're not the target audience doesn't mean there's not a real target audience.
There are people today learning to use an LLM instead of an actual search engine. For these people, whatever happens outside of the LLM app is invisible. The social media apps did something similar when they started letting people purchase directly within the app: people started browsing these in-app shops rather than searching for products elsewhere.
Buying things on a social media app is crazy to me, but then I don't use social media apps. Buying things from an LLM app seems crazy to you (it's new, and the ways it's currently borked are fixable), but to people who turn first to their LLM app of choice, that decision isn't so crazy.
> Just because you're not the target audience doesn't mean there's not a real target audience.
Just because a target audience has >0 members doesn't mean the target is plausible or good. :p
______________
To transcribe an old Dilbert comic, which I think captures the sometimes, uh, aspirational nature of "target audiences":
1. Dogbert, presenting a labeled circle: "Your target market is the high income group. They're the only ones who can afford your product."
2. Dogbert, adding circles to Venn-diagram: "More specifically, they must be rich, tasteless and easily amused. I've located a cluster of them for study."
3. Scene change to outdoor lawn, one suitably-dressed man confiding to another: "That dog's watching us golf again."
Social media apps generally opened up new markets out of their existing user bases as sellers, though. Perhaps ChatGPT could know everything in your house, and if you don't actually use something, pair you with a neighbor who needs it!
Yes. Not interacting with neighbors is something that can happen naturally, but working hard not to is an entirely different thing.
Knowing your neighbors is a good thing. Even if it's just a friendly hi. You don't have to hang out, but if there's ever something you need, like "did I leave the sprinkler on" or "did I leave the stove on" or "can I borrow a cup of sugar", it helps to be on speaking terms with a neighbor rather than having your first interaction with them be because you need something.
Caveat being that you live next door to Epstein or similar where not knowing them will be beneficial when the police come asking questions.
I disagree; Walmart's website isn't nice. A lot of commerce sites are cancer!
If I could just ask ChatGPT or Gemini to shop, I'd love that.
Just navigating their sites for items is a pain; I can imagine an LLM being great at finding items and facilitating the browsing experience. My only concerns would be having to chat with it a lot, and any dark patterns coercing impulse purchases.
To buy toilet paper, you have to click two or three buttons, or type "toilet paper" into a search bar and hit a button. Then you have to scroll around, hit more buttons, filter, sort, click through to the product page, review the details, add it to your cart, start the checkout, select a payment method, confirm the order, and purchase.
Compared to prompts:
"Show me toilet paper that has good reviews for being soft and that doesn't clog"
"I'll buy the first option, just use the card I used last time for payment"
What saves time with an LLM is being able to communicate what you want in natural language. With the normal experience, you're pressing lots of buttons and other inputs to get some results so you can figure it out yourself. You'll get results, and you'll pick what you think is the best option for you (but chances are it isn't, since the site didn't know exactly what you wanted).
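The two-prompt flow above can be sketched as stubbed function calls. Nothing here is a real retailer or LLM API; the function and product names are made up for illustration, with the LLM's role reduced to choosing the arguments from the user's natural-language request.

```python
# Hypothetical sketch of the two-prompt shopping flow as stubbed functions.

def search_products(query, wanted_traits, min_rating=4.0):
    # Stub catalog; a real agent would hit a retailer's search endpoint.
    catalog = [
        {"name": "SoftPlus 12-pack", "rating": 4.6,
         "reviews": "very soft, doesn't clog"},
        {"name": "BudgetRoll 24-pack", "rating": 3.2,
         "reviews": "rough, clogged once"},
    ]
    # Keep only well-rated items whose reviews mention every wanted trait.
    return [p for p in catalog
            if p["rating"] >= min_rating
            and all(trait in p["reviews"] for trait in wanted_traits)]

def checkout(product, payment_method="card-on-file"):
    # Stub purchase step; returns a fake order confirmation.
    return {"ordered": product["name"], "paid_with": payment_method}

# "Show me toilet paper that has good reviews for being soft and doesn't clog"
options = search_products("toilet paper", ["soft", "doesn't clog"])

# "I'll buy the first option, just use the card I used last time for payment"
order = checkout(options[0])
print(order)  # {'ordered': 'SoftPlus 12-pack', 'paid_with': 'card-on-file'}
```

The point of the sketch is the interface, not the implementation: the user states intent once, and the filtering/sorting/checkout clicks collapse into two turns.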
I'm not promoting "ChatGPT Checkout"; however, just because something sucks now doesn't mean it will suck forever. I'm confident they'll iterate and improve. I say this because my 14-year-old niece's entire world seems to be ChatGPT; it's pretty much the only way she interacts with the internet. I don't think she's alone; all her young peers are like this, too. Retailers know this, so they've got no choice but to improve the experience of purchasing crap through an AI chat system.
When I read that sentence I thought of travel booking sites. The kind that are unavailable to help you when you arrive at a hotel that has no record of your reservation. The kind I avoid precisely because I’ve had enough problems like that where it’s not worth saving 10% and I’d rather just book direct and avoid the headaches.
When the hotel, airline, car rental, etc. can't even talk to me because of the channel I purchased through, that's a problem. So I'd expect similar issues from buying through an AI interface. Walmart will say something like "we can't see your order details" when your delivery driver forgot the milk.
These interactions really don't get the testing they need.
When they aren't designed, how do you know how to test?
Over the weekend, I was directed to file a police report with a chatbot and could not complete it because it was asking for information that did not exist and did not apply to my case.
(I'm sure somebody is going to say that this can be solved by having LLMs role play as victims and have an LLM observe and decide what's a failing test case and what isn't.)
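Sarcasm aside, the plainer fix doesn't need an LLM judge at all: enumerate intake cases where a required field legitimately has no answer, and assert the flow still terminates somewhere useful. A minimal sketch with a stubbed bot (all names and behavior are hypothetical, not any real police-report system):

```python
# Hypothetical chatbot intake flow that hard-requires a field which may not
# exist for every case (the dead-end failure mode described above).

def intake_bot(answers):
    # Dead end: refuses to proceed without a case_number.
    if answers.get("case_number") is None:
        return "error: case_number required"
    return "report filed"

def intake_bot_fixed(answers):
    # Escape hatch: a missing field routes to a human instead of blocking.
    if answers.get("case_number") is None:
        return "handed off to a human operator"
    return "report filed"

no_case_number = {"description": "stolen bike", "case_number": None}

print(intake_bot(no_case_number))        # error: case_number required
print(intake_bot_fixed(no_case_number))  # handed off to a human operator
```

The test case is just "required information doesn't exist"; no role-playing LLM victims required to discover it.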
As bad as a solution looking for a problem sounds, this is peak science. Copernicus, who figured out that the Earth was not at the center of the solar system, had a solution first. The general word for this is deduction.
Personally I am an inductivist; I imagine you may be too.
Think of it this way: top-down reasoning is deduction; bottom-up is induction.
You might think induction is amazing, but ask yourself "Are there any black swans?" If your answer is "No, I've never seen any, so there can't be any black swans," the issue is that you've never actually seen every swan; in fact, there are black swans in Australia.
Point being, we don't know if this is a good thing until it's tested.
Why would anyone want an extra layer of friction where things could go wrong, and where handing over payment details adds another link in the chain?
Just let me buy my stuff in peace. Shopping is not the 'killer app' for GenAI.