Any decision maker can be cyberbullied/threatened/bribed into submission, and LLMs can even try to create movements of real people to push the narrative. They have unlimited time to produce content, send messages, and really wear the target down.
The only defense is consensus decision making & a deliberate process. Basically, make it too difficult and expensive to affect all or a majority of the decision makers.
Communities also evolve and devolve over time, even without a large external event. Maybe you don't feel the same belonging in the friend group after ten years, or the community grows into something it wasn't in the beginning.
Maybe you have to accept that communities are here and now, but they can dissolve at any time.
Even if you can achieve awesome things with LLMs, you give up control over the tiny details; it's just faster to generate and regenerate until it fits the spec.
But you never quite know how long it takes or how much you have to shave that square peg.
I see that Software as a Service banked too much on the first S, Software. But really customers want the second S, the Service.
When you sell a service, it's opaque; customers don't really care how it's produced. They want things done for them.
AI isn't killing SaaS, it's shifting it to the second S.
Customers don't care how the service is implemented; they care about its quality, availability, price, etc.
Service providers do care about the first S; software makes servicing so much more scalable. You define the service once and then enable it to happen again and again.
They didn't. Don't make the mistake of thinking SaaS companies are just software companies. They are sales companies who happen to sell software. Companies like Dropbox & Atlassian have long been surpassed technically, but they live on because they kept selling even when demand was hard to come by. Their moat is sales & networking; the software only has to be good enough. The other part is service: these companies have had some of the best customer service since the early 2010s. You can still get a refund on Uber quite easily, but trying that at a regular old-school company takes a prayer and a couple of business weeks.
Yes, many don't like SharePoint, but they still use it. It's the tool they're able to use.
Customers don't care if SharePoint uses an LLM, they just want to share ideas, files, reports, pages, etc. If an LLM makes it easier, great! If some other product makes it easier, great!
It's not about the product, it's about the results.
You're proving the point? SharePoint, Teams: availability + price. Every company has microflows, and SharePoint and Teams are automatically available and part of the price, or priced lower than the competition.
Nah, it's not that at all. Most of these services are totally fungible and everyone has a short attention span. You need to be in a market that is extremely difficult to disrupt and have a product people are totally dependent on. And those tend to have a rather large cost of entry unless you got in early.
I just don't want to pay $50/user/month for an initially open-source product that was relicensed and then crippled because the group that was initially giving something away decided they wanted to make a business of it.
I think we will use more tools to check the programs in the future.
However, I still don't believe in vibecoding full programs. There are too many layers in software systems; even when the program's core is fully verified, the programmer must know about the other layers.
If you are an Android app developer, you need to know what phones people commonly use, what kind of performance they have, how apps are deployed through the Google Play Store, how to manage a wide variety of app versions, and how to handle issues when storage is low, the network is offline, the battery is low, or the CPU is in a low-power state.
LLMs can already handle a lot of these issues without the user having to think about them.
The problem is that while these issues will be resolved one way or another (or left unresolved, since the user only tests the app on his own device and that LLM "roll" won't carry optimizations for the broad range of others), the user is still pretty much left clueless as to what has really happened.
Models theoretically inform you about what they did and why they did it (albeit largely in blanket terms and/or phrases unintelligible to the average 'vibe coder'), but I feel like most people ignore that completely, and those who don't wouldn't need an LLM to code the entirety of an app regardless.
Still, for the very simple projects I use at work, just chucking something into Gemini and letting it work on it is oftentimes faster and more productive than doing it manually. Plus, if the user is interested in it, it can be used as a relatively good learning tool.
Skills.md will in time have the same problem as MCP: they will bloat the context. I wonder if we could just have the scripts without the descriptions, with the LLM trained to search for the most useful things in a specific folder.
This seems like a solvable engineering problem. For example, you could have a lightweight subagent with its own context for reading the skills and determining which to use.
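A minimal sketch of that idea, assuming a hypothetical complete() call standing in for whatever LLM API you use; Skill and pickSkills are illustrative names, not from any real framework:

    // Stand-in for whatever LLM completion call you use (assumed, not a real API).
    declare function complete(prompt: string): Promise<string>;

    interface Skill {
      name: string;    // e.g. "pdf-extraction"
      summary: string; // one line; the only thing the subagent ever sees
      body: string;    // full instructions, loaded on demand only
    }

    // The subagent runs in its own throwaway context with just the menu,
    // so the skill catalogue can grow without bloating every conversation.
    async function pickSkills(task: string, skills: Skill[]): Promise<Skill[]> {
      const menu = skills.map(s => `- ${s.name}: ${s.summary}`).join("\n");
      const answer = await complete(
        `Task: ${task}\nAvailable skills:\n${menu}\n` +
        `Reply with only the names of the skills needed, one per line.`
      );
      const chosen = new Set(answer.split("\n").map(l => l.trim()));
      // Only the chosen skills' full bodies enter the main agent's context.
      return skills.filter(s => chosen.has(s.name));
    }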
I like the idea but the example doesn't make much sense.
In what application would you load all users into memory from the database and then filter them with TypeScript functions? And that is the problem with the otherwise sound idea of "functional core, imperative shell": the shell penetrates the core.
Maybe some filters don't match the way the database is laid out. What if you have a lot of users? How do you deal with email batching and error handling?
So you have to write the functional core with the side-effect context in mind, for example using a query builder or DSL that matches the database's conventions. Then weave it with the intricacies of your email sender logic: maybe you want an iterator over right-sized batches of emails to send at once, and can it send multiple batches in parallel?
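A rough sketch of what that weaving might look like, with made-up names (UserFilter, Db, Mailer are illustrative, not a real library): the core stays pure by returning descriptions of the query and the batch plan, and the shell executes them.

    // Pure core: filters are data, expressed in terms the database can run.
    type UserFilter =
      | { kind: "inactiveSince"; days: number }
      | { kind: "plan"; plan: string };

    function dormantTrialUsers(): UserFilter[] {
      return [{ kind: "inactiveSince", days: 30 }, { kind: "plan", plan: "trial" }];
    }

    // Pure: compile the spec to SQL so filtering happens in the database,
    // not over all users in memory. (A real query builder would parameterize.)
    function toWhereClause(filters: UserFilter[]): string {
      return filters
        .map(f =>
          f.kind === "inactiveSince"
            ? `last_seen < now() - interval '${f.days} days'`
            : `plan = '${f.plan}'`
        )
        .join(" AND ");
    }

    // Pure: the batching policy, without knowing how batches are sent.
    function* batches<T>(items: T[], size: number): Generator<T[]> {
      for (let i = 0; i < items.length; i += size) yield items.slice(i, i + size);
    }

    // Imperative shell: owns the database, the mailer, errors, parallelism.
    interface Db { query(sql: string): Promise<string[]> }
    interface Mailer { sendBulk(to: string[]): Promise<void> }

    async function sendReminders(db: Db, mailer: Mailer) {
      const emails = await db.query(
        `SELECT email FROM users WHERE ${toWhereClause(dormantTrialUsers())}`
      );
      for (const batch of batches(emails, 100)) {
        await mailer.sendBulk(batch); // or fan out batches in parallel here
      }
    }

Note that even here the "pure" core already knows the database's column names and SQL dialect: the shell leaks in exactly as described.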
I am surprised by this example, for the same reason.
Generally, performance is a top cause of abstraction leaks and of the emergence of less-than-beautiful code. On an infinitely powerful machine it would be easy and advisable to program using neat abstractions, purely in "the language of" the business. Our machines are not infinitely powerful, and that is especially evident when larger data sets are involved. That's where, to achieve useful performance, you have to increasingly speak "the language of" the machine. This is inevitable, and a big part of the programmer's skill is being able to speak both "languages", knowing when to speak which one, and producing readable code regardless.
Database programming is a prime example. There's a reason, for example, why ORMs are so messy and constitute such excellent footguns: they try to bridge this gap, but inevitably fail in important ways. And having an ORM in the example would, most likely, violate the "functional core" principle from the article.
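The classic N+1 problem is one concrete way this fails; a sketch with a hypothetical ORM API (findAll and orders.load() are made-up names, not any real ORM):

    interface Order { id: number }
    interface User { name: string; orders: { load(): Promise<Order[]> } }
    interface Orm { users: { findAll(opts?: { include: string[] }): Promise<User[]> } }

    async function buildReport(orm: Orm) {
      // Reads like pure "language of the business"...
      const users = await orm.users.findAll();   // 1 query
      for (const u of users) {
        const orders = await u.orders.load();    // ...but fires 1 more query per user
        console.log(u.name, orders.length);
      }

      // To get useful performance you must speak "the language of the machine"
      // and say how the rows should actually be fetched:
      const eager = await orm.users.findAll({ include: ["orders"] }); // one JOIN
    }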
So it looks like the author accidentally presented a very good counterexample to their own idea. I like the idea though, and I would love to know how to resolve the issue.