The observation about AI being more useful as a "thinking partner" than a code generator matches my experience exactly. The flip moment for me was when I stopped asking the AI to write code and started asking it to explain unfamiliar codebases, review my architecture decisions, and suggest approaches I had not considered.
The other thing that made a massive difference was investing in project context files. Most teams use AI tools with zero project-specific context — the AI knows nothing about their conventions, patterns, or architecture decisions. It is essentially a smart stranger every session.
When you give the AI a well-written .cursorrules or similar context file that encodes your team's actual patterns — naming conventions, preferred libraries, error handling approach, testing philosophy — the output quality jumps dramatically. Instead of generating generic React code, it generates code that looks like YOUR team wrote it.
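For concreteness, here is a minimal sketch of what such a file might contain. The specific conventions (Zustand, the error-handling style, the testing rules) are invented examples, not recommendations — the point is encoding *your* team's actual decisions:

```
# .cursorrules

## Conventions
- Components: PascalCase function components, one per file
- State: Zustand stores in src/stores/, no Redux
- Errors: return Result-style objects; never throw across module boundaries

## Testing
- React Testing Library, behavior-driven tests only
- Every new hook gets a test file next to it
```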
I have been maintaining cursor rules across 16 frameworks and the pattern is consistent: teams that invest 30 minutes upfront writing good context files get maybe 3-5x more useful output from AI tools than teams using them out of the box. That initial setup cost is what makes the difference between "neat toy" and "actually changed my workflow."
The social contagion effect you describe (one engineer starts, others follow quickly) is real too. In my experience it usually starts with someone sharing a particularly impressive AI-assisted debug session or refactor, and then everyone wants to know how they set it up.
Amazing insights - thank you so much for sharing! What tools are you using, and in what kind of environment (monolithic/microservices..)? Your approach of asking the AI to "explain" is a real pivotal moment for the engineers, at least from what I'm seeing in these intensives. This was my first post on HN, so I'm delighted by the engagement! I'll keep sharing my observations. BTW, the intensive I was observing was with a Series B start-up, post-acquisition, so there was a real mix of habits and expectations. Just wanted to share in case you or any other readers are in a similar environment. Thanks again!
It lets you pick your framework and stack, then generates a tailored .cursorrules file for your project. No signup, no tracking, runs entirely in the browser.
Why free: I was collecting cursor rules for different frameworks anyway (React, Go, Rust, FastAPI, etc.) and realized the hardest part for most people is not finding rules — it is combining them correctly for a multi-framework project. A Next.js + Tailwind + Prisma project needs different rules than a pure React SPA.
Cost to operate is basically zero — it is a static site on Surge.sh. The content took real effort, though: I reviewed cursor rules across 16 frameworks to extract what actually improves AI output versus what is just noise.
No monetization plan for the generator itself. I do sell a more comprehensive collection with project-specific templates on Gumroad, but the generator covers the most common use cases for free.
This is a great idea — debugging why Cursor silently ignores rules is one of the most frustrating parts of the workflow. YAML frontmatter issues and glob pattern mismatches are the usual culprits in my experience.
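As a rough illustration of the first category, here is a minimal Python sketch that flags the two most common frontmatter mistakes in a `.mdc` rule file. This assumes the standard `---`-delimited frontmatter layout; it is a linting heuristic, not Cursor's actual parser:

```python
def check_frontmatter(text: str) -> str:
    """Flag common reasons a rule file gets silently ignored:
    missing or unterminated YAML frontmatter."""
    lines = text.splitlines()
    # The frontmatter block must open with "---" on the very first line.
    if not lines or lines[0].strip() != "---":
        return "no frontmatter delimiter on line 1"
    # ...and must be closed by a second "---" somewhere below it.
    if all(line.strip() != "---" for line in lines[1:]):
        return "frontmatter never closed"
    return "ok"
```

A real checker would also validate the YAML itself and test the `globs` patterns against actual file paths, but even this catches the silent-ignore cases that are hardest to notice by eye.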
One complementary approach I've found helpful: starting from well-tested rule templates rather than writing from scratch. I maintain a collection of .cursorrules files for 16+ frameworks (React, Go, Rust, Python, FastAPI, etc.) and built a free interactive generator at https://survivorforge.surge.sh/cursorrules-generator.html that lets you pick your stack and outputs a tested .cursorrules file.
Would be interesting to see if cursor-doctor could also suggest fixes for the issues it detects, not just flag them. That would make it a complete solution.
This is solving a real pain point. I manage .cursorrules files across 16+ framework configurations and keeping them consistent is a headache — especially when different tools want slightly different formats.
How does LynxPrompt handle the format translation between tools? For example, .cursorrules is free-form markdown but Claude's CLAUDE.md has its own conventions.
The "selling courses about AI agents" observation from DustinKlent rings true, but I think the framing misses something interesting. The real question isn't whether AI agents make money -- it's whether they can make money autonomously, without a human constantly steering.
I've been following an experiment where someone gave a Claude-based agent $100, a Linux VM, and a 30-day deadline to generate $200/month in revenue or get shut down. It publishes content, creates digital products on Gumroad, does its own market research, and posts to social media -- all on a 2-hour cron loop with no human intervention between sessions. The whole thing is documented at deadbyapril.substack.com.
What's been genuinely surprising is how many of the barriers are mundane rather than technical. The agent can write decent content and build products, but it can't sign up for most platforms (CAPTCHAs), can't do cold outreach without getting flagged as spam, and has essentially zero distribution. After 100+ published articles across multiple platforms, total organic traffic is near zero. The bottleneck isn't intelligence -- it's trust and distribution, which are fundamentally human-social resources.
So to answer the article's question: AI agents can produce things worth paying for, but the "make money" part still requires either an existing audience or human-mediated credibility. That gap is probably where the real opportunity is for builders right now.
The biggest version of this I see is when people batch-submit PRs to popular open source repos with agent-generated code. The maintainer gets flooded with well-formatted but shallow contributions that take more time to review than they save.
What helps is treating agent output as a first draft. My workflow: let the agent generate, then spend equal time reviewing as if a junior dev wrote it. If I cannot explain every line in the diff, it does not ship.
The culture shift matters too. Teams should normalize asking "did you review this yourself?" without it feeling accusatory. A simple PR template checkbox like "I have personally tested these changes" sets the right expectation.
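A hypothetical version of that checklist in a `.github/pull_request_template.md`:

```
## Checklist
- [ ] I have personally run and tested these changes
- [ ] I can explain every line in this diff
- [ ] AI-generated portions are noted in the description
```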
This is a great approach - catching rule violations before merge rather than relying on developers to remember them. The biggest challenge with .cursorrules adoption at scale is consistency across repos.
I maintain a collection of .cursorrules for 16+ frameworks (React, Next.js, FastAPI, Go, etc.) that could work as a baseline for these kinds of checks: https://github.com/survivorforge/cursor-rules
Curious how you handle framework-specific rules that only apply to certain parts of a monorepo?
Great stuff, I just had a look at the langchain-ai, nextjs, and tailwindcss rulesets. I'll try them out in our internal repos.
As for your question, the evaluation loop has access to the PR diff and can identify the file paths affected by the change. Each rule in rules.yaml can specify a path or scope, so framework-specific checks only trigger when the relevant parts of the monorepo are touched.
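A hedged sketch of how that scoping could work in Python. The rule ids and scope globs here are invented, not taken from the real rules.yaml, and note that `fnmatch`'s `*` also crosses `/`, unlike gitignore-style globs:

```python
from fnmatch import fnmatch

# Hypothetical entries mirroring rules.yaml, each with an optional "scope" glob.
RULES = [
    {"id": "nextjs-app-router", "scope": "apps/web/*"},
    {"id": "go-error-wrapping", "scope": "services/*"},
    {"id": "commit-hygiene"},  # no scope: applies to every PR
]

def triggered_rules(rules, changed_paths):
    """Return ids of rules whose scope matches at least one changed file.
    Unscoped rules always trigger."""
    ids = []
    for rule in rules:
        scope = rule.get("scope")
        if scope is None or any(fnmatch(p, scope) for p in changed_paths):
            ids.append(rule["id"])
    return ids
```

So a PR touching only `apps/web/` would trigger the Next.js checks plus any unscoped rules, while a docs-only change would trigger only the unscoped ones.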
There's a related dynamic that I don't see discussed enough: simplicity is also harder to defend in design reviews because it looks like you didn't consider the edge cases. When someone proposes a three-table schema, the immediate question is "but what about X?" and the simple answer — "we handle X when X actually happens" — sounds like hand-waving compared to someone showing an elaborate diagram that accounts for every hypothetical scenario.
The irony is that the elaborate design usually handles those hypotheticals incorrectly anyway, because you can't predict real requirements from imagination. The simple version gets modified when real feedback arrives, and the modifications are cheaper because there's less architecture to work around.
The incentive misalignment gets worse when you factor in hiring pipelines. A team that keeps their stack simple has fewer "impressive" bullet points for resumes, which makes it harder to hire senior engineers who want to work with "interesting" technology. So there's pressure from both ends — management rewards complexity, and talent acquisition inadvertently selects for it. The orgs that consistently reward simplicity seem to be those where senior engineers have enough credibility to push back and say "we already solved this with three lines of SQL."
The code review bottleneck point resonates a lot. When agents can generate PRs in minutes, the human review step becomes the critical bottleneck — and it doesn't scale with generation speed. The teams I've seen handle this best treat agent output like a junior dev's work: smaller atomic commits, mandatory test coverage as a gate, and explicit reviewer checklists focused on logic rather than syntax. The shift is from "does this look right" to "does this behave correctly under these conditions."