This was pre-Anthropic, but the fact that Bun automatically loads .env files if they're present almost disqualifies it from most tasks: https://github.com/oven-sh/bun/issues/23967
It makes it hard to take them too seriously with such a design choice - a footgun really. It's so easy to accidentally load secrets via environment variables, with no way to disable this anti-feature.
1. Randomly peeking at process.argv and process.env all around. Other weird layering violations, too.
2. Tons of repeated code, e.g. multiple ad-hoc implementations of hash functions / PRNGs.
3. Almost no high-level comments about structure - I assume all that lives in some CLAUDE.md instead.
It's implicit state that's also untyped - it's just a String -> String map without any canonical single source of truth about which environment variables are consulted, when, why, and in what form.
Such state should be strongly typed, have a canonical source of truth (which can then also be reused to document the environment variables the code supports and, e.g., allow reading the same options from config files, flags, etc.) and then be explicitly passed to the functions that need it, e.g. as function arguments or as members of an associated instance.
This makes it easier to reason about the code (the caller knows that some module changes its functionality based on some state variable). It also makes the code easier to test, both from the mechanical point of view (having to set environment variables is gnarly) and from the point of view of, once again, knowing that the code changes its behaviour based on some state/option, and that both cases should probably be tested.
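The approach described above can be sketched in TypeScript; all names here are hypothetical, just to illustrate the shape:

```typescript
// Hypothetical sketch: the environment is parsed exactly once, at startup,
// into a strongly typed config object, and everything else receives it
// explicitly instead of peeking at process.env.
interface Config {
  logLevel: "debug" | "info" | "warn";
  retryCount: number;
}

function loadConfig(env: Record<string, string | undefined>): Config {
  const logLevel = env.LOG_LEVEL ?? "info";
  if (logLevel !== "debug" && logLevel !== "info" && logLevel !== "warn") {
    throw new Error(`Invalid LOG_LEVEL: ${logLevel}`);
  }
  const retryCount = Number(env.RETRY_COUNT ?? "3");
  if (!Number.isInteger(retryCount) || retryCount < 0) {
    throw new Error(`Invalid RETRY_COUNT: ${env.RETRY_COUNT}`);
  }
  return { logLevel, retryCount };
}

// Callers take the typed config as an argument, never the raw environment:
function retriesFor(config: Config): number {
  return config.retryCount;
}

// const config = loadConfig(process.env); // the only line that reads the environment
```

The parser then doubles as documentation of every variable the program consults, and the same `Config` type can just as well be populated from flags or a config file.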
That's exactly why access to global mutable state should be limited to as small a surface area as possible, so 99% of the code can be locally deterministic and side-effect free, using only values that are passed into it. That makes testing easier too.
Environment variables can change while the process is running and are not memory safe (though I suspect Node tries to wrap access with a lock). Meaning if you check a variable at point A, enter a branch, and check it again at point B, it's not guaranteed to be the same value. This can cause you to enter "impossible" conditions.
Wait, is it expected for them to be able to change? According to this SO answer [0] it's only really possible through GDB or "nasty hacks" as there's no API for it.
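Whichever way that question resolves (in Node, `process.env` can at least be mutated from JS itself by plain assignment), the check-at-A-then-again-at-B hazard goes away if you read the variable once into a local. A hypothetical sketch:

```typescript
// Read the variable a single time and branch on the local copy, so every
// check in the function is guaranteed to see the same value even if the
// environment were mutated in between.
function shouldTrace(env: Record<string, string | undefined>): boolean {
  const flag = env.TRACE; // single read: "point A" and "point B" share this value
  if (flag === undefined) return false;
  return flag === "1" || flag.toLowerCase() === "true";
}
```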
I’m not strongly opinionated, especially with such a short function, but in general early return makes it so you don’t need to keep the whole function body in your head to understand the logic. Often it saves you having to read the whole function body too.
But you can achieve a similar effect by keeping your functions small, in which case I think both styles are roughly equivalent.
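A toy illustration of the two styles (hypothetical example):

```typescript
// Nested style: the reader carries every open condition until the end.
function discountNested(price: number, isMember: boolean): number {
  if (price > 0) {
    if (isMember) {
      return price * 0.9;
    } else {
      return price;
    }
  } else {
    return 0;
  }
}

// Early-return style: each uninteresting case is handled and then forgotten.
function discountEarlyReturn(price: number, isMember: boolean): number {
  if (price <= 0) return 0;    // invalid input dealt with, drop it from your head
  if (!isMember) return price; // non-members pay full price
  return price * 0.9;          // only the interesting case remains
}
```

Both compute the same thing; in the early-return version you can stop reading as soon as your case is handled.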
useCanUseTool.tsx looks special, maybe it's codegen'ed or copy-'n'-pasted? `_c` as an import name, no comments, use of raw promises instead of an async function. Or maybe it's just bad vibing...
Maybe; I do suspect _some_ parts are codegen or source-map artifacts.
But if you take a look at the other file, for example `useTypeahead`, you'd see that even with a few codegen / source-map artifacts, the core logic and behavior is still just a big bowl of soup.
This looks like a useful set of guidelines. I see the most value in reducing the bikeshedding which invariably happens when designing an API. I wonder if anyone is using AEP and can comment on downsides or problems they've encountered.
One thing I've noticed is that the section on batch endpoints is missing batch create/update. Also batch get seems a little strange - in the JSON variant it returns an object with a link for missing entities.
It also struck me as a bit of a sleight of hand - but maybe it's just rhetorical flourish. Or more charitably you could say it's inevitable - in a conference talk of finite length, you can't possibly back up every assertion with detailed evidence. "It turns out" or "it ends up" are then a shorthand way of referring to your own experience.
Literally every interview I've done recently has included the question: "What's your stance on AI coding tools?" And there's clearly a right and wrong answer.
In my case, the question was "how are you using AI tools?", with the interviewer trying to see whether you're still in the metaphorical stone age of copy-pasting code into chatgpt.com or making use of (at the time modern) agentic workflows. Not sure how good an idea this is, but at least it was a question that only came up after I had passed the technical interviews. I want to believe the purpose of this question was to gauge whether applicants were keeping up with dev tooling or potentially stagnating.
To be fair, this topic seems to be quite divisive, and it seems like something that definitely should be discussed during an interview. Who is right and wrong is one thing, but you likely don't want to be working for a company whose take on this topic is incompatible with yours.
Nice write-up, thanks for sharing. How does your hand-vibed python program compare to frameworks like pipecat or livekit agents? Both are also written in python.
I'm sure LiveKit or similar would be best to use in production. I'm sure these libraries handle a lot of edge cases, or at least let you configure things quite well out of the box. Though maybe that argument will become less and less potent over time. The results I got were genuinely impressive, and of course most of the credit goes to the LLM. I think it's worth building this stuff from scratch, just so that you can be sure you understand what you'll actually be running. I now know how every piece works and can configure/tune things more confidently.
I'm a bit confused by the fact that the array starts out typed as `any[]` (e.g. if you hover over the declaration) but then, later on, the type gets refined to `(string | number)[]`. IMO it would be nicer if the declaration already showed the inferred type on hover.
I agree, it's always been unsettling to see `any[]` on hover, even though it gets typed in the end.
I think one reason might be to allow the type to be refined differently in different code paths. For example:
    function x() {
      let arr = []
      if (Math.random() < 0.5) {
        arr.push(0)
        return arr
      } else {
        arr.push('0')
        return arr
      }
    }
In each branch, `arr` is typed as `number[]` and `string[]`, respectively, and `x`'s return type is `number[] | string[]`. If it decided to retroactively infer the type of `arr` at declaration, then I'd imagine `x`'s return type would be the less specific `(number | string)[]`.
I don't believe this is correct. There are no settings that correspond to that, AFAIK, and it would actually be quite bad: you could access the empty array and get a `never`, which you're not supposed to be able to do.
`unknown[]` might be more appropriate as a default, but TypeScript does you one better: with OP's settings, although it's typed as `any[]`, it'll error out if you don't do anything to give it more type information, because of `noImplicitAny`.
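A quick sketch of both behaviours under `strict` mode: the pushes let the "evolving" array acquire a type, while an array that never gets more information trips `noImplicitAny` at the point of use:

```typescript
// Compiles: the evolving any[] picks up element types from the pushes,
// so the return type ends up as (string | number)[].
function evolves() {
  let arr = [];
  arr.push(1);
  arr.push("two");
  return arr;
}

// Does NOT compile under noImplicitAny; the array never evolves, so using
// it reports that 'arr' implicitly has an 'any[]' type:
//
// function neverEvolves() {
//   let arr = [];
//   return arr;
// }
```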
I think when you are new and have good ideas, you are judged against the average. If you are above average, you are listened to.
As years pass, you are judged against the standard you set, and if you do not keep raising that standard, you start being seen as average, even if you are performing the same as when you joined.
I've seen this play out many, many times.
When an incompetent person is hired, even if the issues are acknowledged, if they somehow stay, expectations will be set to their level. The feedback stops, because if you complain about the same issues or the same person's work every time, people will start seeing it as a you problem. Everyone quietly avoids this, so the person stays.
When a competent person is hired, it plays out the same. After 3/5/10 years, you are getting the same recognition and rewards as the incompetent person, as long as you each maintain your own level.
However, I've seen (very few) people who consistently raised their own standards and improved their impact and they've climbed quickly.
I've seen people lowering their own standards and they were quickly flagged as under-performers, even if their reduced impact was still above average.
I agree with this summary to a degree. An additional problem arises when you simply cannot raise the standard because you lack the political influence to do so. As the article says, sometimes companies are comfortable with the status quo, regardless of the problems, technical or otherwise. Another issue arises when product, rather than seeing tech as a partner in pursuit of a common goal, starts to see it as an underling.
While I can't say I observe that kind of radical shift myself, one place I can still see something similar is AI-assisted development.
Basically, the manager asks me something and asks AI the same thing.
I'm not always following so-called "common wisdom". I might decide to use a library or framework that AI won't suggest. I might use technology that AI considers too old.
For example, I suggested writing a small Windows helper program in C, because it needs access to WinAPI; I know C very well; and we need to support old Windows versions, back to Vista at least, preferably back to Windows XP. However, AI suggests using Rust, because Rust is, well, today's hotness. It doesn't care that I know very little Rust, and it doesn't care that I would need to jump through hoops to build Rust for old Windows (if that's even possible).
So in the end, I suggest using something that I can build and have confidence in. AI suggests whatever most internet texts written by passionate developers talk about.
But the manager probably has doubts about me, because I'm not a world-class, trillion-dollar celebrity, I'm just some grumpy old developer, so he might question my expertise using AI.
You mention the tradeoffs of Rust, including the high level of uncertainty and the increased lead time, since you'd need to learn the language.
The manager, now having that information, can insist on using Rust, and you get a great opportunity to learn Rust, while being totally off the hook even if the project fails, since you mentioned the risks.
“Truly I tell you,” he continued, “no prophet is accepted in his hometown."
- Luke 4:24
It's why people often trust consultants over the people inside the organization. It's why people often want to elect new leaders even if the current leaders are doing a decent job.
The baby almost always gets thrown out with the bath water.
I find this hilarious, given that I've experienced it from both viewpoints. 1. A consultant implemented their half-baked solution that continued to bite us for my entire tenure and was, IMO, completely unmaintainable; how were they able to convince leadership of their ideas? Sometimes it's just snake oil. 2. At my new place, I'm preaching certain things to people who do listen and seem to want to act on them; it makes me a bit uncomfortable, and it's a little scary how easily you can find acolytes. They do validate my suggestions, ask questions and, most importantly, think, so I am hopeful that I won't turn out to be a false prophet.
I've also played both roles myself at times. I've been the wise consultant. And I've been the Cassandra that nobody would listen to. My wisdom was never as good as presumed when I was the consultant. And my wisdom was far better than was assumed when I was the Cassandra.
The prevalent pattern I see is things becoming mundane. The capabilities you enable are no longer something only you can do; was your expertise ever there at all? Things running smoothly gets taken for granted. Doing your job well becomes unexceptional.