krastanov's comments | Hacker News

Strict police does sound quite a bit less bad than fascist police...

You have to know very little about China to think that it is somehow more favorable to foreigners and minorities than the US.

What the US does is bad, but somehow Americans think that means everywhere else is better.


I don't know, but my friends and I still visit China regularly, and not the US anymore, because we have no clue what is expected of us there to avoid ending up in a jail for weeks. I have a pretty clear idea of what is expected in China, but not in the US. Maybe there is something to it.

As an aside, it is really interesting to see a computational package that, while supporting multiple GPU vendors, was first vetted on AMD, not NVIDIA. It is encouraging to see ROCm finally shaking off its reputation for poor support.

The vendor-agnostic GPU approach via KernelAbstractions is great to see. The Vulkan compute path is underrated for this — it runs on AMD, NVIDIA, and Intel without needing ROCm or CUDA, just whatever driver ships with the GPU.

Re: the compilation latency discussion — it's a real tension. JIT gives you expressiveness but kills startup. AOT gives you instant start but limits flexibility. Interesting that most GPU languages went JIT when the GPU itself runs pre-compiled SPIR-V/PTX anyway.


well, I do hate vendor lock in with a passion ;) But yeah, a lot did happen, this likely wouldn't have been possible one or two years ago!

I am happy to hear that things are not as bad as I thought, but my experience as a judge/mentor for a couple of years at the high school science fair near a top university was very discouraging and closer to what the author of the article describes.

Maybe the bulk of the kids in the first round were what you describe, but very quickly the focus turned to the top 20%, whose projects were very much "reputation laundering" and "CV padding" internships at labs, not actual curiosity-driven independent exploration.


On the one hand, I understand why this can seem disheartening; we want the kids to be pure and to protect them from the BS of the world. It would be great if we could just let kids play around with science, with no pressure to compete or perform.

At the same time, the reason they're doing the CV padding is because we, the adults in control of those systems they wish to gain access to (which is not the science fair), have made it so they need to pad their CV in the first place.

So as much as we want them to be purely driven by curiosity and independent exploration, our society at large does not allow for that kind of thing. Fixing those kinds of macro incentives doesn't happen by reforming the science fair.

If we want curiosity driven independent exploration for children, our society should provide that modality for adults. That we don't provide it for adults is reflected on our children; because we impose credentialism on adults, the adults understand credentialism is important to success and they impose it on their children. If adults understood that creative inquiry and independent exploration are paths to success, then more children would be encouraged to pursue that at the elite level.


I maintain serious code bases and I use LLM agents (and agent teams) plenty -- I just happen to review the code they write, I demand they write the code in a reviewable way, and use them mostly for menial tasks that are otherwise unpleasant timesinks I have to do myself. There are many people like me, that just quietly use these tools to automate the boring chores of dealing with mature production code bases. We are quiet because this is boring day-to-day work.

E.g. I use these tools to clean up or reorganize old tests (with coverage and diff viewers to catch things I might miss), update documentation with cross links (with documentation linters checking for errors I miss), convert tests into benchmarks running as part of CI, make log file visualizers, and more.

These tools are amazing for dealing with the long tail of boring issues that you never get to, and when used in this fashion they actually abruptly increase the quality of the codebase.


It's not called vibe coding then.

Oh you made vibe coding work? Well then it's not vibe coding.

But any time someone mentions using AI without proof of success? Vibe coding sucks.


No, what the other commenter described is narrowly scoped delegation to LLMs paired with manual review (which sounds dreadfully soul-sucking to me), not wholesale "write feature X, write the unit tests, and review the implementation for me". The latter is vibe-coding.

Reviewing a quick translation of a test to a benchmark (or other menial coding tasks) is way less soul-sucking than doing the menial coding yourself. Boring, soul-sucking tasks are an important, thankless part of OSS maintenance.

I concur it is different from what you call vibecoding.


Sidenote: I do that frequently. I also do varying levels of review, i.e. more/less vibe [1]. It is soul-sucking to me.

Despite it being soul-sucking, I do it because: A) it lets me achieve goals despite lacking the energy/time for projects that don't require the level of commitment or care that I provide professionally; B) it reduces how much RSI I experience. Typing is a serious concern for me these days.

To mitigate the soul-sucking I've been side-projecting better review tools, which frankly I could use for work anyway, as reviewing PRs from humans could be better too. Also, in line with review tools, I think a lot of the soul-sucking is having to provide specificity, so I hope to integrate LLMs into the review tool and speak to it more naturally. E.g. I believe some IDEs (VS Code? no idea) can let Claude/etc see the cursor, so you can say "this code looks incorrect" without needing to be extremely specific. A suite of tooling that improves this code sharing to Claude/etc would also reduce the inane specificity that seems to be required to make LLMs even remotely reliable for me.

[1]: Though we don't seem to have a term for varying amounts of vibe. Some people consider vibe to be 100% complete ignorance of the architecture/code being built, in which case imo nothing I do is vibe, which is absurd to me, but I digress.


> According to Karpathy, vibe coding typically involves accepting AI-generated code without closely reviewing its internal structure, instead relying on results and follow-up prompts to guide changes.

What you are doing is by definition not vibe coding.


It's not vibe coding if you personally review all the diffs for correctness.

Yeah esp. the latest iterations are great for stuff like “find and fix all the battery drainers.” Tests pass, everyone’s happy.

(rhetorical question) You work at Apple? :p

I find this type of post unproductive, somewhat emotionally exhausting, and generally impolite.

Most of the time, open source tools are a labor of love. If the tool is not for you, move on. But self-aggrandizing "this tool is not good enough for me" posts, when you have not contributed and when you disregard the fact that the tool has been immensely helpful to many others (who might have even started contributing back), just create negativity in the world for no good reason. Nothing good comes out of posts like that (and no, such posts are not constructive critique).

And then there are "the language is dying" complaints -- I consider these the worst of all. A tool does not need to be the most popular tool to be useful. Let's stop chasing hockey-stick curves in all human endeavors.

(to prevent claims of sour grapes: I am not a Scala user, I just find this type of post distasteful, no matter the target)


+1000.

The vast majority of posts and comments here show a classic case of consumerism. I want this. I want that. This sucks. That sucks. Well, what are you bringing to the community?

There is this weird kind of amnesia with open source tools, where people forget that these open source projects are a labor of love by small communities. They are developing the projects for themselves, for their community. Are you part of that community or are you outside it? If you're outside the community, it shouldn't surprise you when the tool doesn't work perfectly for you. I sometimes feel that many of these people would be better off buying commercial tools and paying for support.


You are right to find that post unproductive, somewhat emotionally exhausting, and generally impolite.

And I have a similar opinion of comments like yours, criticizing a post from a person who explains their personal feelings, opinions, and judgements about a piece of tech.

If you feel personally attacked when reading criticism of a language/ecosystem you love, take into account that people have different ways of thinking about programming.

You like a language that is the epitome of "more is more", and there are some people who prefer "less is more".


In my original post I actually made a number of (apparently unclear) claims diametrically opposite to what you surmise I claimed.


That doesn't mean those posts should not exist.

That's a guy sharing his honest experience on his personal blog. You have the choice to simply not read the article. It was pretty obvious from the start what it would be; there was no click-baiting.

These posts also provide an honest pulse on reality. For the same reason I won't say your post "just creates negativity in the world for no good reason": your post gives some kind of feedback on what some real people think. My post is just meta+1 on this, honestly.


Usually they randomly shoot atoms at the substrate and then just search for a spot (among thousands) where it randomly has the configuration they want. Still pretty amazing.


Can they do that here? They've got quite a few sets of 4-5 atoms which they've interconnected, so that's a lot to get by shotgunning it. I'd assumed they were using something like an STM to nudge the atoms around.


The “precision manufacturing” reference in the paper is to this 2012 paper about an STM placement technique. [0]

[0] https://www.nature.com/articles/nnano.2012.21


Hmm, I remember my electron microscopy prof being very excited about his ability to manipulate single atoms exactly where he wanted them ~10 years ago.

I'd have assumed the holography has gotten more common and able to operate on bigger volumes.


The magnitude of an "amplitude" is basis dependent. A basis is a human invention, an arbitrary choice made by the human to describe nature. The choice of basis is not fundamental. So just choose a basis in which there are no vanishingly small amplitudes and your worry is addressed.


Any implementation of Shor will need vanishingly small amplitudes, as it forms a superposition of 2^256 classical states.


This is completely missing the point. There is nothing fundamental to an amplitude. The amplitudes are this small because you have chosen to work in a basis in which they are small. Go to the Hadamard basis and the amplitude value is exactly 1. After all, the initial state of Shor's algorithm (the superposition of all classical bitstrings) is the perfectly factorizable, completely unentangled state |+++++++>.
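
For concreteness, here is a tiny numpy sketch (my own illustration, with a small qubit count so the vectors fit in memory) of exactly this point: the uniform superposition is a product of single-qubit |+> states, and a layer of Hadamards maps it to a state with a single amplitude equal to 1.

    # The uniform superposition over all bitstrings is |+>^n, and in the
    # Hadamard (X) basis it has a single nonzero amplitude equal to 1.
    import numpy as np
    from functools import reduce

    n = 10
    plus = np.array([1.0, 1.0]) / np.sqrt(2)              # single-qubit |+>
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate

    psi = reduce(np.kron, [plus] * n)                     # |+>^n, a pure product state
    print(np.allclose(psi, 2 ** (-n / 2)))                # True: all 2^n amplitudes equal 1/sqrt(2^n)

    psi_x = reduce(np.kron, [H] * n) @ psi                # the same state in the Hadamard basis
    print(np.count_nonzero(np.abs(psi_x) > 1e-12), psi_x[0])  # one nonzero entry, amplitude ~1.0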


The initial state of Shor's algorithm just has the n-bit number to be factored. From there it creates the superposition in the next n steps.

Forget the talk about amplitudes. What I find hard to believe is that nature will let us compute reliably with hundreds of entangled qubits.


Shor's algorithm does not start with the qubits storing anything related to the n-bit number to be factored. The n-bit number is encoded *only* in the XOR-oracle for the multiplication function.

Shor's algorithm starts with the qubits in a superposition of all possible bitstrings. That is the only place we have exponentially small amplitudes at the start (in a particular choice of a basis), and there is no entanglement in that state to begin with.

We do get interesting entangled states after the oracle step, that is true. And it is fair to have a vague sense that entanglement is weird. I just want to be clear that your last point (forgetting about amplitudes, and focusing on the weirdness of entangled qubits) is a gut feeling, not something based in the mathematics that has proven to be a correct description of nature over many orders of magnitude.

Of course, it would be great if it turns out that quantum mechanics is wrong in some parameter regime -- that would be the most exciting thing in Physics in a century. There is just not much hope it is wrong in this particular way.


When the amplitude has norm 1, there is only one nonzero amplitude. Changing basis does not affect the number of basis functions.


> When the amplitude has norm 1, there is only one nonzero amplitude.

Yes, that is exactly the point. The example statevector you guys are talking about can (tautologically) be written in a basis in which only one of its amplitudes is nonzero.

Let's call |ψ⟩ the initial state of the Shor algorithm, i.e. the superposition of all classical bitstrings.

|ψ⟩ = |00..00⟩ + |00..01⟩ + |00..10⟩ + .. + |11..11⟩

That state is factorizable, i.e. it is *completely* unentangled. In the X basis (a.k.a. the Hadamard basis) it can be written as

|ψ⟩ = |00..00⟩ + |00..01⟩ + |00..10⟩ + .. + |11..11⟩ = |++..++⟩

You can see that even from the preparation circuit of the Shor algorithm. It is just single-qubit Hadamard gates -- there are no entangling gates. Preparing this state is a triviality and in optical systems we have been able to prepare it for decades. Shining a wide laser pulse on a CD basically prepares exactly that state.

> Changing basis does not affect the number of basis functions.

I do not know what "number of basis functions" means. If you are referring to "non zero entries in the column-vector representation of the state in a given basis", then of course it changes. Here is a trivial example: take the x-y plane and take the unit vector along x. It has one non-zero coefficient. Now express the same vector in a basis rotated at 45deg. It has two non-zero coefficients in that basis.
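
The same toy example in numpy, in case it helps (a sketch of mine, not from the thread):

    # One nonzero coefficient along x; two nonzero coefficients in a basis
    # rotated by 45 degrees -- same vector, different description.
    import numpy as np

    v = np.array([1.0, 0.0])              # unit vector along x
    c = 1 / np.sqrt(2)
    R = np.array([[c, -c], [c, c]])       # columns = basis vectors rotated by 45 deg
    print(R.T @ v)                        # [ 0.7071 -0.7071]: two nonzero coefficients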

---

Generally speaking, any physical argument that is valid only in a single basis is automatically a weak argument, because physics is not basis dependent. It is just that some bases make deriving results easier.

Preparing a state that is a superposition of all possible states of the "computational basis" is something we have been able to do since before people started talking seriously about quantum computers.


Sounds like we agree on how basis vectors work. But you’re talking about the initial state, and I’m talking about the output. Finding a basis that makes the output an eigenvector isn’t trivial. Take Grover’s algorithm. You have to iterate to approximate that eigenvector. Small errors in the amplitudes can prevent convergence. When you have 2^256 components, amplitudes are divided down by around 2^128.

Even preparing the initial state that accurately is only trivial on paper.


The initial state was the example given. It is fair to then point out the consecutive states though. A few points still hold:

- I am not saying that you have to find a basis in which your amplitudes are not small, I am saying that such a basis always exists. So any argument about "small amplitudes would potentially cause problems" probably does not hold, because there is no physical reality to "an amplitude" or "a basis" -- these are all arbitrary choices and the laws of physics do not change if you pick a different basis.

- In classical probability we are not worried about vanishingly small probabilities in probability distributions that we achieve all the time. Take a one-time pad of n bits. Its stochastic state vector in the natural basis is filled with exponentially small entries 1/2^n. We create one-time pads all the time and nature does not seem to mind.

- Most textbooks that include Shor's algorithm also include a proof that you do not need precise gates. Shor's algorithm (or the quantum Fourier transform more specifically) still works even with only finite absolute precision in the various gates (see the sketch after this list).

- Preparing the initial state to extremely high precision in an optical quantum computer is trivial and it has been trivial for decades. There isn't really much "quantum" to it.

- It is fair to be worried about the numerical stability of a quantum algorithm. Shor's algorithm happens to be stable as mentioned above. But the original point by OP was that physics itself might "break" -- I am arguing against that original point. Physics, of course, might break, and that would be very exciting, but that particular way of it breaking is very improbable (because of the rest of the points posted above).
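
To make the gate-precision point slightly more concrete, here is a small numpy sketch (my own illustration, not from any textbook) comparing the exact QFT circuit with an approximate QFT in which the finest controlled-phase rotations are simply dropped (Coppersmith-style truncation); the qubit count and cutoff are arbitrary illustrative choices.

    # Sketch: the QFT circuit is still close to exact when the finest rotations
    # are dropped. n and max_k below are arbitrary illustrative choices.
    import numpy as np
    from functools import reduce

    I2 = np.eye(2)
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

    def single_qubit(n, q, gate):
        # embed a one-qubit gate acting on qubit q into an n-qubit operator
        return reduce(np.kron, [gate if i == q else I2 for i in range(n)])

    def controlled_phase(n, ctrl, targ, angle):
        # diagonal n-qubit gate: phase e^{i*angle} on basis states with both bits set
        U = np.eye(2 ** n, dtype=complex)
        for idx in range(2 ** n):
            if (idx >> (n - 1 - ctrl)) & 1 and (idx >> (n - 1 - targ)) & 1:
                U[idx, idx] = np.exp(1j * angle)
        return U

    def qft_circuit(n, max_k=None):
        # textbook QFT circuit (final qubit-reversal swaps omitted; they cancel in the
        # comparison); if max_k is set, rotations finer than 2*pi/2^max_k are dropped
        U = np.eye(2 ** n, dtype=complex)
        for j in range(n):
            U = single_qubit(n, j, H) @ U
            for k in range(2, n - j + 1):
                if max_k is None or k <= max_k:
                    U = controlled_phase(n, j + k - 1, j, 2 * np.pi / 2 ** k) @ U
        return U

    n = 8
    exact = qft_circuit(n)
    truncated = qft_circuit(n, max_k=5)   # keep only rotations down to 2*pi/32
    # normalized Hilbert-Schmidt overlap; 1.0 would mean identical unitaries
    print(abs(np.trace(exact.conj().T @ truncated)) / 2 ** n)   # stays close to 1

This is only a toy check of the qualitative point; the quantitative error analysis is the one in the textbooks.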


I don’t think we can discuss precision usefully without numbers. We seem to agree on the word “finite” but that covers a lot of ground. “High” precision in rotating an optical polarization to exactly 45 degrees is maybe 60 dB, +- 0.0001% of the probability. That means the amplitudes are matched within 0.1%. 0.1% is fine for two qubits with 4 states. It might work for 8 qubits (256 states). For 256 qubits, no.


Gah, I wrote the wrong thing. If each probability is 50% +- 10^{-6} then the amplitudes are matched to within around 2 times 10^{-6}.

But when N>2 this gets tougher rapidly.

If we add 10^12 complex amplitudes and each one is off by one part in 10^6, we could easily have serious problems with the accuracy of the sum. And 10^12 amplitudes is "only" around 40 qubits.


1/sqrt(N)


Isn't 250 square meters already pretty small for a company of their size? That is a small McMansion, and in Serbia the rent is probably 1k$ per month.


I am currently writing this from an xreal one pro. I think it fits what you are asking for.


Stochasticity (randomness) is pervasively used in classical algorithms that one compares to. That is nothing new and has always been part of comparisons.

"Error prone" hardware is not "a stochastic resource". Error prone hardware does not provide any value to computation.


Yes the claims here allow the classical computer to use a random number generator.

