
Yes, it's well known that money & prices are what make people act rationally. We'd still be slinging mud & rocks if it wasn't for money & prices.

Tangentially related from something I'm currently reading¹:

> This is the reality of twenty-first-century resource exploitation: reducing vast quantities of rock into granules and chemically processing what remains. It is both awe inspiring and disturbing. One risk is that the cyanide and mercury used in the method could escape into the surrounding ecosystem. After all, while miners like Barrick insist they follow all the rules laid down by the US Environmental Protection Agency (EPA), campaigners warn that pollution often finds its way out of the mine. Indeed, a few years earlier the EPA had fined Barrick and another nearby miner $618,000 for failing to report the release of toxic chemicals including cyanide, lead and mercury. But the main thing I was struck by as I observed each stage in this process was just how far we will go these days to secure a tiny shred of shiny metal.

> The scale, for one thing, was mind-boggling. As I looked down into the pit I could just about make out some trucks on the bottom, but only when they emerged at the top did I realise that they were bigger than three-storey buildings; the tyres alone were the size of a double-decker bus. How much earth do you have to remove to produce a gold bar? I asked my minders. They didn’t know, but they did know that in a single working day those trucks would shift rocks equivalent to the weight of the Empire State Building.

¹ Material World: A Substantial Story of Our Past and Future by Ed Conway


> in a single working day those trucks would shift rocks equivalent to the weight of the Empire State Building.

Oh. My. God.


With four parameters I can fit an elephant, and with five I can make him wiggle his trunk so there is still room for improvement.

Except learning to reason is a far cry from curve fitting. Our brains have more than five parameters.

After a quick browse of the content, my understanding is that this is more like: with a very compressed diff vector applied to a multi-billion-parameter model, the model can be 'retrained' to reason (score) better on a specific topic, e.g. math, which was used in the paper

It's the statistics equivalent of 'no one needs more than 640kb of RAM'

My very first PC was a Packard Bell with 640KB of RAM. If I’d known, I’d have saved all my RAM for retirement…

speak for yourself!

reasoning capability might just be some specific combinations of mirror neurons.

even some advanced math usually involves applying patterns found elsewhere to new topics


I agree, I don't think gradient descent is going to work in the long run for the kind of luxurious & automated communist utopia the technocrats are promising everyone.

It's not that simple. Production costs have gone up for everyone and inflation is going to get worse, so the simple logic of "higher prices, higher profits" doesn't really work in this case.

There's a short-term vs. long-term distinction here. I agree with you that ultimately everyone loses long term. Short term, the higher prices will result in higher profits, which will enrich whoever owns the oil.

We aren't at the end of the inflation, though; that hit is still coming. This is only the beginning. Next year is when things really go south. At this point it's not a question of if, but of how bad.


I agree.

It's not clear or obvious why continuous semantics should be applicable on a digital computer. This might seem like nitpicking but it's not: there is a fundamental issue that is always swept under the rug in these kinds of analyses, which is about reconciling finitary arithmetic over bit strings & the analytical equations which only work w/ infinite precision over the real or complex numbers as they are usually defined (equivalence classes of Cauchy sequences or Dedekind cuts).

There are no Dedekind cuts or Cauchy sequences on digital computers, so the fact that the analytical equations map to algorithms at all is very non-obvious.
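A concrete way to see the gap (toy sketch in plain Python, not taken from any particular paper): real-number addition is associative, but 64-bit float addition is not, so algebraic identities that any analytical derivation takes for granted silently fail on the machine.

```python
# Real-number addition is associative; IEEE-754 double addition is not.
# The identity (a + b) + c == a + (b + c) fails once intermediate results
# get rounded to 53 bits of significand.
a, b, c = 1e16, -1e16, 1.0
left = (a + b) + c   # 0.0 + 1.0 -> 1.0
right = a + (b + c)  # -1e16 + 1.0 rounds back to -1e16, so the 1.0 is lost -> 0.0
print(left, right)   # 1.0 0.0
```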


Continuous formulations are used with digital computers all the time. Limited precision of floats sometimes causes numerical instability for some algorithms, but usually these are fixable with different (sometimes less efficient) implementations.

Discretizing e.g. time or space is perhaps a bigger issue, but the issues are usually well understood and mitigated by e.g. advanced numerical integration schemes, discrete-continuous formulations or just cranking up the discretization resolution.

Analytical tools for discrete formulations are usually a lot less developed and don't as easily admit closed-form solutions.


It is definitely not obvious, but I wouldn't say it is completely unclear.

For instance we know that algorithms like the leapfrog integrator not only approximate a physical system quite well but even conserve the energy, or rather a quantity that approximates the true energy.

There are plenty of theorems about the accuracy and other properties of numerical algorithms.
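To make the leapfrog point concrete, here's a minimal sketch (plain Python, harmonic oscillator x'' = -x; the step size and duration are just illustrative): over a very long run the computed energy stays near its initial value instead of drifting.

```python
# Kick-drift-kick leapfrog for the harmonic oscillator x'' = -x.
# Being symplectic, it conserves a "shadow" energy within O(dt^2) of the
# true energy, so the computed energy stays bounded over long integrations.
def leapfrog(x, v, dt, steps):
    for _ in range(steps):
        v -= x * dt / 2   # half kick
        x += v * dt       # drift
        v -= x * dt / 2   # half kick
    return x, v

x, v = leapfrog(1.0, 0.0, 0.01, 100_000)  # integrate out to t = 1000
energy = 0.5 * (x * x + v * v)            # initial energy was 0.5
print(energy)                             # stays close to 0.5, no secular drift
```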


How do they apply in this case?

This is what the field of numerical analysis exists for. These details definitely have been treated, but this was done mainly early in the field's history; for example, by people like Wilkinson and Kahan...

I just took some basic numerical courses at uni, but every time we discretized a problem with the aim of implementing it on a computer, we had to show what the discretization error would lead to, e.g. numerical dispersion[1], and do stability analysis and such, e.g. ensure the CFL[2] condition held.

So I guess one might want to do a similar exercise to the derivation of numerical dispersion, for example, in order to see just how discretizing the diffusion process affects it and its relation to optimal control theory.

[1]: https://en.wikipedia.org/wiki/Numerical_dispersion

[2]: https://en.wikipedia.org/wiki/Courant%E2%80%93Friedrichs%E2%...
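A toy version of that exercise (hypothetical numbers, pure Python): the explicit FTCS scheme for the 1-D heat equation u_t = u_xx is stable only when r = dt/dx² ≤ 1/2, and crossing that line turns the same code from convergent to explosive — round-off noise in the highest grid mode gets amplified every step.

```python
import math

# Explicit FTCS for u_t = u_xx on [0, 1] with fixed ends, u(x, 0) = sin(pi*x).
# Von Neumann analysis says the scheme is stable iff r = dt/dx**2 <= 1/2.
def ftcs(r, nx=50, steps=200):
    u = [math.sin(math.pi * i / (nx - 1)) for i in range(nx)]
    for _ in range(steps):
        new = u[:]
        for i in range(1, nx - 1):
            new[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        u = new
    return u

print(max(abs(v) for v in ftcs(0.4)))  # r <= 1/2: the solution decays, as it should
print(max(abs(v) for v in ftcs(0.6)))  # r > 1/2: round-off noise blows up instead
```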


Doesn't continuous time basically mean "this is what we expect for sufficiently small time steps"? Very similar to how one would for example take the first order Taylor dynamics and use them for "sufficiently small" perturbations from equilibrium. Is there any other magic to continuous time systems that one would not expect to be solved by sufficiently small time steps?

You should look into condition numbers & how that applies to numerical stability of discretized optimization. If you take a continuous formulation & naively discretize you might get lucky & get a convergent & stable implementation but more often than not you will end up w/ subtle bugs & instabilities for ill-conditioned initial conditions.
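A tiny illustration of what ill-conditioning does (hypothetical 2×2 system, Cramer's rule in plain Python): the matrix [[1, 1], [1, 1.0001]] has condition number around 4·10⁴, so a 10⁻⁴ perturbation of the right-hand side moves the solution by order 1.

```python
# Solve [[a, b], [c, d]] @ (x, y) = (e, f) by Cramer's rule.
def solve2(a, b, c, d, e, f):
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Ill-conditioned system: kappa ~ 4e4. Perturbing f by 1e-4 moves the
# solution from (2, 0) all the way to (1, 1) -- any discretization or
# round-off noise in the data gets amplified by roughly that factor.
print(solve2(1, 1, 1, 1.0001, 2, 2))       # ~(2.0, 0.0)
print(solve2(1, 1, 1, 1.0001, 2, 2.0001))  # ~(1.0, 1.0)
```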

I understand that much, but it seems like "your naive timestep may need to be smaller than you think or you need to do some extra work" rather than the more fundamental objection from OP?

The translation from continuous to discrete is not automatic. There is a missing verification in the linked analysis. The mapping must be verified for stability for the proper class of initial/boundary conditions. Increasing the resolution from 64 bit floats to 128 bit floats doesn't automatically give you a stable discretized optimizer from a continuous formulation.
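One way to see that precision is orthogonal to stability (toy sketch): forward Euler on the stiff ODE y' = -50y with step dt = 0.1 diverges even under *exact* rational arithmetic, i.e. with effectively infinite precision.

```python
from fractions import Fraction

# Forward Euler on y' = -50*y with dt = 0.1: the update is
# y <- (1 - 50*dt)*y = -4*y, so the iteration diverges for ANY precision.
# Exact rational arithmetic (no rounding at all) doesn't help:
# the true solution decays like e**(-50*t); the discretization explodes.
y, dt = Fraction(1), Fraction(1, 10)
for _ in range(20):
    y = y + dt * (-50 * y)
print(y)  # (-4)**20 = 1099511627776, nowhere near the true value ~0
```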

Or you can just try stuff and see if it works

The point still stands: translation from continuous to discrete is not as simple as people think.

Numerical issues totally exist but the reason has nothing to do with the fact that Cauchy sequences don't exist on a computer imo.

The abstract formulation is different from the concrete implementation. It is precisely b/c the abstractions do not exist on computers that the abstract analysis does not automatically transfer the necessary analytical properties to the digital implementation. Cauchy sequences & Dedekind cuts are abstract & do not exist on digital computers.

Infinity has properties that finite approximations of it just don't have, and this can lead to serious problems for certain theorems. In the general case, the integral of a continuous function can be arbitrarily different from the sum of a finite sequence of points sampled from that function, regardless of how many points you sample - and it's even possible that the discrete version is divergent even if the continuous one is convergent.

I'm not saying that this is the case here, but there generally needs to be some justification to say that a certain result that is proven for a continuous function also holds for some discrete version of it.

For a somewhat famous real-world example, it's not currently known how to produce a version of QM/QFT that works with discrete spacetime coordinates, the attempted discretizations fail to maintain the properties of the continuous equations.
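The integral-vs-sum gap is easy to exhibit (toy sketch, plain Python): for any fixed number of samples n there is a smooth function whose samples are all ~0 but whose integral is 1/2, because its zeros land exactly on the sampling grid.

```python
import math

# For any fixed grid of n sample points there is a smooth function the grid
# is completely blind to: f(x) = sin(pi*n*x)**2 vanishes at every x = i/n,
# so its Riemann sum is ~0 while its integral over [0, 1] is exactly 1/2.
n = 1000

def f(x):
    return math.sin(math.pi * n * x) ** 2

riemann_sum = sum(f(i / n) for i in range(n)) / n
true_integral = 0.5  # closed form of the integral of sin(pi*n*x)**2 on [0, 1]
print(riemann_sum, true_integral)  # ~0.0 vs 0.5
```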


Real numbers mostly appear in calculus (e.g. the chain rule in gradient descent/backpropagation), but "discrete calculus" is then used as an approximation of infinitesimal calculus. It uses "finite differences" rather than derivatives, which doesn't require real numbers:

https://en.wikipedia.org/wiki/Finite_difference

I'm not sure about applications of real numbers outside of calculus, and how to replace them there.
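Finite differences come with their own finite-precision trade-off, though (toy sketch): the truncation error shrinks with h while the rounding error grows like 1/h, so on 64-bit floats the forward difference bottoms out around sqrt(machine epsilon) rather than converging to the derivative.

```python
import math

# Forward difference (f(x + h) - f(x)) / h for f = exp at x = 1, where the
# exact derivative is e. Shrinking h reduces truncation error ~h/2 but
# inflates cancellation error ~eps/h; the sweet spot is near h ~ sqrt(eps) ~ 1e-8.
def fwd_diff(f, x, h):
    return (f(x + h) - f(x)) / h

for h in (1e-1, 1e-8, 1e-15):
    err = abs(fwd_diff(math.exp, 1.0, h) - math.e)
    print(h, err)  # error is smallest at h = 1e-8 and large at both extremes
```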


I can't tell if this a troll attempt or not.

If your definition of "algorithm" is "list of instructions", then there is nothing surprising. It's very obvious. The "algorithm" isn't perfect, but a mapping with an error exists.

If your definition of "algorithm" is "error free equivalent of the equations", then the analytical equations do not map to "algorithms". "Algorithms" do not exist.

I mean, your objection is kind of like questioning how a construction material could hold up a building when it is inevitably bound to decay and therefore result in structural collapse. Is it actually holding the entire time or is it slowly collapsing the entire time?


You should provide evidence & examples for your claims if you want to be taken seriously.

Precisely!

No need to engage with an article that makes naked assertions with little backing.

Ok, fine then...:

"But they have no more consciousness, sensitivity, and sentience than a hammer. " -- naked assertion, no backing, no definition, no ope rationalization, no scientific or philosophical work shown (and this is a spicy one, because there's been philosophical turf wars on this for half a century, you can't just ASSERT that)

"Every device made by man has an off switch. We can use it sometimes." -- I have stories. Semi-Explosive near death stories. At any rate... uh, not quite?

Look, at the very least he's sloppy here. Mostly just a raw opinion piece, I guess, but not really backed by much that is real. Just so you know, this cost me more time than the text even deserves.


This is similar to AWS & their Graviton VMs.

The author does not exist & the paper is pure nonsense: https://scholar.google.com/citations?user=G97KxEYAAAAJ&hl=en. Might even be a psyop by some 3-letter agencies. So the obvious question: why did you post this?

Sorry for the confusion; the authors may not have an active record on Scholar. But I wanted to share it here because I read the paper and found it interesting.

You read the paper? All 459 pages of it? And you missed e.g. this gem on page 257? "[11:23:54] CLAUDE: OPUS 5 — 606 pages, need 194 more. FINAL PUSH. Write these in first person as Logan. MAXIMUM DENSITY:"

I'm sorry.

All traffic is monitored, all signal sources are eventually incorporated into the training set in one way or another. The person you're responding to is correct, even a single API call to any AI provider is sufficient to discount future results from the same provider.

Ok! So if someone uses an existing, checkpointed, open-source model, then the answer is yes, the results are valid, and it doesn't matter that the tests are public.

Yes, assuming the checkpoint was before the announcement & public availability of the test set.

You live in a conspiracy world. Those AI providers don't update their models that fast. You can try asking them to solve ARC-AGI-3 without a harness yourself and watch them struggle just as they did yesterday.

Which part is the conspiracy? Be as concrete as possible.

That's great but how about UltraAgents: Meta-referential meta-improving self-referential hyperagents?

AGI-MegaAgent 5.7 Pro Ultra

Somehow still financed w/ ads & ubiquitous surveillance.
