Hacker News | JoelMcCracken's comments

Nah, each key press goes to a separate lambda invocation that gets submitted to a Kafka queue, and what happens after that is a mystery to all involved.

We can make crazy latency ourselves just fine, no space transmission necessary


No, not a mystery, in fact.

Each keypress is appended to an 80-line prompt (the key name, along with the timestamp of the keypress and the current text shown on the screen) and fed to a frontier LLM. Some of the office staff banged on the keypad for a few hours to generate training data to fine-tune the LLM on the task of debouncing key presses.

Thanks to some optimizations with Triton and running multi-GPU instances, latency is down to just a few seconds per digit entered.

You see, we needed to hit our genAI onboarding KPIs this quarter…


The only languages I know of that can do Prolog-like constructs as a library are lisps, or at least languages that have reasonable symbol constructs. Usability is way, way worse if you can't talk about variables as first-class objects.

I was talking to Bob Harper about this specific issue (context was why macro systems are important to me) and his answer was “you can just write a separate programming language”. Which I get.

But all of this is just to say that doing relational-programming-as-a-library has a ton of issues unless your language supports certain things.


I believe Rust uses datafrog, a Datalog-as-a-library crate, to implement some of its next-gen solvers for traits and lifetimes. Not a lisp, and maybe this still isn't as elegant as you had in mind? Curious how this library compares for you.

https://github.com/rust-lang/datafrog


If you want to see what it looks like when you actually embed Datalog in your language, have a look at Flix: https://flix.dev/

(Select the "Usinag Datalog..." example in the code sample dropdown)

The Rust code looks completely "procedural"... it's like building a DOM document using `node.addElement(...)` instead of, say, writing HTML. People universally prefer the declarative alternative given the choice.


Haskell can do the same, and does with a lot of little embedded DSLs.

Yeah, I've wanted to try using logict to do some larger logic programming stuff. I've done it with the list monad but ran into a lot of speed issues; never quite figured out why it was so slow.
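
Roughly the kind of thing I mean, as an untested sketch (assuming the logict package's Control.Monad.Logic): the same nondeterministic search, written once against MonadPlus, can run either as a plain list or through Logic.

    import Control.Monad (MonadPlus, guard, msum)
    import Control.Monad.Logic (Logic, observeMany)

    -- one nondeterministic program, polymorphic over the backtracking monad
    triples :: MonadPlus m => m (Int, Int, Int)
    triples = do
      a <- msum (map pure [1 .. 25])   -- nondeterministic choice of a
      b <- msum (map pure [a .. 25])
      c <- msum (map pure [b .. 25])
      guard (a * a + b * b == c * c)   -- prune non-Pythagorean candidates
      pure (a, b, c)

    main :: IO ()
    main = do
      print (take 3 triples)                                    -- depth-first, via the list monad
      print (observeMany 3 (triples :: Logic (Int, Int, Int)))  -- via logict

Part of the appeal is that logict also gives you fair interleaving and operators for bounding the search, which plain lists don't.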

Well, lists are really slow.

To be more precise: lists in your code can be really fast, if the compiler can find a way to never actually have lists in the binary it produces. If it actually has to have lists at runtime, it's generally not all that fast.
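
For example (just a sketch; the details depend on GHC version and flags), a pipeline like this usually fuses under -O so that no intermediate list is ever materialized:

    -- foldr/build fusion typically compiles this to a tight loop over Ints,
    -- with no cons cells allocated at runtime
    sumSquares :: Int -> Int
    sumSquares n = sum (map (\x -> x * x) [1 .. n])

    main :: IO ()
    main = print (sumSquares 1000000)

But as soon as the list has to exist at runtime (say it escapes to a consumer the optimizer can't see through), you're back to chasing pointers through cons cells.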


Yeah I figured that, but the combinations I was dealing with really weren’t that many.

The problem was https://xmonader.github.io/prolog/2018/12/21/solving-murder-... and trying to solve it with the list monad. Someday I hope to get back to it.


The experimental "Polonius" borrow checker was first implemented with datafrog, but I believe it's been abandoned in favour of a revised (also "Polonius") datalog-less algorithm.

oh my god, what a horrible but incredible idea


neat! I've been moving my site over to heavily use emacs/org for the authoring format and nix for the tooling infrastructure. I'll keep this in mind as a possible tool to help; I don't precisely know what I may still need to do that won't be easily doable with emacs.


Ditto. I have a hard time thinking in pictures. When I do, there can only be one detailed part at a time, a very small area.

I don't really think in language either. To me, thought is much more a kind of abstract process.


This is cool. I could see myself using this for notes.


You can readily use it with AsciiDoc, if that's what you use [1]. With anything, you could also use MathML in an HTML passthrough block, but it's pretty verbose.

1. https://docs.asciidoctor.org/asciidoctor/latest/stem/asciima...


I do agree with this, but also, I don't really buy a lot of the tradeoffs, or at least to me they are false tradeoffs.

Her first example is excellent. In Haskell, we have global type inference, but we've found it to be impractical. So, by far the best practice is not to use it; at the very least, all top-level items should have type annotations.
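
For illustration (just the house style, nothing enforced by the language): GHC will happily infer a type for a top-level binding, and with -Wall it will even warn that the signature is missing, but the convention is to write it out regardless.

    -- GHC would infer something like (Foldable t, Fractional a) => t a -> a,
    -- but the accepted style is to pin the type down at the top level:
    mean :: [Double] -> Double
    mean xs = sum xs / fromIntegral (length xs)

    main :: IO ()
    main = print (mean [1, 2, 3, 4])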

The second one, structural typing: have your language support both structural types and nominal types, then? This is roughly analogous to how Haskell addressed the problem: add type roles. Nominal types can't be converted to one another, whereas structural ones can. Not that Haskell is the paragon of language-well-designed-ness, but... There might be some other part of this I'm missing, but given how obvious this solution is, and the fact that I haven't seen it mentioned, it's striking.
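
A sketch of the Haskell side of that analogy (GHC newtypes plus Data.Coerce for the cheap conversions, and role annotations to opt back into nominal-only; untested, and not claiming this is exactly what the post has in mind):

    {-# LANGUAGE RoleAnnotations #-}
    import Data.Coerce (coerce)

    newtype Meters = Meters Double deriving Show

    -- representational role: this conversion is free and allowed
    toMeters :: [Double] -> [Meters]
    toMeters = coerce

    data User
    data Order

    newtype Key tag = Key Int deriving Show
    type role Key nominal   -- opt out: Key User and Key Order no longer interconvert

    -- badCast :: Key User -> Key Order
    -- badCast = coerce     -- rejected once the role is nominal

    main :: IO ()
    main = print (toMeters [1.0, 2.5])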

On dynamic dispatch: allow it to be customized by the user; this is done today in many cases! Problem solved. Plus, with a globally optimizing compiler, if you can live with a big executable, you can have your cake and eat it too.

On JIT: yes, JIT compilation takes some time; it is not free. But a JIT can make sense even in languages that are AOT compiled, since in general it optimizes code based on observed usage patterns. If AOT loop unrolling makes sense in C, then runtime optimization of fully AOT-compiled code should be advantageous too. And today you can just about always count on getting a spare core to do this kind of work on; we have so many of them available and don't have the tools to easily saturate them. Or even if you do saturate them today with N cores, you probably won't on the next generation, when you have N+M cores. Sure, there's going to be some overhead when swapping out the code, but I really don't think that's where the mentioned overhead comes from.

Metaprogramming systems are another great example: yes, if we keep them the way they are today, at the _very least_ we're saying we need some kind of LSP support to make them reasonable for tooling to interact with. Except, guess what: every language with a community of any reasonable size needs an LSP nowadays anyway. Beyond that, there are lots of other ways to think about metaprogramming besides the macros we commonly have today.

I get her feeling; balancing all of this is hard. One thing you can't really get away from here is that all of this increases language, compiler, and runtime complexity, which makes things much harder to do.

But I think that's the real tradeoff here: implementation complexity. The more you address these tradeoffs, the more complexity you add to your system, and the harder the whole thing is to think about and work on. The more constructs you add to the semantics of your language, the more difficult it is to prove the things you want about its semantics.

But, that's the whole job, I guess? I think we're way beyond the point where a tiny compiler can pick a new set of these tradeoffs and make a splash in the ecosystem.

Would love to have someone tell me how I'm wrong here.


Citation needed? I don't mean this in a snarky way, though. I genuinely have not seen any evidence that these things can train on their own output and produce better results than before the self-training.


This is something I think about. The state of the art in self-driving cars still makes mistakes that humans wouldn't make, despite all the investment into this specific problem.

This bodes very poorly for AGI in the near term, IMO


What's super weird to me is how people seem to look at LLM output and see:

"oh look it can think! but then it fails sometimes! how strange, we need to fix the bug that makes the thinking no workie"

instead of:

"oh, this is really weird. Its like a crazy advanced pattern recognition and completion engine that works better than I ever imagined such a thing could. But, it also clearly isn't _thinking_, so it seems like we are perhaps exactly as far from thinking machines as we were before LLMs"


Well, the difference between those two statements is obvious. One looks and feels; the other processes and analyzes. Most people can process and analyze some things; they're not complete idiots most of the time. But most people also cannot think through and analyze the most groundbreaking technological advancement they might've personally ever witnessed, one that requires college-level math and computer science to understand. It's how people have always been: electricity, the telephone, computers, even barcodes. People just don't understand new technologies. It would be much weirder if the populace suddenly knew exactly what was going on.

And to the "most groundbreaking blah blah blah", i could argue that the difference between no computer and computer requires you to actually understand the computer, which almost no one actually does. It just makes peoples work more confusing and frustrating most of the time. While the difference between computer that can't talk to you and "the voice of god answering directly all questions you can think of" is a sociological catastrophic change.


Why should LLM failures trump successes when determining whether it thinks/understands? Yes, they have a lot of inhuman failure modes. But so what; they aren't human. Their training regimes are very dissimilar to ours, so we should expect alien failure modes owing to this. That doesn't strike me as a good reason to think they don't understand anything in the face of examples that presumably demonstrate understanding.


Because there's no difference between a success and failure as far as an LLM is concerned. Nothing went wrong when the LLM produced a false statement. Nothing went right when the LLM produced a true statement.

It produced a statement. The lexical structure of the statement is highly congruent with its training data and the previous statements.


This argument is vacuous. Truth is always external to the system. Nothing goes wrong inside the human when he makes an unintentionally false claim. He is simply reporting on what he believes to be true. There are failures leading up to the human making a false claim. But the same can be said for the LLM in terms of insufficient training data.

>The lexical structure of the statement is highly congruent with its training data and the previous statements.

This doesn't accurately capture how LLMs work. LLMs have an ability to generalize that undermines the claim of their responses being "highly congruent with training data".


By that logic, I can conclude humans don't think, because of all the numerous times our 'thinking' fails.

I don't know what else to tell you, other than that this infallible logic automaton you imagine must exist before something counts as 'real intelligence' does not exist and has never existed, except in the realm of fiction.


You’re absolutely right!

