
Typical FUD.

Replace AI with "open source and Linux", and "open source" with "Windows" in the statements. That's what Microsoft's PR team would have said about open source and Linux about 20 years back in the 2000s.

After the unsuccessful FUD era, Microsoft is now running toward Linux instead, shipping it alongside Windows via WSL to counter the Unix-like appeal of macOS, and because Linux and open source dominate the cloud OS demographic.


Even worse: Microsoft's FUD was mostly right. The joke about open source being communism played out straight - FOSS pretty much destroyed the ability to make money on software products, accelerating the transition to SaaS models where you can carefully seek rent from the shelter of your secure company servers (later, the cloud). That is in large part responsible for the modern surveillance economy: some SaaS segments decayed to "free with ads", where - much like with OSS and locally-run software - you cannot compete on price with free.

>But what about GC pauses

GC is like the automatic transmission: an inevitable, natural evolution of programming languages.

I think future programming languages will offer hybrid modes of GC and manual memory management, similar to today's hybrid automatic/manual transmissions in state-of-the-art hypercars [1]. I consider the D language a pioneer of this approach.

My hypothesis is that GC can be made as deterministic as manual memory management, just as the ICE auto industry has all but phased out the manual transmission. Heck, EVs have no manual at all.

Hopefully the new io_uring facility with BPF control can enable this kind of deterministic GC [2],[3].

[1] Here’s how Koenigsegg’s new manual/automatic CC850 gearbox works (2025):

https://www.topgear.com/car-news/supercars/heres-how-koenigs...

[2] BPF meets io_uring (2021):

https://lwn.net/Articles/847951/

[3] How io_uring and eBPF Will Revolutionize Programming in Linux (2020):

https://www.scylladb.com/2020/05/05/how-io_uring-and-ebpf-wi...


I am very sorry, but you are very wrong.

Those were dark ages in Europe, but truly a golden age in the Islamic empires, which far surpassed the Greeks and Romans. The epitome was the Baitul Hikmah, or House of Wisdom, established at the time of the Abbasid Caliphate [1].

>Between the fall of Rome (476 AD) and the Carolingian empire (~800 AD)

During this time Al-Khwarizmi was born and the House of Wisdom was established. He contributed to algebra (the book Al-Jabr) and much else, and the word "algorithm" literally originated from his name [2].

Many Greek and Indian books were translated into Arabic by Islamic scholars during this mythical, so-called dark age.

Many, many more new books were written that improved on and extended prior knowledge, greatly expanding the state of the art, centered in both Baghdad, Iraq and Toledo, Spain.

The Almagest that Galileo studied was the Arabic translation of Ptolemy's work. Of course, by Galileo's time, Islamic astronomers' knowledge and contributions had already far surpassed the Greek and the Indian.

The Islamic scholars were not only translating (itself progress) but also contributing to the body and foundations of knowledge, not only in astronomy but also in mathematics, the sciences (physics, chemistry, biology), medicine, engineering, geography, psychology, politics, economics, architecture, etc.

There are numerous other polymath scholars like Al-Khwarizmi, for example Al-Haitham, Ibnu-Sina and Ibnu-Rusd, to name just a few. But the European community has been in denial for a long time, and has in addition unfairly suppressed these Islamic scholars' contributions. They even literally changed the scholars' names, Latinizing them into lousy names like Alhazen, Avicenna and Averroes to hide the fact that they were Islamic scholars. Imagine if people today changed a scholar's name like Newton to Nawab in the literature.

Heck, even Copernicus copied diagrams from earlier Islamic astronomers' books without proper citation, which in the modern era would be plagiarism [3]. If this happened today, a university president who plagiarized in his or her book/thesis/paper would be asked to resign.

[1] House of Wisdom:

https://en.wikipedia.org/wiki/House_of_Wisdom

[2] Al-Khwarizmi:

https://en.wikipedia.org/wiki/Al-Khwarizmi

[3] Islamic Astronomy and Copernicus [pdf]


>having the ability to write Go syntax and interop directly with C is the plus.

It's always a plus to interop with the lingua franca of programming languages.

I think the D language's approach is more organic and intuitive, since you can interop directly, and now that C is natively supported by the D compiler it's even better [1],[2].

[1] Interfacing to C:

https://dlang.org/spec/interfaceToC.html

[2] Adding an ANSI C11 compiler to D so it can import and compile C files directly (105 comments):

https://news.ycombinator.com/item?id=27102584


I think instead of a static, meaningless statue, it would be much better to clone Magawa in terms of functionality and capability, and name the landmine-detection device Magawa.

Japanese researchers have already succeeded in detecting sub-surface bamboo shoots for culinary use, because young bamboo shoots still underground taste better than those that have already emerged.

Let's invent a landmine-detection robotic device named MAGAWA, for Mines Apparatus Ground Assessment Waveform Analysis.


> Classic RAG (using only vector database) is over

Fixed that for you.


For what it's worth, there's a new book, The Science of Music, by Mark Newman, who is also the author of the popular book on computational physics [1].

[1] Mark Newman's new book: The Science of Music (2023):

https://lsa.umich.edu/cscs/news-events/all-news/search-news/...


Does PostgreSQL 18's improved performance from the latest asynchronous I/O and smarter query planning with better parallelism offset this performance hit? [1]

"Enhanced and smarter parallelisation; initial benchmarks indicate up to 40% faster analytical queries".
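For reference, the asynchronous I/O backend in PostgreSQL 18 is selected via the new `io_method` setting; a sketch of the relevant configuration (availability of io_uring depends on Linux and build options):

```ini
# postgresql.conf (PostgreSQL 18, change requires restart)
io_method = io_uring    # alternatives: worker (default), sync
```

You can then confirm the active backend with `SHOW io_method;`.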

[1] PostgreSQL 18 released: Key features & upgrade tips:

https://www.baremon.eu/postgresql-18-released-key-features-u...


>Why domain specific LLMs won’t exist: an intuition

>We would have a healthcare model, economics model, mathematics model, coding model and so on.

It's not a question of whether there will ever be specialized models; rather, it's a matter of when.

This will democratize almost all work and professions, including programmers, architects, lawyers, engineers, medical doctors, etc.

Glass-half-empty people will say this is a catastrophe of machines replacing humans. Glass-half-full people, on the other hand, will say this is good for society and humanity, making the work more efficient, faster and much cheaper.

Imagine: instead of waiting a few months for your CVD diagnostic procedures due to the worldwide shortage of cardiologists (a fact), diagnostics with the help of AI/LLMs will probably take only a few days, with an expert cardiologist in the loop, provided the sensitivity is high enough.

It's a win-win for patients, medical doctors and hospitals. This will lead to earlier detection of CVD, and hence fewer complications and less suffering, whether acute or chronic.

Foundation models are generic by nature, trained on HPC clusters with GPUs/TPUs inside AI data centers.

The other extreme is RAG with vector databases and file systems for context prompting, as the sibling comments mention.

The best trade-off, the Goldilocks option, is model fine-tuning. Specifically, the promising self-distillation fine-tuning (SDFT) recently proposed by MIT and ETH Zurich [1],[2]. Unlike conventional supervised fine-tuning (SFT), which suffers from forgetting, SDFT is not forgetful, which makes fine-tuning practical and not wasteful. SDFT used only 4 x H200 GPUs for the fine-tuning process.

Apple reports the same with their simple self-distillation (SSD) for LLM coding specialization [3],[4]. They used 8 x B200 GPUs for model fine-tuning, which any company can afford for local fine-tuning based on the open-weight LLMs available from Google, Meta, Nvidia, OpenAI, DeepSeek, etc.

[1] Self-Distillation Enables Continual Learning:

https://arxiv.org/abs/2601.19897

[2] Self-Distillation Enables Continual Learning:

https://self-distillation.github.io/SDFT.html

[3] Embarrassingly simple self-distillation improves code generation:

https://arxiv.org/abs/2604.01193

[4] Embarrassingly simple self-distillation improves code generation (185 comments):

https://news.ycombinator.com/item?id=47637757


It seems that self-distillation is the way to go for LLMs.

Self-distillation was shown to be very efficient and effective back in January this year by the MIT and ETH team with their Self-Distillation Fine-Tuning (SDFT) LLM system [1],[2].

That earlier work is also this paper's closest competitor, listed as On-Policy Self-Distillation in the comparison table.

I hope they keep the original work's real name, Self-Distillation Fine-Tuning, or SDFT. Imagine later papers citing this very paper as cross-entropy self-distillation instead of its own given name, Simple Self-Distillation, or SSD. Although I'll admit it's a lousy name that clashes with the common SSD nomenclature for solid-state drives, as others have rightly pointed out.

I think they should have given proper credit to that earlier seminal work on SDFT, but apparently they just list it as one of the systems in their benchmark without explaining the connection and lineage, which is a big thing in research publication.

[1] Self-Distillation Enables Continual Learning:

https://arxiv.org/abs/2601.19897

[2] Self-Distillation Enables Continual Learning:

https://self-distillation.github.io/SDFT.html


Good explainer of on-policy self-distillation from the authors: https://x.com/siyan_zhao/status/2014372747862999382#m
