Hacker News | pards's comments

> I happen to rent and can't keep

This is my fear - what happens if the AI companies can't find a path to profitability and shut down?


Don't threaten us with a good time.

That’s not a good time; I love these things. I’ve been able to indulge myself so much. It would possibly be good for job security, but it would suck in every other way.

This is why local models are so important. Even if the non-local ones shut down, and even if you can't run local ones on your own hardware, there will still be inference providers willing to serve your requests.

Recently I was thinking about how some (expensive) consumer electronics like the Mac Studio can run pretty powerful open-source models with fairly efficient power consumption, could easily run on private renewable energy, and are on most (all?) fronts much more powerful than the original ChatGPT, especially if connected to a good knowledge base. So aside from very extreme scenarios, I think it is safe to say there will always be a way to avoid going back to how we used to code, as long as we can supply the right hardware and energy. Personally I think we will never need to go to such extreme ends... even though I know of people who seem to seriously think developed countries will run out of electricity one day, which, while I reckon there might be tensions, seems like a laughable idea IMHO.

My first introduction to the internet was through the telnet-based EW-too talkers like Foothills (Boston U) and Forest (UTS). I have very fond memories of staying up late talking to people from all over the globe. It was truly amazing to me.

The best part was how the users moderated behaviour - bad actors were ejected swiftly but rarely permanently.


This represents a fork in the road that becomes apparent by your mid-40s.

Those who ignore it will be overweight, unfit, and on daily meds. Those who change their lifestyle will not.

The fix is:

> Leading a healthy life is simple: sleep well, exercise three times a week, have an active social life, eat a variety of vegetables and whole foods, avoid sugar, processed foods, alcohol and drugs. That's 90%. Everything else is optimisation.


A 1980s Toyota Hilux would give it a run for its money

https://youtu.be/Yl1FNX08HFc


The Toyota Hilux is in its own category. It is literally combat-proven.

https://i.pinimg.com/originals/93/42/42/93424282cbcafce0ad81...


It doesn’t look like it would fit three cows.


Name and shame

Tangerine (formerly ING Direct) in Canada only has 6-digit PINs and SMS 2FA

TD Canada Trust only supports SMS 2FA

PC Financial only supports SMS 2FA


> The third, Build, will teach you about how to reliably build your software with Make.

Make? In 25 years as a professional developer I have never encountered make in the enterprise.

At least cover the various generic _models_ behind a few of the modern build tools so students can understand both the commonalities and the differences between, say, NX, NPM, Maven, Gradle, go build, etc.

Maybe a class on CI/CD pipelines, too.


I develop embedded software. I use make all the time.

I don't want to... but people keep using it because it's simpler than other build systems.

Many UI tools based on Eclipse use make under the hood.

Many recipes used by Yocto just use make to build the software and then install the output somewhere.

It all depends what you're trying to build and where you work.
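
For anyone who hasn't seen it, the "build the software and then install the output somewhere" pattern those recipes drive looks roughly like this. A minimal sketch, assuming GNU make and coreutils; the file and program names are made up, and recipe lines must start with a tab:

    # Hypothetical program built from two C files.
    CC      ?= gcc
    CFLAGS  ?= -O2 -Wall
    PREFIX  ?= /usr
    DESTDIR ?=

    app: main.o util.o
    	$(CC) $(CFLAGS) -o $@ $^

    %.o: %.c
    	$(CC) $(CFLAGS) -c -o $@ $<

    # Packaging layers typically run something like:
    #   make install DESTDIR=/path/to/staging
    # to collect the output into a staging directory.
    install: app
    	install -D -m 0755 app $(DESTDIR)$(PREFIX)/bin/app

    clean:
    	rm -f app *.o

    .PHONY: install clean

The DESTDIR/PREFIX convention is what lets a packaging system stage the installed files somewhere other than the live root filesystem.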


You'll never guess what we talk about later on in the unit. Spoiler: exactly that!

It notionally focuses on make, but the concepts apply much more broadly than that one specific tool.


Makefiles and shell scripts are still knocking around in the systems programming world, which I think is the world OP comes from.


Makefiles are a perfect abstraction over proprietary CI/CD DSLs and commands.

As a polyglot, having to remember all the differences is awful, so I make (ha!) local Makefiles that invoke the relevant tool. The same routine concepts (lint, build, or run tests) may be "yarn foo -arg1", "npx -foo", or "go bar" depending on the project and tool, which gets annoying when you're frequently switching between projects.

Big tech companies with monorepos solve this cognitive overhead with a unified build system (Blaze, Buck, Buck2). IMHO, Make makes a decent glue system at smaller organizations that lack a compiler/build/tooling team.
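
For illustration, a minimal sketch of that glue pattern, assuming a repo that has both a yarn-managed frontend and a Go module; the script names are placeholders, not commands from any real project:

    .PHONY: lint build test

    # Same three verbs in every project; the Makefile records
    # the project-specific incantations.
    lint:
    	yarn eslint .
    	go vet ./...

    build:
    	yarn build
    	go build ./...

    test:
    	yarn test
    	go test ./...

Then "make lint", "make build", and "make test" are the same muscle memory everywhere, regardless of which toolchain is underneath.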


Indeed. CMake is now the gold standard for C/C++ projects. It should be taught especially in an introductory class.


I did, but so what? Make IS the generic model, and no one should invent any kind of build system without understanding make first.
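
To spell out what "the generic model" means: a build is a graph of targets, each with prerequisites and a recipe, and a recipe reruns only when its target is missing or older than a prerequisite. A minimal sketch with made-up file names (recipe lines start with a tab):

    # report.pdf depends on report.md, which is generated from data.csv.
    # Touch data.csv and both steps rerun; touch nothing and make does nothing.
    report.pdf: report.md
    	pandoc report.md -o report.pdf

    report.md: data.csv gen_report.py
    	python gen_report.py data.csv > report.md

The modern tools (NX, Gradle, Bazel, go build) are refinements of that same target/prerequisite/recipe model, differing mostly in how they decide what is out of date and where outputs live.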


Yeah, really bad UX. Unreadable.

I zoomed in Firefox on macOS and two-finger scrolling stopped working. The scrollbar appeared momentarily, so I grabbed it, but it jumped to the next article and scrolled left, which pushed the next article off the page to the right.


Is there a link that doesn't require me to agree to give up my first-born?



>> But because it can also be used to bypass paywalls

> How? Does the site pay for subscription for every newspaper?

Someone with a subscription logs into the site, then archives it. Archive.is uses the current user's session and can therefore see the paywalled content.


> Someone with a subscription logs into the site, then archives it.

That’s not the case. I don’t have a NYT subscription. I just Googled for an old, obscure article from 1989 on pork bellies that I thought would be unlikely for archive.today to have cached, and sure enough, when I asked it to retrieve that article, it didn’t have it and began the caching process. A few minutes later, it came up with the webpage; if you visit it on archive.is, you can see it was first cached just a few minutes ago.

https://www.nytimes.com/1989/11/01/business/futures-options-...

My assumption has been that the NYT is letting them around the paywall, much like the unrelated Wayback Machine. How else could this be working? The only way I can think it could work is that either they have access to a NYT account and are caching using that (something I suspect the NYT would notice and shut down), or there is a documented hole in the paywall they are exploiting (and not via the Wayback Machine, since the caching process shows they are pulling directly from the NYT).


I believe news sites let crawlers access the full articles for a short period of time, so that they appear in search results. Archive.is crawls during that short window.


Do they have such an option? I don't see it on the site, and the browser extension seems to send only the URL [1] to the server. Can you provide more information?

[1] https://github.com/JNavas2/Archive-Page/blob/main/Firefox/ba...


Does it still leak your IP, e.g. if the page rendered by the site you're archiving includes it? You'd think they'd create a simple filter to redact that out.


I’m not advocating for it, but:

Newspapers and similar websites might soon put indicator words on the page, not just simple subscriber numbers that can be replaced, to show who is viewing the page; those words would then make their way into the archives.


> Over multiple years, we built a supervised pipeline that worked. In 6 rounds of prompting, we matched it. That’s the headline, but it’s not the point. The real shift is that classification is no longer gated by data availability, annotation cycles, or pipeline engineering.

