DavidSJ's comments | Hacker News

The original Mac system software was written in Pascal and most Mac toolbox calls took Pascal-style (prefixed by length) rather than C-style (terminated with null character) strings. But you could write application code in either language keeping this caveat in mind.

It was actually mostly written in assembly, but it used Pascal calling conventions and structure layouts, since Pascal was expected to be the primary language for application developers. The same had been true for the Lisa, for “large” applications on the Apple II, and for much of the rest of the microcomputer and minicomputer industry, and even the nascent workstation industry (e.g., Apollo).

It was the Lisa system software that was mostly implemented in Pascal, and some blamed this for its size and its performance. Compilers and linkers weren’t great back then: most compiler code generation was pretty rigid, and most linkers didn’t even coalesce identical string literals across compilation-unit boundaries!

Lisa Workshop C introduced the “pascal” keyword for function declarations and definitions to indicate they used Pascal calling conventions, and otherwise followed Lisa Pascal structure layout rules, so as to minimize the overhead of interoperating with the OS. (I’m not sure whether it introduced the “\p” Pascal string literal convention too or if that came later with Stanford or THINK Lightspeed C.)
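The difference between the two string formats can be sketched with Python bytes (an illustrative toy, not Toolbox code; a real Str255 is one length byte followed by up to 255 character bytes):

```python
# Pascal-style strings are length-prefixed; C-style strings are
# NUL-terminated. The "\pHello" literal in those C compilers emitted
# the Pascal layout shown here.
text = "Hello"

pascal_str = bytes([len(text)]) + text.encode("ascii")  # b'\x05Hello'
c_str = text.encode("ascii") + b"\x00"                  # b'Hello\x00'

print(pascal_str)
print(c_str)
```

One practical consequence of the length prefix: a Pascal string's length is known in constant time, but it can hold at most 255 characters, while a C string can be any length but must be scanned to find its end.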


That brings back the memories. I had a copy of Lightspeed C for the Mac in college.

In the workstation world, most companies used C, not Pascal. Apollo was different in that regard: their operating system, Domain, was unique to them, while most of the other workstation companies (Sun, HP, DEC, and IBM) were using Unix variants of some kind (either BSD-based or System V-based in most cases). Apollo Domain was written in Pascal and was definitely not Unix-based.

It had many unique and interesting features. In particular, it had very sophisticated authentication and file-sharing capabilities. A user could log in on any machine that was part of the domain (hence the name) and the user’s complete file system would be made available over the network on that hardware. Every system on the network shared a domain-level file system, which removed the need for many Unix solutions like NFS.

I had just accepted a job offer out of college from HP’s workstation division when HP bought Apollo. By the time I started, a couple of months later, I was part of the HP side of the Apollo Systems Division.


You’re talking about the workstation world circa 1985 and later, but prior to then the victory of C and UNIX wasn’t a sure thing. Apollo was the big player, but they weren’t the only ones.

In particular, many minicomputer vendors had some type of graphics and engineering workstation system built around their minicomputer product line, whether multi-user (where you’d have one minicomputer or even mainframe serving multiple bitmap or vector graphics terminals) or single-user (whether using a dedicated low-end minicomputer as a single-user system or using a new CPU design).

The Xerox Alto is what everyone cites as the start of the workstation trend, but it didn’t just beget the Xerox Star, the Lisp Machine, and the Lisa, it also led to the Three Rivers PERQ and CAD/CAE environments built on top of modular hardware from Data General and DEC, to the point where eventually DG, DEC, HP, and others released their own graphical workstations based on their minicomputer architectures.

All of these used vendor operating systems, not UNIX, and almost all emphasized the use of Pascal and FORTRAN for high-level application development. (The ones that didn’t had vendor languages too, like InterLISP and Mesa for Xerox.)


A good example of this dichotomy is the Puzzle desk accessory. Originally written in Pascal (as an example of how to write a DA), it was too large to include on a 400K micro-floppy disk, so it was rewritten in assembly language, shrinking from 6K bytes to 600 bytes:

https://www.folklore.org/Puzzle.html


> Do they have this set on business accounts also by default? If so, this is really shady.

Looks like not, but would it actually have been shadier, or are we just used to individual users being fucked over?


Capacity is tight; you serve from where you can.


Probably also because most token use cases aren't latency-sensitive. An extra 200ms of delay isn't going to change much for most of them.


Right, so if they were able to get a discount in UAE…


I'm like you.

I loved Apple IIs at schools and libraries as a young child, fell in love with my Mac IIsi at home at the age of 7. Later, at 13, I had a Macintosh-evangelizing web site and mailing list that Guy Kawasaki (Apple's lead evangelist) even subscribed to.

I've been a primary Mac user through the 68k, PowerPC, Intel, and Apple Silicon days, from System 6.0.7 through today. Got an original iPhone and iPad, have upgraded my iPhone every few years since.

The technofeudalism, bugginess, and UI crappiness have me done and looking for the exits, to say nothing of the embrace of Trump. My next laptop won't be a Mac, and my next phone won't be an iPhone.


Yes, the actual LLM returns a probability distribution, which gets sampled to produce output tokens.

[Edit: but to be clear, for a pretrained model this probability means "what's my estimate of the conditional probability of this token occurring in the pretraining dataset?", not "how likely is this statement to be true?" And for a post-trained model, the probability really has no simple interpretation other than "this is the probability that I will output this token in this situation".]
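The logits-to-token step can be sketched in a few lines of Python (a toy illustration with made-up numbers, not any particular model's code):

```python
import math
import random

def softmax(logits):
    # Convert raw model scores (logits) into a probability distribution.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(probs, rng=random):
    # Draw one token index from the distribution (inverse-CDF sampling).
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding

logits = [2.0, 1.0, 0.1]   # hypothetical scores for a 3-token vocabulary
probs = softmax(logits)
token = sample_token(probs)
```

Greedy decoding would just take `max(range(len(probs)), key=probs.__getitem__)` instead of sampling; temperature and top-k/top-p sampling are variations that reshape or truncate `probs` before the draw.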


It’s often very difficult (intractable) to come up with a probability distribution of an estimator, even when the probability distribution of the data is known.

Basically, you’d need a lot more computing power to come up with a distribution of the output of an LLM than to come up with a single answer.


What happens before the probability distribution? I'm assuming that, say, alignment or other factors would influence it?


In microgpt, there's no alignment. It's all pretraining (learning to predict the next token). But for production systems, models go through post-training, often with some sort of reinforcement learning which modifies the model so that it produces a different probability distribution over output tokens.

But the model "shape" and computation graph itself doesn't change as a result of post-training. All that changes is the weights in the matrices.
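A toy sketch of that point (made-up numbers, not microgpt's code): the same forward pass over the same computation graph yields a different output distribution once the matrix entries change.

```python
import math

def forward(weights, hidden):
    # The "computation graph": logits = W @ h, then softmax.
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in weights]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

hidden = [1.0, -0.5]  # hypothetical hidden state from earlier layers

# Same shapes, same code path; only the numbers in the matrices differ.
pretrained_w  = [[0.2, 0.1], [0.0, 0.3], [-0.1, 0.4]]
posttrained_w = [[0.9, -0.2], [0.1, 0.0], [-0.3, 0.6]]

p_before = forward(pretrained_w, hidden)
p_after  = forward(posttrained_w, hidden)
```

Both calls run the identical `forward` function; post-training only moved the values inside the weight matrices, which shifts probability mass among the same output tokens.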


OpenAI should not be agreeing to any contract with DOD under these circumstances of Anthropic being falsely labeled a supply chain risk.


That's 4–6 months over the 18 months the trials lasted, i.e. about a 30% slowdown of progression. The open-label extensions suggest this relative slowdown continues at least to the 4-year mark (at which point it would have bought you over a year of time): https://www.alzforum.org/news/conference-coverage/signs-last...

Time will tell if the 30% slowdown continues beyond four years, and/or if earlier treatment with more effective amyloid clearance from newer drugs has greater effects. The science suggests it should.


It’s one of the best blood tests. There are also PET scans, lumbar punctures (spinal taps), and postmortem analyses of brain tissue.


I don’t think we should preemptively surrender our free speech to the authoritarians.


Even the counting numbers arose historically as a tool, right?

Even negative numbers and zero were objected to until a few hundred years ago, no?

