I'm pretty sure that this is not true. I talked to Bud Lawson (the inventor of the pointer) and he claimed that they had implemented special behaviour for null pointers earlier. When I talked to Tony later about it, he said he had never heard of Bud Lawson. So probably both invented them independently, but Bud came first.
If we start playing the "who was first" game, then for the Soviet machine Kiev (Kyiv), an "address language" with a "prime operation" was created in 1957-59.
The prime operation and address mapping.
The prime operation defines a certain single‑argument function. Its symbol (a prime mark) is written above and to the left of the argument:
'a = b
where a is the argument and b is the result of the operation.
This is read as: "prime a equals b" (or "b is the contents of a").
The argument a is called an address, and the function value b is called the contents of the address.
The prime function ' defines a mapping from the set of addresses A to the set of contents B, which we will call an address mapping.
Pointers and indirect addressing were used in assembly languages and machine languages much earlier than that, perhaps even in some relay-based computers.
In any case, by 1954 most or all electronic computers already used this.
The only real priority question is which high-level programming languages were the first to use pointers.
In my opinion, the first language having pointers with implicit dereferencing was CPL, published in 1963-08, and the first language having pointers with explicit dereferencing was Euler, published completely in 1966-01, though this feature had already been published in 1965-11. The first mainstream programming language with a large installed base that had pointers was the revised IBM PL/I, starting with its version from 1966-07.
Thanks for the link to the book describing the "Kiev" computer. It seems an interesting computer for the year 1957, but it does not have anything to do with the use of pointers in high-level programming languages.
On the page you indicated there is a description of what appears to be a symbolic assembler. The use of a symbolic assembly language was great progress at that early date, because many of the first computer programs had been written directly in machine language, or with only a minimal translation, e.g. by using mnemonics instead of numeric opcodes.
However, this does not have anything to do with HLL pointers. Means of indicating indirect addressing in an assembly language had existed earlier, because they were strictly necessary for any computer that provided indirect addressing in hardware.
In the very first computers, the instructions were also used as pointers, so a program would modify the address field of an instruction, which was equivalent to assigning a new value to a pointer, before re-executing the instruction.
Later, to avoid the re-writing of instructions, both index registers and indirect addressing were introduced. Indirect addressing typically reserved one bit of an address to mark indirection. So when the CPU loaded a word from the memory, if the indirect addressing bit was set, it would interpret the remainder of the word as a new address, from which a new word would be loaded. This would be repeated if the new word also had the indirection bit set.
The assembly languages just had to use some symbol to indicate that the indirection bit must be set, which appears to have been "prime" for "Kiev".
Pity you didn't look a little further, where there was more syntax and semantics...
The concept of a high-level language is, of course, relative, but if, for example, someone considers Forth to be an HLL, then imho, the language/formalism from the book about the Kiev machine was definitely one, and it was described in more detail by its chief architect, Katherine Yushchenko, in a book from 1963:
https://it-history.lib.ru/TEXTS/Adresnoe-programmirovanie_EY...
If you are still interested, you can look at page 35, where there are several examples, including finding the GCD.
"Reference" was the original term used in the languages derived from ALGOL for what is now called "pointer".
The distinction that exists in C++ between "reference" and "pointer" is something very recent. In the past the 2 terms were synonymous.
The term "pointer" was introduced by IBM PL/I in July 1966, where it replaced "reference".
PL/I has introduced many terms that have replaced previously used terms. For example:
reference => pointer
record => structure
process => task
and a few others that I do not remember right now.
"Pointer" and "structure" became dominant after the C language took them from PL/I and C then became extremely popular. Previously "reference" and "record" were more frequently used.
Euler had both an address-of operator, which was prefix "@" and an indirect addressing a.k.a. pointer dereferencing operator, which was a postfix middle dot.
So it had everything that C has, except pointer arithmetic.
Only a subset of the programming languages that have pointers also allow pointer arithmetic. Many believe that whenever address arithmetic is needed, only indices should be used, not pointers, because with indices it is much easier for the compiler to determine the range of addresses that may be accessed.
You should provide a citation for where Bud Lawson has published his invention.
The use of pointers in assembly language does not count as an invention, as they were used since the earliest automatic computers. Implicit reference variables that cannot be manipulated by the programmer, as in FORTRAN IV (1962), do not count as pointers.
The method of forcing another level of evaluation of a variable with a "$" prefix, which was introduced in SNOBOL in January 1964 and has been inherited by the UNIX shell and its derivatives, does not count as a pointer either.
The term "pointer" was introduced in a revision of the IBM PL/I language, which was published in July 1966. In all earlier publications that I have ever seen the term used was "reference", not "pointer".
Two high-level programming languages were the first to introduce explicit references (i.e. pointers). One was Euler, published in January 1966 by Niklaus Wirth and Helmut Weber. However, Hoare knew about this language before its publication, so he mentioned it in his paper from November 1965, where he discussed the use of references (i.e. pointers).
The other language was the language CPL, which had references already in August 1963. The difference between how CPL used references and how Euler used references is that in Euler pointer dereferencing was explicit, like later in Pascal or in C. On the other hand, in CPL (the ancestor of BCPL), dereferencing a pointer was implicit, so you had to use a special kind of assignment to assign a new value to a pointer, instead of assigning to the variable pointed by the pointer.
Looking now at Wikipedia, I see a claim that Bud Lawson invented pointers in 1964, but there is no information about where he published this or about which high-level programming language actually used his pointers.
If the pointers of Bud Lawson were of the kind with explicit dereferencing, they would precede by a year the Euler language.
On the other hand, if his pointers were with implicit dereferencing, then they came a year after the British programming language CPL.
Therefore, in the best case for Bud Lawson, he could have invented an explicit dereferencing operator, like the "*" of C, though this would not have been a great invention, because dereferencing operators were already used in assembly languages; they were missing only in high-level languages.
However, the use of references a.k.a. pointers in a high-level programming language has already been published in August 1963, in the article "The main features of CPL", by Barron, Buxton, Hartley, Nixon and Strachey.
Until I see any evidence for this, I consider that any claim about Bud Lawson inventing pointers is wrong. He might have invented pointers in his head, but if he did not publish this and it was not used in a real high-level programming language, whatever he invented is irrelevant.
I see on the Internet a claim that he might have been connected with the pointers of IBM PL/I.
This claim appears to be contradicted by the evidence. If Bud Lawson had invented pointers in 1964, then the preliminary version of PL/I would have had them.
In reality, the December 1964 version of PL/I did not have pointers. Moreover, the first PL/I version used in production, from the middle of 1965, also did not have pointers.
The first PL/I version that added pointers was introduced only in July 1966, long after the widely-known publications of Hoare and of Wirth about pointers. That PL/I version also added other features proposed by Hoare, so there is no doubt that the changes in the language were prompted by the prior publications.
So I think that the claim that Bud Lawson invented pointers is certainly wrong. He might have invented something related to pointers, but not in 1964.
PL/I had one original element: pointer dereferencing was indicated by replacing "." with "->". This was later incorporated into the language C, to compensate for its mistake of making "*" a prefix operator.
The "->" operator is the only invention of PL/I related to pointers, so it is a thing that was invented by an IBM employee, but I am not aware of any information about who that may have been. In any case, this was not invented in 1964, but in 1966.
He (Lawson) can only point to his paper from 1967 and the fact that in 1964 he was asked to join PL/I team due to his earlier published works on linked lists.
PS
He didn't mention it in his recollections [1], but the 1978 paper The Early History and Characteristics of PL/I [2] claims that a paper was produced in October 1965.
As I have said, PL/I did not have pointers in the beginning, i.e. from 1964 until July 1966.
So I think that this claim about Lawson having invented pointers comes from a misunderstanding. It is likely that Lawson was the lead developer for adding pointers to PL/I.
Someone heard this, and because the first version of PL/I was developed during 1964, a false conclusion was drawn, i.e. that Lawson had invented pointers in 1964, before Euler.
That conclusion was wrong, because pointers have been added to PL/I much later, not during the initial development.
The paper written by Lawson about using pointers was sent for publication in August 1966, i.e. one month after the official introduction of a new PL/I version with pointers.
Given that the PL/I extension with pointers was implemented a short time after a public debate about how programming languages such as ALGOL should be improved, with most proposals integrated and analyzed in the papers published by Hoare, Wirth and a few others, I believe it is rather certain that the impulse to add pointers to PL/I was not internal to IBM, but was caused directly or indirectly by watching this debate.
The paper written by Radin, linked by you, confirms that IBM added pointers to PL/I after an important customer, General Motors, requested this, presumably in Q4 1965.
Because Lawson was the one who reported the end result of this work of extending PL/I with pointers, I assume that he was the lead developer of this project, which must have happened mostly during the first half of 1966.
As I have said, the only original element of the PL/I pointers was the "->" indirect addressing operator. Unlike in C, no other operator was needed, because PL/I followed the recommendation of Hoare, which was to use pointers only as structure members, where they are useful for implementing linked data structures, and not also as independent variables.
Therefore it seems likely that Harold (Bud) Lawson was the one who invented the "->" operator.
However, he clearly had not invented pointers, as those (under the alternative name of "references") had been used earlier in the languages CPL and Euler, and the implementation of pointers in PL/I done by Lawson closely followed the recommendations made by Hoare in his "Record Handling" paper.
Same here. And I have a friend who keeps his small iPhone because they stopped building smaller phones, too. There is a demand, maybe not that big.
For me, I want to be able to operate the phone with one hand, and the large screen makes it difficult to reach all the spots on the screen even with large hands. I do operate my Fairphone 5 with one hand, but it is super awkward and at some point, the phone will fall into a gully because I cannot hold it tight while navigating.
And I wouldn't mind 2mm more thickness if this means the cameras are flush with the back and the battery is larger.
Whenever small phones come up, I'm reminded of the stats: the iPhone minis were a small proportion of iPhone sales but still, by themselves, outsold most manufacturers.
I was in the same boat and literally this week bought a Pixel 8. It's a 2 year old phone but with the extended support period that's no longer a problem, and being old means you can get it new for about €300 or refurbished for even less.
The other option is the Samsung S2x line, which you can apply the same strategy to.
Thanks! MBCompass will stay fully FOSS and free. Donations are extremely rare (tbh, I've not received a single one), especially from the FOSS Android community, but they're still very helpful for long-term sustainability (given Google's nonsensical monthly Play policies) and greatly appreciated, especially for users new to open source.
Not true anymore! \o/ I don't use Android anymore, but I agree a lot with the principles you've shared here, so thank you for sharing them. I also know how difficult it is to get the first donations; after you get the first, it's much easier to get the later ones. So best of luck, and I hope you'll remain steadfast with your principles when it matters! :)
Thank you, that really means a lot. Consistency has been important to me, whether it’s shipping regular releases with real improvements (https://github.com/CompassMB/MBCompass/releases) or writing about Android development and FOSS alongside the project. I really appreciate the encouragement and support.
You might add Bitcoin, Lightning or Monero to your donations page. Would've gladly dropped you a few bucks but I don't use any of the services you're offering.
Thanks for the suggestion, that’s a fair point. I currently rely on a couple of mainstream platforms mainly to keep things simple, but I do see the value in more open and permissionless options like Bitcoin/Lightning or Monero.
I’ll definitely consider adding at least one of them going forward. Really appreciate the willingness to support.
> But what additionally raised red flags was the presence of tcpdump and aircrack - tools commonly used for network packet analysis and wireless security testing. While these are useful for debugging and development, they are also hacking tools that can be dangerously exploited.
Must be another AI slop article. Stop feeding your writings into GPT & co to turn into extra long nonsense.
systemd is so resource hungry that i'm sure they removed it to reduce the RAM bill. Apt... why install apt if the distro has a different means of updating?
2. While these are useful for debugging and development, they are also hacking tools that can be dangerously exploited.
This is purely fear mongering. Even the shell could be a "hacking tool that can be dangerously exploited". Let's remove the shell too.
There are some legitimate complaints in the article, like the use of the same key on all installs. The rest looks more like fear mongering and security theater.
Including the microphone. What were they supposed to do, desolder it manually and add $10 to the price of each device?
I don't see the article complaining that a PiKVM has so many unused peripherals when used as a KVM. To go in the spirit of item #2, the usb ports could be used as "dangerous hacking tools" so you should desolder your usb ports from a Pi used as a KVM, right?
apt is a package manager. It's only relevant if the system uses it to manage its packages. Red Hat based distributions, for example, don't use apt. Embedded devices typically don't manage packages on an individual basis, rather updating the entire distribution via "firmware updates".
Oh yes I know they do in this post, I meant more generally. Even myself I often wish I had a need to use a lower-level, cooler language, but the pragmatic side of me just can't justify it.
It is if the script is written badly, gets truncated while it's being downloaded, and fails to account for this possibility.
Look into tailscale's installation script, they wrapped everything into a function which is called in the last line — you either download and execute every line, or it does nothing.
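The pattern looks roughly like this (my sketch, not Tailscale's actual script): the function body is only parsed, never executed, until the final line calls it, so a download truncated anywhere earlier runs nothing at all.

```shell
#!/bin/sh
# Everything the installer does lives inside main().
main() {
    echo "downloading package..."
    echo "installing..."
}

# Only reached if the whole script arrived intact.
main "$@"
```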
This "what if it gets truncated in the middle of the download, and the half-run script does something really bad" objection gets brought up every time "curl | bash" is mentioned, and it always feels like "what if a cosmic ray flips a bit in your memory which makes the kernel erase your hard drive". Like, yes, it could happen in the same way getting killed by a falling asteroid could happen, but I'm not losing sleep over it.
Just living far from major datacenters is enough. I get truncated downloads pretty regularly, maybe a couple times a month or so. The network isn't really all that reliable when you consistently use it across the globe.
It usually happens on large files though, due to simple statistics, but given enough users, not hard to imagine it happening with a small script...
That's quite uncommon. Typically your distribution checks that the downloaded source/binary has the correct checksum, and an experienced maintainer has checked the (sandboxed) installation. Here someone puts an arbitrary script online that runs with your user's permissions, and you hope that the web page is not hijacked and that some arbitrary dev knows how to write bash scripts.
Although this article does state that bind's "configuration files and options require careful attention to detail".
So, maybe it's not appropriate for the modern hype-cycle s/w development model?
In general, I don't think I'm disagreeing with you, so I'm not sure what message the reply is intended to convey.
Technitium seems like another one of those: "My weekend hobby project was to reinvent fire, and the wheel" sort of things, that seem popular on the HN feed.
My favorite feature of bind is "split views". This allows the same service to provide DNS on the local LAN, as well as authoritative DNS to the internet.
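A split-view setup looks roughly like this (zone names, ACL ranges and file paths are made up for illustration): internal clients get the LAN records, everyone else gets the authoritative public zone.

```
acl "lan" { 192.168.1.0/24; localhost; };

view "internal" {
    match-clients { "lan"; };
    zone "example.com" {
        type master;
        file "zones/example.com.internal";
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "zones/example.com.external";
    };
};
```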
I am a fan of Technitium, because I like to build, and I built two plugins for it to fit my use case. But at work we use Windows DNS and Bind in parallel, so this is also a hobby of mine. The hook for me is that it is built with dotnet, and I have experience in that stack. Other features are secondary, actually.
I am curious though, what would TDNS do so that you can replace BIND with TDNS in your homelab/workplace or wherever it is used? I genuinely ask for it so that I can help the original developer with some PRs.
Are you kidding? Bind has been the de facto standard for DNS servers for ages but it's just a badly engineered piece of software and had braindead vulnerabilities for decades:
Already 20 years ago it was common knowledge to never use software that Paul Vixie had touched (bind, vixie-cron, sendmail ...) and we used alternatives such as djbdns. Good old times...
Bold statement just one month after the last cache poisoning vulnerability. Bind is the Microsoft Windows of DNS servers - a lot of users and bugs nonetheless the go-to for many admins because that's what they are most familiar with. And similar to Windows, the internet mostly relies on others - none of the big companies (Meta, Cloudflare, Google, MS, Amazon, Netflix, Twitter...) use bind and neither do most hobbyists. It's just for the plethora of mid-sized companies with unmotivated admins.