
C's string handling is so abominably terrible that sometimes all people really need is "C with std::string".

Oh, and smart pointers too.

And hash maps.

Vectors too while we're at it.

I think that's it.


When I developed D, a major priority was string handling. I was inspired by Basic, which had very straightforward, natural strings. The goal was to be as good as Basic strings.

And it wasn't hard to achieve. The idea was to use length-delimited strings rather than 0-terminated ones. This meant that slices of strings are themselves strings, which is a superpower. No more did one have to constantly allocate memory for a slice, and then keep track of that memory.

Length-delimited strings also greatly sped up string manipulation: one no longer had to scan a string to find its length. This is a big deal for memory caching.

Static strings are length delimited too, but also have a 0 at the end, which makes it easy to pass string literals to C functions like printf. And, of course, you can append a 0 to a string anytime.
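
A minimal C sketch of the idea, with illustrative names (this is not D's actual runtime): a string is just a pointer plus a length, so taking a slice allocates nothing.

    #include <stdio.h>
    #include <string.h>

    /* A length-delimited string: pointer + length, like a D slice. */
    typedef struct { const char *ptr; size_t len; } str;

    /* Slicing is pure pointer arithmetic -- no allocation, no copying. */
    str slice(str s, size_t start, size_t end) {
        return (str){ s.ptr + start, end - start };
    }

    int main(void) {
        /* The literal still carries a trailing 0, so it can be passed to C APIs too. */
        str hello = { "hello, world", strlen("hello, world") };
        str world = slice(hello, 7, 12);
        printf("%.*s\n", (int)world.len, world.ptr); /* prints "world" */
        return 0;
    }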


Just want to off-topic-nerd-out for a second and thank you for Empire.

You're welcome!

One of the fun things about Empire is one isn't out to save humanity, but to conquer! Hahahaha.

BTW, one of my friends is using ClodCode to generate an Empire clone by feeding it the manual. Lots of fun!


I agree on the former two (std::string and smart pointers) because they can't be nicely implemented without some help from the language itself.

The latter two (hash maps and vectors), though, are just compound data types that can be built on top of standard C. All it would take is agreeing on a new common library, more modern than the one designed in the '70s.


I think a vec is important for the same reason a string is: being able to properly get the length, and having standardized ways to push/pop that don't require manual bounds checking and calls to realloc. (There's a sketch of what I mean below.)

Hash maps are mostly only important because everyone ought to standardize on a way of hashing keys.

But I suppose they can both be “bring your own”… to me it’s more that these types are so fundamental and so “table stakes” that having one base implementation of them guaranteed by the language’s standard lib is important.
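
Something like this is what I have in mind -- a minimal, illustrative int vector (a real standard-library version would be generic and guard against size overflow):

    #include <stdlib.h>

    /* A growable vector: length + capacity + storage. */
    typedef struct { int *data; size_t len, cap; } vec;

    /* Push grows the buffer as needed, so callers never touch realloc. */
    int vec_push(vec *v, int x) {
        if (v->len == v->cap) {
            size_t cap = v->cap ? v->cap * 2 : 8;
            int *p = realloc(v->data, cap * sizeof *p);
            if (!p) return -1;
            v->data = p;
            v->cap = cap;
        }
        v->data[v->len++] = x;
        return 0;
    }

    /* Pop is bounds-checked: 0 on success, -1 if the vector is empty. */
    int vec_pop(vec *v, int *out) {
        if (v->len == 0) return -1;
        *out = v->data[--v->len];
        return 0;
    }

    /* usage: vec v = {0}; vec_push(&v, 42); ... free(v.data); */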


why not std::string?

You can surely create a std::string-like type in C, call it "newstring", and write functions that accept and return newstrings, and re-implement the whole standard library to work with newstrings, from printf() onwards. But you'll never have the comfort of newstring literals. The nice syntax with quotes is tied to zero-terminated strings. Of course you can litter your code with preprocessor macros, but it's inelegant and brittle.
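
For illustration, the macro flavor of that workaround, assuming a hypothetical newstring of length + pointer -- it works, but every single literal needs the wrapper:

    #include <stddef.h>

    /* Hypothetical newstring: length + data pointer. */
    typedef struct { size_t len; const char *ptr; } newstring;

    /* The preprocessor workaround: sizeof counts the trailing 0, hence
       the -1. Only valid on actual literals, not on char pointers. */
    #define NS(lit) ((newstring){ sizeof(lit) - 1, (lit) })

    /* usage: newstring greeting = NS("hello"); */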


If only WG14 would add something similar to C.

Yes, SDS exists; however, vocabulary types are quite relevant for adoption at scale.


It's a class, so it doesn't work in C.

Sure, but you can have a similar string abstraction in C. What would you miss? The overloaded operators?

Automatic memory accounting — construct/copy/destruct. You can't abstract these away in C: you always have to call i_copied_the_string(&string) after copying the string, and you always have to call the_string_is_out_of_scope_now(&string) just before it goes out of scope.
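
Concretely, with a hypothetical C string struct it ends up looking something like this (the helper names just mirror the ones above; they are not a real API):

    #include <stddef.h>

    typedef struct { char *ptr; size_t len; } string;

    /* Hypothetical manual lifetime hooks, as described above. */
    void i_copied_the_string(string *s);
    void the_string_is_out_of_scope_now(string *s);

    void demo(string s) {
        string t = s;                        /* shallow struct copy... */
        i_copied_the_string(&t);             /* ...must be "blessed" by hand */
        /* ... use t ... */
        the_string_is_out_of_scope_now(&t);  /* forget this on any exit path and you leak */
    }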

This seems orthogonal to std::string. People who pick C do not want automatic memory management, but might want better strings.

Automatic memory management is literally what makes them better.

For many string operations, such as appending, inserting, overwriting, etc., the memory management can be made automatic in C as well, and I think this is the main advantage. Only the automatic free at scope end does not work (without extensions).
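
A sketch of what that can look like, with made-up names: the append owns the realloc, so the caller never manages capacity; only the final free stays manual.

    #include <stdlib.h>
    #include <string.h>

    typedef struct { char *ptr; size_t len, cap; } string;

    /* Appending manages memory internally -- the caller never sees realloc. */
    int string_append(string *s, const char *data, size_t n) {
        if (s->len + n + 1 > s->cap) {
            size_t cap = (s->len + n + 1) * 2;
            char *p = realloc(s->ptr, cap);
            if (!p) return -1;
            s->ptr = p;
            s->cap = cap;
        }
        memcpy(s->ptr + s->len, data, n);
        s->len += n;
        s->ptr[s->len] = '\0'; /* keep a trailing 0 for C interop */
        return 0;
    }

    /* Only the end of life is manual: free(s.ptr) at scope exit. */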

Yeah, WG14 has had enough time to provide safer alternatives for strings and arrays in C, but that isn't a priority, apparently.

And constructors and destructors to be able to use those vectors and hash maps properly without worrying about memory leaks.

And const references.

And lambdas.


Add concurrency and you've more or less come up with the same list C's own creator came up with when he started working on a new language.

Nit: please don’t push to my browser history every time I expand one of the sections… I had to press my browser’s back button a dozen or so times to get back out of your site.

You can also hold down the back button to get a menu of previous pages in order to skip multiple back button presses. (I still agree with your point and you might already know that. Maybe it helps someone.)

Thanks. I'll look into that. It was recommended exactly for backtracking, but I get that if you want to leave it's a whole lot of backpedaling :-)

Use history.replaceState() instead of history.pushState() and you're all good.

Thanks. It makes sense. I'll switch.

Playing music doesn’t require unlocking though, at least not from the Music app. If YouTube requires an unlock, that’s actually a setting YouTube sets in its SiriKit configuration.

For reading messages, IIRC it depends on whether you have text notification previews enabled on the lock screen (they don’t document this anywhere that I can see.) The logic is that if you block people from seeing your texts from the lock screen without unlocking your device, Siri should be blocked from reading them too.

Edit: Nope, you’re right. I just enabled notification previews for Messages on the lock screen and Siri still requires an unlock. That’s a bug. One of many, many, many Siri bugs that just sort of pile up over time.


It’s so great when the files on the navigator pane aren’t sorted, and then if you right-click sort, it rewrites half your pbxproj file and you get merge conflicts everywhere. So then nobody sorts the files because they don’t want to deal with it. Why can’t the sorting be a view thing that’s independent of the contents of the project file? Who knows.

When I used it in a team, I had to write a build step that would fail the build if the pbxproj file wasn’t sorted. (Plus a custom target that would sort it for you.) It was the only way to make sure it never got unsorted in the first place.


Sudo’s networking functionality is infuriating too, because if my system’s DNS is broken, I get to wait 60 seconds for sudo to work, during which time I can’t even ctrl+c to cancel!

(It has to do with sudoers entries having a host field, since the file is designed to be deployed to multiple servers, which may each want differing sudoers rules. It’s truly 90s-era software.)
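
For reference, the host is the second field of a sudoers rule, which is what lets a single file carry per-machine policy. (The hostnames and commands here are made up.)

    # one sudoers file deployed to every server; each rule matches per host
    alice  web01 = (root) /usr/sbin/nginx
    alice  db01  = (root) /usr/bin/psql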


Strange to say it’s not useful; I personally get a lot of use out of it. It’s mostly replaced search engines for me, which is no small thing.

Agree that society can survive without it though, but it seems a weird thing to just claim as useless.


Interesting. How did it work getting your photos off of iCloud? Does Apple give you a good way to get an archive of all of your photos? That is, the original quality photos, without manually downloading them individually? (I currently have 446 GB of photos in iCloud…)

Immich iOS app supports backing up photos directly from iCloud in original resolution, with all the EXIF data included. I had 230 GB of photos myself, and I left the phone on the charger overnight with the app running in the foreground and screen locking disabled. In the morning everything was imported.

Some people have instead set the Photos app on a Mac to download the original photos from the iCloud library and then moved the files directly into the server. I have not personally tried this method, though.


> Immich iOS app supports backing up photos directly from iCloud in original resolution

wait that is just crazy!!! Dang my dad is going to flip out when I tell him about this. He's got like 1.5 TB of photos in iCloud and has been searching for a way to get them off. And we're so close to our family storage limit that he gets mad at me when I text him pictures hahaha


There is a community-supported CLI program called immich-go that directly supports reading in iCloud and Google takeout archives, as well as local directories. It works great, and has gobs of import options to set up albums and tags. [ https://github.com/simulot/immich-go ]

I haven't seen anyone else mention it, so I will: privacy.apple.com lets you export your Apple data, similar to Google Takeout.

That's the worst service I've ever seen. It asks you the size of each zip file, and I said 50 GB at first. I couldn't download it because the connection was so unstable: no way to resume, and every 20~30 minutes it failed in the middle. Chrome, Firefox, and Safari were all the same. I tried from a GCE VM as well to rule out my own network, but it didn't help.

I had to request again with 2 GB chunks and was finally able to download the files, but only one by one. And after downloading 3~5 files, I had to log in again, since their sessions expire so frequently.

I had to do that for days, and then the download expired. Oh my god. I had to request it again. And you know what? Their file list wasn't deterministic, so I had to start downloading from the beginning. lol

I finally made it, and I swear I will never use any cloud service from Apple.


The same issue has been going on for me with just about any big download from Apple servers. Could be iCloud. Could be Xcode. Doesn’t matter. It will randomly fail mid-transfer and require manual intervention to restart. It's been this way for years.

iCloud Photos Downloader isn’t user friendly or pretty, but I finally managed to rip my entire collection without having to install any Apple software.

https://news.ycombinator.com/item?id=46578921


Sure but the US isn’t vowing to eliminate all dependencies on EU goods. (Just burning all their good will.)

It doesn’t begin at the transmitter either; in the earliest days even the camera was essentially part of the same circuit. Yes, the concept of filming a show and broadcasting the film over the air came along eventually, but before that (and even after, for live programming) the camera would scan the subject image (actors, etc.) line by line and send it down a wire to the transmitter, which would send it straight to your TV and into the electron beam.

In fact in order to show a feed of only text/logos/etc in the earlier days, they would literally just point the camera at a physical object (like letters on a paper, etc) and broadcast from the camera directly. There wasn’t really any other way to do it.


Our station had an art department that used a hot press to create text boards, which were set on an easel with a camera pointed at it. By using a black background with white text, you could merge the text camera with a camera in the studio and "superimpose" the text onto the video feed.

"And if you tell the kids that today, they won't believe it!"


It's kind of amazing the sort of hoops people needed to jump through to make e.g. the BBC-1 ident: https://www.youtube.com/watch?v=xfpEZDeVo00

It seems like imagination was more common in those days. There was no "digital" anything to lean on.

The live-action PBS idents from the early '90s were some of the best.

https://www.youtube.com/watch?v=5Ap_JRofNMs
https://www.youtube.com/watch?v=PJpiIyBkUZ4

This mini doc shows the process:

https://youtu.be/Q7iNg1dRqQI?t=167


So, it’s interesting. You know how with RAM, it’s a good idea for it to be “fully utilized”, in a sense that anything apps aren’t using should be used for file system cache? And then when apps do need it, the least-recently-used cache can be freed to make room? It’s actually similar for the file system itself!

If macOS is using 153 GB for iCloud cache, that’s only a bad thing if it doesn't give the space back automatically when your filesystem starts getting full. Otherwise, it just means you have local copies of things that live in iCloud, making the general experience faster. In that sense, you want your filesystem to be “fully utilized”. The disk viewer in macOS that shows you filesystem utilization should even differentiate this sort of cache from “real” utilization… this cache should (if everything is working right) logically be considered “free space”.

Now of course, if there are bugs where the OS isn’t giving that storage back when you need it, that all goes out the window. And yeah… bugs like these happen too damned often. But I still say, the idea is actually a good one, at least in theory.


This would be acceptable if solid state storage weren’t so susceptible to write wear, in a laptop where nothing is user serviceable.

What would the alternative be? Simply don't cache anything you get from iCloud? Because even if you delete it more eagerly, that's a write cycle.

In fact, avoiding deleting it in case the user needs it again is going to put fewer write cycles on the SSD, assuming you're going to write it to the SSD at all. The only alternative I can think of is keeping everything from iCloud in RAM, but that is a pretty insane idea. (Also, then the first thing you'd get is people complaining that iCloud eats up all their 5G data caps, etc.)


Of course, but then iCloud might want to cache a reasonable amount of data, say, the 10% the user uses the most. Seeing iCloud caches in the 100+GB arena makes no sense to me, especially if the system isn’t rapidly releasing that storage when needed.

If the ability to release the storage on-demand works correctly (and this is a big if) there’s no reason to limit to 10%. What benefit will that have? If the system works well, deleting the data eagerly accomplishes nothing.

I think the actual system uses filesystem utilization as a form of “disk pressure”: once it’s above a certain threshold (say, 90% used), it should start evicting least-recently-used data. It doesn’t wait for 100%, because it takes some nonzero amount of time to free the cache. But limiting the cache size arbitrarily doesn’t seem useful. (There's a sketch of the idea below.)

It gets more complicated when there are multiple caches (maybe some third-party apps have their own caches) and you need to prioritize who gets evicted, but it’s still the same thing in theory.

But yeah, if the system isn’t working right and cache isn’t seen as cache, or if it can’t evict it for some reason, then this all goes out the window. I’m only claiming it’s good in theory.
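
As a rough C sketch of the threshold idea above -- disk_used/disk_total/evict_lru are assumed helpers, not any real macOS API:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed helpers: query the filesystem and an LRU list of
       purgeable cache entries. Just the shape of the idea. */
    extern uint64_t disk_used(void);
    extern uint64_t disk_total(void);
    extern bool evict_lru(void); /* free one cached item; false if none left */

    /* Start evicting at 90% utilization, before the disk is actually
       full, since freeing cache takes nonzero time. */
    void relieve_disk_pressure(void) {
        while (10 * disk_used() > 9 * disk_total()) {
            if (!evict_lru())
                break;
        }
    }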

