I've been using DeleteMe. It generally works well, with two caveats:
1. They seem to largely rely on automated or semi-automated workflows, and that sometimes breaks down. For me, they removed ~95% of the stuff, but I could still find some breadcrumbs in web searches and needed to file a couple more opt-outs manually. It might be less of a problem if your online footprint is small.
2. They target "frontend" sites, rather than the actual data brokers. This cleans up search results, but doesn't necessarily remove from the commercial databases that are available to commercial and institutional users. Because the frontends come and go, it also means that if you cancel your subscription, you will probably go back to square one in 2-5 years.
> if you go RISC-V, you are free to switch CPU providers.
That's not even true within the ARM ecosystem itself. The chips from Infineon are not source-code compatible with STM, STM is not compatible with Microchip, Microchip is not compatible with TI...
The problem is that the ARM core is just a portion of the architecture. Everything on top of that - GPIO, memory interfaces, timing, etc - is vendor specific, and will stay that way for RISC-V. RISC-V is just an instruction set architecture (with some appendages), not a blueprint for a complete CPU / MCU / SoC.
Not to mention, the chips also won't be electrically-compatible. Your hardware architecture can be as daunting to redesign as the code, if not more so. There's a reason why we try to do as much as possible in software, after all...
It feels like you're making a bad-faith argument here. You can implement 'yes' in a straightforward way in a couple lines of C, too.
main(int argc, char** argv) {
    while (1) {
        if (argc > 1)
            for (int i = 1; i < argc; i++) printf("%s%c", argv[i], (i == argc - 1) ? '\n' : ' ');
        else puts("y");
    }
}
The point other folks are making is that it's written differently for a reason. Maybe not a reason that's important, but at the very least, let's try to compare apples to apples.
I don't see an #include, which means you're either ignoring warnings or not printing them in the first place. Also testing against a number instead of a boolean. Also you have a horrible hack instead of proper flag parsing. And you're also abusing brace elision just to reduce LOC. And again abusing the ternary syntax for the same reason.
It might be ugly, but that is idiomatic C code. C didn't even have boolean types until C99, and even then it's an "extension" of an integer type.
You could argue about the loop itself, after all K&R specified "for(;;)", but the other commonly used (ergo idiomatic) infinite loops use precisely the same number of lines.
"while(1)" is a perfectly idiomatic manner to create an infinite loop.
Likewise a void return type for main was entirely legal until C99. The BSD yes(1) I've got laying around only prints the first argument, so flag parsing? What flag parsing?
Yes in nine lines of C inclusive of preprocessor macro invocations and white space.
There are no flags in yes(1) ergo there's no need for "flag parsing". yes(1) takes one optional string as input, and that's exactly what argv provides. I'm not sure what you think "flag parsing" is bringing to the table here, but checking the array of command line parameters and accessing an element is pretty far from a hack.
If it's more comfortable, you can also declare argv as an array of character pointers, e.g. char *argv[], but that won't change the line count.
C can obviously parse flags. See for example the plethora of C software that does so, such as many things from GNU Coreutils.
There is no requirement to parse flags for the basic functionality of yes. You implemented that yourself. You can remove your own requirement whenever you want. You don't need to parse a flag, at most you need to parse a string from the command line arguments.
But you're also arguing in bad faith. Your go code is shorter, okay, but it doesn't do the same thing as the GNU yes code, so what point are you trying to make? I can also link to philosophy 101 wikipedia articles:
I think I have made it pretty clear already, but here it is again:
the Go code has MORE functionality (flag parsing) with LESS code. Yes, it's not as fast, and yes, the executable is larger, but for many, that's a good tradeoff for the extra standard library features and the reduced LOC/code complexity. Sadly, as of yet, I haven't seen any cogent technical arguments against my points.
> the Go code has MORE functionality (flag parsing) with LESS code.
Your code does not have more functionality than GNU's yes as written. It's less code you have to write because of the flag parsing code that has already been written, and it's incompatible with GNU's yes because yours requires -m to change the message.
For an extremely simple utility like the 'yes' command that is compiled and distributed as a binary to trillions of installations what metric do you consider more important, size and speed? Or lines of code in the source? Think about this in engineering terms, everything is a tradeoff and it's your job to come up with the best solution.
Previous comments have demonstrated this not to be the case, so I will stand by my previous points. I have already made over 10 comments on this one topic, so if anyone isn't already convinced, they never will be, either because they disagree with the tradeoff or because they just have Stockholm syndrome for C.
I think anyone who has the ability to read and comprehend text would disagree with the comment I am replying to. Best of luck in the high school level reading class my friend.
I mean, yes, go has proper flag parsing as part of the standard library and C doesn’t. Yes that’s going to make a line count difference but it’s also why code golf arguments are pointless.
> go has proper flag parsing as part of the standard library and C doesn’t
That's the whole point. Every single command line program needs command line parsing. Go helps me get the job done, C forces me to write my own parser, or find some third party one.
Yeah, but it’s horses for courses. The C version can be deployed in far more places and can be far faster than the Go equivalent. Which is “better” is a contextual judgement call. There’s plenty of weird architectures out there that run C and almost nothing else.
Yes takes a single string with no embellishment, and that's what C provides. There's nothing additional to parse. There are no flags, no additional options, nothing else to configure… and that's by design as there's simply no need.
While I'm sure the FCC has an end game of actually mandating these labels, the vast majority of IoT devices are not exposed to the internet and just aren't a major attack vector in most environments. How much money and time needs to be spent to secure an RGB lightbulb or a wireless speaker?
There is approximately one class of consumer devices that I suppose fall under the IoT umbrella and that are commonly attacked: modems and wifi routers. But these generally get security support. And if you had product labels, would it change shopping behaviors in any way? "This NetGear router will get security updates for 8 years" sounds great. But then, in 10 years, you might have the same router in your closet. Will you even remember the label by then?
Most consumer routers these days are automatically assigning global IPv6 addresses to every device on their network. The only security feature protecting them is the difficulty of (random) discoverability (no firewall rules by default). As in, you can't just scan the entire IPv6 Internet looking for insecure devices as it would take too long (e.g. thousands of years) but if you can figure out their address they're right there, ready for hacking, from anywhere in the world.
The truth is that there's always other ways to find the IPv6 address of various devices inside a home. Many of them will happily tell you if you just send out the right broadcast (e.g. zeroconf) or they connect to services on the Internet that can be spoofed or just have generally terrible security (e.g. the addresses of all devices are publicly discoverable).
Another fun way to find these devices is buying up dead domain names (e.g. because the company no longer exists) and setting up services that auto-hack the insecure devices once they can finally "phone home" again due to the malicious domain suddenly coming back online. This kind of hack works regardless of firewall rules (assuming the device is allowed to "phone home" at all).
Can you give an example of a consumer router that does not provide a default deny inbound rule for IPv6 traffic? I'm not arguing with you, I'm curious. As a network and security guy, it seems like step zero in IPv6 security to have a default deny inbound firewall rule to make up for the lack of NAT.
There was a CVE for my router which permitted some sort of traffic over IPv6 that should've been blocked. IIRC, it was some sort of malicious firmware update vector, actually. Good times.
I found out retroactively after my router had been pwned and was acting as some sort of shady DNS server. I'll never actually know the method by which it was compromised, but I made a few educated guesses.
I've never seen one that did but I've only looked at IPv6 on Netgear and TP-LINK routers. Let's try the other route: Find a consumer router that both hands out IPv6 addresses and blocks inbound IPv6 traffic by default.
Disagreeing on this being good enough. The problem is that many of these devices regularly poll their mothership for commands and updates. We are one server compromise away (entirely feasible, possibly already done without anyone knowing) from millions of light bulbs or outlets or whatever turning into a botnet.
>If the device isn't internet-connected, it's not an IoT device. That's what the I stands for
In practice, the I in IoT means that the device connects to your Wi-Fi. Whether that extends to the open web or not, it's still an IoT device, even if it doesn't conform to the word "internet" in the strictest sense.
More like buttercream frosting is made out of butter. Sometimes people make something similar from margarine, but it isn't quite the same, and isn't as common.
But even that analogy isn't perfect, because while buttercream frosting made from margarine is usually cheaper and lower quality, from what I've seen, "IoT" devices that don't depend on remote servers or send back telemetry tend to be higher end and more expensive.
I disagree. Regardless of the use of the word Internet, I argue IoT is a broad term describing devices not traditionally connected to networks which now are.
Just because I build a private network of cameras, power monitors, and weather sensors at my house doesn't mean those don't qualify as IoT devices.
The FCC commissioner that was here the other day explained that if companies put these labels on their products then they will be liable for cyber security issues which would be enforceable under contract law. The label being a contractual agreement.
> the vast majority of IoT devices are not exposed to the internet
IoT stands for "internet of things". I am by no means an expert in the area, but my understanding was that an IoT device is by definition connected to the Internet.
It's really no different than people who pay more than they need to for a car or a home.
It's some combination of it being a status symbol and an "I can afford it and it's fun" kind of a deal.
There is a variety of attitudes, as with fancy cars, McMansions, or other "premium" goods. Some people wax their car every week, some people let it rust.
Watch theft isn't particularly common. I have a nice watch, I wear it daily, and I don't think about it much.
It is not common at all. You have maybe several people dying every year, which is in the "hit by lightning" territory. It's very well-publicized whenever it happens, which probably helps keep the numbers low. But this is a very unusual way to die.
Just as with road accidents, death is the wrong measure, as it really depends on your health, i.e. a 20-year-old forager will survive where an 80-year-old one would die. Many mushrooms are liver-toxic, so if you already have hepatitis (very common in Russia), you're probably more prone to die from a poisonous one.
Intoxication leading to hospitalization is a better measure.
Given approximately everyone's heard of it, and warnings about it, but also that I don't personally know anyone who tells me that they personally do this, my guess is in the order of 1% of the population. Might be more, might be less.
1% of Russia would be 1.5 million (and given this thread, 412k for Ukraine).
According to a 2014 survey ( https://fom.ru/Obraz-zhizni/11711 ), 75% of Russians have gone mushroom picking at some point, and 40% had done it in the year before the survey. 8% personally know people who suffered from mushroom poisoning.
This corresponds to my expectations: I'm from a neighbouring country, Belarus, and basically everyone around me has been picking mushrooms.
It's not just a culinary experience, but a way to relax: go to the forest, disconnect from the world, unwind. (People also pick blueberries and lingonberries for the same reasons.)
Oh, and my classmate died from mushroom poisoning.
People in Germany and France buy about as much cheap junk and have about as much attachment to material belongings. Which I don't blame them for, but it's remarkable that in both countries, it's fashionable to pooh-pooh the "dumb Americans" for that.
"When your data doesn't align with the anecdotes, suspect the data" - paraphrase of one of the world's richest people, possibly being correct for once.
Facilities and real estate are among the most significant expenses for any tech company in a prime location such as Seattle or SFBA. If remote work turned out to work flawlessly, there would be a huge financial incentive to shed all that.
I find it frustrating that we always need to come up with some sinister explanation for any decision we don't like. It's always the corporate profit motive, and when it doesn't fit, we try to invent some personal profit motive for execs to act against the best interest of the company, without a shred of evidence.
I know several execs involved in RTO decisions for a public company. They have sincere convictions and some arguably flawed data to back it all. I don't agree with it, but this way of thinking that everybody up to and including my pay level is a good and smart person, and everybody above is clueless and evil... it's just juvenile.
The website was first picked up on the Internet Archive in 1999. I had no trouble finding memes from 1994-1996 on KYM, such as "Dancing Baby", "Ate My Balls", and "Goodtimes Virus".
In fact, clicking that "year:1994" turns up 108 results, so they're pretty comprehensive, although they only cover Web-based memes, usually with images, and not Usenet.
High-performance STM32 chips have been in short supply since 2019, with backorder times in the range of years. There's a reason why you don't see a whole lot of hobby boards using them today.
So, as a hobbyist, I wouldn't be getting my hopes up for 2024...
>So, as a hobbyist, I wouldn't be getting my hopes up for 2024...
Meh, if you're a hobbyist you can migrate to any other ARM board you can find on the cheap in your area. No need to pigeonhole yourself in the STM32 ecosystem.