
I've been using DeleteMe. It generally works well, with two caveats:

1. They seem to largely rely on automated or semi-automated workflows, and that sometimes breaks down. For me, they removed ~95% of the stuff, but I could still find some breadcrumbs in web searches and needed to file a couple more opt-outs manually. It might be less of a problem if your online footprint is small.

2. They target "frontend" sites rather than the actual data brokers. This cleans up search results, but doesn't necessarily remove your data from the underlying databases available to commercial and institutional users. Because the frontends come and go, it also means that if you cancel your subscription, you will probably be back to square one in 2-5 years.


> if you go RISC-V, you are free to switch CPU providers.

That's not even true within the ARM ecosystem itself. The chips from Infineon are not source-code compatible with STM, STM is not compatible with Microchip, Microchip is not compatible with TI...

The problem is that the ARM core is just a portion of the architecture. Everything on top of that - GPIO, memory interfaces, timing, etc - is vendor specific, and will stay that way for RISC-V. RISC-V is just an instruction set architecture (with some appendages), not a blueprint for a complete CPU / MCU / SoC.

Not to mention, the chips also won't be electrically-compatible. Your hardware architecture can be as daunting to redesign as the code, if not more so. There's a reason why we try to do as much as possible in software, after all...


>Everything on top of that - GPIO, memory interfaces, timing, etc - is vendor specific, and will stay that way for RISC-V.

Not as true for RISC-V, as there are efforts (some of them complete) to standardize interfaces to common peripherals.

E.g. timers, GPIOs, and watchdogs.


It feels like you're making a bad-faith argument here. You can implement 'yes' in a straightforward way in a couple lines of C, too.

  main(int argc, char** argv) {
    while (1) {
      if (argc > 1)
        for (int i = 1; i < argc; i++) printf("%s%c", argv[i], (i == argc - 1) ? '\n' : ' ');
      else puts("y");
    }
  }
The point other folks are making is that it's written differently for a reason. Maybe not a reason that's important, but at the very least, let's try to compare apples to apples.


I don't see an include, which means you're either ignoring warnings or aren't printing them in the first place. Also testing against a number instead of a boolean. Also you have a horrible hack instead of proper flag parsing. And you're also abusing brace elision just to reduce LOC. And again abusing ternary syntax for the same reason.


If you start with code golf, as you did, then this is where you end up. The only way to win is not to play?


mine is just normal idiomatic Go code. that's not true of the C code.


It might be ugly, but that is idiomatic C code. C didn't even have boolean types until C99, and even then it's an "extension" of an integer type.

You could argue about the loop itself, after all K&R specified "for(;;)", but the other commonly used (ergo idiomatic) infinite loops use precisely the same number of lines. "while(1)" is a perfectly idiomatic manner to create an infinite loop.

Likewise a void return type for main was entirely legal until C99. The BSD yes(1) I have laying around only prints the first argument, so flag parsing? What flag parsing?

Yes in nine lines of C, inclusive of preprocessor directives and whitespace.

  #include <stdio.h>
  
  int main(int argc, char **argv) {
    const char *phrase = argc > 1 ? argv[1] : "y";
    while (1) {
      printf("%s\n", phrase);
    }
    return 0;
  }


I don't see proper flag parsing, I see an argv hack.


There are no flags in yes(1) ergo there's no need for "flag parsing". yes(1) takes one optional string as input, and that's exactly what argv provides. I'm not sure what you think "flag parsing" is bringing to the table here, but checking the array of command line parameters and accessing an element is pretty far from a hack.

If it's more comfortable, you can also declare argv as an array of character pointers, e.g. char *argv[], but that won't change the line count.


It's a loss for C either way. Either it can't parse flags, or we remove that requirement, and my code goes from 9 lines to 6.


The requirement to parse flags is your own. You can remove it from your Go program. You only need to parse one string, if it exists.


[flagged]


C can parse flags and there is no requirement here for it to parse flags.


It's a loss for C either way. Either it can't parse flags, or we remove that requirement, and my code goes from 9 lines to 6.


C can obviously parse flags. See for example the plethora of C software that does so, such as many things from GNU Coreutils.

There is no requirement to parse flags for the basic functionality of yes. You implemented that yourself. You can remove your own requirement whenever you want. You don't need to parse a flag, at most you need to parse a string from the command line arguments.

I wonder at this point how you define "flag".
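
For what it's worth, a rough sketch of what flag parsing could look like here, using POSIX getopt and a hypothetical -m flag borrowed from the Go version described downthread (illustrative only, not taken from any real yes implementation, and getopt is POSIX rather than ISO C):

  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char **argv) {
    const char *msg = "y";
    int opt;
    /* hypothetical -m flag, mirroring the Go version described downthread */
    while ((opt = getopt(argc, argv, "m:")) != -1) {
      if (opt == 'm')
        msg = optarg;
      else
        return 1;        /* getopt has already printed a diagnostic */
    }
    if (optind < argc)   /* a plain positional argument also works */
      msg = argv[optind];
    while (1)
      puts(msg);
  }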


It's a loss for C either way. Either it can't parse flags, or we remove that requirement, and my code goes from 9 lines to 6.


Who cares though? We get it, you prefer golang, congrats?


I am not seeing a technical argument here against the previous points, only one against the commenter:

https://wikipedia.org/wiki/Ad_hominem


But you're also arguing in bad faith. Your Go code is shorter, okay, but it doesn't do the same thing as the GNU yes code, so what point are you trying to make? I can also link to philosophy 101 Wikipedia articles:

https://en.wikipedia.org/wiki/Straw_man


I think I have made it pretty clear already, but here it is again:

the Go code has MORE functionality (flag parsing) with LESS code. yes, it's not as fast, and yes, the executable is larger, but for many, that's a good tradeoff for the extra standard library features and the reduced LOC/code complexity. sadly, I haven't yet seen any cogent technical arguments against my points.


> the Go code has MORE functionality (flag parsing) with LESS code.

Your code does not have more functionality than GNU's yes as written. It's less code you have to write because of the flag parsing code that has already been written, and it's incompatible with GNU's yes because yours requires -m to change the message.


> Your code does not have more functionality than GNU's yes as written.

it has flag parsing


Which does not do anything functionally more than the C version that was shared by inferiorhuman.


yes it does


For an extremely simple utility like the 'yes' command, which is compiled and distributed as a binary to trillions of installations, what metric do you consider more important: size and speed, or lines of code in the source? Think about this in engineering terms; everything is a tradeoff, and it's your job to come up with the best solution.

I'm genuinely curious to hear your argument.


> I'm genuinely curious to hear your argument.

previous comments have demonstrated this not to be the case, so I will stand by my previous points. I have already made over 10 comments on this one topic, so if anyone isn't already convinced, they never will be, either because they disagree with the tradeoff, or they just have Stockholm syndrome for C.


You've demonstrated nothing and made no discernible argument to anyone. Best of luck in the job search my friend.


I think anyone who has the ability to read and comprehend text would disagree with the comment I am replying to. Best of luck in the high school level reading class my friend.


Also, take a look at OpenBSD's version of yes

https://github.com/openbsd/src/blob/master/usr.bin/yes/yes.c


more lines of code, and still doesn't have flag parsing


There are no flags to parse. Why are you adding flag parsing? This would fail a junior interview Steven.


when people resort to doxing, it shows how truly pathetic they are. I pity those people.


I mean, yes, go has proper flag parsing as part of the standard library and C doesn’t. Yes that’s going to make a line count difference but it’s also why code golf arguments are pointless.


> go has proper flag parsing as part of the standard library and C doesn’t

That's the whole point. Every single command line program needs command line parsing. Go helps me get the job done, C forces me to write my own parser, or find some third party one.


Yeah, but it’s horses for courses. The C version can be deployed in far more places and can be far faster than the Go equivalent. Which is “better” is a contextual judgement call. There’s plenty of weird architectures out there that run C and almost nothing else.


Yes takes a single string with no embellishment, and that's what C provides. There's nothing additional to parse. There are no flags, no additional options, nothing else to configure… and that's by design as there's simply no need.


It's a loss for C either way. Either it can't parse flags, or we remove that requirement, and my code goes from 9 lines to 6.


You can write a perfectly legible 4, 5, or 6 line version in C.


OK I am waiting...
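
For reference, one way such a short version might look: essentially the earlier nine-line example with the loop body collapsed (a sketch, not necessarily what the parent commenter had in mind; whether it still counts as "perfectly legible" is a matter of taste):

  #include <stdio.h>

  int main(int argc, char **argv) {
    while (1)
      puts(argc > 1 ? argv[1] : "y");
  }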


While I'm sure the FCC has an end game of actually mandating these labels, the vast majority of IoT devices are not exposed to the internet and just aren't a major attack vector in most environments. How much money and time needs to be spent to secure an RGB lightbulb or a wireless speaker?

There is approximately one class of consumer devices that I suppose fall under the IoT umbrella and that are commonly attacked: modems and wifi routers. But these generally get security support. And if you had product labels, would it change shopping behaviors in any way? "This NetGear router will get security updates for 8 years" sounds great. But then, in 10 years, you might have the same router in your closet. Will you even remember the label by then?


If the device isn't internet-connected, it's not an IoT device. That's what the I stands for.

If what you're getting at is that most networked devices sit behind a consumer firewall, and that's probably good enough -- well, I mostly agree.


Most consumer routers these days are automatically assigning global IPv6 addresses to every device on their network. The only security feature protecting them is the difficulty of (random) discoverability (no firewall rules by default). As in, you can't just scan the entire IPv6 Internet looking for insecure devices as it would take too long (e.g. thousands of years) but if you can figure out their address they're right there, ready for hacking, from anywhere in the world.
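
To put "too long" in perspective, here's a back-of-envelope sketch (my own numbers, purely illustrative): even sweeping a single /64 subnet, never mind the whole IPv6 Internet, is hopeless at any realistic probe rate.

  #include <stdio.h>

  int main(void) {
    /* back-of-envelope: time to sweep one /64 subnet at a million probes per second */
    double addresses = 18446744073709551616.0; /* 2^64 host addresses in a single /64 */
    double probes_per_second = 1e6;            /* assumed probe rate, purely illustrative */
    double seconds = addresses / probes_per_second;
    double years = seconds / (365.25 * 24 * 3600);
    printf("roughly %.0f years to sweep a single /64\n", years); /* ~585,000 years */
    return 0;
  }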

The truth is that there are always other ways to find the IPv6 address of various devices inside a home. Many of them will happily tell you if you just send out the right broadcast (e.g. zeroconf), or they connect to services on the Internet that can be spoofed or just have generally terrible security (e.g. the addresses of all devices are publicly discoverable).

Another fun way to find these devices is buying up dead domain names (e.g. because the company no longer exists) and setting up services that auto-hack the insecure devices once they can finally "phone home" again due to the malicious domain suddenly coming back online. This kind of hack works regardless of firewall rules (assuming the device is allowed to "phone home" at all).


Can you give an example of a consumer router that does not provide a default deny inbound rule for IPv6 traffic? I'm not arguing with you, I'm curious. As a network and security guy, it seems like step zero in IPv6 security to have a default deny inbound firewall rule to make up for the lack of NAT.


There was a CVE for my router which permitted some sort of traffic over IPv6 that should've been blocked. IIRC, it was some sort of malicious firmware update vector, actually. Good times.

I found out retroactively after my router had been pwned and was acting as some sort of shady DNS server. I'll never actually know the method by which it was compromised, but I made a few educated guesses.


I've never seen one that did, but I've only looked at IPv6 on Netgear and TP-Link routers. Let's try the other route: find a consumer router that both hands out IPv6 addresses and blocks inbound IPv6 traffic by default.


I disagree that it's good enough. The problem is that many of these devices regularly poll their mothership for commands and updates. We are one server compromise (feasible, possibly already done but unknown) away from millions of light bulbs or outlets or whatever turning into a botnet.


>If the device isn't internet-connected, it's not an IoT device. That's what the I stands for

In practice, the I in IoT means that the device connects to your Wi-Fi. Whether that extends to the open web or not, it's still an IoT device, even if it doesn't conform to the word "internet" in the strictest sense.


In the same way a butterfly is made out of butter.

Many devices work just fine with a local-only connection. If an IoT device does not work without internet, that is a reason not to buy it.


More like buttercream frosting is made out of butter. Sometimes people make something similar from margarine, but it isn't quite the same, and isn't as common.

But even that analogy isn't perfect, because while buttercream frosting made from margarine is usually cheaper and lower quality, from what I've seen, "IoT" devices that don't depend on remote servers or send back telemetry tend to be higher-end and more expensive.


I disagree. Regardless of the use of the word Internet, I argue IoT is a broad term describing devices not traditionally connected to networks which now are.

Just because I build a private network of cameras, power monitors, and weather sensors at my house doesn't mean those don't qualify as IoT devices.


The FCC commissioner who was here the other day explained that if companies put these labels on their products, then they will be liable for cybersecurity issues, which would be enforceable under contract law, the label being a contractual agreement.


> the vast majority of IoT devices are not exposed to the internet

IoT stands for "internet of things". I am by no means an expert in the area, but my understanding was that an IoT device is by definition connected to the Internet.


It's really no different than people who pay more than they need to for a car or a home.

It's some combination of it being a status symbol and an "I can afford it and it's fun" kind of a deal.

There is a variety of attitudes, as with fancy cars, McMansions, or other "premium" goods. Some people wax their car every week, some people let it rust.

Watch theft isn't particularly common. I have a nice watch, I wear it daily, and I don't think about it much.


It is not common at all. You have maybe several people dying every year, which is in the "hit by lightning" territory. It's very well-publicized whenever it happens, which probably helps keep the numbers low. But this is a very unusual way to die.


Ah, just wait until another Russian scientist or business person gets hit by lightning.


Suicide by lightning twice to the back of the head is on the rise in that part of the world.


It's like a few cases per season per million people eating wild shrooms.


Just as with road accidents, death is the wrong measure, as it really depends on your health, i.e. a 20-year-old forager will survive whereas an 80-year-old one would die. Many mushrooms are liver-toxic, so if you already have hepatitis (very common in Russia), you're probably more prone to die from a poisonous one.

Intoxication leading to hospitalization is a better measure.


How many millions of people go mushroom hunting?


Probably many millions? It's a pretty popular thing to do in Eastern Europe.


Given that approximately everyone has heard of it, and warnings about it, but also that I don't personally know anyone who tells me they do this, my guess is on the order of 1% of the population. Might be more, might be less.

1% of Russia would be 1.5 million (and given this thread, 412k for Ukraine).


1%?!

According to a 2014 survey (https://fom.ru/Obraz-zhizni/11711), 75% of Russians have gone mushroom picking at some point, and 40% had done it in the year before the survey. 8% personally know people who suffered from mushroom poisoning.

This matches my expectations: I'm from a neighbouring country, Belarus, and basically everyone around me has gone mushroom picking.

That's not just a culinary experience, but a way to relax: go to the forest, disconnect from the world, unwind. (People also pick blueberries and lingonberries for the same reasons.)

Oh, and my classmate died from mushroom poisoning.


Wow, OK, I'm surprised by those numbers.


Plenty.


People in Germany and France buy about as much cheap junk and have about as much attachment to material belongings. Which I don't blame them for, but it's remarkable that in both countries, it's fashionable to pooh-pooh the "dumb Americans" for that.


Per capita consumer spending, Germany 2021: $21,704

Per capita consumer spending, USA 2021: $47,915


Per capita GDP, Germany 2023: $51,383

Per capita GDP, United States 2023: $80,034

If Germany were a US state, it would rank somewhere between Mississippi and West Virginia.

https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nomi...

https://en.wikipedia.org/wiki/List_of_U.S._states_and_territ...

(slight edit for formatting)


Maybe if what you're measuring leads you to believe that Germany is equivalent to Mississippi, you're measuring the wrong thing.


[flagged]


In 2022, 184,753 German citizens returned to Germany, while 268,167 left.

Departures have outnumbered arrivals every year since 2005.

Source: https://www.destatis.de/EN/Themes/Society-Environment/Popula...


"When your data doesn't align with the anecdotes, suspect the data" - paraphrase of one of the world's richest people, possibly being correct for once.


It would be interesting to see the median values of these, adjusted for cost of living.


Have you ever been to France or Germany? That's very obviously not true.


Facilities and real estate are among the most significant expenses for any tech company in a prime location such as Seattle or SFBA. If remote work turned out to work flawlessly, there would be a huge financial incentive to shed all that.

I find it frustrating that we always need to come up with some sinister explanation for any decision we don't like. It's always the corporate profit motive, and when it doesn't fit, we try to invent some personal profit motive for execs to act against the best interest of the company, without a shred of evidence.

I know several execs involved in RTO decisions for a public company. They have sincere convictions and some arguably flawed data to back it all. I don't agree with it, but this way of thinking that everybody up to and including my pay level is a good and smart person, and everybody above is clueless and evil... it's just juvenile.


I Can Eat Glass was definitely a thing, but in the late 1990s or perhaps very early 2000s. Sites such as knowyourmeme.com don't go nearly as far back.


The website was first picked up on the Internet Archive in 1999. I had no trouble finding memes from 1994-1996 on KYM, such as "Dancing Baby", "Ate My Balls", and "Goodtimes Virus".

In fact, clicking that "year:1994" turns up 108 results, so they're pretty comprehensive, although they only cover Web-based memes, usually with images, and not Usenet.


The Everything2 entry is dated April 12 2000.


High-performance STM32 chips have been in short supply since 2019, with backorder times in the range of years. There's a reason why you don't see a whole lot of hobby boards using them today.

So, as a hobbyist, I wouldn't be getting my hopes up for 2024...


>So, as a hobbyist, I wouldn't be getting my hopes up for 2024...

Meh, if you're a hobbyist you can migrate to any other ARM board you can find on the cheap in your area. No need to pigeonhole yourself in the STM32 ecosystem.


I don't have any insider knowledge, but supposedly supplies are easing now and should be back to normal by 2024.

