My internet providers (both home Wi-Fi and cellular) do this. The problem with unlimited-but-throttled data is that the throttled speed is too slow to be usable. I am sometimes unable to even open the carrier's own app to pay for a recharge: either the app just doesn't open, or the transaction in the payments app fails.
I recently switched to Linux from Windows. The only reason I was sticking with Windows was that Hoyoverse refuses to support Linux. I finally decided I needed a break from them anyway and took the plunge.
First, I tried to install Fedora Atomic with the COSMIC desktop. It kind of worked, but I could not get it to work with my dock and external monitors at all. Now that I am used to that setup, I can't go back.
Not wanting to spend time figuring it out, I just installed Ubuntu instead. Thankfully, that worked out, though it's not perfect. Every time I turn on my laptop, I need to spend 10-15 minutes turning the monitors on and off until Ubuntu recognises them correctly and actually sends DisplayPort output (it shows the monitor in settings and I can open windows on it, but the monitor doesn't actually display anything; other times, it reads the monitor as some NVIDIA device with the lowest resolution).
I tried to install Genshin on Ubuntu anyway. I couldn't get it to work via Wine/Lutris. VirtualBox doesn't support GPU passthrough, so I tried virt-manager instead. The setup was too hard, and it didn't work anyway. I gave up on Hoyo at this point and installed Steam instead.
Honestly, Ubuntu is rough, and Linux as a whole is rough. But I would still pick this over dealing with Windows any longer.
The trick with Linux is being selective when you buy hardware. Getting things to work the first time is hit or miss, but once they work, they tend to keep working without too many surprises. For laptops, that means ThinkPads.
The dock and monitors aren't actually mine; I got them from work. The work laptop runs Windows 11, so the hardware is only tested against Windows. I will buy my own gear when I have to return these, and then I will make sure it all works with Linux.
What distro/WM/DE is good with external monitors, in your opinion? After going through the comments on some threads, it feels like external displays are a common pain point across all Linux systems.
Yes, external displays can break in weird ways. A common annoyance used to be that windows and applications wouldn't go back to where you left them after a suspend. There are likely other paper cuts in areas like variable refresh rate, colour management, etc. I think the biggest issue with external displays, screen tearing, is solved now thanks to Wayland. As for hardware and distros: stick to Intel or AMD iGPUs, especially in laptops. Both GNOME and KDE are pretty good these days, and so are Ubuntu and Fedora. Distros aren't that different anymore; the differences mostly boil down to release cadence.
Check out the launchers here for Hoyo stuff. I haven't tried the Genshin one, but ZZZ worked nearly out of the box (I had to change the Wine version it was using, IIRC): https://github.com/an-anime-team
Yeah. Certainly felt like that. On the other hand, the content does seem good. It definitely wasn't slop, even if I can't judge how useful it really was (in terms of giving a solution).
You don't. Assertions are assumptions. You don't explicitly write recovery paths for individual assumptions being wrong. Even if you wanted to, you probably wouldn't have a sensible recovery in the general case (what will you do when the enum that had 3 options suddenly comes in with a value of 1000?).
I don't think any C programmer (in C, assert() is effectively Rust's debug_assert!(), and there is no always-enabled assert!()) is writing code like:
assert(arr_len > 5);
if (arr_len <= 5) {
    // do something
}
They just assume the assertion holds and hope that, if it doesn't, something will crash later and provide info for debugging.
Anyone writing to a standard that requires 100% decision-point coverage will either not write that code (because NDEBUG is insane and assert should have useful semantics), or will have to both write and test that code.
The risk of losing one (or both) earbuds is real. My ears don't keep a snug grip on earbuds, so they come loose after I walk a little. That might just be my particular pair, but there is also the chance that only one of the two connects to your phone.
With wired ones, on the other hand, the cables get tangled. I can't walk around with them because the cable catches in the swing of my arms. Connecting them to the phone after a call had already started was a piece of cake, though. With Bluetooth, I never have my earbuds in when I actually need them, and it's too much of a pain to take them out of my bag and connect them.
Whenever it's time to replace my current earbuds, I'm going to go for a neckband instead. It has basically the best of both worlds, IMO (I'm mostly not that sensitive to audio quality), and the downsides aren't big enough to matter (I'll think of the weight as a neck workout).
Then don't buy headphones like that. I have AirPods Pro, but I also have a pair of $50 Beats Flex that, if they fall out of my ears, just hang around my neck. I use them when I travel.
I bought a pair of double-flange doohickeys to replace the standard ones.
Since the human eye is most sensitive to green, it will find errors in the green channel much more easily than in the other channels. This is why you need _more_ green data.
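If the context here is camera sensors (my assumption), this is also why the standard Bayer mosaic samples green twice per 2x2 tile; a sketch:

// One 2x2 tile of the standard RGGB Bayer pattern:
// half of all samples are green, matching the eye's sensitivity.
const bayerTile: string[][] = [
  ["R", "G"],
  ["G", "B"],
];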
Couldn't the compiler still optimise this? Emit two versions of the function, one with the parameter constant-folded and one without. Then, at runtime, check the value of the parameter and call the corresponding version.
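Compilers and JITs do perform variants of this, under names like function specialisation or multi-versioning. A hand-written sketch of the idea (hypothetical names):

// Generic version: `divisor` is a runtime value, nothing to fold.
function scaleGeneric(xs: number[], divisor: number): number[] {
  return xs.map(x => x / divisor);
}

// Specialised version: the parameter is folded to the constant 2,
// so the division becomes a cheap multiply.
function scaleBy2(xs: number[]): number[] {
  return xs.map(x => x * 0.5);
}

// One runtime check at the boundary picks the right version.
function scale(xs: number[], divisor: number): number[] {
  return divisor === 2 ? scaleBy2(xs) : scaleGeneric(xs, divisor);
}

The catch is code size: cloning every function for every interesting parameter value blows up the binary, so compilers only do it when heuristics or profiling suggest the constant case is hot.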
Personally, I would prefer that the package managers keep their own lockfiles with all their metadata. A CI process (using the package managers themselves) can create the SBOM for every commit in a standardized environment. We get all the same benefits without losing anything: the package managers keep their own formats and metadata, and anything unneeded for the SBOM gets stripped out.
Seconded. It is trivial to add an SBOM generator to your pipeline; it is not trivial to make all kinds of package managers switch formats, and each format serves a different audience.
To understand how impossible a task this is, you don't even need to think about different ecosystems (PyPI vs npm vs Cargo vs ...). Even across Linux distributions, the package managers are so different that expecting them to support the same format is a lost cause.
I do exactly that in my container build pipelines and it is great. And then CI uploads those SBOMs to Dependency Track.
Depending on the language, scanning just the container is not enough: you definitely want to scan the lockfiles for the full dependency list before everything is compiled/packed/minified and becomes invisible to trivy/syft.
You are building everything in CI from scratch, so in theory it should be entirely possible to skip scanning lockfiles and get all the data from the respective sources (OS, runtime, dynamic libs, static deps, codegen tools, build-time deps, etc.).
This isn't really something the logging library can do. If the language provides a string interpolation mechanism then that mechanism is what the programmers will reach for first. And the library cannot know that interpolation happened because the language creates the final string before passing it in.
If you want the built-in interpolation to become a no-op in the face of runtime log disabling, then the logging library has to be a built-in too.
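A minimal TS sketch of the mismatch (hypothetical logger): the template literal is built by the caller before the logger can check the level, while a closure defers the work:

function expensiveDump(): string {
  return JSON.stringify({ big: "state" }); // stand-in for costly work
}

const logger = {
  debugEnabled: false,
  // Eager: the caller has already built the string.
  debug(msg: string): void {
    if (this.debugEnabled) console.log(msg);
  },
  // Deferred: the closure only runs when the level is enabled.
  debugLazy(msg: () => string): void {
    if (this.debugEnabled) console.log(msg());
  },
};

logger.debug(`state: ${expensiveDump()}`);           // expensiveDump() always runs
logger.debugLazy(() => `state: ${expensiveDump()}`); // runs only if enabled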
I feel like there's a parallel with SQL, where you also want to discourage manual interpolation. Taking inspiration from it may help: you may not fully solve the problem, but there are some API ideas and patterns worth borrowing.
A logging framework may have the equivalent of prepared statements. You may also nudge usage by giving the raw-string API the clunkier name `log.traceRaw(String rawMessage)` while the parametrised one gets the nicer `log.trace(Template t, param1, param2)`.
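A sketch of that nudge, following the signatures above (hypothetical types):

class Template {
  constructor(readonly pattern: string) {}
}

interface Logger {
  // Deliberately clunky name for the footgun-prone raw-string entry point.
  traceRaw(rawMessage: string): void;
  // The nice short name goes to the parametrised, prepared-statement-style API.
  trace(t: Template, ...params: unknown[]): void;
}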
You pass "foo" to Template. The Template will be instantiated before log ever sees it. You conveniently left out where the "foo" string is computed from something that actually needs computation.
Like both:
new Template("doing X to " + thingBeingOperatedOn)
new Template("doing " + expensiveDebugThing(thingBeingOperatedOn))
You just complicated everything to get the same class of error.
Heck, even the existing good way of doing it, which is less complicated than your way, still isn't safe from it.
All your examples have the same issue, with plain string concatenation and with more expensive calls alike. You can only protect against an unknowing or lazy programmer if the compiler is smart enough to skip these entirely (JIT or not; a JIT would need to observe that the calls never amount to anything and decide to skip them after a while, which isn't deterministically useful, of course).
Yeah, it's hard to prevent a sufficiently motivated dev from shooting themselves in the foot, but these still help.
> You conveniently left out where the "foo" string is computed from something that actually needs computation.
I left it out because the comment I was replying to was pointing out that some logs don't have params.
For the approach using a `Template` class, the expectation would be that the docs call out that this class exists in the first place to enable lazy computation. Doing string concatenation inside a Template constructor should raise a few eyebrows when writing or reviewing code.
I wrote `logger.log(new Template("foo"))` in my previous comment for brevity, as it's merely an internet comment and not a real framework. In real code I would not even use stringy logs, but structured data attached to a unique code. But since this thread discusses the performance of stringy logs, I would expect log templates to be defined as statics/constants that don't contain any runtime values. You could also integrate them with metadata such as log levels, schemas, translations, codes, etc.
Regarding the args themselves, you're right that they can also be expensive to compute in the first place. You may then design the API so that args are passed via a callback, which allows deferring the param computation.
A possible example would be:
const OPERATION_TIMEOUT = new Template("the operation $operationId timed-out after $duration seconds", {level: "error", code: "E_TIMEOUT"});
// ...
function handler(...) {
  // ..
  logger.emit(OPERATION_TIMEOUT, () => ({operationId: "foo", duration: someExpensiveOperationToRetrieveTheDuration()}))
}
This is still not perfect, as you may need to compute some data before the log "just in case" you need it. For example, you may record the current time, then do the operation. If the operation times out, you use the recorded time to compute how long it ran. If it did not time out and you don't log, then reading the system time was "wasted".
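Continuing the hypothetical sketch above (reusing `OPERATION_TIMEOUT` and `logger`; `doOperation` and its result shape are made up here), the up-front clock read is the part the callback can't defer:

const start = Date.now();            // paid even if we never log
const result = await doOperation();
if (result.timedOut) {
  logger.emit(OPERATION_TIMEOUT, () => ({
    operationId: result.id,
    duration: (Date.now() - start) / 1000, // deferred, but `start` was not
  }));
}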
All I'm saying is that `logger.log(str)` is not the only possible API; and that splitting the definition of the log from the actual "emit" is a good pattern.
Unless log() is a macro of some sort that expands to if(logEnabled){internalLog(string)} - which a good optimizer will see through and never build the string when logging is disabled.
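In a language without macros, that expansion is just the classic hand-written guard; a sketch with hypothetical names:

const logEnabled = false;

function internalLog(s: string): void {
  console.log(s);
}

function expensiveDebugThing(x: unknown): string {
  return JSON.stringify(x); // stand-in for a costly computation
}

// Hand-written expansion of the hypothetical log() macro:
// the interpolated string is only built when the guard passes.
const thing = { id: 42 };
if (logEnabled) {
  internalLog(`doing X to ${expensiveDebugThing(thing)}`);
}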