There also seems to be a plan to add UEFI support to U-Boot [1]. Many of these kinds of boards have U-Boot implementations, so they could then boot a UEFI kernel.
However, many of these ARM chips have their own sub-architecture in the Linux source tree, and I'm not sure it's possible today to build a single image with them all built in and choose the sub-architecture at runtime. Theoretically it could be done, of course, but who has the incentive to do that work?
(I seem to remember Linus complaining about this situation to the Arm maintainer, maybe 10-20 years ago)
That sounds worth knowing; however, when I looked MERV up, it seems that it's a rating system, not a type of filter. Could you be more specific about the kind of filter you mean?
HEPA is basically a single filtration standard (with "True HEPA" as a marketing offshoot), whereas MERV is a range of ratings that lets you pick exactly the filtration you need at the highest possible airflow. It really depends on what kind of pollution you have in your home.
If you just have a lot of dust, then you want the highest airflow possible (around MERV 9-10). If you want to filter the things that cause allergies, you need to go as high as MERV 14, since MERV 9-10 effectiveness is very low in that particle-size range.
I too have no expertise here, but I've known quite a few rodents. So... my amateur take:
I do not think that the area would necessarily need to be cleared of debris first. Rats can get places people would never imagine they could quite easily.
What I did find to be consistent with rodents is the difficulty in getting them to use set search patterns and such. Rodents go where and how rodents go, and I've never found it possible to teach them set routes. They get a whiff and tend to go right at whatever they smelled in contrast to dogs who can be taught to take set routes/patterns.
The "distance from the nose" wouldn't matter. Rodents can often smell things from around a mile away; a few inches of dirt (most landmines are buried under less than 25 cm of soil, anti-tank mines under less than 30 cm) wouldn't be enough to deter them.
Reliably signaling humans depends upon the particular rodent. Rodents have personalities, and they will often make very particular signals to their people in response to particular things. Reliably enough to bet a life on it? Not sure, but I don't think they'd be terrible in that regard.
"1) Rats have short legs and can only be used in a smooth, obstruction free area.
2) Rats cannot be trained to move in a search pattern.
3) Rats have the capacity to detect explosive residue and can apparently be encouraged to do so with food rewards...
4) ...but there is no evidence that rats can respond to explosive residue in a hazardous area reliably.
5) There is no evidence that the method of deployment results in a thorough search of the area.
6) The speed of search sometimes claimed would make thorough search (tiny nose within 10 cms of target) physically impossible.
7) The cost of preparing the area where they are used, then dragging the rat back and forth over it has to be added to training and housing the rats.
8) While a rat may indicate on a hazard, it cannot expose and clear it. The cost of manually excavating and clearing any explosive hazard that the rat signals on must also be added to the total cost of clearance.
9) The ground has to be scalped for them to be dragged across it in a straight line, then safe-lanes for the handlers manually searched and cleared – so the total cost of using the rats includes the cost of an armoured vegetation cutter and a fully equipped demining team to prepare the area in which the rat will search.
10) There is absolutely no evidence that the cost is less than the cost of demining the area using the same assets but without rats.
11) Because the rat cannot be trained to search and indicate in a set way (which a dog can be reliably trained to do) there is no way for a Quality Assurance observer to know whether the rat is paying attention, so no way of knowing whether the rat has done anything of any value at all.
"
Ok, I only skimmed the article, but it sounded a bit forced. Already point 1 in the summary makes me sceptical, because as far as I know rats can climb pretty well. (Otherwise I'm no expert either.)
"5. There is no evidence that the method of deployment results in a thorough search of the area."
This is the most technical post, and it will go almost unnoticed.
A cool story about a rat or a dog draws more attention than mines which were missed and maimed people years after searching the field.
Anyway, the author of that piece is a little mad, but in the sense that it's worth taking seriously. TL;DR: the rats aren't cost-effective and, worst of all, haven't been scientifically shown to be effective.
Depends on your jurisdiction, of course. (I am not a lawyer and this is not legal advice, merely my impressions.) In the UK this would likely be worth it if the injury is a specified financial amount: for people who have paid for something and simply not received it, a small claims court is a good bet for getting a refund. A lot of the time, however, the injury is in the consequences of relying on one of these companies' services and having it withdrawn without notice, as in the OP's case. Usually you want service restored, as that is in fact the least costly outcome for both sides, but small claims courts (in the UK) do not make that kind of order. In theory you could sue for the financial consequences of the abrupt withdrawal, but I'd guess that's too complicated for a small claim.
Check your local law. In some jurisdictions you can charge interest, or penalties. You can be gentle about it: give fair warning, a reminder that interest is starting to accrue, etc. But customers generally don't want their liabilities to increase, so they will prefer to pay before extra costs are incurred.
It depends a lot on your relationship with the customer as well, I guess. Some may get butthurt about it; for others, your relationship is with a person in a different department from the people organising the payments, so you can send interest notices to the finance department without worrying the person who wants your services.
Suppose you are a building contractor. You have given start dates for future jobs, but your current job is going to run over the expected time. You can choose between:
1. Slip every job, annoying all of the customers whose jobs are queued up. You get a bad reputation.
2. Move on to the next job on time, and gradually complete the stalled job in the background by sending workers back to it when you have spare capacity (which you should have, because in general you must overestimate or things will go badly wrong).
That customer will now suffer because their job is going to take a multiple of the expected time, but all of the other customers are happy, so your reputation is good.
I've observed that airlines do this as well when they have maintenance or gate queues. They will sacrifice 1-2 flights (hours late or even cancelled) to keep many other flights near on-time. Fewer angry customers, better reported average "on-time" metrics.
I had a section in the post I cut out about how optimizing queue selection started out as a technical problem, but transformed into more of a business and ethical problem the more I pondered it.
You're effectively deciding how to distribute suffering across a large group of people.
Comes up in any situation where large metric gains can be accomplished by optimizing for specific groups - recommender and personalization systems are another example.
Not the OP, but FYI you know that to some extent anyway, because the termination condition is that confidence is above a specified value. This is one of the advantages over just doing git bisect with some finger-in-the-air test-repeat factor.
But yeah it can print that too.
It's worth noting that the analysis (although not this specific algorithm) applies in cases where there is a deterministic approach, but a nondeterministic algorithm is faster.
For example, suppose you have some piece of hardware which you can interrogate, but not after it crashes. It crashes at a deterministic point. You can step it forward by any number of steps, but you can only examine its state if it did not crash. If it crashed, you have to go back to the start. (I call this situation "Finnegan search", after the nursery rhyme which prominently features the line "poor old Finnegan had to begin again".)
The deterministic algorithm has you do an examination after every step. The nondeterministic algorithm has you choose some number of steps to take between examinations, accepting the risk that you have to go back to the start. The optimal number of steps (and thus the choice of algorithm) depends on the ratio of the cost of an examination to the cost of a step. It can be found analytically by maximizing the expected information gain per unit time.
(Either way the process is pretty annoying and considerable effort in hardware and software design has gone into providing ways to render it unnecessary, but it still crops up sometimes in embedded systems).
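To make the step-size tradeoff concrete, here's a toy sketch. This is my own cost model, not anything from the comment above: I assume a single crash point C, a fixed cost per step and per examination, and that after a crash you restart once and single-step through the last known-good window. All names and numbers are made up for illustration.

```python
def finnegan_cost(C, k, step_cost, exam_cost):
    """Toy cost model: locate crash point C by advancing k steps between
    examinations; after the crash, restart and single-step the last window."""
    j_crash = -(-C // k)                        # ceil(C / k): index of the chunk that crashes
    phase1 = j_crash * k * step_cost + (j_crash - 1) * exam_cost
    if k == 1:
        return phase1                           # single-stepping pins C directly
    # Restart: replay to the last known-good point without examining,
    # then step one at a time (examining each time) until it crashes again.
    last_good = (j_crash - 1) * k
    steps = last_good + (C - last_good)         # replay, then creep up to the crash
    exams = C - last_good - 1                   # the crashing step gets no examination
    return phase1 + steps * step_cost + exams * exam_cost

C, step_cost, exam_cost = 10_000, 1.0, 50.0     # examination much dearer than a step
costs = {k: finnegan_cost(C, k, step_cost, exam_cost) for k in range(1, C + 1)}
best_k = min(costs, key=costs.get)
print(f"k=1 (deterministic): {costs[1]:.0f}   best k={best_k}: {costs[best_k]:.0f}")
```

With examinations 50x the cost of a step, chunking beats single-stepping by more than an order of magnitude in this toy model; when examinations are cheap relative to steps, k=1 wins and the deterministic algorithm is optimal.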
In theory, the algorithm could deal with that by choosing, at each step, the commit which gives the best expected information gain divided by expected test time. In most cases it would be more efficient just to cache the compiled output, though.
This doesn't sound quite right, but I'm not sure why.
Perhaps: a reasonable objective would be to say that for N bits of information, I would like to pick the test schedule that requires the least total elapsed time. If you have two candidate commits and a slow recompile time, it seems like your algorithm would do many repeats of commit A until the gain in information per run drops below the expected gain from B divided by the recompile time, then it would do many repeats of B, then go back to A, etc. So there are long runs, but you're still switching back and forth. You would get the same number of bits by doing the same number of test runs for each commit, but batching all of the A runs before all of the B runs.
Then again: you wouldn't know how many times to run each in advance, and "run A an infinite number of times, then run B an infinite number of times" is clearly not a winning strategy. Even with a fixed N, I don't think you could figure it out without knowing the results of the runs in advance. So perhaps your algorithm is optimal?
It still feels off. You're normalizing everything to bits/sec and choosing the maximum. But comparing an initial test run divided by the rebuild time vs a subsequent test run divided by a much faster time seems like you're pretending a discrete thing is continuous.
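For what it's worth, the gain-per-second quantity being debated can be written down for a toy two-commit case. This is my own sketch, not git_bayesect's actual code; the reproduction rate, the timings, and the assumption that a failure is conclusive are all invented for illustration.

```python
import math

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_gain(p_bug, repro):
    """Expected entropy reduction from one run of a flaky test on a commit
    that contains the bug with probability p_bug. The bug, when present,
    makes the test fail with probability repro; a failure is conclusive."""
    p_fail = p_bug * repro
    p_pass = 1.0 - p_fail
    posterior = p_bug * (1 - repro) / p_pass    # P(bug here | test passed)
    # Fail branch resolves the question (entropy 0); pass branch leaves H(posterior).
    return H(p_bug) - p_pass * H(posterior)

# Two equally likely suspects; commit A is already built, B needs a rebuild.
p, repro, test_time, rebuild = 0.5, 0.3, 60.0, 600.0
rate_repeat_A = expected_gain(p, repro) / test_time
rate_switch_B = expected_gain(1 - p, repro) / (test_time + rebuild)
print(rate_repeat_A > rate_switch_B)            # prints True: greedy repeats A first
```

With a symmetric prior the two gains are identical, so the rebuild overhead alone decides the comparison and the greedy policy keeps hammering the already-built commit, which is exactly the batching behaviour described above.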
The general requirement for this approach to be optimal is called "dynamical consistency"; a good description is in [1]. It is the situation where, suppose you have a budget B, and you search until your budget is exhausted. Then you are informed that there is an additional budget, B2, and you can continue searching until that is exhausted. A situation is dynamically consistent if, for any B and B2, the optimal strategy is such that you would make the same choices whether or not you know in advance that you will get B2.
So you are correct that discreteness is a problem, because if you are nearing the end of the budget you may optimally prefer to get more dice rolls than take bigger bets. But the optimal solution is then often analytically intractable (or at least it was - I last read about this a while back), and the entropy approach is often reasonable anyway. (For cases where search effort is significant, a good search plan can be found by simulation).
Note that "pick the commit with best expected information gain" in git_bayesect isn't optimal even in the no overhead regime. I provide a counterexample in the writeup, which implies ajb's heuristic is also not optimal. I don't see a tractable way to compute the optimal policy.
One idea: if you always spend time testing equal to your constant overhead, I think you're guaranteed to be not more than 2x off optimal.
(and agreed with ajb on "just use ccache" in practice!)
[1] https://docs.u-boot.org/en/v2021.04/uefi/uefi.html