According to the article, they did have a human verify the images before sending the alert. Apparently they and the school still think they made the right call.
Originally, a font (also spelled fount, at least formerly) was a physical thing: a collection of metal slugs, each bearing the reversed shape of a letter or other symbol (a glyph, in typographical parlance). You would arrange these slugs in a wooden frame, apply a layer of ink to them, and press them against a sheet of paper.
The typeface dictated the shapes of those glyphs. So you could own a font of Caslon's English Roman typeface, for example. If you wanted to print text in different sizes, you would need multiple fonts. If you wanted to print in italic as well as roman (upright), you would need another font for that, too.
As there was a finite number of slugs available, what text you could print on a single sheet was also constrained, to an extent, by your font(s). Modern Welsh, for example, has no letter "k", yet mediaeval Welsh used it liberally. The change came when the Bible was first printed in Welsh: the only fonts available were made for English, and didn't have enough k's. So the publisher made the decision to use c for k, and an orthographical rule was born.
Digital typography, of course, has none of those constraints: digital text can be made larger or smaller, or heavier or lighter, or slanted or not, by directly manipulating the glyph shapes; and you're not going to run out of a particular letter.
So that raises the question: what is a font in digital terms?
There appear to be two schools of thought:
1. A font is a typeface at a particular size and in a particular weight etc. So Times New Roman is a typeface, but 12pt bold italic Times New Roman is a font. This attempts to draw parallels with the physical constraints of a moveable-type font.
2. A font is, as it always was, the instantiation of a typeface. In digital terms, this means a font file: a .ttf or .otf or whatever. This may seem like a meaningless distinction, but consider: you can get different qualities of font file for the same typeface. A professional, paid-for font will (or should, at least) offer better kerning and spacing rules, better glyph coverage, etc. And if you want your text italic or bold, or particularly small or particularly large (display text), your software can almost certainly just digitally transform the shapes in your free/cheap, all-purpose font. But you will get better results with a font that has been specifically designed to be small or italic or whatever: text used for small captions, for example, is more legible with a larger x-height and less variation in stroke width than that used for body text. Adobe offers 65 separate fonts for its Minion typeface, in different combinations of italic/roman, weight (regular/medium/semibold/bold), width (regular/condensed) and size (caption/body/subhead/display).
What are you talking about? E-ink is much nicer for things like this. An OLED produces actual light, and uses way more power. I wouldn't want an OLED display on 24/7 in my living room.
Everyone defaults to it because it's really nice actually.
I think they mean that even an OLED display will actively emit light, whereas the e-ink displays shown in the linked posts are unlit. That, for me, is the key advantage that makes the device blend in.
I just want to be able to save the image to a folder and copy it to my clipboard when taking a screenshot. IIRC, in KDE Plasma's Spectacle these options are checkboxes; you can enable as many of them at once as you like.
The convention at every company I've worked at was to use DTOs. So yes, JSON payloads are in fact validated, usually with proper type validation as well (though unfortunately that part is technically optional, since we work in PHP).
Usually it's not super strict, as in it won't fail if a new field suddenly appears (though it will if one that's specified disappears), but that's a configuration thing we explicitly decided to set this way.
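For anyone curious, here's roughly what that looks like as a minimal sketch in plain PHP (no framework; the class and field names are invented for illustration): the DTO fails if a declared field is missing or mistyped, but quietly ignores extra fields, matching the "not super strict" configuration described above.

```php
<?php
// Hypothetical DTO sketch: decode the JSON payload, enforce the declared
// fields and their types, and tolerate unknown fields.
final class CreateOrderDto
{
    public function __construct(
        public readonly string $customerEmail,
        public readonly int $quantity,
    ) {}

    public static function fromJson(string $json): self
    {
        $data = json_decode($json, true, 512, JSON_THROW_ON_ERROR);

        // Fails if a field that's specified disappears or changes type...
        if (!is_string($data['customerEmail'] ?? null)) {
            throw new InvalidArgumentException('customerEmail must be a string');
        }
        if (!is_int($data['quantity'] ?? null)) {
            throw new InvalidArgumentException('quantity must be an integer');
        }

        // ...but a new, unknown field appearing is simply ignored.
        return new self($data['customerEmail'], $data['quantity']);
    }
}

$dto = CreateOrderDto::fromJson('{"customerEmail":"a@example.com","quantity":3,"new_field":"ignored"}');
```

In practice a serializer library usually does this mapping for you, but the strict/lenient behaviour it's configured with is exactly the trade-off described above.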
> This is a key difference between RTOS and Linux, where operations wait in an execution queue. And this is one of the reasons why Linux isn’t used in professional security systems.
Linux also has a realtime kernel available. It would have been nice to know why they didn't go with that, but it wasn't even mentioned.
I've also worked with payment processors a lot.
The ones I've used have test environments where you can fake payments, and some of them (Adyen does this) even give you actual test debit and credit cards, with real IBANs and stuff like that.
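To make that concrete, here's a hedged sketch (hypothetical endpoints and field names, not any real processor's API): the same request code runs against a sandbox base URL with a processor-supplied test card number, so an end-to-end test can exercise real HTTP round trips without moving money.

```php
<?php
// Hypothetical payment-processor call: only the base URL and credentials
// differ between the sandbox and production environments.
$baseUrl = getenv('APP_ENV') === 'production'
    ? 'https://api.example-processor.com/v1'        // live
    : 'https://api.test.example-processor.com/v1';  // sandbox

$payload = json_encode([
    'amount'    => ['value' => 1999, 'currency' => 'EUR'],
    'card'      => ['number' => '4111111111111111', 'expiry' => '03/30'], // well-known test card number
    'reference' => 'order-1234',
]);

$ch = curl_init($baseUrl . '/payments');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'X-Api-Key: ' . getenv('PROCESSOR_API_KEY'),
    ],
    CURLOPT_RETURNTRANSFER => true,
]);
$response = curl_exec($ch);
curl_close($ch);
```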
I don't know anything about the OP's system other than "POS", but the bug they experienced (and maybe all the typical integration stuff like inventory management) is very complex and wouldn't manifest itself as a payment-processing failure. I'm doubtful that anyone's production inventory or accounting systems allow for "fake" transactions that can be validated by an e2e test.
It was Linux running on (year-appropriate) https://www.hp.com/us-en/solutions/pos-systems-products.html... - plus all the add-on peripherals. The POS software was standalone-ish: in theory, you could hook a generator up to a register and the primary store server and still process cash, paper checks, and likely store-branded credit cards... it wouldn't be pleasant, but it could be done.
The logic for discounts, sales, and taxes (including whether an item was subject to sales tax in that jurisdiction) was all on the register. The store server logged the transaction and handled inventory and price lookup, but didn't do the price (sale, tax) calculations itself.
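As a rough illustration of that split (hypothetical names and numbers; the real thing was standalone Linux POS software, not PHP), the register owns the discount and tax rules and only asks the store server for the base price:

```php
<?php
// Hypothetical sketch of the register/store-server split described above.

// Stand-in for a price lookup against the primary store server.
function lookupBasePrice(string $sku): float
{
    $catalog = ['SKU-001' => 19.99, 'SKU-002' => 4.49];
    return $catalog[$sku];
}

// The register applies sale discounts and jurisdiction-specific tax itself.
function registerTotal(string $sku, float $discountPct, float $taxRate, bool $taxable): float
{
    $price = lookupBasePrice($sku) * (1 - $discountPct);
    return round($taxable ? $price * (1 + $taxRate) : $price, 2);
}

echo registerTotal('SKU-001', 0.10, 0.065, true); // 19.16: discounted and taxed on the register
```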