So many people want to believe in this sort of thing for various reasons that I get fatigued at the very thought of trying to explain to earnest believers why it is not a good idea. (e.g. commercial hosting services are really competitive, and for a long time the cost of computing has been going down; I don't know whether that trend is reversing because we've hit the end of the real Moore's law [1] or whether it's a temporary blip)
[1] the motor behind Moore's law is cost reduction; once scaling stops making computing cheaper, it stops, because we can't afford it anymore!
Well, it exists, but only if you're willing to buy server hardware on eBay, hustle to get old parts working together, negotiate a good deal on a cabinet, get address space from ARIN and announce it, and so on. There are probably 10-50x cost efficiencies vs. renting 5-year-old CPU families on AWS at a huge markup.
A laptop isn’t the way to do that though. And your typical VC-fueled startup isn’t going to know how to do it either. It takes a very narrow slice of competence to be able to do that correctly.
I think it's most likely testing the waters for a real offering. It's not that weird: many colo data centers already have policies about hosting laptops, because it's something that happens. It just isn't common, and usually isn't for hosting servers.
If the battery in the laptop is still good, it comes with its own UPS. My MBPs haven't had an Ethernet port in a minute, so do you have to supply your own adapters as well? You could fit ~15 MBPs on their edge in 9RU. That'd be an interesting-looking rack, not quite a blade chassis. It'd be rather boring looking, though, as there are no blinky-blinkies.
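A quick sanity check on that ~15-per-9RU figure, with laptops stood on edge like books across the rack opening (all dimensions here are rough assumptions on my part, not measured specs):

```python
# Rough density check for laptops stood on their long edge in a 19" rack.
# All dimensions are assumptions, not measured specs.
rack_opening_mm  = 450                 # usable interior width of a 19" rack, roughly
mbp_thickness_mm = 16.8                # closed 16" MBP, approximately
clearance_mm     = 13.2                # airflow + cable room per slot, assumed

slot_pitch_mm  = mbp_thickness_mm + clearance_mm   # 30 mm per laptop
laptops_across = int(rack_opening_mm // slot_pitch_mm)
print(laptops_across)  # 15
```

With 9RU of vertical space (~400 mm) there's also room for the ~250 mm depth of a laptop on its edge, so the geometry at least pencils out.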
I didn't really think anything I wrote would be taken seriously to the point of needing a retort. I mentioned blade servers and knew rack unit measurements, which, as context clues, should have suggested I'm familiar with actual data center equipment.
And yet most homes and offices are full of them. Laptop batteries don't usually catch fire. At the colos I am familiar with (which have pretty strict rules, generally), you can have equipment with batteries as long as you regularly inspect them.
If you got creative with cable management, you might be able to double up front and rear. It would probably be a PITA to manage, but you could get some halfway decent density.
Looks like they were proposing supplying USB Ethernet adapters, which doesn't seem crazy; they're cheap.
Hetzner rents you 42RU for €199 plus power and network. If we assume they can fill the entire rack, that's 4 9RU units for about €50 plus power and network.
If we assume an average power draw of 20W per laptop, that's 300W for each 15-laptop unit, or about €57/month in Hetzner's Finnish DC (including aircon).
Not sure about network. A 1Gbit uplink with 10TB of traffic (and €1/TB after that) is included. Upgrading that to 10Gbit is probably similar to the €51/month cost for the same uplink on dedicated servers, so another ~€15 for each 15-laptop unit. Plus around €2/month/IP, though you could bring your own if you find a cheaper subnet to buy.
So yeah, you're right that the math doesn't work out. But it's pretty close to break-even. I think you could break even on this if you found a more space-efficient way to cram them into the rack and didn't pay yourself any salary.
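Putting the figures above in one place (these are all the assumed numbers from the thread, not prices I've verified; the €0.26/kWh rate is backed out of the €57 power figure):

```python
# Back-of-envelope check of the Hetzner laptop-hosting math.
# All inputs are the thread's assumptions, not verified quotes.
rack_eur         = 199        # 42RU colo rack, per month
units_per_rack   = 42 // 9    # four 9RU "laptop shelves" per rack
laptops_per_unit = 15
watts_per_laptop = 20

space_eur = rack_eur / units_per_rack                 # ~€50 per unit

# 15 laptops * 20 W * 730 h = 219 kWh; €57 implies roughly €0.26/kWh
kwh_per_month = laptops_per_unit * watts_per_laptop * 730 / 1000
power_eur     = kwh_per_month * 0.26                  # ~€57 per unit

network_eur = 15                                      # assumed 10Gbit share
ips_eur     = laptops_per_unit * 2                    # ~€2/month/IP

total_eur = space_eur + power_eur + network_eur + ips_eur
print(f"~€{total_eur:.0f}/month per unit, ~€{total_eur / laptops_per_unit:.2f} per laptop")
```

That comes out to roughly €10 per laptop per month before salary, hardware, or any margin, which is why it's close to break-even but not obviously a business.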
That surely depends on the country. A data centre is still better in theory, but in practice I can barely imagine how I'd use a gigabit connection all to myself.
Just for hobbyists. It's very much over-engineered as a simple Z80 CPU drop-in replacement.
That's not to say I couldn't imagine that someone, somewhere, wakes up to an alert one day that some control board has failed, and it's _just_ the CPU, and the spare parts bin for out-of-production components got water in it and is ruined, and the company is losing millions per hour the system is down. I just don't think that'll be a common story. With full faith in humanity I like to imagine instead that the people responsible for such systems have planned for full control board replacements to be available for use comfortably before unavailability of the Z80 risks a significant outage due to component failure.
A LOT of them. Zilog only announced its discontinuation in 2024.
But that also means that there are A LOT of them out there, and they are cheap and generally extremely reliable parts. So if you rely on a device with a Z80 in it and you're worried about the CPU failing you can have hundreds of these things on the shelf for ~no money.
So I would say it's of limited utility for industrial applications for now simply because scarcity is not an issue for the real thing. This might change in the future so it's good that projects like this exist.
It's sad Z80 production was discontinued. There are some niche IC manufacturers that specialize in legacy parts (e.g. Rochester Electronics comes to mind). It would have been nice if Zilog had passed on manufacture of the good ol' Z80 to a manufacturer like this, even if it's just small production batches every couple of months or years.
There'll be plenty of hobbyists and/or legacy industrial / niche applications for a looong time to come.
The TI-84+ graphing calculator is still popular and a current model and it is Z80 based. (Though I doubt you'll find a DIP40 socket in one for a swap.)
The TI-84+ uses a TI REF 84PLUSB (or variant) ASIC that has a Z80-compatible core in it, not a Zilog Z80, and, as you say, definitely not a DIP40 part.
Thank you! I had a job coding Z80 assembly "back in the day" and grew to love its instruction set so I'm not surprised there is legacy and value to keep stuffing wee Z80ish cores into modern devices.
Odds are, if he left, their compensation situation had changed for the worse, if not led toward downsizing; and that on the edge of a recession, with plenty of competition out there.
We’ve heard this for every version of Windows for the past twenty years or more.
When XP was new, there were people refusing to upgrade from Win2000 to "Fisher-Price Windows".
Well, all versions except Vista — everybody seemed happy to upgrade to Windows 7. (Of course the lesson Microsoft drew from that smooth upgrade was to blow up everything for the next version. “They want tablet interactions, they just don’t know it!”)
Win8 (and Win8.1) also had the same reception. People were, of course, more than happy to move to Win10, which contained most of the under-the-hood improvements from Win8 and had a more traditional UI. (Also, with Vista → 7, it didn't hurt that machines had gotten more powerful in the meantime, so the extra RAM usage didn't really matter much anymore.)
Oh no, I absolutely agree with the GP. I was fine with every Windows version after and including XP, until I received a company laptop with Win 11. I have a big fat list of things that are super annoying or outright bugs.
Anecdotal, but I went 2000 -> 7 -> 10, skipping XP, Vista, and 8. Given that cadence, will hopefully be skipping 11 as well and waiting for whatever is next.
You've got a point on the optimization part: it's difficult to compare the two chips when they're running completely different OSs, especially when one of them runs a specially optimized OS like iOS.
The closest it could get I think would be running a variant of Unix optimized for the Ryzen.
Not all the keys can change, and the ones that can't are all Mac ones, which limits adoption by Windows users, who, like it or not, are still the vast majority.
The catch is, if we were really hiding it, we would have thousands of deaths by now. Just see how quickly it spread in Korea, Italy and Iran. Hiding it (so that no action can be taken) for 1.5 months would be suicide, given the amount of traffic in and out of Vietnam and the population density.
Or you can see that in the past week alone we got 27 cases, starting from 2 planes. And the number is low because we vigorously chased people down to test them all and quarantine them. Had we let them go loose, those cases alone could have spread to hundreds.