Ah, that explains this patchset that was submitted to the Linux kernel today
"KVM: s390: Introduce arm64 KVM"
"By introducing a novel virtualization acceleration for the ARM architecture on
s390 architecture, we aim to expand the platform's software ecosystem. This
initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU
virtualization on s390....."
From the perspective of PC building, I've always thought it would be neat if the CPU/storage/RAM could go on a card with a PCIe edge connector, and then that could be plugged into a "motherboard" that's basically just a PCIe multiplexer out to however many peripheral cards you have.
Maybe it's gimmicky, but I feel like you could get some interesting form factors with the CPU and GPU cards sitting back-to-back or side-by-side, and there would be more flexibility for how to make space for a large air cooler, or take it up again if you've got an AIO.
I know some of this already happens with SFF builds that use a Mini-ITX motherboard + ribbon cable to the GPU, but it's always been a little awkward with Mini-ITX being a 170mm square, and high end GPUs being only 137mm wide but up to 300mm in length.
Oh, going back to a backplane computer design? That could be cool, though I assumed we moved away from that model for electrical/signaling reasons? If we could make it work, it would be really cool to have a system that let you put in arbitrary processors, e.g. a box with 1 GPU and 2 CPU cards plugged in
I believe PCIe is a leader/follower system, so there'd probably be some issues with that unless the CPUs specifically knew they were sharing, or there was a way for the non-leader units to know that they shouldn't try to control the bus.
If every device is directly connected to every other one of n devices with Thunderbolt cables, each with its own dedicated set of PCIe lanes, you'd be limited to 1/n of the theoretical maximum bandwidth between any two devices.
What you really want is for every device to be connected through a massive PCIe switch that allows PCIe lanes to be connected arbitrarily, so, e.g., a pair of EPYCs could communicate over 96 lanes with 32 lanes free to connect to peripheral devices.
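To make the contrast concrete, here's a back-of-the-envelope sketch. The lane count, per-lane throughput, and peripheral reservation are illustrative assumptions for the EPYC-style example above, not vendor specs:

```python
# Back-of-the-envelope comparison: full mesh vs. central PCIe switch.
# Assumptions (illustrative): each device has 128 PCIe Gen5 lanes,
# and one Gen5 lane carries roughly 4 GB/s in each direction.

GBPS_PER_LANE = 4          # approx. GB/s per PCIe Gen5 lane, each direction
LANES_PER_DEVICE = 128     # e.g. an EPYC-class part

def mesh_pair_bandwidth(n_devices):
    """Full mesh: each device splits its lane budget into dedicated
    point-to-point links to the other n-1 devices."""
    lanes_per_link = LANES_PER_DEVICE // (n_devices - 1)
    return lanes_per_link * GBPS_PER_LANE

def switch_pair_bandwidth(reserved_for_peripherals=32):
    """Central switch: any pair of devices can use the whole lane
    budget, minus whatever is reserved for peripherals."""
    return (LANES_PER_DEVICE - reserved_for_peripherals) * GBPS_PER_LANE

print(mesh_pair_bandwidth(4))    # 4 meshed devices: 42 lanes/link -> 168 GB/s
print(switch_pair_bandwidth())   # 96 lanes between a pair -> 384 GB/s
```

Under these assumptions the switch topology gives a pair of devices more than twice the bandwidth of a four-way mesh, which is the 1/n penalty described above.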
There were also PC compatible systems based around ISA backplanes. This was especially common for industrial computers, but Zenith/Heathkit made ISA backplane based systems for the business and consumer markets. I own a Zenith Z-160 luggable computer from 1984 which uses an 8 slot 8-bit ISA backplane. 1 slot is occupied by a CPU card which also has the keyboard connector. My system has 2 memory cards which each provide up to 320K along with a serial and parallel port. Zenith sold a desktop version of this as the Z-150. They later released models based upon 16-bit ISA backplanes. I think, but am not sure off the top of my head, that the last CPU they produced a 16-bit card for was the 486.
This was (is?) done in some strange industrial computers for sure, and I think others, where the "motherboard" was just the first board on the backplane.
The Transputer B008 series was also somewhat similar.
The RAM and CPU would still be on the same card together, and for the typical case of a single GPU it would just be 16x lanes direct from one to the other.
For cases where there are other cards, yes, there would be more contention, but few expansion cards are able to saturate more than a lane or two. One lane of PCIe Gen5 is a whopping 4 GB/s in each direction, so that theoretically handles a dual 10GbE NIC on its own.
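A quick sanity check of that claim: one Gen5 lane signals at 32 GT/s with 128b/130b encoding, which works out to roughly 4 GB/s per direction before protocol overhead. The numbers below are my own arithmetic, not measured figures:

```python
# Rough check: can one PCIe Gen5 lane feed a dual 10GbE NIC?
GEN5_GT_PER_LANE = 32            # 32 GT/s raw signaling rate per Gen5 lane
ENCODING_EFFICIENCY = 128 / 130  # PCIe Gen3+ uses 128b/130b encoding

lane_gbytes = GEN5_GT_PER_LANE * ENCODING_EFFICIENCY / 8  # GB/s per direction
nic_gbytes = 2 * 10 / 8                                   # dual 10 Gb/s ports

print(round(lane_gbytes, 2))     # ~3.94 GB/s per direction
print(lane_gbytes > nic_gbytes)  # True: one lane covers 2.5 GB/s of NIC traffic
```

This ignores TLP header and flow-control overhead, so a real dual 10GbE card would sit closer to the lane's limit than the raw numbers suggest, but the headroom is there.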
That's what I was hoping Apple was going to do with a refreshed Mac Pro.
I had envisioned a smaller tower design with PCIe slots, with Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB but with a PCIe edge connector and power connector.
The way I see it, a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten-year-old desktop is fine as a day-to-day driver; it just needs some fancy NPU to do fancy AI stuff.
This kind of architecture seems to make sense to me in an age where computers have such a longer usable lifespan and where so many features are integrated into the motherboard.
Now we have cables that include computers more powerful than an old mainframe. So if it pleases you, just think of all the tiny little daughter computers hooked up to your machine now.
A 4x RPi Zero Ws Docker Swarm cluster running the dockerised versions of Hercules with VM/370 Sixpack, VM/370 CE and MVS TK 4. All in an IKEA picture frame.
I'd guess most apps are bytecode only, which will run on any platform. Some apps with native code have bytecode fallbacks. Many apps with native code include support for multiple architectures; the app developer will pick what they think is relevant for their users, but MIPS and x86 are options. There were production x86 Android devices for a few years, and some of those might still be in use; MIPS got taken out of the Native Development Kit in 2018, so it's probably not very relevant anymore.
Price-competitive with AMD and Intel? Sure. Abilities? There is no magic; the Telum and Power11 are each as complicated as something like EPYC, and the former has both a longer and taller compatibility totem pole than x86.
Anyway, this post was never about building ARM or x86 CPUs; the point is they could have done a z/Arch fast path for x86 for "free", so there is some other strategy at play to consider doing it with ARM.
IIRC Qualcomm smartphone SoCs have always run some kind of hypervisor, I believe it's to allow partitioning of the CPU cores with the modem/DSP.
They used to (mid-late 2000s) use an L4 derivative ("REX"?), with the more recent chips (including the 'X' series for PCs) using their homegrown "Gunyah" hypervisor (https://github.com/quic/gunyah-hypervisor)
It would be interesting if you know of any evidence that this is an architectural hardware limitation. Though of course the practical difference may be small if the DRM bootloader enforces loading the hypervisor through cryptographic checksums. But I guess if a customer asked, they would allow it and the hardware could do it.
I have a similar Windows Arm64 machine (Lenovo "IdeaPad 5 Slim"), RDP into it works OK.
There is one issue I ran into that I haven't on my (self-built) Windows desktops: when Windows Hello (fingerprint lock) is enabled, and neither machine is on a Windows domain, the RDP client will just refuse to authenticate.
Aside from the changing pint glass color and level, the Sky set top box / decoder will also overlay the subscription ID at random intervals and locations.
I don't know if Sky does it, but Foxtel in Australia, in addition to the pint glass watermark, have a separate set of channels for public venues, which have different ad breaks/content to residential subscriptions. (https://www.foxtelmedia.com.au/foxtel-media-network/fox-venu...)
Ten64's have been shipping for a while now, though you are best to ask in our support forum: https://forum.traverse.com.au/ . We haven't posted much on Crowd Supply as it's a very manual process to get stuff up on there.
I'm not too familiar with TrustZone, but I'm not aware of any limitations in the secure world. I haven't tried OP-TEE or any similar secure world firmware, simply because no one has asked for it.
When we did the research for our network appliance customers, we were very interested in NXP. Have you tried to buy an Application Solutions Kit from NXP? We faced the problem of not achieving spec performance using NXP reference boards. After reaching out to NXP, support informed us that we should buy an ASK, which would solve our problems. Unfortunately, it was beyond our research budget.
No, we haven't bought any ASKs. We have worked with them on a DPDK project and they were quite helpful when it came to debugging difficult bugs with it.
Improving the routing performance in Linux is near the top of my TODO list, however. XDP is one candidate (see https://forum.traverse.com.au/t/vyos-build-my-repo/181/5 for some results) and using the LS1088's AIOP is another possibility.
Linux is important, but FreeBSD is the key to the hearts of the majority of DIY firewall builders. I'm a huge fan of your project and am looking for opportunities to get hands-on experience with it.
I see that FreeBSD support is heading in a good direction:
dsl (Dmitry) is the main developer behind the DPAA2 drivers for FreeBSD and he's done a fantastic effort so far. Myself and Bjoern (bz) have also written bits of it.
The performance has improved compared to how it was a year ago (e.g. struggling to get 400 Mbps throughput), but there are some severe issues in -CURRENT :( I believe dsl is trying to get them fixed by 14.0-RELEASE.
The money comes from a levy on telecommunications carriers (Telstra included, but also most of its competitors). There was a bit of conjecture in the media that the "free" payphones were really being paid for by the other telcos, once the money Telstra contributes is discounted.
In Australia (Afterpay's home market), credit card merchant fees are tightly regulated (<1% for most transactions AFAIK), so 'cash-back' offers like the US don't exist.
Airline/frequent flyer point conversions similarly got nerfed, so much so that it's better to cycle cards (creditworthiness permitting) every year to get a point bonus up front (e.g. 200K points for spending $X000 in the first 3 months) than to try to earn that amount.
That's not the main benefit of using a credit card. The primary benefit of a CC is decoupling it from your bank account: if there is an issue, the CC company will fight the charge for you and will almost always take your side.
I bought a pair of speakers costing $2500. I tried to reach the company for a return, but no one would answer. No phone, no email, nada. The 1 month return deadline was coming up, so I called American Express. Within literally 10 mins (no wait at all), I filed a claim and was done with it. 60 days later, I got the money back. Turns out that they couldn't reach them either.
Credit cards are more consumer centric than merchant centric, and they're a huge check on otherwise rampant asshole behavior from merchants. I also run an online shop, and it's a pain in the neck when someone files a chargeback; we just consider it part of the cost of doing business.
In the last 20 or so years of using an AMEX card, I've filed half a dozen chargebacks. It's an amazing company, better than any other CC company in my opinion. They also have some of the highest fees, and it shows why: their customer service is absolutely top notch. Red carpet, the works.
If someone fraudulently or mistakenly charges my debit card $5,000, that's $5,000 gone from my bank account until I get it fixed.
If someone charges $5,000 to my credit card in error, I've got 30+ days to resolve it with zero financial impact to me. If it doesn't get resolved in that time, I have the option to not pay the bill (or that portion of the bill), knowing that if the error is fixed, the penalties and interest will be waived.
Have you experienced that? There are limits on charges on debit cards, usually ~2000 euros. Credit cards are higher, but then again they also have the credit limit and won't go beyond that in most cases.
I haven't experienced it personally, but I know someone who used a debit card for their utilities on auto-pay. The gas company said they damaged their meter, so they replaced it and auto-debited their account for $2,500. They had to borrow money from friends until it got fixed and the money was refunded.
In my experience, chargebacks with a debit card (in Europe) are a pain in the ass.
The banks themselves seem to be against it so they made me write emails explaining the situation, give the seller more time (1 month not enough) and then wait another few weeks for processing.
With a credit card (British), they just did it -_-
And let me mention how a SEPA payment means your money is gone unless the seller themselves agrees to refund.
In the US, debit cards have terrible chargeback protections, and even then, if the merchant doesn't follow through, you're the one who needs to fight for it, whereas with a CC it'll be them doing the fighting.
Reminds me of a famous 'intentional buffer overflow' by AOL in 1999 to determine whether the user was using a 'genuine' AIM client; this came after Microsoft had added an AIM client to MSN Messenger.
A lot of the Quark documentation was basically copy-pasted from the 486 documentation, with "486" replaced by "Quark". There was some rather unusual phrasing and some anachronisms as a result.
"KVM: s390: Introduce arm64 KVM"
"By introducing a novel virtualization acceleration for the ARM architecture on s390 architecture, we aim to expand the platform's software ecosystem. This initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU virtualization on s390....."
https://patchwork.kernel.org/project/linux-arm-kernel/cover/...