mcbridematt's comments (Hacker News)

Ah, that explains this patchset that was submitted to the Linux kernel today

"KVM: s390: Introduce arm64 KVM"

"By introducing a novel virtualization acceleration for the ARM architecture on s390 architecture, we aim to expand the platform's software ecosystem. This initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU virtualization on s390....."

https://patchwork.kernel.org/project/linux-arm-kernel/cover/...


Oh, that's a weird way to do it; they used to have an x86 add-on block for mainframes, which was just a pile of x86 blades with some integration.

I loved the era of "daughter cards" which were just entire computers on a board.

Things like https://www.youtube.com/watch?v=a6b4lYOI0GQ could get you a really interesting form of multitasking.


From the perspective of PC building, I've always thought it would be neat if the CPU/storage/RAM could go on a card with a PCIe edge connector, and then that could be plugged into a "motherboard" that's basically just a PCIe multiplexer out to however many peripheral cards you have.

Maybe it's gimmicky, but I feel like you could get some interesting form factors with the CPU and GPU cards sitting back-to-back or side-by-side, and there would be more flexibility for how to make space for a large air cooler, or take it up again if you've got an AIO.

I know some of this already happens with SFF builds that use a Mini-ITX motherboard + ribbon cable to the GPU, but it's always been a little awkward with Mini-ITX being a 170mm square, and high end GPUs being only 137mm wide but up to 300mm in length.


Oh, going back to a backplane computer design? That could be cool, though I assumed we moved away from that model for electrical/signaling reasons? If we could make it work, it would be really cool to have a system that let you put in arbitrary processors, e.g. a box with 1 GPU and 2 CPU cards plugged in.

I believe PCIe is a leader/follower system, so there'd probably be some issues with that unless the CPUs specifically knew they were sharing, or there was a way for the non-leader units to know that they shouldn't try to control the bus.

But if we're dreaming, we can have the backplane actually be a full mesh (N× Thunderbolt 5 cables connecting each slot to every other slot directly).

Then each device can be a host and a client at the same time, at full bandwidth.


If every device is directly connected to every other one of n devices with Thunderbolt cables, each with its own dedicated set of PCIe lanes, you'd be limited to 1/n of the theoretical maximum bandwidth between any two devices.

What you really want is for every device to be connected through a massive PCIe switch that allows PCIe lanes to be connected arbitrarily, so, e.g., a pair of EPYCs could communicate over 96 lanes with 32 lanes free to connect to peripheral devices.
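To make the 1/n penalty concrete, here's a toy calculation. The 128-lane and 5-device figures are illustrative assumptions on my part, not anything from the thread:

```python
# Toy per-link bandwidth comparison: full mesh vs. switched fabric.
# Assumptions (hypothetical): each device exposes 128 PCIe lanes; 5 devices total.
lanes_per_device = 128
n_devices = 5

# Full mesh: the lanes are split evenly across the (n-1) point-to-point links,
# so each pair only ever gets a fraction of the device's total lanes.
mesh_lanes_per_link = lanes_per_device // (n_devices - 1)

# Switched: every device runs all its lanes to the switch, so any single pair
# can be cross-connected at up to the device's full lane count.
switch_lanes_per_link = lanes_per_device

print(mesh_lanes_per_link, switch_lanes_per_link)  # 32 128
```

The mesh link is fixed at fabrication time, while the switch can reallocate its full width to whichever pair is talking.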


That's basically what S-100 systems were, isn't it (on a much slower bus)?

There were also PC-compatible systems based around ISA backplanes. This was especially common for industrial computers, but Zenith/Heathkit made ISA-backplane-based systems for the business and consumer markets. I own a Zenith Z-160 luggable computer from 1984 which uses an 8-slot 8-bit ISA backplane. One slot is occupied by a CPU card which also has the keyboard connector. My system has 2 memory cards which each provide up to 320K, along with a serial and parallel port. Zenith sold a desktop version of this as the Z-150. They later released models based upon 16-bit ISA backplanes. I think, though I'm not sure off the top of my head, that the last CPU they produced a 16-bit card for was the 486.

Yes, but also in many other scenarios. The last backplane systems I saw were 90s industrial 486s.

This was (is?) done - on some strange industrial computers for sure, and I think others, where the "motherboard" was just the first board on the backplane.

The Transputer B008 series was also somewhat similar.


That would crush latency on RAM.

The RAM and CPU would still be on the same card together, and for the typical case of a single GPU it would just be a direct x16 link from one to the other.

For cases where there are other cards, yes, there would be more contention, but few expansion cards are able to saturate more than a lane or two. One lane of PCIe Gen 5 is a whopping ~4 GB/s in each direction, so that theoretically handles a dual 10GbE NIC on its own.
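The arithmetic behind that claim checks out; a quick sanity check using the PCIe 5.0 figures (32 GT/s per lane, 128b/130b encoding):

```python
# PCIe 5.0: 32 GT/s per lane with 128b/130b encoding overhead.
gen5_lane_gbps = 32e9 * (128 / 130) / 8 / 1e9   # ~3.94 GB/s per direction per lane
dual_10gbe_gbps = 2 * 10e9 / 8 / 1e9            # 2.5 GB/s for two 10GbE ports combined

print(round(gen5_lane_gbps, 2), dual_10gbe_gbps)  # 3.94 2.5
```

So a single Gen 5 lane has roughly 1.4 GB/s of headroom over a saturated dual-port 10GbE card (ignoring protocol overhead above the physical layer).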


That's what I was hoping Apple was going to do with a refreshed Mac Pro.

I had envisioned a smaller tower design with PCIe slots, with Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB but with a PCIe edge connector and power connector.

The way I see it, a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten-year-old desktop is fine as a daily driver; it just needs some fancy NPU to do fancy AI stuff.

This kind of architecture seems to make sense to me in an age where computers have a much longer usable lifespan and so many features are integrated into the motherboard.


You can do basically that by connecting over Thunderbolt 5

https://news.ycombinator.com/item?id=46248644


Homogeneous RDMA is less like a daughterboard and more like a brother- or sisterboard.

An M5 processor plugged into the same RDMA fabric as IBM POWER, for that "brother from another motherboard".

Apple already experimented with this with the prototype Jonathan computer. It's very late-'80s in its aesthetic, and I love it.

https://512pixels.net/2024/03/apple-jonathan-modular-concept...


This is the kind of glorious thing that will only appear when Moore's law is dead and buried.

Now we have cables that include computers more powerful than an old mainframe. So if it pleases you, just think of all the tiny little daughter computers hooked up to your machine now.


z/OS for ARM then? ;-)

I’ve been running VM/370 and MVS on my RPi cluster for a long time now.


Cool, can you share more about the setup?

A 4x RPi Zero W Docker Swarm cluster running the dockerised versions of Hercules with VM/370 Sixpack, VM/370 CE and MVS TK 4. All in an IKEA picture frame.

But I wonder if this is "much better" than x86 emulation or virt?

Is there really SW that's limited to (Linux) ARM and not x86?


Technically, aren't most Android apps limited to ARM?

There's certainly some, but I don't think most.

I'd guess most apps are bytecode-only, which will run on any platform. Some apps with native code have bytecode fallbacks. Many apps with native code include support for multiple architectures; the app developer will pick what they think is relevant for their users, but MIPS and x86 are options. There were production x86 Android devices for a few years, and some of those might still be in use; MIPS was removed from the Native Development Kit in 2018, so it's probably not very relevant anymore.
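You can check this for any given app yourself: an APK is an ordinary zip file, and native code ships under `lib/<abi>/`. A quick sketch (the function name is mine, not from any tool):

```python
# List the native ABIs an APK ships, by scanning its lib/<abi>/*.so entries.
# An empty result suggests a bytecode-only app that runs on any architecture.
import zipfile

def native_abis(apk_path):
    abis = set()
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            parts = name.split("/")
            # Native libraries live at lib/<abi>/<something>.so
            if len(parts) >= 3 and parts[0] == "lib" and name.endswith(".so"):
                abis.add(parts[1])
    return sorted(abis)  # e.g. ['arm64-v8a', 'x86_64']
```

Typical ABI directory names are `armeabi-v7a`, `arm64-v8a`, `x86`, and `x86_64`.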


Probably Intel and AMD aren't willing to do this deal but Arm is.

IBM actually still owns x86 rights. They last used them to do something similar called Lx86, which ran x86 VMs on POWER CPUs.

Developing a good x86 CPU is far beyond IBM's abilities. The rights aren't enough.

Price-competitive with AMD and Intel? Sure. Abilities? There is no magic; the Telum and Power11 are each as complicated as something like Epyc, and the former has both a longer and taller compatibility totem pole than x86.

Anyway, this post was never about building ARM or x86 CPUs; the point is they could have done a z/Architecture fast path for x86 for "free", so there is some other strategy at play in considering doing it with ARM.


> Is there really SW that's limited to (Linux) ARM and not x86?

MacOS? (hides)


IIRC Qualcomm smartphone SoCs have always run some kind of hypervisor; I believe it's to allow partitioning of the CPU cores with the modem/DSP.

They used to (mid-late 2000s) use an L4 derivative ("REX"?), with the more recent chips (including the 'X' series for PCs) using their homegrown "Gunyah" hypervisor (https://github.com/quic/gunyah-hypervisor)


It would be interesting if you know of any evidence of this being an architectural hardware limitation. Though of course the practical difference may be small if the DRM bootloader enforces loading the hypervisor through cryptographic checksums. But I guess if a customer asked, they would allow it and the hardware could do it.


I have a similar Windows Arm64 machine (Lenovo "IdeaPad 5 Slim"), RDP into it works OK.

There is one issue I ran into that I haven't on my (self-built) Windows desktops: when Windows Hello (fingerprint lock) is enabled, and neither machine is on a Windows domain, the RDP client will just refuse to authenticate.

I had to use a trick to "cache" the password on the "server" end first, see https://superuser.com/questions/1715525/how-to-login-windows...


Aside from the changing pint glass color and level, the Sky set-top box / decoder will also overlay the subscription ID at random intervals and locations.

I don't know if Sky does it, but Foxtel in Australia, in addition to the pint glass watermark, has a separate set of channels for public venues, which have different ad breaks/content to residential subscriptions. (https://www.foxtelmedia.com.au/foxtel-media-network/fox-venu...)


The Cypress chipset used to be a Broadcom product. Broadcom decided to sell the 'IoT' side of their WiFi/BT product line.


Hi, I'm the person behind the Ten64.

Ten64s have been shipping for a while now, though you are best to ask in our support forum: https://forum.traverse.com.au/ . We haven't posted much on Crowd Supply as it's a very manual process to get stuff up on there.

I'm not too familiar with TrustZone, but I'm not aware of any limitations in the secure world. I haven't tried OP-TEE or any similar secure world firmware, simply because no one has asked for it.

You can see all our firmware components here: https://gitlab.com/traversetech/ls1088firmware


Thanks for posting. I'm looking to use Arm DRTM, implemented in TrustZone as part of Arm Trusted Firmware. I will follow up in your support forum.

https://www.youtube.com/watch?v=xZoCtNV8Qs0&t=9080s

https://documentation-service.arm.com/static/620e0f9b0ca3057...

https://review.trustedfirmware.org/q/topic:%22mb%252Fdrtm-pr...


When we did the research for our network appliance customers, we were very interested in NXP. Have you tried to buy an Application Solutions Kit from NXP? We faced the problem of not achieving spec performance using NXP reference boards. After reaching out to NXP, support informed us that we should buy the ASK, which would solve our problems. Unfortunately, it was beyond our research budget.


No, we haven't bought any ASKs. We have worked with them on a DPDK project and they were quite helpful when it came to debugging difficult bugs with it.

Improving the routing performance in Linux is near the top of my TODO list, however. XDP is one candidate (see https://forum.traverse.com.au/t/vyos-build-my-repo/181/5 for some results) and using the LS1088's AIOP is another possibility.


Linux is important, but FreeBSD is the key to the hearts of the majority of DIY firewall builders. I'm a huge fan of your project and looking for opportunities to get hands-on experience with it.

I see that FreeBSD support is going in a good direction:

https://forum.traverse.com.au/t/freebsd-preview-for-ten64/17...

Are there any performance reports for FreeBSD? Especially VPN/wireguard bandwidth would be interesting.


dsl (Dmitry) is the main developer behind the DPAA2 drivers for FreeBSD, and he's made a fantastic effort so far. Bjoern (bz) and I have also written bits of it.

The performance has improved compared to how it was a year ago (e.g. struggling to get 400 Mbps throughput), but there are some severe issues in -CURRENT :( I believe dsl is trying to get them fixed by 14.0-RELEASE.


It's part of what's called the Universal Service Obligation (USO), which Telstra (the dominant carrier) is responsible for delivering:

https://www.infrastructure.gov.au/media-technology-communica...

The money comes from a levy on telecommunications carriers (Telstra included, but also most of its competitors). There was a bit of conjecture in the media that the "free" payphones were really being paid for by the other telcos, once the money Telstra contributes is discounted.


In Australia (Afterpay's home market), credit card merchant fees are tightly regulated (<1% for most transactions, AFAIK), so 'cash-back' offers like in the US don't exist.

Airline/frequent flyer point conversions similarly got nerfed, so much so that it's better to cycle cards (creditworthiness permitting) every year to get a point bonus up front (e.g. 200K points for spending $X000 in the first 3 months) than to try to earn that amount.


That's not the main benefit of using a credit card. The primary benefit of a CC is decoupling it from your bank account - if there is an issue, the CC company will fight the charge for you and will almost always take your side.

I bought a pair of speakers costing $2500. I tried to reach the company for a return but no one would answer. No phone, no email, nada. The 1-month return deadline was coming up, so I called American Express. Within literally 10 minutes (no wait at all), I had filed a claim and was done with it. 60 days later, I got the money back. Turns out they couldn't reach them either.

Credit cards are more consumer-centric than merchant-centric, and they're a huge check on otherwise rampant asshole behavior from merchants. I also run an online shop, and it's a pain in the neck when someone files a chargeback; we just consider it part of the cost of doing business.

In the last 20 or so years of using an AMEX card, I've filed half a dozen chargebacks. It's an amazing company, better than any other CC company in my opinion. They also have some of the highest fees, and it shows why - their customer service is absolutely top notch - red carpet, the works.


Then why not use a debit card? Chargebacks are also available, and the risk of ending up in a spiral of debt is removed.

(Yes, it is a real risk. Yes, I know that you and I are not going to be so stupid. But I don't trust that belief.)


If someone fraudulently or mistakenly charges my debit card $5,000, that's $5,000 gone from my bank account until I get it fixed.

If someone charges $5,000 to my credit card in error, I've got 30+ days to resolve it with zero financial impact to me. If it doesn't get resolved in that time, I have the option to not pay the bill (or that portion of the bill), knowing that if the error is fixed, the penalties and interest will be waived.


Have you experienced that? There are limits on charges on debit cards, usually ~2000 euros. Credit cards are higher, but then again they also have the credit limit and won't go beyond that in most cases.


I haven't experienced it personally, but I know someone who used a debit card for their utilities on auto-pay. The gas company said they damaged their meter, so it replaced it and auto-debited their account for $2,500. They had to borrow money from friends until it got fixed and the money was refunded.


> If someone fraudulently or mistakenly charges my debit card $5,000

In my case, payment will fail due to spending limits (though repeated small charges would go through).


In my experience, chargebacks with a debit card (in Europe) are a pain in the ass.

The banks themselves seem to be against it, so they made me write emails explaining the situation, give the seller more time (1 month wasn't enough), and then wait another few weeks for processing.

With a credit card (British), they just did it -_-

And let me mention how a SEPA payment means your money is gone unless the seller themselves agrees to refund.


Because at least if it goes wrong, the money being used isn't actually your money until you pay the CC bill. That $2500 is still sitting in your account.

Spiral of debt is only a risk if you have no discipline.

For those of us that do have discipline, CC's are great.


In the US, debit cards have terrible chargeback protections, and even then, if the merchant doesn't follow through, you're the one who has to fight for it, whereas with a CC it'll be them doing the fighting.


That customer service is also paid for by the credit card fees they charge merchants in the United States.


Nice writeup.

Reminds me of a famous 'intentional buffer overflow' used by AOL in 1999 to determine whether the user was running a 'genuine' AIM client; this came after Microsoft had added AIM support to MSN Messenger.

https://www.geoffchappell.com/notes/security/aim/index.htm


Intel tried this with the Quark core, which was discontinued in 2019:

Compare the Quark Core Block Diagram: https://www.intel.com/content/dam/support/us/en/documents/pr...

with that of the 486 (Figure 3-2): http://datasheets.chipdb.org/Intel/x86/486/manuals/27302101....


A lot of the Quark documentation was basically copy-pasted from the 486 documentation, with "486" replaced by "Quark". There was some rather unusual phrasing and some anachronisms as a result.

