Hacker News

It's very much "need" in this case. This was considered at length.

To be clear, we are talking about exotic custom hardware that has little in common with the average x86/x64 desktop.

For something like a 24-port 10 GbE switch, the platform might have a gigabyte of off-chip DRAM but only a megabyte of on-chip SRAM. Asking for 16 KiB of SRAM per port means 384 KiB across 24 ports, about 37% of that capacity, which is badly needed for other things.

The other complicating factor is that the PTP egress timestamp and update pipeline needs to be predictable down to the clock cycle, so DRAM isn't an option.

Most PTP packets are small, yes, but others have a lot of tags and metadata. They may also be tucked between other packets. To be fully compliant, we have to handle the worst case, which means a full-size buffer for a jumbo frame.

And yes, we did consider RFC1141 and RFC1624. We use those when we can, but unfortunately it's not possible in this case.
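For context, the incremental-update trick those RFCs describe looks roughly like this (a minimal Python sketch of RFC 1624 Eqn. 3; the 16-bit header words below are made up purely for illustration):

```python
# Sketch of RFC 1624 incremental Internet-checksum update:
#   HC' = ~(~HC + ~m + m')
# where HC is the old checksum, m the old 16-bit word, m' the new one.
# All arithmetic is 16-bit ones-complement.

def ones_complement_sum(words):
    """Fold a list of 16-bit words into a ones-complement sum."""
    total = sum(words)
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def checksum(words):
    """Full Internet checksum over 16-bit words."""
    return ~ones_complement_sum(words) & 0xFFFF

def checksum_update(hc, m_old, m_new):
    """RFC 1624 Eqn. 3: update hc when one word changes m_old -> m_new."""
    return ~ones_complement_sum([~hc & 0xFFFF, ~m_old & 0xFFFF, m_new]) & 0xFFFF

# Changing one word and updating incrementally matches a full recompute.
words = [0x4500, 0x0054, 0x1C46, 0x4000, 0x4001]   # illustrative values
hc = checksum(words)
words[3] = 0x4123                                   # e.g. a rewritten word
assert checksum_update(hc, 0x4000, 0x4123) == checksum(words)
```

The appeal is that only the old checksum and the changed word are needed, no full-packet buffer; the catch, per the comment above, is that it isn't always applicable.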

Say what you will about the rest of IPv6, but I am particularly salty about the UDP checksum requirement.



> To be fully compliant, we have to handle the worst case, which means a full-size buffer for a jumbo frame.

Well, fully compliant except for IPv6. If you said no jumbo frames for PTP, or no jumbo frames specifically for IPv6 PTP, then the extra buffer for PTP checksums only needs about 4% of your SRAM.
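Spelled out, the arithmetic both comments rely on (port count and SRAM size from the parent; 1518 bytes as the usual maximum standard Ethernet frame):

```python
# Checking the SRAM percentages quoted in the thread. The 24-port and
# 1 MiB figures come from the parent comment; frame sizes are the
# conventional Ethernet values.
SRAM = 1024 * 1024        # 1 MiB on-chip SRAM
PORTS = 24

jumbo_buf = 16 * 1024     # 16 KiB per-port buffer sized for jumbo frames
std_frame = 1518          # maximum standard (non-jumbo) Ethernet frame

print(f"{PORTS * jumbo_buf / SRAM:.1%}")   # 37.5% of SRAM
print(f"{PORTS * std_frame / SRAM:.1%}")   # 3.5%, i.e. the "4%" figure
```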

> They may also be tucked between other packets.

Does that matter? Let's say a particular PTP packet is 500 bytes. If there's a packet immediately after it, I would expect it to flow through the extra buffer like it's a 500-byte shift register.
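That intuition can be modeled in a few lines (a toy byte-level sketch, not anyone's actual pipeline; the buffer length, offsets, and one-byte "checksum" are all made up). Every byte is delayed by the same number of cycles whether it belongs to the PTP packet or the one behind it, and the checksum field can be patched in place the moment the packet's last byte enters, provided the packet is no longer than the delay line:

```python
# Toy delay-line model: bytes shift through a fixed-length buffer one
# per cycle. A running checksum is accumulated as packet bytes enter;
# when the last byte arrives, the checksum field is still inside the
# buffer (since pkt_len <= BUF_LEN) and gets patched before it exits.
from collections import deque

BUF_LEN = 600        # delay-line length in bytes (illustrative)
CSUM_OFF = 6         # offset of a 1-byte "checksum" field (illustrative)

def run(stream, pkt_start, pkt_len):
    buf = deque([0] * BUF_LEN, maxlen=BUF_LEN)
    out = []
    csum = 0
    for i, b in enumerate(stream):
        out.append(buf[0])            # oldest byte exits the delay line...
        buf.append(b)                 # ...newest enters (deque shifts left)
        if pkt_start <= i < pkt_start + pkt_len:
            csum = (csum + b) & 0xFF  # toy checksum over packet bytes
        if i == pkt_start + pkt_len - 1:
            # Last packet byte just entered; the checksum field sits
            # pkt_len - 1 - CSUM_OFF positions behind it in the buffer.
            buf[BUF_LEN - pkt_len + CSUM_OFF] = csum
    out.extend(buf)                   # drain the delay line
    return out[BUF_LEN:]              # drop the initial zero padding

# A 500-byte "packet" followed immediately by another packet's bytes:
stream = list(range(1, 101)) * 5 + [0xEE] * 50
result = run(stream, 0, 500)
```

In this model the trailing packet simply rides through behind the PTP packet unchanged, which is the shift-register behavior the question describes.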



