debugga 2 days ago [-]
Clean-room, portable C++17 implementation of the PlanB IPv6 LPM algorithm.
Includes:
- AVX-512 SIMD path + scalar fallback
- Wait-free lookups with rebuild-and-swap dynamic FIB
- Benchmarks on synthetic data and real RIPE RIS BGP (~254K prefixes)
Interesting result: on real BGP + uniform random lookups, a plain Patricia trie can sometimes match or beat the SIMD tree due to cache locality and early exits.
Would love feedback, especially comparisons with PopTrie / CP-Trie.
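For readers unfamiliar with the problem being benchmarked: longest-prefix match walks the address bits, remembering the deepest prefix that matched so far. A minimal per-bit trie (a sketch of the semantics only, not the PlanB structure or the repo's code) shows the early-exit behavior the Patricia result hinges on:

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <memory>
#include <optional>

// Minimal per-bit trie for IPv6 longest-prefix match. On real BGP tables
// most lookups hit a missing child within a few levels and exit early,
// which is the effect the submission describes.
struct TrieNode {
    std::unique_ptr<TrieNode> child[2];
    std::optional<uint32_t> next_hop;  // set if a prefix ends here
};

class BitTrie {
public:
    // addr: 16-byte IPv6 address; len: prefix length in bits (0..128)
    void insert(const std::array<uint8_t, 16>& addr, int len, uint32_t hop) {
        TrieNode* n = &root_;
        for (int i = 0; i < len; ++i) {
            int b = bit(addr, i);
            if (!n->child[b]) n->child[b] = std::make_unique<TrieNode>();
            n = n->child[b].get();
        }
        n->next_hop = hop;
    }

    // Walk down, remembering the deepest matching prefix; stop at the
    // first missing child (the early exit).
    std::optional<uint32_t> lookup(const std::array<uint8_t, 16>& addr) const {
        std::optional<uint32_t> best;
        const TrieNode* n = &root_;
        for (int i = 0; i < 128; ++i) {
            if (n->next_hop) best = n->next_hop;
            const TrieNode* next = n->child[bit(addr, i)].get();
            if (!next) return best;
            n = next;
        }
        if (n->next_hop) best = n->next_hop;
        return best;
    }

private:
    static int bit(const std::array<uint8_t, 16>& a, int i) {
        return (a[i / 8] >> (7 - i % 8)) & 1;
    }
    TrieNode root_;
};
```

A real Patricia trie additionally collapses single-child chains, which is what keeps hot paths short and cache-resident.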
talsania 2 days ago [-]
254K prefixes with a skewed distribution means early exits dominate, and no SIMD throughput advantage survives a branch that terminates at depth 3. The interesting edge case is deaggregation events, where prefix counts spike transiently and the rebuild-and-swap FIB has to absorb a table that's temporarily 2x its normal size.
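The absorption cost being discussed can be made concrete. A hedged sketch of a rebuild-and-swap FIB (my illustration, not the repo's code): readers take one atomic snapshot load and never block, while the writer rebuilds a fresh table on the side and publishes it with a pointer swap. Peak memory is old table plus new table, which is exactly what a 2x deaggregation spike stresses:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <memory>
#include <unordered_map>
#include <utility>
#include <vector>

// Toy rebuild-and-swap FIB. The exact-match map stands in for the real
// LPM structure; the concurrency pattern is the point.
struct FibSnapshot {
    std::unordered_map<uint64_t, uint32_t> routes;
};

class SwapFib {
public:
    SwapFib() : current_(std::make_shared<FibSnapshot>()) {}

    // Reader path: one atomic load, then read-only access. Wait-free.
    uint32_t lookup(uint64_t key) const {
        std::shared_ptr<const FibSnapshot> snap = std::atomic_load(&current_);
        auto it = snap->routes.find(key);
        return it == snap->routes.end() ? 0 : it->second;
    }

    // Writer path: rebuild from scratch, then swap. During the rebuild both
    // tables are live, so peak memory is old + new. The old snapshot is
    // freed when the last in-flight reader drops its shared_ptr.
    void rebuild(const std::vector<std::pair<uint64_t, uint32_t>>& updates) {
        auto next = std::make_shared<FibSnapshot>();
        for (const auto& [key, hop] : updates) next->routes[key] = hop;
        std::atomic_store(&current_,
                          std::shared_ptr<const FibSnapshot>(next));
    }

private:
    std::shared_ptr<const FibSnapshot> current_;
};
```

(`std::atomic_load`/`atomic_store` on `shared_ptr` are the C++17 spelling; C++20 would use `std::atomic<std::shared_ptr<T>>`.)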
Sesse__ 2 days ago [-]
The obvious question, I guess: How much faster are you than whatever is in the Linux kernel's FIB? (Although I assume they need RCU overhead and such. I have no idea what it all looks like internally.)
I use Wireguard rarely enough that the AllowedIPs concept gets me every time. It gets easier when I replace it mentally with “Route=” :-)
zx2c4 2 days ago [-]
It's like a routing table on the way out and an ACL on the way in. Maybe an easier way to think of it.
Sesse__ 2 days ago [-]
Sure, but how does this differ from a routing table with RPF (which is default in Linux already)?
zx2c4 2 days ago [-]
It's associated per-peer, so it assures a cryptographic mapping between src ip and public key.
newman314 2 days ago [-]
I wonder if this would port nicely over to rustybgp.
matt-p 1 day ago [-]
This is cool! In my experience the single most important factor for performance is being able to hold the FIB in CPU cache, and my reading of this is that at >250K prefixes Patricia may use less space? Did you find this?
E.g. with a CPU with, say, 256MB of L3 cache, lookups are many times more performant because you rarely (or never) need to go to RAM. Hot top levels in L2 > hot path in local CCD L3 > rest somewhere in socket L3 > DRAM misses (ideally almost 0)
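A back-of-envelope check makes the "does it fit in cache?" question concrete. All numbers below are illustrative assumptions on my part (node sizes and node-per-prefix ratios are not measurements of this repo):

```cpp
#include <cassert>
#include <cstdint>

// Rough footprint estimates for ~254K prefixes. Assumed: a Patricia trie
// with ~2 internal nodes per prefix at ~32 B/node, vs. a wide SIMD tree
// at ~64 B per fanout-8 node with more nodes due to lower occupancy.
constexpr uint64_t kPrefixes = 254'000;

constexpr uint64_t patricia_bytes(uint64_t prefixes) {
    return prefixes * 2 * 32;   // ~16 MB under these assumptions
}

constexpr uint64_t simd_tree_bytes(uint64_t prefixes) {
    return prefixes * 3 * 64;   // ~49 MB under these assumptions
}

constexpr bool fits(uint64_t bytes, uint64_t cache_bytes) {
    return bytes <= cache_bytes;
}
```

Under these (hypothetical) constants, both structures fit a 256MB L3, but only the Patricia trie fits a more common 32MB L3, which is consistent with the cache-locality result in the submission.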
throwaway81523 2 days ago [-]
IPv6 longest-prefix-match (LPM).
ozgrakkurt 2 days ago [-]
Why detect AVX-512 in the build system instead of using #ifdef?
ozgrakkurt 2 days ago [-]
It actually does detect it using #ifdef [0], but uses CMake to avoid unconditionally passing "-mavx512"-style flags to the compiler [1].
Why? It's 500 lines of pretty basic code. If you don't like C++ you can port it to any language, assuming you understand what it does.
It does look a bit AI generated though
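The reason both layers exist: `__AVX512F__` is only defined when the compiler was *invoked* with `-mavx512f` (which is what the CMake logic decides), while whether the *running* CPU supports it is a separate runtime question. A common dispatch pattern, sketched here as an illustration rather than the repo's actual code:

```cpp
#include <cassert>
#include <cstdint>

#if defined(__AVX512F__)
#include <immintrin.h>   // intrinsics only compile with -mavx512f
#endif

#if defined(__x86_64__) || defined(_M_X64)
#include <cpuid.h>       // GCC/Clang; MSVC would use __cpuidex instead
inline bool cpu_has_avx512f() {
    unsigned a, b, c, d;
    if (!__get_cpuid_count(7, 0, &a, &b, &c, &d)) return false;
    return (b >> 16) & 1;  // CPUID.(EAX=7,ECX=0):EBX bit 16 = AVX512F
}
#else
inline bool cpu_has_avx512f() { return false; }
#endif

// Hypothetical entry point: take the SIMD path only when it was both
// compiled in AND the CPU supports it; otherwise fall back to scalar.
inline uint32_t lookup(uint32_t key) {
#if defined(__AVX512F__)
    if (cpu_has_avx512f()) {
        // ... vectorized path would go here ...
    }
#endif
    // Scalar fallback always present, so the binary runs on any CPU.
    return key;  // placeholder body for the sketch
}
```

Doing the flag decision in CMake keeps the rest of the translation units compiled without AVX-512 codegen, so the binary doesn't fault on older CPUs.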
simoncion 2 days ago [-]
> It does look a bit AI generated though
These days, when I hear a project owner/manager describe the project as a "clean room reimplementation", I expect that they got an LLM [0] to extrude it. This expectation will not always be correct, but it'll be correct more likely than not.
[0] ...whose "training" data almost certainly contains at least one implementation of whatever it is that it's being instructed to extrude...
pmarreck 1 day ago [-]
As far as LLM-produced correctness goes, it all comes down to the controls that have been put in place (how valid the tests are, does it have a microbenchmark suite, does it have memory leak detection, etc.)
simoncion 1 day ago [-]
There's much more to it than that. One unmentioned aspect is "Has the tooling actually tested the extruded code, or has it bypassed the tests and claimed compliance?". Another is "Has a human carefully gone over the extruded product to ensure that it's fit for purpose, contains no consequential bugs, and that the test suite tests all of the things that matter?".
There's also the matter of copyright laundering and the still-unsettled issue of license laundering, but I understand that a very vocal subset of programmers and tech management gives zero shit about those sorts of things. [0]
[0] I would argue that -most of the time- a program that you're not legally permitted to run (or distribute to others, if your intention was to distribute that program) is just as incorrect as one that produces the wrong output. If a program-extrusion tool intermittently produces programs that you're not permitted to distribute, then that tool is broken. [1]
[1] For those with sensitive knees: do note that I said "the still-unsettled issue of license laundering" in my last paragraph. Footnote zero is talking about a possible future where it is determined that the mere act of running gobs of code through an LLM does not mean that the output of that LLM is not a derived work of the code the tool was "trained" on. Perhaps license-washing will end up being legal, but I don't see Google, Microsoft, and other tech megacorps being very happy about the possibility of someone being totally free to run their cash cow codebases through an LLM, produce a good-enough "reimplementation", and stand up a competitor business on the cheap [2] by bypassing the squillions of dollars in R&D costs needed to produce those cash cow codebases.
[2] ...or simply release the code as Free Software...
sylware 16 hours ago [-]
If so, I wonder how good an LLM C++ port to plain and simple C would look.
It seems there is a signal (here on HN) that coding LLMs would be really good at mass-porting C++ code to plain and simple C to remove the C++ kludge dependency.
sylware 16 hours ago [-]
Should have been plain and simple C, or even assembly in the first place.
alex_duf 1 day ago [-]
Side note, but I hate that we've reached the point where we don't know what's written by a human and what's written by an LLM.
That goes for a lot of comments here accusing each other of being a bot.
I feel like we've seen internet trust at its highest, and it can only go one way now.
[0] https://github.com/esutcu/planb-lpm/blob/748d19d5fbd945cefa3...
[1] https://github.com/esutcu/planb-lpm/blob/748d19d5fbd945cefa3...
Alternatively, if you don't want a dynamic FANOUT, you keep FANOUT=8 (or another constant) and do a stripmining loop.
This will take FANOUT/VLEN iterations and the branches will be almost always predicted. If you know FANOUT is always 8 and you'll never want to change it, you could alternatively select the optimal LMUL.
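The stripmining idea can be sketched portably. Here VLEN stands in for the hardware vector length (e.g. the RVV `vl`), and the "one vector compare" per strip is written as a scalar inner loop for illustration; this is my sketch, not code from the repo:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

constexpr std::size_t FANOUT = 8;

// Search a node's FANOUT keys in chunks of vlen lanes. The outer loop runs
// FANOUT/vlen times (rounded up), a small, near-constant trip count, so
// its branch is almost always predicted correctly.
inline int find_child(const uint32_t (&keys)[FANOUT], uint32_t needle,
                      std::size_t vlen) {
    for (std::size_t base = 0; base < FANOUT; base += vlen) {   // strip loop
        std::size_t lanes = (FANOUT - base < vlen) ? FANOUT - base : vlen;
        for (std::size_t lane = 0; lane < lanes; ++lane) {      // one "vector op"
            if (keys[base + lane] == needle)
                return static_cast<int>(base + lane);
        }
    }
    return -1;  // no matching child
}
```

With a real vector ISA the inner loop collapses to one masked compare plus a find-first-set on the mask; with a fixed FANOUT=8 and a wide enough LMUL/VLEN it becomes a single iteration with no strip loop at all.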