Bufferalways onlytome #38

Open

troelsjessen wants to merge 9 commits into master from bufferalways_onlytome

Conversation

@troelsjessen

Most commits will be squashed prior to merge

A few commits are not directly related to this improvement; those will go in as is.

- Replacing sizeof(...) macros with the defines that are already there and used in certain cases
- This is to align with the CAN implementation
- The functionality is used to stall a module in case it is running out of buffers, potentially causing a reboot. Nevertheless, running out of buffers is a valid situation if a module is routing between two physical networks with different capabilities (e.g. CAN vs ETH).
- Currently implemented for CAN, ETH and ZMQ
- There is no need to try handling an ethernet frame that does not contain a valid CSP header
@troelsjessen
Author

Johan states (translated from Danish):
I think we should implement the function in the router, so that when packets come in and buffree is below a certain number, say 5 or 10, the router stops routing and only accepts packets "to me". The rest are discarded by the router. - This has the advantage that it automatically works for all interfaces, and we avoid doing "layer 3 inspection" at layer 2.

@troelsjessen
Author

I just tried Johan's approach (which is significantly more elegant), but unfortunately it fails, as most TX functions are synchronous and block the router thread.
This means that several packets can come in through high-speed interfaces (ZMQ, ETH, SpW etc.), which typically have dedicated RX threads or ISR routines, exhausting the buffer queue while the router is waiting to finish transmission of a single CAN packet.
In the current implementation, buffer_get_always is actually not used for ZMQ and ETH (which is a mistake), but packets can still come through CAN and exhaust the buffer queue.

@johandc

johandc commented Mar 10, 2026

Aha! That in itself is an issue, so maybe we should separate the routing queue and the incoming queue quite early, and process the TX of routed traffic separately from incoming traffic. Incoming traffic probably doesn't even need a task to process it, because it can usually be placed into the correct queue immediately.

@troelsjessen
Author

I think Johan's latest comment reaches beyond the current requirements. As Johan states, most interface types rely on a task for handling incoming messages, if not handled in an ISR.
Nevertheless, TX is currently carried out simply by the router calling a blocking transmit function that goes into the interface and further into the driver layer. Demanding these to be asynchronous procedures seems like a significant change to me, likely requiring additional resources from modules hosting a CSP stack.

@johandc

johandc commented Mar 13, 2026

So what I am suggesting is to drop them before they reach the TX queue, in the csp_qfifo_write call.

@troelsjessen
Author

So what I am suggesting is to drop them before they reach the TX queue, in the csp_qfifo_write call.

Definitely better... But still: buffers are allocated when reception of a new packet (not to me) starts, and only deallocated once the full packet has been received. With a low amount of free buffers, that deallocation happens too late.

Imagine large packets arriving simultaneously on multiple CAN interfaces, possibly interrupted by higher-priority packets coming from other nodes (i.e. GNC). We will easily be handling more than two incoming packets in parallel.

@johandc

johandc commented Mar 23, 2026

I think it's only the smaller CPU, and only when set to 2k packets, that we are limited to 7 buffers. That is way too low anyway to perform any meaningful network bridging. So I prefer to keep the driver as simple as possible: assemble the entire packet, and then throw it away in the qfifo input. Actually, this aligns with the goal of direct-to-task delivery, so the qfifo becomes more or less just a forwarding queue.

@troelsjessen
Author

Ping me when you are in the office, then we can set up a test

