Sat Jan 18 15:28:17 CET 2020

CSP and interrupts: two levels are enough

To fit in the current setup, I'm tempted to run the CSP machine from
an interrupt, but that really doesn't work due to the need for
pre-emption.

The case of an interrupt-per-character on a UART is actually quite
common, and the processing will frequently be irregular: the
character that completes a message takes longer to process than one
that is just buffered.

So the general rule, sketched below:
- interrupt data goes into buffers
- the main loop uses WFI to wake up
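
A minimal sketch of that rule in C, assuming a Cortex-M style target
with CMSIS (__WFI() is the standard sleep intrinsic); the names
uart_rx_isr(), uart_read_byte() and handle_char() are hypothetical.
The ISR only moves bytes into a single-producer / single-consumer
ring buffer; the main loop sleeps and drains it.

  #include <stdint.h>

  extern uint8_t uart_read_byte(void);  /* hypothetical HAL call */
  extern void    handle_char(uint8_t);  /* may take arbitrarily long */

  /* SPSC ring buffer: the ISR only writes head, the main loop only
     writes tail, so no locking is needed. */
  #define BUF_SIZE 64  /* power of two */
  static volatile uint8_t  buf[BUF_SIZE];
  static volatile uint32_t head, tail;

  /* Interrupt context: tied to real time, so do the minimum. */
  void uart_rx_isr(void) {
      uint8_t c = uart_read_byte();
      uint32_t next = (head + 1) & (BUF_SIZE - 1);
      if (next != tail) {               /* drop the byte on overflow */
          buf[head] = c;
          head = next;
      }
  }

  /* Main loop context: time is relative here, only ordering
     matters.  A byte that arrives just before __WFI() simply waits
     in the buffer until the next interrupt wakes us again. */
  void main_loop(void) {
      for (;;) {
          __WFI();                      /* sleep until any interrupt */
          while (tail != head) {
              uint8_t c = buf[tail];
              tail = (tail + 1) & (BUF_SIZE - 1);
              handle_char(c);
          }
      }
  }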

The same issue happens on the FPGA.  The problem is time-division
multiplexing (TDM): one representation of the data requires frequent
small operations, another requires infrequent large operations.  This
decoupling is essential in computing in general: the "time warping"
is what makes things practical.

So I've identified two cases: character vs. data packet.  This can
likely grow larger as well, involving processes that operate on many
data packets, but those do not seem to need any special treatment.
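
The second level can then look like this, continuing the
hypothetical handle_char() from the sketch above.  Newline-terminated
framing is a made-up assumption, as is handle_packet(): most
characters are the frequent small operation, and the one that
completes a message triggers the infrequent large one.

  /* Second level: characters -> packets. */
  #define MSG_MAX 256
  static uint8_t  msg[MSG_MAX];
  static uint32_t msg_len;

  extern void handle_packet(const uint8_t *p, uint32_t len);

  void handle_char(uint8_t c) {
      if (c == '\n') {
          handle_packet(msg, msg_len);  /* infrequent, large */
          msg_len = 0;
      } else if (msg_len < MSG_MAX) {
          msg[msg_len++] = c;           /* frequent, small */
      }
  }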

Why does the interrupt need special attention?

Because it has a time base that doesn't stretch!

The interrupt is tied to real time.  Once buffered, time becomes
relative, i.e. at the fine grain, only causality matters, and the
relation to real time is there at a much coarser level.
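
If that coarse relation to real time still matters, the interrupt
can attach a timestamp while it is still on the real-time clock.  A
hypothetical sketch, with cycle_counter() standing in for e.g.
DWT->CYCCNT on a Cortex-M:

  #include <stdint.h>

  extern uint32_t cycle_counter(void);  /* hypothetical timer read */

  /* The buffer preserves only causal order; the timestamp, taken in
     interrupt context, re-anchors each item to real time later. */
  struct stamped {
      uint32_t t;  /* captured in the ISR, at fine-grained real time */
      uint8_t  c;  /* processed later, at coarse-grained leisure */
  };

  static struct stamped stamp(uint8_t c) {
      struct stamped s = { cycle_counter(), c };
      return s;
  }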

So one thing to take away is that the need for priorities ALWAYS has
to do with some externally imposed deadline.

Reading that back, it is rather obvious.  I just made a weird detour
to get to a very simple conclusion.




