Mon Jan 13 16:45:42 CET 2020


The code above uses a fairly dumb data structure which is memory
efficient but makes scheduling slow.  To accelerate, an index can be
built for the channel->task map.

For now, assume that the hot/cold list structure based on the
next_task pointer can be kept.

What is needed?

- For each channel:
  - a list of blocked readers
  - a list of blocked writers

  It is generally not straightforward to determine the size of these
  lists.  The typical case, however, is a single waiter, so optimize
  for that.

- A way to easily remove a task from these lists.  This can be done
  by following the task->channel map, and then searching the
  channel's waiter lists.

- The cold list can probably be removed.  Only a hot list is
  necessary to keep track of newly blocked tasks.
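The requirements above can be sketched as a handful of structs.  This is only a sketch under the assumptions stated so far; the names (wait_cell, blocked_on, and so on) are hypothetical, not taken from the existing code.

```c
#include <assert.h>
#include <stddef.h>

struct task;

/* One cell per (task, channel) wait entry, drawn from a shared pool.
 * A channel's waiter list is a singly linked chain of these. */
struct wait_cell {
    struct task      *task;   /* the blocked task */
    struct wait_cell *next;   /* next waiter on the same channel */
};

struct channel {
    struct wait_cell *readers;  /* blocked readers, typically 0 or 1 */
    struct wait_cell *writers;  /* blocked writers, typically 0 or 1 */
};

struct task {
    struct task    *next_task;  /* hot list link, kept from the old design */
    struct channel *blocked_on; /* the task->channel map used for removal */
};
```

Removal then follows blocked_on to the channel and walks the (usually one-element) reader or writer chain.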

The sizes of these lists are not easily determined, so it seems
best to place an upper bound on the number of list slots, and use a
shared list cell pool.

Also, if the number of tasks is relatively small, indirection could
be used.  Since this requires a base table, it seems to add a lot of
complexity for not a whole lot of gain.  Revisit the idea later.

The upper bound on the number of cells needed is determined by the
maximum select count for each task: for each select entry, the task
is added to the channel's list.
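The blocking path could then look roughly like this: one cell is taken from the shared pool per select entry and prepended to the channel's waiter list.  A minimal sketch with hypothetical names and a hard-coded pool bound; the real bound would be Tasks * MaxSelect as derived above.

```c
#include <assert.h>
#include <stddef.h>

struct cell { void *task; struct cell *next; };
struct chan { struct cell *readers, *writers; };

enum { POOL_CELLS = 64 };        /* stand-in for Tasks * MaxSelect */
static struct cell pool[POOL_CELLS];
static struct cell *free_list;

static void pool_init(void) {
    free_list = NULL;
    for (size_t i = 0; i < POOL_CELLS; i++) {
        pool[i].next = free_list;   /* chain all cells onto the free list */
        free_list = &pool[i];
    }
}

/* Block `task` on one select entry: take a cell from the pool and
 * push it onto the channel's reader list.  Called once per entry. */
static void block_reader(struct chan *c, void *task) {
    struct cell *cl = free_list;
    assert(cl != NULL);             /* cannot fail if the bound holds */
    free_list = cl->next;
    cl->task = task;
    cl->next = c->readers;          /* prepend: order does not matter */
    c->readers = cl;
}
```

A writer-side variant would be identical with `writers` in place of `readers`.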

So total number of extra bytes needed:

- Tasks * MaxSelect * 8
- Channels * 2 * 4

It does seem to create quite a bit of overhead.  Suppose IDs can be
used for task lists and tasks instead of pointers; the count then
becomes:

- Tasks * 5  (id -> pointer table + backpointer in task struct)
- Channels * 2 * 1
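To make the comparison concrete, here is a sketch plugging hypothetical example numbers (64 tasks, MaxSelect of 4, 32 channels) into the two counts above; the numbers are illustrative only.

```c
#include <assert.h>

/* Pointer variant: Tasks * MaxSelect cells of 8 bytes each, plus two
 * 4-byte list heads per channel. */
static int pointer_cost(int tasks, int max_select, int channels) {
    return tasks * max_select * 8 + channels * 2 * 4;
}

/* ID variant: 5 bytes per task (id -> pointer table entry plus
 * backpointer), plus two 1-byte list heads per channel. */
static int id_cost(int tasks, int channels) {
    return tasks * 5 + channels * 2 * 1;
}
```

With those numbers the pointer variant costs 64*4*8 + 32*2*4 = 2304 bytes and the ID variant 64*5 + 32*2 = 384 bytes, a factor of six.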

This requires that the total number of active selects also fits in 8
bits.

Re-arranging tasks so they are in a single array avoids the back
pointer, but requires a secondary pointer.

The basic conclusion I come to is that keeping it simple is probably
going to be the way to go: pick speed or pick memory efficiency.

Add a cell pool that just has pointers.
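One way to read "just has pointers": each cell is a single pointer slot, and while a cell is free that slot doubles as the free-list link.  A sketch of the allocate/release mechanics only, with hypothetical names; in-use cells hold the task pointer, so list linkage would have to live elsewhere (e.g. cell index arithmetic or a parallel next array).

```c
#include <assert.h>
#include <stddef.h>

enum { PTR_POOL_SIZE = 64 };       /* upper bound on list slots */
static void *ptr_pool[PTR_POOL_SIZE];
static void **ptr_free_head;

static void ptr_pool_init(void) {
    for (size_t i = 0; i + 1 < PTR_POOL_SIZE; i++)
        ptr_pool[i] = &ptr_pool[i + 1];  /* free cell holds next free cell */
    ptr_pool[PTR_POOL_SIZE - 1] = NULL;
    ptr_free_head = &ptr_pool[0];
}

static void **cell_alloc(void) {
    void **c = ptr_free_head;
    if (c != NULL)
        ptr_free_head = (void **)*c;     /* pop from the free list */
    return c;                            /* NULL when the pool is empty */
}

static void cell_free(void **c) {
    *c = ptr_free_head;                  /* push back onto the free list */
    ptr_free_head = c;
}
```

Total extra memory is then exactly PTR_POOL_SIZE pointers, with no per-cell header.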