Tue Feb 12 14:03:56 CET 2013

Possible bug

Currently, no distinction is made between distributions on 2 different
loop levels:

   for(i) { for(j) { a[i] } }

   for(i) { for(j) { a[j] } }

TODO: Make this all a bit more explicit.
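
To make the distinction concrete, here is a throwaway C sketch (NI, NJ
and the printouts are made up for illustration): in the first nest a[i]
is the same for every j, so it is only distributed over the outer
level, while in the second nest a[j] varies along the inner level.

    /* Illustration only: NI and NJ are made-up sizes.  Both nests read
       the same array, but distribute it over different loop levels. */
    #include <stdio.h>

    #define NI 2
    #define NJ 3

    int main(void) {
        int a[NJ] = {10, 20, 30};          /* large enough for both uses */

        /* a[i]: constant across the inner (j) loop. */
        for (int i = 0; i < NI; i++)
            for (int j = 0; j < NJ; j++)
                printf("outer: i=%d j=%d a[i]=%d\n", i, j, a[i]);

        /* a[j]: varies with the inner (j) loop, constant across i. */
        for (int i = 0; i < NI; i++)
            for (int j = 0; j < NJ; j++)
                printf("inner: i=%d j=%d a[j]=%d\n", i, j, a[j]);

        return 0;
    }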

A 'const' introduces a stride of 0 for that particular level, meaning
that the index at that level doesn't count.  See the difference between:

     a[ 100 * i + 1 * j ];

     x[   0 * i + 1 * j ];

Though this doesn't work for:

     y[ 100 * i + 0 * j ];

But the idea might be interesting, since the latter can be represented
by

     z[   1 * i + 0 * j ];

or by a stride sequence z:(1 0), compared to x:(0 1) and a:(100 1).
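
As a minimal sketch of that stride-sequence view (the helper `ref' and
the concrete sizes are assumptions, not existing code): an access is
the dot product of a stride vector with the loop indices, so a stride
of 0 at a level means the index at that level doesn't count.

    #include <stdio.h>

    /* offset = sum_k stride[k] * idx[k] */
    static int ref(const int *buf, const int *stride, const int *idx, int rank) {
        int offset = 0;
        for (int k = 0; k < rank; k++)
            offset += stride[k] * idx[k];
        return buf[offset];
    }

    int main(void) {
        int a[200], x[100], z[2];
        for (int k = 0; k < 200; k++) a[k] = k;  /* full 2D data, strides (100 1) */
        for (int k = 0; k < 100; k++) x[k] = k;  /* strides (0 1): const over i   */
        for (int k = 0; k <   2; k++) z[k] = k;  /* strides (1 0): const over j   */

        int sa[] = {100, 1}, sx[] = {0, 1}, sz[] = {1, 0};
        int idx[] = {1, 2};                      /* i = 1, j = 2 */

        printf("a: %d\n", ref(a, sa, idx, 2));   /* a[100*1 + 1*2] = 102 */
        printf("x: %d\n", ref(x, sx, idx, 2));   /* x[0*1   + 1*2] = 2   */
        printf("z: %d\n", ref(z, sz, idx, 2));   /* z[1*1   + 0*2] = 1   */
        return 0;
    }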

Another idea is to type these as arrays of dim 1, but that doesn't
capture the idea they get distributed.  The multiplication by zero
does that very well!

Basically, the type (1 0 2) says 2 things:
- It can be lifted to (1 ? 2)
- It has a memory representation of (1 2)

But the iteration pattern has strictly more information than the memory
representation.
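
To pin that down, a small sketch (encoding `?' as a WILD constant is an
assumption, not existing code): dropping the 0 entries of the iteration
type gives the memory representation, while lifting replaces each 0
with a size wildcard.

    #include <stdio.h>

    #define WILD (-1)   /* stands for '?': any size */

    /* Memory representation: keep only the non-zero (non-const) dims. */
    static int mem_shape(const int *type, int rank, int *out) {
        int n = 0;
        for (int k = 0; k < rank; k++)
            if (type[k] != 0) out[n++] = type[k];
        return n;
    }

    /* Lifting: a const (0) dimension becomes a wildcard of unknown size. */
    static void lift(const int *type, int rank, int *out) {
        for (int k = 0; k < rank; k++)
            out[k] = (type[k] == 0) ? WILD : type[k];
    }

    int main(void) {
        int type[] = {1, 0, 2}, mem[3], lifted[3];
        int n = mem_shape(type, 3, mem);         /* (1 2)   */
        lift(type, 3, lifted);                   /* (1 ? 2) */

        printf("memory:");
        for (int k = 0; k < n; k++) printf(" %d", mem[k]);
        printf("\nlifted:");
        for (int k = 0; k < 3; k++) {
            if (lifted[k] == WILD) printf(" ?");
            else printf(" %d", lifted[k]);
        }
        printf("\n");
        return 0;
    }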

So the trick is in the implementation of the `const' typing.  It
should give a particular "wildcard" array type that has no size, but
maps to a stride multiplier of 0, and can be unified with any size.
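
A sketch of what that unification could look like (again, WILD as the
encoding of the size-less wildcard is an assumption): the `const' type
has no size of its own, so it takes on whatever size it is unified
against, while two concrete sizes have to match.

    #include <stdio.h>

    #define WILD (-1)   /* the size-less wildcard */

    /* Returns the unified size, or -2 on a size mismatch (type error). */
    static int unify_size(int a, int b) {
        if (a == WILD) return b;     /* wildcard takes the other side's size */
        if (b == WILD) return a;
        return (a == b) ? a : -2;    /* two concrete sizes must be equal */
    }

    int main(void) {
        printf("%d\n", unify_size(WILD, 100));   /* -> 100 */
        printf("%d\n", unify_size(2, 2));        /* -> 2   */
        printf("%d\n", unify_size(2, 3));        /* -> -2: mismatch */
        return 0;
    }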


So what if vectored inputs are used in different configurations,
i.e. transposed?  Would that give type consistency?  Maybe this would
just be a typing trick, i.e. "casting" (A (B t)) to (B (A t)) by
flipping indices.
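
A sketch of that index-flipping cast (buffer layout and names are made
up): transposing does not move any data; swapping the per-level strides
of the view turns an (A (B t)) access into a (B (A t)) access over the
same memory.

    #include <stdio.h>

    #define A 2
    #define B 3

    /* Generic 2D read: element [p][q] of a view with strides (sp sq). */
    static int ref2(const int *buf, int sp, int sq, int p, int q) {
        return buf[sp * p + sq * q];
    }

    int main(void) {
        int buf[A * B];
        for (int k = 0; k < A * B; k++) buf[k] = k;

        /* A x B view, row-major: strides (B 1). */
        for (int i = 0; i < A; i++)
            for (int j = 0; j < B; j++)
                printf("m[%d][%d] = %d\n", i, j, ref2(buf, B, 1, i, j));

        /* B x A transposed view of the same buffer: strides flipped to (1 B). */
        for (int j = 0; j < B; j++)
            for (int i = 0; i < A; i++)
                printf("t[%d][%d] = %d\n", j, i, ref2(buf, 1, B, j, i));

        return 0;
    }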



