Sun Aug 24 22:45:11 CEST 2008
It is easy to implement a stack machine quasi-optimally on a low-end
8-bit machine, the PIC18 being the canonical example. For RISC
architectures, some whole-program analysis is probably in order to
optimize register usage. Is it worth it to keep that road open? In
other words: should Staapl aim at broad-spectrum complexity, or does a
C backend suffice?
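To make the "stack machine" side of that question concrete, here is a minimal sketch of the kind of machine under discussion: a Forth-style VM with a data stack, interpreted instruction by instruction. The opcode names and encoding are invented for illustration; they are not Staapl's actual instruction set.

```python
def run(program):
    """Interpret a list of (op, arg) pairs on a data stack."""
    ds = []          # data stack
    pc = 0           # program counter
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "lit":              # push a literal
            ds.append(arg)
        elif op == "add":            # replace top two items with their sum
            b, a = ds.pop(), ds.pop()
            ds.append(a + b)
        elif op == "dup":            # duplicate top of stack
            ds.append(ds[-1])
        elif op == "jz":             # branch to arg if top of stack is zero
            if ds.pop() == 0:
                pc = arg
    return ds

# 1 2 + dup  ->  [3, 3]
print(run([("lit", 1), ("lit", 2), ("add", None), ("dup", None)]))
```

On a PIC18 each of these cases maps to a handful of native instructions, which is why a quasi-optimal implementation is easy there; on a register-rich RISC, the stack traffic is what the whole-program analysis would have to eliminate.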
There are a couple of things to distinguish (the same applies to 16 vs. 32 bit):
* ease of porting 8-bit apps to 16-bit cores.
* ease of introducing a 'data doubler' for 8-bit cores.
* data-flow analysis and register allocation for DSP/RISC cores.
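The 'data doubler' point can be sketched as a lowering pass: a 16-bit operation is expanded into two 8-bit operations with an explicit carry, the way a compiler would expand one wide op into the narrow ops an 8-bit core provides. The helper names here (add8, add16) are illustrative, not Staapl's.

```python
def add8(a, b, carry_in=0):
    """8-bit add primitive; returns (result, carry_out)."""
    s = a + b + carry_in
    return s & 0xFF, s >> 8

def add16(a, b):
    """16-bit add synthesized from two 8-bit adds (low byte first)."""
    lo, c = add8(a & 0xFF, b & 0xFF)        # low bytes, produce carry
    hi, _ = add8(a >> 8, b >> 8, c)         # high bytes, consume carry
    return (hi << 8) | lo

print(hex(add16(0x01FF, 0x0001)))  # -> 0x200
```

The same expansion scheme applies to subtraction, shifts, and comparisons; the pass only needs to know each primitive's carry behavior.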
What about ease of porting? If it is possible to define chip targets
in a way that allows static checking and possibly derivation of
optimization rules, a lot could be gained. Is a unified assembler /
simulator feasible? For processor cores this doesn't seem so
difficult: once, say, three arbitrary chips are implemented,
generalizing from them should be straightforward.
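The unified assembler / simulator idea can be sketched as one declarative table per chip that describes each instruction's encoding and semantics, with both tools derived from the same table. The table format below is invented for illustration; movlw/addlw are PIC18-style literal ops, but a real port would fill in a vendor's full ISA.

```python
SPEC = {
    # mnemonic: (opcode byte, semantics as a function of (state, literal))
    "movlw": (0x0E, lambda st, k: st.__setitem__("W", k & 0xFF)),
    "addlw": (0x0F, lambda st, k: st.__setitem__("W", (st["W"] + k) & 0xFF)),
}

def assemble(src):
    """Assemble '<mnemonic> <literal>' lines into (opcode, operand) words."""
    out = []
    for line in src:
        mnem, k = line.split()
        out.append((SPEC[mnem][0], int(k, 0)))
    return out

def simulate(words):
    """Run assembled words against the same spec; return the machine state."""
    state = {"W": 0}                                   # working register
    by_opcode = {op: sem for op, sem in SPEC.values()}
    for op, k in words:
        by_opcode[op](state, k)
    return state

prog = assemble(["movlw 0x10", "addlw 0x05"])
print(simulate(prog))  # -> {'W': 21}
```

The point of the single table is exactly the static checking mentioned above: encoding and semantics can't drift apart, and porting to a new chip means writing one table rather than two tools.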
So, what about solving the vendor lock-in problem? Maybe C already
solves that one.
Yes, simulators again. That's where the real beef is. If I take the
effort to write an assembler, I should do a little more work and
provide an instruction simulator too. Otherwise it's probably best to
drive the vendor-supplied assembler through its textual interface.
Anyway, I should have a look at SDCC (the Small Device C Compiler)
and see if there's anything to snarf there.