Fri Jan 18 22:45:01 CET 2013

siso.rkt

While I like the basic simplicity of the approach, I'm not sure I
understand why there is a real need to split the abstraction into two
layers:

;; * CORE REPRESENTATION: For generating target C code and test suite
;; the technique of abstract evaluation is used.  Code is represented
;; as a (pure) function which can be evaluated over several abstract
;; domains.
;;
;; * SYNTACTIC SUGAR: To generate the lambda syntax corresponding to
;; this functional representation, a collection of macros is used to
;; remove the notational burden of explicit state threading.
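
For concreteness, the core layer amounts to something like the sketch
below (illustrative names, not the actual siso.rkt interface): a pure
state-in/state-out function whose arithmetic is passed in as a
parameter, so the same function can be evaluated over numbers for the
test suite or over expression trees for C generation.

;; Core-layer sketch: program = pure (state, input) -> (state, output)
;; function, parameterized by the arithmetic of the abstract domain.
(define ((integrator add) s x)
  (let ((s+ (add s x)))
    (values s+ s+)))                       ; new state, output

;; Concrete domain: ordinary numbers (test suite).
(define-values (s1 y1) ((integrator +) 0 1))          ; 1, 1

;; Symbolic domain: the same function now builds expression trees
;; for the code generator instead of computing numbers.
(define (sym-add a b) `(+ ,a ,b))
(define-values (s2 y2) ((integrator sym-add) 's 'x))  ; '(+ s x), '(+ s x)

The sugar layer then only hides the explicit values / let-values
plumbing behind macros.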

Only the latter layer has information about the "structure" of the
code.  The former is purely a semantic analysis.

What I find weird is the need to "reconstruct" memoization structure
while performing abstract interpretation (AI) of the core
representation layer.

It seems a bit backwards: we have the info at expansion time, so why
throw it away?
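
My reading of that "reconstruction" (a guess at the mechanism, not the
actual siso.rkt code) is that during abstract evaluation, sharing is
rediscovered by memoizing on node identity, even though the macro
layer already knew statically where each value was bound:

;; Guess at "reconstructing memoization structure" during abstract
;; evaluation: key a table on node identity (eq?), so hitting the
;; same node twice yields the same temporary instead of duplicating
;; the generated code.
(define nodes (make-hasheq))               ; expr node -> temp name
(define (share! expr)
  (hash-ref! nodes expr (lambda () (gensym 'tmp))))

(define e '(+ s x))
(eq? (share! e) (share! e))                ; #t : sharing recovered

In other words, the evaluator re-derives at generation time what the
macros already knew at expansion time.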



Considering current time constraints, is there a way to cut this
problem short?  Does it matter?  Would new insights make it hard to
rewrite end-user code that is written in this two-phase way?


