erlang hacking

Entry: Distributed systems
Date: Wed Dec 17 19:09:47 EST 2014

We're there. It's no longer feasible to think of "my computer". I've got a bunch of servers, desktops, laptops and routers, and on top of that boards like the beaglebone xm, beaglebone black, raspberry pi, and some older dev boards that can run linux. Systems that can run linux will become smaller and more plentiful over the next couple of years, so it's time to start thinking about networks in a different way, i.e. as a single system. Erlang seems a natural choice, so time to bite the bullet.

Entry: Events
Date: Wed Dec 17 19:13:49 EST 2014

I'd like to centralize some events in a local "control panel" accessible through the web on my main machine. This should include:

- temperature / humidity sensors attached to several machines
- notification of incoming call from asterisk pbx (from/to)
- email & IM notifications + filters
- web application

Outputs:

- thermostat
- lights
- buzzers / alarms

Entry: Installing
Date: Wed Dec 17 19:30:54 EST 2014

debian:  apt-get install erlang
openwrt: opkg install erlang   ( buffalo router, 128M ram, 32M flash )

Entry: Have to start somewhere
Date: Thu Dec 18 18:52:17 EST 2014

I've got a bunch of USB-connected microcontrollers that tie into a central infrastructure. First problem to solve: sometimes they crash, so build a thing around them that allows them to restart. This[1] mentions prerequisites:

- gen_server
- gen_fsm
- gen_tcp

[1] http://erlangcentral.org/wiki/index.php?title=Building_a_Non-blocking_TCP_server_using_OTP_principles

Entry: OTP videos
Date: Thu Dec 18 20:30:08 EST 2014

https://www.youtube.com/watch?v=0ZGHzI9F5YE
https://www.youtube.com/watch?v=TgTLDFSsgMQ

Entry: udev /dev/ttyACM0
Date: Thu Dec 18 21:25:41 EST 2014

To avoid erlang serial support, start socat on the serial port and let erlang connect over tcp. Or even let erlang start the socat process? Or just keep them separate. If anything isn't supported, just map it to tcp.
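A sketch of the socat bridge idea; the device path, baud rate and port number are made-up examples:

```shell
# Expose a serial device as a TCP service, so Erlang can use plain
# gen_tcp instead of needing serial support.
#   fork:       accept reconnects without restarting socat
#   raw,echo=0: don't mangle the byte stream
socat TCP-LISTEN:9000,reuseaddr,fork \
      /dev/ttyACM0,raw,echo=0,b115200
```

On the Erlang side, gen_tcp:connect("localhost", 9000, [binary, {active, true}]) then delivers the serial data as ordinary messages.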
Entry: Getting started
Date: Sun Dec 21 20:14:15 EST 2014

Let's start with something simple that has a fault-tolerant property. What is a sensor node? A thing that, when working properly, sends out messages containing measurement data. Optionally it can receive messages prompting it to reply, i.e. measure-now, are-you-there.

Entry: Erlang serial
Date: Mon Dec 22 21:49:09 EST 2014

This[1] seems simple enough to get going on an embedded linux target.

[1] https://github.com/tonyg/erlang-serial

Entry: Trusted sandboxed network
Date: Fri Dec 26 23:03:09 EST 2014

How to set up a distributed trusted network? The problem is that typically, with one malicious node, the whole can be compromised. So it might be good to also sandbox this on a linux host:

- per host:
  - isolated user
  - chroot or vm

It seems best to stick to TCP for distribution. How to set up a series of nodes that can trust each other and have encryption? How to do this without requiring root access? Maybe SSL is actually a lot simpler. But how does the handshake actually work? Is spoofing a problem? Does it need a monoculture?

[1] http://www.erlang.org/doc/apps/erts/alt_dist.html
[2] http://www.erlang.org/doc/apps/ssl/ssl_distribution.html

Entry: Local VPN
Date: Thu Jan 1 19:14:25 EST 2015

So I'd like to set this up first. Using OpenVPN would create a single point of failure. Maybe IPsec? What about running ovpn on each node?

Entry: Erlang linking
Date: Sun Jan 4 19:17:54 EST 2015

Complicated but useful. Some facts:

- exit(normal) does not propagate over links: a linked process exiting with reason normal does not take its peers down.
- exit(shutdown) is the reason OTP supervisors use for orderly termination.

Entry: Erlang distribution
Date: Mon Jan 12 15:34:24 EST 2015

Maybe I'm looking at this the wrong way. Maybe it is easier to solve the cross-network issues with low-level (UDP TLS) messages.

Entry: erlinit
Date: Wed Feb 4 15:29:18 EST 2015

erlang as init

[1] http://www.slideshare.net/fhunleth/erlangdc-2013

Entry: ETS
Date: Tue Apr 21 10:44:54 EDT 2015

Data store, process based. Not GCd.
Seems that it dies together with the runtime system. File store?

Entry: Learning curve
Date: Sat Jul 18 23:26:26 EDT 2015

Functions are the building blocks; it's hard to reuse processes, easy to reuse functions / modules. State and code that is not referentially transparent is still hard to deal with. It's nice to not have to worry about locks.

Entry: rebar
Date: Mon Aug 31 15:57:10 CEST 2015

Time to switch to a more standard app build/layout.

https://github.com/rebar/rebar/wiki/Getting-started

Side track: escript for erlang scripts.

Entry: application private directory
Date: Mon Aug 31 23:50:39 CEST 2015

Diving into OTP/rebar due to Cowboy. What is "an application's private directory"? And why do I get the error:

  {reason,{badarg,"Can't resolve the priv_dir of application erws"}}

Let's look in the source code: priv_dir(App) => {error,bad_name}

"Because you most likely don't use releases or set ERL_LIBS like you should. Cowboy stopped guessing, it caused other issues."

[1] https://github.com/ninenines/cowboy/issues/613

Entry: Modular programming: separate listeners from handlers
Date: Thu Sep 17 10:35:39 EDT 2015

If connection handling (accept) is abstracted away from listening, it is possible to reload handlers without stopping the server. Basically, what you want is a collection of pure functions in one module to implement the behavior and state transitions, and another one to call into this from a socket handler. It might be time to switch to gen_server.

Entry: Pattern: maps over lists that change contents
Date: Mon Sep 21 09:13:15 EDT 2015

context:
- server with a couple of connections
- connections come and go

problem:
- take a snapshot "map", e.g. list, + apply function
- need to do this in 1 step, as the list might change between ops

possible solution:
- parallel map
- failure during evaluation means no result is returned
- device appearing after the initial list means no result is returned
- how do you know there was a failure?
  use exceptions

Entry: gen_server
Date: Mon Sep 21 23:35:56 EDT 2015

%% -behaviour(gen_server).
%% -export([code_change/3, handle_call/3, handle_cast/2,
%%          handle_info/2, init/1, terminate/2]).
%% init(_Args) -> {ok, no_state}.
%% terminate(_Reason, _State) -> void.
%% code_change(_OldVsn, State, _Extra) -> {ok, State}.
%% handle_info(_Info, State) -> {noreply, State}.
%% handle_cast(_Request, State) -> {noreply, State}.
%% handle_call(_Request, _From, State) -> {reply, void, State}.

Entry: rebar cross compile
Date: Fri Sep 25 11:04:43 EDT 2015

I have an '-m64' show up in CFLAGS. [1] is merged, so I probably just have an older precompiled version.

[master] tom@tp:~/humanetics/src/gateway/gw$ ./rebar -V
rebar 2.6.0 R15B03 20150619_161736 git 2.6.0

https://github.com/rebar/rebar/pull/459

Entry: Cowboy
Date: Sat Oct 3 17:55:31 EDT 2015

https://medium.com/@kansi/erlang-otp-architectures-cowboy-7e5e011a7c4f

Entry: Debian erlang with less deps?
Date: Sat Oct 10 18:03:57 EDT 2015

try: apt-get install erlang-crypto

which is one of the prerequisites of cowboy, but pulls in fewer system libraries.

Entry: lfe
Date: Mon Dec 28 01:42:26 CET 2015

It seems not possible to match on the value of a variable in a local function:

(defun foo
  [(max)
   (letrec ((loop ([max] 'end)
                  ([other] (loop (+ 1 other))))))])

Maybe because these are not "real" lambdas. My guess is that they need to be lifted to top level functions to make them recursive, and those don't have context so can't do this kind of matching in the VM... Likely a case or if statement is needed.

EDIT: Running into more cases like this. Is it impossible to match on the contents of variables? From user_guide.txt:

  Repeated variables are *NOT* supported in patterns, there is no
  automatic comparison of values. It must explicitly be done in a
  guard.

This is quite annoying, coming from Erlang. Any way to solve it with a macro? Maybe think about it, to try to understand why this restriction arose.
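For contrast, plain Erlang does compare against bound variables in patterns, which is exactly what the LFE restriction takes away. A tiny sketch (module name is my own):

```erlang
-module(match_demo).
-export([loop/2]).

%% In Erlang, a variable that is already bound (Max appearing twice
%% in the clause head) means "compare for equality", so the recursion
%% below terminates by matching the value of Max.
loop(Max, Max)   -> 'end';
loop(Max, Other) -> loop(Max, Other + 1).
```

So match_demo:loop(3, 0) counts 0, 1, 2, 3 and stops when the second argument equals the first.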
Some more tangential information:
http://erlang.org/pipermail/erlang-questions/2008-March/033801.html

"no repeated variables" == "linearity condition on pattern matching"

Entry: ports and normal exits
Date: Mon Dec 28 12:22:05 CET 2015

Is there a way to trap the normal exit of a program, e.g. when running a standard command line program? Up to now I've used line mode with an explicit "EOF" line produced by the command.

EDIT: link/1 is possible, but it looks like this doesn't capture normal exit.

Entry: Canonical binary encoding?
Date: Sat Jan 2 14:38:16 CET 2016

Is there a canonical way to send arbitrary Erlang data over bi-directional stream links?

http://erlang.org/doc/apps/erts/erl_ext_dist.html

Entry: Pid to integer
Date: Sat Jan 2 15:31:50 CET 2016

pid_to_list/1, list_to_pid/1, also pid/3

http://stackoverflow.com/questions/243363/can-someone-explain-the-structure-of-a-pid-in-erlang

Entry: .hrl include for erlang records
Date: Tue Jan 12 20:18:17 CET 2016

https://groups.google.com/forum/#!topic/lisp-flavoured-erlang/LNNFVGKmcZM

(include-lib ...)
(include-lib "xmerl/include/xmerl.hrl")

Entry: chunk parser
Date: Tue Jan 19 18:15:05 EST 2016

This is so annoying, so let's solve it once and for all: jiffy:decode

Entry: Process mapper
Date: Sat Jan 23 12:26:31 EST 2016

How to make a map between pids and other ids, and automatically clear pids that are no longer active?

Entry: Erlang
Date: Thu Feb 11 15:32:12 EST 2016

http://lambda-the-ultimate.org/node/node/4638?from=220
http://www.unlimitednovelty.com/2011/07/trouble-with-erlang-or-erlang-is-ghetto.html
http://www.infoq.com/news/2010/09/javaone2010-concurrency

Entry: Inside-out folds in Erlang?
Date: Fri Feb 12 12:31:43 EST 2016

I've been trying out the idea of abstracting iteration / sequences as folds. However, for the more general cases, it needs delimited continuations to represent an iteration state. Otherwise, folds are opaque, e.g. you can't just "pick out an element".
http://okmij.org/ftp/Streams.html#enumerator-stream

Entry: UBF
Date: Sun Feb 14 02:04:57 EST 2016

( Joe being eloquent. )

  I basically don't want to know how you've implemented your pile of
  shit. I just want to send it a message to tell it what to do. I'd
  naively assumed that something like Anything-over-TCP would be fine
  and easy to implement, so I made a system called UBF which is
  layered on top of TCP and hoped that everybody would use it.

http://joearms.github.io/2016/01/28/A-Badass-Way-To-Connect-Programs-Together.html
http://ubf.github.io/ubf/

Entry: Secure distribution
Date: Mon Feb 15 11:01:17 EST 2016

I don't really get past this. For one, I want to stick to ssh to build secure connections. Is it possible to have the erlang distribution algorithm use ssh?

Let's forget about the whole thing and start building it using just VPN security. This introduces a single point of failure (the VPN host) but since that is also the internet router, it's likely OK: internet down = end of the world in reality :)

Entry: erlang.mk
Date: Sat Feb 20 11:30:19 EST 2016

How to add a package to erlang.mk?

Entry: optional logging
Date: Sat Feb 20 14:31:24 EST 2016

So I was thinking about how to implement optional logging. It's actually quite hard in erlang to introduce shared state, such as a configuration flag that says whether to log or not. So still thinking too much in OO terms. Just kill the process! I.e. always send the message to the logger, e.g. using a name, but only optionally start the logger. This is Chuck Moore's: "Don't set a flag, set behavior."

Entry: Remote execution
Date: Wed Feb 24 19:44:02 EST 2016

To be able to spawn a process running a locally defined function, it is necessary that the two erlang "eval" module versions are the same. So two things can be used:

- spawn + fun
- rpc:call/4 (but that doesn't use anonymous functions)

Entry: Some different stream abstractions
Date: Thu Feb 25 14:31:37 EST 2016

In a project I'm using two different stream abstractions.
- sinks
- lists as (abortable) folds

The latter is "pure" in that it doesn't rely on processes: it has the semantics of a left fold, with a pure folding function that combines an element with the old accumulator to yield a new accumulator.

The former is parameterized by a function that consumes the data in the sequence using a side effect. I.e. it behaves as a message send.

The appearance of these two forms is very typical in stream processing applications that are based on a receive/send or read/write core: part of it is pure (functional), written as a map or fold over a stream of events, and part of it has a side effect, sending data "out", e.g. a sink. There is some design freedom about where to introduce the "sink". In most cases it seems best to do that as late as possible, keeping most of the sequence representation in pure form.

Basic ideas:

- Write the primitive event stream as a fold (abortable). This works as long as it is not necessary to see individual elements in the sequence, e.g. treat the sequence as a sequence only.

- Define 'map' for folds, essentially implementing "map fusion". This is equivalent to stateless stream processing. This allows basic chained event processing without having to introduce processes.

- Define new folds in terms of folds. This is equivalent to stateful stream processing.

- Apply a composed fold to code that sends each element. This is where processes and outputs are introduced.

The main guideline is that a function is easier to handle than a collection of processes. This increases testability.

Entry: Against gen_server?
Date: Sat Feb 27 22:30:19 EST 2016

Trying this out... I have a 5-line function implementing a server loop that I want to expose as a gen_server. However, this leads to a lot of verbiage that doesn't seem worth it. Maybe gen_server is to be used on coarse grain only?
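To make the verbiage concrete, this is roughly what wrapping a tiny counter loop as a gen_server looks like (module and API names are my own; on recent OTP the unused callbacks like handle_info/2 and terminate/2 are optional, which helps a bit):

```erlang
-module(count_srv).
-behaviour(gen_server).
-export([start_link/0, inc/0, current/0]).
-export([init/1, handle_call/3, handle_cast/2]).

%% API: the "5-line server loop" this replaces kept an integer as state.
start_link() -> gen_server:start_link({local, ?MODULE}, ?MODULE, 0, []).
inc()        -> gen_server:cast(?MODULE, inc).
current()    -> gen_server:call(?MODULE, current).

%% Callbacks: state is just the counter value.
init(N)                       -> {ok, N}.
handle_call(current, _From, N) -> {reply, N, N}.
handle_cast(inc, N)            -> {noreply, N + 1}.
```

The payoff for the extra ceremony is the OTP integration: supervision, sys tracing, and hot code upgrade come for free.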
Entry: Basic tension
Date: Sun Feb 28 00:57:18 EST 2016

Is between putting a lot in a single "object" (process state), or splitting things off to a different object.

Entry: Passive vs. active sockets
Date: Sun Feb 28 21:45:35 EST 2016

Active sockets seem more flexible, as they allow processes to support additional messages, like 'reload'. There is no real need for blocking calls (except when writing a recursive parser?).

Entry: macros and binops
Date: Thu Mar 3 14:24:03 EST 2016

Don't forget parentheses!

use:
-define(FRAMES_PER_BLOCK,(?BLOCK_SIZE div ?FRAME_SIZE)).

this will lead to surprises:
-define(FRAMES_PER_BLOCK,?BLOCK_SIZE div ?FRAME_SIZE).

I had previously assumed macros do subexpression replacement, but it looks like they do token replacement! Actually, from the code it looks like I ran into this before:

-define(PLAYBACK_READFLAGS,  (1 bsl 0)).
-define(PLAYBACK_FILLBAD,    (1 bsl 1)).
-define(PLAYBACK_CHECKBLOCK, (1 bsl 2)).

Entry: dependencies
Date: Sun Mar 20 12:29:50 EDT 2016

git submodule isn't needed. using erlang.mk seems simpler.

Entry: oo vs. actor
Date: Wed Mar 23 13:37:10 EDT 2016

I'm (still?) using a lot of oo techniques to design erlang programs. E.g. to implement server processes as objects that can be subclassed by delegating to "base class" handlers. Not sure if this is such a good idea.

There is always the tension between storing information in the same place (e.g. a dictionary with separate tags for separate pieces) and going through the trouble of separating storage into separate processes that only have a message interface. The latter involves more protocol, the former gets messy at some point. There is a tradeoff.

Entry: dbus
Date: Fri Apr 22 20:57:05 EDT 2016

https://github.com/lizenn/erlang-dbus
http://stackoverflow.com/questions/4474290/how-do-i-use-emacss-dbus-interface

Entry: logging and multiple shells
Date: Sat Jun 11 18:52:25 EDT 2016

The Pid returned by erlang:group_leader/0 can be used as an argument of io functions.
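Back to the macro entry above: a concrete instance of the token-replacement surprise, with made-up sizes so the arithmetic is visible:

```erlang
-module(macro_demo).

-define(BLOCK_SIZE, 8 + 8).                     %% unparenthesized on purpose
-define(FRAME_SIZE, 4).
-define(BAD,  ?BLOCK_SIZE div ?FRAME_SIZE).     %% expands to: 8 + 8 div 4
-define(GOOD, (?BLOCK_SIZE) div (?FRAME_SIZE)). %% expands to: (8 + 8) div (4)

-export([demo/0]).

%% Token replacement means ?BAD is 8 + (8 div 4) = 10, since div binds
%% tighter than +, while ?GOOD gives the intended 16 div 4 = 4.
demo() -> {?BAD, ?GOOD}.
```

So macro_demo:demo() returns {10, 4}: same macro text, very different values.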
Entry: make:all([load]).
Date: Sat Jun 11 20:28:05 EDT 2016

?

Entry: erlang emacs autocomplete
Date: Fri Jun 17 19:10:57 EDT 2016

http://www.lambdacat.com/post-modern-emacs-setup-for-erlang/
https://github.com/tjarvstrand/edts

Entry: erlinit
Date: Sat Jun 25 16:13:31 EDT 2016

https://github.com/nerves-project/erlinit

Entry: let it crash RAII ?
Date: Sun Jun 26 22:51:30 EDT 2016

http://erladvisor.blogspot.com/2015/03/resource-management-idioms-in-erlang.html

Entry: Singletons
Date: Thu Jun 30 23:33:29 EDT 2016

I've been trying to add "let it crash" functionality to non-Erlang processes, and it is quite hard to make this work in the presence of Erlang VMs dying without assuming that cleanup needs to be done before something is started. I.e. to clean up the mess left by a previous startup. Is this a good way of doing things? It breaks modularity, encapsulation: it assumes things about before, while we really only know about now.

So design everything in such a way that a VM doesn't need to be restarted. If it dies, reboot the entire machine, or at least perform a reset that performs the same relevant initializations.

The problem really is state, and while it's possible to do RAII in erlang, it hinges on external processes cleaning up state when they die.

Entry: load code in all connected nodes
Date: Fri Jul 1 12:10:09 EDT 2016

c:nl(Module).

Entry: distribution & ssh
Date: Sat Jul 2 12:18:13 EDT 2016

I really want distribution, and I really don't want to put security at a host boundary, but it seems that this is a "pick one" problem. So I either figure out a way to bring this to user level, or learn to live with host boundaries. Looking into the future, host boundaries are likely becoming more prominent, and looking at the security landscape, local escalation is too hard to prevent. Everyone else is aiming at host boundaries and remote security, so why am I insisting on user boundaries?
Current stance: improve host-level security by eliminating web and email attack vectors into local code execution.

Entry: gen_event handlers and supervisor trees
Date: Sun Jul 10 13:22:15 EDT 2016

http://blog.differentpla.net/blog/2014/11/07/erlang-sup-event

Entry: erlang VM explanation
Date: Mon Jul 18 08:46:12 EDT 2016

https://www.reddit.com/r/erlang/comments/4sogzb/how_do_erlang_microprocesses_work_internally/

Entry: nested RPC
Date: Fri Jul 22 11:22:41 EDT 2016

One thing that keeps biting me: it's not possible to perform an RPC inside the handler of an RPC call. Obvious in retrospect, but it clearly indicates that new mental models need to be built. RPCs are not simply function calls!

EDIT: is this a case for unique references? No..

Entry: alternative carrier
Date: Sat Jul 23 14:25:25 EDT 2016

http://erlang.org/doc/apps/erts/alt_dist.html

ok went here before... too complicated.

Entry: Using all these devices..
Date: Thu Aug 11 22:07:06 EDT 2016

I'd like to make them work somehow. What it would need is a boot loader + linux kernel + NFS root. Filesystems could be managed locally. Or, what about this: boot them into Erlang. Give them all a very minimalistic filesystem.

https://news.ycombinator.com/item?id=11830413

I forgot: what about writing an Erlang C node? That would avoid having to deal with the Erlang runtime system, and keep the systems minimal. Might even write them in Rust.

http://erlang.org/doc/tutorial/cnode.html
http://nerves-project.org/
https://twitter.com/NervesProject

Entry: cross-compiling erlang
Date: Sat Aug 13 21:30:29 EDT 2016

Since there is no working armhf multiarch package, it might be best to cross-compile it manually. Also a good opportunity to look at the source.
http://erlang.org/documentation/doc-6.0/doc/installation_guide/INSTALL-CROSS.html

export OTP_TOP=`pwd`
./otp_build autoconf   # from git
./configure --enable-bootstrap-only && make                        # builds bootstrap system
./configure --host=arm-linux-gnueabihf --build=x86_64-linux-gnu && make   # cross-compile

maybe this isn't the way to go... this really needs to be linked against the correct libs. let's just unpack the armhf dev package, or copy the libs from xm/pi - or just use nfs: filesystems are there anyway and hosts will likely be up.

more trouble.. looks like this might have just been some unstable code. after cleaning up exo deps to only have jiffy, it worked with:

rm `find -name '*.o'` `find -name '*.so'`
make CC=arm-linux-gnueabihf-gcc CXX=arm-linux-gnueabihf-g++

Entry: Erlang and Pi Calculus
Date: Sat Aug 27 09:28:05 EDT 2016

http://erlang.org/pipermail/erlang-questions/2003-November/010783.html

Entry: OTP releases by hand
Date: Mon Aug 29 23:04:42 EDT 2016

http://blog.ikura.co/posts/otp-releases-by-hand.html

Entry: Sink vs. Fold
Date: Fri Sep 2 09:27:59 EDT 2016

I've been using two interfaces to represent sequences. These grew out of actual use:

- impure sink-parameterized generators: a sequence is a generator parameterized by a function (procedure) that takes {data,Data} or eof values.

- pure left fold: a sequence is a function that takes an initial state and a left fold body, producing a final state.

A fold can be turned into a sink-parameterized generator by applying the sink in the fold's iteration body. A sink-parameterized generator can be turned into a fold by employing a process to convert the pushing call to the sink into a pulling receive.

The decision to use either is driven by representing the sequence as:

- a push: the sink abstracts "!"
- a pull: the fold abstracts a loop over "receive"

EDIT:
- When in doubt, express the pattern as left+right fold and use fold:iterate to generalize into list, sink-parameterized generator.
- If the data is always sent somewhere, it's ok to keep wrapping sinks until they can be married to a fold somewhere.

Entry: Tail recursion and lists
Date: Fri Sep 2 10:34:11 EDT 2016

http://erlang.org/pipermail/erlang-questions/2012-May/066647.html
http://www.erlang.org/doc/efficiency_guide/myths.html#tail_recursive
http://www.erlang.org/doc/efficiency_guide/listHandling.html#id63380

Entry: Nerves project
Date: Fri Sep 2 12:10:52 EDT 2016

https://twitter.com/NervesProject
http://nerves-project.org/

Entry: Sequences as left folds
Date: Sat Sep 3 10:40:44 EDT 2016

I've converted the default abstraction in the project to sequences represented as left folds. This works a lot better than sequences as sink-parameterized generators because it is easier to compose. The reason is _stateful_ processing of sequences. Composition works by taking an input fold, having it fold over some compound state, and then filtering that out again in the final result. To convert to sink-parameterized generators, simply include the sink in the body of a fold operation.

Entry: Queues Don't Fix Overload
Date: Tue Sep 6 11:27:59 EDT 2016

http://ferd.ca/queues-don-t-fix-overload.html

- blocking on input (back-pressure)
- dropping data on the floor (load-shedding)

Entry: composing folds, dual approach?
Date: Thu Sep 8 01:56:41 EDT 2016

Is it possible to transform fold bodies in the same way as folds? That way they could be opened up in a gen_server.

Example: chunker: take a fold body that operates on complete chunks, and transform it into one that operates on partial ones. This would be a dual or something?

EDIT: you'll need:
- a transformation of the function
- an injection of wrapped state into new state
- a projection of new state (if a final version is desired)

State injection and projection could be standardized to just pairing and picking the first element. Actually, this is what is done in DSPM.
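As a reference for the representation used in these fold entries, a minimal sketch (function names are my own): a sequence is a function taking a fold body and an initial accumulator, and 'map' works by transforming the body.

```erlang
-module(fold_demo).
-export([from_list/1, map/2, to_list/1]).

%% A sequence represented as a left fold: a function that takes a
%% folding function and an initial accumulator.
from_list(List) ->
    fun(Fun, Init) -> lists:foldl(Fun, Init, List) end.

%% 'map' over folds is map fusion: the body is wrapped, and no
%% intermediate list is ever built.
map(MapFun, Fold) ->
    fun(Fun, Init) ->
            Fold(fun(El, Acc) -> Fun(MapFun(El), Acc) end, Init)
    end.

%% Applying the fold "at the edge", e.g. to rebuild a list; a sink
%% would be applied here instead in the impure case.
to_list(Fold) ->
    lists:reverse(Fold(fun(El, Acc) -> [El | Acc] end, [])).
```

For example, to_list(map(F, from_list(L))) behaves like lists:map(F, L), but the composition happens inside the fold bodies.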
EDIT: might be interesting to see how this can be factored out in fold.erl

chunks(Fold, {foldl, Split, SplitInit}=_Splitter) ->
    fun(Fun, FunInit) ->
            {FunResult, _} =
                Fold(
                  fun(Chunk, {FunState, SplitState}) ->
                          SplitFold = Split(Chunk, SplitState),
                          SplitFold(Fun, FunState)
                  end,
                  {FunInit, SplitInit}),
            FunResult
    end.

The custom core here is:

fun(Fun) ->
        fun(Chunk, {FunState, SplitState}) ->
                SplitFold = Split(Chunk, SplitState),
                SplitFold(Fun, FunState)
        end

All the rest is generic. So this is not a siso, but a sis transformer. Might be generic enough (remember DSPM has only applicative functors).

Anyways. Might work as a guiding abstraction, but until there is a clear need for splitting this out in fold.erl in a way that would save a lot of code, it's probably best to keep it more concrete and easier to read.

Entry: nerves
Date: Mon Oct 17 11:43:49 EDT 2016

http://wsmoak.net/2016/10/17/building-using-custom-nerves-system.html

Entry: folds with abort
Date: Tue Oct 18 18:11:08 EDT 2016

Aborting a fold should be done from the foldee. Only structure-changing functions need to support the protocol: primitives and composites. E.g. map & filter do not.

I don't see a good way to support this using an extension of the current code. This needs separate routines, as the types are clearly different.

It's tricky. To implement append, it's necessary to know if an iteration has aborted. Can't tell from the return value.

EDIT: it's possible to wrap a fold.

Entry: dialyzer and typer
Date: Fri Oct 28 01:39:54 EDT 2016

Start with running typer on files, and fill in the blanks that are known.

Entry: Web continuations
Date: Fri Oct 28 16:07:26 EDT 2016

Basically, building SPAs is a pain. Use REST whenever possible. The problem there is how to encode continuations. If security is not a problem, erlang term serialization of closures can be used. Security is likely a problem, so represent the continuation state as an explicit data structure. Then use POST to hide the terms.
Representing state:
- Query strings
- JSON objects
- XML
- BERT

Query strings are ugly anyway, so maybe best to save state as an opaque BERT blob, and use the safe mode on BERT decode.

http://bert-rpc.org/

Entry: grisp VM
Date: Tue Dec 6 23:10:16 EST 2016

https://www.grisp.org/
https://news.ycombinator.com/item?id=13118484

Entry: netlink
Date: Wed Dec 7 14:14:51 EST 2016

https://github.com/travelping/gen_netlink/blob/master/src/netlink.erl
http://www.linuxjournal.com/article/7356
http://man7.org/linux/man-pages/man7/rtnetlink.7.html

Entry: memmap
Date: Fri Dec 16 09:51:37 EST 2016

Make a C extension that can mmap a {packet,4} file, index it, and play it back.

Entry: scraper
Date: Mon Dec 26 18:40:42 EST 2016

https://github.com/kivra/robotnik

Entry: tag more
Date: Wed Dec 28 10:52:07 EST 2016

How to write better Erlang? Use more tagged values to allow for static and dynamic type checking.

Entry: segfault on library update
Date: Mon Jan 2 15:45:09 EST 2017

Might be because the .so gets overwritten, not unlinked? "cp --remove-destination" fixed it.

Entry: exit vs throw?
Date: Mon Jan 16 19:25:04 CET 2017

rvirding on http://stackoverflow.com/a/13654007

There are 3 classes which can be caught with a try ... catch: throw, error and exit.

throw is generated using throw/1 and is intended to be used for non-local returns; it does not generate an error unless it is not caught (when you get a nocatch error).

error is generated when the system detects an error. You can explicitly generate an error using error/1. The system also includes a stacktrace in the generated error value, for example {badarg,[...]}.

exit is generated using exit/1 and is intended to signal that this process is to die.

The difference between error/1 and exit/1 is not that great; it is more about intention, which the stacktrace generated by errors enhances.
The difference between them is actually more noticeable when doing catch ...: when throw/1 is used then the catch just returns the thrown value, as is expected from a non-local return; when error/1 is used then the catch returns {'EXIT',Reason} where Reason contains the stacktrace; while from exit/1 catch also returns {'EXIT',Reason}, but Reason only contains the actual exit reason.

try ... catch looks like it equates them, but they are/were very different.

Entry: Storing closures in BERT
Date: Sun Feb 5 15:33:13 EST 2017

How insane an idea is it to just trust the browser with storing raw closures in serialized form? As long as they are opaque and tamper-proof, this should be possible. I.e. it doesn't matter that the browser code can read the closure information, but it is a little problematic if it can construct an arbitrary closure. So essentially, it doesn't need to be encrypted, but it does need to be authentic, i.e. generated by the server.

Looks like HMAC is what I want:

https://en.wikipedia.org/wiki/Hash-based_message_authentication_code
http://erlang.org/doc/man/crypto.html -> hmac
http://security.stackexchange.com/questions/20129/how-and-when-do-i-use-hmac/20301

Entry: Storing closures, cont
Date: Tue Feb 7 19:15:17 EST 2017

I'm embracing the HMAC-tagged encoded closures. It is too useful not to use. Thinking about it, it seems safe. However it feels somehow wrong, because if that HMAC protection fails, then the security breach is total. How much do you trust cryptography?

It is important to realize that security is relative. Weigh this against other means to perform code injection in the same system.

Entry: Web programming, callbacks and IDs
Date: Wed Feb 8 09:05:40 EST 2017

There are 2 regimes:
- JS -> E: onclick = wrapped hmac of serialized closure
- E -> JS: each element has a unique ID for setting

The (large!) benefit is that all routing can be made generic -- i.e. no command decoupling needs to be created.
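A sketch of the HMAC tagging idea from the two entries above; module and function names are my own, and crypto:mac/4 assumes a reasonably recent OTP (older releases spelled it crypto:hmac/3):

```erlang
-module(hmac_demo).
-export([seal/2, open/2]).

%% Tag a serialized term with an HMAC so the server can later verify
%% that it generated the blob itself. This is authentication, not
%% encryption: the client can read the term but cannot forge one.
seal(Key, Term) ->
    Bin = term_to_binary(Term),
    Mac = crypto:mac(hmac, sha256, Key, Bin),   % 32-byte tag
    <<Mac/binary, Bin/binary>>.

open(Key, <<Mac:32/binary, Bin/binary>>) ->
    %% A constant-time compare (crypto:hash_equals/2 on recent OTP)
    %% would be preferable to this pattern match.
    case crypto:mac(hmac, sha256, Key, Bin) of
        Mac -> {ok, binary_to_term(Bin)};
        _   -> {error, bad_mac}
    end.
```

Round-tripping through seal/open recovers the term; flipping any bit of the blob yields {error, bad_mac}.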
Entry: OO and state variables
Date: Thu Feb 9 09:47:17 EST 2017

I'm starting to wonder if using 'object-with-flags'-style OO in an Erlang program is a code smell. An indication that the problem hasn't been captured well.

Entry: React without JSX
Date: Fri Feb 10 18:00:08 EST 2017

Instead of this JSX frontend, what about making an Erlang frontend instead?

https://facebook.github.io/react/docs/react-without-jsx.html

Entry: QuickCheck flow
Date: Sun Mar 5 07:16:47 EST 2017

Run tests, then upon failure, run this to get the current failing case:

f(E), E = eqc:counterexample().

Entry: Nerves on Raspberry Pi 1
Date: Wed Mar 22 19:39:38 EDT 2017

Let's try on pi1, as I won't use that for Debian again -- too slow.

Debian only has Elixir 1.3, while nerves needs 1.4. Installing v1.4 from git:

$ git checkout v1.4
$ make clean test

Link mix and elixir into ~/bin. Then follow the tutorial:

https://hexdocs.pm/nerves/installation.html
https://hexdocs.pm/nerves/getting-started.html

$ mix local.hex
$ mix local.rebar
$ mix nerves.new hello_nerves
$ export MIX_TARGET=rpi
$ mix deps.get
$ mix firmware
# fwup -a -i nerves_p1.fw -d /dev/sdb -t complete

Alright.. Booted. Elixir prompt on HDMI, USB keyboard works. How to access over the network?

https://github.com/nerves-project/nerves-examples
https://github.com/nerves-project/nerves-examples/blob/master/hello_network/README.md

switch to serial console:
https://github.com/nerves-project/nerves/blob/master/docs/FAQ.md

Entry: buildroot cross compile
Date: Thu Mar 23 19:01:22 EDT 2017

Somehow '-L/usr/lib/erlang/lib/erl_interface-3.9.1/lib' gets added from the Debian host. Is this rebar or jiffy doing that?
i586-linux-g++: WARNING: unsafe header/library path used in cross-compilation: '-L/usr/lib/erlang/lib/erl_interface-3.9.1/lib'

/home/tom/pub/git/rootfs/buildroot/output/host/usr/lib/gcc/i586-buildroot-linux-uclibc/5.4.0/../../../../i586-buildroot-linux-uclibc/bin/ld: warning: library search path "/usr/lib/erlang/lib/erl_interface-3.9.1/lib" is unsafe for cross-compilation

Seems simpler to let erlang build a complete release, and not include the erlang runtime system in the root directory? Or just delete it, and overwrite the release. Currently it's still picking the wrong runtime system.

Entry: Get in the habit of writing tests
Date: Sun Mar 26 00:20:03 EDT 2017

Especially for small functions, it makes a lot of sense to just add one or two cases, even if only to serve as documentation.

http://erlang.org/doc/apps/eunit/chapter.html
http://stackoverflow.com/questions/18031247/eunit-vs-common-test

Entry: Alpaca
Date: Sun Mar 26 00:20:31 EDT 2017

https://github.com/alpaca-lang/alpaca/
https://www.youtube.com/watch?v=cljFpz_cv2E

I'm going to wait this one out... It might make sense to use this for self-contained "algorithmic" code. But writing an application with communicating processes might not be worth it yet.

Entry: netlink
Date: Sun Mar 26 21:50:40 EDT 2017

https://github.com/Feuerlabs/netlink/

Entry: Erlang + Haskell/Rust
Date: Tue Apr 4 22:26:28 EDT 2017

I'd like to find a way to combine the best of these worlds. Haskell is for high-performance algorithmic code. Rust is for memory efficiency. Erlang is for distributed computing.

Entry: The BEAM Book
Date: Fri Apr 7 15:17:46 EDT 2017

https://github.com/happi/theBeamBook

Entry: Run beam code in javascript
Date: Tue Jun 13 10:24:30 EDT 2017

Interpret BEAM code: this has been done:
https://erlangcentral.org/videos/erlang-in-the-browser/#.WT_1_-ApA1I

But ATM what I really want is a simpler interface. I don't need to have much code running in the browser. I just need a message interface that allows accessing the DOM.
To bootstrap this, set up a build system that would make it really easy to update javascript code without a lot of manual effort. Entry: Parallel GUI Date: Sun Jun 18 23:48:58 EDT 2017 So I found a way to talk to individual elements in the DOM by treating them as objects with associated behaviour that responds to messages. Now I'm thinking how notify loops are avoided in larger parallel applications. Entry: Link transitive Date: Tue Jun 27 14:03:26 EDT 2017 with spawn_link: A spawns B, B spawns C. B terminates. Are A and C still linked? Entry: recursive types Date: Fri Jun 30 21:37:42 EDT 2017 http://learnyousomeerlang.com/dialyzer Dialyzer does support recursive types, starting with R13B04. What can go into PLT: - modules that do not change - modules that do not call into changing code Entry: Fold over erlang term? Date: Sat Jul 1 11:48:24 EDT 2017 Instead of converting an erlang term to a data structure, it makes more sense to define a fold that is easy to use. How do you generalize a fold over a nested data type? Instead of one folded function which would take the place of cons, you'd have multiple. Entry: Erlang closures and term_to_binary Date: Sun Jul 2 00:14:52 EDT 2017 I believe it is not possible to have closures live across restarts of the beam. E.g. write a closure to file as a binary term, then load it on the next restart. Entry: bert.c Date: Mon Jul 3 13:29:04 EDT 2017 Main idea is to not have intermediates. This can be done by keeping the structure representation abstract: - parse: use a generalized left fold, unflattening structure into a call sequence - print: provide constructors, and use caller's function nesting to flatten structures EDIT: A cool thing here is that by keeping the output abstract, and the generator pure, it is possible to re-generate chunks of the message. This makes it possible to avoid buffering at the expense of higher CPU load. Entry: parser combinators Date: Wed Jul 5 19:16:44 EDT 2017 My rule is: prefix parsing (e.g. 
Scheme) is OK to do manually. This usually works for binary protocols. Anything that has infix is a pain to express manually. The information that needs to be used at decision points is just too non-local. So let's figure out if there is a good parser combinator library for Erlang. PEG: https://github.com/seancribbs/neotoma Entry: pid recycling Date: Thu Jul 13 18:01:32 EDT 2017 If PIDs are stored in the state of one process, it is necessary to link to the process to make sure that the stale reference is removed. Pids do get recycled over time so it is possible that a dead pid gets revived into something else entirely. Entry: Mini-react Date: Fri Jul 14 14:16:35 EDT 2017 Web app is based on exml. If this is turned into something where every node is annotated with an ID, it is possible to do the diffing server-side. Second iteration: do the diffing on nodes that are tagged with an id. Is it possible to replace an existing document node with a new one, or can you only replace children of other nodes? I.e. do we need wrappers? el1.replaceWith(el2); or for older browsers: A.parentNode.replaceChild(document.createElement("span"), A); Entry: tracing Date: Fri Jul 14 18:11:49 EDT 2017 https://stackoverflow.com/questions/1274681/query-an-erlang-process-for-its-state Erlang (BEAM) emulator version 5.6.5 [source] [smp:2] [async-threads:0] [kernel-poll:false] Eshell V5.6.5 (abort with ^G) 1> l(ping). {module,ping} 2> erlang:trace(all, true, [call]). 23 3> erlang:trace_pattern({ping, '_', '_'}, true, [local]). 5 4> Pid = ping:start(). <0.36.0> 5> ping:send(Pid). pong 6> flush(). Shell got {trace,<0.36.0>,call,{ping,loop,[0]}} Shell got {trace,<0.36.0>,call,{ping,loop,[1]}} ok 7> Entry: distel problem Date: Sat Jul 15 16:05:07 EDT 2017 (gw@127.0.0.1)13> distel:functions(web, ""). {ok,[]} (gw@127.0.0.1)18> distel:functions(erlang, "").
{ok,["*","+","++","-","--","/=","<","=/=","=:=","=<","==", ">",">=","abs","adler32","adler32_combine","alloc_info", "alloc_sizes","and","append","append_element","apply", "atom_to_binary","atom_to_list","await_proc_exit", "await_sched_wall_time_modifications", [...]|...]} Somehow it can't find the module.. Did something change? This was needed to get the proper module list: (gw@127.0.0.1)34> distel:rebuild_completions(). ok But the function list is still not there... Maybe something broke after erlang update? This does seem to do some disassembly of the .beam files. Needs a closer look. https://github.com/massemanet/distel/blob/master/src/distel.erl Debugging: make -C ~/emacs/distel code:add_patha("/home/tom/emacs/distel/ebin/"). l(distel). Performing raw queries seems to work: xref:q(distel_completions, "(Fun) web : Mod"). Weird, now it works: (gw@127.0.0.1)68> distel:functions(web,""). {ok,["app_input","as_decoded_map","as_map","as_proplist", "atom","bool","button","cell_input","checkbox", "checkbox_set","checked","cowboy_http_handle", "default_user","exml","form","form_data","hmac", "hmac_decode","hmac_encode","html_body","id","integer", "integer_or_atom","link","make_id","resp_body", [...]|...]} Not sure what exactly changed... Maybe just reloading distel? Yes I know what this is. App no longer loads all the modules. Now it broke again... WTF. So the problem is with xref:q. I'm getting different results after killing the query server and restarting with distel:rebuild_completions(). (gw@127.0.0.1)88> xref:q(distel_completions, "(Fun) web : Mod"). {ok,[{web,form,1}, {web,id,1}, {web,resp_spa,1}, {web,table_input,1}]} (gw@127.0.0.1)89> distel:rebuild_completions(). ok (gw@127.0.0.1)90> xref:q(distel_completions, "(Fun) web : Mod"). {error,xref_compiler,{unknown_constant,"web"}} It's as if some results get lost. Yeah really no way to debug this apart from knowing how it is all supposed to work.
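For reference, the xref query flow above can be reproduced without distel. A minimal sketch; the server name my_xref and the "ebin" path are placeholders:

```erlang
%% Standalone xref session, independent of distel.
%% my_xref and "ebin" are placeholder names.
{ok, _Pid} = xref:start(my_xref),
ok = xref:set_default(my_xref, builtins, true),
%% Populates the server; returns {ok, Modules} or {error, ...} on clashes.
{ok, _Mods} = xref:add_directory(my_xref, "ebin"),
%% Same query shape distel uses: all functions of module web.
Result = xref:q(my_xref, "(Fun) web : Mod"),
xref:stop(my_xref).
```

If no beam for web was added, the query fails with {error,xref_compiler,{unknown_constant,"web"}} exactly as in the transcript above, which suggests rebuild_completions sometimes ends up with an empty server.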
Entry: Erlang XML template transformation Date: Mon Jul 17 13:27:12 EDT 2017 Maybe use XSLT? %% https://medium.com/erlang-battleground/the-hidden-xml-simplifier-a5f66e10c928 scan_templates(FileName) -> {Element,_Misc} = xmerl_scan:file(FileName, [{space, normalize}]), [Clean] = xmerl_lib:remove_whitespace([Element]), xmerl_lib:simplify_element(Clean). Entry: Right fold over tree? Date: Mon Jul 17 18:08:13 EDT 2017 That would be CPS. https://tech.labs.oliverwyman.com/blog/2007/06/11/folds-and-continuation-passing-style/ Entry: custom erlang script loader, see /etc/net/binfmt/install.sh Date: Sun Jul 23 10:07:39 EDT 2017 %% https://stackoverflow.com/questions/2160660/how-to-compile-erlang-code-loaded-into-a-string Entry: Port close + wait for reply? Date: Sun Jul 23 19:21:22 EDT 2017 Is it possible to: - send data - close port - wait for more data until process ends? I'm trying to send stdin to a program, but it will only process when stdin is closed, but then the port is gone.. In this case (sending to curl), it is possible to send the payload as an argument. Entry: xref problems / distel_completions Date: Sun Jul 23 23:54:28 EDT 2017 How is it supposed to work? xref:start(my_xref). {ok,<0.128.0>} xref:q(my_xref, "(Fun) web : Mod"). The reason might be module clashes: (gw@127.0.0.1)20> xref:add_directory(distel_completions,"/home/tom/priv/git-private/humanetics/src/install/gw/ebin"). {error,xref_base, {module_clash,{sqlite3,"deps/erl_tools/ebin/sqlite3.beam", "/home/tom/priv/git-private/humanetics/src/install/gw/ebin/sqlite3.beam"}}} and an error that is ignored here: start(XREF) -> xref:start(XREF(server), XREF(opts)), xref:set_default(XREF(server), builtins, true), F = fun(D) -> xref:add_directory(XREF(server), D) end, foreach(F, get_code_path(XREF)). Entry: Exporting erlang source tree as html Date: Wed Jul 26 21:08:01 EDT 2017 Two things: syntax highlighting and cross-referencing of function calls.
http://erlang.org/doc/man/tags.html Entry: Dialyzer: Catching errors loses type information Date: Thu Jul 27 09:52:40 EDT 2017 Additionally, I'm trying to infer types from making a limited number of cases on atoms, but still it uses a generic atom() type. Entry: atom() vs. atom list Date: Thu Jul 27 13:39:52 EDT 2017 my -spec contract says e.g. a|b but dialyzer infers it as atom(). What is causing this? I suspect this has to do with the number of cases. If I remove one, it infers correctly as a sum of atoms. Indeed. This is inferred as atom() -> ok. -module(test). -export([test/1]). test(X) -> case X of a -> ok; b -> ok; c -> ok; d -> ok; e -> ok; f -> ok; g -> ok; h -> ok; i -> ok; j -> ok; k -> ok; l -> ok; m -> ok; %% comment this out and it infers a sum of individual atoms instead of atom() %% n -> ok; %% o -> ok; %% p -> ok; %% q -> ok; %% r -> ok; %% s -> ok; %% t -> ok; %% u -> ok; %% v -> ok; %% w -> ok; %% x -> ok; %% y -> ok; z -> ok end. I guess the idea is: don't have this many cases... If this pattern shows up it's likely better to make a module that has the functions as names. I'd like to know why though. This smells like an arbitrary limit in the definition of Erlang's success typing that is introduced to avoid combinatorial explosion. It should be documented somewhere. Entry: escript Date: Mon Jul 31 23:01:24 EDT 2017 http://www.erlang-factory.com/conference/SFBay2011/speakers/GeoffCant https://stackoverflow.com/questions/9658561/connecting-to-nodes-from-scripts net_kernel:start([Name, longnames]), erlang:set_cookie(Name, list_to_atom(Cookie)). Entry: register before init? Date: Tue Aug 1 09:35:13 EDT 2017 It seems that register/2 doesn't work in a supervisor's init/1 function. Entry: Modules vs. functions Date: Sat Aug 12 11:31:50 EDT 2017 A lot of trouble is caused by the arbitrariness of abstracting code as a module, or as a function. I've preferred functions - just like pids they are anonymous and won't clash.
Modules are like registered pids: singletons only. Entry: How to have rebar compile a port program instead of a shared library? Date: Mon Aug 14 12:15:23 EDT 2017 https://www.rebar3.org/v3/docs/building-cc In rebar3 it is required to have a Makefile or other instructions for building your C/C++ code outside of rebar itself. https://github.com/erlang/rebar3/issues/23 {pre_hooks, [{"(linux|darwin|solaris)", compile, "make -C c_src"}, {"(freebsd)", compile, "gmake -C c_src"}]}. {post_hooks, [{"(linux|darwin|solaris)", clean, "make -C c_src clean"}, {"(freebsd)", clean, "gmake -C c_src clean"}]}. but where to place the library? (priv?) and how to find it? code:priv_dir(erl_tools) this is what jiffy does: init() -> PrivDir = case code:priv_dir(?MODULE) of {error, _} -> EbinDir = filename:dirname(code:which(?MODULE)), AppPath = filename:dirname(EbinDir), filename:join(AppPath, "priv"); Path -> Path end, erlang:load_nif(filename:join(PrivDir, "jiffy"), 0). Entry: Rebar and rebuilding dependencies Date: Wed Aug 16 08:06:17 EDT 2017 This doesn't seem to work very well.. I need to delete rebar.lock otherwise it reverts the repo. EDIT: Basic workflow: - application project depends on library project - during application development, library project gets changed - rebar insists on reverting the library project how to fix? put a script in erl_tools that takes care of this? No.. don't wrestle it. Use the dep compiler for single-shot compiling only. During development, compile the source library in its own directory. Copy the build products over the rebar build. Once done, update the rebar project and rebuild. 
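Back to the port program question above: once the binary is located (e.g. via code:priv_dir/1 as jiffy does for its nif), talking to it goes through open_port/2. A minimal sketch, using /bin/cat as a stand-in for a real c_src build product:

```erlang
-module(port_sketch).
-export([roundtrip/1]).

%% Send Bin to an external program and wait for its reply.
%% /bin/cat is a stand-in: it echoes stdin back to stdout.
roundtrip(Bin) ->
    Port = open_port({spawn_executable, "/bin/cat"},
                     [binary, use_stdio, exit_status]),
    true = port_command(Port, Bin),
    receive
        {Port, {data, Data}} ->
            port_close(Port),
            Data
    after 1000 ->
        port_close(Port),
        {error, timeout}
    end.
```

A real port program would use {packet,4} framing on both ends instead of relying on raw stream chunks arriving in one piece.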
Entry: standardizing on rebar3 Date: Wed Aug 16 08:29:40 EDT 2017 Version currently being used by exo is used for everything, so snapshot it: 2cc07d8f1fa0f0f3491720389d9c85b755ec8e59 rebar3 (3.3.6-1-g2cc07d8f) Entry: enc (used in jiffy to compile .so) Date: Wed Aug 16 09:35:12 EDT 2017 https://github.com/davisp/erlang-native-compiler Entry: switch between two states Date: Wed Aug 16 16:48:20 EDT 2017 1. Normal build 2. "swap in" existing directory. This is done by removing the repository from rebar.config, and linking in the build directory from the other project. 3. "swap out" Entry: Erlang web programming, lessons learned Date: Thu Aug 17 00:43:16 EDT 2017 - If persistence across power cycling is needed, use SQLite. Once a database is there, just use it for everything. - Keep application types explicit so they can be serialized. Use printable terms. Equate user input format with database storage format. - Encoded closures are still very useful for callbacks, as they avoid introduction of arbitrary encoding&dispatching, but beware of duplication. - A web page is a form, which is a representation of a key-value store. Automate the edits to avoid having to handle this explicitly in the web page logic: simply assume the user has completed entry when an action is required. - An advantage of having state in a process is that it is gone when it dies, e.g. death is clean. Databases often require explicit cleanup. - Most erlang terms can be stored in html attributes in printed form. This aids in creating a flat namespace, simply by encoding hierarchy in the keys. - Prefer pushing widget sets to the browser and use display='none' to disable them, as opposed to using cells and pushing XHTML fragments. Entry: simple compile time tests Date: Sun Aug 20 22:41:55 EDT 2017 Nice trick. Warning: no clause will ever match test() -> [] = lists:append([1],[2]).
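The "printable terms" rule above can be sketched as a round trip through ~p and erl_parse. This works for data terms; pids, refs and funs won't survive. Module and function names are made up:

```erlang
-module(term_io).
-export([encode/1, decode/1]).

%% Term -> flat string, terminated with a dot so it can be parsed back.
encode(Term) ->
    lists:flatten(io_lib:format("~p.", [Term])).

%% String -> term, inverse of encode/1 for printable terms.
decode(Str) ->
    {ok, Tokens, _EndLoc} = erl_scan:string(Str),
    {ok, Term} = erl_parse:parse_term(Tokens),
    Term.
```

This is the equivalence the entry exploits: user input format = database storage format = html attribute format.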
Entry: closure encoding Date: Mon Aug 21 13:15:52 EST 2017 It does appear closures get "garbage collected": only the closed over variables that are actually used are stored. Entry: Hunt for any types Date: Thu Aug 24 22:11:04 EST 2017 grep -nrI . -e '_ ->' The idea is to make matches exhaustive, as that is exactly what defines the types. Entry: Refactoring is a problem Date: Fri Aug 25 10:03:50 EST 2017 - Better types - Better tests Entry: expect tests Date: Fri Aug 25 21:52:10 EST 2017 https://blog.janestreet.com/ironing-out-your-development-style/ Entry: Expect tests Date: Wed Aug 30 12:41:18 EST 2017 https://blog.janestreet.com/testing-with-expectations/ It would be nice to be able to splice them directly into the source, but that will be a bit of work. So let's put them into an erlang data structure. EDIT: Ok I have something. Sticking to a simple Erlang term representing an apply/3 call. %% -*- erlang -*- [expect ,{{tools,unhex,["ABcd12"]}, [171,205,18]} ]. Next: automatically add expressions from emacs. Also enforce lexical sort to keep diffs simpler. EDIT: using a clone of an apply/3 function that adds an entry to an expect file. Also formatted it as a map, which is more readable. Entry: Throttle Date: Fri Sep 1 17:31:44 EDT 2017 Having a hard time expressing this properly. Behavior: - perform rate limiting - always send last message It seems simplest to do this using two processes: one to keep track of the state and the other to implement the timeout. Entry: Expect tests Date: Sun Sep 3 17:27:46 EDT 2017 So I have a way to unify them with eunit. Simpler would still be to allow actual function code inside the test files, somehow included in the local module's namespace. Maybe use hrl files? Format: -define(TESTS, ...). Then, when saving the new values, pretty-print the syntax. Entry: Send a message? Or call a callback? Date: Tue Sep 5 15:14:58 EDT 2017 Is the return value important? Do we care at all what happens beyond passing it on?
If not, then sending a message is probably the right interface. Still there is the "thunk that sends abstract message" thing on the edge... Entry: Function to print stacktrace Date: Wed Sep 6 11:37:58 EST 2017 What I want is a way to click on it. Either from an emacs buffer (interactive or compilation), or a web link. Entry: Run Erlang in the browser Date: Wed Sep 6 14:05:40 EST 2017 I don't need much. No scheduler. Just some local evaluation of functions, basically to implement a UI controller. What is needed? Probably best to start from: https://github.com/baryluk/erljs But a basic beam interpreter can't be too hard to get set up. https://gist.github.com/andelf/5193480 http://www.cs-lab.org/historical_beam_instruction_set.html Entry: Typed overlay Date: Fri Sep 8 22:05:48 EDT 2017 Erlang is nice and all, but I do really miss types for writing "algorithmic" code. There is the ML on the beam project. Maybe give that a try? Otherwise there is Conal's CCC trick. A lot is pointing in the direction of that CCC trick lately... Entry: Improving Dialyzer Date: Wed Oct 18 11:46:44 CEST 2017 - guards - typespecs - number of cases is limited https://stackoverflow.com/questions/34390452/erlang-will-adding-type-spec-to-code-make-dialyzer-more-effective/34391217#34391217 http://www.it.uu.se/research/group/hipe/dialyzer/publications/wrangler.pdf Entry: Calling Erlang from C? Date: Wed Oct 18 17:02:58 CEST 2017 Is it actually possible to nest NIF calls into Erlang? Doesn't look like it. Entry: State machines vs. processes Date: Sat Oct 21 17:18:39 CEST 2017 Once a design is factored into several state machines that could run in separate processes, how do you decide whether to actually run them in separate processes, or just update states manually in a single process? This seems arbitrary.
It is a tradeoff: - Multiple processes require communication overhead (a protocol) - Sometimes things are easier to express when factored into processes I'm trying out a rule of thumb: if it is possible to perform a single state transition in a simple way, then go ahead and keep it in the same task. Entry: Synchronization Date: Sat Oct 21 21:49:42 CEST 2017 Some observations about a caching mechanism for my current project.... Why is this so hard? Why does it feel so ad-hoc? What I miss is good primitives. Entry: Monitors and temporary processes in pure functions Date: Sun Oct 22 10:51:57 CEST 2017 I think I now understand a key element of resource management in Erlang: monitors. If some object dies, it is sometimes necessary to remove it from some registry. Monitors do just that. This pattern pops up a lot: if there is some library code that needs some custom synchronization code, it is often easier to use temporary processes to avoid having to clean up. Connected processes are inherently stateful, and to embed such a computation device in a pure function, it is necessary to make the results unobservable after the function returns: i.e. kill all temp processes. Entry: Private processes Date: Sun Oct 22 11:54:13 CEST 2017 See previous post. The need for referential transparency creates the need to make it impossible to observe any communication effects in function evaluation (apart from the time it takes to evaluate a function -- let's assume time is just a computation resource). Two approaches seem to work: - Use temporary processes - Reuse a caller's process, but clean up the mailbox. The latter is not always easy to do; it often requires extra synchronization. The easiest approach in that case is to set up a network of processes, and kill it once the value is computed. Entry: erl_tools expect tests Date: Mon Oct 23 11:23:53 CEST 2017 These are very useful, but it is quite annoying to not have comments or custom formatting to increase readability.
Make it such that the original source can be kept. The parser returns line numbers, so it should be possible to find the text span of a function as long as it is kept. Entry: Questions to erlang list Date: Mon Oct 23 11:25:46 CEST 2017 - Why is dialyzer atom sum limited? - Is there a parser that returns character location (e.g. to cut out a function's original text)? Entry: Typed language to write the functional bits Date: Thu Oct 26 09:40:30 CEST 2017 The pattern I've come to is to split Erlang application development into two bits: - The server object architecture - A collection of pure functional libraries The latter can just as well be written in a typed language. Entry: Types Date: Fri Nov 3 09:37:41 EDT 2017 So. Erlang is hard to type because of message passing. But most of the code that I need types for is pure functional library code. Is there a way to make this better? I.e. use dialyzer as a better type system? Entry: Incremental TDD for hard to specify function composition chains Date: Mon Nov 6 07:55:43 EST 2017 I'm currently implementing some tedious demultiplexer functions which are hard to express all in one go. Also they are hard to get correct in a type-driven manner, as next to their structure, they depend a lot on numbers and signs being correct. In essence, this is mostly untyped code. I'm using an approach to allow for "holes", where the "full circle" can be executed -- i.e. code compiles and an expect test produces an output -- but the function is not complete. Holes here are implemented as stubs, e.g. identity functions. This makes it possible to keep the structure of the edit-test loop constant, while working on one piece of the puzzle at a time. This approach works when it is hard to isolate components in the function composition, simply because they are hard to specify in isolation. Essentially, their specification is pretty much their solution.
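The "holes as identity stubs" idea above can be sketched like this; demux_sketch and its stage names are made up for illustration:

```erlang
-module(demux_sketch).
-export([run/1]).

%% A hole: identity stub that keeps the composition runnable while
%% the real stage is still unwritten.
hole(X) -> X.

decode(Bin) -> binary_to_list(Bin).   %% finished stage
scale(Xs)   -> hole(Xs).              %% hole: real scaling comes later
split(Xs)   -> hole(Xs).              %% hole: real demuxing comes later

%% The full circle compiles and runs end to end, so an expect test can
%% already pin down the behavior of the finished stages.
run(Input) -> split(scale(decode(Input))).
```

Filling in a hole changes the expect test output, which is exactly the feedback loop described above.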
Entry: Catch-all phrases as holes Date: Mon Nov 6 10:47:31 EST 2017 For untyped TDD, it helps to add a catch-all case in a chain of compositions that looks like the function: foo(A,B) -> {foo, A, B}. This then makes it possible to refine. Entry: Nested error messages: bubble up Date: Mon Nov 6 10:47:37 EST 2017 As an alternative to exceptions, which cannot be type-checked by Dialyzer, it is possible to use an Either-style encoding with nested {error,_} clauses. Dialyzer should then be able to reconstruct this nesting in the return type. E.g. to propagate errors upstream, just chain "stack traces": highlevel({error, LowLevelErrorInfo}) -> {error, {{highlevel_info, 123}, LowLevelErrorInfo}} EDIT: Very similar to "abstract interpreting" a stream by annotating the operations on the data -- i.e. constructing a program -- instead of evaluating the operations. Entry: Typed Erlang Date: Mon Nov 6 11:14:49 EST 2017 How to make sure Dialyzer can type code? - Don't - Use catch-all cases - Use exceptions - Do - use {ok,_} | {error,_} for failing computations - use pseudo stack traces (error chaining) Entry: Dialyzer Date: Wed Nov 8 10:33:41 EST 2017 http://erlang.org/doc/man/dialyzer.html "Dialyzer bases its analysis on the concept of success typings, which allows for sound warnings (no false positives)." Still, having quite some trouble figuring out why some list is inferred as []. Maybe because the types don't match, and the only possible intersection is []? Entry: Moving stuff to compile time using parse transformers Date: Wed Nov 29 16:08:20 EST 2017 Might be useful to add more checks. One thing I'm thinking of is to find a subset of Erlang that is typable using a H-M inference engine. Like Alpaca, but using the same syntax as Erlang so it is optional. Maybe a good intermediate step is to find a way to write Erlang modules that have proper dialyzer type inference resembling ADTs, to get an idea of where dialyzer needs help.
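The error-chaining pattern described above, as a self-contained sketch; module and tag names are illustrative:

```erlang
-module(err_chain).
-export([highlevel/1]).

%% Low-level step: {ok, Value} | {error, Reason}.
parse_int(Str) ->
    case string:to_integer(Str) of
        {Int, []} when is_integer(Int) -> {ok, Int};
        _ -> {error, {not_an_integer, Str}}
    end.

%% High-level step wraps low-level errors with its own context,
%% building a pseudo stack trace that shows up in the return type.
highlevel(Str) ->
    case parse_int(Str) of
        {ok, Int}  -> {ok, Int * 2};
        {error, E} -> {error, {{highlevel_info, Str}, E}}
    end.
```

Unlike a thrown exception, the nested {error,_} chain is part of the function's success typing, so Dialyzer can see it.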
Entry: Erlang parser Date: Sat Dec 2 12:23:56 EST 2017 Not sure how to choose, but this one gives best impression: https://github.com/seancribbs/neotoma Entry: How to enable debug in rebar3? Date: Sun Dec 10 10:50:18 EST 2017 ?DEBUG("Running dialyzer with options: ~p~n", [Opts2]), dialyzer:run(Opts2), -define(DEBUG(Str, Args), rebar_log:log(debug, Str, Args)). Ha, application can have state variables: {ok, LogState} = application:get_env(rebar, log), This is an alternative to using global registry directly. log(Level = error, Str, Args) -> {ok, LogState} = application:get_env(rebar, log), ec_cmd_log:Level(LogState, lists:flatten(cf:format("~!^~ts~n", [Str])), Args); log(Level, Str, Args) -> {ok, LogState} = application:get_env(rebar, log), ec_cmd_log:Level(LogState, Str++"~n", Args). https://github.com/erlware/erlware_commons/blob/master/src/ec_cmd_log.erl #state_t{log_level=DetailLogLevel} https://github.com/erlware/erlware_commons Only place I found where loglevel is set: ./rebar3.erl:84: ok = rebar_log:init(api, Verbosity), ./rebar3.erl:177: ok = rebar_log:init(command_line, Verbosity), Both call rebar3:log_level Which uses: case os:getenv("DEBUG") of so DEBUG=yes should fix this Entry: debugging dialyzer Date: Sun Dec 10 12:04:24 EST 2017 I need a proper error message formatter. Maybe a job for parsec? /home/tom/pub/git/erl_tools/_build/default/lib/erl_tools/src/sqlite3_kvstore.erl:86: The call sqlite3_kvstore:sql(DB::any(),QRemove::binary(),[binary(),...]) will never return since the success typing is (fun(() -> pid()),binary(),[{'blob',binary()} | {'text',binary()}]) -> [[binary()]] and the contract is (fun(() -> pid()),binary(),[{'blob',binary()} | {'text',binary()}]) -> [[binary()]] Entry: Network protocols are dynamically typed Date: Sun Dec 10 23:23:02 EST 2017 1. Is that really so, and 2. Is that why it is so hard to type erlang? 
Entry: rebar dependency injection Date: Tue Dec 12 11:56:55 EST 2017 I need a setup where dependencies are managed outside of the rebar build. The current hack I use is too hard to maintain. It seems that "raw" dependencies are what I'm looking for. https://github.com/rebar/rebar/wiki/Dependency-management Entry: dialyzer and nifs Date: Mon Dec 18 12:28:30 EST 2017 http://erlang.org/pipermail/erlang-questions/2011-May/058356.html Entry: Actors vs CSP Date: Tue Dec 19 10:48:30 EST 2017 https://en.wikipedia.org/wiki/Communicating_sequential_processes#Comparison_with_the_Actor_Model
             CSP          Actors
processes    anonymous    identity
messages     rendezvous   asynchronous
addresses    channels     mailboxes
Entry: Types in Erlang Date: Thu Jan 4 12:54:39 EST 2018 Looking at alternatives in Haskell, Rust and OCaml, but all of them have serious drawbacks, and would introduce quite a bit of overhead before yielding ROI. The problem is really types. I like Erlang for system design, but would like some real types for algorithm design, i.e. functions mapping data to data. Maybe, just get better at writing typable code in Erlang. Maybe, keeping it simple just means sticking to Erlang. One thing: how to express something resembling parametric polymorphism in Erlang? Actually it is possible. http://erlang.org/doc/reference_manual/typespec.html Type variables can be used in specifications to specify relations for the input and output arguments of a function. For example, the following specification defines the type of a polymorphic identity function: -spec id(X) -> X. Time to read the doc again. Entry: Erlang subset Date: Thu Jan 4 14:35:37 EST 2018 Since it's such a simple language, and has a type checker, maybe it can be used for heterogeneous metaprogramming? Using the concept of downward closures, it might be useful to explore this. Entry: Quick and dirty c(), l() Date: Sun Jan 21 07:39:12 EST 2018 Writing "scripts", i.e.
code that is heavy on figuring out the API glue but otherwise doesn't require much thinking. What is needed here is interactivity. Such code is written by "performing basic science" :) Entry: Smalltalkish Erlang -- Erlang live coding Date: Sun Jan 21 07:50:55 EST 2018 Thinking more about build systems and caches, the idea is to have a running Erlang image that corresponds 100% to code on disk. I.e. if an edit is made, the module is compiled, and if it compiles properly, it is uploaded. Entry: Distributed Erlang and trusted code Date: Sun Jan 21 08:00:42 EST 2018 Basically, you need to be able to trust the other nodes, because they can execute arbitrary code. Period. If that is not the case, a different protocol is needed. What I'm looking for is a small trusted base on each machine. Maybe this should not be written in Erlang after all. Keep Erlang for what it is good at: executing logic, distributed. Then write a smaller daemon on each machine with a well-defined interface. For manual maintenance, the key might be to generate explicit scripts, then have the operator validate those scripts before execution. Maybe start with a simple assertion: the Erlang VM contains untrusted code only. This would require a trusted daemon started from init, which then in turn fires up the VM. Entry: Improving "live" coding Date: Fri Jan 26 08:48:02 EST 2018 I need a tool to do immediate code reload on save. It seems simplest to do this from Erlang + distel as the "OS", as the support is already there. https://github.com/massemanet/distel/issues/38 C-c C-d L erl-reload-module That's not it. C-c C-k erlang-compile is more like it, but it is starting a new node emacs@panda. (defvar erlang-compile-function 'inferior-erlang-compile) That uses inferior-erlang-buffer, and not the distributed protocol. My gut feeling is to use distel. Maybe also freeze distel source. There are many ways to go about this, so let's not make it too different. 
(gwtest_tom@127.0.0.1)7> c("/home/tom/humanetics/src/gateway/gw/src/x.erl"). {ok,x} (gwtest_tom@127.0.0.1)9> l(x). {module,x} Currently I have: build (Makefile -> rebar3): .erl -> .beam install .beam -> .beam load .beam time make host, which is build + install real 0m1.242s user 0m0.760s sys 0m0.228s I want to simplify this. Really, it just needs a single compilation call. Compilation needs a load path for includes. erlc -v -o /tmp device.erl I guess what I want is a way to snoop on the Erlang calls that compile the modules inside of rebar. Simplify the build system so there is only one location where the .beam files reside. Then it should be straightforward to recompile on demand, even with a different mechanism, and just have the VMs reload. For remote update, use rsync or push beam code from local instance to remote. Let's figure out something else. (gwtest_tom@127.0.0.1)15> erlang:get_module_info(x). [{module,x}, {exports,[{foo,0}, {s,1}, {p,1}, {sh,1}, {module_info,0}, {module_info,1}]}, {attributes,[{vsn,[110629168146813487093543362914119105528]}]}, {compile,[{options,[{outdir,"/home/tom/priv/git-private/humanetics/gw_src/gateway/gw/_build/default/lib/gw/ebin"}, debug_info, {i,"/home/tom/priv/git-private/humanetics/gw_src/gateway/gw/_build/default/lib/gw/src"}, {i,"/home/tom/priv/git-private/humanetics/gw_src/gateway/gw/_build/default/lib/gw/include"}, {i,"/home/tom/priv/git-private/humanetics/gw_src/gateway/gw/_build/default/lib/gw"}]}, {version,"7.0.3"}, {source,"/home/tom/priv/git-private/humanetics/gw_src/gateway/gw/_build/default/lib/gw/src/x.erl"}]}, {native,false}, {md5,<<83,58,103,27,166,121,19,230,147,207,119,239,246, 228,187,248>>}] So it seems rebar copies the files before it compiles them? Maybe just get rid of rebar. No, it is a link: tom@panda:~/humanetics/src/gateway/gw/_build/default/lib/gw/src$ readlink -f .
/home/tom/priv/git-private/humanetics/gw_src/gateway/gw/src tom@panda:~/humanetics/src/gateway/gw/_build/default/lib/gw$ ls -al total 12 drwxr-xr-x 1 tom tom 50 Jan 8 13:03 . drwxr-xr-x 1 tom tom 66 Jan 8 13:03 .. drwxr-xr-x 1 tom tom 1570 Jan 26 09:04 ebin lrwxrwxrwx 1 tom tom 19 Jan 8 13:03 include -> ../../../../include lrwxrwxrwx 1 tom tom 16 Jan 8 13:03 priv -> ../../../../priv drwxr-xr-x 1 tom tom 16 Jan 8 13:03 .rebar3 lrwxrwxrwx 1 tom tom 15 Jan 8 13:03 src -> ../../../../src So it seems ok to use the source file. So let's start from the list of source files: tom@panda:~/humanetics/src/gateway/gw/_build/default$ find -follow -name '*.erl' Then, the incremental compilation "shortcut" would be: - create basename -> abs path for .erl and .beam map - on save hook, get current module basename - look up abs path - call compiler with proper options - load .beam file into local vm - distribute Perform distel call in emacs? (erl-eval-expression 'gwtest_tom@127.0.0.1' "diag:module_source(bcache).") Actually I don't need to parse anything in emacs. Send a notification about a module, and have the node initiate the compilation. EDIT: Got it working: - emacs saves, sends _build/default prefix + filename, nodelist - erlang node infers location, compiles, sends binary to nodelist Add it to save hook? Or use F key. Entry: reload nif Date: Fri Jan 26 13:21:35 EST 2018 https://stackoverflow.com/questions/33426924/erlang-is-it-possible-to-reload-or-upgrade-a-nif-library-without-restart-the-sh init() -> erlang:load_nif("./q4_nif", reload). http://erlang.org/doc/man/erl_nif.html Latter says reload no longer supported since OTP 20. http://erlang.org/doc/man/erlang.html#load_nif-2 Entry: Solve erlang dependency injection problem Date: Sat Jan 27 08:51:43 EST 2018 This really can't be so hard to set up. Should be just one symlink pointing into src directory. Trying it out with /etc/net on zoo. 
root@zoo:/etc/net/_build/default/lib/erl_ducktape# rm -rf src ; ln -s ~tom/git/erl_ducktape/src .

Problem is that it doesn't rebuild the dependencies. The simplest solution up to now is just to copy over the .beam files from a separate build. This is what I end up with:

# Temporarily replace _build/default/lib/ with build directories
# from main dev host. This allows the projects to be developed at the
# same time. After testing, commit the dependencies, push to repo and
# do "make clean all" here.
inject:
	for dep in erl_tools erl_ducktape; \
	do (cd _build/default/lib/ ; \
	    rm -rf $$dep ; \
	    ln -s /i/tom/git/$$dep/_build/default/lib/$$dep . ); \
	done

Entry: Monitors
Date: Mon Feb 19 17:16:44 CET 2018

Fix some code that could use monitors, e.g. registries.

Entry: Type errors
Date: Thu Feb 22 10:31:27 CET 2018

It's getting annoying again, especially after writing some Haskell code. How to fix? Whenever a type error occurs, first try to figure out why dialyzer didn't catch it. One way is to include all unit test code in the dialyzer run: just make sure type errors are not "tested" in the unit tests.

EDIT: This seems to be problematic. So, new plan:
- more unit and integration tests
- type error: make sure to add an annotation such that dialyzer catches it

Entry: Dialyzer function checks
Date: Thu Feb 22 11:19:04 CET 2018

One heuristic: dialyzer works at the function level. Passing functions from one task to another tends to obscure the information that can be obtained by its use. But the question remains: how to see why dialyzer doesn't think a particular function has a type error? It appears that I see a function that breaks a contract, but dialyzer can't figure that out.

Entry: nif reload
Date: Thu Feb 22 16:32:59 CET 2018

https://stackoverflow.com/questions/33426924/erlang-is-it-possible-to-reload-or-upgrade-a-nif-library-without-restart-the-sh

This needs an "upgrade" function, not just "reload"?

http://erlang.org/doc/man/erl_nif.html

Reload is deprecated.
By itself, loading a module does not properly reload the nif. This looks like a bug... A workaround is to delete and purge the code before loading. The example below loads the module (the module then loads the nif in its on_load function).

(gwtest_tom@kanda.zoo)39> code:delete(nif_gw).
true
(gwtest_tom@kanda.zoo)40> code:purge(nif_gw).
nif_gw:unload
false
(gwtest_tom@kanda.zoo)41> nif_gw:checksum(<<"">>).
nif_gw:load
123

Entry: Services vs functional updates
Date: Sat Feb 24 10:02:40 CET 2018

Happens a lot: writing a service, I need a version of a function that is performed on the state data type for inclusion in a low-level message handler. And I also need the same behavior externally as an RPC method. E.g. in devices.erl: m_find vs s_find. How to handle this more elegantly? Often, the fact that a behavior happens in both synchronized and non-synchronized code indicates that the non-synchronized version probably misses some synchronization in a way that is not obvious. I.e. this smell is "service spaghetti".

Entry: Dialyzer: specify return types
Date: Wed Feb 28 12:38:13 CET 2018

http://erlang.org/pipermail/erlang-questions/2015-December/087069.html

  The type system that Dialyzer is based on (success types) allows for
  the return value of a function to be over-approximated (i.e. include
  more values). An unfortunate side effect of that characteristic is
  that, in general, Dialyzer cannot be sure whether a particular value
  can really be returned from a function or not, neither can it discern
  whether a particular value is an overapproximation or not.

Entry: Alpaca
Date: Wed Mar 7 05:57:26 EST 2018

Do I bite the bullet and try out Alpaca?

https://github.com/alpaca-lang/alpaca

I really have trouble with complex, arbitrary, "human-level" data structures. Not so much with "system stuff".

Entry: Rust NIFs
Date: Wed Mar 7 06:22:48 EST 2018

https://github.com/hansihe/rustler

Complication: favors elixir.

Entry: Coding with maps
Date: Wed Mar 7 06:38:19 EST 2018

Maps are useful.
But they also encourage a sloppy coding style that is hard to type-check. However, it seems possible to exactly specify maps. What about separating defaults from configurations? E.g. all core routines have well-specified types, while a constructor function fills in defaults?

Entry: Print full term
Date: Wed Mar 7 14:45:09 EST 2018

rp(123).

https://stackoverflow.com/questions/5434248/erlang-shell-pretty-print-depth

Entry: Journal logs and restarts
Date: Thu Mar 8 07:49:20 EST 2018

Look into this more. To use Erlang's 'let it crash' (LIC) approach, it appears that using a journal to re-establish state is a good approach. By itself, LIC works well if it is stateless, or if the underlying operations are idempotent. I'm running into a case where I do have state that isn't easy to reconstruct.

EDIT: Let's formalize -> I want to start a process that is passed the contents of the journal, and will receive further messages in its mailbox, without any duplication. How to do this without race conditions?

Entry: List comprehensions are map + match filter
Date: Sat Mar 10 07:48:42 EST 2018

Not just map!

(gw@172.30.3.205)13> [X||{ok,X} <- [error,{ok,yes}]].
[yes]

This has bitten me several times.

Entry: Standard UI
Date: Sat Mar 17 08:54:00 EDT 2018

Building the thermostat. Trying to make a ui. Maybe there should be only one UI application? What I really want is widgets that can just be dropped in a page. So I will end up writing a web framework...

What is a widget?
- presentation model + diff -> rendering
- input events -> event handler

Maybe it is time to do this differently? Keep the presentation model on the client side? Use purescript or something? Choices choices.. Let's stick to Erlang for now.

Ok, ui build system doesn't run on the beaglebone due to

tar xf node-v6.11.0-linux-x64.tar.xz

WTF is this horrible tools mess!

Entry: Build architecture for Erlang
Date: Sat Mar 17 09:14:23 EDT 2018

Mainly, cross-compile binary code for any target. Solve this once and for all.
Erlang code can always be compiled on a build host. That's today's task.

- I have only one code base for all the custom code, and it sits on panda
- There are multiple targets

Entry: Erlang live coding
Date: Sat Mar 17 09:20:25 EDT 2018

1. What does OTP do?
2. How to create a build system that automatically pushes a .beam file to one or all targets. Same for .so

https://stackoverflow.com/questions/29047018/how-to-reload-all-otp-code-when-developing-an-otp-application

Entry: A new build system
Date: Sat Mar 17 09:31:46 EDT 2018

- try not to use native dependencies. for the ui, this means no jiffy.
- later, fix native dependency builds as an overlay to rebar builds

So first, fix erl_tools such that it can send messages back using BERT.

EDIT: Even that is too difficult to set up. What is rebar for? Dependencies that do not change. If they change, it's better to use a monorepo.

Entry: Building erlang code with binary dependencies
Date: Sat Mar 17 12:02:29 EDT 2018

Basically, it should build all the targets at once. It is possible to add this to the rebar config.

Entry: Web widgets
Date: Sun Mar 18 01:32:45 EDT 2018

Each widget is:
- presentation model: M
- event processor: (E,M) -> M
- model diff: (M,M) -> D
- diff to update: D -> js
- initial layout, initial M

What about writing the widget purely from that perspective, and writing a test for it. Then adding layout and update rendering code. Then, nesting. Probably should try it.

Entry: For thermostat ui, use a fully "live" coding approach
Date: Sun Mar 18 08:44:41 EDT 2018

What is the infrastructure needed?

1. initial: rebar builds an application
2. it is packaged and uploaded
3. it is started
4. code edit happens to a module
5. it is saved
6. compiled
7. pushed

Most of that is already there as erlang/distel code. So pretty much, this is to build "the monster". The monorepo that can be used to host any project. It's called "ui". Both rebar + staging?
- host deps in submodules
- have rebar pull from those on rebuild
- "overlay" an incremental build

Where is the code to do the upload? This should go into erl_tools. Use a "presentation model" to perform the updates.

Where is this supposed to run?
- panda (exo node)
- all other exo nodes

Now before this gets overwhelming... Where to start? Get exo back up.

EDIT: Already getting overwhelmed. Scale it down. There are two separate projects: code update and the exo dist.

Entry: Live coding
Date: Mon Mar 19 10:33:48 EDT 2018

REPLs are great, but expect tests are more reliable because they include all context in the source file. I'm trying to distill a procedure to avoid all the context setup problems that are bound to occur in a large stateful project.

Goal: use erl_tools/src/expect.erl and erl_tools/src/reflection.erl to have a single-key expect test button in emacs:
- save buffer to %.expect file
- run the expect test %.expect -> %.expect.new
- load %.expect.new into buffer, save it to %.expect file

The middle step depends on the build system. For now, this is set up as an eunit test. For this, expect.erl now produces a triplet for a failing test, leaving the original expected value intact, but including a third element with either the actual output or an error message in case it is not printable.

So for this particular setup (eunit), how to implement? Somehow the file itself needs to be associated to a build rule. This can be done by putting a Makefile in the directory that contains the .expect files.

Entry: Is spawn() synchronous?
Date: Fri Mar 23 10:41:14 EDT 2018

Pid = spawn(...), Pid ! msg.

I would assume so, but I've never seen it mentioned explicitly. As long as the process doesn't exit, is it guaranteed to receive that message? I.e. is there a race condition between a Pid being returned and the mailbox not being set up?

Entry: Incremental website development
Date: Sat Mar 24 12:07:16 EDT 2018

I really need something easier to use.
Some principles: - Use "expect-based" development to focus on a current (group of) function(s) and their application on some concrete test data. This is a very good way to bridge the "general" and the "concrete". - Incremental loading of JavaScript code. Maybe even get rid of the JavaScript compiler altogether? Can JavaScript be Erlangified? Though compiler isn't that slow. Fix reloads instead. Maybe focus on the actual problem. To create the ui, I need to be able to: - Compile the code, load it on an exo node - Once loaded, compile and reload fast Entry: Multi-platform rebar builds Date: Sun Mar 25 10:24:16 EDT 2018 There doesn't seem to be a good solution for this, as rebar re-uses the _build directory. So move this into the application for now. Entry: Native code? Date: Tue Apr 10 11:00:25 EDT 2018 -mode(native). http://stratus3d.com/blog/2016/07/02/escript-essentials/ Entry: Erlang in the browser Date: Tue Apr 10 12:23:38 EDT 2018 https://github.com/svahne/browserl/blob/master/browserl.js Makes me think about a different kind of experiment. Writing an Erlang interpreter, then compile it to asm.js It doesn't seem to make much sense to try to compile to Javascript. So before messing with this, do something in Rust maybe? Entry: A webserver Date: Sat Apr 14 22:53:21 EDT 2018 is somewhere for a widget to live. Entry: rpc really needs a monitor Date: Sun Apr 15 00:31:46 EDT 2018 This is why OTP is so important I guess... Because timeouts alone are not enough: we need to know if the process died or not. So take a good look at OTP gen_server calls, and put the same mechanism in obj.erl EDIT: obj.erl by default will not timeout, but will print a warning every 3 seconds Entry: Lexical names Date: Sat Apr 21 10:34:50 EDT 2018 Now, wouldn't it be nice to map all those identifiers used in Erlang -> Javascript -> Erlang onto lexical identifiers, so that references can be checked? 
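Picking up the "rpc really needs a monitor" entry above: the mechanism gen_server:call/3 uses can be sketched in a few lines, and the same shape could go into obj.erl. This is a sketch from memory, not the OTP implementation; the {call,...}/{reply,...} message format is made up:

```erlang
%% Monitor-based synchronous call: distinguishes a timeout from
%% the server dying before it could reply.
call(Pid, Req, Timeout) ->
    Ref = erlang:monitor(process, Pid),
    Pid ! {call, self(), Ref, Req},
    receive
        {reply, Ref, Val} ->
            erlang:demonitor(Ref, [flush]),
            Val;
        {'DOWN', Ref, process, Pid, Reason} ->
            %% Server died: crash the caller instead of hanging.
            exit({call_failed, Reason})
    after Timeout ->
            erlang:demonitor(Ref, [flush]),
            exit(timeout)
    end.
```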
Entry: Building releases for different platforms
Date: Mon Jul 2 16:16:35 EDT 2018

It seems to just not get the idea that there might be different binary dependencies. So let's forget about doing this incrementally: clean the _build directory, then rebuild the release. OR... cache the _build directory.

Entry: Instantaneous reload
Date: Sat Sep 8 21:40:52 EDT 2018

When writing code, I want fast feedback to weed out most errors. For typed code, it is usually possible to weed out a large percentage of errors before running. For untyped code, this is less so, so run time testing becomes more important. For Erlang it is possible to make the cycle almost immediate because of its code reload functionality. In erl_tools, I worked on this, but I mostly forgot how it worked. Here's a new take that I'll merge with the old approach.

- Emacs knows the file, and already performs flycheck. How about making that also load the code? It only needs to know where to put it.
- Are hidden nodes secure? I would like to just have a permanent connection from the dev image to any of the target images, but maybe that is not really a good idea.

So how did it work before? I was using the expect-based approach with the proper command under F6. Let's look at that again. Ok, I think this was the "expect" approach. See test/Makefile in the hatd project. So that's one way. I want faster, also for the application. Currently it already does an update using rsync and module reloading, but I really want instantaneous.

EDIT: So I've added the Makefile and %.erl.emacs_notify rule. Now I'm looking for the script that pushes the code. rpc_call.sh seems so thin... I had another script before. Where did it go? It is in exo/bin/rpc_call.escript. I forgot how I used this though.
tom@panda:~/exo$ grep -re rpc_call.escript *
apps/exo/test/Makefile: ../../../bin/rpc_call.escript default $*_expect run
bin/update-nodes.sh:./bin/rpc_call.escript $EXO_NODES erlang node || exit 1
bin/update-nodes.sh:./bin/rpc_call.escript $EXO_NODES _ update $TOP $(cd $TOP ; find -name '*.beam' $FIND_ARGS) || exit 1
bin/update-nodes.sh:./bin/rpc_call.escript $EXO_NODES _ copy $BUNDLE $EXO_TARGET/priv/static/bundle.js || exit 1
bin/update-nodes.sh:./bin/rpc_call.escript $EXO_NODES ws reload_all || exit 1
..

Ok so that has an existing mechanism. I'm going to do something else for the hatd project: have the makefile create the beam, then modify rpc_call.sh to read the file and push it into the nodes.

EDIT: No, use/modify the rpc_call.escript from exo. Maybe move to erl_tools?

EDIT: Doing single files is easy, but it really doesn't solve the problem. Often meaningful edits span multiple files, i.e. every API change is like that. So it is really necessary to make multi-module updates.

Entry: Expect tests
Date: Sat Sep 8 22:08:20 EDT 2018

I want a second version of this: one that leaves the layout of the code alone.

Entry: Incremental builds
Date: Mon Sep 10 11:08:49 EDT 2018

Currently (in the billed project) it is way too slow.

EDIT: I've opted to completely separate incremental builds. Basically:
- Create a GNU makefile that can run in parallel. Ignore all the files that are not being edited: the main build can compile and upload those.

Entry: STM32 Rust + Cauterize
Date: Mon Sep 10 11:10:19 EDT 2018

Getting the hunger back. Create an image on STM32 written in Rust, accepting a standard protocol, e.g. either ETF or Cauterize.

Entry: Tab completion with variable names?
Date: Mon Sep 10 14:47:41 EDT 2018

What I want is tab completion that pops up the definition of the source file. Shouldn't be that hard to do. I already have a parser. How does the tab completion work?
bound to: erl-complete

Note: this actually has a node argument that by default is set to (erl-target-node). An erlang call looks like this:

(erl-spawn
 (erl-send-rpc node 'distel 'functions (list mod pref))
 (&erl-receive-completions "function" beg end pref buf continuing
                           #'erl-complete-sole-function)))

Is there a simple wrapper around that? This requires a bit more effort than I thought to understand the design.

Entry: GUI widgets
Date: Wed Sep 19 15:57:55 EDT 2018

Some ideas.

- Use the "unit machine" concept. A single unit machine is a widget that is best managed using a presentation model.
- For other widgets, it makes sense to use a looser coupling, where each widget operates independently. In Erlang this can be done on a per-process basis.

I do want to do it right this time, using supervisor trees, because otherwise it is too hard to manage. So what does a widget look like?

- a control process that implements the event in -> render commands out control. this is most conveniently implemented using a presentation model, but that is not necessary. other models could work just as well. use what is appropriate.
- a web page is then a collection of mostly independent widgets. the only constraint is that there are no circular dependencies. for circular dependencies, a "unit machine" needs to be created.

Entry: Multi-process web page
Date: Thu Sep 20 12:28:41 EDT 2018

What I want:
- Initial (fixed) page layout with "holes"
- A collection of Erlang processes updating those holes using DOM manipulations.
- A way to add namespace to "id" attributes.

EDIT: A widget now looks like this. The spawner only needs to provide the cell in the web page, and a way to forward events to this process.

%% Test widget for multi-process widget approach.
%% Context:
%% - websocket, talking to web page with a cell called Name
%% - events tagged with Name get sent to us
test_widget(Ws, Name) ->
    {handler,
     fun() -> #{ ws => Ws, name => Name } end,
     fun(Msg, State) ->
             log:info("test_widget: ~p~n", [{Msg, State}]),
             State
     end}.

EDIT: The routing turns out to be a problem. Currently, everything is hmac encoded functions. Can I just send back tagged events? Because they have to be dispatched from the main handler.

EDIT: This is what worked:

Exml = [{pre,[],[[<<"Test Widget">>]]},
        {button,
         %% Buttons don't have values, but this is
         %% necessary to satisfy the input form template.
         [{onclick, web:app_send_input(handle)},
          {'data-decoder', boolean},
          {'data-value', true},
          {name, type:encode({pterm,{Name,button123}})}],
         [[<<"Test123">>]]}],

It allows decoding using

Form = web:form_map(Msg),

Yielding:

form: #{{test,button123} => {boolean,true}}

See later post on "flat events".

EDIT: There is also a 'button' decoder.

Entry: emacs problems
Date: Thu Sep 20 15:42:58 EDT 2018

Started spewing '*erl-output*' suddenly, and I don't know what I did.

Entry: Flat events
Date: Thu Sep 20 18:01:51 EDT 2018

So I have a way to use the old form-style API, but now these need to be routed to the correct process. I don't want to break anything, so use the following convention: for applications that consist of multiple subprocesses, create some glue code that will:

- encode all widget names with a {Name,_} prefix
- for every incoming form, lift out the tags and route accordingly.
- look up the correct process through the 'supervisor' field. supervisor:which_children
- do this for all attributes: before converting pterm to binary, prepend them.

Ok, works. But I've only implemented the special case of one form entry, which is likely the only thing we'll see since actual forms are not really used any more. I had this "yes, why not" moment after @joeerl tweeted something about every control being a process. Now, can this be made recursive?
Each init call could just start another supervisor, but that might need a different module for each supervisor call, unless the init method is made generic to start different kinds of supervisors? This is getting hacky though.

EDIT: What is a widget? It definitely contains some initial layout. So why not include that? Then it contains markers in that layout which are linked to behavior. The behavior is a process. Can I get this right the first time? Probably not. Is it possible to do it in a way that is probably fixable? Maybe. Otoh, the 2-phase startup is annoying, so why not get rid of it altogether? Let a page just be a cell. Keep that fact constant, and re-render the contents whenever necessary.

EDIT: Making this recursive might not be necessary, but it forces proper handling of identifiers. Basically, the widget should not know its own name.

EDIT: Trying to disentangle, I run into the problem that message decoding is a parameter that should be passed in. This is very annoying.

EDIT: Ok, just parameterized it with the type module, and now it actually looks simple.

Entry: Better dialyzer types
Date: Fri Sep 28 16:12:14 CDT 2018

What about replacing 'receive' with 'receive_abc', where the return type is restricted, e.g. as

case receive_abc() of
    a -> ..;
    b -> ..;
    c -> ..
end

Entry: Dependency graph
Date: Mon Oct 1 03:31:27 CEST 2018

https://github.com/eproxus/grapherl

Entry: Making it more smalltalky
Date: Thu Oct 4 11:30:25 CEST 2018

Basically: there are too many points where reloading a part of the application takes way too much time, and/or is very ad hoc due to different systems and languages being involved. One thing that's often useful is to change push to notify + pull.

Entry: Recreate thermostat with new ui widget approach?
Date: Mon Dec 3 15:32:35 EST 2018

I need to make sure that I can revert, so make it run elsewhere first. Set up the build system that can update all exo nodes. To make this work, first standardize on a set of templates.
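Back to the recursive supervisor question a few entries up: supervisor:start_link/2 passes its second argument straight to init/1, so one generic module can back any level of a widget tree by taking the child specs as data. A sketch; the module name widget_sup and the restart parameters are made up:

```erlang
-module(widget_sup).
-behaviour(supervisor).
-export([start_link/1, init/1]).

%% One generic supervisor module for the whole widget tree: the
%% child specs are passed in as data, so a nested widget
%% supervisor is just another widget_sup child spec.
start_link(ChildSpecs) ->
    supervisor:start_link(?MODULE, ChildSpecs).

init(ChildSpecs) ->
    {ok, {#{strategy => one_for_one,
            intensity => 1,
            period => 5},
          ChildSpecs}}.
```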
Entry: Port programs
Date: Tue Dec 4 21:03:34 EST 2018

Rebar doesn't handle multi-platform builds well, so what about not putting the C files in there at all? What about just fucking ditching this whole rebar thing for exo? Ok for the deps, but not for the core app. Just split it into 3 libs:
- exo.erl
- exo.rs
- exo.c

C is still needed for platforms that are hard to support in rust. So I have a toplevel project already: it's called "cross", and it sits in the buildroot vm. 27G image. That is "the artifact". Let's move this to the SSD.

Entry: Write more standard erlang
Date: Mon Dec 31 16:34:50 CET 2018

- Get to know supervisor trees better.
- Create a gen_server wrapper for obj.erl
- Create a gen_event wrapper for bc

It would be good to find out what these gen_ interfaces implement better than the fairly raw obj and bc bits.

Entry: typed protocols
Date: Mon Dec 31 16:40:49 CET 2018

The idea is to model some object communication as channels, so it is possible to fully specify what goes in and what comes out. A lot of actual program structure already is like that. To do this in erlang, one way is to wrap send and receive into dedicated functions, and have dialyzer check this. One way to make this easier is to not handle errors locally through pattern matching.

How do I think this actually works?

- What goes from receive -> typed function cannot be checked, since there is no way that dialyzer knows the type of received messages, and so this is likely ignored.
- At the send end, there is a constraint on what can go into the channel. So the specialized send method could be typed, and that type could then be reused in the specialized receive method.

What is a channel in this context? It is a constraint on what can be sent to a pid, so a pid somehow behaves as a channel on the sending side, but on the receiving end a channel is one of the protocols that a pid can receive. So instead of calling it a channel, let's call it a protocol.
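To make the "specialized send" idea concrete, here is a minimal sketch. The module name and the protocol type are made up; the point is that the -spec on send/2 is the protocol, and the receive side reuses it by matching only protocol shapes (no wildcard):

```erlang
-module(temp_proto).
-export([send/2, recv/1]).

-type msg() :: {measure, reference()}
             | {reading, reference(), float()}.

%% The spec is the protocol: dialyzer flags callers that push
%% anything outside msg() through this function.
-spec send(pid(), msg()) -> msg().
send(Pid, Msg) -> Pid ! Msg.

%% Receiving cannot be checked directly (dialyzer does not know
%% what arrives in the mailbox), but matching only msg() shapes
%% lets it infer the same type on the way out.
-spec recv(timeout()) -> msg().
recv(Timeout) ->
    receive
        {measure, _Ref} = M      -> M;
        {reading, _Ref, _Val} = M -> M
    after Timeout ->
            exit(timeout)
    end.
```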
Channels could be implemented as protocols containing references that are set up using some kind of handshake mechanism. Protocols are more appropriate: a process implements a protocol if it can handle the collection of messages in the protocol. There could be state-dependent constraints as an extension of this.

What to do practically?

- Remove all wildcard matches unless they are part of the protocol. This allows dialyzer to infer more.
- Limit the use of exceptions. Any "exceptional" case that should be handled by the application needs to either result in a return value that can be propagated using the usual return path, or should lead to a process crash, where the recovery is handled by a hierarchy of supervisors.

I've been using catch-all to throw to provide better debug messages, but the conflicting goal to have type errors be caught by dialyzer is probably better.

Entry: Composing receive
Date: Mon Dec 31 17:16:37 CET 2018

It doesn't seem to be easy to compose receive: e.g. to split the main receive call into multiple delegates, in such a way that type information is preserved. The typical one is delegation to obj:handle/2 as a tail call. Anyways. A first step should be to make the inferred (dialyzer) types explicit and easily checked, possibly even auto-inserted into the source code as comments?

man typer:

  TypEr is a tool that displays and automatically inserts type
  annotations in Erlang code. It uses Dialyzer to infer variable types.

Entry: gen_event
Date: Wed Jan 2 21:25:33 CET 2019

"In OTP, an event manager is a named object to which events can be sent." "The event manager essentially maintains a list of {Module, State} pairs, where each Module is an event handler, and State is the internal state of that event handler."

http://erlang.org/doc/design_principles/events.html

Entry: Encoding transition functions
Date: Mon Jan 7 10:57:46 CET 2019

State machine protocols can be encoded in types, by creating a sum type that has a {Msg,State} type that is bundled.
However, it doesn't seem possible to also validate this at the send end.

Entry: gradualizer
Date: Wed Jan 16 14:04:13 EST 2019

https://www.youtube.com/watch?v=_u1NDuFsW2A
https://github.com/josefs/Gradualizer

dialyzer approximates a union of tuples as a tuple of unions! dialyzer doesn't go very deep into structures

Entry: hot code reloading
Date: Fri Jan 18 09:57:31 EST 2019

Now, find out a way to do this properly, including reload and restart of other binaries.

https://medium.com/@kansi/hot-code-loading-with-erlang-and-rebar3-8252af16605b
https://news.ycombinator.com/item?id=10669131

Basically, avoid when possible. Just restart the VM. I don't need this in production. I only need it during development to make the turnaround shorter. So for erl_tools I'm sticking with a simpler

Entry: Broadcaster vs. handler
Date: Sat Jan 19 08:44:03 EST 2019

In many cases it is best to prefer a simple callback handler to a broadcaster.

Entry: Distributed code execution
Date: Sun Jan 27 09:58:14 EST 2019

If
- code is identical
- no local name (or other resource) resolution is used,
it doesn't really matter on which node code is running. However, I've been relying a lot on whereis/1.

Entry: Edge nodes
Date: Tue Jan 29 17:42:16 EST 2019

Two approaches:
- have edge nodes send data over vpn
- log into edge nodes using ssh

For ssh, create a dedicated key for each command.

ssh-keygen -t rsa -f temper

add temper.pub to /etc/dropbear/authorized_keys with:

command="/root/bin/temperv14.elf" ...

tom@panda:~/.ssh$ ssh -v -i ~/.ssh/temper -o IdentitiesOnly=yes -F /dev/null root@nexx1

Entry: numeric representation of IP address
Date: Wed Jan 30 09:36:50 EST 2019

ip_nm(IPNM) ->
    log:info("~p~n",[IPNM]),
    {match, [_|Ns]} =
        re:run(IPNM,
               "(\\d+)\\.(\\d+)\\.(\\d+)\\.(\\d+)/(\\d+)",
               [{capture,all,list}]),
    [A,B,C,D,M] = [list_to_integer(N) || N<-Ns],
    {D+256*(C+256*(B+256*A)),M}.
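Same function without the log call (log:info is from erl_tools, not stdlib), with one conversion worked out by hand to show the arithmetic:

```erlang
%% Parse "A.B.C.D/M" into {Int32, MaskLen}, self-contained.
ip_nm(IPNM) ->
    {match, [_|Ns]} =
        re:run(IPNM,
               "(\\d+)\\.(\\d+)\\.(\\d+)\\.(\\d+)/(\\d+)",
               [{capture,all,list}]),
    [A,B,C,D,M] = [list_to_integer(N) || N <- Ns],
    {D + 256*(C + 256*(B + 256*A)), M}.

%% 192*2^24 + 168*2^16 + 1*2^8 + 0 = 3232235776, so:
%% ip_nm("192.168.1.0/24") -> {3232235776,24}
```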
Entry: Model of the world
Date: Sat Feb 2 12:59:12 EST 2019

Erlang is:
- functions + data structures
- processes with send + receive (a.k.a. write and read)

The former is the pure part, relating data to data. Tying in the latter introduces notions of context and time, resulting in processes behaving as objects supporting protocols.

Other advantages:
- code reload
- serializable data structures (protocol-oriented)
- supervisors
- closures across nodes

Entry: Restarting monoliths
Date: Tue Feb 5 14:39:24 EST 2019

So Erlang's fixed data structures are one of the things that make code loads possible without restarting. To do this with a monolith, the state of any services needs to be saved and restored. This is extra work. One way to resolve this is to ensure an Erlang mirror process has the state, so it can properly restart the monolith. It seems the natural boundary between Erlang and port processes or other monolith services is the level at which state is stored. Keep the leaves stateless, except for cache.

Entry: Mutable supervisor
Date: Fri Feb 8 13:21:38 EST 2019

I want something very simple:
- A supervisor run from the main app
- Ability to add/remove processes

supervisor:start_child/2

So what about just adding it to the main supervisor?

(exo@10.1.3.29)26> supervisor:get_childspec(exo_sup,exo_log).
{ok,#{id => exo_log,
      modules => [exo_log],
      restart => permanent,
      shutdown => brutal_kill,
      start => {exo_log,start_link,[]},
      type => worker}}

So start_child/2 takes the same childspec interface.

Entry: Relying on anonymous functions across nodes
Date: Sun Feb 10 10:11:35 EST 2019

It is quite convenient, but it introduces tight coupling. It can be avoided by replacing all anonymous functions with named functions and extra arguments, i.e. lambda lifting. To understand better: how are anonymous functions encoded?

http://erlang.org/doc/apps/erts/erl_ext_dist.html#fun_ext

It uses an index into a module and a hash of the parse.
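The lambda lifting mentioned above is mechanical. A sketch, where do_thing/1 is a hypothetical function in the calling module:

```erlang
%% Tightly coupled: the serialized fun carries an index into its
%% home module plus a hash of that module's code, so the remote
%% node must run the exact same build of this module.
remote_coupled(Node, X) ->
    rpc:call(Node, erlang, apply, [fun() -> do_thing(X) end, []]).

%% Lambda-lifted: only module name, function name and arguments
%% travel over the wire, so the nodes can run different builds
%% as long as the do_thing/1 API stays stable.
remote_lifted(Node, X) ->
    rpc:call(Node, ?MODULE, do_thing, [X]).
```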
Entry: remsh exit
Date: Tue Feb 12 14:46:16 EST 2019

Problem is that the remsh application just sits there waiting for interactive input:

*** ERROR: Shell process terminated! (^G to start new job) ***

I do not see a way around this. Maybe best to use dtach.

EDIT: Using ssh + dtach now.

Entry: Expect tests
Date: Thu Feb 14 15:29:25 EST 2019

Can be done a lot dumber by just saving outputs.

obj:get(midi_raw,bc) !
    {subscribe,
     serv:start({handler,
                 fun() -> #{} end,
                 fun(Msg,State) -> log:info("~p~n", [Msg]), State end})}.

Entry: just send structs?
Date: Sat Feb 23 21:02:17 EST 2019

With all this ado about protocols, why not send raw structs to the microcontrollers, and generate the wrappers to/from Erlang maps.

Entry: distel
Date: Sun Feb 24 21:57:12 EST 2019

http://www.erlang-factory.com/upload/presentations/642/distel_10th_anniversary.pdf

Distel is actually quite neat. Took me way too long to have a closer look at it. Presentation slides help of course..

Entry: TAB completion of dynamic structure
Date: Sun Feb 24 22:19:40 EST 2019

Modify distel to perform some delegation to the application to complete dynamic object structures. There are currently 3 steps, which can be easily seen when hitting TAB while looking at the trace buffer.

- no ':': distel:modules is used
- with ':': distel:functions is used
- with complete M:F: distel:get_arglist is used

I see two ways. Dynamically generate module code and just use the module resolver, or do something special in the case the resolver function is displayed. E.g. exo:need( would display the children.

TODO: Give this a try. Should be fairly straightforward. See erl-complete-sole-function. A little more reading to do on how to get the values from an erlang function, and how to then display completions normally.

Entry: Erlang in the browser
Date: Sun Feb 24 23:47:38 EST 2019

So, without the kernel and all, is it possible to interpret core erlang in Javascript? Maybe this is what the previous project did?
https://elixirforum.com/t/running-beam-in-the-web-browser/16501
https://github.com/baryluk/erljs

Entry: Ports that need kill
Date: Tue Feb 26 18:55:30 EST 2019

Example is emacsclient.  If the process dies, the associated Erlang process gets a message.  The other way however, we want to kill the port process if the Erlang process dies.  It should be possible to do this in Erlang, but it doesn't solve the issue where the Erlang VM crashes and the child is reaped.  It seems some kind of wrapper is necessary, and socat seems to be simplest.

https://unix.stackexchange.com/questions/265464/process-not-closing-when-stdin-is-closed

Entry: cross-node linking
Date: Sun Mar 3 14:06:46 EST 2019

Does this actually work?  YES.  Problem is that I was starting a process on a different node through rpc.  This did not link it locally.

Entry: distel different nodes
Date: Fri Mar 8 18:42:14 EST 2019

Is it possible to talk to different nodes that have different cookies?  It probably is possible, but it seems that distel does assume a single network in the way it treats cookies.  So I'm sticking to the 1-1 link with exo at localhost as a hub into other functionality.  This requires modification of distel to pass an extra parameter with the necessary context to resolve some calls.

EDIT: Adding the extra parameter is too much work.  I need multiple connections.  This requires replacement of (derl-cookie) to make it node-dependent.  derl-connection-node could be used as the buffer local variable to map to the cookie.

(derl-node-host (intern "exo@10.1.3.2"))
"10.1.3.2"
(derl-node-name (intern "exo@10.1.3.2"))
"exo"

Some more interesting buffer local variables:

grep -nrI . -e defvar-local
./derl.el:35:(defvar-local derl-connection-node nil
./derl.el:38:(defvar-local derl-hdrlen 2
./derl.el:42:(defvar-local derl-alive nil
./derl.el:45:(defvar-local derl-shutting-down nil
./derl.el:48:(defvar-local derl-request-queue nil
./derl.el:51:(defvar-local derl-remote-links '()
./distel-ie.el:21:(defvar-local erl-ie-node nil
./edb.el:60:(defvar-local edb-module-interpreted nil
./edb.el:412:(defvar-local edb-pid nil "Pid of attached process.")
./edb.el:414:(defvar-local edb-node nil "Node of attached process.")
./edb.el:416:(defvar-local edb-module nil
./edb.el:419:(defvar-local edb-variables-buffer nil
./edb.el:422:(defvar-local edb-attach-buffer nil "True if buffer is attach buffer.")
./edb.el:666:(defvar-local edb-buffer-breakpoints nil
./edb.el:669:(defvar-local edb-buffer-breakpoints-stale nil
./erl-service.el:328:(defvar-local erl-viewed-pid nil "PID being viewed.")
./erl-service.el:330:(defvar-local erl-old-window-configuration nil
./erl.el:22: (or (fboundp 'defvar-local)
./erl.el:23: (defmacro defvar-local (var val &optional docstring)
./net-fsm.el:5: (or (fboundp 'defvar-local)
./net-fsm.el:6: (defmacro defvar-local (var val &optional docstring)
./net-fsm.el:16:(defvar-local fsm-buffer-p nil
./net-fsm.el:18:(defvar-local fsm-state nil "Current state.")
./net-fsm.el:19:(defvar-local fsm-process nil
./net-fsm.el:21:(defvar-local fsm-cont nil
./net-fsm.el:24:(defvar-local fsm-fail-cont nil
./net-fsm.el:27:(defvar-local fsm-work-buffer nil
./net-fsm.el:30:(defvar-local fsm-put-data-in-buffer nil

EDIT: Modified distel to allow for config of a function that maps node name to cookie.  This makes it easy to keep multiple node connections open with different cookies.  Then added wrappers for command completion and source lookup that map the current buffer (file or interaction buffer) to the proper distel node to send the query to.
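On the plain Erlang side, per-node cookies are supported directly: erlang:set_cookie(Node, Cookie) sets the cookie used for connections to that one node only.  A minimal sketch of a hub node reaching nodes in different cookie domains (node name and cookie value are made up for illustration):

```erlang
%% Sketch, not from the log above: connect to a node that uses a
%% different cookie than our own default.  erlang:set_cookie/2 with a
%% node argument affects only connections to that node.
connect_foreign(Node, Cookie) ->
    true = erlang:set_cookie(Node, Cookie),
    pong = net_adm:ping(Node),  %% crashes here if the node is unreachable
    Node.

%% e.g. connect_foreign('exo@10.1.3.2', some_cookie).
```

This per-connection cookie map is essentially what the distel modification has to mirror on the elisp side, replacing the single global (derl-cookie).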
Entry: A better wire format
Date: Sun Mar 10 10:10:31 EDT 2019

While ETF is fine, it does have some cruft that requires dealing with every time.  Mapping to an algebraic data type could likely be done simpler.

Entry: I am rediscovering shell programming via Erlang!
Date: Mon Mar 11 17:55:57 EDT 2019

A simple single-writer FIFO server:

#!/bin/bash
# To be used as:
# ./shell --run ./serv.sh
FIFO=$(dirname $0)/fifo
cleanup() {
    rm -f $FIFO
    exit 0
}
rm -f $FIFO 2>/dev/null
mkfifo $FIFO
echo "reading from fifo: $FIFO" >&2
trap cleanup EXIT INT TERM
while echo again; do
    (while read line; do $line; done) <$FIFO
done

EDIT: This has evolved a bit.  See asm_tools project.

Entry: Dependency graphs
Date: Wed Mar 27 10:02:47 EDT 2019

I'd like to restore the partial order for a project.  How to get a list of dependencies?

EDIT: I'm going to have to learn Xref.
http://erlang.org/doc/apps/tools/xref_chapter.html

Entry: rollback doesn't work
Date: Fri Mar 29 19:58:00 EST 2019

https://www.youtube.com/watch?v=mSFWw8TJYr4

Entry: learn xref
Date: Fri Mar 29 20:02:05 EDT 2019

At least for 2 things:
- missing functions
- enforce module dependencies
Before even running dialyzer.

Entry: gen_server / sys
Date: Fri Mar 29 20:05:24 EDT 2019

http://erlang.org/doc/man/gen_server.html
http://erlang.org/doc/man/sys.html

Entry: phoenix liveview
Date: Wed Apr 3 20:41:18 EDT 2019

i'm getting intrigued
https://polite-angelic-beaver.gigalixirapp.com/
https://dockyard.com/blog/2018/12/12/phoenix-liveview-interactive-real-time-apps-no-need-to-write-javascript
https://lobste.rs/s/pqatpx/continuations_for_web_development

Entry: links
Date: Sat Apr 13 07:44:41 EDT 2019

Something I don't understand: if A is linked to B, B is linked to C, and B traps exits, then if A gets killed, does C get killed?

Entry: Why use modules instead of anonymous functions?
Date: Mon May 13 08:33:54 EDT 2019

It is because of reloads.  This is the single most annoying thing about how Erlang is implemented: anonymous functions do not survive reloads!

I'm settling on this (tools.erl):

apply({M,F,EnvArgs}, Args)
  when is_atom(M) and is_atom(F)
  and is_list(EnvArgs) and is_list(Args) ->
    erlang:apply(M, F, EnvArgs ++ Args);
apply(F, Args)
  when is_function(F) and is_list(Args) ->
    erlang:apply(F, Args).

Entry: Some useful commands
Date: Thu May 30 23:38:34 EDT 2019

https://www.youtube.com/watch?time_continue=14&v=lXiiiLhwBI4
network load: nl/1.
riak ensemble

Entry: Native compiler
Date: Tue Jul 30 18:39:33 EDT 2019

http://erlang.org/doc/man/HiPE_app.html
Maintained separately.  Maybe not really an option.

Entry: Xref
Date: Thu Aug 1 10:08:35 EDT 2019

http://erlang.org/doc/apps/tools/xref_chapter.html
Mostly interested in this for now: "Typically, a module or a release can be checked for calls to undefined functions."

Entry: Always use monitors?
Date: Wed Aug 21 13:48:48 EDT 2019

Suppose we're forking off a bunch of workers for a parallel task, and wait for them to be done.  This needs to handle the case that one of them fails so we can propagate the error.  Maybe just always use monitors for that?

Entry: Continuation cache, routing slots
Date: Sun Sep 1 02:09:07 EDT 2019

Basically, RPCs contain a pid and a reference to be able to route the response.  I need that to go through a uC, and I don't want to serialize everything.  So I need a short-lived registry that creates a "routing slot".  This problem is quite general when translating between address spaces.  For the uC, a small pool of references would be enough.  What makes it difficult is the expiry of the cache.  If timeouts are errors that lead to crashes, it might not be necessary to expire.

Why is this important?  It allows multiple state machines on a single uC to be treated as individual things.

Entry: distel float encoding problem?
Date: Sat Nov 16 16:34:31 EST 2019

(erl-send (rai-buffer-pid "doodle.rkt") [set ((52 0.84))])

(exo@10.1.3.12)63>
{rai,doodle_pulse}: {set,[[52,1.869062041146049e-308]]}
{rai,doodle_pulse}: p52 1.869062041146049e-308;

check erlext-write-float

(erlext-encode-ieee-double 0.84)
(0 13 112 163 215 10 61 112)
(erlext-encode-ieee-double 0.123)
(63 191 124 237 145 104 114 176)

The first one clearly isn't right.  I don't want to fix this atm.

Entry: re-inventing event handlers
Date: Wed Nov 20 15:05:38 EST 2019

Basic ideas:
- Event handlers should run in the context of the handler process.  This is most flexible.
- If message sending is necessary, it can be done in the handler.

The latter requires the destination to be a Pid.  Why is the former better?  It allows filtering at the source, before spending bandwidth on messages.

Entry: processes vs state machines
Date: Thu Nov 21 15:04:38 EST 2019

So maybe it is time to view them as different things.  The main problem is that processes do not compose well, or rather have a fairly heavyweight composition mechanism (supervisor trees, monitors, reconnections, ...).  However, state machines expressed in a functional way do compose perfectly, as they do not live in the "real world".

What about always expressing functionality as a state machine, and then _hosting_ it in a process?  I.e. the init/1 and handle/2 functions could be treated as separate entities.

The distinction is this: state machines are pure if they do not need to refer to Pid values directly or indirectly.  This is a rare thing in Erlang code, but it can be kept in mind as a guide.

Entry: the erlang console / shell
Date: Mon Nov 25 17:16:01 EST 2019

So this worked, and then it didn't.  Ok, this had something to do with running it in emacs.  That's not it?

EDIT: I think this needs a terminal.
This doesn't work either:

tom@panda:~$ socat - EXEC:"/home/tom/gw_src/bin/remsh vybrid@10.1.3.81"
Eshell V8.3  (abort with ^G)
(gw_remsh_23028@127.0.0.1)1>

Entry: liveview
Date: Sat Dec 7 17:05:12 EST 2019

https://www.youtube.com/watch?v=U_Pe8Ru06fM

Not too dissimilar to the erl_tools web framework.  The big difference is of course that it uses diffing.  Which is nice.  Can this be used in Erlang?  This doesn't use IDs.  Maybe it uses structural addresses?

https://www.youtube.com/watch?v=9eOo8hSbMAc

It has some information, but not really all that clear to me.  Structural addresses would probably work just fine.

EDIT: I think I misunderstood.  It's not really doing diffing.  It's doing some smart translation of templates that identifies parts that do not need to be recomputed.  Here's the trick: if you know that a subtree depends only on a subset of the state, and that subset didn't change, then you know the subtree didn't change.  This is actually much deeper: it is "redo" for a tree.

Entry: Erlang and actual blocking threads
Date: Fri Dec 27 00:50:44 CET 2019

So I've been writing state machines / transaction machines for a while now.  Maybe it is time to create a threading mechanism that can handle "interrupts".  I.e. have something that is actually a task with several blocking points, but also has some form of introspection to do things such as restarts / resets, or inspecting current state.  Maybe the Erlang debugger is what I'm looking for?

Entry: Erlang and Haskell code gen
Date: Sat Jan 4 13:56:08 CET 2020

I want to try something new.  My only issue with Erlang is lack of types.  So what about keeping Erlang as a system layer, and moving anything "algorithmic" into Haskell code generators?  It might be better to go straight for BEAM code.  Or LFE.

Entry: Core erlang
Date: Sat Jan 4 14:48:29 CET 2020

Looking at intermediates for BEAM code gen.
From https://wiki.haskell.org/Yhc/Erlang/Proof_of_concept

erlocaml: this binding could be a way to use OCaml in distributed systems, where Erlang is used as a "systems glue" for supervision, load balancing, replication, etc.
https://code.google.com/archive/p/erlocaml/

Here's some information about using Core Erlang:
https://8thlight.com/blog/kofi-gumbs/2017/05/02/core-erlang.html

Entry: Running external scripts
Date: Mon Feb 10 08:32:25 EST 2020

So is it possible to use Erlang's CPU core/thread management to intelligently run tasks in parallel?  Or do I need to create some kind of throttling mechanism for external processes to make sure I don't spawn too many?

Too vague.  Probably some kind of work queue is needed: the Erlang process that launches a port program just sits there waiting for it, so by itself there is nothing that bounds the number of external processes.

Entry: State machines vs. processes
Date: Wed Feb 12 07:42:42 EST 2020

This should be explored a bit more.  In many cases it makes sense to "import" behavior into a state machine.  The main reason is that it is often simpler to handle concurrency through a state monitor process, and making things more monolithic removes a lot of management overhead.  In short, when possible, compose the pure "handle" functions instead of creating process trees.  To increase compositionality, expose the handle functions such that they can actually be reused.

Entry: Implementing monads in Erlang
Date: Mon Feb 17 20:22:15 EST 2020

https://github.com/rabbitmq/erlando
This has the monad implemented as a module.  Neat.

Entry: Logging directly to buffers
Date: Wed Apr 22 10:43:42 EDT 2020

Nice idea, but it seems to be problematic due to emacs performance issues.  It needs to be throttled somehow.
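The work queue mentioned in the "Running external scripts" entry could be a small pool process; a minimal sketch, assuming jobs are zero-argument funs (module and function names are made up).  Using monitors instead of links means a crashing job frees its slot the same way a finishing one does, which also connects to the "Always use monitors?" entry:

```erlang
-module(throttle).
-export([start_link/1, run/2]).

%% Pool process: at most Max jobs run concurrently, the rest wait in
%% FIFO order.
start_link(Max) ->
    spawn_link(fun() -> loop(Max, 0, queue:new()) end).

%% Job is a fun/0, e.g. fun() -> os:cmd("sleep 1") end.
run(Pool, Job) ->
    Pool ! {run, Job}, ok.

loop(Max, Busy, Q) ->
    receive
        {run, Job} when Busy < Max ->
            spawn_monitor(Job),
            loop(Max, Busy + 1, Q);
        {run, Job} ->
            loop(Max, Busy, queue:in(Job, Q));
        %% A job finished or crashed: start the next one, or shrink.
        {'DOWN', _Ref, process, _Pid, _Reason} ->
            case queue:out(Q) of
                {{value, Job}, Q1} -> spawn_monitor(Job), loop(Max, Busy, Q1);
                {empty, Q1}        -> loop(Max, Busy - 1, Q1)
            end
    end.
```

This only bounds concurrency; propagating job results or errors back to the caller would need the pid+reference routing discussed in the "Continuation cache, routing slots" entry.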