
You might want to check out Glitch, a programming model that attacks those problems head on with replay and rollback:

http://research.microsoft.com/en-us/people/smcdirm/managedti...

There is nothing in reactive programming that implies being incremental, but Glitch does it anyway (in the sense that tasks whose dependencies haven't changed aren't replayed). On the other hand, Glitch might replay tasks more often than is optimal.




> There is nothing in reactive programming that implies being incremental

Yes, there is: if it weren't incremental, you could replace any "reactive" program with one that just recomputes its outputs from scratch every time something changes. Reactive programming solves this by recomputing only the parts that change.
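To make the distinction concrete, here's a toy sketch (the names and shape are mine, not from any particular FRP library): a naive evaluator recomputes the whole expression on every change, while an incremental one caches a subresult alongside the input it depends on and reuses it when that input hasn't changed.

```haskell
import qualified Data.Map.Strict as M

-- Naive "reactive" evaluation: recompute everything from scratch.
naive :: M.Map String Int -> Int
naive env = (env M.! "a") * 10 + (env M.! "b")

-- Incremental evaluation: cache the subresult that depends on "a",
-- and only recompute it when "a" actually changed.
incremental :: Maybe (Int, Int)        -- cached (a, a * 10)
            -> M.Map String Int
            -> (Int, (Int, Int))       -- (result, refreshed cache)
incremental cache env =
  let a = env M.! "a"
      scaled = case cache of
        Just (a0, s) | a0 == a -> s    -- dependency unchanged: reuse
        _                      -> a * 10
  in (scaled + env M.! "b", (a, scaled))
```

Both versions compute the same result; the incremental one just skips the multiplication when only "b" changed, which is the whole point being argued about above.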

PS: Thanks for the link, I'll look into it!


Well then, functional reactive programming is not incremental, especially in implementations based on pull (i.e. polling). Event-stream-based systems (like Rx) provide incrementally growing event streams, but that's it. I'm not even sure whether they suppress event firing on value streams when an incoming value matches the last one.

I mean, it seems like reactive computations should be incremental, yes, but you won't find many (any?) models that deliver on that. On the other hand, you have plenty of incremental models that aren't reactive (e.g. self-adjusting computation). Glitch is designed to be both reactive and incremental, so it does "replay when something changes" and is able to suppress change propagation if/when a change fizzles out.
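The "fizzling out" idea can be sketched in a few lines (this is my own illustration, not Glitch's actual API): recompute a node, but report to the scheduler whether its dependents need replaying at all.

```haskell
-- Recompute one node of a dependency graph and report whether its
-- output actually changed; if not, downstream replay can be skipped
-- (the change "fizzles out" at this node).
replayNode :: Eq o => (i -> o) -> i -> o -> (o, Bool)
replayNode f input cachedOut =
  let out = f input
  in (out, out /= cachedOut)   -- False means: don't propagate further
```

For example, if a node doubles its input and the input change happens to produce the same output as before, the `Bool` comes back `False` and no dependent tasks are replayed.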


> then you could replace any "reactive" program by one which just recomputes its outputs every time something changes, from scratch. Reactive programming solves this by only recomputing parts which change.

I find that to be completely desirable. That behavior is exactly what I want... simply done more efficiently [0].

[0] Also modulo a whole bunch of local state. AFRP handles this well by having a good notion of what it means to "switch an arrow in," but it's been a challenge for applicative/monadic FRP. Rx programming (i.e. "not FRP") tends to solve this problem by ignoring its existence and just littering local state everywhere.


Just to note, many definitions of "reactive" are not based on recomputation at all (just react to those events, dammit, whatever that means). Even for those that are, there are many other issues to weigh against incremental performance, like state preservation in a stepped recomputation, or the implicit state that arises from aggregating over an event stream (FRP). Note that FRP is incremental in the time dimension (adding a new time point to the program doesn't require recomputing the program for previous time points), but we really want more than that: don't recompute the parts of the program execution that are unaffected by changes at the new time point.

It isn't clear to me that arrowized FRP is incremental. I think some of the less pure FRPs are (like Flapjax) given that they do a topological sort of the signal graph to repair it (if they were going to redo everything, the sort wouldn't be necessary).
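The topological-sort point can be illustrated directly (this is my own sketch, not Flapjax's implementation): re-evaluating a signal graph in dependency order guarantees a node with two updated inputs is visited once, after both of its parents.

```haskell
import Data.Graph (graphFromEdges, topSort)

-- A tiny signal graph: src feeds f and g, which both feed sink.
-- Evaluating nodes in topological order means sink is repaired
-- exactly once, after both of its inputs are up to date.
signalOrder :: [String]
signalOrder =
  let (g, fromVertex, _) =
        graphFromEdges
          [ ("src",  "src",  ["f", "g"])
          , ("f",    "f",    ["sink"])
          , ("g",    "g",    ["sink"])
          , ("sink", "sink", [])
          ]
      name v = let (n, _, _) = fromVertex v in n
  in map name (topSort g)
```

Here `topSort` always places "src" first and "sink" last; as the comment above notes, if the system were going to redo everything from scratch anyway, computing this ordering would be pointless.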


To be clear, I'm trying to say that a (nice) recomputation semantics is desirable because it makes it possible to reason about your program while ignoring time. Obviously, an implementation is preferable if it can speed things up via incremental computation, and that ought to be nice as well.

AFRP is a good example here. AFRP semantics are easily stated in a recomputational way. That makes it necessary to talk about causal AFRP and bounded-history AFRP, which are nice terms to think about a computation with (if sort of obvious). Then the efficient implementations (of causal AFRP) are themselves incremental, for efficiency.


Do you have any citations about AFRP implementations being incremental? I haven't been able to find much in my own literature survey.


Perhaps I'm not understanding the terminology, but every implementation I've ever seen consumes input incrementally. E.g.

    data (i ~> o) =           A (i -> (o, i ~> o))   -- state is the continuation arrow
    data (i ~> o) = forall s. A (i -> s -> (o, s))   -- state is explicit and existential
In each case, the inputs are consumed sequentially, the outputs are returned immediately, and the local state is updated for the next time around.
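A runnable version of the first representation above (with a concrete name, `SF`, substituted for the `~>` type operator) makes the "consumed sequentially, state updated" claim explicit: a running sum carries its accumulator forward as local state, so each new input is folded in without recomputing anything.

```haskell
-- The first representation above, made concrete: the "rest of the
-- computation" returned with each output is the local state.
newtype SF i o = SF { step :: i -> (o, SF i o) }

-- A running sum: each input updates the accumulator carried in
-- the next arrow, so prior inputs are never revisited.
sumSF :: Int -> SF Int Int
sumSF acc = SF $ \x -> let acc' = acc + x in (acc', sumSF acc')

-- Drive an SF over a list of inputs, one step at a time.
runSF :: SF i o -> [i] -> [o]
runSF _  []     = []
runSF sf (x:xs) = let (y, sf') = step sf x in y : runSF sf' xs
```

Running `runSF (sumSF 0) [1,2,3]` produces `[1,3,6]`: the output at each step is returned immediately and only the accumulator survives to the next step, which is the sense of "incremental in the time dimension" discussed above.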

If you're referring to the ability for a new event to update the signal network only partially (in Elliott's terminology, "push" semantics) then there's Amsden's TimeFlies library.



