LittleDan's comments

LGTM! Smells like Redux (in a good way). But then ultimately at the root you probably want the event to update your “model”, and then that leads to an update of the “view”. This is the part where signals can be useful.
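
A rough sketch of what I mean; the tiny signal helper here is illustrative, not any particular library's API:

    // Hypothetical minimal signal: the event writes to the model, and
    // subscribed views re-render from it.
    function createSignal(initial) {
      let value = initial;
      const subscribers = new Set();
      return {
        get: () => value,
        set: (next) => { value = next; subscribers.forEach(fn => fn(value)); },
        subscribe: (fn) => { subscribers.add(fn); fn(value); },
      };
    }

    const count = createSignal(0);                           // model
    count.subscribe(v => console.log(`view: count = ${v}`)); // view re-renders from the model
    // The event handler only touches the model.
    const onIncrementClick = () => count.set(count.get() + 1);
    onIncrementClick(); // logs "view: count = 1"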


Look at legend-state, it's most definitely not Redux (in a good way IMO).


I think there is room for improvement in how we explain this. The problems aren’t really visible in this small sample and come up more for bigger things. PRs welcome.


Perhaps mention the tradeoffs between a simple, easy-to-explain example and a more comprehensive one? With links to more complex codebases? With a before & after?


I wouldn't be surprised if Ryan Carniato already has a perfect explanation somewhere :)


> what do you do to retain talent? Wouldn't senior employees look elsewhere for better pay?

Giving everyone a true sense of co-ownership is great for retention. Plus interesting, meaningful work, good pay, etc. Why would you need to know that you make more than your coworkers?


It would be great to hear feedback from everyone on the Temporal survey: https://forms.gle/iL9iZg7Y9LvH41Nv8


We wrote about this in detail in http://v8project.blogspot.com/2016/04/es6-es7-and-beyond.htm... under the heading "Proper Tail Calls".

tl;dr we implemented it, it works, but we are not shipping it because we believe it would hurt developers to lose some stack frames from Error.stack in existing sites, among other issues.
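
A minimal sketch of the kind of frame loss in question (hypothetical function names; proper tail calls only apply in strict mode):

    'use strict';
    function inner() { return new Error().stack; }
    function middle() { return inner(); }      // tail call: middle's frame can be elided
    function outer() { return middle() + ''; } // not a tail call: outer's frame stays
    // Without proper tail calls, the reported stack lists inner, middle, and outer.
    // With proper tail calls, middle's frame is gone, so only inner and outer appear.
    console.log(outer());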


Our approach was to make our debugger still show those frames, and to observe that in the year since we've had tail calls in builds, we haven't seen a single compatibility issue from the change to error.stack behavior.


I program in Lua, which does do TCO, and I've never found it to be an issue with debugging. Now, that could be because of the ways I use TCO---I tend to use it in a recursive main loop:

    function mainloop()
      local packet = receive()
      if not packet then
        syslog('warning',"error")
        return mainloop()
      end
      process(packet)
      return mainloop()
    end
(mainly because Lua does not have a 'continue' statement, and this is the best way I've found to handle that) or in state machines:

    function state_a(state)
      return state_b(state)
    end

    function state_b(state)
      return state_c(state)
    end

    function state_c(state)
      if somecondition() then
        return 'done'
      else
        return state_a(state)
      end
    end
The thing to remember is that a TCO is a form of GOTO. And with GOTO, you have no stack entry, but given that this is a controlled GOTO, I don't see much wrong with it. Do most programmers find TCO that confusing? Or is it the perception that most programmers will find TCO confusing? Have any real studies been done?


You will like the examples in the 'original TCO' paper.

http://repository.readscheme.org/ftp/papers/ai-lab-pubs/AIM-...


If the main worry is information loss when debugging, why not figure out a mechanism that's 98% accurate but much more efficient than a full redundant shadow stack?

For example, you could compact the stack every 200 frames it grows, but never remove frames in the top 50 or bottom 50. How often in practice would that give you a misleading view of the stack? (Assume that each frame keeps track of how many tail frames are omitted after it. If needed, assume that tail frames within X distance of a non-tail frame will not be omitted ever.)
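
A rough sketch of that kind of compaction (the names and thresholds are just the ones from this comment, purely illustrative):

    // Frames are plain objects; `omittedTailFrames` records how many
    // tail frames were dropped immediately after a frame that was kept.
    const COMPACT_THRESHOLD = 200;
    const KEEP_TOP = 50;
    const KEEP_BOTTOM = 50;

    function maybeCompact(stack) {
      if (stack.length < COMPACT_THRESHOLD) return stack;
      const kept = [];
      for (let i = 0; i < stack.length; i++) {
        const frame = stack[i];
        const nearBottom = i < KEEP_BOTTOM;            // oldest frames
        const nearTop = i >= stack.length - KEEP_TOP;  // newest frames
        if (frame.isTailCall && !nearBottom && !nearTop && kept.length > 0) {
          // Drop the tail frame but remember that it existed.
          kept[kept.length - 1].omittedTailFrames += 1;
        } else {
          kept.push({ ...frame, omittedTailFrames: frame.omittedTailFrames || 0 });
        }
      }
      return kept;
    }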


We figured out such a mechanism and we call it ShadowChicken: http://trac.webkit.org/changeset/199076

It works great!


One of the things I like most about WebKit is that several of you reliably write awesomely informative commit messages.

(I hope Felix sees this particular commit sometime; I think he might feel a little flattered! Also, wow, that's an excellent summary by Peter Bex.)

Another is that you keep looking into old quiet dark corners of language nerdery and actually make use of the good ideas lurking there (notably while retaining the "no performance regressions EVER" tyranny).

I think there are some neat ideas in Chicken Scheme's compiler too.

Also, did you have a raiding party on T when doing DFG? (cf. Olin Shivers at http://www.paulgraham.com/thist.html starting with the paragraph, "This brings us to the summer of 1984. The mission was to build the world's most highly-optimising Scheme compiler." and notably also the paragraphs starting "Richard Kelsey..." and "Norman Adams...". Always take ideas from Shivers, at least if they're faster in practice. Also, sorry for the several edits. I forgot how good this overview was, and how much meat is in it.)


pizlonator, is ShadowChicken cheap enough to turn on all the time and make Error.stack work similarly to how it would without PTC?


It probably could be, but we deliberately don't do it, because:

1) I'm not aware of complaints about the change to error.stack behavior from users or developers. I don't know of an app that broke because of the change to error.stack. I don't know of an app whose telemetry got messed up because of the change to error.stack. So, we don't have a real-world test case that would be improved or fixed by integrating ShadowChicken into error.stack. We're not going to impose any overhead, or spend time trying to optimize that overhead, if it isn't going to benefit anyone.

I've heard lots of hypothetical fears about error.stack changing, but I haven't seen a real-world example of the change in error.stack behavior being harmful. If you know of an app that breaks because of our error.stack change, please let us know!

2) Philosophically, we view the feature as PTC (proper tail calls), not TCO (tail call optimization). If it was an optimization then we'd want it to be hidden from the user. But that's not what PTC is: it's a guarantee to the user about how the stack will behave. Therefore, we make error.stack precisely reflect PTC. We go to great lengths to emulate PTCs in some cases to make this work, for example if the optimizing JIT is involved. For example:

    function foo() { ... }                // say that this is inlined
    function bar() { return foo(); }      // this tail-calls foo; say that this is inlined
    function baz() { return bar() + 1; }  // say that our top-tier JIT compiles this

In this case, foo and bar will sort of cease to exist since all that really matters for execution is the code that the JIT generated for baz, which now also includes the code for foo and bar. Inlining is super careful about error.stack. In this case, our optimizing JIT's inlining data will include complete information about the call stack (baz->bar->foo) but will flag the bar frame as tail-deleted so that error.stack will only show baz->foo.

So, instead of making ShadowChicken hide PTC from error.stack, we actually have an entirely separate set of features to make error.stack precisely reflect the reality of PTC. On the other hand, if you open the inspector, we want ShadowChicken to show you the tail-deleted frames and to flag them appropriately. The inspector integration WIP is here: https://bugs.webkit.org/show_bug.cgi?id=156685

Screenshot: https://bug-156685-attachments.webkit.org/attachment.cgi?id=...

TL;DR. JSC doesn't try to lie to its clients about PTC. PTC is part of the language, so we precisely reflect PTC's behavior in error.stack and in the inspector (the tail-deleted frames show up but are flagged as such).

(EDIT: I changed the definition of bar and baz above because my original example didn't have the tail calls that I wanted.)


I will bite the bullet: Do you have a sense of how many devs use Safari compared to Chrome/FF for debugging?


I don't have such numbers, and I'm not sure they would be relevant to this discussion.

We already know that debugging isn't the issue. ShadowChicken solves the debugging problem and other VMs could do it, too. ShadowChicken is just one possible algorithm in a much larger family of chicken algorithms.

The only way that PTCs are observable outside the debugger - beyond making you run faster and use less memory - is error.stack. Hence the challenge: find me a website that uses error.stack in such a way that PTC breaks that website. Surely if the other VMs are so against PTC on the grounds that it will break websites, they will be able to tell us about a website that broke in Safari because of PTC.
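
For reference, a hypothetical pattern that could be affected (the names are made up; the point is code that assumes a particular caller frame is always present in error.stack):

    'use strict';
    function logCaller(message) {
      const frames = new Error().stack.split('\n');
      // Fragile: assumes the frame two levels up is always present. Under
      // PTC, a strict-mode caller that reached here via a tail call may be
      // missing from the stack, shifting or removing this entry.
      console.log(message, 'called from', frames[2]);
    }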


I do. Sometimes.

Even if it were the worst engine since IE4, it's Not Chrome(tm) and can help when Chrome dev tools fail (or - more likely - I fail at working with Chrome dev tools and need a fresh perspective).

It's also good practice to run the profilers at least occasionally, because (a) performance in Safari is relevant and I've been bitten by idiosyncrasies where one browser took >5x longer than another (in all directions), and (b), as above, just by being different they may add useful information.


In the linked commit the "readme" states a 6% overhead.


Ah, too bad to hear that's the route you're going, given the discussion here: https://twitter.com/getify/status/716861612850749440

What changed your/the team's mind?


From the post:

"The V8 team is already working to bring upcoming features such as async / await keywords, Object.prototype.values() / Object.prototype.entries(), String.prototype.padStart() / String.prototype.padEnd() and RegExp lookbehind to the runtime. "

I'm working on async/await in V8, together with Caitlin Potter. Browser support is in progress for Firefox and Safari as well. It didn't make the cut for ES2016, but it is at Stage 3 at TC39. I'm optimistic that it'll get into the main draft specification at the next TC39 meeting after we have a second spec-compliant browser implementation, which doesn't seem very far off.


Sorry for the inaccurate shorthand; maybe that should read that SpiderMonkey supports it. Eric Faust of SpiderMonkey is a co-champion of the proposal, and spoke against implicit PTC at the March 2016 TC39 meeting. It's hard to get much stronger in support of a proposal than being a champion, and Eric works for Mozilla. From that discussion, it also sounded like there was support from the Mozilla devtools team as well.

I'm interested in getting everyone's point of view. We've been discussing pros and cons at https://github.com/tc39/proposal-ptc-syntax/issues and https://github.com/tc39/ecma262/issues/535 , and it'd be great to have your input, including the overturning-prior-consensus issues you raised in committee and anything else that comes to mind.

EDIT: How do you like the new wording "For these reasons, the V8 team strongly support denoting proper tail calls by special syntax. There is a pending TC39 proposal called syntactic tail calls to specify this behavior, co-championed by committee members from Mozilla and Microsoft." ?


Yeah, I won't try to speak for Eric or the SpiderMonkey team. Maybe Eric is firmly in support of STC, but in my experience he's very good at going beyond just implementation concerns and considering all the design constraints (one of which is the cost/benefit analysis of new syntax to the user model). IMO the important question isn't who supports what but what's the best outcome. AFAICT, all three of PTC, STC, and no tail calls are on the table, but there's more hashing out to be done.

Your new wording seems totally fine -- sorry if I was pedantic, and I'm really not bent out of shape about your blog post. I just want to be sure that people don't get confused about where things stand. New features such as STC require time to bake (which is part of what the multi-stage lifecycle for proposals is all about) -- I only meant to clarify the state of the discussion.

Edit: Grrr, re-reading this it still feels like I'm speaking for Eric. He's his own guy, I should shut up about his position! All I mean to say is, I don't think anyone should be staking out strong positions at this point in syntax design. The design process is iterative and uncovers new constraints and effects, and we should all keep open minds and work collaboratively. I'm open to all possible outcomes: PTC, STC, no tail calls at all. Tricky space!


And you can use that sort of library in ES6 because generators were standardized.
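
For anyone unfamiliar with the pattern: a minimal sketch of a generator-driving helper in the spirit of such libraries (this exact helper is illustrative, not any particular library's API):

    // Drives a generator that yields promises, resuming it with each
    // resolved value, so the generator body reads much like async/await.
    function run(genFn) {
      return new Promise((resolve, reject) => {
        const gen = genFn();
        function step(method, arg) {
          let result;
          try {
            result = gen[method](arg);
          } catch (err) {
            return reject(err);
          }
          if (result.done) return resolve(result.value);
          Promise.resolve(result.value).then(
            value => step('next', value),
            err => step('throw', err)
          );
        }
        step('next');
      });
    }

    // Usage: yield promises where you would otherwise await them.
    run(function* () {
      const response = yield fetch('/some/url'); // illustrative URL
      return yield response.json();
    });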


Not sure what you mean by virtual DOM. From the WebAssembly FAQ:

"Is WebAssembly trying to replace JavaScript? No! WebAssembly is designed to be a complement to, not replacement of, JavaScript (JS). While WebAssembly will, over time, allow many languages to be compiled to the Web, JS has an incredible amount of momentum and will remain the single, privileged (as described above) dynamic language of the Web."

https://github.com/WebAssembly/design/blob/master/FAQ.md#is-...

The WebAssembly team is being incredibly thoughtful and open about their motivation and long-term plans, which is very refreshing.


ES2016 is proposed to be based on Ecmarkup, a more modern system which will let us collaborate on Github more easily, rather than a canonical MS Word document on the editor's computer.

