h-cobordism's comments

I'm pretty sure you're right on all counts; I'm not even sure that shared immutable borrows would work under the very simple transform given in the README.

Edit: Also, the user-facing language doesn't seem to have linearity at all, i.e. all values may be cloned. But linearity is something that people use to guarantee certain invariants all the time in Rust (and ATS, I've heard), so this seems like a misstep. It also means that memory allocation is implicit, which makes things less predictable for users.


yes, i found the first example quite surprising. there doesn't seem to be any request to copy the string; it is simply, and unintuitively, copied/produced three times. i would prefer the language have some kind of `dup` or something, which gives you an additional reference to the object.
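
for comparison, here's a rough rust sketch of what the explicit version might look like (purely illustrative, not the language from the README; in rust, String is not Copy, so every extra copy is spelled out and each one is a visible allocation):

    fn consume(s: String) {
        println!("{}", s);
    }

    fn main() {
        let s = String::from("hello");
        // each extra use of the string is an explicit clone, i.e. a visible allocation
        let a = s.clone();
        let b = s.clone();
        consume(s);
        consume(a);
        consume(b);
    }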

as it stands, it has transferred allocation from the runtime to the compiler. but the objective is to put it into the hands of the developer, since only then is it clear, when reading and writing code, where allocations happen.

i begin to suspect that a typed joy may be more practical (or rather: match my desired improvements) than clarifying allocations in haskell.


> hey "ECMA script is portable and accessible"! So lets "browserify" all the things

It's not because JavaScript is portable or accessible (it's definitely not portable, and if you wanted accessibility you could do much better), it's simply because it's the most popular language.


Which is an extremely stupid reason.


I skimmed the Eloquent docs[0] but I couldn't see anything that sets it apart from other ORMs. What makes it special?

[0]: https://laravel.com/docs/7.x/eloquent


Well, the biggest thing is that it writes the SQL I would hope it to write in most cases, reliably.

In particular, I think relations are great to work with in Eloquent.


(1) That sounds like a recipe to unintentionally miss a ton of concurrency. Having to put an `await` there is a great indicator that you're forcing an order of execution.

(2) Can you give an example of an async and a non-async subroutine that have "the same semantics"?


1. There is no need for await. Threads imply sequential execution.

2. sleep vs. delay in either C# or Kotlin.
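
(A Rust analogue, purely for illustration and assuming the tokio runtime: std::thread::sleep and tokio::time::sleep have the same semantics, "pause for this duration", but one blocks the thread and the other suspends the task.)

    use std::time::Duration;

    // blocking version: pause this computation for one second
    fn pause_blocking() {
        std::thread::sleep(Duration::from_secs(1));
    }

    // async version: same observable semantics, but it suspends the task
    // instead of blocking the thread (assumes an async runtime such as tokio)
    async fn pause_async() {
        tokio::time::sleep(Duration::from_secs(1)).await;
    }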


> 1. There is no need for await. Threads imply sequential execution.

How do you execute doA() and doB() concurrently? In C# you can do something like:

   var t1 = doA();
   var t2 = doB();
   await Task.WhenAll(t1, t2);
   var a = t1.Result;
   var b = t2.Result;
EDIT: Ah, never mind, I just saw "You only need to submit if you want to do stuff in parallel and then join" above.


The reality is that you rarely want to doA and doB concurrently, so optimizing syntax for that case is not useful, whereas you want to be able to call functions without having to worry about their color all the time, where "all the time" here is typically >1 time per function.

Many of you are perhaps scratching your head and going "What? But of course I concurrently do multiple things all the time!" But this is one of those cases where you grossly overestimate the frequency of exceptions precisely because they are exceptions, and so they stick out in your mind [1]. If you go check your code, I guarantee that either A: you are working in a rare and very stereotypical case not common to most code or B: you have huge swathes of promise code that just chains a whole bunch of "then"s together, or you await virtually every promise immediately, or whatever the equivalent is in your particular environment. You most assuredly are not doing something fancy with promises more than one time per function on average.

This connects with academic work that has shown that in real code, there is typically much less "implicit parallelism" in our programs than people intuitively think. (Including me, even after reading such work.) Even if you write a system to automatically go into your code and systematically find all the places you accidentally specified "doA" and "doB" as sequential when they could have been parallel, it turns out you don't actually get much.

[1]: I have found this is a common issue in a lot of programmer architecture astronaut work; optimizing not for the truly most common case, but for the case that sticks out most in your mind, which is often very much not the most common case at all, because the common case rapidly ceases to be memorable. I've done my fair share of pet projects like that.


ParaSail[0] is a parallel language that is being developed by Ada Core Technologies. It evaluates statements and expressions in parallel subject to data dependencies.

The paper ParaSail: A Pointer-Free Pervasively-Parallel Language for Irregular Computations[1] contains the following excerpt.

"This LLVM-targeted compiler back end was written by a summer intern who had not programmed in a parallel programing language before. Nevertheless, as can be seen from the table, executing this ParaSail program using multiple threads, while it did incur CPU scheduling overhead, more than made up for this overhead thanks to the parallelism “naturally” available in the program, producing a two times speed-up when going from single-threaded single core to hyper-threaded dual core."

One anecdote proves nothing but I'm cautiously optimistic that newer languages will make it much easier to write parallel programs.

[0] http://www.parasail-lang.org/

[1] https://programming-journal.org/2019/3/7/


I hope so too.

I want to emphasize that what was discovered by those papers is that if you take existing programs and squeeze out all the parallelism you possibly can, safely and automatically, it doesn't get you very much.

That doesn't mean that new languages and/or paradigms may not be able to get a lot more in the future.

But I do think just bodging promises on to the side of an existing language isn't it. In general that's just a slight tweak on what we already had and you don't get a lot out of it.


I would imagine doA() should do A and return the result, but startA() would start it.

If you just need to wait for both to finish, something like this should do it:

   var t1 = startA();
   var t2 = startB();
   var a = finishA(t1);
   var b = finishB(t2);
finishX would naturally block until X is done.
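
A sketch of the same pattern in Rust, with hypothetical do_a/do_b standing in for the real work; thread::spawn plays the role of startX and join plays the role of finishX:

    use std::thread;

    fn do_a() -> i32 { 1 } // placeholder work
    fn do_b() -> i32 { 2 }

    fn main() {
        // start both computations on their own threads
        let t1 = thread::spawn(do_a);
        let t2 = thread::spawn(do_b);
        // join blocks until the corresponding thread is done, like finishX above
        let a = t1.join().unwrap();
        let b = t2.join().unwrap();
        println!("{} {}", a, b);
    }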


> embarrassingly knotty around its ObjC bridging (there’s a basic impedance mismatch between those two worlds)

I think they've done an incredible job with their ObjC interop, given said mismatch.

But you're right — the person above who said that

> The world has enough cookie cutter procedural and OOP languages.

definitely isn't looking for Swift.


“I think they've done an incredible job with their ObjC interop, given said mismatch.”

Which is to say, the Swift devs have done an incredible job of solving the wrong problem.

Apple needed a modern language for faster, easier Cocoa development. What they got was one that actually devalues Apple’s 30-year Cocoa investment by treating it as a second-class citizen. Gobsmacking hubris!

Swift was a pet project of Lattner’s while he was working on LLVM that got picked up by Apple management and repurposed to do a job it wasn’t designed for.

Swift should’ve stayed as Lattner’s pet project, and the team should have been directed to build an “Objective-C 3.0” instead, with the total freedom to break traditional C compatibility in favor of compile-time safety, type inference, decent error handling, and eliminating C’s various baked-in syntactic and semantic mistakes. Leave C compatibility entirely to ObjC 2.0, and half the usability problems Swift has immediately go away. The result would have been a modern dynamic language that feels like a scripting language while running like a compiled one, and that treats Cocoa as a first-class citizen, not as a strap-on.

(Bonus if it also acts as an easy upgrade path for existing C code. “Safe-C” has been tried before with the likes of Cyclone and Fortress, but Apple might’ve actually made it work.)

Tony Hoare called NULL a billion-dollar mistake. Swift is easily a 10-million-man-hour mistake and counting. For a company that once prided itself on its perfectly-polished cutting-edge products, #SwiftLang is so very staid and awkward-fitting.


I don't have enough experience with Swift to agree or disagree with you about it, but it strikes me that there were already Smalltalk-like languages out there that fit this niche somewhat -- such as F-Script -- and Apple could have gone down that road instead of shoehorning Swift into the role.

Objective C already had Smalltalk-style message dispatch syntax, and something fairly close to Smalltalk blocks/lambdas. So it's not like existing Cocoa programmers would have been frustrated or confused.

Clearly the original NeXT engineers were inspired by Smalltalk and wanted something like it, but had performance concerns, etc. Perhaps there would have been performance concerns with moving to a VM-based environment for mobile devices, but I think with a modern JIT these problems could have been alleviated, as we've seen with V8, etc.

So I think it was actually a missed opportunity for Smalltalk to finally have its day in the sun :-)


Agreed. Swift is a language designed by and for compiler engineers; and it shows. Contrast Smalltalk which was designed by and for users and usability. Chalk and cheese at every level—especially user level.

Alas, I think decades of C have trained programmers to expect and accept lots and lots of manual drudgework and unhelpful flakiness; worse, it’s selected for the type of programmer who actively enjoys that sort of mindless makework and brittleness. Busyness vs productivity; minutiae vs expressivity. Casual complexity vs rigorous parsimony.

Call me awkward, but I firmly believe good language design means good UI/UX design. Languages exist for humans, not hardware, after all. Yet the UX of mainstream languages today is less than stellar.

(Me, I came into programming through automation so unreasonably expect the machines to do crapwork for me.)

Re. JIT, I’m absolutely fine with baking down to machine code when you already know what hardware you’re targeting. (x86 is just another level of interpretation.) So I don’t think that was it; it was just that Lattner &co were C++ fans and users, so shaped the language to please themselves. Add right time, right place, and riding high on (deserved) reputation for LLVM work. Had they been Smalltalk or Lisp fans we might’ve gotten something more like Dylan instead. While I can use Swift just fine (and certainly prefer it over C++), that I would have enjoyed. Ah well.


I mean, I work in C++ all day (chromecast code base @ Google), and I like the language. But I also know where it does and doesn't belong. For application development, particularly _third party_ app dev, it makes no sense. And neither did Objective C, which is the worst of both worlds. I had to work in it for a while and it's awful.

I agree Dylan (or something like it) would have been a good choice, except that it wouldn't mate well with the Smalltalk-style keyword/selector arguments in Cocoa; also, it has the taint of being associated with the non-Jobs years, and so maybe there would have been ... political ... arguments against it.

They just needed a Smalltalk dialect with Algolish/Cish syntactic sugar to calm people down, and optional or mandatory static typing to make tooling work better.


But it doesn’t have dependent types, does it?


Nope. I think it periodically gets floated on Swift-Evolution, but someone would have to design and implement it… and I suspect the Swift codebase (which is C++; it isn’t even self-hosting) is already fearsomely complex as it is.

Or, to borrow another Tony Hoare quote:

“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”

…

Alas, big complex codebases facilitate fiddling with the details over substantive changes; and that’s even before considering if the Swift language’s already-set syntax and semantics are amenable to expressing concepts they weren’t originally designed for. Stuff like Swift evo’s current thrash over trailing block[s] syntax reminds me of Python’s growth problems (e.g. its famously frustrating statement vs expression distinction that’s given rise to all its `lambda…`, `…if…else…`, etc nonsense).

It’s hard to scale after you’ve painted yourself into a corner; alas, the temptation is to start digging instead. I really wish Alan Kay had pursued his Nile/Gezira work further. That looked like a really promising approach to scalability. We really need better languages to write our languages in.


Tangential: How would the function `first :: &(Vec a) -> Option &a` (which, given a non-exclusive reference to a vector, returns a non-exclusive reference to the first item of the vector, with the same lifetime) be typed using linear and / or dependent types?

I'm certain that this can be modelled using dependent types (after all, anything can), but I can't think of a way to do it that's anywhere near as ergonomic as Rust.
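
For concreteness, the Rust version I'm comparing against is just this (the function name and element type are only for illustration; the explicit lifetime is the point):

    // the returned borrow shares the input borrow's lifetime 'a
    fn first<'a, T>(v: &'a Vec<T>) -> Option<&'a T> {
        v.get(0)
    }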


> - how much of the burden of writing the proof falls on me, and how much falls to the compiler?

The compiler in Idris can do some magic, but usually the burden is mostly on you to write proofs. Idris (and similar languages) has editor assistance to help you automatically write code and proofs.

> What are the full range of things I can express?

Dependent types as present in Idris, Coq, Agda, etc. can serve as a foundation for mathematics, so… pretty much anything!

For your example, most such languages don't have quotient types, so you'll still need to use setoids or similar to model quotients.

> - how huch more effort falls to me when structuring my code, in order to pass the properties or proofs through?

It's pretty much the same situation as Haskell or Rust, but I think having to write fromJust / unwrap is great. It tells you when you're reading the code exactly what assumption has been made, and it doesn't seem awfully burdensome.
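
A trivial Rust-flavoured sketch of what I mean (a hypothetical head function):

    fn head(v: &Vec<i32>) -> i32 {
        // the unwrap marks the assumption "v is non-empty" right at the use site;
        // in a dependently typed language you'd discharge it with a proof instead
        *v.first().unwrap()
    }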

> yet for programming I have to front-load all this complexity to express something so utterly trivial.

What's "all this complexity" that you're referring to?



There's nothing wrong if, as an interviewer, you get bored of interviewing candidate after candidate. The process is pretty repetitive by nature.


As an interviewer, I guess I just don’t get that at all.

Interviews are the opportunity to meet an interesting person, learn about them, talk about their interests, and sell them on the chance to work at your company.

How could that be boring and repetitive?


Imagine your company has a multi-stage interview, your job is to do the whiteboard coding part, and your pool of known-to-be-of-equal-difficulty questions is very small.

You also know that, when people are thinking through a challenging problem in a stressful situation, they mostly won't want to engage in small talk at the same time. Although you can give them hints and ask questions at appropriate times.

It's easy enough to imagine how that could get tiresome, once you've done enough that no solution or mistake is new to you.


>Interviews are the opportunity to meet an interesting person, learn about them, talk about their interests, and sell them on the chance to work at your company.

And all of that is a great way to introduce a ton of unconscious bias into your interviewing process. You should be spending your time in the interview focusing on determining if the candidate meets the requirements to do the job. Being interesting, having interests you find interesting, etc. are presumably not requirements for the position you're interviewing them for. When you find out they have (or do not have) shared interests with you, as a human being, you're wired to introduce bias into your decision making process, whether you intend to or not.

Interviews SHOULD be boring in the repetitive sense. The interview should be tailored to the position, and not the person you're interviewing.


Should’ve clarified: Professional interests.

> Interviews SHOULD be boring in the repetitive sense

Repetitive isn’t boring. I believe it’s very possible (essential) for a good interviewer to see the difference.


This is a fair critique, but as sibling commenters pointed out, a well-structured interview is highly repetitive by design, so that every candidate for a particular role gets as close as possible to the same interview experience. When you’ve done dozens or hundreds of these it becomes kind of like mind-reading: you just know what the person is thinking and roughly what they’re going to say and do for the next 45 minutes based on what they do in the first 5. And yes, sometimes, that can get boring.

But your point is valid in the sense that, as an interviewer, I do try to bring energy and interest to each interview, both because it’s what the candidate deserves and it’s better for me too.

And to be clear, most interviews are interesting and enjoyable, even the sessions I’ve done a hundred times. But, just being realistic, neither I nor the candidates are always on our A game, and sometimes the result is a quiet and dull interview.

And to be clear, that may still result in an offer recommendation! Some of these interviews are dry because the candidate knows the material down pat, doesn’t want to chit chat, and has no questions. Those are, in fact, the very kind of person I imagine would improve their performance by smiling and asking what I had for lunch :)


> Does such a thing exist at higher dimensions?

Yes, everything in your first paragraph extends to any number of dimensions (replacing "quaternion" with "rotor").

> I vaguely recall something about having complex numbers for 2D rotation, quaternions for 3D rotation, and octonions for (I'm guessing) 4D rotation

Bivectors and rotors faithfully represent rotations in any number of dimensions. The octonion product can't, because as you said, it's not associative, but rotations obviously have to compose associatively.
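
For concreteness, the standard rotor construction in geometric algebra (sign conventions vary): pick a unit bivector B for the plane of rotation and an angle θ, then

    R = \exp(-B\theta/2) = \cos(\theta/2) - B\,\sin(\theta/2), \qquad v \mapsto R\,v\,\widetilde{R}

where \widetilde{R} is the reverse of R. This works in any number of dimensions, and composing rotors is just multiplying them, which is associative.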

