
Is this a "true" Scala? As in, if I write a Scala program that only uses the Scala stdlib, will it run on both the JVM and Scala Native with no modifications to the source? Or is it more like "a Scala", in the same way you would say "a Lisp"? I think a lot of the design decisions for Scala were made so that Scala would work on the JVM and easily inter-op with Java. It might make more sense to modify the language slightly to better suit the native environment. I'm seeing some hints of that on the page with the "@struct" and "@extern" decorators.



Clearly they added a bunch of new stuff that the JVM can't understand.

So it is either "Scala like LISP"

or

It is a superset of the Scala language. Meaning all existing stuff will work with it, but if you use new secret keywords it will work "better".

Which seems not quite ideal, because that means most Scala libraries will not be "tuned", and you will need all new libraries... just like ScalaJS made you need entirely new Scala libraries that did not use reflection.

Worried the Scala ecosystem is turning into a nightmare ;)


> just like ScalaJS made you need entirely new Scala libraries that did not use reflection.

Most Scala libraries never used reflection, so most of the ecosystem "just worked" on Scala.js.

I think quite a few things in scala-native are there to show the possibilities of this platform, but in the mid- to long-term those improvements will be supported everywhere:

- @struct and AnyVals: As soon as AnyVals can support more than one value on the JVM (i.e. as soon as Oracle finally gets some things done), @struct can go away, because it's equivalent to AnyVal (see the sketch after this list).

- @extern and @native could also end up being the same.

- @inline and @noinline are already supported across platforms.
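
For reference, here is a minimal sketch of that equivalence (the @struct line is Scala Native-only, so it's shown as a comment to keep the snippet valid JVM Scala; the class names are made up):

    // A value class: the JVM currently limits it to exactly one field.
    class Meters(val value: Double) extends AnyVal

    // Scala Native's @struct lifts that restriction, giving a C-like layout:
    // @struct class Point(val x: Double, val y: Double)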


> Most Scala libraries never used reflection, so most of the ecosystem "just worked" on Scala.js.

Do you have personal experience with this? I don't have hard numbers, but for me, basically none of my stack worked.

I had to find a new JSON parser. I had to find new validation for my form stuff. Etc. etc.

It wasn't impossible, but it was a lot of work.


It's not because of reflection. Most Scala libraries are reflection-free. The exceptions are Scala libraries wrapping Java ones, or Scala libraries doing Java serialization/deserialization, which often depends on reflection. For example, JSON parsing in Scala often piggybacks on Jackson, THE Java library for JSON parsing. However, there are JSON parsing libraries that are pure Scala and do not require reflection the way Jackson does.
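
To illustrate the pure-Scala approach: instead of inspecting classes at runtime, these libraries resolve their encoders and decoders at compile time through implicits. A minimal sketch (hypothetical names, not any particular library's API):

    trait JsonWriter[A] {
      def write(a: A): String
    }

    object JsonWriter {
      // The compiler picks the instance via implicit search, so no
      // Class.forName or field lookup ever happens at runtime.
      implicit val intWriter: JsonWriter[Int] = new JsonWriter[Int] {
        def write(a: Int): String = a.toString
      }
      implicit val stringWriter: JsonWriter[String] = new JsonWriter[String] {
        def write(a: String): String = "\"" + a + "\""
      }
    }

    def toJson[A](a: A)(implicit w: JsonWriter[A]): String = w.write(a)

    toJson(42)   // "42"
    toJson("hi") // "\"hi\""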

But reflection isn't usually an issue for Scala, as Scala code does most of the same tricks at compile time. The Scala libraries that have problems are those dealing with multi-threading. Do you block threads anywhere, waiting for a Future or on a CountDownLatch? Any await/notify anywhere? Sorry, that won't work on top of Scala.js. That doesn't mean you can't work around it, though it does take effort on the part of library authors. My own library (sorry for the shameless plug :)) is completely cross-platform: https://monix.io
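
As a concrete example, this standard-library pattern is fine on the JVM but cannot be supported on Scala.js, since JavaScript has only one thread and blocking it would deadlock the program:

    import scala.concurrent.{Await, Future}
    import scala.concurrent.duration._
    import scala.concurrent.ExecutionContext.Implicits.global

    val f: Future[Int] = Future(21 * 2)

    // Blocks the calling thread until the Future completes:
    // works on the JVM, fails on Scala.js.
    val result: Int = Await.result(f, 5.seconds)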

BTW, Scala.js is very new, but the whole ecosystem wants to support it: every major library is being ported, if not ported already, and everybody is talking about it ;-)


You're talking about Play Framework, I assume. They did some fancy work with macros (i.e. Scala reflection) which causes a lot of headaches for Scala.js. The reality is that most of the reflection it does is nice but not necessary, and I really wish they offered a "switch" to turn it off.


Sorry, I've never used Scala macros; beginner's question: are Scala macros not completely compile-time? Why would they use reflection? Is it just syntactic sugar for dynamic type inference?

If they just use reflection during compilation, why would that not be compatible with whatever the runtime is? Shouldn't that be unaffected by the runtime and depend purely on the compiler's support for reflection?

Or am I looking at this the wrong way?


Scala macros are compile-time. The Scala reflection library (which works at runtime) shares most of its API with macros.

Macros should always work; reflection is harder, as some information needs to be retained at runtime (either via Java reflection or additional data, both of which can be problematic in Scala.js).
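
To make the runtime side concrete, here is the kind of scala.reflect call that forces type information to be kept around after compilation (a small sketch; the case class is made up):

    import scala.reflect.runtime.universe._

    case class User(name: String, age: Int)

    // Inspecting a type at runtime requires its metadata to survive
    // compilation, which is exactly what is hard to guarantee once
    // dead-code elimination enters the picture.
    val fields = typeOf[User].members.collect {
      case m: MethodSymbol if m.isCaseAccessor => m.name.toString
    }
    // fields contains "name" and "age" (order may vary)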


And as the original author of Scala.js pointed out in a Scala Days talk today, unrestricted reflection means that it's impossible to do dead code elimination, which is a non-starter for real-world use.


I've got at least 30 enterprise applications that disagree with your "non-starter" assertion. Dead code will always exist in real-world code.


I'm talking about the main use case, which is front-end web development. I guess some folks might be cool with 10MB+ JS apps, but I don't think that would be terribly popular, and certainly not popular enough to bother with imitating Java reflection in JavaScript.


Well, ProGuard does whole-program optimization.

You have to specify the symbols to preserve explicitly.
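
A typical keep rule looks something like this (illustrative only; the class name is made up):

    # Keep the entry point and anything reached via reflection;
    # otherwise whole-program shrinking will strip it.
    -keep class com.example.Main {
        public static void main(java.lang.String[]);
    }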


I believe that works against JVM bytecode, which you don't get for Scala.js. Instead, the compiler has a JS-specific intermediate representation on which the optimizations occur.


Play uses it specifically in its JSON module; you can define readers and writers that let you parse the structure of a JSON object. To avoid having to be very explicit, you can define a case class that directly mirrors the structure of that object. E.g. given some case class that looks like this:

    case class SomeObj(someInt: Int)

you could do:

    val theObj = SomeObj((json \ "someInt").as[Int])

or:

    implicit val reads: Reads[SomeObj] =
      (JsPath \ "someInt").read[Int].map(SomeObj.apply _)

With reflection, you get:

    implicit val reads: Reads[SomeObj] = Json.reads[SomeObj]
    val theObj = json.validate[SomeObj]

Seems trivial with this small example, but with large objects it keeps your code much more maintainable and consistent, since reading and writing that case class work the same way.

As for it not working in Scala.js, from my understanding it has to do with how the reflection library is shared between the runtime and compile-time implementations.

You can read more about it here: http://docs.scala-lang.org/overviews/reflection/overview.htm...


The tricks that Play JSON does don't need runtime reflection. Macros don't use runtime capabilities at all, and all macros work in Scala.js. I'm actually surprised by claims of Play JSON needing the runtime reflection of "scala.reflect".

No, Play JSON doesn't work because it wraps Jackson, a Java library.


You're correct that it probably doesn't use scala.reflect, but it still uses reflection.

Jackson works using reflection. Play JSON therefore depends on reflection at least transitively.


Yeah, Play is where I started; then I moved my app into Scala.js land.

Had some cool parts to it, but I'm not sure I was sold on the overall experience. I hate JS so much and want to avoid it, but I kind of felt like I was mostly trading evils. Maybe if I worked at it enough it would finally get better? Not sure.


When did you try it out? What didn't you like?

I ported a site written in vanilla JavaScript. I decided to faithfully port it without any changes before introducing Scala-specific libraries. That was pretty painful: a bunch of manual casting for UndefOr and js.Function.
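
The UndefOr and js.Function plumbing meant here looks roughly like this (a sketch using the standard Scala.js facade types; the names are made up):

    import scala.scalajs.js

    // A JS API may hand back `number | undefined`; in Scala.js that's
    // js.UndefOr[Int], which has to be unwrapped explicitly everywhere.
    def describe(width: js.UndefOr[Int]): String =
      width.toOption.map(w => "width=" + w).getOrElse("no width")

    // Likewise, a Scala lambda has to be typed as a js.Function1
    // before a raw JS API will accept it.
    val callback: js.Function1[Int, Int] = (x: Int) => x + 1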

Since then I've started using some of the Scala.js libraries/frameworks straight away on projects. It has been a much more pleasant experience!


It's not necessarily a bad thing: you can think about the various Scala implementations as dialects in a language family, like Pascal in the old days. It may be difficult to share code between these dialects, but it allows people to write code for a wide variety of environments using the same syntax and nearly identical semantics.


The language is already a nightmare compared to a nearly syntaxless Lisp, so I'm not surprised that the ecosystem follows suit.


It's funny to me to see "syntaxless" used as a good thing :)

For me, syntax--good syntax, anyway--makes code far more readable.


Maybe. Syntax imposes structural rules on your code. If your domain doesn't fit the syntax, then you'll end up with far _less_ readable code. The draw towards "syntaxless" lisp-y languages is that you can build the syntax to fit your domain, rather than vice versa. The result is a DSL that naturally models the domain, rather than an unintuitive mess of data transformations to force your domain model to fit the structure imposed by the syntax and type constraints.


The fact that each Lisp developer writes their own DSL library is one of the reasons why the enterprise isn't so fond of Lisp.

It is always a steep curve to dive into other developers' code.


I agree. There's a complexity tradeoff with either decision.

Having worked with both approaches, there's a time and place for each. Being able to quickly dive into a codebase is not always a good thing. One thing conservative enterprise developers should like about DSLs is that they require new developers to structure their code in a way that fits the intended domain model. I've seen my fair share of code written by developers who "quickly dove in"; it almost never fits the model and ends up causing a huge mess that may be unrecoverable. Conversely, if you don't trust your developers to write decent code, trusting them inside a dynamically-typed DSL is probably not a good idea either.


Wouldn't LISP become more popular if students started with LISP at school/university rather than C/C++, Java, or Python? How can a young developer compare or choose if he/she has never been exposed to LISP?


It would help, but it isn't sufficient.

I had a very good CS degree in the mid-'90s.

We got to use Pascal, C, C++, Prolog, Caml Light, Smalltalk, Oberon(-2), Component Pascal, Lisp, SQL, PL/SQL, x86 ASM, MIPS ASM, and Java across the 5 years it used to take (nowadays, thanks to Bologna, that is no longer the case).

It doesn't mean we got to use many of those languages afterwards.

But Lisp is a special case: if Lisp workstations hadn't failed in the market, or if Sun and others hadn't picked UNIX as their workstation OS, maybe it would be a different IT world, in spite of all the DSLs I was referring to.


It's pretty light on syntax compared to C/ALGOL-family languages. Function bodies are just another block (or expression), as are things like try. Operators are just method calls (except for precedence, and the precedence table is much shorter than C/Java/etc.).
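
A small illustration of the operators-are-methods point (made-up class):

    case class Vec(x: Double, y: Double) {
      // `+` is an ordinary method; Scala has no special operator syntax.
      def +(other: Vec): Vec = Vec(x + other.x, y + other.y)
    }

    Vec(1, 2) + Vec(3, 4) // sugar for Vec(1, 2).+(Vec(3, 4))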


The goal is to be as "true" as possible by default, with extra flags to trade some exact semantic aspects for performance: e.g. overflow semantics, bounds checks, null safety, etc.

Apart from the base language, we'll also introduce some language/library extensions for lower-level programming that are going to be a limited, Scala Native-only "dialect" of Scala: e.g. pointers, structs, stack allocation, extern objects, etc.


Is there a difference between @extern and @native?

I would have imagined it to be more compatible if Scala-JVM and Scala Native used the same annotations (using JNI behind the scenes on the JVM).


I would explain the difference as: @native says "please implement this Scala method of this Scala class in C, respecting all the JVM calling conventions and memory model." @extern says "call this C function that never knew about Scala or the JVM, respecting all the C calling conventions and memory model."


@native implies a JNI-style implementation being available alongside the Scala method definition. @extern lets you call straight into C code without any additional ceremony; all you need is a single forward declaration.
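
Side by side, the difference looks roughly like this (the @extern form follows the Scala Native project page; exact imports may differ, and the JVM class is made up):

    // Scala Native: bind directly to the C function `strlen`,
    // no glue code needed beyond the forward declaration.
    import scala.scalanative.native._

    @extern object libc {
      def strlen(s: CString): CSize = extern
    }

    // JVM: a @native method needs a matching JNI implementation,
    // compiled separately and loaded via System.loadLibrary.
    class Strings {
      @native def nativeLength(s: String): Int
    }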



