The Big Bang project aims to create a typed language with the feel of scripting (jhu.edu)
55 points by luu on Nov 5, 2015 | hide | past | favorite | 26 comments



I feel like this is the direction Julia is going in. It's in this interesting space between statically and dynamically typed languages - technically it's dynamically typed, but you get a lot of the safety and performance guarantees afforded by static type systems through Julia's aggressive type inference [1]. As the type system gets richer, it should only get better on this front.

If you haven't checked out Julia yet, it's a beautiful language. There's a reason why Graydon (the guy who made the very early versions of Rust) likens it to a Goldilocks language [2].

[1] https://stackoverflow.com/questions/28078089/is-julia-dynami...

[2] https://graydon2.dreamwidth.org/189377.html


The difference between Julia and Big Bang is that Big Bang will check your types at compile time. You'll know before any of your code executes whether or not it will yield any type errors.

Edit: (Disclaimer: I work on Big Bang.)


How do you handle, say, deserialization of objects over a network (to give the canonical non-trivial example)?


I don't know about Julia or Big Bang, but in Haskell, values would typically be checked against their types for serialization / deserialization once, when parsing, with an error thrown (or a Maybe type used) if anything is off.


Yes, that's the standard approach, but I was wondering how that meshes with not having to declare types. Maybe the answer is just that this is the one place where you have to (or that it's disallowed), but I was curious.


In Haskell, if you don't do anything weird, you don't have to declare types either in the normal case: they can all be inferred.



Haskell has an equational type system and doesn't support subtyping in the same way that Big Bang does.

(Disclaimer: I'm working on Big Bang, so I'm a bit biased)


Chapel Programming Language? By which I mean, Chapel seems like a great project to throw support behind to add these feature sets. It is already very capable, has a good amount of resources and a roadmap, and it seems it would lend itself to this.


Doesn't Go basically accomplish that via near-instant compile times?


Not quite: polymorphism is a bit iffy in Go (there are interfaces, and it's doable but not great). Type inference is extremely limited (this is also due to a weak type system; the comparison point being ML-family languages, for example). Global type inference is out of the picture without generic/polymorphic typing.
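A minimal sketch of both limitations (the `first` function is made up for illustration): without generics, a Go function that works over any element type has to take interface{}, so the caller gets static type information back only via a runtime assertion; and `:=` inference is purely local to the initializer, never across function boundaries.

```go
package main

import "fmt"

// Without generics, a "polymorphic" function takes interface{} and
// gives up static type information.
func first(xs []interface{}) interface{} {
	return xs[0]
}

func main() {
	xs := []interface{}{1, 2, 3}
	n := first(xs)
	// The caller must assert the type back at runtime; a wrong
	// assertion fails at runtime, not at compile time.
	i, ok := n.(int)
	fmt.Println(i, ok) // 1 true

	// Inference is local only: `:=` infers from the initializer,
	// but there's no inference across function signatures.
	x := 42 // inferred as int
	fmt.Println(x) // 42
}
```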

I don't think Go achieves the look and feel of working with Python and the like.


The Go philosophy is that inheritance "does not scale", and they deliberately went another route: https://golang.org/doc/faq#inheritance
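That other route is composition via embedding (a minimal sketch; the `Logger`/`Server` names are made up): a struct embeds another type and its methods are promoted, rather than being inherited from a superclass.

```go
package main

import "fmt"

// Logger is a small reusable piece of behavior.
type Logger struct{ prefix string }

func (l Logger) Log(msg string) { fmt.Println(l.prefix + msg) }

// Server composes Logger by embedding it: Log (and prefix) are
// promoted onto Server, with no subclassing involved.
type Server struct {
	Logger
	addr string
}

func main() {
	s := Server{Logger{"srv: "}, ":8080"}
	s.Log("started") // prints "srv: started"
}
```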


The parent probably meant "parametric polymorphism", i.e., generics, and not "subtyping polymorphism", i.e., inheritance.


Errm - where did the parent say anything about inheritance?


Isn't the feel of working with Python all about inheritance?


Duck typing is far more important.
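For what it's worth, Go's interfaces are arguably a compile-time analogue of duck typing: a type satisfies an interface implicitly, just by having the right method set, with no `implements` declaration. A minimal sketch (the `Quacker`/`Duck`/`Robot` names are made up):

```go
package main

import "fmt"

// Any type with a Quack() string method satisfies Quacker implicitly.
type Quacker interface{ Quack() string }

type Duck struct{}

func (Duck) Quack() string { return "quack" }

type Robot struct{}

func (Robot) Quack() string { return "beep" }

// makeNoise accepts anything that "quacks like a duck", checked at
// compile time rather than at runtime.
func makeNoise(q Quacker) { fmt.Println(q.Quack()) }

func main() {
	makeNoise(Duck{})  // quack
	makeNoise(Robot{}) // beep
}
```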


Yes, one of the design goals was to blur the gap between dynamically and statically typed languages by creating a language that is safe to run and fast to compile, read and write.


In my mind, any language that demands a package declaration and a main function doesn't qualify as a "scripting" language.


That's pretty minor, and could be almost trivially rectified with a preprocessor.


Go doesn't have type inference "even at module or function boundaries"; this aims to.


Use Nim


Doesn't Nim require types (e.g. for function arguments)?


In Nim, you can use the type `auto` for proc parameters & return types: http://nim-lang.org/docs/manual.html#types-auto-type

When you use the `auto` type, the compiler will infer the type automatically from the context of the proc invocation or from the proc body. The inferred types can be different for the different parameters. So you can have a proc declaration that looks like:

  proc inferTypes(a, b: auto)  # `a` & `b` can accept different argument types

Within a proc, you can also create bindings & variables without needing to specify a type:

  let iBinding = 1
  var fVariable = 1.2

The following typeless code will compile & run without any complaints or problems:

  import strutils  # `%` operator

  proc inferTypes(a, b: auto) =
    echo "$1 $2" % [$a, $b]

  proc main() =
    inferTypes(25, 30)
    inferTypes(1, "hello")
    inferTypes(4.4, 7)
    inferTypes("cat", 9.5)

  main()

When the above code is compiled & run, it will produce the following output:

  25 30
  1 hello
  4.4 7
  cat 9.5


How does Nim hold up if you put "auto" everywhere you possibly can in a non-trivial program? (Tone note: Straight question. I have no idea and am honestly interested.)

Context: Type inference is hard. Hindley-Milner is famous for making it possible, but the farther you stray from it, the harder it gets to do with no human-added annotations. I hope the authors of "Big Bang" are intimately familiar with the issues involved or they're going to be in trouble. If they aren't, they ought to correct that.

I'd also suggest that the Haskell community would be happy to share their experiences on that front if asked, and unless the authors are already experts, they really, really should ask. Only good things can come from asking.


I don't know, I'm sorry.

(The majority of my professional & personal programming has been along the Shell-Python-C-C++ axis. My preference for static types increases approximately logarithmically with the size of the program; historically, I preferred Python for most quick scripting needs, but for larger programs, I was glad of static types, so I would switch to C++.)

Now, Nim has replaced C++ for me completely (and also expanded downwards into the upper end of Python's territory). The work I'm currently doing in Nim is well into the "I prefer static types" area of the spectrum.

Maybe some of the Nim core devs would have more experience with this situation.

IIRC, several of the Nim core devs are familiar with Haskell, and consider it to be one of the reference languages guiding aspects of the Nim language design.


You wouldn't really do that. But Nim can do basic type inference. So to initialize string and numeric variables, you don't need to explicitly declare the type. https://nim-by-example.github.io/variables/

The type inference extends to a few other situations I believe. Essentially it is a typed language, but it doesn't always make you declare things when they are obvious.



