Hacker News
New Scheme-Based GPU Programming Language (devopsangle.com)
70 points by wx196 on July 14, 2013 | 18 comments



I'm sorry, stuff like this just annoys me: "Lisp is the ultimate example of programmers bending towards making things easiest for compilers."

No. It makes it easier for parsers. Like that macro you want to write. If you think the hardest part of a compiler is the parser, boy you're in for a shock.


No. It makes it easier for parsers.

Not even that. It's about making things easiest for metaprogrammers.


Well, it makes it easy for metaprogramming because there's very little parsing to be done, but yes I agree: it's the important case.

Things most people don't know: C#'s LINQ is extensible in a fashion quite similar to LISP macros (look up expression trees), but the job of processing all of the syntax means that nearly no-one has ever done it.
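To get a feel for why full-syntax metaprogramming is heavier than list surgery, here is a rough Python sketch (not C#/LINQ, just an analogous illustration) that rewrites `x ** 2` into `x * x` using the standard `ast` module. Even this tiny transform needs the visitor machinery and explicit node types:

```python
import ast

# Rewrite x ** 2 into x * x at the syntax-tree level. Even this small
# transform requires the NodeTransformer protocol and concrete node classes.
class SquareRewriter(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite children first
        if (isinstance(node.op, ast.Pow)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 2):
            return ast.BinOp(left=node.left, op=ast.Mult(), right=node.left)
        return node

tree = ast.parse("y = x ** 2")
new_tree = ast.fix_missing_locations(SquareRewriter().visit(tree))
print(ast.unparse(new_tree))  # y = x * x
```

In a Lisp the equivalent transform is a few lines of list manipulation, because there is no separate tree vocabulary to learn; the "tree" is the same lists you write by hand.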


Not to mention GPUs have no stack, no recursion.


You can compile away those issues with a sufficiently smart compiler!

https://dl.acm.org/citation.cfm?id=2364563

(Caveat: I'm the author of that paper)


You can do tail recursion without a stack...but still, you'd have to branch, which GPUs hate doing.
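Tail recursion needs no stack because the recursive call is the last thing a frame does, so the frame can be reused instead of pushed. A minimal hand-rewrite in Python (Python itself does not perform this optimization, so the loop form is written out explicitly):

```python
# Tail-recursive factorial: the recursive call is in tail position,
# so each frame could replace the previous one rather than stack up.
def fact_rec(n, acc=1):
    if n <= 1:
        return acc
    return fact_rec(n - 1, acc * n)  # tail call

# The same function with the tail call "compiled" into a jump:
# no stack frames at all, just two mutated variables.
def fact_loop(n, acc=1):
    while n > 1:
        n, acc = n - 1, acc * n
    return acc

print(fact_loop(10))  # 3628800
```

The branch at the top of the loop remains, which is the point being made: eliminating the stack does not eliminate the control-flow divergence GPUs dislike.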


NVIDIA hardware with compute capability 2.0 and support for CUDA 3.1 (e.g. Fermi chips) have recursion.


Why do we want to make things easier for compiler writers? Make things easier for developers, and once it's solved in the backend (by one team), it's done.


Exactly. LISP's selling point isn't that it's easier for the compiler: it's that it makes it easy to extend the compiler (through macros) as an ordinary developer.


The idea of Lisp is that there is no strict difference between the programmer who uses the language and the compiler writer. If you program in a bottom-up style, you extend the language towards the problem you are solving. You have a meta-object system, macros, compiler macros, etc. Only the hardware-specific back-end is not available to the developer.

Programming frameworks, language extensions (Qt, GWT, ...) and their programming conventions are all attempts to build the language up towards the problem. Often they would benefit from having a parser and compiler, but they get by with programming conventions (object-relational mapping frameworks, for example) and massive amounts of XML or JSON configuration files.

Ideally, Lisp is like a ball of mud: you can throw anything you want into it, and it's still Lisp. You are a programmer-framework configurator.


You know, your text annoys me too. The author is off the point, but you continue the trend, and you're exactly as correct as the author is.

Lisp syntax is just a notation for lists. Lisp implementations process those lists.

Any syntax which is a notation for something generic pretty much ensures that most changes to the compiler don't ripple into the parser. That makes it easier to experiment with your compiler design, because you're not changing parsing rules all the time. And changing parser code tends to be error-prone and take a lot of time, even when you're using parser generators.
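To make "just a notation for lists" concrete, here is a small Python sketch. Once read, Lisp source is nested lists, and a "macro" is an ordinary function from lists to lists; the `square` form and the function name below are made up for illustration:

```python
# Lisp source like (* (+ 1 2) 3) is just a nested list once read:
expr = ["*", ["+", 1, 2], 3]

# A "macro" is an ordinary function over such lists. Here we expand a
# hypothetical (square x) form into (* x x):
def expand_square(form):
    if isinstance(form, list):
        if form and form[0] == "square":
            x = expand_square(form[1])
            return ["*", x, x]
        return [expand_square(f) for f in form]
    return form

print(expand_square(["+", ["square", 5], 1]))
# -> ['+', ['*', 5, 5], 1]
```

No parser changes are needed to add the new form: the reader stays fixed, and all experimentation happens in plain list-processing code.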


This would be very useful; I've written some OpenCL code (called from Python, but I coded the kernel and set up the buffers in C myself) and it's full of pitfalls; much harder than just "here's an array, here's a procedure, run it in parallel for me."

I believe there are already ways to do this (IIRC, F#'s original demo showed off trivially parallelisable functions), but a simple interface (i.e. something that looks like a high-level language with a REPL, etc.) is key. As someone who's generally excited about anything that makes you write Scheme, or anything that integrates with Python, I'll be keeping an eye on Harlan!
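The "here's an array, here's a procedure" interface already exists for CPU parallelism in Python's standard library; a sketch of the ergonomics being asked for (a thread pool stand-in, not OpenCL or Harlan):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # Stand-in for a per-element kernel: no buffers, queues,
    # or workgroup sizes to manage by hand.
    return x * x

data = range(8)
with ThreadPoolExecutor() as pool:
    result = list(pool.map(work, data))  # "run it in parallel for me"
print(result)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The GPU version of this is exactly what's hard today: the map is one line, but the host-device plumbing around it is not.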


The reason it's harder is that it deals with device specifics, as well as memory types and mapping, queues, workgroup sizes, etc. This framework might be cute, but all it's really doing is what compiler vectorizers have done for ages.


Previous discussion: https://news.ycombinator.com/item?id=5970975

I'm very excited about this. I think Functional Programming is good at expressing parallelizable concepts (FP is inherently parallelizable since it avoids shared state) and always wondered why I had to C my way through parallelization.


The project in question: https://github.com/eholk/harlan


Very clever design decisions. The "high-level" code should be written in the smallest and cleanest language possible, and then compiled, via C, into all that mess they call OpenCL.


Hah. OpenCL is no mess. It has a very specific spec for achieving performance. A big part of it is the data transfer from host to device and mapping or reading/writing.

These high-level abstractions don't give you control over any of that and will cause pain and angst when you are really pressed for performance. C is best for OpenCL if you want real performance.


Ironically the screenshot shows Python code.



