Verified Programming in F*: A Tutorial (fstar-lang.org)
159 points by jstrieb on Jan 4, 2021 | 76 comments



Also see: https://rise4fun.com/fstar/tutorial

Even though this sounds like formal verification gobbledygook, there is some information of practical value there.

How many JSON validators have you seen in python and javascript? Why can't the static type checkers like mypy and pyright do this for us?

The answer is that the type systems of these languages are not sufficiently developed. They can tell you if you're assigning a string to an int, but not if you're assigning 200 to an int whose type allows only numbers in [10, 100].

So one creates a new class with a validate() method, which is completely unnecessary and doesn't protect you against integer overflows elsewhere in the code.

F* and OCaml derivatives are our best hope for developing a type system that performs these kinds of common checks in one integrated framework, via refinement types and dependent types.
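
For a taste, here is a minimal F* sketch of that exact example (the type name in_range is mine, not from the tutorial); the refinement is discharged by the SMT solver at compile time:

    type in_range = x:int{10 <= x && x <= 100}

    let ok : in_range = 50      // accepted: the solver proves 10 <= 50 && 50 <= 100
    // let bad : in_range = 200 // rejected at compile time, before the program ever runs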

Hopefully, one day that work can reach mainstream languages like Python and JavaScript, and whatever ends up being the systems programming language of choice that replaces C in the coming years.


> The answer is that the type systems of these languages are not sufficiently developed. They can tell you if you're assigning a string to an int, but not if you're assigning 200 to an int whose type allows only numbers in [10, 100].

I am pretty sure Ada did that in the 80s. :)

I get what you are saying, but the example you gave is a pretty trivial one that has been solved (although not implemented very often...) for a while in more "regular" languages!


I picked the trivial example on purpose to make the point that none of the top 5 programming languages do it, even if it was solved in the 80s.

Recent languages like swift/kotlin (which I expect will see increased usage in coming years) seem to have ignored this area.

https://github.com/peter-tomaselli/swift-refined/blob/master... https://discuss.kotlinlang.org/t/refinement-types/9753/19


Because it requires a non-trivial amount of static analysis work on the compiler side to make the UX comfortable. Rust's borrow checker mechanism took years to become pleasant for the end user to use. There is still next to no adoption of Ada outside of aerospace, defence, and certain embedded programming sectors.


> Because it requires a non-trivial amount of static analysis work on the compiler side to make the UX comfortable.

How so?

Any modern language that supports operator overloading enables a half-arsed version: you create a struct/class for any subtype of integer you want (e.g. `milliseconds`), then you define binary operators that take your class on both sides, and you add the bounds checking yourself. This way you can also restrict "primitive" types from being just added/subtracted willy-nilly.

As I ranted about below, a simple "milliseconds" type would make life easier all over the place. Who hasn't mixed up an API call where part of it takes seconds and another part takes milliseconds?

That sort of crud should be easy for a simple subtyping system to catch.


As with the answer to 99 out of 100 such questions: it requires a lot of extra time and effort.

Getting those guarantees at compile time is very, very, very hard. Even Ada SPARK struggles to verify seemingly simple stuff [1, 2]. It is much easier if you trade compile-time verification for runtime verification, which you can do in C# (but not Mono) or via code comments in Java [3].

WRT subtypes: language designers have plenty of motivation to try to push every non-core feature into a library. Programming in Ada kinda sucks because there are soooo many reserved keywords. You are also stuck with bad design decisions forever, like with Java's Date API [4]. Since it's relatively easy to use the newtype pattern to achieve what you want, it probably doesn't factor into core language design.

Which is to say that I largely agree with you, but the people paying for new languages have a different set of priorities. Making a cat video database go faster is very lucrative, but it's only with the rise of cryptocurrencies that there has been significant funding for high-assurance programming languages outside of defense and aerospace.

[1]: https://www.imperialviolet.org/2014/09/07/provers.html

[2]: https://blog.adacore.com/using-spark-to-prove-255-bit-intege...

[3]: https://www.openjml.org/

[4]: https://stackoverflow.com/questions/1571265/why-is-the-java-...


Thanks for the awesome answer, with citations!

As someone who has to do day-to-day programming in languages with a minimal concept of safety (e.g. JavaScript, C), my concern isn't provable correctness; it's "make my life easier by covering the 90% case."

Number of times I've needed to work with 255 bit integers: 0.

Number of times I've confused units of time passing into APIs: way too many.

Heck, on one system I was working on, I had one API that took distance in thousandths of an inch. Everything else was in metric.

Fixing that sort of stuff could be done w/o great trouble, and it'd be a huge boon to everyday programmers.

Would it be a sexy language feature? No. But it'd make life better.


It is super annoying that it isn't done more.

I'd love to be able to opt in to bounds checking when I feel it is needed.

I have no bloody idea why language designers seem so intent on not letting me choose what perf/safety trade-off is worthwhile for me. :/


Easy, use C++.

While not perfect, given its copy-paste compatibility with C, you can opt in to bounds checking by using the STL data types, or similar.

Then you can also make use of compiler switches to opt in to changing the behaviour of operator[]() to do bounds checking.

With templates, you can opt in to checked numerics.

While not perfect, and maybe it will be replaced by Rust or another candidate someday, C++ has offered plenty of opt-in possibilities since the early '90s; there is no need to keep writing C.


> While not perfect, given its copy-paste compatibility with C, you can opt in to bounds checking by using the STL data types, or similar.

I've heard of abuses of templates to add something that resembles proper ranged types, but it is not a path I'd want to go down.

Partly because C++ templates are no substitute for a proper type system, even if they are a tool that can be used to force a type system into the language.

A proper type system lets you do things like add centimeters and meters together, divide by seconds, and pop out a proper unit and ensure no overflow happens.

A proper type system lets you easily, without the visual noise or compile time hit that templates have, add bounds checking wherever it is appropriate.

    type Age is new Integer range 0 .. 130;
    type Day_Of_Week is (Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday);
    subtype Working_Day is Day_Of_Week range Monday .. Friday;

Take a look at https://en.wikibooks.org/wiki/Ada_Programming/Type_System#De...

For all the flak it gets, Ada did this right long ago. And a huge part of all this is compile time only, and you can opt in/out of the runtime checks as performance dictates.

I'd kill for a proper "milliseconds" type everywhere, one that can only have other milliseconds added to it. And then double the fun with a "seconds" type, and a language that keeps track of which is which for me. Why the heck am I writing timeInMs or timeInSeconds still?
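
Even a bare-bones F* sketch gets you that much (just a sketch; the names ms, sec and add_ms are mine):

    type ms  = { v_ms : nat }
    type sec = { v_sec : nat }

    // milliseconds only combine with other milliseconds
    let add_ms (a:ms) (b:ms) : ms = { v_ms = a.v_ms + b.v_ms }

    let total = add_ms { v_ms = 100 } { v_ms = 250 }   // fine: 350 ms
    // add_ms { v_ms = 100 } { v_sec = 1 }             // type error at compile time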

Subtyping integers should be a basic and primitive operation in any modern language.


I know most Wirth-based languages relatively well.

Actually, your examples are possible even in Pascal.

    type
        Age =  0 .. 130;
        Day_Of_Week = (Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday);
        Working_Day = Monday .. Friday;

However, in 1992 C++ was a viable path forward on the PC, whereas Ada was too expensive and Modula-2 compilers weren't getting the love they needed from OS vendors (https://freepages.modula2.org/compi.html), so C++ was the best alternative for moving from Object Pascal into cross-platform development.

So while C++ isn't perfect, its copy-paste compatibility was key for its adoption, but that is also the reason why C++ will never match Ada in regard to security.

Just like TypeScript or the JVM guest languages, it has to deal with the semantics of the underlying platform.

In any case, as far as opt-in goes, C++ provides the options, and already in 1992 using C was a matter of being religiously against C++. RAII, proper string and array data structures with bounds checking, and reference parameters instead of pointers were already available.

Among the five surviving Ada vendors there is a free option, so anyone who cares about the security issues that C brings into C++ can also have a look at it.

In fact, GenodeOS, seL4 and the NVIDIA Autonomous Vehicle project are all using Ada/SPARK to some extent, as they considered investing in Ada a better option than adopting Rust, given its history in production.


> In fact, GenodeOS, seL4 and the NVIDIA Autonomous Vehicle project are all using Ada/SPARK to some extent,

As far as I am aware, seL4 does not use Ada/SPARK, but C (with some assembler snippets), and Isabelle/HOL (for the proofs). This is also what is written in https://wwwnipkow.in.tum.de/teaching/proof21/SS18/files/14-f...


My mistake, I thought the Componolit work was for seL4, but it is for Genode.

https://componolit.com/index_en.html#components


> Why the heck am I writing timeInMs or timeInSeconds still?

Because "keeping track of types" is much more complicated than one would think at first. How many languages are there that can do something like (3A * 4V == 12VA), (12VA / 3A == 4A)? What if you want to use a generic routine that is just about the integers? It will lose track of all the types. I claim that there is still no nice way to do this stuff except scripting languages (dynamic types).

One of the languages that goes a little deeper into types than mainstream languages, while still being somewhat close to the mainstream, is Haskell. Have you ever tried to use it? You quickly get sucked down into a sea of language and type extensions that you have to use, and that are often mutually incompatible, most of them probably the result of a PhD thesis. And you never quite get what you want (if you want something slightly different, the card house tumbles and you have to pick a different set of extensions). That is, unless you're very, very smart and have so much investment in Haskell that you know its current limits, so you're able to choose a model that is still within the bounds of what is representable in the language.

For all of us mere mortals and practitioners, we use simple integers and get our job done.

For everyone still clinging to the idea of types: I strongly believe the type-system-vs-runtime-values dichotomy is a dead end. Sometimes you need dynamic computations with types; sometimes the types of expressions are known statically, but sometimes not. That's why most mainstream type systems keep it simple and provide escape hatches (like void pointers or RTTI).


> You quickly get sucked down into a sea of language and type extensions that you have to use, and that are often mutually incompatible

"Often mutually incompatible" suggests that you should be able to name, say, ten such pairs of extensions. Can you? (I've used Haskell professionally for nearly eight years and I can't name one such pair. I could probably find a few marginal cases if I spent 30 mins, but they don't come up in practice.)

> most of them probably the result of a PhD thesis

Can you name any extensions that are the result of "just" a PhD thesis (as opposed to a peer reviewed academic paper published at a top conference)?

> if you want something slightly different, the card house tumbles and you have to pick a different set of extensions

Can you name any examples where a card house tumbled and someone had to pick a different set of extensions?


It's been a long time since I tried to do anything in Haskell, but I had to use extensions such as GADTs, type families, existential quantification, kind signatures, multi-parameter type classes...

I can't name any that are mutually incompatible, because I simply don't remember what all of those were about anymore, but I remember there was something (I'll take back the "often"), and I'll bet that some of those that I named above are now deprecated or superseded by others.

> Can you name any extensions that are the result of "just" a PhD thesis (as opposed to a peer reviewed academic paper published at a top conference)?

Well, no, I can't, since I don't follow it anymore, and if I understand correctly you are making my point by stating that a PhD thesis is not the level where you already make language extensions. I know someone who has made significant contributions to GHC as part of their PhD work, though.

My actual point was simply that modelling more than basic ideas (like the physical layout of data) with static types is a bad idea. You quickly get sucked into places where you need to express ever more complicated typing deductions, and you end up resorting to unergonomic stuff like templating, dynamic types and various RTTI approaches, and all kinds of language extensions that are really hard to grasp or that are still in the making. As a guy of modest intelligence, I feel like a dog chasing its tail (and I assume that this is how the super-smart guys out there feel a lot of the time, too).

> Can you name any examples where a card house tumbled and someone had to pick a different set of extensions?

Really anything where you're trying to represent a concept with compile-time language primitives (which is what Haskell guys are about a lot of the time, and take pride in). I'm convinced you know this situation yourself: you start off modelling something statically (in the type language), then an additional requirement comes in, and you don't know how to represent it with compile-time language features anymore, so you end up scrapping everything and rebuilding it with a more modest mindset, with most of the structure in run-time values. Maybe there are a few more run-time case switches or asserts that invalid situations can't happen, but overall it is simpler and easier to understand code.


> I'll bet that some of those that I named above are now deprecated or superseded by others.

All of them are in common use.

> if I understand correctly you are making my point by stating that a PhD thesis is not the level where you already make language extensions

I'm not entirely sure what you're trying to say. I thought you were trying to say that many things that people depend on in Haskell are merely the result of a PhD thesis, i.e. perhaps a messy body of work designed by an inexperienced person. On the contrary, language extensions are well designed and thoroughly thought through.

On the other hand, sometimes people get the idea that Haskell can encode everything in types. It can't. They get stuck and get into messes. Nonetheless the ability to encode some things into types makes Haskell by far the most practical language I've used.


> On the other hand, sometimes people get the idea that Haskell can encode everything in types. It can't. They get stuck and get into messes. Nonetheless the ability to encode some things into types makes Haskell by far the most practical language I've used.

It's not even that; I'll assume that the majority of Haskell novices are aware that there are humble limits to the expressiveness of most type systems, including Haskell's.

The real problem IMHO is the dichotomy that this approach entails. You have to decide which language to use to model a concept (the type language or the run-time language), and if the idea of what you want to do changes a little, chances are you can't continue on your current path.

In other words, I think the type system approach prevents a lot of the fluidity that I want from a programming language as a tool to get things done.


Personally my experience doesn't match that description. I find it's fairly reasonable to convert existing code between "run time type checking" and static types as necessary. I realise that it doesn't click for everyone immediately. Sorry you had a bad experience with Haskell. If you ever try again I'm happy to help. Feel free to drop me a line with any questions whatsoever. My contact details are in my HN profile.


> I'm not entirely sure what you're trying to say.

My understanding is that "because it took a whole PhD thesis to come up with, it must take a lot of effort to understand".

But I don't think that holds up. Much like the "if I had more time I'd have written a shorter letter", it can often take effort and skill to make a thing smaller and easier to understand.


Ada cannot do this at compile time. Predicates with arbitrary logic like that are checked with runtime assertions. There are limits on which static predicates you can write.

F* implements refinement and dependent types, which allow real logical properties to be checked at _compile_ time.
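
For example (a sketch in the style of the tutorial; the name incr is mine), the refinement on the result type below is a logical property that Z3 proves during type checking, not a runtime assertion:

    val incr : x:int -> y:int{y > x}
    let incr x = x + 1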


For what it's worth, the TypeScript type system is at, or nearly at, a point where these kinds of constraints are possible (with the caveat that runtime validation requires extra work or compiler-level hackery).

When TS 4.1 hit beta, I did a proof of concept validating that a `number` was a non-negative integer, and it works[1]. Many similar or even substantially more complex constraints are possible (I'm exploring type-level JSON Schema validation of example values; I'm not quite there yet, but I do believe it's possible).

I’m sure this kind of thing is child’s play in ML family languages, but it’s really cool to get this level of compile time certainty in a dynamic language.

[1]: my proof of concept was using literals without accounting for exponential representations etc, but I have no reason to doubt these are also achievable based on the ways I’ve seen 4.1+ used.


I think the problem with what F* is doing is that it’s so foreign to most people that they don’t actually understand it.

Typescript cannot do what you’re saying at compile time. F* knows _at compile time_ whether or not a value is within a range, and you can get a compiler error _without executing the program_.


TypeScript can do that. That’s what I was saying.


I'm pretty sure TypeScript 4 didn't add support for dependent types. Please cite your source or include a link to source code : ).


Please provide a code example.


This was my proof of concept for non-negative integers. You can clearly use the "template literal type" functionality to do arbitrary type processing. People have used it for JSON, SQL and GraphQL type validation. Validating a numerical range is trivial compared to those.

    type IntegerChar =
      | '0'
      | '1'
      | '2'
      | '3'
      | '4'
      | '5'
      | '6'
      | '7'
      | '8'
      | '9';

    type IntegerString<T extends string> =
      T extends IntegerChar ?
        T :
      T extends `${infer Char}${infer Rest}` ?
        Char extends IntegerChar ?
          Rest extends IntegerString<Rest> ?
            `${Char}${Rest}` :
          never :
        never :
      never;

    type NonNegativeInteger<T extends number> =
      T extends infer U ?
        number extends U ?
          never :
        `${U}` extends IntegerChar ?
          U :
        `${U}` extends IntegerString<`${U}`> ?
          U :
        never :
      never;


It’s not difficult when you can enumerate every member of a type. Can you write a type for prime numbers this way?


I didn't enumerate every member though, just the digits. I'm sure it's possible to calculate primes. The TS type system has been Turing complete since 2.something; it's just easier to use now.

Edit to add: they don't use semver, they just roll over at 9, so it's been Turing complete for ~20 releases.


In poking around, I found this: https://github.com/microsoft/TypeScript/issues/14833

Where someone actually implements a compile-time check for prime numbers. I guess saying that it _can't_ be done is inaccurate. It's just absolutely horrendous.


Right, that issue was on my mind in my last comment. But that was almost 4 years ago, and the type system has become a lot more capable, including the template literal types used in my dumb little non-negative int or in compile-time JSON, SQL and GraphQL parsing. Those are still horrendous, but a great deal less so than some of the hacks from 2017, and a lot easier to read.

It’s definitely not an ML type system, but I’d bet a dollar it takes more inspiration and guidance from F# than C#.


> How many JSON validators have you seen in python and javascript? Why can't the static type checkers like mypy and pyright do this for us?

Because JSON is external data; mypy and pyright operate on Python code.

> They can tell you if you're assigning a string to an int, but not if you're assigning 200 to an int whose type allows only numbers in [10, 100].

Mypy can, in fact, do that (via a union of literal types), though the UX is horrible for that specific case and it only handles simple literals, which is actually enough for JSON semantics. Of course, to validate JSON with it, you'd have to turn the JSON into Python code with type annotations and validate that.


Like you're suggesting, external data can be seen as a variant of literals embedded in code.

https://www.python.org/dev/peps/pep-0589/

  movie: Movie = {'name': 'Blade Runner',
                  'year': 1982}

It's all possible in theory, but the support in widely used static type checkers is very basic or has bad UX.

On the other hand, this is exactly what you need to write secure code.


Type systems don't do the thinking for you; you still have to make a type with a range check. So you could just as well write

  function foo(n) {
    if(n < 10 || n > 100) throw new Error("Expected n=" + n + " to be between 10 and 100");
    return n*2;
  }

And while you are at it, you can write a unit test that is checked at compile time or in the editor/IDE.

  // assert(foo(10),20);


Runtime safety and compile time safety are not the same thing. Compile time safety is not a silver bullet but it does protect against quite a few classes of errors.


Try reading the tutorial. F* uses refinement types, which means that logic is checked before the program is executed, at compile time.


I'm confused. Is this either very new or very old? What's the context?


F*'s primary aim is correctness - think something between Coq/LEAN/Isabelle and OCaml/F#. It uses a dependent type system to verify deep functional properties of programs (say, that qsort actually sorts, or that a tree is balanced and implements a mathematical map).

Typically such languages are more academic than F*.


I would also say take a look at Bosque; it is also from the same camp as F*, but much simpler to use Z3 with.


I am not familiar with F*, but I will try to provide some context, badly, so that someone who does know it can correct me.

JavaScript is dynamically typed, and so has no type checking until runtime; you don't know whether a passed-in parameter is a string or not until you try to use it.

C# is statically typed, so the compiler knows whether something is a string when passed in at compile time, not making you wait until runtime.

OCaml/F# have a stronger type system than C#, so the compiler checks for you not only that something is a string, but that it is a named type like fifteenString, which is a string that is always 15 characters long.

F* takes this even further: the compiler checks for you that you've passed in a fifteenString and that its value matches criteria like "starts with hello" or "does not contain bye".

C# saves you from needing to check at runtime whether a param is even a string.

F# additionally saves you from needing to check if it is null and 15 characters in length at runtime.

F* additionally saves you from needing to check at runtime that its value starts with hello.

Again this is a single language feature probably poorly explained!
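
To make that last step concrete, a rough F* sketch (the name fifteen_string is mine, and I'm assuming FStar.String.length for the length function):

    type fifteen_string = s:string{FStar.String.length s = 15}

    // a caller must prove, at compile time, that its argument is exactly 15 characters
    val greet : fifteen_string -> string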


The only difference between F# and C# with respect to the notion of a "type which represents a string of length 15" is the amount of code needed to do it, as described by Scott Wlaschin [1]. You could do the equivalent in C# too. I don't know if this is also the case in OCaml, though.

[1]: https://fsharpforfunandprofit.com/posts/designing-with-types...


OCaml beginner here, didn't know you could restrict an OCaml type to a string of a given length!! How would you go about that?? Cheers :)


The easiest way is probably to define a private subtype of string (type small_string = private string) with smart constructors that enforce the restriction at runtime. This often means that creating such values may fail, but if you have a value of this type you are assured that the constraint on the size holds.


As pointed out by dang, the same (I think) tutorial was discussed in 2016 (https://news.ycombinator.com/item?id=10949288). I went through the first half of the tutorial in 2018, and I think some aspects of F* were genuinely new, but the development seems to have stopped. There is a need for a framework for program verification of real-life software by non-experts, but F* might not be it. In the latest article [1] of Project Everest one can read: "[...], it required several person-months to implement and verify a generic doubly-linked list library in F*, while it required only three hours to do so in Dafny."

[1] A Security Model and Fully Verified Implementation for the IETF QUIC Record Layer (Antoine Delignat-Lavaud, Cédric Fournet, Bryan Parno, Jonathan Protzenko, Tahina Ramananandro, Jay Bosamiya, Joseph Lallemand, Itsaka Rakotonirina, Yi Zhou) To Appear In The Proceedings of the 42nd IEEE Symposium on Security & Privacy 2021.


  it required several person-months to implement and verify a generic doubly-linked list library in F*, while it required only three hours to do so in Dafny

This could be deceptive. Structures like linked lists are fiendishly hard to implement in safe Rust too, but people are able to stay productive by (mostly) not implementing things like that themselves. In Rust's case the difficulty comes from ownership: anything involving complex reference graphs for the sake of traversal is going to be really gnarly to establish a single-ownership story for. I don't know about F*.


According to the F* landing page [1]:

> F* (pronounced F star) is a general-purpose functional programming language with effects aimed at program verification.

> The main ongoing use case of F* is building a verified, drop-in replacement for the whole HTTPS stack in Project Everest [2]. This includes verified implementations of TLS 1.2 and 1.3 and of the underlying cryptographic primitives.

[1] http://www.fstar-lang.org/

[2] https://project-everest.github.io/


Fairly new language.


I've recently had an interest in formal verification as it relates to programming, but my background is so god-awful I'm afraid to approach it. How does F* fare for beginners with little prior exposure?


> but my background is so god-awful

I suggest you read (or rather solve) this masterpiece:

https://softwarefoundations.cis.upenn.edu/


This tutorial aims to be beginner-friendly, assuming you have a background in functional programming. However, the pace is very brisk, and it would serve better for an intermediate learner already familiar with the concepts. As a practical tool it would not provide much unless you really know what you are doing and plan to leverage SMT solvers and code generation. You'd be better off with simpler and more focused learning materials, for example The Little Prover or Software Abstractions.


If curious, see also this related thread from 2016: https://news.ycombinator.com/item?id=10949288


Just joking: will version 3 be called F***?

I like the language btw.


How does the language compare to a language like Idris?


F* is a great idea for writing libraries that cannot fail. But setting up the environment could be better documented.


I find the syntax both terrible to read and terrible to write.

Syntax is what I look at first with any unfamiliar programming language. The syntax is the UI of a language and if that's terrible, I won't enjoy using it.


People get used to syntaxes. By avoiding languages that don't immediately appeal to your current syntactic preferences, you are missing out on genuinely important and interesting ideas over a triviality that would go away with familiarity.


It's the syntax of the ML family of languages, very familiar if you've studied programming languages. I'm not sure what you can find terrible about it; it's a clean and minimal syntax.


> I find the syntax both terrible to read and terrible to write.

I don't, but then I've spent time with a variety of languages that, like F*, are from the ML syntax family, and I find that syntax quite comfortable.

I can see why it would seem that way at first impression to someone who, for instance, has almost exclusively used languages with Algol-derived syntax (which is pretty common these days, since outside of Haskell from the ML family and Clojure from the Lisp family, most currently popular languages have Algol-derived syntax, or at least superficially Algol-like syntax.)

> Syntax is what I look at first with any unfamiliar programming language.

It's one of the things I look at early on, too, but it's also something I try not to judge too firmly early on, because one's subjective view of syntax, like other UI impressions, is highly dependent on familiarity, and it takes a while to get past superficial similarity and dissimilarity to what one is used to and into any deeper strengths and weaknesses of the syntax (or other UI).


Since the syntax of F* looks okay to me, I am curious which particular points of F*'s syntax you find terrible to read and write.


In general I like ML-based syntax, but Haskell's use of type signatures really helps make code easier to read. I wish other languages would adopt them.


I don't understand this critique with regards to F*. As far as I can tell it has a type signature syntax that is very similar to Haskell, and is used extensively in the linked tutorial.


What’s an example of a language with a good syntax?


I think Python (if nothing else) has good syntax. Forcing students not to write spaghetti is probably the only thing I actually like about Python.

I really like the philosophy behind Haskell, for example, but I really think its syntax lets it down: it encourages very snippet-y programming, which in turn means structure can require parsing rather than glancing at, and it also encourages very short variable names.


How's the interop with .NET and C#?


AFAICT not great. It's an academic language and doesn't focus much on interop. The F# extraction system has been broken previously and I wouldn't recommend relying on F* for any commercial work unless you have a connection to the core development team.


Are you thinking F#?

I believe that other than being ML-like language and being developed jointly by Microsoft Research and Inria, F* has nothing to do with F# and .NET.


"Programs written in F* can be translated to OCaml, F#, and C for execution. ... The latest version of F* is written entirely in a common subset of F* and F#, and bootstraps in both OCaml and F#."[1]

[1] https://en.wikipedia.org/wiki/F*_(programming_language)


Thanks, I stand corrected.


No I was just hoping it was connected somehow. I only found a single mention of .NET:

> F* provides a facility to specify interfaces to external modules that are implemented elsewhere. For example, operations that perform file input/output are implemented by the operating system and made available to F* programs via the underlying framework, e.g., .NET or OCaml.


Who reads f * * * instead of f*?


"It will help if you are already familiar with functional programming languages in the ML family (e.g., OCaml, F#, Standard ML), or with Haskell—we provide a quick review of some basic concepts if you're a little rusty, but if you feel you need more background, there are many useful resources freely available on the web, e.g., Learn F#, F# for fun and profit, Introduction to Caml, the Real World OCaml book, or the OCaml MOOC."

Isn't it yet another copy of OCaml? I think it shouldn't say "familiar" but something different. I really don't underestimate any effort to make OCaml available on .NET, but referring to excellent OCaml books can't be explained by simple 'familiarity'...


You are probably thinking of F#, not F*. Very different languages.


"Programs written in F* can be translated to OCaml, F#, and C for execution. ... The latest version of F* is written entirely in a common subset of F* and F#, and bootstraps in both OCaml and F#."[1]

https://en.wikipedia.org/wiki/F*_(programming_language)


I’m not sure what point you’re trying to make with that quote. The languages that F* can be compiled to and the ones it’s written in aren’t relevant to what was being discussed.


Why is that relevant, though? You could also write a version of (for example) Prolog or Forth or Lisp in OCaml that would compile to OCaml for execution. That doesn't mean any of those are OCaml. Coq isn't OCaml. Haxe isn't OCaml. The fact that F* is written in, uses a subset of the syntax of, and can be compiled to F# (or OCaml) is an implementation detail; the language and the purpose of the language aren't to be OCaml/F#.


> Isn't it yet another copy of OCaml

No, closer to ATS maybe?



