Hacker News | simonrepp's comments

Hey, first off thanks! :)

1a) Because faster was only one aspect; it also needed to be easier, even more pressingly so, in fact. 1b) See the answer by the other poster (thanks!) 2) Not whitespace-sensitive, no way to enter wrong types through syntax mistakes, hardly any learning curve for users because there is so little syntax to memorize, fully localized, hand-written parser and validation errors (provided on the library side) ... and so on. Check out the website for more, it's all there! ;) Thanks!


Regarding types check out https://eno-lang.org/javascript/#loaders - basically eno allows arbitrary types on the language level and provides loaders for all primitive types and currently also a small set of non-primitive types through the libraries. The extent of loaders provided out of the box might grow, likely also externalized into companion packages like https://github.com/eno-lang/enojs-exploaders/, which currently serves as experimentation ground for this.
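Since loaders are in essence plain functions, the concept can be illustrated with a small self-contained sketch. Note that this is only an illustration of the idea, not the actual enojs API; the `color` loader here is hypothetical:

```javascript
// Hypothetical loader sketch: in eno's model, a loader is just a
// function that takes a raw string value and either returns a
// typed/normalized value or throws a descriptive error.
function color(value) {
  if (!/^#[0-9a-f]{6}$/i.test(value)) {
    throw new Error(`'${value}' is not a valid color (e.g. #2a2a2a)`);
  }
  return value.toLowerCase();
}

// Consuming code would then explicitly request the type, roughly:
// const accent = document.color('accent');
console.log(color('#FA8072')); // '#fa8072'
```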

The core might in fact be reimplemented in C or Rust and used across implementations through native bindings. Only a small portion of the actual parsing core can be outsourced like that, though, so it will depend on what the actual benchmarks say; there's also a cost associated with passing data around through bindings. The devil's in the details there, unfortunately. :)

The editor in the introspection demo is Atom; the introspection is based on the excellent autocomplete boilerplate at https://codersblock.com/blog/creating-an-autocomplete-plug-i... paired with a few lines that utilize https://eno-lang.org/javascript/#Section-lookup to determine the exact context for the autocomplete suggestions. Glad you like it, thanks for your interest!
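The idea behind the context lookup can be sketched roughly like this. This is a toy illustration only, not the actual plugin code; the section names and suggestion table are made up:

```javascript
// Toy sketch: find the section enclosing the cursor by scanning
// upwards for the nearest '# heading', then offer only the keys
// that are valid inside that section.
const suggestionsBySection = {
  contact: ['name', 'email'],
  address: ['street', 'city']
};

function suggest(lines, cursorLine) {
  for (let i = cursorLine; i >= 0; i--) {
    const match = lines[i].match(/^#+\s*(.+)$/);
    if (match) return suggestionsBySection[match[1]] || [];
  }
  return [];
}

const doc = ['# contact', 'name: Jane', '', '# address', ''];
console.log(suggest(doc, 4)); // ['street', 'city']
```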


One of the design considerations was, and is, that the format is very strict (and in that way predictable), but at the same time as helpful as possible in identifying, communicating and resolving issues.

To that end all error messages that can occur are handwritten, fully localized and shared across all eno libraries (see https://github.com/eno-lang/eno-locales/blob/master/specific...) and the API implicitly handles them for you when you write programs that consume eno.

So basically eno does no magic fallbacks of any sort when faults occur, but it is candid and friendly about it when it happens. :)


From what I saw, I think for at least a few parsers this might be the case because they are built on generated parser code, and it's easy to run into unfavorable bits and pieces in the output that way, which can drag down performance completely even though 95% of the parser is just fine. Technically there's no reason why TOML parsers shouldn't be just as fast as, or faster than, YAML or even eno parsers. :) In any case I'd be happy if the benchmarks stir up some movement and maybe kick off a high-performance TOML parser initiative; TOML is an awesome format, and 0.5.0 was just officially released, so there's a good reason to update the parsers now anyway. :)


Conveying what types to enter is not an eno-specific problem, as a user without schema or code access you don't know which types a blank YAML/TOML file expects either!

Aside from the absolutely valid meta solutions (e.g. in-file comments, clear key naming, documentation), there is an additional way this is approached in eno: if you use the type loaders provided by the API (say `my_var = document.url('website')`) and properly expose errors to the user, the user will get a localized (!) error message in their language, like "'website' must be a valid url (e.g. https://eno-lang.org/)".
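To make that concrete, here is a rough, self-contained sketch of the mechanism. These are not the actual enojs signatures; the `url` helper and the message table are illustrative only:

```javascript
// Illustrative sketch: the application explicitly requests a type,
// and a failed conversion surfaces a human-readable, localizable
// message tied to the field name.
const en = {
  invalidUrl: name => `'${name}' must be a valid url (e.g. https://eno-lang.org/)`
};

function url(fieldName, rawValue, messages) {
  if (!/^https?:\/\/\S+$/.test(rawValue)) {
    throw new Error(messages.invalidUrl(fieldName));
  }
  return rawValue;
}

try {
  url('website', 'not-a-url', en);
} catch (err) {
  console.log(err.message);
  // "'website' must be a valid url (e.g. https://eno-lang.org/)"
}
```

Swapping in a different message table (say, `de` instead of `en`) is all it would take to localize the error.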

In the long run we can have community packages for any number of important, locally unique types (loaders are just simple functions, so they can be easily authored), so at some point you likely won't have to write any one-off validation code, nor the error messages or their localizations; you'll just pull them in as dependencies.


How about type inference? You can look at Rebol/Red/Tcl for inspiration; they already look like a config format but have a defined approach to types:

https://randomgeekery.org/2004/12/26/rebol-datatypes/

http://www.re-bol.com/rebol.html


I appreciate the input :) but the thing is that the typing concept in eno as it is now is essentially what makes eno eno. Every application that uses eno decides for itself what types it supports and requires, and that in turn is how eno manages to be so simple and usable on the language level, even for completely non-technical people who normally feel uncomfortable with the idea of editing their content as raw text files.

If I added types and type inference again, I would essentially arrive back at YAML and TOML, and I don't want to reinvent them. ;)

But if I actually misunderstood you there, please let me know and do clarify!


I get that, but I wonder how to provide a base set of types that avoids small incoherences.

JSON is a good example:

https://www.tutorialspoint.com/json/json_data_types.htm

It's so spartan that everyone needs to encode dates somehow, to take a simple example. I think these are the base types (based on my experience with RDBMSs, on building a relational language now, and on constantly having trouble with CSV, JSON and other formats in ETL-type tasks):

- String

- Floats. Could be split into Ints/Floats, but sticking to just Float is OK. However, make it Float64.

- Date(Time). And make it ISO 8601. No ambiguity.

- Boolean

- Decimal64. This is a pet peeve of mine. A lot of data in the business space is about money, and floats are not OK. What if, like in Rebol, $3.2 were a decimal?

Then the composites.

i.e. this is JSON + dates/decimal. And mandate a single encoding (UTF-8?).
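The float-versus-decimal concern for money is easy to demonstrate with a classic binary floating point rounding issue:

```javascript
// Plain floats are risky for money values:
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

// Working in integer cents (or a proper decimal type) avoids it:
const cents = 10 + 20;
console.log(cents / 100 === 0.3); // true
```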

It's insane that, for example, you can save a CSV in Excel, open it again, and Excel gets lost and can't parse it correctly.

Apart from this: url, email, host, website, phone, cellphone, city, country and state are so common that maybe they could be pulled in with an import like "!schema:common-fields" or something.


You're right, not yet! Jump-starting the whole ecosystem was a major time investment for me but now that there is public exposure providing a formal spec has a higher priority because someone might actually see it and do something with it ;) Keep an eye on https://github.com/eno-lang/eno, this is where I'm working on it, I'll also announce it on the newsletter (http://eepurl.com/dA9LcH) when it's there!


1) Yes! (You can directly dump it to a language-native structure with the raw() method too, this is not 1:1 YAML/TOML style generic deserialization though as there are no fixed types in eno)

2) Some detail aspects of whitespace parsing around the line continuation syntax will still need to be specified by the language. The shared official API I am implementing for the different platforms is fully open to improvement and future reinvention, though; I'd love to see a completely new take on a library API if one comes up in the future. :)

3) Definitely!

4) I try to keep things as consistent as possible across the platforms, but if there are important language specific paradigms I think these should be taken advantage of! I can't answer details regarding the PHP implementation yet but keep in touch, I'm happy about a dialogue here! (Also I can't be good at everything :)).

5) I want one! Obviously there can't be a stable generic "just dump it already" implementation, but a smart builder-type API is definitely on the list, I even started one for enojs but had to re-prioritize because there was so much else to do for the whole ecosystem. ;)


section vs. list in eno is like object vs. array in JSON - you need both.

eno has neither indentation nor closing tags of any sort. That means if you use a section to group some values, you need to start another section to end the previous one (no closing tags!). That's why there are fieldsets, which allow short groupings that automatically end with the next field/list/fieldset.
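Roughly, the distinction looks like this (syntax as described in the eno docs; treat the exact sample as approximate):

```
# contact

name: Jane Doe

address:
street = Example Lane 1
city = Vienna

tags:
- release
- library

# next_section
```

Here `address` is a fieldset: its `key = value` entries end automatically at the next field or list, while the `# contact` section only ends when `# next_section` begins.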

I share your opinion that a single syntax would be the ideal thing, but not having closing tags (which keeps the language simple and fast to write) required a trade-off in the language design to be made.

Why not JSON, Lua table-notation or S-expressions? Because the prime design goal was to achieve greatest possible simplicity and accessibility - almost anyone should be able to use it, no matter the background. If possible I would have wanted eno to be even more reduced and simple, but at some point you have to draw a line too, otherwise you end up with a toy, and then you won't ever get adoption by devs either. So this is why eno ... :) Thanks for your question!


I fully agree with your assessment. eno allows arbitrary types, therefore if, and to what extent, non-primitive type loaders should be included as core functionality needs to be thoroughly considered and negotiated soon. I included non-primitive loaders (also the exotic lat/lng ;)) to (a) show that this is a possibility and (b) get hands-on insight into how well this works in real-world usage. (In short: I love it so far, but I'd love to hear other experiences!) It took months to get the whole ecosystem jump-started as a one-man show, so that's why some loaders are ... pragmatically coded; you're of course right on that. Admittedly I had no idea the email spec was that complex, thanks for making me aware. ;)


What you're seeing there is actually validation :), the lat/lng type is not magically inferred but instead explicitly requested by the code - if it were not valid it would generate a user-friendly, localized error message. Also the underlying document hierarchy that holds the data is validated. So on the contrary it is actually hard not to validate content in eno.

The quirky looking syntax you mention is probably familiar to you from YAML or paper forms ("Name: Joe"), and Markdown ("# Section"), likewise if you have two levels you use "## Subsection".
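Side by side, those parallels look like this (eno syntax as shown on the site):

```
Name: Joe

# Section

## Subsection
```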

There are quite a few things that eno solves (there's only so much space on a frontpage, sorry), but if you want one of the more prominent ones: it's considerably hard to win over users without a technical background to switch to secure, statically generated content solutions when the most prominent format works like this: http://yaml.org/spec/1.2/spec.html I've explained eno in 5 minutes to a non-technical intern who is now managing content at a client of mine, and in months I haven't heard a single question about how eno works! User empowerment. <3


> What you're seeing there is actually validation :), the lat/lng type is not magically inferred but instead explicitly requested by the code - if it were not valid it would generate a user-friendly, localized error message. Also the underlying document hierarchy that holds the data is validated. So on the contrary it is actually hard not to validate content in eno.

Please make this clear on the website! I think it’s a clever idea.

Also, this has garnered some attention on Lobste.rs [1], if you’re interested to read the discussion there as well.

[1]: https://lobste.rs/s/jno6gb/eno_notation_language_libraries


Thanks for that feedback! I'll see what I can do to communicate the API type concept better; I'm generally struggling to pack the whole breadth of things into the little prominent space available on the website, but eventually I hope to get it right for most people. :)

Also thanks for letting me know about the lobste.rs thread! I'll see that I get invited and answer the few not-yet-answered points; I couldn't get to it yesterday, unfortunately, amidst the flood of comments, issues and PRs here and on GitHub. ;)

