What I was looking for on the website, and what I think matters more than implementing a parser in yet another language, is schema support. That is, you should provide something like XSD for XML, JSON Schema for JSON, TOLS for TOML.
Why? There is a need (see above enumeration) to declaratively specify what an Eno file should look like. I do not want validation to creep into my code, like you do with `document.string('author', required: true)`. This just scares the hell out of me. Say you want to parse the same Eno file from different languages: you also end up replicating the validation, which means you end up maintaining it, or rather not maintaining it... Apply leverage by moving validation into your parser.
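To make the contrast concrete, here is a minimal sketch (plain Python, not eno's actual API; the schema format is invented) of what moving the rules out of application code could look like:

```python
# Illustrative sketch only: a tiny declarative schema checker, not the eno API.
# Because the rules live in data, every language binding can enforce the same
# schema instead of each application re-implementing the checks.
SCHEMA = {
    'author': {'type': str, 'required': True},
    'year':   {'type': int, 'required': False},
}

def validate(document: dict, schema: dict) -> list:
    """Return a list of validation errors for document against schema."""
    errors = []
    for key, rules in schema.items():
        if key not in document:
            if rules['required']:
                errors.append(f"missing required field '{key}'")
            continue
        if not isinstance(document[key], rules['type']):
            errors.append(f"field '{key}' should be {rules['type'].__name__}")
    return errors

print(validate({'year': 2018}, SCHEMA))  # ["missing required field 'author'"]
```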
Another thing is that it appears you are implementing the parsers by hand instead of using a parser generator that consumes a grammar for Eno. What is your reasoning behind this? Is it performance? Did you benchmark using generated parsers (maybe wrapped in a nice API)?
As someone who has both written parsers by hand and worked with generated parsers, to add to what OP said about performance: in my experience there is a lot of noise that comes with a generated parser, and interacting with its code/output is quite unpleasant.
Also you have to decide between checking garbage generated code into your source and adding a build step so you don’t check it in (which is less trivial for certain languages/stacks).
Unless I’m making an MVP or a prototype, I would write the parser by hand. It’s not as hard as it sounds.
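To back that up, a toy sketch of a hand-rolled parser for an eno-like `key: value` subset (the names and error handling are my own, not the eno implementation):

```python
# Toy hand-rolled parser for a tiny eno-like subset (fields and > comments only).
# Meant to show how approachable a hand-written parser is; not the eno grammar.
def parse(source: str) -> dict:
    fields = {}
    for number, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith('>'):  # blank line or comment
            continue
        key, separator, value = stripped.partition(':')
        if not separator:
            raise SyntaxError(f"line {number}: expected 'key: value'")
        fields[key.strip()] = value.strip()
    return fields

print(parse("author: Jane Doe\n> a comment\nyear: 2018"))
# {'author': 'Jane Doe', 'year': '2018'}
```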
If there is demand or initiative for a portable schema solution I'll gladly support it! The native architecture in the eno libraries is programmatic because that has powerful merits of its own, which are employed to the fullest in the API design. As always there's not one best choice, and 'validation creeping into code' can just as well be turned around into 'external schema definition creeping out of line with code'. ;) Do you have some concrete use case in mind or planned where we could explore what a portable schema solution could look like for eno? Drafting things from various real-life use cases has worked great for eno so far, so that's the route I would love to go here too if we follow that track!
Custom parser implementation is easier to answer: by now I've iterated through dozens of custom parser designs for eno in multiple languages, and I'm confident that generated parsers would not stand a chance of being faster. They do the same thing I do, after all, only I can't hand-optimize what they produce afterwards. :) You can study the benchmarks I linked to under [3]; there are some generated TOML parsers included with rather disappointing performance, to put it mildly, and as it stands there's not much that's faster than the eno parsers in YAML/TOML land anyway, so I currently have little incentive to experiment in that domain. :) The long-term goal is to (optionally) integrate (generated or custom) C (respectively Rust) parser cores through native bindings as well, so that will surely bring this question up again.
I've come to really appreciate the difference between "syntactically correct" (ie: is a file valid xml) vs "semantically correct" (ie: does it follow a specific dtd, given that it is valid xml). More than that, I've come to realize how many other people don't have this appreciation, even though they identify problems that relate directly to this distinction in everyday usage.
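A concrete sketch of the two layers (the grammar and the schema rule below are invented for illustration, not taken from eno):

```python
# Sketch of the two layers; the grammar and the schema rule here are invented.
class ParseError(Exception): pass        # layer 1: input is not even well-formed
class ValidationError(Exception): pass   # layer 2: well-formed, but wrong shape

def parse_field(line: str) -> tuple:
    key, separator, value = line.partition(':')
    if not separator:
        raise ParseError(f"not a field: {line!r}")  # syntactic failure
    return key.strip(), value.strip()

try:
    parse_field('just some text')        # fails at the syntax layer
except ParseError as error:
    print(error)

key, _ = parse_field('title: Hello')     # parses fine...
if key != 'author':                      # ...but a schema can still reject it
    print("ValidationError: schema requires an 'author' field")
```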
To truly have a portable file format, there needs to be a way to do both validations reliably in different contexts (eg: different languages). If you ignore this part of your design, it may become the slowest-moving part of the eno ecosystem, because your grammar will have quirks that you'll end up needing to support long-term. I suggest toying with this functionality now and providing something which is extremely pessimistic about what it will pass. Only loosen things up as people demonstrate a need, and keep your entire spec as tight as possible.
I would imagine that you could even use eno syntax to describe document structure, much like xml and dtd have such strong parallels with each other. Then you get the fast parser in both places essentially for free!
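Purely as a thought experiment (eno has no schema format today, and the field names below are invented), such a self-hosted schema might read something like this, using ordinary eno sections and fields:

```eno
# author
type: string
required: yes

# year
type: integer
required: no
```

Any existing eno parser could then read the schema document itself, which is where the "for free" part would come from.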
Finally, on the format of eno itself, I'm curious about your thoughts relating to unicode characters that visually masquerade as common characters. eg: a fullwidth colon '：' (U+FF1A) standing in for the ASCII colon ':' (U+003A).
Syntax vs Semantics is distinguished by ParseError vs ValidationError in the eno libraries - I'll keep the importance of distinguishing them in mind for the schema development too - thanks for pointing this out!
Right now only an ASCII colon is interpreted as an operator, but this looks like a question to thoroughly consider for the next and final spec (which is planned for 2019, currently we're in frozen RC) - work on this currently happens at https://github.com/eno-lang/eno.
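For illustration, a quick check (plain Python, nothing eno-specific) of how such a lookalike differs at the code-point level:

```python
# The fullwidth colon looks like the ASCII colon but is a different code point.
ascii_colon = ':'           # U+003A, currently the only colon eno treats as an operator
fullwidth_colon = '\uff1a'  # '：', a visual lookalike
print(hex(ord(ascii_colon)), hex(ord(fullwidth_colon)))  # 0x3a 0xff1a
print(ascii_colon == fullwidth_colon)                    # False
```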
There is escaping for arbitrary keys by using backticks - see the advanced language feature documentation at https://eno-lang.org/advanced/. In the case of `# #twitter` you wouldn't need it, though, unless you omit the space.
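If I read the escaping feature right (the snippet below is my own sketch; the linked docs are authoritative), the two cases would look roughly like this:

```eno
> section named '#twitter': the space after the operator makes escaping unnecessary
# #twitter

> a key that needs backtick escaping because it contains a colon
`key: with a colon`: value
```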