Over the past 21 months I’ve written a code editor from the ground up (edita.vercel.app)
410 points by gushogg-blake on Jan 31, 2023 | 136 comments



> A unique property of writing a code editor is that after a very short bootstrapping period, the thing you’re writing will also be the main tool you’re using to write it.

I love these types of projects that happened because someone needed it and wanted to, and I loved that they came to us with the project in hand so they didn't have to talk down 100 but-why's like "dude why do you need yet another hobby editor, just use vim/emacs/whatever, i program in $LANG too and use these 37 plugins and 4 keymappings and it all Just Works".

I contribute to a popular IDE that I'm sure a lot of you use, and I'm still going to put down my setup and put a week all-in into this just to, if anything, spiritually put something out there, something positive, to validate this person's work. I see value in projects like these, and though it's hard to put into words I hope someone more eloquent echoes this sentiment in a more tangible way. This is great stuff, I wish more people broke the norm and just did-the-thing.

> Randomly deleting unit tests may seem like insanity to some of you, and in certain contexts it probably is — but writing your own editor is a particular endeavour, and if you embark on it I encourage you to do whatever feels right to you, refactor as you please, and embrace the chaos. Happy hacking!

I wish more people dared to buck the trends. Good on you, Gus.


> i program in $LANG too and use these 37 plugins and 4 keymappings and it all Just Works

I've been wondering if there is some analgory between programming editors and armoured military vehicles.

The backend guys might need a full-on tank (Abrams/T90) - an IDE/hyper-custom vim. The front-end infantry might need a lighter weight but highly mobile IFV (Bradley/BMP) like VSCode/whatever.

It's possible the IFV/artillery SPG/etc vehicles can also be built on the body of a tank to "save money" and time. Modularity is important, right? But it also creates frankensteins like the BMPT [1], which is an IFV built on top of a widely available T-72 tank platform but costs 2-4x what a normal IFV/BMP costs [2] and lacks the benefits of either an IFV or a main battle tank.

Ultimately it has little to do with the inherent generic capabilities of the base editor/platform. It's about designing the platform around a certain audience and skillset, then building a community around its mission. Then iterating on that particular use case.

I was a hardcore fan of Vim for the longest time but I've come around on VSCode as the essential default for frontend/Typescript development which I do. Sometimes extensions/plugins aren't enough. You need an editor built around an idea/community that fully embraces it. Much like programming languages.

[1] https://en.wikipedia.org/wiki/BMPT_Terminator

[2] "Price of the BMPT is equivalent to two modern Russian main battle tanks." https://www.military-today.com/tanks/bmpt.htm


What a fascinating combination of "analogy" and "allegory".

This aside, I agree with your point - the best articulation I can think of right now is the human equivalent of "do what I mean not what I say". Where sometimes the collective intent gets stuck in an ideological rut and can't become what everyone wants it to be for lack of expression.


This is mostly how I work today. Intellij for Java backend work, VSCode for front end work. Both using VIM keybindings.

And to your point, I couldn't imagine developing Java without Intellij. It's like they are made to work together.


Learning VIM keybindings has definitely been one of the biggest productivity investments I've made across tools. And I rarely use vim.

However, I love the vim community and how they port VIM bindings to almost any IDE (or most software that works with text and allows for some extensions). Having the same way to deal with text across OSes, machines and software feels very liberating.


Hah. Same! Every so often I try something like LunarVim and want to be a 'real' vim user, but end up back in my IDEs with vim keybindings instead.


Have you tried Webstorm?


I have Intellij Ultimate. It's just really heavy when I'm moving around a js project. Not against it though, some colleagues use Intellij for everything and seem happy.


I'd expect software to be more "stretchable" than armoured military vehicle designs. Hopefully, both the tank and the IFV can be Vim or Emacs or VS Code, and both can be good without compromises.


Yep - for projects like this, unit tests serve to reduce the dev's mental load. Once a test gets in the way of making easier progress, it's no longer serving that purpose.


Generalized a bit, unit tests on all projects serve to make progress easier and regression harder. There are secondary benefits in evidence/documentation. When the ROI on them turns negative, just delete them. Overcoming the belief that you can't delete unit tests is the lowest-cost way to start writing more and better tests.

It's so weird when you think about it. People don't delete unit tests because not having them is bad. So, they... don't write them? "No unit tests" is a state we can always get back to in seconds. Don't be afraid to leave it.


> Randomly deleting unit tests may seem like insanity to some of you

I too, like to live dangerously.


>Don’t worry about breaking things. If the new structuring you’ve thought of is better, get stuck in and change the code. You will introduce bugs this way, but because you’re using the editor every day to write itself, you’ll find them quickly (in fact, almost by definition, you’ll tend to find them more quickly the more important they are) and your policy of always working on whatever needs fixing right now will make sure they’re fixed promptly.

Solid advice in the early-ish days of any project.

It's not like your code was perfect before the refactor - you just hadn't found some of the bugs yet. You will have bugs after too, but now you have more experience and will probably find most of them quickly. Don't get too attached to the maybe-bugs you currently have.

Also:

>When you’ve got a while (true) loop that searches a tree, it’s gonna throw you an infinite loop every now and then.

A surprisingly effective strategy I've found for development purposes: never have infinite loops.

Nothing is actually infinite. You think X might loop 100x? Put a limit of 10,000 ("absurd and impossible"), and crash your debug builds when it happens. With enough of this, you'll get hits, and they're extremely effective at catching pathological cases that you didn't realize could happen. "It's a bit slow" is often too hard to notice.
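
A rough sketch of the kind of helper I mean, in JS (the names and the DEBUG flag here are made up for illustration, not taken from the article's editor):

    // hypothetical helper: bound a "should be finite" loop and fail loudly
    // outside production builds
    const DEBUG = process.env.NODE_ENV !== "production";

    function boundedLoop(maxIterations, body) {
      for (let i = 0; i < maxIterations; i++) {
        if (body(i) === false) return; // the body returns false to stop early
      }
      if (DEBUG) throw new Error(`loop exceeded ${maxIterations} iterations`);
    }

    // expected to loop ~100 times? cap it at an "absurd" 10,000:
    boundedLoop(10_000, () => {
      // ...one search step here; return false once the node is found...
      return false;
    });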


In personal projects, refactors are dangerous. Not because of bugs, but because they tax your motivation more heavily than other tasks. Personal projects don’t fail until you decide they do, so maintaining your enthusiasm is key.


I guess it depends on what motivates you! I find refactors a zen-like experience. Figuring out the way to make everything fit together elegantly is a really satisfying task for me.

Also, "personal projects don't fail until you decide they do" is a great insight :)


Depends what the refactor is. I think personal/side projects are a great place to give into the attractive nuisances we encounter in programming.


I actually love refactoring.

I really like to keep my code "elegant" and so I tend to constantly refactor it (which also makes a typical refactoring step small, and thus not a chore).

At work it's sometimes a hindrance: I tend to overdo the clean-up stuff; for me it's hard to resist a temptation to have even simple throw-away code clean and maintainable, and it is difficult for me to cut corners when working on something urgent. Of course sometimes throw-away code is not thrown away and is maintained for years and years, and cutting corners accumulates code debt (which you later on pay with interest), but overall I tend to be slow with small projects, although reasonably fast (if you include debugging time) with large ones.

But when working on a personal project, and can tidy the code up to my heart content! It actually helps to keep up my enthusiasm, not the other way around.

I guess different people have different preferences and predilections :-)


For GCC I just added a debug counter facility. You could tell it what value a given debug counter should stop on from the command line.

This also made automatic minimization of bugs much easier and faster.

One thing that other (applicable) projects don't seem to do, and that LLVM and GCC were quite good at, was using automatic tooling to minimize test cases for bugs. Find a large test case causing an issue. Run the automatic minimizer and leave it alone for a bit, then come back to a minimal test case that reproduces your issue.


Many things are infinite from the perspective of the program, for some classes of programs. An IoT device generally is always doing the same loop and responding to interrupt signals for anything else. Game loops work in a similar way.

I guess you can have a limit on the loop, have it break out to make sure you're still listening to interrupts, do some logging and go back to looping, but in an environment where interrupt signals are native, an infinite loop can be and is fine.

Glad I got my bikeshedding allowance out for the day.


Most of the things that are infinite from the perspective of the program tend to be event loops of different forms, though, where that perspective is "unless the right event arrives". There are rarely many of them in the same application, and they tend to be high level. If you find yourself writing an infinite event loop at a lower level, odds are something is off (e.g. an infinite loop waiting for a reply packet over the network could do with a timeout)

I agree it's usually fine to use infinite loops for high level event loops, as it's usually easy to test that you can break out of them by triggering the right event (though sometimes it might be worth thinking about pathological cases if you can do something about them, such as whether something is wrong if there's no interrupt for a long enough period; and you can also prevent writing an infinite loop even for event loops by making the break condition whether a suitable "exit"/"restart" event has been received - very few things should be without one or the other of those two even if intended to run forever in normal circumstances).

The advice, I think, is more important for algorithms working on input data that may well lock up the application if the logic is wrong. E.g. a sort should eventually complete; if it takes more passes through an outer loop than the theoretical worst case for the algorithm, something is wrong (be it your implementation or your understanding of the worst case...). A habit of estimating worst cases and treating them as invariants to check and make assertions about can be useful.
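
To make that concrete, here's a minimal sketch of the habit in JS (illustrative only): a bubble sort that asserts its own theoretical worst case of n-1 swapping passes plus one clean pass.

    function bubbleSort(a) {
      const maxPasses = Math.max(a.length - 1, 0);
      let passes = 0;
      let swapped = true;
      while (swapped) {
        // treat the worst-case bound as an invariant: more passes means a logic bug
        console.assert(passes++ <= maxPasses, "exceeded worst-case pass count");
        swapped = false;
        for (let i = 1; i < a.length; i++) {
          if (a[i - 1] > a[i]) {
            [a[i - 1], a[i]] = [a[i], a[i - 1]];
            swapped = true;
          }
        }
      }
      return a;
    }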


Well, many document-processing tasks also require something like an infinite loop.

For example, parsing of an XML document naturally resembles an event loop, with each token being an equivalent of an event.

Although usually there is a natural end to this loop, so you have not "while(1)" but "while (not EOF)" or something, which helps avoid some bugs, but I believe a compiler turns it into an infinite loop anyway, and breaks it on these conditions.


>You think X might loop 100x? Put a limit of 10,000 ("absurd and impossible")...

This feels like a lot of line noise for those rare-until-you-hit-them bugs. Just capping a loop with a fixed iteration size deserves a comment as to the rationale. Do you do it everywhere? Just IO?

The idea has merit, but in practice, I am not sure where I would implement it. Plus, there would always be the temptation to remove that do-nothing code.


If it's so much "do nothing" code that you're tempted to remove it, perhaps it'd be better to move to a more expressive language. E.g. for Ruby at least, block arguments mean it's trivial to do this:

    def at_most(num)
      # the block is expected to `break` (or return) out once the work is done,
      # which exits at_most and skips the raise below
      num.times { yield }
      raise "Should never get here" # or "binding.irb"/"binding.pry" to drop you into a repl, perhaps contingent on a command line flag.
    end

 and do:

     at_most(5) do
        .. logic
     end
If the number isn't obviously absurdly high, I agree it'd be worth documenting why the precise number was chosen, but then that rationale also doubles as documentation of why you thought an effectively infinite loop was appropriate in the first place.


Yep. In essentially every language there's a reasonably efficient way to do it in one or two lines at most.

It's really not much noise, and it serves as (mechanically checked!) documentation of the expected size of a loop. In some ways that's an improvement, since it's just a loopy assert.
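
E.g. in JS, something like this (names made up; just to show the one-or-two-line shape of the check):

    // the bound doubles as documentation of the expected depth of the walk -
    // here a parent chain that "can't" realistically be more than a few levels
    function rootOf(node) {
      for (let depth = 0; node.parent; node = node.parent) {
        if (++depth > 10_000) throw new Error("runaway parent chain - cycle in the tree?");
      }
      return node;
    }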


> Plus, there would always be the temptation to remove that do-nothing code.

It's not 'do-nothing' code, though. Isn't the temptation just the same as the temptation to remove 'do-nothing' exception handling or 'do-nothing' unit tests?


> This feels like a lot of line noise for those rare-until-you-hit-them bugs.

Yeah, but if doing so can eliminate an entire class of bugs...

...you know something? I think I'm starting to see where those Rust guys are coming from.


As a counter-argument: why are we generally ok with allowing things to be infinite when they're expected to be finite? Precise bounds are obviously best, but sometimes it's not efficiently calculable or not worth the effort - don't choose the literal worst possible value there (infinity), pick something that'll stop itself when it goes insane.

And no, it's a tool. Use it where it makes sense.


Writing an editor from scratch is a fun exercise. I've written a few using codemirror, and can recommend the kilo-tutorial[1], which walks you through antirez's kilo editor (a 1000 line editor) and explains each change in steps.

[1]: https://viewsourcecode.org/snaptoken/kilo/


+1 for kilo. Another great approach is just to read the source code itself. It's only a thousand lines and it's well commented. https://github.com/antirez/kilo/blob/master/kilo.c


> Don’t waste time keeping unit tests constantly up-to-date if the code is working. Once the code is both correct and well-factored, feel free to disable or remove the unit tests so they don’t clutter your test runs with errors after subsequent refactorings.

Wait, what?! To each their own, but you'll pry my well-maintained collection of unit tests--which describe all _defined behavior_ in my application--from my cold dead fingers, thank you very much.


Take a second and consider the tradeoff here. I think you're underestimating the benefits of deleting tests. This doesn't mean that you should delete your tests, but you definitely should be able to steelman the argument that you should delete your tests.


Well, look at it this way--what benefits are there in doing so?

- The test suite runs faster

- Less time is spent maintaining the test suite

The argument wasn't whether tests should ever be deleted, but rather an encouragement to delete them as soon as you feel confident in your code. Personally, if a test is still relevant then I will maintain it until the behavior it's testing is no longer part of the application. And often that's a zero-sum situation, because now you need to test some new behavior.

A failed test is literally your test suite saying, "Remember this part of your application that you promised would behave a certain way? Well, surprise! It doesn't behave that way anymore." There is value, if nothing else, in knowing what the implications of a change are, especially as the system grows. So even if you're okay with a change in your application's behavior, I'd argue that tests are the best way to be aware of what those changes are.


A huge benefit of deleting tests isn't merely the reduction in maintenance but the reduction in "conceptual drag". Tests cause design decisions to get baked into code to a higher degree than if the same decisions were made in a codebase without tests.

Tests certainly can be valuable for verifying the scope of changes, but they're not the only way to do this. I could imagine supporting the process of delete-as-you-go testing if you could convince me that whatever you are doing to identify regressions works just as well as the test suite would.


I'm not sure you can separate "identify regressions" and "test suite" into different buckets. I'll concede that if you can, it'd be an acceptable replacement--assuming, of course, that it doesn't add more friction than it replaces.


I’m constantly surprised by the ubiquity of javascript.

Of all the languages I would choose for a desktop app, it ranks pretty far down, yet I am clearly in the minority.

But I’m not here to show my age or biases - I am genuinely curious. What am I missing that makes JS/node (or is that a redundancy? Is there any other choice than node for JS local development?) such an attractive choice for local development?

Over say java / swift / rust for static languages or python / clojure / julia for dynamic?


Most likely it is simply the fact that they use it for their day job. Tackling a tough new project from scratch is already mentally taxing, doing it in an unfamiliar lang doubly so.


Using JS/Node and React/Native, you can target absolutely every platform with a unified codebase.

There's essentially no technical debt in choosing these technologies because if you start with a web app and then want to make a "desktop app", there's Electron. Want to make a mobile app? React Native. There's even React Native for Windows and macOS for a more true desktop app with a native UI. You can also run your JS on the backend with Node, and even inside the database (PL-V8 runs V8 for stored procedures inside PostgreSQL, aside from obviously CouchDB or MongoDB and others). Heck, you can even write an app for the Xbox or build out on several IOT platforms using JavaScript!

Is there any reason to create the same thing in 4+ different languages? Are you going to rewrite your frontend in C#, Kotlin, and Swift? Are you going to deal with different language models and incompatibilities? Rewriting the same exact functions over and over, and unit tests for every single one, keeping in mind the subtle semantic/scope differences between each language--and then realize you still have to serialize/deserialize to JSON to talk to your backend and other services and just juggle JS data models between all these languages and ecosystems.

Choosing JavaScript and React today is the sensible default where you won't code yourself into a corner, is what I'm getting at. And you can also now choose TypeScript or ClojureScript in all such cases as well.


i have used nodejs for all of my personal projects for years (almost 10?) now. my reasons are:

* JS has the largest developer and runtime install base

* JS is very competitive with Go/Java on performance and the runtime (v8) actually improves on cpu/memory performance over time

* you get a UI that supports everything on all major platforms (the browser) for free

* the apps i write are generally IO heavy and so the async event driven model is very useful to have as a 1st class feature

* starting with an async event driven language/runtime makes it very easy to decompose your app if you want to split something out into a separate service

* i know js, and work with it professionally already, and it has a large market for professional dev work, so everything i learn in my personal projects increases my value as a professional


I too have the same sentiment.

I realize though there are not many frameworks for cross-platform development. If you do want a native language you'll probably have to use Qt. There are more out there, but most are based on native widgets, which have their own unique behaviors that make them difficult to abstract.

It seems to me that these electron web apps provide this 'leak free' abstraction.


The editor is based on Electron 22.1.0

https://gitlab.com/gushogg-blake/edita-release.git


Thanks. First thing I look for when looking at new editors.

Electron leads to high latency and slowdown once your editor grows big enough - even on 8+ core machines. We have so many Electron-based editors at this point, including the incumbent VSCode....


> Electron leads to high latency and slowdown once your editor grows big enough

Any large application that extends its feature set forever ends up with high latency and slowdowns, unless you structure things as extensions/plugins and load them on demand/allow users to disable/enable them.

It has nothing to do with Electron; the same thing would happen with an application written in C/Rust. It might happen faster with JavaScript, as JS developers tend to have less focus on performance, but it is less about Electron vs the world and more about who the developer(s) is.


>I start most of my algorithms with while (true)

That's gonna hurt

>When you’ve got a while (true) loop that searches a tree, it’s gonna throw you an infinite loop every now and then. Fortunately, the more times this happens the better you’ll get at lightweight debugging strategies, and the more familiar you’ll become with the algorithm and the data structures

I like your style, kid.


Reminds me of another style:

"Just do it, hack! I approach code like games, rush deep into room, trigger all NPCs, die, after respawn I know where NPCs are." --@bkaradzic


I always add a counter to the infinite loops to break out of them in case of an unexpectedly high number of iterations


Error out or something? Otherwise it would seem the program is in a weird state.


Error, or do the sensible “no results found” for example in the tree search.


... and then what?


Like the article said. Break out into a debugger and figure out why it got stuck.


I'm working on a new programming language and every point resonates with me on a spiritual level. One thing that helped me a lot was writing more integration tests at the outer edges of the application, as opposed to unit tests of individual modules. Usually the logic doesn't change between rewrites, so keeping a somewhat stable interface keeps me from breaking stuff in between grand rewrites.


I use my own editor as my daily editor, both for code and most other text files. A few observations on the article:

> Use it early

This was key to me. If you're going to wait until you have a perfect editor to use it, you're better off spending your time on something else, because it'll take a really long time to get there if you ever do.

I started working on my editor because I was endlessly tinkering with my emacs config and realised I could write an editor in fewer lines than my config...

You need to have a reasonably clear picture of what an MVP looks like to you. But having as a goal to use it early can also drive design.

E.g. in my case the first thing I did to enable this was to have the editor serialise all open buffers to JSON and write a dump regularly. If I didn't expect to use the editor until it was stable, I might not have. But I came to like being able to start the editor after a reboot and have all of my buffers as they were.

I later also moved the buffers into a separate process using DRb (the editor is in Ruby). Combined, that made it near impossible to lose changes, as DRb makes the backend near crash-proof (exceptions are forwarded to the client), and the checkpoints/dumps deal with the last little piece of risk.

The day that was finished, I switched 90%+ of my editing to my own editor even though it was still buggy and lacking in functionality.

As a result I as of writing have 2199 open buffers, some dating back a couple of years. I kill some now and again - e.g. if I open a particularly large file, or something sensitive - but by default I just stopped caring as the dump is only 66MB.

> If you’re using the code editor to write itself, the most pressing bugs and missing features will make themselves apparent to you for free.

This is a big deal with using my own editor. I have a "roadmap" of things I like the idea of, but "what is painful right now" trumps all of it, because I don't have any users (nor do I want any - I'm slowly splitting functionality into gems that I'm happy to deal with issues with, but the github copy of the repo of the editor itself is wildly out of date and heavily dependent on my personal setup, so likely won't even run for anyone else)

It also means I can follow the "refactor when it seems like a good idea" advice without caring if the editor is half broken for a while. I have bugs in there that are clear regressions that I'm ok with working around but that I'd never allow myself to commit code for if I was writing this with the expectation of other people using it.


This is really interesting! Is your editor written in Ruby then? Is the code hosted somewhere public?

I am also curious what advantage you found in putting the buffers in a separate process. Is it just that a crash in the front end doesn’t cause you to lose changes? Why not just save at regular intervals instead? Is the idea to persist undo or file revisions?

I am also curious what you thought was a limiting factor of Emacs so that you found it easier to write an editor from scratch.


It's on github [1], but the version on github hasn't been pushed to in ages and is a mess because, well, I haven't really cared about making it work for anyone else, so not sure how useful it is to look at. It's an ugly mess, and frankly the one downside of doing a project like this is that it's a bit embarrassing to show people - since I'm the only user I only clean things up when the mess starts hampering me, and since the codebase is so small, that's rare (my current iteration, not the Github one, is about 3.3k, with about 1k of that being syntax-highlighting stuff I'd like to try to get upstreamed to Rouge).

I started out with Femto [2] and ended up quickly gutting it (there's next to nothing left now), and as an example of a minimalist Ruby editor, Femto is perhaps a better starting point - it's certainly both cleaner and smaller.

I've started breaking a bunch of functionality into gems (termcontroller, file_writer, ansiterm, editor_core, keyboard_map) of various stages of usability for others, and my goal is to over time eventually be left with the editor as a tiny shell around reusable components + my personal config, but as per this article, I plod along and fix the things most pressing to me personally and nobody else so who knows when that will happen.

As long as it keeps being useful to me, that's what matters. In that respect it's also a meditative experience of sort that takes my mind off work and having to work in a more professional manner.

> I am also curious what advantage you found in putting the buffers in a separate process. Is it just that a crash in the front end doesn’t cause you to loose changes? Why not just save at regular intervals instead? Is the idea to persistent undo or file revisions?

I also save at regular intervals, so you're right that if that was the only consideration I probably wouldn't have bothered.

But the editor is a terminal application, and I use a tiling window manager (bspwm), and so I figured that if I could spawn another editor instance and have my wm open it in the right space and with access to the same buffer, I didn't need the editor itself to have any understanding of windows itself. Here's "split_vertical" that opens a second view of the same buffer:

    def split_vertical(buffer_id)
      system("split-vertical 2>/dev/null term -e e --buffer #{buffer_id}")
    end
[EDIT: Should also clarify here that "term" is a wrapper for whichever terminal I currently favour, at the moment mlterm, and "e" is an alias for my editor; if I ever were to try to make this usable for anyone else, there'd be a whole lot of cleanups to make - the biggest concession towards this I've made has been to recently move most dependencies on my environment into a separate file]

split-vertical is a helper (I split out a handful of things that depends on external tools) that just runs "bspc node -p south" to get bspwm to halve the height of the current window and "swallow" the next window being opened into the space freed up below the active window, and then exec's its argument. So meta-2/meta-3 works roughly like ctrl-x+2/ctrl-x+3 in Emacs (EDIT: fixed; already forgetting the Emacs keybindings...), except the frames end up in their own windows, running their own client.

Had remoting the buffers been a lot of work, I wouldn't have, but the DRb-specific part of the code is <100 lines, and the combined effect of the resilience (if an exception is thrown in the server code, I get thrown into a Pry repl in the client) and the extra abilities it enables makes it worthwhile.

Similarly, I've avoided implementing my own directory handling, and just farmed that out to a wrapper that currently uses rofi, same for theme selection (for syntax highlighting), buffer selection, etc.

The minimalism of delegating all of these things elsewhere appealed to me.

> I am also curious what you thought was a limiting factor of Emacs so that you found it easier to write an editor from scratch.

Strongly disliking Lisp was high on the list (I like the concepts; hate the syntax intensely). Also, it wasn't so much a limiting factor, as realising that my Emacs config file was larger than toy editors I'd played with in the past, and a minimalist bent making it appeal to me to go for something smaller. Also seemed fun (and has been)

[1] https://github.com/vidarh/re

[2] https://github.com/agorf/femto


Just wanted to say this is fascinating. Good luck with it all. For the 2 major features - short demo clips would go a long way.


Regarding the comparison table in Code Patterns:

https://codepatterns.vercel.app/

There’s one column missing: whether the query is purely syntactic or has semantics as well.

In particular, JetBrains allows not only searching for all _expressions_, but also, eg, constrain expressions to be of a specific type, something which isn’t possible with just a parser:

https://www.jetbrains.com/help/idea/search-templates.html#ty...


This reminds me of the time I hacked together an editor in assembly. I’m all for writing “just another xyz”; however, after 2 decades of development I’m tired of the endless stream of todo apps/editors/etc. on the web. I am conflicted though. On the one hand, reimplementing something existing is a great way to learn coding, SDKs, APIs, etc. On the other hand I feel it’s limiting imaginations, and I would love to see something new and unique. How many “better mouse traps” do we need?


The CodePatterns piece deserves serious consideration as its own project; it could be the basis for a generic refactoring engine. It would be great to have some kind of refactoring scripting language for e.g. updating clients of your library when you make a breaking change to it. Just include the update script and your clients will have a painless upgrade to the new library version.


Turbo C for DOS was written in 2 1/2 months. Now it takes almost 2 years to write a code editor? This is not a ding towards the OP. It is an awareness that something has gone fundamentally wrong with development.

VSCode and modern IDEs now take 1-2 GB of RAM for medium projects. What has caused so much resource usage?


Congrats for building something new. I too am trying to build a note-taking app on top of Tauri. I am still extremely early, but stories like this keep me going. Keep up the good work!


I've fantasised idly for some years over building an editor around something very like the AST Mode, so I'm looking forward to trying this out.


Somewhat unrelated, but I fantasised about creating an assembly language editor which uses the binary executable to save code and some kind of attributes or whatever to save comments, identifiers and so on. It won't work with macros, but it's an interesting idea IMO.

Might actually work with Java-like language (JVM bytecode is very close to Java AST).


(mods forgive me for double-posting this link!) https://twitter.com/PaniczGodek/status/1463860646253060104

(Scroll up in that thread for AST inspiration)


So refreshingly pragmatic.

Some of these ways of working will be brutally beaten out of you in most dev teams unfortunately, whether they're effective or not.

Yay cargo culting!


The code of the editor is in JavaScript and not in TypeScript. Which makes me wonder how hard it gets in terms of regressions as the codebase becomes progressively larger.

I love vanilla JavaScript, but I always try to switch to TypeScript if the codebase becomes larger than 100-200 lines of code. Otherwise I may regret it, and I know that too well from previous experience.


Using JavaScript in 2023 is a code smell. You get a massive bug catcher for free with TypeScript, and that's only the most superficial benefit.

When I hire devs, I don't care much about their previous stack experience, but if they've actively chosen JS over TS, it tells me their philosophy/priorities are completely backward.


> You get a massive bug catcher for free with TypeScript...it tells me their philosophy/priorities are completely backward.

This is a superficial take. TypeScript is not free. You are adding a compilation step that takes time, slowing down feedback in certain workflows, as well as a rather large dependency. It certainly can be worth asking why they chose JS given some of your perceived benefits of TS but not using it doesn't mean that their philosophy is backwards.


> This is a superficial take.

I've been using TypeScript in server-side, client-side, and hybrid projects for 8 years. Since then, I've inherited JS projects and felt like I was being forced to use a rock when I was used to having hammers. My take isn't superficial.

> You are adding a compilation step that takes time

The entire project is compiled only when testing or deploying. As far as testing goes, having static types dramatically reduces the number of times you have to run a program before shipping it. The net is that there's less time waiting for builds, not more.

And anyway, most projects compile in 2 seconds or less. There is a huge net time saving no matter how you look at it.

> slowing down feedback in certain workflows

This is incredibly vague. Which workflows? And why does it matter in those workflows?

The majority of feedback given by TypeScript is during the editing process, and that feedback is instantaneous because it comes from the language server and doesn't require recompiling.

> a rather large dependency

No. It's a dev dependency and doesn't ship with your code. You have to install it locally at some point, so I guess if a few MB of space is important to you, it's a problem. Otherwise it isn't.


I agree 1000%. I am baffled by people still saying that the cost of TypeScript is not worth it. 1 unit of effort in setup at the start of a project for 1000 units of effort saved every day through lack of bugs, structural integrity, extensibility, intellisense help, and so much more.

O(1) setup cost for O(n²) productivity gains.


Just a heads-up: these days the TypeScript tooling is actually ergonomic with esbuild and Vite. If you still use JavaScript and know about this tooling, I am very curious why you would choose to do so, because esbuild will literally compile AND bundle all your JS before you can say "compilation".


What if you're not building a browser app, but a server-side NodeJS app? Sure, you can add a build step on the back end, but that's not free, as it comes with setup and having to run a transpiler before running your code.

Also, using esbuild to transpile your TS will not actually check the types. It just transforms it to JS. For the type checking, you really need the official TypeScript compiler, which is slow and bloated.


Yes, you need an extra build step, but really I cannot overstate how simple this step is. There is no webpack config hell, there is no giant dependency chain, there is no long wait for anything, there are no magic file globs and mimetypes. In 10-100ms, esbuild (a single small dependency) will take 3 command line arguments, grab all your linked TS from a single TS entrypoint file, and spit out a single (possibly minified) js file that you can immediately invoke with node. Instead of node bla.js, you do esbuild && node bla.js; you don't care what happens inside.

Indeed it will not check your types, but as stated elsewhere your IDE can do this for you. This is actually beautiful because it allows you to get correctness feedback on your code but your code will never fail to run due to arbitrary type complaints. If you later feel like you want the full correctness of TS, maybe during CI/CD, you can run the slower compiler and get a gatekeeper telling you if you are doing something fishy.


> What if you're not building a browser app. For a server side, NodeJS app? Sure, you can add a build step on the back end, but that's not free, as it comes with setup, having to run a transpiler before your code.

It is free if your IDE uses the TypeScript language server and gives you errors integrated with your code editor. You don't have to transpile if you don't want to. You can, as you said, just use esbuild to strip away the type information.

> For typescript, you really need the official library which is slow and bloated.

See above. You absolutely don't. And it's not slow or bloated at all. It has incremental builds, which are essentially instantaneous after the initial build.


deno consumes typescript.

IMO every JS engine should just consume typescript (without type checking). AFAIK TypeScript is designed to be easily consumed by JS engines. It's a pity that V8 ignores that property.


> deno consumes typescript.

I don't think it does, it comes with a TS compiler built-in, and "can run TS" by compiling said TS to JS and evaluating that. Deno is still using V8, so unless Deno has changed recently, it does not "consume TS" and no mainstream JS engine does.


As a user, I don't care how Deno works under the hood. All that matters is that I don't have my own build step.

There likely will never be an actual TypeScript runtime because that doesn't make sense as a concept, and it's against the philosophy of the project. TypeScript is metadata for devs.


> As a user, I don't care how Deno works under the hood. All that matters is that I don't have my own build step.

That's fine, doesn't make the truth any different that it doesn't actually read TS without compiling it, as parent said.

> There likely will never be an actual TypeScript runtime because that doesn't make sense as a concept [...] TypeScript is metadata for devs

It does make sense to have a runtime for TypeScript directly. There are a bunch of optimizations that can only happen once the runtime understands the types of the values, and if you keep switching the type of a variable's value it'll be able to do fewer of those optimizations.

So you could probably end up skipping a lot of inference if the types get shipped to the runtime along with the rest of the source, vs just the source.


Needing constant, instant recompilation is itself a smell that indicates either a lack of confidence in the usage patterns of the apis you are depending on (types help with this), or an immature understanding of the tech stack in general.


There are no code smells in personal projects, but this isn't one anyway. Every text editor I've ever used has had a single button inside that editor to change the code in the editor to what you just typed in. For an author who has written a blog post on how they eschew unit tests, this is probably the most important feature for one.

A one sentence comment about how the author is bad at programming says more about you than the author. People have been writing text editors like that for decades. They aren't bad programmers.


I think that switching at some point from JS to TS is a great way to find a lot of bugs and edge cases in established functionality that already works well enough.

I think starting in JS and then migrating to TS is a very good strategy that lets you develop fast, but once you figure out what you needed to write, lets you get additional assurances and conveniences about what you wrote.


I don't think any of this stuff really matters for 1 person projects. The goal isn't to have multiple developers adding features in parallel with each other, so if the author understands it, it's more than likely OK. I wouldn't pick that language, but they wrote Roller Coaster Tycoon in pure assembly and wrote Emacs in Lisp 40 years ago (and we're still using the code today), so this likely won't be a project-killing decision.

I would take Emacs Lisp over Typescript any day, but that's just me.


As long as you're disciplined it probably doesn't matter, but I think it takes significantly more effort to ensure correctness with JS vs TS.


Is using mypy the TypeScript of Python? Python desperately needs an analogue to your scenario.


It might be the best option you have, but it's not even close. nim would be a better option.


Very cool project! The AST mode is super interesting. I also code an editor, and using it from the beginning is really the key.


Projectional editing (and storing the AST, not the projection) are both grail-level items for me. I’m traumatized by the lack of attention mbeddr got; while it was active / maintained it felt like the next big thing for C-family development, and the projectional editing (plus the language concept stacking) were the key features in my eyes.


I'ma use this as an opportunity to plug my own AST mode project: https://twitter.com/watware/status/1607026712755458055


The unit testing section is interesting because I feel like this may not be the perfect approach to tests, but it is not a bad method. Too many times I see unit testing being used as the holy grail of debugging, and then half the tests get ignored because "Oh, that's a known issue."


Higher level functional tests are nice for that reason. Yes, they won’t pinpoint a bug instantly but they survive refactors nicely.


If you use micro services properly, unit tests can still call APIs and survive refactors.


Being in a quibbling mood, and remembering stories of people who would reboot a device using their own drivers and no OS, I object to "ground up".

No sand or gallium containing stone was transmogrified in this story.

"I wrote a code editor from the ground up. I started with notepad and stopped there."


He didn’t re-evolve from primordial soup: disappointing.


Anyone got this working on macOS? Out of the box, I get:

> App threw an error during load

> Error: --start-args argument required to separate command-line args from node/electron paths

> at filters (~/progs/edita/build/electron/mainProcess/utils/getArgs.js:13:10)

after `npm start`.


same on windows


Should be fixed now. Thanks!


Perfect — thank you!


I got a kick out of the Agile Manifesto parody on the main page:

>>

- ...

- Perfecting smooth and ergonomic interactions over shipping quickly.

- Providing rich and configurable features over simplicity or minimalism.

- A visually pleasing and quiet interface (over keeping up with design trends).


What a lovely story. Makes me want to write an editor.


I tried to do the same with my iPad but the limitations of NPM have made it a bit difficult, def past the first 12 months


The off-center plus icon in the sidebar is bothering me more than it should....


Is there anything like AST mode for neovim? Sounds like a cool feature.


Because it is basically undiscoverable (yet highly valuable!) I’ll take this moment to link to Panicz Godek’s “or this this thing by…?” Twitter thread in which he archives links to anything resembling an AST editor / structural editor: https://twitter.com/PaniczGodek/status/1463860646253060104


Here's a video where he shows some features: https://codepatterns.vercel.app/


For a small hobby editor, this is fine. For anything serious, much of this advice will not scale at all.


Most serious editors started as small hobby editors, including Vim and Emacs.

What historically doesn't scale is trying to get everything "scalable" and "extensible" from the get-go - most ambitious projects like that get abandoned because they get the initial designs wrong and it's too hard to change, or because there's just too much effort to get to a usable state...


Not convinced. I worked on a project with some friends that got to about 20kloc and then collapsed under its own weight because it became impossible to change anything without breaking something else. While there's a wrong way to design for scaling or extensibility, development practices that rely on holding the whole thing in your head will hit a brick wall sooner or later.


It's fascinating reading posts like this from people who have actually put in the work on 20k+ LOC projects with multiple people and those who just repeat a few articles or blog posts they read.

The difference is stark.


I find the comment a little rude, as if implying the grandparent poster (me) "just repeat a few articles or blog posts they read", whereas the anecdotal parent comment about a 20k+ project they've worked on shows the real experience.

I've worked in multiple 20k+ LOC projects with other people and everything (are teams of 10 or more people good enough for you?). And 20k+ is not even that big; it's not like it offers any great insight regarding software development.


I don't think the size of a project is that important if you're looking for insight into software development. There's certainly poor ways to go about developing small and large projects and most of the insight is to be gained from looking at the mistakes (and lack thereof), whether it's a 100 line piece of glue you wrote which became unreliable and caused major issues or if it's some 100k line project which became increasingly difficult to maintain.


There's a difference between making something a scalable and extensible product from the get go, and writing unmaintainable code.

You can write simple and maintainable code for a simple product which isn't scalable or extensible.


The recent Randall comic is quite fitting:

https://xkcd.com/2730/

I rather think it has a lot to do with momentum. If you think hard about every step and every change, you will be so slow that a competing project that just focuses on shipping useful features will get so much traction that they overcome their wrong design decisions with more manpower, and after a while so many people use it that they continue to use it despite its flaws.

Otherwise we wouldn't have JavaScript as the most widespread language, for example, with monumental efforts invested in fixing the initial flaws and making it do things it was never designed for.


The question is 'what is serious?' Generally one can approach every piece of code with the mindset that if it fails the sky will fall, or, at the other extreme, with a 'let it fail' philosophy.

I like to take a mix of the slow and fast approaches. While some cases demand test-driven development, in other scenarios the test cases can follow user demand. I like to build test coverage slowly, depending on the most-used parts of the code. So the coverage catches up slowly, but at the same time I am not spending time on test cases for things that don't get used at all.

This means I would prefer releasing features in small batches, and as the features start being used, I start improving the coverage. While this may not work for all teams or environments, it is one approach to building early-stage products, which is mostly what I do.


Most if not all of the points are not applicable when medical software and firmware is involved. I could never play so fast and loose at work.


Obviously when you want to scale things up, especially to webscale, you want mongodb, but I rather like the approach of bootstrapping the minimum viable product and then actually using it in order to design the rest.

The world would be a much better place if that was the case.


Perhaps OP should try piping their code to /dev/null, which is fast in web scale. Does this editor support sharding? Shards are the secret ingredient in the web scale sauce. They just work.


Thank you for reminding me of this gem https://youtu.be/b2F-DItXtZs


> when you want to scale things up, especially to webscale, you want mongodb

What is it about coffee in your actual nostrils that wakes you up more than the full cup itself?


> What is it about coffee in your actual nostrils that wakes you up more than the full cup itself?

Not the person you're replying to, but as a food and beverage expert, I'm confident most beverages will yield more powerful emotional and physical responses and quicker absorption of many chemical components when taken nasally.


“Do things that don’t scale”

http://paulgraham.com/ds.html


I don’t want to diminish the work the author has done. For a nice hobby project, sure hack away!

But if you are looking to give this to the world, the lack of planning and emphasis on testing is less than ideal.

It wouldn’t hurt to play with some editors and write a spec so data structures and algorithms can be planned appropriately. After all, there are tons of great examples to get inspiration from!


I only unwrite code.


So many licenses for other people's software there, you sure this is from scratch?


I'm sad to see comments like this.

When I was starting out years ago, my passion project ended up on HN with a very dismissive comment similar to yours - and it almost turned me off of my passion project of 4 years.

I looked at their package.json, they pretty much pulled in the standard library (if there ever was one) of JS. Like lodash and that's about it. Not once did they claim "zero dependencies" or "running on my own OS" so your comment is not only dismissive, but wrong.

I for one would encourage people to experiment with building new editors, not dismiss them. Otherwise we end up in a world where editors are built only by large corporations.


> Not once did they claim "zero dependencies"

The title literally says "Over the past 21 months I've written a code editor from the ground up". The linked blog post is titled "How to Write a Code Editor from Scratch in 4 Months".

I don't know what you would consider "from the ground up" or "from scratch", but in my view that means no external libraries, if something isn't in the standard library I code it myself. That means that someone only needs a compiler and the standard library to build the application, rather than having to either use a dependency manager or (worse) track down and build the dependencies themselves.


> So many licenses for other peoples software there, you sure this is from scratch?

If you wish to make a text editor from scratch you must first invent the universe.


"'Abort, Retry, Fail?' was the phrase some wormdog scrawled next to the door of the Edit Universe project room. And when the new dataspinners started working, fabricating their worlds on the huge organic comp systems, we'd remind them: if you see this message, always choose 'Retry." – Bad'l Ron, Wakener, "Morgan Polysoft"


From scratch means that if I am on a fresh install of whatever OS they are targeting and have a compiler (compiler, not build tool, so rustc, not cargo) of the language they use, I can build the project without having to install any dependencies.


Is developing software "from scratch" defined somewhere?

Seems like you arbitrarily draw the line. rustc but not cargo?

Even if you don't use any libraries, cargo is a useful tool. And if you reject tools, what editor/IDE would you code in?


I'm not saying anything about coding, I'm saying that if you code a thing in whatever language (let's say Ada) and I have a fresh install of an operating system (let's say Debian) and a compiler for the language you use (like the GCC Ada compiler) I should be able to compile your script into a program without having to install additional dependencies (like Alire).


I'd say that's more describing "without (library) dependencies".

I'm not in the construction business, but I guess if you build a house from the ground up, that means you're not building on top of some existing structure, but you might still be using prefabricated elements.

I'd put the line here: whether ground work is used that was laid for editors, as opposed to general programming. If the thing that they are making (an editor) does not contain editor-specific prefabricated parts, then all the editor-specific work was still made from the ground up.


Real programmers use butterflies. https://xkcd.com/378/


So let's see, first, Build some vacuum tubes, Invent Assembly, Write a kernel, Write an OS, Build a CRT monitor, Upgrade vacuum tubes to Si chips, Write a DE, Build the colorspace software, 60 years later post on HN, I built an editor from scratch.


No, why wouldn't he use good tools others have built?

That's just silly.


I can write an editor using tools others have built, too:

  apt install emacs
Or should we, perhaps, consider “from the ground up” and “from scratch” to mean something?


Your argument is reductio ad absurdum.


My argument is that words have meaning.


And my argument is that meanings are fluid and "from scratch" is something a reasonable programmer should understand in this context.

For example, for me someone writing a new editor not using advanced components such as existing programming editor components counts as "from scratch".

They don't need to use magnetic needles to shift individual bits on HDDs.



