Why LSP? (matklad.github.io)
373 points by afdbcreid on April 25, 2022 | 247 comments



> Contrast this with Emacs and Vim. They just don’t have proper completion as an editor’s extension point. Rather, they expose low-level cursor and screen manipulation API, and then people implement competing completion frameworks on top of that!

This is just wrong: https://vimhelp.org/insert.txt.html#compl-omni

The first occurrence of "omni" I can find in the git log is this commit from 2012:

https://github.com/vim/vim/commit/5d3a8038b6a59e6f1b219f27ec...

which means it's even older than that.


I think he has a valid point even if he didn't state it in a technically correct manner.

Having LSP support was a godsend for things like Emacs and NeoVim.

I expect that very long-time users and hardcore adopters of these programs (people who actually regularly spend time programming their editors in Elisp/Vimscript/Lua/etc.) are not going to see it, because they have their own tailor-made custom workflows that work for them and would be disrupted by switching to LSP.

But for everybody else LSP has made things SOOOO much easier and better. It's ridiculous how much better things work nowadays.

Years ago I would spend weeks farting around with this or that Python plugin to get something working really well in Emacs or Vim. There was a whole mess of different frameworks and plugins, dependencies, and scripts that any user had to wade through. A lot of "works for me" type posts. Posts implying that Icicles was the best thing you could ever possibly use in Emacs, and things of that nature.

Nowadays I can just install Doom-Emacs, enable the LSP support for whatever language-of-the-day I happen to be looking at... run doom doctor to tell me what dependencies to install and have actually really good language support up and running in about 20 minutes of work and reading. Another 2-3 hours to get used to the key bindings and basic functioning and I am off to the races.

It is actually really nice.


But that isn't because vim or emacs lacked an API for completions. The hard part isn't displaying completions, it's figuring out what good completions are.


I want to bang on about this further: this is a plainly false statement. Emacs's completion-at-point (C-h C-f) was introduced in 23.2, in May of 2010. company-mode has commits going back to 2009.


That's his point: people use company-mode because Emacs doesn't have a built-in "modern" completion UI. completion-at-point uses the minibuffer, right? If so, I wouldn't call it a real alternative, especially without minibuffer plug-ins like ivy and co.

I don't think the Emacs situation is bad at all, but I agree 100% that it needs third party plug-ins to be anything close to what people expect of an editor in 2022.


His point is poorly expressed in that case. Let's look at the quote again.

>Contrast this with Emacs and Vim. They just don’t have proper completion as an editor’s extension point. Rather, they expose low-level cursor and screen manipulation API, and then people implement competing completion frameworks on top of that!

People don't implement competing completion frameworks on top of "low-level cursor and screen manipulation API". There are more high-level APIs included in Emacs, such as completion-at-point. To quote company-mode's website:

>The CAPF back-end provides a bridge to the standard completion-at-point-functions facility, and thus works with any major mode that defines a proper completion function.

To me the author misrepresents what Emacs has built-in and what package writers use.

Technically company-mode isn't third party; it's part of GNU Emacs. It needs to be downloaded, yes, but it's not third party. Emacs needs a package recommendation engine (I know VS Code has this, awesome!).


Thanks for the correction! Indeed, I am not super familiar with how things work under the hood, I just observed a proliferation of completion frameworks on top!


This was perhaps the case in the very beginning, but everything is switching towards the built-in completion interface as a core, where you can extend both ways: provide elements to complete, or provide ways to perform the completion.

What you get with ivy/company/etc. is different ways to expand/handle/show completions, and they're all still relevant.

I'm using different frontends depending on the mode that I'm in. I personally dislike pop-up style completions, so I'm using a mixture of completion buffers (the boring Emacs built-in ones), ivy, and ido instead.


> I wouldn't call it a real alternative, specially without minibuffers plug-ins like ivy and co.

Why is that? Emacs comes built in with modes like `fido-vertical-mode`, which are arguably more powerful and modern than `company-mode`. They're just not turned on by default.


> To get a decent IDE support, you either used a language supported by JetBrains (IntelliJ or ReSharper) or.

VS, Eclipse, Netbeans, Delphi, C++ Builder, KDevelop, QtCreator,...

> Notably, the two-sided market problem was solved by Microsoft, who were a vendor of both languages (C# and TypeScript) and editors (VS Code and Visual Studio), and who were generally losing in the IDE space to a competitor (JetBrains)

Nope, the IDE space on Windows has always been owned by Microsoft, ever since Borland dropped the ball with their management crisis over where to go next.

VSCode and LSP were created out of Monaco under Erich Gamma's stewardship (yep, one of the four authors of the patterns book, and from the original Eclipse team).

"VS Code an Overnight Success… 10 years in the making"

https://www.youtube.com/watch?v=hilznKQij7A


> VS, C++ Builder, KDevelop, QtCreator

My understanding is that the state of semantic IDE features for C++ was pretty miserable until CLion and clangd came along.

That is, sure, there were a whole lot of "literally" IDEs — gui wrappers around editor/build system/debugger/compiler combos. In the post, when I say IDE, I focus on the "refactoring rubicon" connotation of the term, not on the literal meaning (https://martinfowler.com/articles/refactoringRubicon.html).


Nope, there were plenty of VS plugins like Visual Assist, and the first IDE to offer something like LSP was a C++ one actually, Energize C++.

Here is the documentation for Cadillac, https://dreamsongs.com/Cadillac.html, and the corresponding demo of their XEmacs-based environment from 1993:

https://www.youtube.com/watch?v=pQQTScuApWk

Refactoring is one feature of IDEs, not the whole package.

Still, here is your refactoring rubicon in Visual Age for C++ (1999), http://www.edm2.com/index.php/VisualAge_C%2B%2B_4.0_Review


Maybe I'm "misremembering", but there was a version of SunStudio in the very late 90s that had "intellisense" style help for C or C++. It was really basic, but it was there.


That was SunForte, which was based on NetBeans.

https://www.developer.com/java/suns-forte-developer-7-simpli...

Still available under Oracle, now rebranded as Oracle Solaris Studio.

https://docs.oracle.com/cd/E24457_01/html/E21989/gkofj.html#...


I've used KDevelop and QtCreator to write C++, and they had pretty good auto-completion and semantic information way before clang came along.

For example, these blog posts from 2009 show that KDevelop already had semantic highlighting, smart auto-completion, quick fixes and whatnot: https://zwabel.wordpress.com/

KDevelop had its own C++ parser at the time, and it was quite good. Same for QtCreator. They both moved to use libclang much later.


Visual Studio was pretty decent for semantic assist for C++ at least as far back as '05 when I was doing my undergrad. It was night and day better than etags and whatever was available in the Linux world at that time, at least as far as I was able to get it configured.


Even longer than that. You could install Visual Studio 6 on Windows 95 and get intellisense/debugging that's quite good. A lot of newer languages still haven't caught up.


> VS, Eclipse, Netbeans, Delphi, C++ Builder, KDevelop, QtCreator,...

Speak for yourself. Editing C# is much better in Rider than VStudio.

Eclipse was a competitor, but I dread having to use it for anything nowadays. CLion just doesn't work correctly, yet.

For Rust VS Code is better, but only because of awesome rust-analyzer plugin.


Not keeping up with the times with VS?

Eclipse is still my go-to; IntelliJ wants me to buy CLion + IntelliJ licenses for features Eclipse does out of the box.

VSCode isn't an IDE, plus Rust is the new kid in town.


> VSCode isn't an IDE

Add enough plugins and it will be. Rust being new is one part; the other part is that JetBrains really insists on their own plugins vs LSP.

I get Rust errors in CLion on programs that run normally. Bugs that rust-analyzer/cargo don't report.


Bolting 200 things to a bicycle doesn't make it a truck, and no amount of addons can turn VS Code into an IDE. The I means Integrated, and VS Code with plugins and LSPs is anything but.


It has a terminal, debugger, completion, git/source control clients, refactoring tools (more or less depending on the language, obviously), extensions... At what point does it become integrated enough? It's not a bicycle to begin with, it's a small car.


It becomes integrated when the tool internally works on an AST of a language and not on a textual buffer. That’s what makes it an IDE, that the basic text editing features and the compiler are both integrated.

This allows significantly more functionality than any type of LSP + editor ever can. The refactoring functionality is what makes an IDE into an IDE. I use every single feature of IntelliJ every day, and I actually would prefer even more refactoring functionality. LSPs can’t come close to this.

LSPs can compete with classic Visual Studio, but there’s a reason JetBrains started with ReSharper, replacing the refactoring engine of Visual Studio.
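
To make the "works on the AST" distinction concrete, here is a minimal sketch (not from the thread; it uses TypeScript's compiler API purely for illustration, nothing IntelliJ-specific). A structural query sees identifier nodes, so an occurrence of the same name inside a comment simply doesn't exist for it, unlike a textual search:

    // Minimal sketch: querying the AST instead of the text buffer.
    // A real IDE rename would also consult the type checker to resolve
    // symbols; this only shows the structural (non-textual) view.
    import * as ts from "typescript";

    const source = ts.createSourceFile(
      "example.ts",
      "const total = 1; console.log(total); // total",
      ts.ScriptTarget.Latest,
      /* setParentNodes */ true
    );

    const hits: ts.Identifier[] = [];
    const visit = (node: ts.Node): void => {
      // Only identifier *nodes* match; the "total" in the trailing
      // comment is invisible here, unlike in a plain text search.
      if (ts.isIdentifier(node) && node.text === "total") hits.push(node);
      ts.forEachChild(node, visit);
    };
    visit(source);

    console.log(hits.length); // 2, not 3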


> That’s what makes it an IDE, that the basic text editing features and the compiler are both integrated.

Rather arbitrary distinction that doesn't even hold true for all IDEs out there.


By this definition neovim is an IDE just because it has native support for treesitter. Yet I don't think it is one.


Can you operate on AST elements? e.g., if you move a selected line of text up/down, does it move the text itself or does it move the underlying AST elements and adapt the text to the changed AST?

Can you select a function and drag and drop it into a different file or class and have the code automatically adapt, add/remove parameters, imports, etc as needed?

If you have two functions called doA, in different modules, and you have a file that imports doA from module 1, and you copy that doA call and paste it in a new file, does the IDE correctly add the import as well, or does it just paste the text and you have to import manually?


hey I recognize your over-attached tone and username, do you ever write comments about things rather than your lame preferences? Last I remember you were making some point about Kotlin being created to sell IntelliJ.


As far as I can tell, the point about Kotlin is correct.


[citation needed]

I don't remember having any opinions on Kotlin. I've never seen it used in production, just tales.

Are you sure you don't have me confused with someone else?


it seems to me I replied to the right comment, which is not yours.


Yeah, easy to lose track of threads on mobile.


Do you want a refresher from JetBrains blog or do you manage to find it yourself?


Just because editing C# in Rider is better than in VS doesn't mean it isn't decent in VS.


It's not about being decent, it's about being straight up better. It does get a bit sluggish, but that's more on Java being memory hungry.


But the parent comment was about "decent":

>> To get a decent IDE support, you either used a language supported by JetBrains (IntelliJ or ReSharper) or.

>VS, Eclipse, Netbeans, Delphi, C++ Builder, KDevelop, QtCreator,...


What exactly is better in Rider?


Autocomplete, refactoring, UI, support for newest features, etc.

Anecdata time. I was working on codegen in C#, so I wrote the codegen in netstandard2.1. That is, until a team member who used Visual Studio complained it didn't work for him. So I had to rewrite the thing in netstandard2.0 (shudder) and it worked.

I haven't seen anyone that wanted to go back to VS after using Rider. I did use it for a while, but after Rider it just seemed like an inferior version.


Can you be more specific?

How is autocomplete better in Rider? Faster at coming up with suggestions? More helpful suggestions? IntelliSense and IntelliCode in the latest VS versions are pretty decent...

Performance in VS has improved since going 64-bit.

What refactorings are unique to Rider?

UI? What features?

Your statements smell of preference and sound vague.


> What refactorings are unique to Rider?

I know Rider/ReSharper can fix variable names that are mentioned in comments, which VS can't do (and it gets on my nerves when I don't have ReSharper).


Of course it can. It's built-in (see image below); VS can rename variables contained in strings and comments.

Also, it's been a long time now since ReSharper was "needed" to work in VS. ReSharper also bogged down VS a lot.


Please.

https://i.imgur.com/mD6Z4zO.png

Idk if it comes with VS out of the box or comes from a free extension called 'Roslynator', but either way it is there.


It's built-in, no need for extensions.


> smells of preference

Because it is. My preference; I never claimed it wasn't one. But I don't know people that would recommend anything else. VS Code is faster but has a poorer refactoring/debugging experience. It was just a subjective consensus.

I haven't timed the IDEs nor do I intend to in near future.

I think overall the helpfulness was better, and support for bleeding edge features like codegen, but I haven't used VS in like a year or two.


Also, did you disregard my point about support for codegen? netstandard2.1 codegen in C# didn't work properly around a year ago in VS, while Rider had no problems with it.


As someone who does web dev in VS Code, and on the side does C# in Visual Studio, I'm always astonished at just how slow and painful typing code is in Visual Studio, but then impressed by how far ahead of VSCode its refactoring features are.


There's the same trade-off for C++: if you are very patient you can use CLion and enjoy great completion, or if like me you're not a zen monk you use VSCode as a text editor with its 'not so great' IDE features (usually grep works better than VSCode for finding things).

Of course it depends on the size of the project and the plugin used for VSCode (I'm using Microsoft IntelliSense).


I think this post hits the nail on the head. Before LSP, you could google for “$language Vim support” and find some results but more often than not, you were going to get nothing.

Once LSP gained popularity, the question shifted to “why doesn’t $language have a language server yet?” which set an easy goal post for languages and compilers to go from 0 to 1 in terms of IDE support.


But isn’t that exactly what he starts off arguing against?

The first chart shows that before LSP you needed special language support for each language for each editor, which is infeasible. After LSP you just need LSP support for each language. The author then goes on to say “this is wrong”.


I don't think the author is quite saying "this is wrong".

It seems like he's more saying "this is an oversimplification, and the reality is a more-nuanced version of that statement" -- specifically, that rather than "now languages just build a language server, and editors just accept LSP", it's that "now languages just build a language server, and editors just accept LSP, plus a bit of LSP-specific configuration to point the editor at the particular LSP for a context."

Breaking things down a bit further, there are three scenarios discussed in the article:

1. M × N -- every language needs a from-scratch, bespoke plugin for every editor

2. M + N -- the "standard explanation" for LSP, where you build one LSP client implementation per editor, one LSP server implementation per language, and you're done

3. The reality of what exists today: one LSP server implementation per language, one LSP client implementation per editor, and a tiny bit of (usually-end-user-supplied) configuration in your editor to glue those two sides together.

The key difference between scenario 1 and scenario 3 is that scenario 1 requires someone with _very deep_ knowledge of a language's structure and semantics to implement a fully-fledged plugin for _every_ editor, usually requiring deep knowledge of that editor (or vice-versa), where scenario 3 allows both ends to wrap that deep knowledge up behind a standardized interface so that any shmuck can write like <100 lines of lua to glue the two ends together (speaking from my own experience as an "any shmuck" using neovim)
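
For a sense of scale, here is roughly what that glue amounts to on the VS Code side (a hedged sketch using the vscode-languageclient library; it assumes a rust-analyzer binary on PATH, and the neovim/lua equivalent is comparably short):

    // Sketch of editor-side glue: point a generic LSP client at a
    // language-specific server binary. Assumes rust-analyzer is on PATH.
    import {
      LanguageClient,
      LanguageClientOptions,
      ServerOptions,
      TransportKind,
    } from "vscode-languageclient/node";

    const serverOptions: ServerOptions = {
      command: "rust-analyzer",        // the language half of the bargain
      transport: TransportKind.stdio,  // LSP over stdin/stdout
    };

    const clientOptions: LanguageClientOptions = {
      // The editor half: which documents this server should care about.
      documentSelector: [{ scheme: "file", language: "rust" }],
    };

    export const client = new LanguageClient(
      "rust-analyzer",
      "Rust Analyzer",
      serverOptions,
      clientOptions
    );

    // Started from an extension's activate(): client.start();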


From TFA:

> I believe that this standard explanation of LSP popularity is wrong. In this post, I suggest an alternative picture.

If the intent was not to say "this is wrong" that's a very strange way to start your article.

The point of TFA (which is wrong, IMO) is that the M × N explanation was wrong, and the evidence for this is:

1. The LSP implementation itself is trivial compared to the work to get all the information the LSP needs.

2. Outside of dedicated IDEs, nobody implemented the "get all the information LSP needs" before LSP existed

3. GP editors didn't implement the high-level protocols necessary for being a good IDE; a big advantage of LSP for the editor side is that the LSP client can include its own implementation of these high-level protocols. One example of such high-level protocols is code completion.

4. If the standard (quadratic complexity) argument were true, we would have had a world where some GP editors were very good for developing some languages

My rebuttal:

1. Quadratic growth trumps any small constant-factor

2. To the extent that this is true, it's driven by the quadratic growth of M × N. Efforts to implement this get divided between many different groups. Even something as simple as ctags got both a VIM and Emacs implementation

3. As others in HN comments have pointed out, this isn't universally true, and where it is true, the editor communities tended to settle on one plugin that implements these protocols well before LSP existed.

4. I can think of at least two places where this was true: Emacs for Lisp; Vim for C (particularly with cscope, which gives you completion, cross-references, &c.). Nothing was good for C++, but as TFA notes, C++ essentially requires using the full compiler, and prior to libclang supporting C++ there was no Free tool for doing this (Many people tried with GCC, but got burned by the internal frontend API changing so damn often). That there weren't good Free tools for working with C# is kind of ... expected?


mlcscope was decent for early C++. I still miss cscope's ability to distinguish between reading and writing.


If you look at Neovim, they provide a built-in LSP client, so you just have to bring your own language server and everything just works.

Now there are special extensions like rust-tools.nvim popping up that provide additional support on top of the built-in LSP client or enhance the features of the language server.


Right. My takeaway after reading the whole post was "so it really _is_ M * N after all".


This is a “correction” of a strawman only: The left diagram is exactly what would have been necessary to achieve the same result. Since that’s clearly infeasible, most of the arrows didn’t actually exist, and never would have. That was always the point. It’s precisely because the left picture is impractical that something like the LSP was necessary.

Somehow, the author misunderstood the original argument, dismissed it, independently discovered the actual argument, and now claims they’re the one who found the true reason for LSP, when in reality, it’s just the standard argument.


No. The point is that while we need an analyzer for each language, the connections between the editors and the analyzers can be N*M without a problem, because they're very small. So we don't really need a protocol like LSP. But the rise of this protocol caused people to write independent, non-per-editor analyzers, and this is indeed necessary. I completely agree with that.


If that was the point, it'd be a wrong point. The connections would not be small. They would be small on the 30,000-foot view, but up close at the implementation level each and every one of them would come with a crapton of assumptions about how they operate that would at times require architectural changes to the editor to work. This one "helpfully" works based on the files on the disk. That one connects via Protocol Buffers. Each of course has its own set of error messages and a protocol to implement. Whoops, this one is more synchronous than I thought. Whoops, the way this one works conflicts with my incremental compile feature. Crap, this one turns out to implicitly require a certain threading model. Oh dear, this one wants a stream of every keystroke made, but that one wants the entire file sent over when we want to run an analysis. Aw heck, our IDE model requires the whole file to be sent over for incremental analysis but doing that six times a second kills my whole dev machine, and doing it not six times a second makes the latency intolerably slow compared to our current custom implementation for our dominant language.

The original NxM problem is that each language needed support per editor and each editor required support per language. While "each language writes one custom server" and "each editor has to support each language's custom protocol" would still be an improvement, it wouldn't be enough of one.

Now, the idea that an editor can just implement this protocol and magically all the unicorns start singing in unison is a pipe dream as well. But what it does is make something that is completely infeasible for even a well-funded commercial team into something that some new little open source editor can afford to start doing. Support for Haskell may be a bit rough until someone wants to use Haskell in that editor and can fix the editor support for Haskell specifically, because of this quirk and that quirk and the other quirk (the quirks will always be with us), but at least it's in the realm of possibility now instead of a ludicrous pie-in-the-sky idea.


Hmm, maybe I misunderstood the author’s point slightly, but I find what you suggest the argument was even less convincing.

Here’s the history: Every editor has a language abstraction already. And some language analyzers had an “editor abstraction”, perhaps notably Roslyn for C# with its programmable refactoring rules that worked in multiple editors.

None of that was new. But that’s still N*M if you do the math, so that graph was never completely filled — for the obvious reason that this would have been practically impossible. No mystery about it.


And considering how VB.NET was incorporated first-class besides C#, Roslyn seems rather ahead of its time, with both language abstraction at the compiler level and language-agnostic analysers based on semantic info.

I still can't get over the awesomeness of being able to just throw together a fully supported custom analyser from a template—unique to Roslyn to this day, AFAIK.


I mean, this is theoretically true, if an editor provided a language-agnostic protocol for talking to potential semantic plugins. The article says

> Rather, a language should implement a server which speaks some protocol, an editor needs to implement language agnostic APIs for providing completions and such, and, if both the language and the editor are not esoteric, someone who is interested in both would just write a bit of glue code to bind the two together

I mean, yes, ideally that is what should have happened, but the first editor to actually do that was VSCode, and the protocol it chose is called LSP, so that's where we are.


But it doesn't need to be a protocol, just some API. And VSCode actually does that: I think it doesn't even have LSP built in (but I may be wrong).


You are correct, and this is indeed one of the important facets I try to illuminate. VS Code doesn't support LSP natively; it just exposes a bunch of APIs to provide IDE features. Actual LSP support is implemented as a separate library on top of these APIs: an adapter pattern in the wild.
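
As a sketch of what those editor-side APIs look like (this is the standard vscode extension API; the provider body is a made-up placeholder): VS Code only ever sees a language-agnostic provider interface, and the LSP client library is just one thing you can plug in behind it.

    // VS Code's language-agnostic extension point for completions.
    // The LSP client library ultimately funnels server responses into
    // exactly this kind of provider; the editor core never sees LSP.
    import * as vscode from "vscode";

    export function activate(context: vscode.ExtensionContext) {
      const provider: vscode.CompletionItemProvider = {
        provideCompletionItems(document, position) {
          // Placeholder logic; a real provider would query a compiler
          // front end or forward to a language server here.
          const item = new vscode.CompletionItem(
            "hello",
            vscode.CompletionItemKind.Text
          );
          return [item];
        },
      };

      context.subscriptions.push(
        vscode.languages.registerCompletionItemProvider("plaintext", provider)
      );
    }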


It's really not about the arrows.

The benefit of the LSP is that it decoupled the plugin development environment from the editor development environment.

This is a BIG deal.

Remember building plugins for Eclipse? Exactly. You don't, because building them was a massive, massive pain. You had to install a universe of build tools, understand how they went together, build your plugin, fit into the UI correctly, pull it into the system and configure it, and finally you could do some development.

The existence of the LSP means that the plugin development and the editor development are decoupled. I don't have to put together the enormous build system of the editor in order to create the LSP plugin. I'm not stuck in the language of the editor implementation when creating an LSP plugin.

This dropped the cognitive load for creating a plugin by orders and orders of magnitude.


The problem with this argument is that it's based on only a few of the most popular IDEs that the author happens to remember, but there were a bunch of others. For Java, you might remember NetBeans or VisualAge. There's a long tail.

Some of these IDEs supported more than one language. For example, VisualAge also supported C++ and Smalltalk. But each IDE supported a different set of languages with implementations of varying quality, they each had their own complicated plugin framework, and that's the N:M problem.

Even if it were just IntelliJ and Eclipse, that's still implementing support for a new language twice from the ground up, and you'll be doing it in Java both times, which is a lot of work for a team working on a new programming language.

You can look at the Wikipedia page to get a sense of how many different IDEs and languages there are out there: https://en.wikipedia.org/wiki/Comparison_of_integrated_devel...


> you'll be doing it in Java both times

I honestly think this is highly underrated as a reason. Teams working on a language want to program in that language. It doesn't really matter if it's complex, or they have to repeat the work for multiple IDEs. They love doing that. They will actually compete to do that (look how many posts on HN these days are just "$MUNDANE_TOOL written in Rust"). The main thing LSP did is it moved the boundary of the implementation to the other side of the language barrier.


> Dart, despite being a from-scratch, relatively modern language, ended up with three implementations (host AOT compiler, host IDE compiler (dart-analyzer), on-device JIT compiler).

We also had an earlier analyzer written in Java that was mostly discarded and rewritten as well as two compilers to JavaScript that share less code than you would expect.

Some of the redundancy is because, as the author correctly states, the needs of an IDE are quite different from the needs of a batch compiler. Likewise, an ahead-of-time whole program compiler has different constraints from a JIT compiler running a program from source.

But a lot of the redundancy in Dart's case is incidental. We have a fairly large, distributed team with a lot of autonomy and sometimes that led to people doing their own thing instead of putting in the effort into coordinating and sharing more code.

Also, the language changed radically from Dart 1.0 to 2.0. 1.0 was a dynamically typed scripting language designed to be run from source in a native VM directly embedded in a browser with a separate optional type system mostly used at dev time. Dart 2.0 is a fully statically-typed language with a more typical compilation process.

That monumental change led to some duplication and technical debt that we are slowly paying down by sharing more and more of the front end over time.

The path I see most new languages take is:

1. Start with a command-line batch mode compiler.

2. Eventually get popular enough that users clamor for real IDE support.

3. Cobble together an IDE plug-in based on the batch-mode compiler's front end.

4. Discover that the latency is horrific and slowly and painfully realize you need to write a new front end architected for IDE use.

5. Now you have two front ends.

I believe the throughput loss of a front-end designed for interactive IDE use is less than people realize. So I suspect that a more efficient path for any new language is to design your front end for interactive IDE use from day one.
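
To sketch what "architected for IDE use" tends to mean in practice (all the names below are hypothetical; this is just the shape, not any real compiler's API): incremental edits in, partial results out, and every query must tolerate broken code.

    // Hypothetical IDE-first front-end interface (names are made up).
    interface TextEdit {
      start: number;   // offsets into the current text
      end: number;
      newText: string;
    }

    interface Diagnostic {
      start: number;
      end: number;
      message: string;
    }

    interface FrontEnd {
      // Keystroke-sized edits must be cheap: reparse/reanalyze only
      // what changed, rather than rebuilding the world.
      applyEdit(file: string, edit: TextEdit): void;

      // Queries are lazy, partial, and must work on code that doesn't
      // compile -- the common state while the user is typing.
      diagnostics(file: string): Diagnostic[];
      completionsAt(file: string, offset: number): string[];
      definitionOf(file: string, offset: number): { file: string; offset: number } | null;
    }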


> So I suspect that a more efficient path for any new language is to design your front end for interactive IDE use from day one.

Maybe! One problem is that this is going to be harder than starting with a batch compiler: you might not get to 2. due to the lack of resources.

Another plausible trajectory is the following:

1. you start with a non-self hosting batch compiler, whose explicit goals are simplicity, language-designer iteration speed, and short time horizon, and which has deliberately minimal capabilities when it comes to error messages and other user-facing niceties.

2. using this bootstrap compiler, get to the 1.0 of the language.

3. implement a second, self hosting compiler with a long-term focus, interactive capabilities and all that stuff.

4. either discontinue bootstrap compiler, or explicitly support it as a minimal implementation for the purposes of differential fuzzing and specification.


One particular problem with M+N is that it requires LSP to cover all the features provided by servers and wanted by clients.

This is not always the case for existing servers. Semantic highlighting wasn't in the protocol until 3.0 (iirc), and many servers have extra non-standard endpoints. So the real-world situation is that there's a core feature set allowing M+N, plus custom endpoints and features requiring M×N.

Nevertheless, the non-standard M×N endpoints at least follow the "shape" of LSP (same communication channel and message format), and that already makes M×N much easier than before! As the core protocol evolves, more M×N features will be decoupled into M+N.
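
For example (a hedged sketch; the method name and payload shapes below are hypothetical, loosely modeled on how servers such as rust-analyzer ship extension endpoints): a non-standard request still rides on the same JSON-RPC channel, so the client-side cost is one typed call rather than a whole new protocol.

    // A custom, non-standard endpoint reusing LSP's transport and
    // framing. Method name and types here are hypothetical.
    import { LanguageClient } from "vscode-languageclient/node";

    interface ExpandMacroParams {
      textDocument: { uri: string };
      position: { line: number; character: number };
    }

    interface ExpandMacroResult {
      name: string;
      expansion: string;
    }

    async function expandMacro(
      client: LanguageClient,
      params: ExpandMacroParams
    ): Promise<ExpandMacroResult | null> {
      // Same channel, framing, and error handling as standard requests.
      return client.sendRequest<ExpandMacroResult | null>(
        "myLang/expandMacro",
        params
      );
    }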


This is definitely true, but like you mention, because there's a common communication channel and message format, these language-specific RPC endpoints exist, whereas before LSP, language servers didn't have "custom" features because there weren't IDE-first language servers at all.

There are already more than a couple of LSP features I wish existed, but in the meantime it's good enough to have the (typed! and using rich data structures!) RPC endpoints available for enterprising users to wire up to their IDE's presentation layer.


I would also like to note that this could have a chilling effect on programming language design in the future. If LSPs become ultra-mainstream, they might carve out a path of least resistance that prevents future innovation and the adoption of language features incompatible with the protocol. Although I must admit I have no idea what such features could look like.


Hmm maybe these:

- Programming based on fragments, not documents (e.g. LEO https://leoeditor.com/)

- Live programming (e.g. smalltalk environments)

- ... where certain actions are not available, e.g. a PL geared towards speech recognition may not support "hover"

But on the other hand, these are not well-aligned with current text editors (text based, has a cursor, organize info as documents etc.)

So languages are one side, editors are the other.


Standardization ossifies some things but also allows building many things on top of a stable base.

Innovation could just happen at a higher level.


> Innovation could just happen at a higher level.

That, or retro-backing into the protocol novel features required by those experimental languages.

In my experience, industry-wide ossification and chilling effects appear not because the environment makes it difficult to build new designs, but because it makes it difficult to think outside the current paradigm and imagine novel features.

Once those innovative ideas are out of the box and many people understand their need, the frameworks needed to support them are created. I know because right now we're experiencing such a paradigm shift in terms of note-taking and knowledge building (abandoning WYSIWYG word processors in favor of networked-thinking bidirectional-linked graphs of notes), which IMHO sooner or later will extend to programming tools as well.


> I know because right now we're experiencing such a paradigm shift in terms of note-taking and knowledge building (abandoning WYSIWYG word processors in favor of networked-thinking bidirectional-linked graphs of notes), which IMHO sooner or later will extend to programming tools as well.

What paradigm shift? Can't say I've experienced it.


If you've been using something like Evernote or the Emacs outliner, you may have experienced an early version from its beginnings. There's a rise of new outliners and note-taking tools based on bi-directional linking, which facilitate a style of workflow dedicated to compiling related ideas that appear in very different contexts. This makes it easier to group them together to generate a bottom-up structure of concepts that allows you to organise your ideas and personal projects into larger and larger structures.


I wonder if any of these tools have crossed the chasm, though. In 99% of office environments I've seen, it's a split of probably 90% MS Office and maybe 9% G Suite.

Sure, people use OneNote and such as a secondary thing, but outside of some startups, they're not the central thing. And startups are early adopters, very fickle.


No, these tools are not being used for producing documents (not directly at least).

But that's not their purpose; their primary role is as personal databases, where you can dump any factoid, TODO or interesting article that you know you'll want to process later, freeing your short-term memory and your mental burden. Having all your notes in a single searchable knowledge base allows it to grow organically and find patterns, so that you can do incremental iterative refinement of your intellectual activities.

Creatives using them say that it's way easier to compile ideas for writing and publishing articles or blog entries when they are collected and processed with these tools.


I don’t know why the author spends so much time “debunking” the M * N argument for LSP. It seems a minor side point.

It’s a bit incoherent anyway. The author’s argument seems to be that if you have language-provided pluggable language servers and editors that can host these language server plugins appropriately, then the M * N shrinks down to a small problem, without the help of LSP. Well, OK, but that’s actually the original LSP M * N argument, except with LSP replaced with something just like LSP.


The point is that you don’t need a common protocol. A bunch of language-specific protocols would have worked! It’s interesting to ask why that didn’t happen.


Is it really interesting? Every editor would have had to support a bunch of different protocols, instead of just one. That's back to square one.


But in fact that did happen. Several languages started to implement “editor neutral” code analysis and refactoring. That worked, for the most popular editor/language combinations, but not as a general solution because of N*M.


Can you please give a few examples?


OCaml (Merlin), Common Lisp (SWANK), Clojure (nREPL), Python (Jedi), even Idris 2 (its IDE Mode).


We can and do have both. LSP is a lowest-common-denominator protocol. Several languages have more-powerful language-specific protocols that can be wrapped with LSP, at the cost of some advanced features.


It's not IDEs in 2022 that use LSP, it's editors. And speaking as a grey-beard, M x N wasn't "small".

From memory: Notepad, Notepad++, SciTE, vim, emacs, NetBeans, Eclipse, Visual Studio, IDEA, Sublime Text, BBEdit, plus a number of Linux-only ones. Languages: VB, C++, Pascal/Delphi, Fortran, Java, C# (from 1998/9?).

The problem was tight coupling of languages to IDEs, in particular VB, and C++ to VS, and Java to Eclipse.


I feel really alone being happy without autocomplete. Doesn't anyone else find it annoying and distracting to have your IDE or editor constantly throwing "suggestions" at you? I am quite happy with that feature turned off.

I guess I do use it in the Python REPL quite often, though. But there at least you have to prompt it by pressing tab.


As someone who heavily uses not only autocomplete but also Resharper and GitHub Copilot, I sorta reject the premise of the question. It is annoying, but it's still worth it even so. I have to deal with IntelliSense, Copilot, and Resharper all fighting with each other and I think there's a lot of room for improvement in that interaction. But including the time spent fighting with them, I'm still writing code much faster. Ten steps forward, one step back. With standard code completion especially, most of the time I'm not looking at the things popping up, I'm using it via muscle memory; I know what the completion will be, so I just type the prompt and hit tab, I don't wait to see a popup.


I can't link for hopefully obvious reasons

In user studies it was found that for beginner/novice coders (or really, users that aren't SWEs), autocompletion was the most requested feature for improvement in IDE support. It's foundational for understanding what they can write and making sure it's valid.

And that makes sense to me, if you have a lot of experience in a codebase and ecosystem it's not as important. If you don't then typing `.` and seeing a bunch of aptly named methods show up, then snippet completion for their arguments to tab through them, you are immediately productive. Autocomplete turns known unknowns into known knowns.


I feel the same about every LSP feature. They're all useful, but I must specifically ask for them or they're too annoying. The only exception is diagnostics, which are fine shown as a small marker on their line, though they also get annoying if the description is expanded without being explicitly asked for.


I think I'm with you in that I like autocomplete but I want to ask for it; prompting feels too busy.


Agreed! Always-on autocomplete feels like the editor equivalent of someone asking "are we there yet?" every few seconds during a road trip.


Just today I had the following interaction with Outlook's web interface:

    Me: I want to type "Hi Andrew,"
    Outlook's autocomplete: I see an <H>, I see an <i>...
    Oh, oh, I know this one! Do you want me to autocomplete "Hi" for you?
    <space>? You do! I'm helping!
and I'm left getting two characters out of my three key presses in a frustratingly unexpected way.


I set the delay to 8 seconds. If I stop and think about a line for too long, then I get autocomplete, but if I'm just pausing to prepare my next batch of code, it doesn't get in the way.

It's a good middle ground.


I am in a similar position actually -- I don't find autocomplete to be the main feature of IDE, it doesn't add that much over dumb hippie-expand. Code navigation is much more important.


100,000% this.

If I could get the navigation and analysis (go to def/decl/parent/etc., show/jump to uses/children/impls/etc.) all on its own, without auto-complete, type hints, or syntax highlighting, I'd be a pretty happy camper.

I'm fine with features that are on-demand (on-demand completion, type info, actions, etc.). They add occasional value without distraction.

About the only other things I care about in a development environment are compile output navigation (optionally with isolated warnings & errors, but full output is mandatory), configurable run and tooling integration (doesn't need to be close integration, either, just flexible and configurable).

Gravy/Frosting (depending upon your preferred metaphor) would be debugger and documentation integration. Both are hard to do well, and can usually be accomplished using a decent run/tooling interface.


> Doesn't anyone else find it annoying and distracting to have your IDE or editor constantly throwing "suggestions" at you?

I see where this is coming from. That's why I do not let neovim hit me with completions all the time, but only when I request them. Most of the time, I can work with the (neo)vim builtins, like "complete word" or "complete line" and do not even use the language-server provided semantic autocompletion. But when I need to, it is only two keystrokes away.


Me too. I have turned off auto complete in company-mode. Instead I bind a key shortcut to do it manually on demand when I need it. It works great.


The main value I've seen from it is in CodeBasesWithReallyLongNamingConventions.

I'm glad that LSP exists, and I've got it hooked into vim... but autocomplete is seldom very useful to me. It's usually faster to just type the thing out.


How do you work in new codebases?

Do you remember all those properties, method names, like wtf? how?


You're not alone! =)


good to know! there are dozens of us!


Great article! LSP brought me back to Emacs. Real IDE support with best in class editing abilities is bliss.

Microsoft really nailed it with LSP (albeit, I agree with the caveat that the implementation has some warts). JetBrains must be sweating: more and more of my colleagues and coworkers are using VSCode these days.

I think JetBrains should seriously consider LSPifying their backends. If they don't, they risk someone else building a better JVM LSP, and then they are going to lose a lot of market share. Plus, it gives them the opportunity to shape the LSP spec. I'd happily pay JetBrains a subscription right now just for the backend if I could use it with my editor of choice.


IntelliJ isn't popular just for code completion.

People use IntelliJ because of the various plugins/features/debugging for the whole JVM ecosystem/frameworks: https://www.jetbrains.com/idea/features/#jvm-frameworks

Similar to how Rider became immensely popular (1st-class Unity Engine support), and now Rider has C++ support for .sln solutions and 1st-class Unreal Engine support.

IntelliJ's problem is their plugin API and development cycle: for a while you needed to restart the whole IDE whenever you wanted to recompile your plugin; as a result, the 3rd-party plugin ecosystem is very poor.

Hopefully they fix all the annoying things with Fleet: https://www.jetbrains.com/fleet/

They went with the JVM, which gives them a poor start already (no comparative benefit over VSCode and Electron, same bloatshit, but they can eat some market share by providing a better out-of-the-box experience).


> intellij isn't popular just for code completion

But it's still ten times better than anything that vscode or Visual Studio offers when it comes to e.g. C#. VS was supposed to get better with IntelliSense, but it will still put some obscure system class names higher than your local variables in autocomplete suggestions... It's really hard to go back to vscode/VS after using JetBrains products for a while. To me it feels like shooting yourself in the knee just for the sake of being hip and 'cool'.


Exactly. I found the same issue with VSCode and various LSP implementations; it's very poor compared to IntelliJ. That IDE knows your code perfectly, so navigation is super smooth, and you have access to advanced refactoring tools and great debugging support. JetBrains mastered it; they are unmatched. I wish they supported more languages (D and Zig, for example).


It really depends on the LSP implementation. Some are pretty bad. Some are really good. For example, JetBrains WebStorm uses the same Microsoft language server as VSCode, and TypeScript support is really good in VSCode. I also find the golang LSP server pretty comparable, although it could use some polish. Another area where VSCode/LSP blows away JetBrains is C/C++ code, using something like ccls, and Rust with rust-analyzer.

Like I said, in my ideal world, JetBrains would simply provide commercial LSP servers for all of the things they support. Like you said, they have some of the best-in-class IDE support for various languages, especially JVM.


Eclipse already did (integrate JVM LSP support). It still kinda sucks in lots of ways, but at the same time, it is pretty surprising how well it works, and it is totally usable, e.g. for working on React. If more work is done on it then it's definitely going to start changing things a bit (in part also because Eclipse itself has improved dramatically in the last few years).


Same for me. I used Emacs for many years, but other editors (especially VSCode) were getting better language support and it was starting to feel long in the tooth. With LSP support, it’s amazing again. I’m happy to be back in my favorite environment.


JetBrains new Fleet editor that’s coming out soon uses LSP and looks pretty great.

I’d love an excuse to stop using VS Code, and Fleet is shaping up to be that, without losing the actual plugins I care about (which are largely LSP features anyways).


The existence of multiple LSP clients demonstrates the success of LSP and is a strength, not a weakness as the author suggests.

Often, it's because LSP clients have differences not in interacting with the LSP Server, but in how that is surfaced to the user. Some LSP clients are written in different languages, and that can make some implementations faster than others. This is why LSP is a success. It allows for clients to be written in many ways but for all of them to do the most basic thing they need to (go to def, rename, refactor, search symbols etc).

Multiple LSP clients are a success and offer users more choice about how they want to interact with their code. Do I want a search for symbols to open in a new window, to replace the current buffer, in a searchable list, etc.?


> Notably, the two-sided market problem was solved by Microsoft, who were a vendor of both languages (C# and TypeScript) and editors (VS Code and Visual Studio), and who were generally losing in the IDE space to a competitor (JetBrains).

Talk about commoditising your complement!

----

Unrelated: how are LSP implementations and Emacs support generally these days? When I last tried 2–3 years ago the Emacs-specific plugins for various languages were still more reliable and predictable than their LSP counterparts, but has this changed a lot? If so, I might be inclined to try again.


I've had a terrible experience with lsp-mode, gopls and Emacs 27. Minor issues like inconsistencies and CPU/RAM hogging that require manual restarts, and the major one that made me stop using it: Emacs would frequently crash when editing Go files.

I reported this to lsp-mode, but they kicked the can over to Emacs[1], which I never bothered reporting. The fact this is now a combination of tools with no clear reporting path when something goes wrong is awful UX.

Every few months I update the entire setup with hopes things improve, but eventually I always get the crashes. I would really like for this to work, as the DX when it does is great, but for me it's just not there yet.

[1]: https://github.com/emacs-lsp/lsp-mode/issues/3242


I will just note that I have the opposite experience, as a daily user of go-lsp for about two years. It has been more reliable and much faster than the older Go modes I tried. It has also worked much better for me in Emacs than it does for colleagues using VSCode's integration with gopls.

It's not without problems, occasionally taking quite a bit of processor power or needing the odd restart when it falls out of sync with a multi-module project, but it has never once crashed Emacs for me (using it on Ubuntu on WSL2, for what it's worth).

I am not writing this to invalidate your experience. It's clear that something is rotten with the way the plugin uses Emacs on your OS, and it's frustrating to not get help with that. I will note that their observation - that an Emacs crash is first and foremost an Emacs problem, not a plugin problem - is fair, though. But I also would not want to go through a long series of communications between different projects, so I get your pain entirely.


LSP support with lsp-mode is fantastic, at least with Rust. I use Elixir at my day job, and while it does a good job of doing 98% of what I want (go to definition, find references (iffy; Elixir is difficult to analyze like this), code lenses, autocomplete, etc.), there are a few rough patches. This is likely due to shortcomings in the language server implementation, not the lsp-mode library itself.

Like I said, editing Rust is awesome in Emacs; saw someone working on Rust in VS Code and didn't notice anything that Emacs couldn't match.

See also lsp-ui; I don't turn this on personally, but it does give you some significant bling.


rust-analyzer is easily the best LSP I've played with, by a huge margin. It's good enough to sell the whole concept on its own.


lsp-mode is still not part of the official distribution, though.


And I’d argue that that’s right! As the post says, there’s no LSP in core VS Code either!


Neither is eglot, the other implementation. AFAIK there is no LSP package in the official distribution.


eglot is part of the GNU ELPA package archive, managed by the people doing Emacs. It's possible to install it from the Emacs package manager, and the package will be signed and verified. So although it's indeed not part of the official distribution, it's pretty easy to install.


Is that not also true of lsp-mode?


Not the same: lsp-mode is part of the MELPA archive, not GNU ELPA. So it's easy to install too, but MELPA has no package signature verification. For commonly used packages I don't think it's a big problem; any funny business would be detected quickly.


It's moving fast, and the experience will depend a lot on the LSP server itself now. As for me, I've recently switched to LSP with eglot for C, C++ (clangd) and Python (pyls). The experience is good, in that it's easy to forget about it until you need it, and then it works OK.

It's best to have a recent Emacs though. At least Emacs 27, which introduced support for a C based JSON library (in Emacs 26 all the LSP parsing is done in elisp). This is the biggest change. Emacs 28 introduced an elisp JIT. I haven't any hard data, only a subjective experience but in my daily use with Emacs 28 it's fine, whereas it could be a bit sluggish at times with Emacs 26.


It's pretty good. LSP-related modes are best used in tandem with the languages' major modes, and lsp-mode has gotten much better over the last year or so imo. It's more reliable and consistent across languages.

I've used it with Python, C/C++, SQL, HTML/CSS/JS, without issue and of course it has stellar support for Clojure.


It's catching up. You can use them in tandem though and ignore the features that overlap by configuration. I use Eglot with a bunch of different languages. Clojure is the only one where I still use other packages for dealing with similar things.


Can you please comment on your experience with Eglot? Is it better/faster?


Eh, it's gonna be really hard to beat Emacs' prowess in the land of Lisps. This rings pretty true to me: they all work well together.


IME, if you are using an up-to-date Emacs, compiled with the right libraries and options (M-x lsp-doctor), and you are using a language where the LSP support is good, you'll have a good time writing in that language in Emacs. I do C# at $DAYJOB, and I write most of it in Emacs. Outside of work, the only LSPs I have tried are Java and Kotlin. The Java LSP is great at the language level, but falls down in understanding project structure. The Kotlin LSP is vice-versa. I haven't tried one where there's a good Emacs-specific plugin like Python, though I probably should.


I switched back to Emacs + lsp-mode for Python, and it’s been excellent.


> If the standard theory were correct, then, before LSP, we would have lived in a world where some languages had superb IDE support in some editors. For example, IntelliJ would have been great at Java, Emacs at C++, Vim at C#, etc. My recollection of that time is quite different

Interesting theory, but my recollection of that time is pretty much exactly that. Lots of IDEs with great Java support (IntelliJ, Netbeans, Eclipse), a very small number with good C++ support (Visual Studio, Qt Creator, maybe KDevelop or Code::Blocks if you're generous with "good"), and not much else.

How many IDEs had great Delphi support? As far as I know exactly one - the Delphi IDE.

Ok, there were a couple of exceptions - Eclipse had decent C++ support IIRC, and IntelliJ sort of had support for loads of languages, but that was a business whose main product is IDEs.

I think the alternative suggestion is also true. They're both good reasons for the success of LSP.


I got my hands on LSP and developed a language extension for a DSL. I found the overall idea great. The only thing that grinds my gears is that currently, LSP focuses on some very common language support features plus some Microsoft / VS Code inventions such as CodeLenses. The latter being a feature of the VS Code API, I am wondering whether the LSP will stay a platform-agnostic protocol.
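
For reference, the protocol shape in question, transcribed from the LSP 3.x spec (the "myExt.runTest" command id in the example is hypothetical): a CodeLens is just a range plus an optional command that the client is expected to render inline, which is where the VS Code flavour leaks in.

    // The LSP CodeLens types, per the 3.x specification.
    interface Position { line: number; character: number }
    interface Range { start: Position; end: Position }
    interface Command { title: string; command: string; arguments?: unknown[] }

    interface CodeLens {
      range: Range;
      command?: Command; // may be filled in lazily via codeLens/resolve
      data?: unknown;    // opaque server state for the resolve round-trip
    }

    // Example item in a textDocument/codeLens response ("myExt.runTest"
    // is a hypothetical client-side command id):
    const lens: CodeLens = {
      range: { start: { line: 3, character: 0 }, end: { line: 3, character: 10 } },
      command: { title: "Run Test", command: "myExt.runTest", arguments: ["testId"] },
    };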


Of course it won't [stay a platform agnostic protocol]. They'll leverage it as soon as they think they can get away with it. MS owns Github. They have already rolled out AI auto-completion. This LSP will be used as another way to get between users (in this case the users are devs) and the machine and "extract rent".

Ultimately, complexity itself is the enemy.


Tangentially related: if any talented folks out there could please help out with (to the best of my knowledge) the only LSP server for Kotlin out there[0], that would be amazing!

I'm really attracted to Kotlin as a language[1], but, last I checked, IntelliJ is the only way to get great tooling for Kotlin, which is a show stopper for me since I've invested a lot into my neovim config.

[0] https://github.com/fwcd/kotlin-language-server/projects/1

[1] What attracts me to Kotlin is that it is actually statically typed (ruling out Python/JS/TS), with a relatively expressive type system (ruling out Golang), great type inference, not verbose (ruling out Java), not too involved with low level details (ruling out Rust), not too steep of a learning curve (ruling out Rust again), and a great cross-platform community (ruling out Swift).


I guess you're aware, but the JetBrains folks (who do Kotlin and the IDE range) have been a bit cold towards LSP since it basically commodifies their USP for their IDEs (rich, language-specific integration), which is why Kotlin LSP is a bit of a no man's land.


Yeah, that's true. I can see their reasoning, but I'd hope they wouldn't be so cold.


Their USP is native, efficient tooling for something that's usually implemented in managed platforms like Electron/JS or the JVM. They would in fact benefit from a rich LSP ecosystem, since their tooling would be more easily applicable across languages.


> Their USP is native, efficient tooling

Why is Android Studio so slow, then?


I don't use them, but aren't their IDEs still JVM?


Yes. They use their own packaged version of the JVM.[1]

[1] https://confluence.jetbrains.com/display/JBR/JetBrains+Runti...


That's their existing (and open) codebase, but I was referring to their recently announced new IDE product called Fleet.


Fleet can't be their USP, because it doesn't exist.


Yes, I'm aware.


I once had the idea of implementing an LSP server for Kotlin/Java by embedding it as an IntelliJ plugin and backgrounding the IDE while doing the actual coding in Emacs.

It kind of worked, but once I stopped needing to use Java for my job it became too much of a hassle to flesh out.

https://github.com/Ruin0x11/intellij-lsp-server


Nice!


Yeah, that is my pain point with Kotlin. I really like it, but the (pretty much nonexistent) non-JetBrains tooling is a deal breaker for me.


> which is a show stopper for me since I've invested a lot into my neovim config.

Level of confidence just 0.4, but if you invested the same amount of time into the IntelliJ platform, you might have gotten more out of it. The "advanced semantic features" rabbit hole is as deep as "advanced vim", it's just less popular :-)


If you can share more about your reasoning here, or share blog posts etc (especially if they compare/contrast with the LSP/vim experience), I'd be very grateful!

I've been looking everywhere for evidence of the value-add of IDEs as compared to my neovim_plugins+LSP setup, but haven't found anything that convinced me to do the investment.


I've written a bit about this in https://matklad.github.io/2020/11/11/yde.html

Some specific things which IntelliJ brings to me:

1. Rich refactors -- things like "change signature" are a huge time saver in big codebases. LSP still can't do interactive refactors which require user input.

2. Structural Search and Replace -- you can grep and sed source code by matching on AST and semantic information.

3. Significantly more polished basic features. When I search for a thing in VS Code, I get a flat list of 20 occurrences. When I search for a thing in IntelliJ, I have 20 occurrences, but I immediately see that 12 come from tests, and, among the remaining 8, I can clearly identify 2 writes and 6 reads, and it's the writes I care about.

4. Significantly better integration with language-specific tooling. When I want to run a custom command in VS Code, I open the terminal and type `cargo run --package foo --bin bar`. In IntelliJ, I press a shortcut and get auto-completion not only for general cargo commands, but for project-specific things as well (e.g., IntelliJ knows that the `foo` package exists and completes that).

5. Local history -- rather than maintaining a hard-to-visualize and unintuitive undo-tree, IntelliJ transparently remembers all transient versions of a file and shows you a fine-grained "git log" with diffs and such.

6. Merge tool that understands semantics of the language and can resolve a lot more conflicts automatically (though, to be fair, the actual Git interface still is not as good as magit)

7. A lot of polished "small things". IntelliJ helps a lot with adding punctuation, placing the cursor in the right position and doing other small editing tasks. Subjectively, it significantly reduces the cognitive load for just typing the code in.

I guess the overall thing here is not so much some big, flashy features, but rather a myriad of small, polished details. Just as vim has a dedicated command/key sequence for anything, IntelliJ exploits any semantic information it can get from the source code to be helpful.


Didn't realize I was talking to the person who worked both on IntelliJ Rust and rust-analyzer!

This is the best case I've ever heard for IDEs. Thank you!!

1 and 3 seem to me to require extensions/improvements to the LSP protocol.

2 and 6 seem like things one could build on top of tree-sitter (but I don't think anyone has as of yet).

5 seems already implemented in a vim plugin[0].

Thanks for 4 and 7 (the "myriad of small, polished details") also, I feel like I have to spend a week using an IDE to get a feel for them.

[0] https://github.com/dinhhuy258/vim-local-history


If you weren't aware, VS Code recently gained (5). (Though JB's is still better.)


Indeed, I wasn't, thanks!


It seems you could also work on the IdeaVim plugin instead?

NB: Using the compiler that way probably isn't the right way to go. IntelliJ is somewhat modular. You could parse Kotlin into PSI and then re-use parts of the Kotlin IDE plugin in headless mode. Other JB products do this, e.g. Qodana, so you don't really need the GUI to be running to do IDE-type stuff with it.


> IdeaVim

I really should try that out more. But from my limited foray into using vim emulation: it's not perfect + I really don't like how resource intensive and slow IDEs are + it's not just using vim text navigation, but my neovim plugins, tmux, ripgrep, fdfind, etc.

> You could parse Kotlin into PSI and then re-use parts of the Kotlin IDE plugin in headless mode

I currently don't have the bandwidth to look into this kinda stuff, hence my call-to-action for "talented folks" :)


Right, but part of why IDEs are resource intensive is they're doing a lot of analysis of the code. If you write an LSP you'll have the same issue because the same work has got to get done somewhere.


It's true that some LSPs are resource intensive too. But in my experience, the "little things", like opening the editor/IDE and new files quickly/instantaneously, tend to be better with neovim than with IDEs. I'd love to hear about others' experiences with this though!


I have feelings about the last paragraph there. Most protocols are limiting by nature, so although LSP has done wonders for a lot of languages across a lot of editors out there, I'm stuck thinking that it might have stifled progress for some as well. I don't have a proposed solution or anything, just pondering if we could have done better.


Well (Ok, I'm partly contradicting the article here), most of the things LSP supports (autocomplete, jump to definition, ...) have been pretty standard features of more advanced IDEs (the original Visual Studio, the various JetBrains IDEs, even good ol' Delphi) for decades now. So the "problem domain" is pretty well defined already, LSP is just standardizing the interfaces.


Advanced IDEs are advanced because they provide so much more beyond basic jump to definition and autocomplete (which, again, can be very advanced and be context-aware, provide code shortcuts etc.).

It's good that tooling has improved across the board for many languages with LSP, but there's so much more still.


The protocol also has the concept of custom commands, so whenever I wanted to add some feature that they hadn't thought of, I'd just use that.

Obviously you have to add a custom implementation on the client for each IDE, but that's the case without LSP too.
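
For reference, a hedged sketch of what invoking such a custom command looks like on the wire. The command name and arguments here are made up; each server defines (and advertises) its own:

    // Sketch of a workspace/executeCommand request (a real LSP method);
    // the command and its arguments are hypothetical, server-defined values.
    const request = {
      jsonrpc: "2.0",
      id: 42,
      method: "workspace/executeCommand",
      params: {
        command: "myLang.generateBoilerplate", // made-up custom command
        arguments: [{ uri: "file:///src/main.mylang" }],
      },
    };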


Cool, so we can have "best viewed with IE" again?


It is unrealistic to expect one server to have everything for every language. What does Haskell do with "extract to method"? What does C++ do with "convert to point-free"? This isn't like HTML where browsers were deliberately fragmented for strategic reasons, languages have these differences for their own good reasons.

But sharing as much as possible and harmonizing as much as possible is a huge step forward, even if each language still has a bit around the edges to implement. At least they agree on language, fundamental communication model, common functionality, etc.


So when using rust-analyzer with Emacs, I've noticed that it lists code transformation suggestions based on where your cursor is, and you can optionally choose to apply those.

That seems to be generic, i.e., rust-analyzer provides the suggestions, and lsp-mode is just listing them. Though I don't know that for sure.


> it might have stifled progress for some as well

Which language got stifled as a result of the existence of LSP? Would that language have been stifled anyway without LSP?


I haven't looked much into the implementation details of LSP, so I was not aware of custom commands when writing my comment. Maybe it hasn't stifled anything at all, but efforts have definitely been split, as prior art is still maintained years after LSP's launch.


We could always have done better, but the nice thing is that, most of the time, we still can.

Will it take time and energy? Sure! Such is life.

At least it's not as bad as e.g. JavaScript.


I tried a few language plugins for VSCode that were all based on LSP, and they were all broken.

So for me, the question is wrong. It's not "Why is LSP so great?".

The question should be "Why are people so hyped about tech that's based on questionable assumptions and that hasn't proved itself?"

The reason VSCode invented LSP is that different programming languages don't have a reliable way to intercommunicate (other than as separate processes communicating over a socket).


> The reason VSCode invented LSP is that different programming languages don't have a reliable way to intercommunicate (other than as separate processes communicating over a socket).

The processes-communicating-via-IPC architecture is way underrated for plugins. It's much easier to separate the plugin from the host, and to cleanly stop or restart the plugin. For example, sometimes (rarely) rust-analyzer hits a bug and starts consuming 100% CPU. When that happens, I just kill that process and VS Code transparently restarts rust-analyzer; I don't have to restart the IDE, and all other plugins are unaffected as well. That would be practically impossible if rust-analyzer were a library instead of an independent process.
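
For those unfamiliar with the mechanics, a minimal sketch of the IPC involved: the server is just a child process speaking JSON-RPC, usually over stdio, with each message framed by a Content-Length header. In TypeScript (Node), sending the standard initialize request looks roughly like this:

    import { spawn } from "node:child_process";

    // The server is an ordinary child process; the protocol flows over stdio.
    const server = spawn("rust-analyzer", []);

    // Frame one LSP message: a Content-Length header, a blank line, then the
    // JSON-RPC body. This sends the standard initialize request.
    const body = JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "initialize",
      params: { processId: process.pid, rootUri: null, capabilities: {} },
    });
    server.stdin.write(
      `Content-Length: ${Buffer.byteLength(body, "utf8")}\r\n\r\n${body}`,
    );

Because the server sits on the other end of a pipe, "restarting" it is nothing more than kill-and-respawn, which is exactly what makes the recovery described above painless.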


What you just described with great enthusiasm as a feature is to me a terrible bug.

It means that as a user of the system you must understand the inner workings of the system.

Imagine someone says this about a car.

"If you are driving on the highway and some component dies midway, you just open the cover and press a button on the broken component and it restarts itself! Isn't that much better than waiting two weeks for some repairman to repair it?"

I don't want a car that has components that just randomly die mid flight. That's not a feature. It's a deal breaker.

In more general terms, a system where failures are hard (loud and fatal) is easier to make robust, because the errors are visible to developers.

In a dynamic system where failures are "not my problem", it's very easy for the system to develop into a wobbly unstable mess, because no one owns the system. It's just a bunch of plugins that may or may not behave erratically.

That's not a good way to build a stable, reliable IDE.


Isolating failures to small, restartable components, instead of letting them spread to the entire system, is a bug in your eyes?! Do you think rust-analyzer wouldn't ever crash if it were loaded and tightly integrated into the address space of VS Code?

Completely bug-free software is a desirable, but utopian goal. Why do you think processes with separate address spaces were invented? Would you rather go back to the Windows 3.1x days, where a bug in a single program could bring down your entire system? Do you believe those programs were "easier to make robust because the errors [were] visible to developers"?

> "If you are driving on the highway and some component dies midway, you just open the cover and press a button on the broken component and it restarts itself! Isn't that much better than waiting two weeks for some repairman to repair it?"

Yes, that would be much, much better than waiting for two weeks while being stranded in the middle of nowhere.


> Isolating failures to small, restartable components, instead of letting them spread to the entire system, is a bug in your eyes?! Do you think rust-analyzer wouldn't ever crash if it were loaded and tightly integrated into the address space of VS Code?

How far would you extrapolate this? Where would you draw the line?

Would you say VS Code would be more resilient and reliable if it had more isolated processes for different parts of the editor?

- Process to blink the cursor

- Process to render the text

- Process to run syntax highlighting on the text

- Process to show the buttons on the left most side bar

- Process to draw the file/directory tree

I think it would be obvious that this would be insane.

Yet, why should it be? Wouldn't it be better - according to this way of thinking - because it means the editor will not crash if the cursor-blinking code crashes, or if the file tree viewer crashes?


> How far would you extrapolate this? Where would you draw the line?

To where it makes sense from an engineering perspective. Yes, that's what makes good engineering: selecting a pragmatic point between two extremes.

> Yet, why should it be? Wouldn't it be better - according to this way of thinking - because it means the editor will not crash if the cursor blinking code crashed! or if the file tree viewer crashed!

Your argument is absurd. You're trying to counter my point by making up an extreme version of what I said and then refuting this imaginary point. This kind of logical fallacy is called "straw manning" [1]. This is how it works:

"According to your line of thinking, wouldn't it be better if every program shared the same address space, even in general purpose computing? Maybe even with the kernel? Then every programmer would be really* invested in avoiding bugs. I think it would be obvious that this would be insane. Therefore, you're wrong and I'm right."*

Can you see now why that kind of argument is a fallacy?

[1] https://en.wikipedia.org/wiki/Straw_man


> To where it makes sense from an engineering perspective. Yes, that's what makes good engineering: selecting a pragmatic point between two extremes.

You didn't answer the question though.

That is just assumed within my question.

The question is where do you draw the line.

Why is it ok for the autocomplete to be a separate program (and somehow that's good, because the autocomplete can crash without crashing the editor)? Why is it not ok for the syntax highlighter to be a separate process? Or the file viewer?

> You're trying to counter my point by making up an extreme version of what I said and then refuting this imaginary point. This kind of logical fallacy is called "straw manning"

I'm taking your basic premise and showing you that if you extrapolate it to its natural conclusion you will end up with an absurd situation.

This is not a fallacy.

Although my general feeling is, when you start accusing people of being fallacious, it's difficult to continue the discussion in good faith.


Hmm. I have the opposite impression - separate processes using IPC seem way overrated to me, especially for language servers. When you compare the IntelliJ architecture to VS Code's, it's hard to escape the feeling that LSP exists only because JavaScript isn't type safe or thread safe. In such a language you have no choice but to create a 'backend' of some sort that exchanges messages with the frontend. Yet that's not a great way to implement an IDE. The task is just fundamentally well suited to fast, memory safe, type safe, thread safe languages and very poorly suited to languages like JS and Rust.

Why is that?

1. An in-process type safe API can be evolved and iterated rapidly. There's no need to write informal English specs for everything because the API itself provides so much guidance. Actually, IntelliJ is rather notorious for its absence of JavaDocs though this reflects cultural decisions inside JetBrains itself more than anything intrinsic (they take "code should be self documenting" perhaps a bit too seriously). In contrast, the LSP spec is enormous and the document itself doesn't provide much assistance implementing it (the article discusses this somewhat).

2. A protocol spec doesn't give you any re-usable code. If you write an IntelliJ plugin there's tons of re-usable helper logic and you can easily customize other plugins. The whole thing is designed for on-the-fly presentation compilers. With LSP you may or may not have anything to base your work on.

3. An IDE is fundamentally a highly parallel mutable database, in which the data can be in arbitrarily invalid forms at any point and the user is allowed to change anything whilst many different analyses are running in parallel. Shared data structures are everything in this problem, yet the LSP model effectively forbids all shared state between backend and frontend. This feels like a constraint inherited from browsers - it arguably doesn't make things easier, it just replaces relatively simple and statically/dynamically analyzable locking protocols with complicated sync and reconciliation protocols. Both sides have to maintain their own copies of each other's data structures instead of re-using a single source of truth. This can get really messy, really fast. You're also constantly paying the costs and overheads of being multi-process: context switching (especially expensive on Windows), IPC copies, process overhead, serialization, etc.

4. A presentation compiler doesn't seem to benefit much from being written in Rust or C++. Developer workstations are fast and have plenty of RAM, so GCd/JITd languages don't seem like a particular problem here. Developer productivity on the other hand is at a premium. People want features. One reason JetBrains can add new languages and features so fast is that they're writing in relatively forgiving languages like Java and Kotlin, where you don't have to think too hard about ownership or memory management. Some thought is still needed - see the disposable model inside their IDEs - but you can more or less just go for it with a high rate of feature throughput.

You raise the question of how to kill hung operations. In practice that doesn't seem to be a common problem. I've been using IntelliJ with various languages for years and there have been many bugs, but I can't remember any time that a plugin hard-hung whilst analyzing. There are cases where plugin bugs happen but exceptions and GC mean the IDE can simply catch the errors and disable it. These days it can do that temporarily and simply reload the plugin on the fly. Other times e.g. if a pure static analysis crashes it just logs the error and uploads it for analysis. That particular analysis might not appear as a consequence but there are so many different optional quality-of-life analyses being done you often hardly notice.


>The task is just fundamentally well suited to fast, memory safe, type safe, thread safe languages and very poorly suited to languages like JS and Rust.

Huh? The first part of the sentence makes me think of Rust. It's fast, memory safe, type safe, and thread safe.


But JS isn't, so people look for other languages to fill the gaps. They seem to often settle on Rust, but it pays a relatively high productivity price to get those properties without garbage collection.

Is GC a problem in language analysis systems? It doesn't feel that way these days, and JVMs have pauseless GCs now anyway.


I don't think GC is an issue, pauses would usually not even be noticed by the user.

In terms of speed, as long as you don't choose Python I think you'll be fine.

I do think in terms of productivity, something like F#, that has GC, ML style syntax with piping and partial application, and a strict enough type system with Hindley-Milner type inference, would be the most productive.

My hold-up was mostly with the sentence I quoted, which seems to contradict itself.


I agree it was an ambiguous and poorly phrased sentence.


The net benefit of using the "separate processes using IPC" model lies in the fact that the language used is an implementation detail that editors/IDEs don't have to care about. Want to write your Rust Language Server in Rust? Fine. Want to write it in Kotlin? Also fine. Your editor is written in JS? Cool. Elisp also works. This wouldn't really work so well if LSPs were some sort of library. IntelliJ plugins are great (if they exist) but they are inherently limited to their own ecosystem, nobody outside of that benefits from that. LSP sure isn't perfect but it is a hell of a lot better than the situation we had before it. If IntelliJ works well for you then that option is always there and hopefully here to stay, all power to you. But not everybody wants that.


You can load code written in different languages into different address spaces, you know. Nothing stops Emacs loading a library written in Rust or indeed Java/Kotlin.

Microsoft didn't go that way and again it feels like maybe JS limitations are the cause. Calling into a native library is tough in JS because there's no standardized FFI (browsers wouldn't allow it) and you're going to end up blocking the one JS thread you've got. Native code is going to expect the host to be able to spin up threads and use them because that's a super reasonable assumption, but one V8 can't meet. So they want everything to look like a web server because that's what JavaScript is intended for. That pushes them down a particular path, but it's definitely worth stepping back and saying ... does this really make the most sense, overall?


> You can load code written in different languages into different address spaces, you know.

That's basically what separate processes are, so what's your point?

> Calling into a native library is tough in JS because there's no standardized FFI

No need for using anything standardized: VS Code brings its own custom browser engine – for better or for worse – so it's not limited to what any Web standards allow or forbid.

> you're going to end up blocking the one JS thread you've got

There are worker threads in JS.


Er, sorry, that was a brainfart. I meant you can load them into the same address space.

JS has workers but they aren't sharing memory, so they aren't really threads.


> Shared data structures are everything in this problem, yet the LSP model effectively forbids all shared state between backend and frontend.

The "shared state" in this context is IDE program text, and analyzer diagnostics. Copying those across address spaces and keeping them in sync is super fast, the overhead is too low to matter.


And auto-completion suggestions (which potentially means switching context back and forth on every keystroke), inline editor hints / styling / folding regions, whatever is being changed by refactorings, and so on.

Recall that most analysis is done async, so results arrive at different times. Even if you're not doing anything then, your IDE/backend will be context switching back and forth as stuff arrives to be rendered.

In the limit this degrades to the backend being the entire IDE and the frontend being just a translator between the OS I/O system and a stream of JSON rendering commands.


Agree with the analysis here!

The way I see it, there are two possible grand architectures:

* you either have a single, multi-language combine supporting any language. In a sense, LLVM/GCC but for front-ends

* or you have a bunch of independent servers

The first is clearly less code — there’s a lot to share. The second is embarrassingly parallel though. So, if you are a (single-threaded) company, you should go for the first approach. If you are a (multi-threaded) OSS community, the second approach has a shorter critical path.

Some second order effects:

1. While backends are more or less the same between languages and LLVM is a clear win, frontends tend to differ a lot. That’s the point of having different programming languages. So in practice, sharing IDE code is harder.

2. An interesting benefit of the separate-process architecture is that it forces you to have an async interface, which is important for responsive IDEs. But that’s an argument for separating back and front, not necessarily for sharding the back.


I'm not sure I understand. Are you saying that compilers/analyzers should be written in the same language in which the IDE is written? If not, then how are you going to benefit from an "in-process type safe API" and those shared data structures you mentioned?

This would mean that you'd have to write a compiler not just for each host language/source language combination, but for every IDE/source language combination, because Atom would have different internal data structures than VS Code.


No, but they should probably share the same runtime, have some level of interop and be garbage collected because that makes crossing the barrier between language backend and IDE frontend so much more efficient and productive.

Historically for IntelliJ that meant writing everything in Java. Nowadays plugins are written in many different languages. You could write plugins in Ruby or Python if you wanted to. They run on the JVM. You could even use C++ and get the same benefits because GraalVM can run C++ on the JVM itself, i.e. it will apply GC to the C++ parts so you don't have to worry about ownership across the interface. But that'd be an advanced move and I don't think anyone has tried that; for such big gaps in language they tend to fall back on the more conservative multi-process model again. Stuff like Sulong is too new to have made such an impact.

> Atom would have different internal data structures than VS Code.

LSP doesn't solve that. It either forces everyone to implement the LSP model and data structures, or you have a mismatch and translation anyway.


Just defer to the classical non-solution of writing everything in C. The kernel was written in C, so now everything else must be as well. You don't have a say in that, apparently.


That's because separate processes are not "plugins" at all, but part of an outside toolset that may or may not be supported by a thin "plugin"-like IDE component. The patterns involved are completely different; the separate-processes pattern was popularized by *nix, whereas plugins were always more common in Windows and Mac.


The concept of a "plugin" isn't restricted to dynamically linking shared libraries, but rather defined by the concept of having an independent host application, which can load additional functionality at runtime. It doesn't matter whether this is achieved through shared libraries, interpreted code, or separate processes.


So I would say rust-analyzer works quite well, while Rust's RLS is terrible. However, that seems to be a non-LSP issue. The F# Ionide plugin also works really well, and it's also LSP-based?

Most of the issues I have with VSCode plugins are that they don't report errors to the user properly. It's really hard to find out _why_ something is failing. So when everything is configured correctly, the LSP works great. When you have a misconfiguration, you need to play detective.

Plenty of plugins hang forever, fail silently, or they give you very generic error messages. But this seems to be a VSCode issue.


> But this seems to be a VSCode issue.

That’s a protocol issue! While LSP has a method to report a transient (edge-triggered) error, it doesn’t have a way to report a persistent (level-triggered) status. And to solve the configuration problem, you really want a “status” concept - you need an LSP health indicator which is either red or green, and which you can click on to get info about what the server thinks the project is. There’s no such feature in LSP at the moment.

https://github.com/microsoft/language-server-protocol/issues...
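
As an illustration, here is a sketch of what such a level-triggered status notification could look like. To be clear, this method is not part of the LSP spec today (that's the point of the linked issue); rust-analyzer ships something similar as an experimental extension:

    // Hypothetical, non-standard notification: a persistent health indicator
    // the client could render as a red/green status icon.
    const serverStatus = {
      jsonrpc: "2.0",
      method: "experimental/serverStatus", // not in the LSP spec
      params: {
        health: "error", // "ok" | "warning" | "error"
        message: "failed to load the workspace; check your configuration",
      },
    };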


The only "LSP" plugins that work well with VSCode in my experience are those developed by Microsoft (likely by the same team that developed VSCode).

But even then, not all of them are great. For example, the Python plugin is of questionable quality, but that is probably because Python does not lend itself well to intellisense, even when you add type hints to the picture.

> Most issues with the VSCode plugins I have is that they don't report errors to the user properly. It's really hard to find out _why_ something is failing. So when everything is configured correctly, the LSP works great. When you have a misconfiguration you need to play detective.

Good observation. This is not just a "feature" of LSPs. It's a feature of all distributed systems developed with the unix mindset.

For example, Docker suffers from the same problem. Someone gives you a Docker image, and it works great! For a while. But when something breaks, it really breaks, and it doesn't tell you why. It prints some superficial error message; one that does not help you understand what's going on.


>It's a feature of all distributed systems developed with the unix mindset.

That is also a good observation. It's hard to create a good UX with a system where you don't somehow handle all of the failures of the subsystem in a central manner.

My problem here with VSCode is that it often doesn't even expose the failure of the subsystem, essentially not even giving you the information you need to find out the issue.

If my corporate proxy fails, the extension store will say "XHR failed"; before, it used to just hang forever.

But as a user, what is XHR? Why does this hang forever now? There could be many reasons.

At least tell me that you had a failure while downloading something.


> That is also a good observation. It's hard to create a good UX with a system where you don't somehow handle all of the failures of the subsystem in a central manner.

It's one of the reasons why the Linux UX will always be subpar.

Apple's macOS is based on unix underpinnings, but they take ownership of the whole thing and this is why they can beat it into shape.


As someone who has worked on two of them (Paradox Interactive scripting and F#): I think both work well...


I've just dug through the Neovim ecosystem trying to upgrade my 5-year-old .vimrc, and I feel it has come a long way since LSP became a thing.

It can now actually act like a real IDE, as the editor knows about the code pretty well.

https://github.com/folke/trouble.nvim

https://github.com/tanvirtin/vgit.nvim

https://github.com/kosayoda/nvim-lightbulb

Some features look pretty close to what you get in JetBrains.

I can see some parts are kind of rough until you Google enough to fix/customize them to your needs, but if I take enough time, I might well feel like it's a decent replacement for a real IDE.

For starters, AstroNvim helped me figure out what the juicy plugins are these days.

It seems NvChad and LunarVim are preconfigured competitors to it.

I wonder if JetBrains one day gets left behind for not using the LSP ecosystem, but perhaps Fleet might implement it.


Can't recommend AstroNvim enough as a starting point for Nvim. It is the perfect bootstrap (though I would disable their integration with the scroll plugin, Neoscroll, and prefer the native scroll instead).


Exactly what I did. Not sure if people like that slow scrolling.


> And try to write more about how to implement language servers — I feel like there’s still not enough knowledge about this out there.

Yes, please! There is next to nothing about "How to write a language server using LSP". All I can find is "how to write a language server for VSCode".

   - Written by a neovim user



I think the idea is that those should be the same though. Like the move to standardise web extensions so a 'chrome' extension works in obscure browsers too.


There are N programming languages and M libraries (e.g., dates, file formats, databases, data analysis method X). Currently, each language (Go, Rust, Python, Julia) needs to reimplement each library from scratch, which is a huge effort and delays language readiness for users.

Maybe building another programming language can be accelerated by deriving its M libraries from another, using some kind of abstract libraries or translation methods, through a service similar to LSP.


Working on a subset of this with https://github.com/celtera/avendish - C++ reflection makes some aspects of this problem very easy to solve; while my lib focuses on media objects (audio / video / GPU processors with a control UI), the same concepts could likely be extended to a more general shared object ontology.


That’s what Deno is doing, IIRC! They rely heavily on design work that went into web standards and the Go standard library, and harness the open-source community to deliver the implementation.


That's in no way what he's suggesting, and it's something every language in creation has done: taking inspiration from existing languages.

He's talking about actual working code, something akin to universal libraries you could use from any language. Or maybe some auto-translation/hooking-up of libraries from another language, through some sort of "library protocol".


Bartek from Deno here, this is mostly true. The runtime heavily relies on the Web standards, while Go was a heavy inspiration for Deno's standard library (deno_std [0]) in the early days. Since then, we've been relying on Web standards even more and the standard library started seeing more modules added that were based on popular npm modules.

On another note: huge thanks for rust-analyzer; not only is it my main daily driver, but we also used it as a reference for the implementation of "deno lsp" - the language server that ships with Deno.

[0] https://github.com/denoland/deno_std


<3

I can't say I am a user of Deno today, but my feelings about Deno remind me of my feelings about Rust circa 2015: it's not clear if the thing will succeed, but, if it does, that would be a huge boon for the whole industry, as finally "better is better" would take over from "worse is better". Deno has all the potential to become the first really dependable scripting environment!

So, I wish you luck and hope one day I'll migrate my jekyll+asciidoctor blog to deno :-)


Batch-style tools, where you launch a tool with stdin input and get results back on stdout, still make up, I think, the majority of operations an IDE wants to run.

It's when you have a tool with ongoing async communication, which enables the kind of editing the author mentions, that LSP is really useful. Implementing a goofy one-off protocol for each tool is a pain.
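
Formatting is the classic batch case. A sketch in TypeScript (Node), assuming rustfmt is on the PATH (when given no file arguments it reads stdin and writes stdout):

    // Batch style: spawn the tool, feed it stdin, read stdout -- no protocol.
    import { spawnSync } from "node:child_process";

    const result = spawnSync("rustfmt", [], {
      input: 'fn main(){println!("hi");}',
      encoding: "utf8",
    });
    console.log(result.stdout); // the formatted source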


> But don’t make LSP itself a first class concept.

Why not? I’m interested in any concrete reasons why doing so would be a bad idea, why you should maintain independent extension points and bind LSP to them, rather than providing LSP-shaped extension points, as it were.


Several things:

_First_, a general clean-architecture consideration -- the protocol is the "view"; it shouldn't infect the model.

_Second_, LSP is a way to consume semantic info, but there might be others. A different protocol might come along, or you might want to embed the server as a library somewhere.

_Third_, LSP is language agnostic, so, to implement LSP, you have to dumb down the internal rich, language-specific representation to a language-agnostic one. The system will be more evolvable if such dumbing down happens near the edge of the system.

_Fourth_, LSP really just scratches the surface of what's possible. Language servers should push the boundaries of what's possible, both to deliver a better UX for a specific language and to pressure the protocol itself to implement more advanced features.

The design guideline we use in rust-analyzer is that we are building not for LSP/VS Code, but for a hypothetical, rust-specific perfect IDE whose frontend is capable of doing anything we want.
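
To make the third point concrete, a sketch of what "dumbing down near the edge" might look like. All names and types here are illustrative, not rust-analyzer's actual code:

    // Internal model: richer than anything LSP can express (byte offsets,
    // an interactive fix that can prompt the user, etc.).
    interface InternalDiagnostic {
      startOffset: number;
      endOffset: number;
      severity: "error" | "warning";
      message: string;
      interactiveFix?: () => void; // lost in translation to the protocol
    }

    // The adapter lives at the edge of the system: it lowers the rich type
    // to the LSP-shaped one and is the only place that knows about LSP.
    function toLspDiagnostic(
      d: InternalDiagnostic,
      offsetToPosition: (offset: number) => { line: number; character: number },
    ) {
      return {
        range: {
          start: offsetToPosition(d.startOffset),
          end: offsetToPosition(d.endOffset),
        },
        severity: d.severity === "error" ? 1 : 2, // LSP DiagnosticSeverity
        message: d.message,
      };
    }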


One thing I have difficulty understanding is that, given MS is (was?) the primary proponent of LSP, the spec was originally written with TS in mind, and VSCode has first-class support for LSP, why do community projects like typescript-language-server (along with a bunch of others, like the one by Sourcegraph) need to exist?

Why can't arbitrary LSP clients connect directly to tsserver (maintained by MS)?

[1] https://github.com/typescript-language-server/typescript-lan...


> Why can't arbitrary LSP clients connect directly to tsserver (maintained by MS) ?

As far as I'm informed, they can.

For example, NeoVim language client has a pretty wide range of officially supported servers [0], including tsserver.

[0] https://github.com/neovim/nvim-lspconfig/blob/master/doc/ser...


It connects to a community maintained lsp server that wraps tsserver https://github.com/neovim/nvim-lspconfig/blob/master/doc/ser...


Is it not possible to connect to tsserver directly? If so, do you have any information why?


TSServer predates LSP, IIRC.


Oh, so TSServer is not an LSP server?

That's why an arbitrary LSP client can't connect to it.


> Why can't arbitrary LSP clients connect directly to tsserver (maintained by MS) ?

Can you explain a bit more about why they can't connect?


Because tsserver (maintained by MS) is not an LSP server.

There is an open issue [1] that has been open since 2020.

[1] https://github.com/microsoft/TypeScript/issues/39459


tsserver isn't LSP compliant. IIRC they started with tsserver way before LSP was out, but haven't migrated to LSP since. I think they didn't migrate because:

- It's a big selling point for VSCode.

- It might be hard to migrate what they have to LSP; they may have implemented a lot of non-LSP features.

- Migrating to LSP could possibly delay new features because of the rewrite in tsserver.


I think the problem is that tsserver, for historical reasons, even though it is quite similar, still does NOT implement the LSP protocol. That's why you need another layer here.


To an extent, the thesis here (that M*N is not insurmountable) has a grain of truth, but it depends on the idea that an editor will support some specific protocol for that communication. From the article:

> Rather, a language should implement a server which speaks some protocol, an editor needs to implement language agnostic APIs for providing completions and such, and, if both the language and the editor are not esoteric, someone who is interested in both would just write a bit of glue code to bind the two together

Yes, this is the ideal world, but the first editor to actually go ahead and do that was VSCode, and the protocol they chose was LSP, so everyone else fell in line, because having a protocol that worked with one editor was at least a workable start.

Another example of this is syntax highlighting, which is definitely N*M -- people have written syntax highlighting plugins for every conceivable language for almost every editor, which always requires some manual patchwork. Editors are increasingly migrating over to tree-sitter for this support, which seems to have become the de facto standard, and as a long-time vim user and occasional syntax-highlighting maintainer, I think this cannot come soon enough.


rust-analyzer on VS Code is probably the best development experience I've ever had. It makes me want to gouge my eyes out whenever I have to use something else.


LSP is a great thing, one of the very few things I appreciate from Microsoft.


I've felt for a while that this is one of the most amazing advancements in my field in my lifetime. What's missing for me is an updated standard curriculum on building compilers; we shouldn't be teaching students how to just build batch compilers anymore. We need a standard set of algorithms that we use to create interactive compilers and IDE support, including refactoring.


I think LSP is great. I've used it in one of my side projects to quickly add support for VS Code. We took a more direct approach: instead of having a separate LSP server, our main program runs in LSP server mode. This made a lot of things, like distribution and interaction with the core functionality, simpler.

LSP also solves a big pain point by making it easy to build tooling when teams choose to implement external DSLs.


I recently switched my neovim config over to LSP from ncm. My experience is now much buggier and slower; features work some days and not others, syntax highlighting just stops working randomly, trash files get spewed all over my repositories that I have to .git/info/exclude, etc.

Not sure what the Right Direction is when it comes to text editing but this isn't it.


I have been using neovim native lsp for a couple months for TS, JS & Go and overall have been very happy with how well it works.

The issue with trash files is very weird, and I doubt it has to do with LSP integration.

Maybe tryout AstroNvim [1] which is a nice pre-integrated environment with a lot of sane defaults.

[1] https://github.com/AstroNvim/AstroNvim


The LSP daemons themselves create trash files, namely clangd.


Assuming you're talking about this ncm[0], are you aware that ncm is a "completion framework for neovim", which is different from an LSP? ncm is either an LSP client itself, or it is talking to neovim's internal LSP client to get completions from LSPs.

It's also important to note that LSPs almost always provide you more than auto completion (for example, go to definition, go to implementation, find references).

Do let me know if you have more questions, neovim and LSPs are my "daily driver" as a dev, so to speak.

[0] https://github.com/ncm2/ncm2


Yep I'm aware of what it is, I used it for years and years, thanks.

NCM's ancillary plugins that interacted with e.g. libclang were more what I was referring to, but I'd hoped the average reader could deduce that.

Those features of LSP I do not use, mainly because LSP is a buggy mess. Some files have wild syntax errors reported for them that don't exist when compiling, but not others. Half the time, the auto-completion simply doesn't do anything. Nothing pops up, nothing auto-completes, etc.

And now, just a few weeks after switching over, without changing a thing about my config, the arrow keys for selection have completely stopped working inexplicably.


So, synthesizing your reply here with your reply to @mjlbach, it seems to me like you are attributing to the whole LSP ecosystem (by saying things like "LSP is a buggy mess") (1) problems with treesitter, and (2) problems with clangd, one LSP server implementation.

I feel like your comments' utility to the conversation would be greatly improved by prefacing them with something like "in my experience with clangd, treesitter and neovim".

I say this because my experience with LSPs is the polar opposite of yours[0]. While I believe there's value to sharing our personal experiences, I think we have to be careful not to present our personal experience (especially with (neo)vim setups, which tend to vary wildly from one to the next) as representative of the whole ecosystem.

[0] I also have nothing but praise for most of the LSPs I've used (clangd, typescript-language-server, rust-analyzer, pyright, css-language-server, sumneko/lua-language-server, off the top of my head).


> My experience is now much buggier, slower

You'll have to file a bug report, because none of the built-in LSP stuff should cause any performance degradation.

> syntax highlighting just stops working randomly

This has nothing to do with LSP.

> files get spewn all over my repositories that I have to .git/info/exclude

Neovim does not automatically create files in your repositories.


> You'll have to file a bug report, because none of the built-in LSP stuff should cause any performance degradation.

That's a bold claim. Migrating from NCM + clangd integration is absolutely slower.

> This has nothing to do with LSP.

Sorry, you're right. Tree-sitter-based syntax highlighting is buggy and slow. My point still stands.

> Neovim does not automatically create files in your repositories.

No, you're right. Clangd does. I was commenting more on my experience, less Neovim's LSP behavior specifically.


> That's a bold claim. Migrating from NCM + clangd integration is absolutely slower.

You'll have to define slower (time to diagnostics, time to completion, are you really talking about the performance of nvim-cmp instead, etc.), preferably in a bug report with benchmarks. I've done several rounds of performance optimization since 0.5, and I haven't heard any complaints since 0.6, when I swapped out our JSON decoder.


I would just back up @mjlbach here and provide my data point as a neovim user: I have no complaints with regard to neovim's builtin LSP client, especially with regard to speed.


(1) WebStorm does a pretty good job as a JavaScript IDE as far as I'm concerned (it doesn't seem that you need TypeScript).

(2) Maybe it is better today, but historically Eclipse was a workable Java IDE while the "plugins" for other languages pretended to work but didn't really, at least not as well as the Java support.


I see that the days when people just sat down with their favorite code editor and wrote code are long gone.


I've argued with others over the utility of "Do What You Think I Meant" editors. I feel the difference is significant; the kiddies are trying to draw using pantographs and exoskeleton rigs, in terror of the possibilities they might explore had they not the artificial constraints on their tooling.


my favorite editor was the Turbo Pascal one! (then Delphi, then PHP, then PHP with unit tests and something like TDD/BDD, then entirely too much Bash, and then Python, and then TS, Scala, Rust)

I never learned to code without a superfast feedback loop. (maybe I've never learned to code!!!?)

oh, okay, there was a C++ project (a music recognizer - a Shazam clone), and I mostly hacked that together in vim until the compiler stopped complaining about move semantics and rvalue references.


Do any of the LSP-powered editors come close to the smartness of IntelliJ? This is a genuine question; the last time I checked (over a decade ago), nothing could really compete with IntelliJ, and everything else felt dumb.


For the 20% of the things you use 80% of the time (basic completion, goto definition, syntax highlighting), I'd say advanced LSP servers like rust-analyzer are pretty close to what IntelliJ provides.

But the heavy tail of 80% of the features which are used relatively rarely is very heavy. The biggest one there is the "change signature" refactor. LSP today can't really express that.
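
To illustrate the limitation: an LSP code action ultimately resolves to a single, pre-computed WorkspaceEdit, with no standard step where the server can first ask the user "which parameters, in what order?". A sketch of the shape (file and ranges made up):

    // What a code action boils down to: one fixed edit, no dialog.
    const codeAction = {
      title: "Inline variable",
      kind: "refactor.inline", // a standard CodeActionKind
      edit: {                  // a WorkspaceEdit
        changes: {
          "file:///src/main.rs": [{
            range: {
              start: { line: 3, character: 8 },
              end: { line: 3, character: 11 },
            },
            newText: "compute_total()",
          }],
        },
      },
    };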


Nice to know, thank you.


I was hoping this was an article about Lumpy Space Princess. I am disappointed.


It's worth noting that TypeScript/JS support in VSCode doesn't come from an LSP server. It's actually tangled with VSCode itself.


These are only hard problems in the Unix world. Visual Basic and Delphi always had really good auto-complete.


Ugh. This whole thing is exhausting because everything is so oversimplified (as is any conversation on the topic) and there isn’t a right answer.

You can’t compare the architecture of VSCode, vim, emacs, VS, IntelliJ, etc. and come to any conclusion.

They are completely different tools, built at different times, with different goals, by different (dis)organizations.

Emacs was a bunch of primitives that were wrapped in Lisp, and you could do anything. It was a day 1 goal. Ignoring the things that evolved to become normal in the Lisp on top is weird. It was extensible by design and completely open ended, with packages evolving to being part of the expected toolchain.

Vim took vi and bolted on some scripting interface that kept being pushed, adding extensibility by demand.

Now we have neovim embedding lua which is more emacs-like, and driving vim features as well. But it got to add higher level “primitives” because it’s a new implementation.

IDEs like VS, Borland X, Sun Studio, etc. were purpose built tools for a specific toolchain package. Only after the fall of traditional IDE/RAD environments did VS start trying to be an everything IDE.

Eclipse and NetBeans started as Java IDEs in Java, were all focused on architecture, and enabled plugins. People started making plugins to support all the languages (and then totally breaking the IDE by installing the wrong combo). As Eclipse, well, eclipsed NetBeans, you started seeing Eclipse "distros" of canned plug-in combos to avoid implosion. I know folks with an Eclipse install per codebase they work on.

IntelliJ got the architecture part from Eclipse, but stayed very hand curated like the old RADs and IDEs. You can install all the things in IntelliJ Ultimate or buy the more focused language specific ones (PyCharm, CLion, etc). The plugins are less flaky because they’re predominantly 1st party. The language specific ones make each toolchain ergonomic like a traditional IDE.

VSCode got the benefit of being the newest. Strip down below an “IDE” but above a text editor exposing primitives. Use web tech so the kids can make stuff because Java isn’t the lingua franca it once was. It could look back for a better sweet spot.

So whatever. Use what you find productive and comfortable.

I wander between probably 10 languages. I tend to have an editor and terminal side by side. It's comfortable for me. I prefer the editor+ experience more than the integrated IDE one. I also am generally mouse-averse. But that's me, and I'm not "right."

I like my neovim acting like VSCode more than VSCode acting like vim. But it’s because it matches what my hands do already. When I do !!fmt or whatever and it doesn’t work, it takes me out of my flow. And when editing remotely, I still get vi/vim just without bells and whistles so my hands still work. Many just never have that use case.

That comes at a cost of maintaining a crazy config for me vs clicking for plugins. It used to be 10-15 vim plugins. Now it’s more LSP + a few. But I revision control it and revisit every 6 months or so. That trade off’s worth it to me, maybe not to you.

Some folks still love their full IDEs with tighter integration with a toolchain and language-specific features ergonomically placed. They likely use Java/C#/C++ and have to dabble in html+js at times.

Use and make what you want. Make an LSP or don't. Integrate them into something or don't. All of the experiences described in the article still exist. No one has to use any of them. Follow your heart (or I guess your hands). Saying one is best is like saying one kind of shoe is best for everyone. They're tools, but ones we live in. Wear what's comfortable.

In the words of a song by Sheryl Crow that I don’t particularly like “If it makes you happy, then why the hell are you so sad?”


LSP is what made me make a full switch to Vim[0]. Neovim even made it part of its API out-of-the-box, and that's a perfectly reasonable move given how much benefit it brings.

[0] https://www.vimfromscratch.com/articles/vim-and-language-ser...


I might be a bit contrarian here, but

> In particular, add assist/code action/(bulb) as a first-class UX concept already. It’s the single most important UX innovation of IDEs, which is very old at this point. Its outright ridiculous that this isn’t a standard interface across all editors.

In the grand scheme of things, this particular super-local tactical optimisation seems like entirely the wrong problem to focus on, given all the inefficiencies that come with badly designed software. Whether I write

    public double Hours { get { return _seconds / 3600; } }
or

    public double Hours => _seconds / 3600;
is really not going to make a big difference to the quality of my software. Is this where we want to focus our attention?

The day I get a little pop-up that says

- "are you sure you need to invent your own time abstraction here?", or

- "this is a derived value that should only be used in presentation and not business logic or persistance", or

- "the customer has not been using the feature that this code is relevant for in six months, are you sure you want to spend time on this?"

then we can start to rave about the single most important UX innovations.


Interesting! So the problem for LSPs (or code tools like IDEs) becomes automating the evaluation of "one level up" abstractions, similar to the feedback given during code reviews.



