Creating an app is one thing. What I want to know is what's the update experience like 3 years later, when I haven't touched the code in forever and I'm getting a flood of Dependabot notifications about critical vulnerabilities?
- not having to update due to issues in dependencies of dependencies of dependencies
- not having to rewrite it all because the framework maintainer decided to shift to a new paradigm and no longer maintain the old version
- not having your massive framework generate a weird request flow that isn't defined in any RFC and can't integrate with industry standards
- not having to pay for specialised hosting because only one or two companies know how to properly host and scale a website built with your massive framework
- being able to integrate or understand actual web technologies because you don't have a massive framework abstracting it all from you
Some frameworks or libraries may have been designed with some of this in mind. If you want to scale and integrate with other stuff, pick wisely, because your whole code base will end up tied to that framework, especially if it's "full-stack". Though I have to admit the developer experience can be awesome.
* having code that is more than an order of magnitude smaller and even faster to build and execute.
* working with developers that can actually program and aren’t super afraid of state management, APIs, DOM, measuring things, test automation, and so much more. I really get tired of hearing about reinventing wheels by people who cannot create wheels or drive cars.
I can relate to working with devs who actually know about web technologies and aren't afraid to learn about stuff outside of their framework-provided comfort zone.
I'm in this phase right now. I used GatsbyJS and Netlify in my last projects and things became unmaintainable and difficult to upgrade after a few years. I'm in the process of doing a complete rewrite where I have more granular control over everything, regardless of framework / business corporatization.
Your vanilla app will have the same vulnerabilities, but you just won't know, because there are not thousands of people checking your code constantly. Not knowing, and not having vulnerabilities are two very different things.
I’d like you to elaborate on how an app with no external dependencies has the same kind of vulnerabilities as an app which pulls in literal thousands of external dependencies. Even if you still use Node/Bun/whatever as your core but then build your own app with no external dependencies, you’re going to greatly reduce your security risks.
This is if you’re using JS. If you’re building your application in something like Go, using only the standard library and writing all the JS you use yourself, then you’ll have almost no external vulnerabilities.
I know it’s common in software engineering to rely on others to be your security, but this has proven to be a terrible idea so many times by now. The best-known example is Log4j, but there have been many others. This is made even worse by how attacks are done today, which is largely automated. So if you use an extremely common technology which turns out to have vulnerabilities, you’re also far more likely to be attacked.
Front end is tricky, but it's less security-critical, since you shouldn't trust the client anyway.
I think Node may be a lot worse than this, but looking at two Django projects (one where I have really kept dependencies to a minimum and one I feel has too many dependencies) I get 9 and 74 dependencies respectively.
> I’d like you to elaborate on how an app with no external dependencies has the same kind of vulnerabilities as an app which pulls in literal thousands of external dependencies.
Because it is not vetted. These dependencies are run through multiple checking tools day by day. You will get automated alerts for dependencies, which you can then easily patch. There is an entire industry behind checking for vulnerabilities in public libraries. The best incentive for something is and has always been money.
You have none of this if you roll your own entirely.
> I know it’s common in software engineering to rely on others to be your security, but this has proven to be a terrible idea so many times by now.
This is a false equivalency. Log4j was found, publicly discussed and was subsequently patched.
That depends a lot on how complex, large and critical your app is. And on the skills of the builder and the maintainer.
The attack surface is considerably reduced if you do not rely on a large list of dependencies. Moreover, you get to know the ins and outs of each bit of your system _better_ (because you wrote it, and the whole surface has been walked on at least once).
Unless you are of specific interest to an attacker, not relying on external dependencies makes you less detectable, as your app may not behave as scanners expect it to.
It's a bit like comparing going on a trip by walking with one person you know well versus with 100 people: you still get to make the trip, but the delays, the risks, the provisions, the contingencies, and the consequences of an incident are not really in the same scope.
But for any large project involving more than 4-5 people, relying on no external dependencies may be a bit complex to manage in the long run.
> Moreover, you get to know the ins and outs of each bit of your system _better_ (because you wrote it, and the whole surface has been walked on at least once).
This only holds if you work only for yourself.
> Unless you are of specific interest to an attacker, not relying on external dependencies makes you less detectable, as your app may not behave as scanners expect it to.
Another argument can be made that you're investing too much valuable time into things that have been solved before - and likely better than a single person could manage.
> But for any large project involving more than 4-5 people, relying on no external dependencies may be a bit complex to manage in the long run.
In my experience, this is true for any project with more than one (1) developer, even if the developer might change in the future. Any junior front-end developer can be productive in a years-old Angular project. They might not be in a custom dependency-less project.
I think a vanilla project will have fewer vulnerabilities, not more. Most of the vulnerabilities in frameworks are in the very complicated build tooling or deeply nested dependencies. A vanilla project doesn’t have that, so there are entire classes of vulnerabilities that don’t occur there. Vanilla does come with a higher risk of XSS, but basically all you need is a templating function that does XSS defense like lit-html and you’re good to go.
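The idea fits in a few lines. Here's a minimal sketch in Python rather than JS (lit-html does the equivalent on the client; the render helper and its behavior here are my own illustration, not lit-html's API): escape every interpolated value by default.

    import html

    def render(template: str, **values) -> str:
        # Escape every interpolated value by default, so XSS defense
        # lives in one place instead of at every call site.
        safe = {k: html.escape(str(v)) for k, v in values.items()}
        return template.format(**safe)

    print(render("<li>{name}</li>", name="<script>alert(1)</script>"))
    # -> <li>&lt;script&gt;alert(1)&lt;/script&gt;</li>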
I suppose that you are not writing your db driver from scratch and you are using a driver that was at least written in the last 10 years, so it will support for sure prepared statements and you will use it, that means no possibility of SQL injections.
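To make that concrete, a sketch with the stdlib sqlite3 driver (any driver from the last decade exposes the same placeholder mechanism): the parameterized form sends the value separately from the SQL text, so it can never be parsed as SQL.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    user_input = "x'); DROP TABLE users; --"

    # Vulnerable: splices user input into the SQL text itself.
    #   conn.execute(f"INSERT INTO users VALUES ('{user_input}')")

    # Safe: the driver binds the value as data, never as SQL.
    conn.execute("INSERT INTO users VALUES (?)", (user_input,))
    print(conn.execute("SELECT name FROM users").fetchone())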
Having a very tiny app helps. In 30 lines of JS I couldn't implement a small data structure. But if you're building a front-facing app for consumers then public opinion is the judge of whether your app is emotionally delightful (more important than useful).
Technically, obfuscation is perfect until it's found. And many times it's never found (the same is true for passwords in general; you could argue a password is a form of obfuscation). For example, who would ever find a compressed file that masquerades as completely unrelated, harmless-looking plain text, made with a custom dictionary? I think it's more about how good the obfuscation is than whether it's used at all.
This has been disproven many times. Way back in 2010 it was demonstrated that any 14-character password with special characters could be guessed within 10 seconds using a rainbow table. Likewise, obfuscation only hides defects from people, not bots, which only serves to speed up compromise and slow down resolution.
This is not what the author was suggesting. The author is suggesting that the more people use an open source piece of code, the higher the chance it gets reviewed, which ultimately leads to better security.
Who checks dependencies other than the author of the library? The only time I check them is when they break, and that's not a good thing.
I see this as an “it’s not my job” type of argument.
Most of the time you just install and use. If I had infinite time, I’d do it because it’s fun but I don’t so I don’t.
If there were a trust chain and I knew for sure certain libraries were reviewed, I’d have peace of mind. Alas, that’s not the case, and we spend our days in back-burner paranoia or blissful ignorance.
This argument comes up super frequently. Yes, more people actually reading the source code is better for identifying security vulnerabilities, but that's almost never how it's either articulated or implied.
When most people make this argument, the suggestion is that popular software must be more secure because somebody would certainly have identified and reported the vulnerability. That makes several assumptions not qualified by evidence. In other words, it's wishful thinking.
As a case in point, when I reported my first V8 defect it was around the time of Node 4.4. Chrome had been out for several years at that point, with many millions of users. The defect I found was that V8 could not perform recursion using only the function name. WTF. The problem was missing test cases, not a lack of eyeballs.
Exactly! Why use a library when you can write everything from scratch yourself? That for sure won't have any bugs or security issues and will be super easy for the next dev to pick up.
I also follow this approach with my operating system. I'm tired of vulnerabilities so I'm writing my own.
Aw guys sorry. I used to be a virologist but now I am an expert on war and geopolitics in about a dozen countries. Gotta do what you can to keep up with the Twitterses. Best of luck with that whole medicine thing though!
Don’t forget to become an organic chemist. What does big pharma know about drug development and production? Plus, it means all my vaccines and medications are Organic and Homemade. Those words definitely mean “better” in all cases.
Going vanilla (assuming it's built with sufficient quality) will save you untold amounts of headache and cost. The bloated dependency stacks and unnecessary breaking feature churn in a lot of these libraries are just a waste of money and time for most organizations.
I mostly use Golang, but you could do Python or Node.js if you wanted. I would probably prefer Deno, since Node's HTTP functionality is a bit low-level.
Getting all the way to 0 dependencies can be tough depending on what you're building, and maybe not worth it. But it's a great feeling when it happens.
What's more important than how many dependencies you import is the quality of them, how well they fit your project (using a high percentage of the library's functionality is a good sign), and the total number of transitive dependencies.
One thing I do sometimes is use git submodules for dependencies. It's annoying to work with, but that friction helps me avoid importing too many things on a whim. It also encourages me to use dependencies that are self-contained.
> What I want to know is what's the update experience like 3 years later
Exactly. I've been there with vanilla js projects. It doesn't make any difference whether it's been written by me or by another engineer.
It's a nauseating experience. There are few if any conventions. You can't trust any APIs if you haven't read exactly what they do.
I will never understand this sort of purism on HN. For any medium-sized web app, by choosing a framework you move the mental load of maintaining and bike-shedding to thousands of other engineers, leaving you to do the actual work.
> For any medium-sized web app, by choosing a framework you move the mental load of maintaining and bike-shedding to thousands of other engineers,
Suppose you chose a framework. Suppose you were unlucky in your choice. Let's say you chose Angular.js in 2014... only to discover in 2016 that something called Angular 2 was coming out, that Angular.js was on its way to being discontinued, and that the migration story was horrible...
Or suppose you chose Backbone in 2012, and then by about 2015 it kind of died...
Or suppose you chose Meteor in 2013. It was quite popular then... until it wasn't.
Or suppose you chose Aurelia, which had its moment somewhere around 2015 or 2016, when Rob Eisenberg was promoting it, taking advantage of the dreaded Angular migration story and of a sneaky clause in the React license. Yeah, it's dead now.
And so on and so forth. At least what you write yourself, for the specific purposes of your application, lives as long as the application itself, and is as fresh as ever.
I was still writing and maintaining AngularJS projects in 2020, when I left my old job. It was a worse option than Angular 2.0, but it still worked fine. The documentation still existed.
I was also maintaining Backbone projects.
> And so on and so forth
The web space needed some time to settle down, I'll give you that. The web as we know it is really less than 20 years old. Technologies took a similar time to settle back when we chose between C, Fortran, Ada and Java (the new kid on the block at the time). If you were _really_ cool, you may also have looked into Python.
Engineers had similar discussions back then.
Nowadays, choose React, Angular or Vue and you will have no problems maintaining projects 20 years from now.
I tend to use git submodules to vendor into ./lib. It's rather clunky to work with, but once you know a couple of commands it works pretty well. As a bonus, GitHub Pages recursively clones submodules automatically. If you're using ES modules, you can host a site with no build step (e.g. GitHub Actions) directly from your main branch.
That repo is currently unmodifiable due to dependency deadlock/hell. As an experiment we tried re-implementing it as a vanilla web component app. The experience was immediately so pleasant that we committed to that course. We're almost finished. No regrets.
I started learning FastHTML, but somebody on Reddit mentioned htpy. In my opinion, htpy + FastAPI is an awesome combo. I like the way htpy handles declaring HTML components.
I don't quite get an abstraction like this, if in the end, both in our minds and through the [trans|com]piler, it's necessary to know the underlying abstraction in detail(?)
The benefit comes when you're generating chunks in Python: you're passing around Python objects rather than strings, string templates, etc.
It's also a bit odd to be in Python and then, just because you need to loop, have to write Jinja or other templates, which are basically different languages. It might be nicer to write the loop in Python, as in the sketch below.
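Roughly what that looks like with htpy (a sketch from memory of its docs, so treat the exact syntax as approximate): attributes go in the call, children in square brackets, and the loop is an ordinary generator expression.

    from htpy import div, h1, li, ul

    def user_list(users: list[str]):
        # Plain Python objects all the way down: the "template" is an
        # expression, and the loop is a generator, not a Jinja {% for %}.
        return div("#users")[
            h1["Users"],
            ul[(li[name] for name in users)],
        ]

    print(user_list(["ada", "grace"]))  # str() renders the tree to HTML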
I've used FastAPI, but haven't done a lot with Starlette directly. If you are building a full-stack app, I can imagine the integration between FastAPI and Pydantic makes it easier to work with the data you might want to render in the HTML you generate using htpy?
htpy is just server-side rendering of HTML. Your routes are returning strings instead of structured data, so from the perspective of responses you're not going to be using Pydantic at all. That doesn't stop you from using it to validate objects you're passing around in your server, but I personally wouldn't, because Pydantic can have a pretty hefty memory footprint. I've seen over-reliance on Pydantic lead to plenty of OOMKilled errors.
It's a bit different for requests, though. FastAPI will let you define your request schema (application/json or application/x-www-form-urlencoded) and validate it using Pydantic, but Starlette doesn't do that out of the box. It's trivial to implement, though, and if it were me I would probably do that rather than deal with FastAPI's inflexibility.
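For example, a sketch of the hand-rolled version (hypothetical endpoint, assuming Pydantic v2's model_validate): one try/except does the step FastAPI would otherwise do for you.

    from pydantic import BaseModel, ValidationError
    from starlette.applications import Starlette
    from starlette.responses import JSONResponse
    from starlette.routing import Route

    class SignupForm(BaseModel):
        email: str
        age: int

    async def signup(request):
        try:
            form = SignupForm.model_validate(await request.json())
        except ValidationError as e:
            # Return just the serializable bits of each error.
            errors = [{"loc": err["loc"], "msg": err["msg"]} for err in e.errors()]
            return JSONResponse({"errors": errors}, status_code=422)
        return JSONResponse({"ok": True, "email": form.email})

    app = Starlette(routes=[Route("/signup", signup, methods=["POST"])])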
I have a feeling the Python Template Industrial Complex (which includes Jinja, tag(anothertag("text")) DSLs, GvR's pyxl3, and the Kivy UI language) will all go away if Python ever gets something as convenient as JSX.
Building simple CRUD apps is often a single code-generation command in Phoenix/Rails/Laravel, and adding common features like Auth, Queues, Emails, File Uploads, etc. is similar.
The downside is that this is a stateful, monolithic approach that requires a server running 24x7 and can break without some effort to cache and reduce load on the database. These are also often memory-hungry frameworks.
The tradeoff for productivity is worth it in my view, for the vast majority of cases where it's just a small team of 1-3 developers.
Yes, and most often we don’t fully understand the problem without partially solving it, which is perfect for these monolithic, batteries-included frameworks.
I find building apps in Django, with its prescribed conventions to box me in, helps tremendously to stop me from overthinking and lets me just experiment with the problem space.
I’ve even started using Django for apps that aren’t web apps, just because it has whatever I will eventually need, whether it’s database, auth, caching, admin portal, tools for building out CLIs, it’s all there.
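As an illustration of the CLI part, a custom management command is just a class; this sketch assumes a hypothetical app layout (yourapp/management/commands/report.py) and runs as python manage.py report:

    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = "Print a user count"

        def add_arguments(self, parser):
            parser.add_argument("--active-only", action="store_true")

        def handle(self, *args, **options):
            # Imported here so the app registry is ready when it runs.
            from django.contrib.auth.models import User

            qs = User.objects.all()
            if options["active_only"]:
                qs = qs.filter(is_active=True)
            self.stdout.write(self.style.SUCCESS(f"{qs.count()} users"))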
> What are people doing where rewriting tens of thousands of dev hours of work from scratch makes more sense than spending money on servers?
1) "spend money on servers" is a boring solution. This is no good if you are a career software engineer and your career, salary and prestige within the company depends on writing code.
2) Thanks to the cloud, server/infrastructure is now priced with ~100x margins, so "spend money on servers" is a significant investment, with these (stupid) moves to serverless etc. being seen as a cost-saving measure (which, just like the cloud itself supposedly being cheaper, will never actually materialize).
I also agree with the sentiment, although two "but..."s did come to mind so I'll play Devil's Advocate:
1: Does this contribute to the inefficiency and bloat we see on the web and in applications nowadays? Of course everyone complains about Discord and Teams and other Electron apps (which aren't a direct comparison), but one I deal with regularly is the Microsoft Power BI Gateway application, which allows access to on-prem data for reports and automations. I'm sure it does a little more than just establish connections to Azure and send data, but it's a 672MB download! That's larger than the ISO for Windows XP. Throwing more hardware at a problem becomes less effective when the application has fundamental inefficiencies.
2: Although server hardware is affordable compared to Western salaries, much of the world has far less purchasing power, and server hardware prices aren't as regional as labor costs. So some developers may have more time than money for more silicon. I haven't run any numbers on this, and it doesn't mean that rolling your own "everything" is worthwhile.
Do people still use them for full production systems? We use them a bit for ancillary things but TBH, if you have some k8s or similar solution, it's maybe not worth avoiding a standard container deployment environment that everyone knows.
I inherited some lambdas on my team and the amount of effort I have to go through to:
- make them testable locally
- planning/QA-ing YAML configs with the devops team, because they only grant me read-only access to their precious over-engineered Helm chart stack; they don't even offer to run my unit tests in their pipeline for me
- a painful debugging process, because everything that touches a Lambda is another AWS service config somewhere
I honestly don’t know why anyone wastes their time. We will be deprecating these AWS Lambdas for a traditional API in the next version of our app. Serverless is a garbage way to deploy code and is designed to tax you / charge you fees at every turn. It is for people who want to deploy poorly-thought-out code, rewrite it later, and explain the bill later.
Yes, I inherited a Lambda codebase too, and it is an absolute knot: Lambdas sending requests to other Lambdas, Lambdas sending messages and other Lambdas reading the messages, Lambdas reading and disregarding messages due to poorly-thought-out queues, etc.
I remember at one job, they were talking about the overall architecture of the code at the company, and when I asked how I could run it on my computer, they said "well, you can just run it". But I pushed: how can I run this whole thing on my computer so it interacts with everything else? It was met with silence.
I can understand having to stub out external calls to vendors and clients but this is ridiculous. There is no local story.
Oh yeah, I have a Lambda that needs another Lambda to complete before running. The first is on a CloudWatch event interval, so the data in the second can be stale, but the one saving grace of the situation is that no one cares enough to make me fix it, not even me.
I worked on a system built on FaaS for a few months, and it was (already when I arrived) one of the more well-tested systems I ever worked with, so it is clearly doable.
I took the system from dev to prod and much of it was boring in the very best sense of boring (also helped that I had a good team, although they were new to it as well).
I had one major gripe with it (and everything Azure) afterwards:
On dev there are absolutely no guarantees: components might go down for half a workday and they don't care.
And when you go to production, the prices go absolutely through the roof (IIRC ~$9000/year for the most basic message broker I could configure that wouldn't risk being offline for half a day, and every component is like this).
So while it was really cool to work on a cloud-native system, if someone ever asks me to design one for them, that design will come with a price tag for what it will cost to take it to production.
I wouldn't equate the Lambda UX with "serverless" at large. I work on a serverless system that runs the same code you upload (e.g., Python). You write it as a traditional API, then upload it to the cloud, and you're done.
One thing that makes this possible is that "orchestration" is embedded in your code, via a library (https://github.com/dbos-inc/dbos-transact-py). With Lambdas you need Step Functions, which are not exactly easy to test locally.
There are a bunch of gotchas outside of the UX that make them just as awful. For example, if you need secrets you have to find a way to cache them so that aws.secretsmanager doesn't ding your bill on every request. Depending on how your devops team requires you to load secrets, this can make the code worse in operation and review (a caching sketch follows below).
But the main point why serverless is garbage is that the old stack does everything better.
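On the secrets point above: the cache can be as simple as a memoized helper around boto3 (a sketch; get_secret is hypothetical, and in production you'd want a TTL so rotated secrets get picked up):

    from functools import lru_cache

    import boto3

    @lru_cache(maxsize=None)
    def get_secret(secret_id: str) -> str:
        # Cached for the lifetime of the warm Lambda container, so
        # Secrets Manager bills once per container, not once per request.
        client = boto3.client("secretsmanager")
        return client.get_secret_value(SecretId=secret_id)["SecretString"]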
Yes. SST [1] uses Lambdas heavily but makes them more seamless and less visible: just the place your code runs.
I’ve also found Azure Container Apps to hit the right balance. It’s Kubernetes under the hood, which you don’t have to mess with at all, except that it can use KEDA [2] scaling rules to scale your containers to zero, then scale up with any of the supported KEDA scalers, like when a message hits a queue.
Except that when you scale to zero, you get a 23+ second cold start time on .NET apps. Google Cloud Run pulls some black magic to get ~3 second cold starts on .NET apps, and ~500ms for Golang/Python/native apps.
I really enjoy Laravel currently. It's just fun that I can focus on my app instead of the stuff that's just tedious. Also, not relying on some 3rd-party auth is a huge bonus for me.
As someone who has written a few experimental apps with Phoenix, with and without LiveView, and later had to deal with many inscrutable errors when attempting to upgrade from Phoenix 1.6 to 1.7 with basically no help whatsoever around the web, putting it next to Rails and Laravel is kinda laughable. HN-darling-that-will-bite-you-in-the-ass alert! Be warned. Use something boring instead.
It has its compromises, but it's great for just building stuff, with UI updates streamed to the client, no JS (or as much as you want), and no extra API building just for the sake of your SPA. Note that I'm not talking about Blazor WASM.
If you're interested in working as a developer for corporations outside of the SF bubble (e.g. the other 80% that use Windows instead of macOS) it's worth checking out, especially for internal corporate stuff.
We use Go (with templates) and HTMX for internal apps in an enterprise which is completely tied into Microsoft. It’s a much better experience than Blazor. I think that any productive language (think: what you would likely have used Python for) with templates that can be mixed with HTMX is easily the best experience you’re going to have writing internal enterprise applications. We use Go because we’re slowly replacing our C# and TS backends with Go, but you can achieve the same with many techs. Probably even C# and (ironically) Web Forms.
It obviously has some limitations, and if you need complicated role-based access control you’re going to want to use something else. But for 95% of your use cases it’s soooo easy to build an app with JS-like interactivity and server-side functionality with HTMX and templates. The major disadvantage is that it introduces multiple frontend techs, as you do not want to use it on anything facing customers / investors / whatever on the internet.
As someone who hasn't written C# regularly in over a decade, but has done Go every now and then recently: why would you replace C# with Go? It sounds like it would be a step down in maintainability without much advantage.
Long-term C# dev here who's used Golang on a few projects. I really want to love Go with HTMX + templ, with its amazing speed and GC, but I find it sort of weird/quirky and just plain tedious in use.
LINQ in C# is so nice, and lots of little C# features (maybe too many) in recent years have made it quite nice for daily use. With AOT, I definitely prefer C# the language over Golang. I do sort of loathe ASP.NET/MVC and especially the Blazor stuff. We desperately need a better web framework than ASP, but nothing will ever gain enough traction because of the MS dominance. Microsoft the sprawling corp never fails to disappoint, but damn, the .NET framework team does do some awesome work.
That said, I'm instead putting future efforts into Python because, let's be honest, uv/FastAPI/FastHTML are more than fast enough for nearly every single project I've ever worked on.
We work with solar data, which involves a myriad of data formats and delivery systems, mainly because solar inverter engineers didn’t think anyone would put inverters directly on the internet. It even said so in big red block letters in one of their manuals. Anyway, basically the entire industry decided to never read the manuals and put the inverters directly on the internet anyway.
Which means we’ve collected data with various services, especially because it used to be done by different parts of the business: some of it is in Power Apps, some in Python, some in C#, some in TypeScript and some in C. We want to lessen that, and Go fits our purpose better than C# because of how async/await has been inherently flawed since its creation. A big part of this is developer-based: because of how it was designed, it’s just very easy to fuck up. It’s very leaky, you need to propagate everything, it’s very easy to create deadlocks, and you can’t cancel tasks in any meaningful way. On the technical side it’s just terrible for our use case; we’ve cut our Azure cost immensely with goroutines compared to C#. The biggest cost saving has obviously been versus Power Apps, followed by Python, but because of the amount of data we handle, C# is also very expensive.
On the philosophical side of things, we prefer explicit error handling and not having to create a class to have a function.
So a lot of it isn’t technical, and some of it is.
Tasks specifically let you never worry about the kind of race conditions you get with WaitGroups and channels in unary cases: after all, you `await` the result immediately or later on, and even if you do it multiple times with a Task<T>, it is foolproof and will not let you make a synchronization mistake, because there is no synchronization for the user to do.
In addition, all meaningfully cancellable operations accept a CancellationToken, which is way easier than goroutine context passing; some overloads also accept a timeout instead, if that's expected to be the main use.
You could make a case about preference for different language syntax and expressiveness, different quality of an SDK from a particular vendor, but the technical arguments made here just do not correspond to reality.
You're not really providing much argumentation to back up your claims. Anyone can search-engine benchmarks and get the results they want. Just look here:
For our specific use case C#'s TPL was OK, but it comes with massive overhead compared to goroutines. We can have tens of thousands of concurrent goroutines running at the same time with very little CPU usage (which is primarily what we pay for). It’s not that you can’t use async/await tasks to do the same, but they use a lot more CPU and as such are less cost-effective for us.
As for blocking and having meaningful (again, meaningful is the keyword) cancellation, I encourage you to look to Microsoft and understand the limitations outlined in their learning articles, starting here:
> You're not really providing much argumentation to back up your claims. Anyone can search-engine benchmarks and get the results they want.
This is only possible if you completely ignored the text in my comment and have never used C# and Go concurrency primitives side by side to evaluate them on their merits. The race-condition risks in Go are very real, and so are the issues w.r.t. closing channels, passing nils, etc. The _much higher_ cost of spawning channels and goroutines is equally real, and you can measure it yourself. Both are intended to be spawned for longer-lived rather than short-lived scenarios. But even then, you will run out of memory much faster if you have thousands of goroutines waiting on a timer channel versus tasks waiting on a PeriodicTimer. This does not make either choice strictly better than the other; both have tradeoffs that must be acknowledged (like the pros and cons of implicit suspension, stackless vs stackful, etc.).
You can also take the code snippets from the gist I linked and replicate the data with current versions of Go and .NET on your own machine. This is the perfect case to illustrate how much memory will be consumed by massive amounts of mostly-suspended, periodically-calling-a-service Tasks and goroutines. But you did not, and you have a history of constructing arguments in bad faith rather than actually discussing the strengths and weaknesses of a particular language.
Not sure what the purpose is of the link for tasks (which does not talk about cancellation; this one does: [1]) and Go's stack implementation. I'm very well aware of both, which is the driving reason to respond here and salvage the discussion from otherwise wrong claims.
Lastly, the data from 2019 is irrelevant because it uses a completely different threadpool implementation and is likely also measuring the one-time cost of JIT compilation. .NET has significantly evolved since then and provides much better native compilation [2][3] than Go today, as well as much higher-throughput [3][4] code w.r.t. GC and JIT for long-running services.
You realize you can change the default stack size of goroutines, right? I mean, I’m just going to ignore anything you say going forward, as this stalking of yours is beyond silly, but at least get things right.
The world does not revolve around you; that's an infantile outlook :)
I simply see a discussion that might be interesting and respond to comments related to topics I feel proficient at.
In any case, how many deployments realistically adjust their goroutine stack size? And what are the implications of starting with a <500B-1KiB default stack (given that an asynchronously yielding task will use as little as <100B)?
I am not convinced. I have tried using Blazor and it has been painful at best.
I know of at least one internal tool at Microsoft (Azure) still using Razor Pages. I don't know how much I'm allowed to talk about this, but as far as I can tell most teams at Microsoft use React. You should too. This is not a fight worth fighting. For internal applications, stay with ASP.NET Razor Pages; it doesn't need anything fancy. Don't run after a fad that even teams within Microsoft refuse to chase.
Even amongst those numbers, .NET itself is a pretty small fraction of the overall dev community. I'm not sure why you're posturing as though it's some huge majority.
> As an anecdote, I had an easier time using Cursor + Claude to build the app in FastAPI and Next.js, and a harder time with FastHTML and SvelteKit. Since FastHTML is barely a couple weeks old (at the time of writing), its code and docs likely haven't made their way into the training data of most LLMs yet, explaining their limited proficiency with FastHTML.
In Cursor's settings, click Features and then add the documentation URL for each framework or library you are using, so they can be indexed.
It's best to do this regardless of how well-trained a model is on certain code; it helps immensely.
FastHTML has markdown formatted docs which can be used by Claude, just add .md to the end of the URL:
You can find Markdown docs for most libraries on GitHub, which you can have Cursor index.
I suspect that with the increased use of LLM-aware code editors, single-page markdown-formatted documentation will become more common (even better would be if Cursor hosted an external vector db with up-to-date docs and tutorials for all the most popular libraries and frameworks).
I'm the maintainer of an open source CLI. The documentation site [1] is a static HTML site generated by Jekyll from a bunch of Markdown files. Is there any advice for an LLM-ready single-page doc?
- Should I just aggregate the Markdown files and that's all? Something like the sketch below?
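A minimal sketch of what I mean, assuming the Markdown sources live under docs/ (file names are hypothetical):

    from pathlib import Path

    # Concatenate every docs page into one llms.txt-style file,
    # with a heading marking where each page starts.
    out = []
    for md in sorted(Path("docs").rglob("*.md")):
        out.append(f"\n\n# {md.relative_to('docs')}\n\n{md.read_text()}")

    Path("llms-full.txt").write_text("".join(out))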
Just a tip, one way to cut down on the Next.js and SvelteKit code would be to use the “actions” feature they both provide rather than manually creating API routes.
In SvelteKit (don’t know about Next.js), this also makes for a nice, gracefully degrading JS-enabled form experience for the user, while always also working for those running script-less. You’ll automatically get partial validation, loading states, error handling, and so on, as opposed to API routes where you’ll need to do that on your own.
I had success in a project using Sapper, a precursor to SvelteKit, with an early version of this called server routes. With server routes, you could define a JSON API whose data could also be statically exported.
On the other hand, I’ve seen server actions abused in large production apps leading to lots of sprawl when handling similar data patterns. Everything becomes an action and it becomes difficult to update the system cohesively or get data externally.
I do feel there could possibly be a bit more magic with Server Actions to generate actual API endpoints. I'm a fan of FastAPI's handling of pulling path and body variables into an endpoint; typing the params and then accessing them in Next.js, on the other hand, feels more burdensome. I could very well just not be using it correctly.
Good points on code assistants affecting language/framework usage. Myself, I've found that Copilot will happily suggest usages that were deprecated 10 years ago and waste a couple of hours of my time.
Doesn't surprise me. Copilot often suggests nonexistent functions in libraries I've written, code it has seen many times. Though it's occasionally useful, it probably doesn't save me any time overall. If work didn't pay for it, I certainly wouldn't use it.
I’m really interested in using FastHTML, but it feels like it’s still baking and not actually production-ready.
For example, the sample projects store passwords in plaintext, if they even allow login, which most don’t.
I really wish there were a way to use the FastHTML fast tags in FastAPI, so that I could use their cool HTML generator but have robust and reliable deployment and auth, and possibly migrate to FastHTML once it’s a more mature product.
First, just because an example app stores the password in plain text doesn’t mean you have to do the same. Hashing a password isn’t really that complicated.
I wrote about how to implement a simple login system in FastHTML [1]
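Even without pulling in bcrypt or argon2, the stdlib gets you a reasonable baseline (a sketch, not FastHTML-specific):

    import hashlib
    import hmac
    import secrets

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # Random per-user salt + slow key derivation (PBKDF2-SHA256).
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)  # constant-time compare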
Second, you can use fast tags in other Python projects. Import them from fastcore and call the "to_xml()" function on the fast tags; this converts them to HTML.
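i.e. something like this sketch (assuming your fastcore version exports the tag factories and to_xml as described; check the import path):

    from fastcore.xml import Div, H1, P, to_xml

    page = Div(H1("Report"), P("Generated server-side"))
    print(to_xml(page))  # a plain HTML string, usable from any framework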
The reason the FastHTML bare-bones from-scratch auth example is bare-bones is to show you the minimal pieces that need to be built if you want to do auth from scratch. That doesn't mean doing it from scratch is a good idea -- it's shown for folks that need something fully custom.
Just want to say thanks for introducing me to HTMX. I know a bit of React, but as a more backend/data-focused person, HTMX seems the better way to go, with way fewer breaking changes. These frontend frameworks are constantly changing the way they manage state, and everything I've written has to be rewritten every few years. It's an awesome idea to just serve HTML, where there aren't constant breaking changes.
I think most projects have very bare-bones examples. I just need to be able to find a course on Udemy with a fully production-ready example: managed databases where a service is backing up the DB so I don't lose data, auth pathways for creating users and updating passwords, code that securely segregates things by user, scalable deployment, and queries sanitized against injection attacks. That way I can get an idea of how someone else would do it and have the confidence that I'm not building something that is easily hacked.
I think FastHTML will quickly get there, because it's built on Starlette and Uvicorn, which many production systems already use, but I'm coming from React and I'm not super familiar with these.
First of all, this app is too limited to differentiate which stack is better and faster.
Second, it should cover completely different programming languages like C#, Ruby, PHP, JavaScript, Python, Go, etc. Hopefully I will do that one day.
> Given React and Next’s wider use and longer history, it’s likely most LLMs are trained on more React and Next code than Svelte code. Ditto for FastHTML. This could lead to coding assistants being more effective when working with and suggesting code for established frameworks such as FastAPI, React, and Next.js.
Yes, but also more stale code from old versions, using patterns the community has, for various reasons, moved on from. I ran into a lot of trouble with deprecated patterns while teaching myself React last year with assistants on the side. Patterns from React 17 and prior versions kept coming up all the time.
Remix introduced me to data loaders and actions. When I later used SvelteKit, the transition was easier.
Next.js didn't offer an ideal solution to the problem of data loading back then, despite being a meta-framework. That's why we have apps that use useEffect to fetch data.
Solid.js is just perfect in every way. Docs (or navigating them) could be better though.
IMO the biggest hurdle in frameworks like Svelte or Next isn't the framework -- it's the language.
This type of app is a prime use case for something like LiveView or a Go framework. Just today I had the most marvelous experience using Tailscale's ACP: I changed the ACL and it saved instantly. It was so fast I had to make sure it wasn't optimistic UI, and sure enough, 78ms round trip for the request.
Even if it was a FE-heavy app using SQLite in the browser, I wouldn't have used JavaScript. After months of Gleam, I am spoiled.
The days of JavaScript-because-we-have-to are thankfully over. JS is now only for when the flexibility is required.
The reason I use JS is definitely not flexibility; it's to enhance the usability and interactivity of my app. Even for my Python and Go web apps, I still inline JS to achieve the functionality I want. Examples: client-side routing, pinning the scroll to the bottom, mutating classList, etc.
Yeah, that ability to improve usability beyond the basics is what he means by flexibility I think.
I agree with that sentiment. I can now build decent, working websites in something like Streamlit without touching JS (even if JS is being generated behind the scenes).
Nothing can beat the simplicity of Next.js using the old pages router. Try it: all you need for a starting point is a package.json (react, react-dom and next as dependencies) and a pages directory with a single index.tsx; no need to even install TypeScript or manually create tsconfig.json. And you can do multi-page or single-page apps or static generation. All other options can only do either a web server or a static generator, not both.
And who said one is required? Not me and not the article of this post. Libraries like htmx enable placing most code on the backend, a saner and more stable place for code to live.
> at best it can serve rendered tsx or bundle.
Being able to leverage TSX/JSX on the backend sounds like a great way to develop composable, organized applications. Doing so with fewer dependencies is icing on the cake.
> it doesn't serve the bundle
How so? It has a super fast HTTP/websocket server. I'm confused.
> split it by pages
Code splitting is an arbitrary requirement that adds unnecessary complexity to most projects. Very few applications need that. The very article shows 2 solutions without code splitting.
I'd rather aim to write leaner applications than deal with their bloat with solutions like code splitting.
The point is that Next can bootstrap an app with only 2 files (your entry point and package.json), as I mentioned.
Of course you can write code on top of Bun to turn it into a mini web framework, but it is nowhere near as lean as the minimal Next.js setup I've just described, in terms of how much code you have to write.
I shouldn't have to explain this, but that just runs TypeScript. It doesn't even render TSX (you have to write the code to call renderToString or whatever), and it does not create the minimal HTML scaffolding behind the scenes that Next.js does for you.
Here is a one-liner:
$ bun add react{,-dom} next; mkdir pages;echo "export default () => 'Hello world'" >> pages/index.tsx; bun next dev
Note that if you replace "Hello world" with a stateful component (like a dummy counter), it would still work, because, again, Next.js serves the JS bundle and not just HTML.
With your `bun run pages/index.tsx`, it doesn't even work with minimal TSX like that (obviously; it's for running a script). And after you add a bunch of code to render and serve the HTML string, it still CANNOT serve a stateful React component.
Converting JSX to HTML with Bun requires only the react{,-dom} dependency.
And renderToString() is awesome in the sense that it is obvious code. Much less magic.
Don't get me wrong. Frameworks have their uses. But Bun offers routing and jsx rendering out of the box, making frameworks much less desirable for many use cases.
When a framework version is deprecated, there's work to be done to convert to the new version. OTOH there's much less work if you don't use a framework.
It gives you building blocks to hack together your own router and SSR. Not the same as offering things "out of the box".
You talk about framework deprecation while using non-standard APIs of an alternative runtime. The risk is the same. If anything, Bun is much less popular and more likely to die than Next.js.
Don't even get me started on htmx, something revolving around some wishful idea from 20 years ago about how the web should work. It never picked up in all that time, so why now?
> Don't even get me started on htmx, something revolving around some wishful idea from 20 years ago about how the web should work. It never picked up in all that time, so why now?
I can talk about my own experience with this "just fetch and replace HTML instead of having a fat js frontend".
Before htmx or its ancestor intercooler even existed, I built a system using this methodology. It fetches and replaces HTML using AJAX/JS.
It has been over a decade and it is still easy to maintain and still getting new features. It has over 900 database tables and over 1000 different pages full of business logic, to give you an idea of the size. It is dead simple to deploy. It has no build step. The code is KISS but organized in layers, and it can withstand regular developer churn just fine. New devs are productive in week 1, sometimes even day 2.
So yes, I'll get you started with htmx because, it feels dirty I know, but it scales and produces long-lasting, maintainable software.
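The whole pattern fits in a screenful. A sketch with FastAPI and htmx (endpoint names are made up): the server returns HTML fragments and htmx swaps them into the page.

    from fastapi import FastAPI
    from fastapi.responses import HTMLResponse

    app = FastAPI()

    @app.get("/", response_class=HTMLResponse)
    def index():
        # The button fetches HTML and swaps it in: no JSON, no client state.
        return """
        <script src="https://unpkg.com/htmx.org"></script>
        <button hx-get="/rows" hx-target="#rows">Load rows</button>
        <ul id="rows"></ul>
        """

    @app.get("/rows", response_class=HTMLResponse)
    def rows():
        return "".join(f"<li>Row {i}</li>" for i in range(5))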
> Bun is much less popular and more likely to die than Nextjs.
Can't agree. Bun is a one-of-a-kind, innovative tool in many respects. The package management alone is already leagues above yarn, pnpm and npm. It has other strong fronts such as built-in testing, bundling, and more.
Meanwhile, Next.js has some merits, but it is just one of a thousand JS web frameworks. There's SvelteKit, Nuxt, Angular. Next got popular because Vercel keeps pouring their $300M of VC funds into marketing and YouTubers.
Next has no big differentiation compared to other frameworks.
> most code on the backend, a saner and more stable place for code to live
Sorry, a very flawed opinion. Code should live where it runs: code that runs on the server should live on the backend, and code that runs in the browser should live on the frontend. You're clearly making an arbitrary engineering rule based on what is more comfortable to you personally.
csv_data = [",".join(map(str, tbl.columns_dict))]
csv_data += [",".join(map(str, row.values())) for row in tbl()]
Sigh. He's not "building the app". This code is wrong: it's not escaping the CSV properly, so embedded commas and similar control characters will result in gibberish output.
I'm just so fed up with these JS+HTML SPA framework demos where everybody thinks stringly-typed programming is the only way to do things, where, instead of using a proper library that actually encodes/decodes the file format correctly, there's this kind of quick & dirty script snippet that is basically broken under all but ideal conditions. ("It worked, once, on my computer, with toy data. Job done!")
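For the record, the correct version is barely longer with the stdlib csv module (a sketch reusing the names from the quoted snippet):

    import csv
    import io

    buf = io.StringIO()
    writer = csv.writer(buf)           # quoting/escaping per RFC 4180
    writer.writerow(tbl.columns_dict)  # header row, same names as above
    writer.writerows(row.values() for row in tbl())
    csv_data = buf.getvalue()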
I get it, this exercise is about comparing the essentials of different frameworks. But that comparison ought to include things that matter, such as correct handling of Accept-Language, safe escaping of data, sorting on date columns, virtualising lists too big to handle in one go, etc... That's what actually matters, that's what takes actual time when getting something to production. Not the folder structure or file naming conventions.
End-to-end correctness matters and should be a key deciding factor when choosing a web framework. Some frameworks make this easy, letting people fall into the pit of success; others make it very difficult, and it takes eternal vigilance starting the day after the Hello World demo.
E.g.: Most of his demos use mixed languages (Python+JavaScript). This architecture leads to madness. There are endless subtle differences between how any two languages represent data, what they can and can't express, and whether they agree on things like data validation regex syntax or not.
A proper comparison would cover issues such as internationalisation, async, data streaming[1], dependency injection, client and server validation that's kept in sync, authentication and authorization through the layers, etc...
[1] Sure, he's got a toy use case of CSV files, but CSV files can be huge! Most JS frameworks and REST APIs will process such tabular data as giant blobs of JSON in a single response. They can't stream, they can't handle paging or windowing, or if they can, you have to "wire that up" on the client, which gets complicated rapidly. Do that! That's the demo that interests me, not an incorrect script that will run out of memory and crash the server if you process a file big enough to be useful. Too much work for a quick demo? Well, yes, that's my point! It shouldn't be.
End-to-end correctness does not matter in a comparison. If every app implements the same functionality, then that is a valid comparison. Even if that functionality is incorrect.
If five apps implement an authentication flow, 4 relying on the framework and one building it from scratch because the framework doesn't provide one, then surely that should be factored into any comparisons.
If a framework comes with an opinionated templating system whereas another requires me to come up with a way to have nested templating (and whatever else I might require), then again, that should be a factor in my comparison.
Creating a CSV correctly has nothing at all to do with choosing a framework. In Python there's a CSV module in the stdlib that you can use -- but talking about that in a discussion of frameworks tells you nothing about the frameworks, and would simply be a distraction.
It’s only a distraction if you can safely assume that code that runs on the server will never need a client-side component to match. Sure you can pull in random Python packages… but you can’t run them in the browser.
I'll just have vanilla thanks.