I guess you can just start by loading a first batch, add an intersection observer to the last 3 elements (if you have 3 lanes), and then when one of those intersects you simply start fetching the next batch.
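A minimal sketch of that idea in TypeScript, assuming a `.lane` selector and a `loadNextBatch()` helper (both made up for illustration):

```typescript
// Observe the last element of each lane; when any of them scrolls into view,
// fetch the next batch and re-attach the observer to the new tails.
let loading = false;

const observer = new IntersectionObserver(async (entries) => {
  if (loading || !entries.some((e) => e.isIntersecting)) return;
  loading = true;
  observer.disconnect();   // stop watching the old tails
  await loadNextBatch();   // append new items to the lanes
  observeLaneTails();      // watch the new last elements
  loading = false;
});

function observeLaneTails() {
  document.querySelectorAll<HTMLElement>(".lane").forEach((lane) => {
    const tail = lane.lastElementChild;
    if (tail) observer.observe(tail);
  });
}

declare function loadNextBatch(): Promise<void>; // hypothetical fetch/append helper
observeLaneTails();
```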
But why not make it "per GB Logs ingested" or "per triggered job" (or both)? These should reflect the points where GitHub also has costs - but not per minute.
ZIRP ended, its remaining monopoly money has been burnt through, and the projected economy is looking bleak. We're now in the phase where everything that can be monetized is being monetized in every way that can be managed.
Free tiers evaporate. Fees appear everywhere. Ads appear everywhere, even where it was implied they wouldn't. The lemons must be squeezed.
And because everybody of relevance is in that mode, there's little competitive pressure to provide a specific rationale for a specific scheme. For the next few years, that's all the justification there needs to be.
I thought that "Bitbucket" was in your original post and you added only your edit message to say that it was, in fact, Gitlab and not Bitbucket that added cost for self-hosted runners.
I initially felt a bit offended when I saw this. Then I thought about it and at the end of the day there's a decent amount of infrastructure that goes into displaying the build information, updating it, scanning for secrets and redacting, etc.
I don't know if it's worth the amount they are targeting, but it's definitely not zero either.
You would think the fat monthly per-seat license fee we also pay would be enough to cover the costs of checks notes reading some data from the DB and hosting JSON APIs and webpages.
Yeah, I think we’re seeing some fallout from how much developer infrastructure was built out during the era where VCs were subsidizing everything, similar to how a lot of younger people complained about delivery charges going up when they had to pay the full cost. Unfortunately, now a lot of the competition is gone so there isn’t much room to negotiate or try alternate pricing models.
I was a fan of NextJS in the pages router era. You knew exactly where the line was between server and client code and it was pretty easy to keep track of that. Then I began a new project and wanted to try out the app router, and I hated it. So many (to me common) things were just not possible because the code can run on the client and on the server, so headers might not always be available, and it was just pure confusion about what's running where.
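For context, the contrast being described looks roughly like this (a hedged sketch; the prop names are made up):

```typescript
// Pages router: the server/client line is explicit. getServerSideProps only
// ever runs on the server, so reading request headers is unambiguous.
import type { GetServerSideProps } from "next";

export const getServerSideProps: GetServerSideProps = async ({ req }) => {
  const locale = req.headers["accept-language"] ?? "en";
  return { props: { locale } };
};

// App router: the same component code may or may not have access to request
// headers (via "next/headers") depending on where it ends up rendering,
// which is exactly the ambiguity described above.
```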
I think we (the Next.js user community) need to organize and either convince Vercel to announce official support of the Pages router forever (or at least indefinitely, and stop posturing it as a deprecated-ish thing), or else fork Next.js and maintain the stable version of it that so many of us enjoyed. Every time Next comes up I see a ton of comments like this, everyone I talk to says this, and I almost never hear anyone say they like the App Router (and this is a pretty contrarian site, so if they existed I’d expect to see them here).
I would highly recommend just checking out TanStack Router/Start instead. It fills a different niche, with a slightly different approach, that the Next.js app router just hasn't prioritized enabling anymore.
What app router has become has its ideal uses, but if you explicitly preferred the DX of the pages router, you might enjoy TanStack Router/Start even more.
TanStack anything has breaking changes constantly, and they all exist in perpetual alpha states. It has also jumped on the RSC train, with the same complexity pitfalls.
Some libs in the stack are great, but they were made before the RSC fad.
If you're using an alpha library then that's on you for not expecting breaking changes. They have plenty of 1.0+ libraries that do not receive any breaking changes between major releases and have remained stable for well over a year.
Also, you're just wrong? You literally cannot serve RSCs _at all_ even in TanStack Start yet. Even when support for them is added it will be opt-in for only certain kinds of RPC functions, and they will work slightly differently than they do in the Next.js app router (where they are the default everywhere). RPC != RSC.
Plus you can always stick to using TanStack Router exclusively (zero server at all) and you never will even have to worry about anything to do with RSCs...
OK, I am personally surprised that anyone likes the Pages router? Pages routing has all the benefits (simple to get started the first time) and all the downsides (maintainability of larger projects goes to hell) of having your routing determined by where things sit in the file system.
I don't care about having things simple to get started the first time, because soon I will have to start things a second or third time. If I have a little bit more complexity to get things started because routing is handled by code and not filesystem placement then I will pretty quickly develop templates to handle this, and in the end it will be easier to get things started the nth time than it is with the simple version.
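For what it's worth, the "routing handled by code" style looks roughly like this (react-router shown purely as an example; TanStack Router is similar, and the component names are made up):

```typescript
// routes.ts: routes are declared in code, not inferred from file placement,
// so the same structure can be templated and reused across projects.
import { createBrowserRouter } from "react-router-dom";
import { Home, ProjectList, ProjectDetail } from "./pages"; // hypothetical components

export const router = createBrowserRouter([
  { path: "/", Component: Home },
  { path: "/projects", Component: ProjectList },
  { path: "/projects/:projectId", Component: ProjectDetail },
]);
```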
Do I like the app router? No. Vercel does a poor job on at least two things, routing and building (server code etc. can be considered a subset of the routing problem), but saying I dislike the app router shouldn't be read as praise for the pages router.
Remix 2 is beautiful in its abstractions. The thing with the Next.js roadmap is that it is tightly coupled with Vercel's financial incentives: more complexity and more server-side code execution mean more $$$ for them. I don't see the community being able to change much, just like how useContextSelector was deprioritized by the React core team.
Align early on with a framework's values and take a close look at its funders' incentives.
I've been using React since its initial release; I think both RSC and App Router are great, and things are better than ever.
It's the first stack that allows me to avoid REST or GraphQL endpoints by default, which was the main source of frontend overhead before RSC. Previously I had to make choices on how to organize API, which GraphQL client to choose (and none of them are perfect), how to optimize routes and waterfalls, etc. Now I just write exactly what I mean, with the very minimal set of external helper libs (nuqs and next-safe-action), and the framework matches my mental model of where I want to get very well.
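Roughly what that looks like in practice (a sketch, with a made-up `db` client standing in for whatever data layer you use):

```tsx
// app/orders/page.tsx: an async Server Component that reads straight from the
// data layer, with no REST/GraphQL endpoint in between.
import { db } from "@/lib/db"; // hypothetical database client

export default async function OrdersPage() {
  const orders = await db.order.findMany({ where: { status: "open" } });

  return (
    <ul>
      {orders.map((order) => (
        <li key={order.id}>
          {order.customer}: {order.total}
        </li>
      ))}
    </ul>
  );
}
```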
Anti-React and anti-Next.js bias on HN is something that confuses me a lot; for many other topics here I feel pretty aligned with the crowd opinion on things, but not on this.
You still need API routes for stuff like data-heavy async dropdowns, or anything else that's hard to express as a pure URL -> HTML, but it cuts down the number of routes you need by 90% or more.
Some of the anti-Next sentiment might come from things like SolidStart and TanStack Start existing, which can do similar things but without the whole "you've used state without marking this as a client component, so I will stop everything" factor of Next.js.
Not to mention middleware, and being able to access the incoming request wherever you like.
Personally, I love App Router: it reminds me of the Meta monorepos, where everything related to a certain domain is kept in the same directory. For example, anything related to user login/creation/deletion might be kept in the /app/users directory, etc.
But I really, really do not like React Server Components as they work today. I think it's probably better to strip them out in favor of just a route.ts file in the directory, rather than the actions files with "use server" and all the associated complexity.
Technically, you can build apps like that using App Router by just not having "use server" anywhere! But it's an annoying, sometimes quite dangerous footgun to have all the associated baggage there waiting for an exploit... The underlying code is there even if you aren't using it.
I think my ideal setup would be:
1. route.ts for RESTful routes
2. actions/SOME_FORM_NAME.ts for built-in form parsing + handling. Those files can only expose a POST, and are basically a named route file that has form data parsing. There's no auto-RPC, it's just an HTTP handler that accepts form data at the named path.
Route files are no different than the pages router that preceded them, except they sit in a different filepath. They're not React components, and definitely not React Server Components. They're not even tsx/jsx files, which should hint at the fact that they're not components! They just declare ordinary HTTP endpoints.
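A minimal sketch of that shape (the paths and the `getUsers` helper are made up; the POST shows the plain form-parsing style described in the list above):

```typescript
// app/api/users/route.ts: a plain route handler in the App Router. No React,
// no "use server", just an ordinary HTTP endpoint.
import { NextResponse } from "next/server";
import { getUsers } from "@/lib/users"; // hypothetical data-access helper

export async function GET() {
  const users = await getUsers();
  return NextResponse.json(users);
}

export async function POST(request: Request) {
  const form = await request.formData(); // classic HTML form data, no RPC magic
  const name = form.get("name");
  if (typeof name !== "string" || name.length === 0) {
    return NextResponse.json({ error: "name is required" }, { status: 400 });
  }
  // createUser would live alongside getUsers; omitted here.
  return NextResponse.json({ ok: true }, { status: 201 });
}
```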
And they're what everyone here is talking about: the vulnerabilities were all in the action/use server codepaths. I suppose the clearest thing I could have said is that I like App Router + route files, but I dislike the magic RPC system: IMO React should simplify to JSON+HTTP and forms+HTTP, rather than a novel RPC system that doesn't interoperate with anything else and is much more difficult to secure.
I find myself just wanting to go all the way back to SPAs—no more server-side rendering at all. The arguments about performance, time to first paint, and whatever else we're supposed to care about just don't seem to matter on any projects I've worked on.
Vercel has become a merchant of complexity, as DHH likes to say.
I think the context matters here: for SEO-heavy marketing pages I still see Google only executing a full browser-based crawl for a subset of pages, so SSR matters for the remainder.
Htmx does full server rendering and it works beautifully. Everything is RESTful: endpoints are resources, you GET (HTML) and POST (HTTP forms) on well-defined routes, and it works with any backend. Performance, including time to interactive and user-device battery life, is great.
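A hedged sketch of that shape in TypeScript (Express used only for illustration; the routes and markup are made up): endpoints are plain resources returning HTML fragments that htmx swaps into the page.

```typescript
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: true })); // parse classic HTML form posts

const contacts: string[] = ["Ada", "Grace"];
const renderList = () =>
  `<ul id="contacts">${contacts.map((c) => `<li>${c}</li>`).join("")}</ul>`;

// GET returns an HTML fragment; htmx swaps it into the target element.
app.get("/contacts", (_req, res) => {
  res.send(renderList());
});

// POST accepts a regular form submission, e.g. triggered by
// <form hx-post="/contacts" hx-target="#contacts">, and returns the updated fragment.
app.post("/contacts", (req, res) => {
  contacts.push(String(req.body.name ?? ""));
  res.send(renderList());
});

app.listen(3000);
```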
Probably an unpopular take, but I really think Vercel has lost the plot. I don't know what happened to the company internally. But, it feels like the first few, early, iterations of Next were great, and then it all started progressively turning into slop from a design perspective.
An example of this is filesystem routing. Started off great, but now most Next projects look like the blast radius of a shell script gone terribly wrong.
There's also a(n in)famous GitHub response from one of the maintainers backwards-rationalising tech debt and accidental complexity as necessary. They're clearly smart, but the feeling I got from reading that comment was that they developed Stockholm syndrome towards their own codebase.
I pretty much dumped a side project that was using next over the new router. It's so much more convoluted, way too many limitations. Who even really wants to make database queries in front end code? That's sketchy as heck.
Resume keeps one session alive. Grov gives you knowledge from all past sessions (yours + your team's), filtered to what's relevant right now. It's really built for teams using coding agents to build software together.
Example: My cofounder debugged the payment flow last week. When I touch payment code today, his reasoning is automatically injected into my session.
We are currently using GitHub Actions for all our CI tasks and I hate it. Yes, the marketplace is nice and there are a lot of utility actions which make life easier, but they all come with the issues the post highlights. Additionally, testing Actions locally is a nightmare. I know that act exists, but for us it wasn't working most of the time. Also, the whole environment management is kinda odd to me, and the fact that using an environment (which then allows access to the secrets set in that environment) always creates a new deployment is just annoying [1]
I guess the best solution is to just write custom scripts in whatever language one prefers and just call those from the CI runner. Probably missing out on some fancy user interfaces but at least we'd no longer be completely locked into GHA...
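A minimal sketch of that approach for a Node/TypeScript project (all names are made up): the workflow shrinks to a single step that runs something like `npx tsx scripts/ci.ts`, and the same script runs locally without act.

```typescript
// scripts/ci.ts: the CI logic lives in an ordinary script the runner just invokes,
// so it can be executed and debugged locally exactly as it runs in CI.
import { execSync } from "node:child_process";

function run(cmd: string) {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: "inherit" }); // throws (and fails the job) on a non-zero exit
}

run("npm ci");
run("npm run lint");
run("npm test");
run("npm run build");
```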