The tricky part is not doing the capability-based OS, it's getting adoption.

Linux is good enough, so a slightly better OS is not going to cut it.


I agree with the assessment that Apple has by far the best platform to ship features.

That being said, if people spend all their time interacting with LLMs for nearly everything, which is the direction we seem to be going in, what locks them into the Apple ecosystem?


> If you [..] do not fully understand the nuances and limitations of SQLite DO NOT USE IT.

So, what are the limitations compared to Postgres?


It doesn't scale out, only up, which is a fairly big limitation.

So if you have a single DB server running SQLite and your server goes down, well, your shit is down and there is no failover. I.e. no built-in replication or clustering.

It doesn't support multiple simultaneous writes (like Postgres, SQL Server, etc.).

No stored procedures or functions.

There is no real client/server architecture, i.e. if you have applications on multiple servers which need access to the DB, then you're in a bad place. The database has to be embedded along with the application.


>It doesn't scale out, only up, which is a fairly big limitation.

This is the main limitation. That being said, you can scale out with projections if event sourcing is your thing.

>It doesn't support multiple simultaneous writes (like Postgres, SQL Server, etc.).

A process with a single writer tends to be faster because it reduces contention. You only need MVCC in Postgres because of the network.
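A minimal sketch of that single-writer pattern in Python (stdlib sqlite3; the schema and queue setup are made up for illustration): all writes funnel through one dedicated thread, so there's nothing to contend on.

    import queue
    import sqlite3
    import threading

    write_queue = queue.Queue()

    def writer_loop(db_path):
        # One connection on one thread: the only writer for this database.
        conn = sqlite3.connect(db_path)
        conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the single writer
        conn.execute("CREATE TABLE IF NOT EXISTS events (payload TEXT)")
        while True:
            sql, params = write_queue.get()
            conn.execute(sql, params)
            conn.commit()
            write_queue.task_done()

    threading.Thread(target=writer_loop, args=("app.db",), daemon=True).start()

    # The rest of the app enqueues writes instead of opening competing write connections.
    write_queue.put(("INSERT INTO events (payload) VALUES (?)", ("hello",)))
    write_queue.join()  # wait for the writer thread to flush before exiting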

What's even better is you can query across multiple databases seamlessly with ATTACH (https://sqlite.org/lang_attach.html). So it's very easy to split databases (eg: session database, database per company, etc). Each database can have its own writer, eliminating contention between data that doesn't need atomic transactions across databases.
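A rough Python sketch of that (the file and table names are invented): two separate database files, one query across both via ATTACH.

    import sqlite3

    # Two separate database files, e.g. one for sessions and one for company data.
    sessions = sqlite3.connect("sessions.db")
    sessions.execute("CREATE TABLE IF NOT EXISTS sessions (token TEXT, company_id INTEGER)")
    sessions.commit()

    company = sqlite3.connect("company.db")
    company.execute("CREATE TABLE IF NOT EXISTS companies (id INTEGER, name TEXT)")
    company.commit()
    company.close()

    # ATTACH makes the second file visible under an alias, so one query spans both.
    sessions.execute("ATTACH DATABASE 'company.db' AS company")
    rows = sessions.execute("""
        SELECT s.token, c.name
        FROM sessions AS s
        JOIN company.companies AS c ON c.id = s.company_id
    """).fetchall()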

>No stored procedures or functions.

It's an embedded database; the whole thing is effectively a stored procedure. You can even extend SQLite with your own custom functions in your application programming language while it's running (https://sqlite.org/appfunc.html).
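For example, in Python the stdlib sqlite3 module exposes this as create_function (the slugify function below is just a made-up example):

    import sqlite3

    def slugify(text):
        return text.lower().replace(" ", "-")

    conn = sqlite3.connect(":memory:")
    # Register an ordinary Python function as a SQL function on this connection.
    conn.create_function("slugify", 1, slugify)

    print(conn.execute("SELECT slugify('Hello World')").fetchone()[0])  # hello-world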

In terms of access by multiple applications, if it's read-only access you can create read replicas/projections with Litestream etc.


I would say that customer diversity may be a marker of past resilience, and likely results in a moat.

Customer diversity says nothing about current or future resilience.


> in this case justifiably so

Oh please. What LLMs are doing now was complete and utter science fiction just 10 years ago (2015).


In what way do you consider that to be the case? IBM's Watson defeated actual human champions in Jeopardy in 2011. Both Walmart and McDonald's notably made large investments shortly after that in custom-developed AI based on Watson for business modeling, and lots of other major corporations did similar things. Yes, subsidizing it for the masses is nice, but given how impressive Watson's technology was 15 years ago, I have a hard time seeing how today's generative AI is science fiction. I'm not sure the SOTA models could even win Jeopardy today. Watson only hallucinated facts for one answer.


When Watson did that, everyone initially was very impressed, but later it felt more like it was just a slightly better search engine.

LLMs screw up a lot, sure, but Watson couldn't do code reviews, or help me learn a foreign language by critiquing my use of articles, declension, and idiom, nor could it create an SVG of a pelican riding a bicycle, nor help millions of bored kids cheat on their homework by writing entire essays for them.


This.

I’m under the impression that people who are still saying LLMs are unimpressive might just be not using them correctly/effectively.

Or as Primeagen says: “skill issue”


Why would the public care what was possible in 2015? They see the results from 2023-2025 and aren't impressed, just like Sutskever.


What exactly are they doing? I've seen a lot of hype but not much real change. It's like a different way to google for answers and some code generation tossed in, but it's not like LLMs are folding my laundry or mowing my lawn. They seem to be good at putting graphic artists out of work mainly because the public abides the miserable slop produced.


My team's velocity is up around 50% because of AI coding assistants.


Not really.

Any fool could have anticipated the eventual result of transformer architecture if pursued to its maximum viable form.

What is impressive is the massive scale of data collection and compute resources rolled out, and the amount of money pouring into all this.

But 10 years ago, spammers were building simple little bots with markov chains to evade filters because their outputs sounded plausibly human enough. Not hard to see how a more advanced version of that could produce more useful outputs.


Any fool could have seen self driving cars coming in 2022. But that didn't happen. And still hasn't happened. But if it did happen, it would be easy to say:

"Any fool could have seen this coming in 2012 if they were paying attention to vision model improvements"

Hindsight is 20/20.


Everyone who lives in the snow belt understands that unless a self-driving car can navigate icy, snow-covered roads better than humans can, it's a non-starter. And the car can't just "pull over because it's too dangerous"; that doesn't work at all.


That works fine. Self driving doesn’t need to be everything for all conditions everywhere.

Give me reliable and safe self driving for Interstate highways in moderate to good weather conditions and I would be very happy. Get better incrementally from there.

I live solidly in the snow belt.

Autopilot for planes works in this manner too. Theoretically a modern airliner could autofly from takeoff to landing entirely autonomously at this point, but they do not; autopilots decrease pilot workload.

If you want the full robotaxi panacea everywhere at all times in all conditions? Sure. None of us are likely to see that in our lifetime.


Btw that’s basically already here with http://comma.ai.


We definitely have self driving cars, people just want to move the goal posts constantly.


We do not. We have much better cruise control and some rudimentary geofenced autonomy. In our lifetimes, neither you nor I will be able to drive in a car that, based on deep learning training on a corpus of real world generated data, goes wherever we want it to whenever we want it to, autonomously.


This works now. We just don't dare let it. Self-driving cars are a political problem, not (just) a technical one.


Posting this from the backseat of a Waymo driving me from point A to B on its own.


They said "goes wherever we want it to whenever we want it to", Waymo is geofenced unless I missed some big news.

(That said, I disagree with them saying "In our lifetimes, neither you nor I", that's much too strong a claim)


I guess I'm worse than a fool then, because I thought it was totally impossible 10 years ago.


> Is there even a case where more RAM is not really better, except for its cost?

RAM uses power.


It also consumes more physical space. /s


Not really /s, since physical space is a limited resource in e.g. laptops.


As soon as a platform gives an app control over the full screen, harmful apps are possible.

See for example Apple detecting if a user is typing on a keyboard while in a fullscreen website, and then blocking the website. Yes, it's as crazy as it sounds.


They have thousands of highly paid employees working on open source; they are spending at least $1 billion per year.


They want to profit from the IPO of OpenAI. Private investors get a free 20%-30% gain not available to retail investors.


> They also mastered the world of DC lobbying, successfully outmaneuvering Boeing and Lockheed’s attempts to use anti-trust regulations to shut the European entrant out of the US market.

No amount of engineering can compete with good old bribes.

