Hacker News | matwood's comments

Which can be solved with insider trading enforcement of political figures and their associates. This SEC already does this to 'normal' people.

It can be. But it's quite hard, and frequently unsuccessful. The concept of "insider trading" sounds clear and unambiguous until you examine it, but turns out to be quite ripe for loophole-finding exercises once you do.

Yep. And there are a lot of people using LLMs for both coding and learning/searching who are doing exactly that.

One of my favorite things is describing a bug to an LLM and asking it to find possible causes. It's helped track something down many times, even if I ultimately coded the fix.


When I've used agents with TS, failing tests due to typing seem to help the agent get to the correct solution. Maybe it's not required though.

What do you mean by "failing tests", are you talking about runtime code? TypeScript erases all types at compile time, so these wouldn't affect tests. Unless you meant "compile errors" instead.
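A tiny TypeScript example of the distinction (the function and values are made up):

    // Hypothetical function, invented purely for illustration.
    function addTax(subtotal: number, tax: number): number {
      return subtotal + tax;
    }

    // Without the directive below, `tsc` rejects this call ("string is
    // not assignable to number"). That's a compile error, not a failing test.
    // @ts-expect-error deliberately passing the wrong type
    const total = addTax("100", 5);

    // The types are erased from the emitted JavaScript, so this line runs
    // and prints "1005" (string concatenation). Only a test that asserts
    // on the value would notice; the types are long gone by then.
    console.log(total);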

I've noticed LLMs just slap on "as any" to solve compile errors in TypeScript code; maybe this is common in the training data. I frequently have to call this out in code review. In many cases the assertion wasn't even necessary, but it has now turned a variable into "any", which can cause downstream or future problems.
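A made-up sketch of the pattern (hypothetical names, not from any real codebase):

    interface User {
      id: number;
      name: string;
    }

    function greet(user: User): string {
      return `Hello, ${user.name.toUpperCase()}`;
    }

    // In this invented example the backend actually sends `full_name`.
    const fromApi = { id: 1, full_name: "Ada" };

    // `greet(fromApi)` is a compile error that points at the real mismatch.
    // The "fix" I keep seeing silences the compiler instead:
    greet(fromApi as any);
    // ...and the bug surfaces downstream, at runtime:
    // TypeError: Cannot read properties of undefined (reading 'toUpperCase')

The compile error was pointing at the real mismatch; the assertion just moves the failure somewhere harder to trace.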


My code has tests and the LLM wrote more tests.

I tell the LLM to include typing on any new code.

The agent is running the test harness and checking results.


I think you're spot on. OpenAI alone has committed to spending $1.4T on various hardware/DCs. They have nowhere near that amount of money and when pushed Altman gets defensive.

https://techcrunch.com/2025/11/02/sam-altman-says-enough-to-...


As others mentioned, $5T isn't money available to NVDA. It could leverage it to buy a TPU company in an all-stock deal though.

The bigger issue is that entering a 'race' implies a race to the bottom.

I've noted this before, but one of NVDA's biggest risks is that its primary customers are also technical, also make hardware, also have money, and clearly see NVDA's margin (70% gross!!, 50%+ profit) as something they want to eliminate. Google was first to get there (not a surprise), but Meta is also working on its own hardware along with Amazon.
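Rough, purely illustrative arithmetic; the $30k sticker price below is an assumption, not a quoted figure:

    // Purely illustrative: assume a $30,000 accelerator and roughly the
    // margins above.
    const price = 30_000;
    const grossMargin = 0.7;
    const cogs = Math.round(price * (1 - grossMargin));  // ~$9,000 to build
    const marginPaid = price - cogs;                      // ~$21,000 to NVDA

    // Every ~$1 of silicon cost carries ~$2.33 of margin on top, which is
    // exactly the money an in-house TPU program is trying to keep.
    console.log(`cost to build ~$${cogs}, margin paid ~$${marginPaid}`);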

This isn't a doom post for NVDA the company, but its stock price is riding a knife's edge. Any margin or growth contraction will not be a good day for their stock or the S&P.


Making the hardware is actually the easy part. Everyone and their uncle with some cash has tried by now: Microsoft, Meta, Tesla, Huawei, Amazon, Intel - the list goes on and on. But Nvidia is not a chip company. Huang himself said they are mostly a software company. And that is how they were able to build a gigantic moat: no one else has even come close on the software side. Google is the only one that has had some success here, because they have also spent tons of money and time on software refinement by now, while all the other chips vanished into obscurity.

Are you saying that Google, Meta, Amazon, etc... can't do software? It's the bread and butter of these companies. The CUDA moat matters for holding off the likes of AMD, but for hardware like TPUs, built for internal use or for other big software makers, it's not a big hurdle.

Of course Huang will lean on the software being key because he sees the hardware competition catching up.


Essentially, yes, they haven’t done deep software. Netflix probably comes closest amongst FAANG.

Google, Meta, Amazon do "shallow and broad" software. They are quite fast at capturing new markets, and they frequently repackage an open-source core and add the large amount of business logic needed to make it work, but they essentially follow the market cycles: they hire and lay off on a few-year cycle, and the people who work there typically jump around the industry too, thanks to both transferable skills and plenty of comparable competitors.

NVDA is roughly in the same bucket as HFT vendors. They retain talent on 5-10y timescales. They build software stacks that range from complex kernel drivers and hardware simulators all the way to optimizing compilers and acceleration libraries.

This means they can build more integrated, more optimal and more coherent solutions. Just like Tesla can build a more integrated vehicle than Ford.


I have deep respect for CUDA and Nvidia engineering. However, the arguments above seem to totally ignore Google's Search indexing and query software stack. They are the king of distributed software, and also of hardware that scales. That is why TPUs are a thing now and why they can compete with Nvidia where AMD failed. Distributed software is the bread and butter of Google, with their multi-decade investment from day zero out of necessity. When you have to update an index of an evolving set of billions of documents daily, and do that online while keeping subsecond query capability across the globe, that teaches you a few things about deep software stacks.

You're suggesting Waymo isn't deep software? Or Tensorflow? Or Android? The Go programming language? Or MapReduce, AlphaGo, Kubernetes, the transformer, Chrome/Chromium or Gvisor?

You must have an amazing CV to think these are shallow projects.


No, I just realize these for what they are - reasonable projects at the exploitation (rather than exploration) stage of any industry.

I’d say I have an average CV in the EECS world, but also relatively humble perspective of what is and isn’t bleeding edge. And as the industry expands, the volume „inside” the bleeding edge is exploitation, while the surface is the exploration.

Waymo? Maybe; but that's an acquisition and they haven't done much deep work since. TensorFlow is a handy and very useful DSL, but one that is shallow (it builds heavily on CUDA and TPUs etc.); Android is another acquisition, with rather incremental growth since; Go is an nth C-like language (so neither Dennis Ritchie nor Bjarne Stroustrup level work); MapReduce is a darn common concept in HPC (SGI had libraries for it in the 1990s) and the implementation was pretty average. AlphaGo - another acquisition, and not much deep work since; Kubernetes is a layer over Linux namespaces to solve - well - shallow and broad problems; Chrome/Chromium is the 4th major browser to reach dominance, and essentially anyone with $1B to spare can build one... gVisor is another thin, shallow layer.

What I mean by deep software is a product that requires 5-10y of work before it is useful, and that touches multiple layers of the software stack (ideally all of them, from hardware to application). But these types of jobs are relatively rare in the 2020s software world (pretty common in robotics and new space); they were common in the 1990s, where I got my calibration values ;) Netscape and the Palm Pilot were a „whoa". Chromium and Android are evolutions.


> No, I just realize these for what they are - reasonable projects at the exploitation (rather than exploration) stage of any industry.

I get that bashing on Google is fun, but TensorFlow was the FIRST modern end-user ML library. JAX, built on the same XLA compiler stack, is in its own league even today. The damn thing is almost ten years old already!

Waymo is literally the only truly publicly available robotaxi company. I don't know where you get the idea that it's an acquisition; it's the spun-off incarnation of the Google self-driving car project that for years was the butt of "haha, software engineers think they're real engineers" jokes. Again, more than a decade of development on this.

Kubernetes is a refinement of Borg, which Google was using to do containerized workloads all the way back in 2003! How's that not a deep project?


Well put. I haven’t thought about it like that.

But the first example sigmoid10 gave of a company that can't do software was Microsoft.

Yeah I'm not convinced Microsoft can do software anymore. I think they're a shambling mess of a zombie software company with enough market entropy to keep going for a long time.

Yeah the fact they had to resort to forking Chrome because they couldn’t engineer a browser folks wanted to use is pretty telling.

I don't think that tells us anything.

Maintaining a web browser requires about 1000 full-time developers (about the size of the Chrome team at Google) i.e., about $400 million a year.

Why would Microsoft incur that cost when Chromium is available under a license that allows Microsoft to do whatever it wants with it?
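Back-of-the-envelope on that figure (the fully loaded cost per engineer is my assumption):

    // Assumption: roughly $400k fully loaded (salary, stock, benefits,
    // overhead) per engineer per year at a large US tech company.
    const engineers = 1_000;
    const costPerEngineer = 400_000;
    const annualCost = engineers * costPerEngineer;
    console.log(`~$${annualCost / 1_000_000}M per year`);  // ~$400M per year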


You could say the same thing about all Microsoft products then. How many full time developers does it take to support Windows 11 when Linux is available, SqlServer when Postgres is available, Office when LibreOffice exists?

And so on, all under licenses that allow Microsoft to do whatever it wants with them?

They should be embarrassed into doing better, not spin it into a "wise business move", aka transferring that money into executive bonuses.


Microsoft gets a lot of its revenue from the sale of licenses and subscriptions for Windows and Office. An unreliable source that gives fast answers to questions tells me that the segments responsible for those two products have revenue of about $13 billion and about $20 billion per quarter, respectively.

In contrast, basically no one derives any significant revenue from the sale of licenses or subscriptions for web browsers. As long as Microsoft can modify Chromium to have Microsoft's branding, to nag the user into using Microsoft Copilot and to direct search queries to Bing instead of Google Search, why should Microsoft care about web browsers?

It gets worse. Any browser Microsoft offers needs to work well on almost any web site. These web sites (of which there are hundreds of thousands) are in turn maintained by developers (hi, web devs!) who tend to be eager to embrace any new technology Google puts into Chrome, with the result that Microsoft must respond by putting the same technological capabilities into its own web browser.

Note that the same does not hold for Windows: there is no competitor to Microsoft offering a competitor to Windows that is constantly inducing the maintainers of Windows applications to embrace new technologies, requiring Microsoft to incur the expense of engineering Windows to keep up. This suggests to me that maintaining Windows is actually significantly cheaper than maintaining an independent mainstream browser would be. An independent mainstream browser is probably the most expensive category of software to create and to maintain, excepting only foundational AI models.

"Independent" here means "not a fork of Chromium or Firefox". "Mainstream" means "capable of correctly rendering the vast majority of web sites a typical person might want to visit".


They did incur that cost… for decades. They were in a position where their customers were literally forced to use their product and they still couldn’t create something people wanted to use.

Potentially these last two points are related.


The prosecution presents windows 11 as evidence that Microsoft can’t do software. Actually that’s it, that’s the entirety of the case.

The prosecution rests.


Due to a clerical error, the frontend updates to GitHub were not part of discovery, so they're not allowed as evidence. Still, though.

Huang said that many years ago, long before ChatGPT or the current AI hype were a thing. In that interview he said that their costs for software R&D and support are equal to or even bigger than on the hardware side. They've also been hiring top SWE talent for almost two decades now. None of the other companies have spent even close to this much time and money on GPU software, at least not until LLMs became insanely popular. So I'd be surprised to see them catch up anytime soon.

If CUDA were as trivial to replicate as you say then Nvidia wouldn’t be what it is today.

CUDA is not hard to replicate, but the network effects make it very hard to break through with a new product. Just like with everything where network effects apply.

Meta makes websites and apps. Historically, they haven't succeeded at lower-level development. A somewhat recent example was when they tried to make a custom OS for their VR headsets, completely failed, and had to continue using Android.

Remind me which company originated PyTorch?

Remind me that PyTorch is not a GPU driver.

Genuine question: given LLMs' inexorable commoditization of software, how soon before NVDA's CUDA moat is breached too? Is CUDA somehow fundamentally different from other kinds of software or firmware?

Current Gen LLMs are not breaching the moat yet.

Yeah they are. llama.cpp has had good performance on CPU, AMD, and Apple Metal for at least a year now.

The hardware is not the issue. It's the model architectures leading to cascading errors.

Nvidia has everything they need to build the most advanced GPU chip in the world and mass-produce it.

Everything.

They can easily just do this for more optimized chips.

"easily" in sense of that wouldn't require that much investment. Nvidia knows how to invest and has done this for a long time. Their Ominiverse or robots platform isaac are all epxensive. Nvidia has 10x more software engineers than AMD


They still go to TSMC for fab, and so does everyone else.

For sure. But they also have high volume and know how to do everything.

Also certain companies normally don't like to do things themselves if they don't have to.

Nonetheless Nvidia is where it is because it has CUDA and an ecosystem. Everyone uses this ecosystem, and then you just run that stuff on the bigger version of the same ecosystem.


> Did you ever hire any duds when you were not hiring remote?

Bingo. I had an exec ask me once, 'How will we know people are working if they are remote?' I asked back, 'How do we know they are working now?'

Remote work is harder on management and leadership. It’s easy to see if someone is at their desk and seems friendly, it’s hard to really think about what value a person brings.


I've worked at a bank where one of the oft-heard jokes was 'I spend 8 hours a day there, but I really wouldn't want to work there'. It was true, too. 145 people in the IT department, and absolutely nothing got done.

This was a bit of a let-down for me: all these people, so much fancy hardware. I had a hard time believing it at first. The whole place was basically caretakers who made the occasional report-printing program and based their careers on minor maintenance of decades-old COBOL code that they would rather not touch at all.

Something as trivial as a new printer being taken into production would turn into a three year project.

On Friday afternoons the place was deserted. And right now I work 'from home' and so do all of my colleagues, and I don't think there are any complaints about productivity. Sure, it takes discipline. But everything does, to a larger or lesser degree. We're probably atypical, but for knowledge work in general WFH can work if the company stewards it properly. It's all about the people.


And he's right (as are the sources he points to) that some bubbles are good. They end up being a way to pull in a large amount of capital to build out something completely new, even while it's still unclear where the future will lead.

A speculative example: AI ends up failing and crashing out, but not before we've built out huge DCs and power generation that then get used for the next valuable idea, one that wouldn't have been possible without the DCs and power generation already existing.


Huge DCs and power generation might be useful, long-lasting infrastructure; the racks full of GPUs and TPUs, however, will depreciate rather quickly.

I think this is a bit overblown.

In the event of a crash, the current generation of cards will still be just fine for a wide variety of AI/ML tasks. The main problem is that we'll have more than we know what to do with if someone has to sell off their million-card mega cluster...


The problem is the failure rate of GPUs is extremely high

The bubble argument was hard to wrap my head around

It sounded vaguely like the broken window fallacy: a broken window creating "work".

Is the value of bubbles in trying out new products/ideas and pulling funds from unsuspecting bag holders?

Otherwise it sounds like a huge destruction of stakeholder value - but that seems to be how venture funding works


The usual argument is the investment creates value beyond that captured by the investors so society is better off. Like investors spend $10 bn building the internet and only get $9 bn back but things like Wikipedia have a value to society >$1 bn.
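A toy version of that accounting, using those same hypothetical numbers:

    // All figures in billions of dollars, and all hypothetical.
    const invested = 10;        // what investors put in
    const investorReturn = 9;   // what they get back
    const societalValue = 1.5;  // spillover value to society (assumed > 1)

    const investorLoss = investorReturn - invested;      // -1
    const netToSociety = investorLoss + societalValue;   // +0.5

    // Investors lose money, yet total value created exceeds value destroyed,
    // which is the sense in which a bubble can still be "good".
    console.log({ investorLoss, netToSociety });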

What I like most about Gemini is that it's perfectly happy to say what I asked it to proofread or improve is good as it is. ChatGPT has never said 'this is good to go', not even about its own output that it had just declared good to go.

ChatGPT has recently been linking me directly to Amazon or other stores to buy what I'm researching.

> …at least if you let these things autopilot your machine.

I've seen people wipe out their home directories writing/debugging shell scripts...20 years ago.

The point is that this is nothing new and only shows up on the front page now because "AI must be bad".


Superficially, these look the same, but at least to me they feel fundamentally different. Maybe it's because if I have the ability to read the script and take the time to do so, I can be sure that it won't cause a catastrophic outcome before running it. If I choose to run an agent in YOLO mode, this can just happen if I'm very unlucky. There's no way to proactively protect against it other than not using AI in this way.

I've seen many smart people make bone headed mistakes. The more I work with AI, the more I think the issue is that it acts too much like a person. We're used to computers acting like computers, not people with all their faults heh.
