ezoe's comments

Most goods in the US directly or indirectly rely on imports. So practically, I think it just means the US introduced a VAT.


Except VAT and sales tax are typically applied regardless of where the item / service originated.

It also doesn't get applied potentially several layers down the supply chain.

That's not to mention things like reverse charge for B2B etc.

In other words, not similar at all!


Except it seems like the president has more control over tariffs.

Taxes are approved by Congress, so this was surprising to me. Does the president essentially have free rein to tariff whomever they want (for whatever reason) until the end of their term?

Maybe this is less about economic independence and more about grabbing whatever power is within reach. If domestic taxes were up to the president and tariffs were up to congress, would we see exactly the same situation with domestic taxes?


Not quite the same.

An item sold for $1000, say, would carry $100 of VAT at 10%. The businesses in the supply chain all charge VAT on their sales and reclaim the VAT they paid on their inputs. I think this chain usually terminates at the import (maybe?)
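
A rough sketch of the mechanics with made-up numbers, assuming a hypothetical three-stage chain (importer, manufacturer, retailer) and 10% VAT: each business remits only the tax on the value it adds, so the $1000 item ends up taxed exactly once, for $100 total.

  #include <cstdio>

  int main() {
      // Hypothetical sale prices at each stage of the chain.
      double rate = 0.10;
      double prices[] = {400.0, 700.0, 1000.0};  // importer, manufacturer, retailer
      double reclaimable = 0.0;   // VAT paid on the previous purchase
      double total = 0.0;
      for (double sale : prices) {
          double charged = sale * rate;            // VAT charged on this sale
          double remitted = charged - reclaimable; // what actually goes to the tax office
          std::printf("sale %7.2f  charged %6.2f  reclaimed %6.2f  remitted %6.2f\n",
                      sale, charged, reclaimable, remitted);
          total += remitted;
          reclaimable = charged;                   // the next stage reclaims this
      }
      std::printf("total remitted: %.2f\n", total);  // 100.00, i.e. 10% of 1000
  }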


Well, when you put it that way... it doesn't seem that bad at all.

Maybe the real innovation here is the political maneuvering of coming up with a new, desperately-needed government revenue stream.


Any nesting and indirection adds to the mental fatigue of understanding code: nested conditional statements, macros, functions, classes. Yet they're necessary, or we end up with so much duplicated code, which adds to the mental fatigue anyway.

There was an extreme argument on social media recently where someone claimed he prohibits nested ifs at his workplace.
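
To make the trade-off concrete, here's a toy C++ sketch (completely made-up types and limits) of the same validation written with nested ifs and then flattened with early returns. The conditions don't disappear either way; they just get rearranged.

  #include <cstddef>
  #include <string>

  struct Request { bool authenticated; std::string body; };  // hypothetical type
  constexpr std::size_t kMaxSize = 4096;                      // hypothetical limit
  void process(const Request&) {}

  // Nested version: the happy path is buried three levels deep.
  bool handle_nested(const Request& r) {
      if (r.authenticated) {
          if (!r.body.empty()) {
              if (r.body.size() <= kMaxSize) {
                  process(r);
                  return true;
              }
          }
      }
      return false;
  }

  // Flattened version: guard clauses instead of nesting.
  bool handle_flat(const Request& r) {
      if (!r.authenticated) return false;
      if (r.body.empty()) return false;
      if (r.body.size() > kMaxSize) return false;
      process(r);
      return true;
  }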

The shorter-lived-variables argument doesn't always work either. Some of the most horrible code I've read uses very short-lived variables:

  Val2 = f(Val), Val3 = g(Val), ...

It's Erlang. Erlang's apparent variables aren't really variables, just names bound once to a term, so every intermediate value needs a fresh name.


The root cause is that it takes about 30 years to establish a self-sustaining life. The average age at first birth for Japanese women is approaching 30.

Most Japanese are expected to graduate from high school (age 18), and more than half of them get a degree (age 22 at minimum). Then you work for many years.

The issue is clear: if we could make life self-sustaining in the late teens, the problem would be solved.

I have an idea that would work, but nobody (including myself) appreciates it:

Spread certain religions like Christianity or Islam.

Significantly reduce the number of university students.

Significantly reduce the number of high school students, especially female students.

Abandon modern health care so that a significant percentage of people die before reaching age 15. (It may sound counter-intuitive, but this situation motivates humans to have more children.)

It has serious moral issues, as well as political and economic ones. We would also somehow have to maintain political stability and the food supply under these conditions.

I think it's more reasonable to wait for natural selection to evolve humans who can give a healthy first birth in their 50s.


I really don't understand why the market thinks Nvidia is losing its value.

If DeepSeek reduces the required computational resources, we can pour in even more computational resources to improve it further. There's nothing bad about more resources.


Well, you have to keep in mind that Nvidia has a 3 trillion dollar valuation. That kind of heavy valuation comes with heavy expectations about future growth, among them the assumption that Nvidia can maintain its heavy growth rates very far into the future.

Training is a huge component of Nvidia's projected growth. Inference is actually much more competitive, but training is almost exclusively Nvidia's domain. If DeepSeek's claims are true, that would represent a 10x reduction in training cost for similar models ($6 million for R1 vs. $60 million for something like o1).

It is absolutely not the case in ML that "there is nothing bad about more resources". There is something very bad: cost. And another bad thing: depreciation. And finally, another bad thing: new chips and approaches are coming out all the time, so if you are on older hardware you might be missing out. Training complex models more cheaply will allow companies to potentially reallocate away from hardware into software (i.e., hiring more engineers to build more models, instead of fewer engineers and more hardware to build fewer models).

Finally, there is a giant elephant in the room: it is very unclear whether throwing more resources at LLM training will net better results. There are diminishing returns on investment in training, especially with LLM-style use cases. It is actually very non-obvious right now that pouring more compute specifically into training will result in better LLMs.


My layman's view is that more compute (more reasoning) will not solve harder problems. I use these models every day, and when a problem hits a certain complexity they fail, no matter how much they "reason".


I think this is fairly easily debunked by o1, which is basically just 4o in a thinking for loop, and performs better on difficult tasks. Not a LOT better, mind you, but better enough to be measurable.


I had a similar intuition for a long time, but I've watched the threshold of "certain complexity" move, and I'm no longer convinced that I know when it's going to stop.


Markets aren't rational.

NVidia is currently a hype stock which means LOTS of speculation, probably with lots of leverage. So, the people who have made large gains and/or are leveraged are highly incentivized to panic sell on any PERCEIVED bad news. It doesn't even matter if the bad news will materially impact sales. What matters is how the other gamblers will react to the news and getting in front of them.

:)


Another thing: the markets had priced in X demand scaling at ~145, and suddenly it's X/30 demand scaling, and therefore the price should drop.


That's wrong.

DeepSeek is a problem for Big Tech, not for Nvidia.

Why?

Imagine a small startup can do something better than Gemini or ChatGPT or Claude.

So it can be disruptive.

What can Big Tech do to avoid disruption? Buy every SINGLE GPU Nvidia produces! They have the money, and they can use the GPUs in research.

The worst nightmare of any tech CEO is a startup that disrupts you, so you either have to be faster or you cut off the startup's access to the infrastructure it needs. Or, even better, the startup has to rent your cloud infrastructure; that way you earn money and keep an eye on what's going on.

Additionally, hyperscalers only get 50-60% of Nvidia's supply. They all complain of being undersupplied, yet they get only 60% and not 99% of Nvidia's supply. How come? Because Nvidia has a lot of other customers it likes to supply. That alone tells you how huge the demand is: Nvidia even has to delay Big Tech deliveries.

Also, the demand for Nvidia didn't drop. DeepSeek isn't a frontier model. It's a distilled model, so the moment OpenAI, Meta, or the others release a new frontier model, DeepSeek will become obsolete and will have to start optimizing all over again.


You're wrong on DeepSeek. It's not distilled (the actual R1 model is based on V3).

So, so many misinformed takes in this thread (not just you...)


True, but the current price isn't based on fundamentals; it's based on hype value.

nVidia is going to be a very volatile stock for years to come.

I don't see DeepSeek changing Nvidia's short-term growth potential though. Efficiencies in training were always inevitable, but more GPUs still equals smarter AI... probably.


- "I really don't understand the market thinks Nvidia is losing its value."

because the fewer GPUs you need to train, the less money there is to be made

- "If DeepSeek reduce the required computational resources, we can pour more computational resources to improve it further. There's nothing bad about more resources."

that's why you are not a hedge fund manager. These guys' job is to keep the HYPETRAIN going so companies buy as many Nvidia GPUs as possible, no matter what. If we can produce a comparable model without spending B (as in billions of dollars), it means there are fewer billions to be made and the HYPETRAIN is near the end.


The market might be right that Nvidia is overvalued, but if so, I think only accidentally and not because of this news. Like you said, at least for now I think it's fairly clear that if a company has X resources and finds a way to do the same thing with half, instead of using less they'll just try to do twice as much. This could eventually change, but I don't think AI is anywhere near that point yet.


yeah I think people just got spooked and sold to take some profits they were going to take soon anyways


The entire article is a lot of name-calling, accusing many C++ WG members of being sexual harassers, rapists, hate speakers, etc.

It argues that the C++ WG made various mistakes and is full of incompetent members by presenting various seemingly technical topics, but it's so random and disorganized that it's hard to follow.

It doesn't make sense at all.


It's even harder to believe.

So this person is infamous for submitting ChatGPT-generated WG papers.

Then he was criticized over the "question" title, he refused to change it, and his sponsor cut ties.

I don't know. I'm deeply worried about brain drain on the C++ standards committee.

I was a C++ committee member once. They failed to understand the importance of char8_t, thinking char is enough. Then they made std::format depend on the locale. I quit because I lost hope in C++.
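
To illustrate what I mean, a small C++20 sketch of both points (what the last line prints depends on whatever locale the program happens to run under):

  #include <format>
  #include <iostream>
  #include <locale>

  int main() {
      // Since C++20, u8 literals have char8_t elements, so "this is UTF-8"
      // is distinguishable in the type system from "bytes in whatever the
      // narrow execution encoding happens to be".
      const char8_t* utf8 = u8"漢字";  // guaranteed UTF-8
      const char* narrow = "漢字";     // encoding depends on compiler settings
      (void)utf8; (void)narrow;

      // std::format is locale-independent by default, but the 'L' specifier
      // pulls in the (global) locale, so output can vary by environment.
      std::locale::global(std::locale(""));         // the user's environment locale
      std::cout << std::format("{}\n", 1234567);    // always "1234567"
      std::cout << std::format("{:L}\n", 1234567);  // e.g. "1,234,567" or "1.234.567"
  }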

You can't expel members from SC/WG but this is... what are they doing?


There are many non-UTF-8/16/32 character encodings used in the wild whose multi-byte sequences use these values.

I think the decision to forbid newlines in pathnames is also wrong. It may break tons of existing code.


I wish Linux/etc had a mount option and/or superblock flag called “allow only sane file names”. And if you had that set, then attempting to create a file whose name wasn’t valid UTF-8, or which contained C0 or C1 controls, would fail. The small minority of people who really need pre-Unicode encodings such as ISO 2022 could just not turn that option on. And the majority who don’t need anything like that could reap the benefits of eliminating a whole category of potential bugs and vulnerabilities.
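
A minimal sketch of what such a check could look like (a hypothetical user-space function, not an actual kernel patch): reject anything that isn't well-formed UTF-8, and reject C0 controls, DEL, and C1 controls.

  #include <cstddef>
  #include <cstdint>
  #include <string_view>

  bool is_sane_filename(std::string_view name) {
      const auto* p = reinterpret_cast<const std::uint8_t*>(name.data());
      std::size_t i = 0, n = name.size();
      while (i < n) {
          std::uint8_t b = p[i];
          if (b < 0x20 || b == 0x7F) return false;          // C0 controls / DEL
          if (b < 0x80) { ++i; continue; }                  // plain ASCII
          int len; std::uint32_t cp;                        // decode one UTF-8 sequence
          if      ((b & 0xE0) == 0xC0) { len = 2; cp = b & 0x1F; }
          else if ((b & 0xF0) == 0xE0) { len = 3; cp = b & 0x0F; }
          else if ((b & 0xF8) == 0xF0) { len = 4; cp = b & 0x07; }
          else return false;                                // invalid lead byte
          if (i + len > n) return false;                    // truncated sequence
          for (int k = 1; k < len; ++k) {
              if ((p[i + k] & 0xC0) != 0x80) return false;  // bad continuation byte
              cp = (cp << 6) | (p[i + k] & 0x3F);
          }
          static const std::uint32_t min_cp[] = {0, 0, 0x80, 0x800, 0x10000};
          if (cp < min_cp[len] || cp > 0x10FFFF) return false;  // overlong / out of range
          if (cp >= 0xD800 && cp <= 0xDFFF) return false;       // surrogates
          if (cp >= 0x80 && cp <= 0x9F) return false;           // C1 controls
          i += len;
      }
      return n > 0;
  }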


> There are many non-UTF-8/16/32 character encodings used in the wild whose multi-byte sequences use these values.

Like what? I am genuinely curious: Shift-JIS, GB2312, Big5, and all of the EUC variants do not use bytes that correspond to C0 characters in ASCII.


Don't even assume some UTF is the only character encoding. There were so many character encodings before Unicode, and they're still widely used.
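
The ISO 2022 family is one concrete example: it switches character sets with escape sequences, so perfectly ordinary Japanese text contains 0x1B (ESC), which is a C0 control byte. A sketch, with the kanji payload elided:

  #include <cstdio>

  int main() {
      // Hypothetical ISO-2022-JP fragment: the escape sequences that switch
      // into and out of JIS X 0208 contain 0x1B, a C0 control, even though
      // the text itself is ordinary prose (or a file name).
      const unsigned char iso2022jp[] = {
          0x1B, 0x24, 0x42,  // ESC $ B : switch to JIS X 0208
          /* ... two-byte kanji codes would go here ... */
          0x1B, 0x28, 0x42,  // ESC ( B : switch back to ASCII
      };
      for (unsigned char b : iso2022jp)
          std::printf("%02X ", b);
      std::printf("\n");
  }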


Don't assume UTF-8 is the only character encoding used in the wild. There are character encodings whose lead bytes are not as easily detectable as UTF-8's.


In 2024, if you don't get the correct result decoding a text as UTF-8, the bug is the text, not the decoding. And luckily, adoption of UTF-8 over the past 30+ years has gone well enough that you don't need to worry.

Caveats apply for cursed hardware standards that demand two-byte encodings, like USB.


I hope you're happy in your ivory tower, but I personally work with a lot of files in other encodings, most often that weird UTF-16 (Windows), and sometimes legacy files in various ANSI encodings. Declaring "my decoder is fine, it's the text that is buggy" is not going to score a lot of points with my boss and clients.


The only valid reason for still having files stored in legacy ANSI encodings is that their only use is input to software that has not been maintained for ~30 years and cannot be updated. That's fine because they're just binary inputs in a closed ecosystem that no one touches.

But if they are supposed to be treated as text, then yes it's the text that's buggy - they should just be converted to UTF-8 once and have the originals thrown away.

UTF-16 is something that Microsoft has cursed us with by inserting it into specifications (like USB) so that we cannot get rid of it, even though it never made any sense whatsoever. But those are in effect explicit protocols with a hard contract, very different from something where you would "assume an encoding".


Shouldn't hurt to tell clients to fix their weird encodings that originated from proprietary software, though.


Why do people still assume UTF-8 is the only locale encoding out there?

You're probably guilty of the sin you preach against, and are displaying wrongly decoded UTF-8 without even knowing it.


Many of the futuristic items I read about in manga (especially Doraemon) back when I was a child have become a reality.


My mother, the daughter of a literal NASA rocket engineer, lost her father around 2000. She said to me around 2012, "I sure wish my father had lived long enough to see the iPhone's invention: watching sci-fi he'd tell me I'd live long enough to see telecommunicators even more capable than those movie props."

My mother died six months before ChatGPT's debut, and I thought, "sure wish Mom had lived long enough to see LLMs." It wasn't until the first anniversary of her death, flipping through some of our letters, that I realized she had tasted GPT-2 while we played around with ThisWordDoesNotExist.com (made-up words & their "definitions").

Maybe I'll live long enough to see telecommunicators even more capable than these pretrained-transformers.

