
It's really hard to do in practice.

Yes, you can train an ML model on rendered data, but the model tends to fixate on rendering artifacts, and performance doesn't transfer to real world images. Plus, it's very difficult to make generated scenes with the variety and complexity of the real world. Your trained model will fail to generalize to all the distractions in natural scenes.

Yes, there are techniques for all these problems, but none of them are good, reliable, or easy to get right.


Once upon a time Facebook told you about friends' parties so you could go out and have fun. It also told you your friends' "status," giving you something to ask them about when you see them.

That version of Facebook added to my social life and happiness so much! Unfortunately it didn't make enough money or something, and got turned into political ragebait and ads.


I remember when a notification meant something I actually wanted to see.


The endorsement lets them write why they support the candidate. Laying out the reasons is what could be convincing, and is what's also being blocked here.


I think the intuition of the parent comment is right, but you also make a fair point[1]. I just wonder if you genuinely believe that any prospective Trump voter could be convinced by any argument to vote for Harris at this point. I mean after all the things that have already been written and said by so many, even by Trump himself, and have failed to convince ~47% of Americans that he's unfit to be the president.

Honestly, I look at the billions of dollars being poured into political ads, and I can't help but think that it's all a tremendous waste because it's hard to imagine that there's anybody left who didn't already form a strong opinion about Trump at some point over the past ten years.

[1] Like even assuming that prospective Trump voters don't read this newspaper, an especially novel or powerful argument could get picked up and spread by other outlets that do reach prospective Trump voters.


Undecided voters will decide the election; yes, they exist, and yes, they can change their opinions.


Political ads have many purposes, including convincing people to vote for their candidate, instilling a sense of urgency/motivation/purpose to actually get out and vote, and then second order goals like general PR/familiarity for the party, etc.


I don't really expect that there are Trump voters who could be convinced to vote for Harris by the op-ed. But there could be voters on the verge of reconsidering Trump who hear a story like this one (Bezos stopping an endorsement) and take it as a sign that their reconsideration was a mistake, because they read Bezos's behavior as confirming that Democrats and other anti-Trump voices are the ones off-kilter.


But this change doesn't stop any of the journalists from writing editorials endorsing a candidate, including laying out reasons, correct?

The difference is that the paper as a whole won't endorse a candidate?


I've tried, but once you fall behind you're done. The current algorithms just drown you in the cards you've forgotten. There's no way to say "I went on vacation. Please reset my progress to something manageable for me."

Plus, memorization is still hard. You have to really focus on internalizing the cards as they show up, and not just skimming them. It's work.


I reduce the number of new cards (possibly to 0) and spend 1-2 weeks primarily on review until the backlog is cleared. This works well in Anki.

If you have any cards you forgot, they will be reset on their own by the algorithm. Like if you get a new card today and go on vacation for two weeks, then that card will be 2 weeks overdue. Fine, you review it, get it wrong, study it a couple more times and then it's back on track for the next day. That's exactly how the algorithm is intended to work.

Once the backlog is cleared or at a reasonable level I restore the settings for new cards.
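
For anyone curious how that plays out mechanically, here's a tiny Rust sketch of an SM-2-style scheduler. It's not Anki's actual code, just the shape of the idea: a failed review on an overdue card resets its interval, so the card falls back into the normal cycle on its own.

    // Minimal sketch of an SM-2-style scheduler (not Anki's actual code): a failed
    // review resets the interval, a pass multiplies it, so an overdue card you got
    // wrong falls back into the normal daily cycle on its own.
    struct Card {
        interval_days: f64, // gap until the next review
        ease: f64,          // growth factor, e.g. 2.5
    }

    impl Card {
        fn review(&mut self, remembered: bool) {
            if remembered {
                self.interval_days = (self.interval_days * self.ease).max(1.0);
            } else {
                // Lapse: treat it like a (nearly) new card, no matter how overdue it was.
                self.interval_days = 1.0;
                self.ease = (self.ease - 0.2).max(1.3);
            }
        }
    }

    fn main() {
        let mut card = Card { interval_days: 14.0, ease: 2.5 };
        card.review(false); // got it wrong after vacation: interval resets to 1 day
        card.review(true);  // next day it sticks: interval starts growing again
        println!("{} days until next review", card.interval_days);
    }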


You can do even better with radar. If you place a set of radar reflectors around the tower at known locations, then you can detect them from the booster and triangulate the distances to a precise position. Plus, radar gives you relative velocities, so your speed and roll rate estimates get even more precise. I bet you could get down to millimeters with a setup like this.
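
Roughly the math I have in mind, as a toy Rust sketch. The reflector layout, ranges, and the Gauss-Newton approach here are all my own assumptions, not anything SpaceX has described:

    // Toy sketch (made-up reflector layout and numbers): a least-squares position fix
    // from ranges to reflectors at known spots, via a few Gauss-Newton steps.
    fn trilaterate(reflectors: &[(f64, f64)], ranges: &[f64], mut p: (f64, f64)) -> (f64, f64) {
        for _ in 0..20 {
            // Accumulate the 2x2 normal equations (J^T J) * step = J^T * residual.
            let (mut a11, mut a12, mut a22, mut b1, mut b2) = (0.0, 0.0, 0.0, 0.0, 0.0);
            for (&(rx, ry), &meas) in reflectors.iter().zip(ranges) {
                let (dx, dy) = (p.0 - rx, p.1 - ry);
                let dist = (dx * dx + dy * dy).sqrt().max(1e-9);
                let resid = meas - dist;               // measured minus predicted range
                let (jx, jy) = (dx / dist, dy / dist); // gradient of range w.r.t. position
                a11 += jx * jx; a12 += jx * jy; a22 += jy * jy;
                b1 += jx * resid; b2 += jy * resid;
            }
            let det = a11 * a22 - a12 * a12;
            if det.abs() < 1e-12 { break; }
            let step = ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det);
            p.0 += step.0;
            p.1 += step.1;
            if step.0.hypot(step.1) < 1e-9 { break; }
        }
        p
    }

    fn main() {
        // Four hypothetical reflectors around the tower (meters); booster slightly off-center.
        let reflectors = [(-30.0, -30.0), (30.0, -30.0), (30.0, 30.0), (-30.0, 30.0)];
        let truth = (1.2, -0.7);
        let ranges: Vec<f64> = reflectors.iter()
            .map(|&(rx, ry)| (truth.0 - rx).hypot(truth.1 - ry))
            .collect();
        println!("{:?}", trilaterate(&reflectors, &ranges, (0.0, 0.0)));
    }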


Is this really viable given all the electromagnetic interference the rocket motor exhaust plumes are generating?


Falcon-9 uses radar altimeters for determining vertical "distance to go" during landing.

While a sideways position error of even ten meters is not fatal, it is critical for the rocket to be quite close to zero altitude when deceleration brings the velocity to zero. (Any residual error must be dealt with by the shock absorbers, and their capability is modest.)


The most common bug in that type of code is mixing up x and y, or width and height, somewhere in your loops, or maybe mishandling partial blocks. That's not really what Rust aims to protect against, though bounds checking does help here.
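
For illustration, a made-up Rust sketch of that kind of swap (the copy_block helper and its parameters are hypothetical, not from the code under discussion):

    // The classic swap: w and h trade places in the loop bounds. It compiles cleanly;
    // at best a bounds check turns a silent out-of-bounds read into a panic, and if the
    // buffers happen to be big enough, the copy is simply wrong and nothing complains.
    fn copy_block(src: &[u8], dst: &mut [u8], stride: usize, w: usize, h: usize) {
        for y in 0..w {        // bug: should be 0..h
            for x in 0..h {    // bug: should be 0..w
                dst[y * stride + x] = src[y * stride + x];
            }
        }
    }

    fn main() {
        let src = vec![1u8; 16 * 16];
        let mut dst = vec![0u8; 16 * 16];
        copy_block(&src, &mut dst, 16, 16, 8); // copies an 8x16 block instead of 16x8
    }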

I don't get the arguments here. In practice, Rust lowers the risk across most of your codebase. Yeah, it doesn't handle every logic bug, but mostly you can code with confidence, and only pay extra attention when you're writing something intricate.

A language which catches even these bugs would be incredible, and I would definitely try it out. Rust ain't that language, but it still does give you more robust programs.


The issue is a memory safety issue, which Rust aims to protect against.

But I am not saying Rust is bad. My issue is the completely unreasonable exaggeration in the propaganda, from "C is completely dangerous and Rust is perfectly safe". And then you discuss and end up with "Rust does not protect against everything, but it's still better", which could be the start of a reasonable discussion of how much better it actually is.


> "C is completely dangerous and Rust is perfectly safe"

Nobody in this conversation said that.

If you're actually continuing an argument from somewhere else you should save everyone a lot of time and say so up front, not 10 comments in.


The start of the thread was "The difference is every line of C can do something wrong while very few lines of Rust can," which is an exaggeration of exactly this kind.


yeah well quote that line then


I wish there were a good TUI for handling merge conflicts. Vimdiff seems to be the closest, but doesn't have keyboard bindings for 3-way merges.

Nothing beats Meld for me, but if you're on a remote GUI-less machine, there aren't good options.


Lazygit works well for merge conflicts (and many other things):

https://github.com/jesseduffield/lazygit

(I am not actually sure what a three-way merge conflict is though TBH.)


emacs' smerge has always worked well for me.


How about abbreviating the 12 most common things in the codebase, but leaving everything else long?

That's a nice compromise where you need to learn a few core concepts, but the code itself is easier to scan for bugs in a lot of places.


The parent clearly said that approved patents were bullshit, and I agree. I have several patents, and have seen how nonsensical the process is. When lawyers obfuscate the text enough to confuse the patent examiner, the patent gets approved. I can't tell if an individual patent examiner is competent or knowledgeable, but patent decisions have nothing to do with factuality or novelty.

I do remember your comments from past threads too. It's really interesting to hear the perspective from the patent office's side, but the idea that the patent office has some secret and specialized method of evaluating novelty is ridiculous. Any expert can read a sample of granted patents and tell you that. I'd estimate maybe 5% of patents in my field have any novelty, and that's being generous.

I'm sure this has more to do with incentives and the overall system, and that individual patent examiners would prefer to do a good job. But you have to admit that the results are atrocious.


> The parent clearly said that approved patents were bullshit, and I agree.

Just because they said it was granted, doesn't mean that it was. A lot of people here don't seem able to distinguish between a granted patent and a rejected patent application. Here are two examples that I bothered to reply to in the past:

https://news.ycombinator.com/item?id=38766101

https://news.ycombinator.com/item?id=36563425

> the idea that the patent office had some secret and specialized method of evaluating novelty is ridiculous

I don't think they do and I never said they do. The USPTO follows some legal standard that I personally don't agree with. I agree with you that too few granted patents have genuine novelty.

> But you have to admit that the results are atrocious.

No, I don't. You've seen a small selection of what the USPTO outputs. Only the bad cases appear in the news. In contrast, I've seen a far larger and unbiased selection and know that the majority is fine. Most applications are rejected. I probably rejected over 75% myself.


I have seen the results from searches of patents in my field, and the patents that my colleagues get granted. It's hard to find even a single good patent in the bunch.

Is there a way to sample 5 random ML patents? I'd be surprised if half were any good.


I think I haven't been clear on a few things.

I think the quality of examination and search is excellent given how little time examiners have. But mistakes still happen too frequently, and the mistakes can be highly costly. Better to stop problems upstream in my opinion by giving examiners more time.

Patent quality is related but different. I agree that patent quality is awful, but there's only so much an examiner can do to influence that. Attorneys have basically gamed the system to write vague legalese that's patentable but practically useless. And to paraphrase a supervisor I knew at the USPTO, "Just because it's stupid doesn't mean that it's not patentable". I can't reject an application that meets the legal standards just because it's stupid.

Anyhow, I think there might be a random sort feature that can do what you want in the USPTO's public search (no time to check, though): https://www.uspto.gov/patents/search/patent-public-search


The patent office is also horrendous at evaluating novelty, so I suppose ChatGPT has already reached human level performance on this task!


Similar to the way in which software developers are terrible at delivering quality software on-time and on-budget, so I suppose ChatGPT has already reached human level performance on this task!


ChatGPT is a mirror where we don't look too good ...


My point was more that just because humans are terrible at something doesn't mean ChatGPT can't be much worse.

