Hacker News | yarg's comments

You don't know what artificial means.

You've made that point three times now, I understand your disagreement and have addressed it

Not sufficiently though

You're creating a definition that doesn't align with the existing definitions for the words you're using.

Which is fine, it happens all the time, though it's less useful.


In your opinion do people mean the same thing when discussing enemy AI in a game engine and LLMs / AGI?

They're wrong.

Any actor that simulates any aspect of intelligence is a genuine AI - it doesn't even need to be adaptive.

Pong had AI.


This is technically correct. A* pathfinding is technically AI.

However, I don't know anybody these days who would seriously call A* proper AI.
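For context on why A* counts as "technically AI", here is a minimal sketch of A* pathfinding on a 4-connected grid. This is a hypothetical illustration, not code from the thread; the grid, start, and goal are made up.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; cells equal to 1 are walls.
    Manhattan distance is the admissible heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    # heap entries: (f = g + h, g, node, path so far)
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # no route to the goal
```

It's a heuristic search, nothing adaptive, which is exactly the point being argued: the classic AI textbook sense of the word covers this kind of algorithm.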


See my comment above to the original statement

> It's not an intelligence, artificial or otherwise

That's what artificial means (even Pong had AI). If it ever becomes intelligent, it will be a synthetic intelligence (not an AGI).

For example, artificial vs synthetic diamonds; the former just looks the part, the latter is the genuine article - but manufactured as opposed to naturally formed.


That's not the usage I'm familiar with. E.g. sci-fi novels will frequently use "AI" for actually intelligent and possibly sentient beings that just aren't biological (see Skippy in Expeditionary Force); the intelligence isn't usually a party trick or something that just looks like intelligence. While it's true that people use "AI" to describe e.g. enemy behavior in games, I don't think people fundamentally mean the same thing when they discuss AI in the context of AGI, which some people fully believe LLMs will become.

The term is fuzzy though and people mean different things by it.

Your analogy does make sense, though I've never run into "synthetic intelligence" used as a term.


Yes, because our society rampantly abuses language, ignoring the actual meanings of words.

As far as science fiction goes, it's irrelevant - we live in the real world and have had AI for a long time now (e.g.: https://en.wikipedia.org/wiki/Bertie_the_Brain).


My point is that when people say AGI they don't mean a super sophisticated version of a game engine's enemy behavioral mechanics.

Sci-fi is often a speculative guess about the possible future of technology; IMO it's not completely irrelevant as an insight into what people mean or expect when they say certain things about future tech.


No, the meanings are just different. 60 years ago AI was a clever imperfect search of an array. There are multiple meanings. Yours is not the only one.

Yeah, I think you're the one that's abusing the meaning of words, if you're calling that an AI. AI does not mean "a very simple program that responds to human input in some way".

In Schlock Mercenary, an AI is truly sapient, and therefore has rights. A "synthetic intelligence" is below that threshold ("synthetic intelligence means 'kinda stupid'" is one line), and therefore can be used as guidance for missiles and such, where their survival is not expected.

That's how one sci-fi universe draws the distinction. I'm not sure that that's binding on anyone else, but it was an interesting distinction.


By the way artificial diamonds and synthetic diamonds are the exact same thing:

> A synthetic diamond or laboratory-grown diamond (LGD), also called a lab-grown diamond, laboratory-created, man-made, artisan-created, artificial, synthetic, or cultured diamond.

Artificial also means non natural, not necessarily a mock or something that behaves like something else:

> made or produced by human beings rather than occurring naturally, especially as a copy of something natural


> artificial diamonds and synthetic diamonds are the exact same thing

No, they aren't. Artificial diamond generally refers to cubic zirconia - which is not a diamond.

Synthetic diamond refers to genuine diamonds produced by humans (rather than natural forces).

https://en.wikipedia.org/wiki/Diamond_simulant#Artificial_si...

https://en.wikipedia.org/wiki/Synthetic_diamond


> By the way artificial diamonds and synthetic diamonds are the exact same thing

I think that was their exact point. The terms are equivalent but carry different connotations


No, Pong had "artificial". It did not have "intelligence".

I mean, I guess if you insist on the 1950s definition of AI, perhaps it did. I'm not sure it did even by the 1970s definition, though, and it absolutely did not have it by the 2020s definition.

And if you're going to claim that we should keep using the 1950s definition, well, languages change over time. If you want to communicate with people in the 2020s, use definitions from the 2020s.


Of course it didn't have intelligence, as I stated - it's artificial.

Artificial not-intelligence is not artificial intelligence.

I don't know why you're defining words the way you are, but it's leading you to an absurd position. As I said elsewhere, if you want to communicate with the rest of us, you need to use the same definitions we do. Otherwise you're talking, not about AI, but about trying to re-define words, and that's a really uninteresting conversation.


Artificial not-intelligence would be an actual intelligence pretending not to be.

Like a cuttlefish disguised as a rock.


> It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory.

This breaks the fundamental assumptions built into pretty much every piece of software ever written in the language - it's a completely nonviable option.

Incorporating a borrow checker would allow non-garbage-collected code to be added without breaking absolutely everything else at the same time.


To my understanding (not the best) there's a huge disconnect between the physics of the very small (quantum mechanics and the standard model) and that of the very large (general relativity).

The disconnect seems to be unresolvable (I don't understand this part at all) and so efforts are being made to quantise gravity and incorporate it into the standard model.


It seems kinda vital to me - there needs to be an informational bottleneck somewhere.


> Because Chrome even explicitly states that traffic to a site with an expired certificate is unencrypted.

If that's the case, then Google's condescension is doing a disservice to its users.


(Tested with Chromium, at https://expired.badssl.com) It says "Not Secure" on the left side of the address bar. It says "Privacy error" as the tab title. And then the body of the page:

*Your connection is not private* Attackers might be trying to steal your information from expired.badssl.com (for example, passwords, messages, or credit cards). Learn more about this warning. net::ERR_CERT_DATE_INVALID


It's insane the way that browsers shit the bed if there's any issue with the certificate.

Just throw in a big red exclamation point on top of the little padlock icon next to the URL bar - it's literally only there to inform the user about any potential security issues. Use it and (unless the site is known to be or obviously malicious) load the bloody page.

Honestly, it's absolutely insane that browsers misrepresent out-of-chain HTTPS as more of a threat than HTTP.


I don't know why my bank's website's got this red button, but I really need to transfer my funds right now, so lemme just mash whatever button I need to mash to get to the website. Ugh, why are computers so dumb!

Seems fine.


Well that seems "obviously malicious", feel free to re-read my comment if you're feeling less illiterate than before.


You'd think they could give a less-scary warning for like, the first week after expiration. It's not really any less secure 2 days past expiry than it was 2 days before, and a grace period would give the host a bit more time to address these issues.
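The grace-period idea above could be sketched as a tiny policy function. This is purely hypothetical (no browser implements it this way); the one-week grace window and the severity labels are made-up values for illustration.

```python
from datetime import datetime, timedelta, timezone

GRACE = timedelta(days=7)  # hypothetical one-week grace period after expiry

def warning_level(not_after: datetime, now: datetime) -> str:
    """Map a certificate's notAfter time to a warning severity.
    'ok': cert still valid; 'soft-warning': recently expired, show a
    badge but load the page; 'full-interstitial': long expired."""
    if now <= not_after:
        return "ok"
    if now - not_after <= GRACE:
        return "soft-warning"
    return "full-interstitial"
```

The point is that severity becomes a function of *how long* the cert has been expired, rather than a binary valid/invalid flag.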


This makes me wonder, how do you reliably and securely encode steganographic content for distribution within a noisy medium?


In this case I think it's mostly about using different sub-carriers (kind of a "channel in the channel"), so that the data and the audio are separated in frequency and don't disturb each other. That's generally called Frequency Division Multiple Access (FDMA), IIRC.

Another more advanced technique is Code Division Multiple Access (CDMA), e.g. used by GPS and some mobile communication modulation schemes. It allows you to have multiple senders on a single radio carrier frequency, and the receiver "selects" which sender to listen to by knowing its "code".

There's also Time Division Multiple Access (TDMA), i.e. senders take turns sending content in allocated time slots.
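The CDMA idea above can be shown with a toy example: two senders share the channel, each spreading its ±1 data bits with its own orthogonal code, and the receiver recovers one sender by correlating against that sender's code. The codes and bit sequences here are made up for illustration.

```python
# Two length-4 spreading codes, orthogonal to each other (dot product is 0)
CODE_A = [1, 1, 1, 1]
CODE_B = [1, -1, 1, -1]

def spread(bits, code):
    # each ±1 data bit is multiplied by the whole code sequence ("chips")
    return [b * c for b in bits for c in code]

def despread(signal, code):
    # correlate each code-length chunk against the code; the sign of the
    # correlation recovers this sender's bit, the other sender cancels out
    n = len(code)
    out = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out
```

Because the codes are orthogonal, sender B contributes zero to A's correlation (and vice versa), which is the "the receiver selects which sender to listen to by knowing its code" part.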


Goodhart's law can diagnose an issue, but it prescribes no solutions.

However, it's still better to recognise a problem, so you can at least look into ways of improving the situation.

