
I think people balked at the cost of 4.5 and really just wanted a slightly improved 4o. Now it seems they will have separate product lines for non-chain-of-thought and chain-of-thought models, which actually makes sense, because some people want a cheap model and some don't.

Linking to this without the fennel-lang.org main page, which states the following, is a bad idea:

"Fennel is a programming language that brings together the simplicity, speed, and reach of Lua with the flexibility of a lisp syntax and macro system."

Not including this sentence in the linked justification is ill advised.

Not to detract from the language or anything, but I have found that many programming languages' rationale pages just don't have an elevator pitch, and I have a hard time understanding why this is the case. Unfortunately, people's attention spans are extremely short.


> Not to detract from the language or anything, but I have found that many programming languages' rationale pages just don't have an elevator pitch, and I have a hard time understanding why this is the case.

But they do have one; you just quoted it?


I'm pretty sure the parent comment is pointing out that the quoted sentence from the main page ought to be present in the rationale page that is linked.

You can run a version of Cyc that was released online as OpenCyc: https://github.com/asanchez75/opencyc . This dates from when a version of the system was posted on SourceForge; the GitHub repo has the dataset, the KB, and the inference engine. Note that it is written in an old version of Java.

If you were troubleshooting this, and I know what I'm saying is with 20/20 hindsight, why wouldn't you try to test this on someone else's machine to see if it is an environment issue? They seem to have done extensive analysis by that point. Also, I've seen Jenkins deployments with test runners that run JS unit tests.


Sorry to hear this. I was at GOTO Chicago for one of his talks, where he had a group of attendees act out TCP; I participated and learned a lot about the protocol. Really nice person and a great lecturer.

He also noted that GL.iNet devices come with LibreQoS preinstalled, and recommended those routers.


Is it just me or does this seem like a bad design where a TCP port is exposed to share information?


Yes. Any local process can connect to a TCP port (unless special care is taken), so it should be a last-resort option. Additionally, the server either needs to run as root to bind a privileged port, or any application can race to bind that port. UNIX sockets are a much better option, as they can be protected by filesystem permissions, controlling both who can bind the socket and who can connect to it.

This can be mitigated by having authentication inside the socket, but now your authentication code is an attack surface and how are you going to share the secrets? On the filesystem? You are basically back to a UNIX socket with extra steps.
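
Roughly, the UNIX-socket approach looks like this (a minimal sketch in Python; the socket path and permission bits are made-up examples):

    import os, socket

    SOCK_PATH = "/run/myapp/ctl.sock"  # hypothetical path

    # The directory's permissions gate who can reach the socket at all.
    os.makedirs(os.path.dirname(SOCK_PATH), mode=0o750, exist_ok=True)
    try:
        os.unlink(SOCK_PATH)  # clear a stale socket from a previous run
    except FileNotFoundError:
        pass

    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    os.chmod(SOCK_PATH, 0o660)  # only the owner and group may connect
    srv.listen()

No port to race over, and no secret-sharing problem: the kernel enforces the filesystem permissions for you.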


As long as you bind to localhost it's fine in theory. Though any network code still needs to be rigorously hardened.


> As long as you bind to localhost it's fine in theory

But only if you assume that the data being transferred is public, right?

With the described method, any non-privileged user could access the data from the TCP socket, right?
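
To make that concrete (a sketch; the port number is a made-up example), nothing stops an unrelated, unprivileged local process from doing:

    import socket

    # Any local user can connect once something is listening on localhost:
    conn = socket.create_connection(("127.0.0.1", 9090))  # hypothetical port
    print(conn.recv(4096))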


Information in top isn't much of a secret though.


There is a lot of discussion here about the motivations of companies and what they can share or keep private in relation to their objectives and making a profit. But I think a much better question is what their actual motivations are for building privacy into these systems to begin with.

Apple and Amazon will, at minimum, compromise your privacy to improve their products. And since they don't make more or less money either way (because Siri and Alexa are loss leaders), they have no extra motivation to give privacy any additional consideration.

Comparing Signal to Proton Mail is a much more interesting problem, and you can look at what has been subpoenaed from each. Since one subpoena from Signal was actually disclosed, we can see the information (or really the lack of it) that Signal provided [1]. We have a statement by Proton Mail on what can be subpoenaed [2], but there have been arguments against it [3].

[1] https://signal.org/bigbrother/cd-california-grand-jury/
[2] https://proton.me/legal/law-enforcement
[3] https://protos.com/protonmail-hands-info-to-government-but-s...


Apple used to have privacy as the differentiating and profit-driving factor.

This will immediately get thrown out of the window when it hurts profit (and may already have been thrown out of the window; see the OpenAI partnership).

On top of that, at this point all they have to do is be ever so slightly better than the rest when it comes to privacy. The bar is so low as to be non-existent.


> Apple used to have privacy as the differentiating and profit-driving factor.

> This will immediately get thrown out of the window when it hurts profit

This is the important thing I'm always trying to point out to people who think incentives are enough (as I used to). You can never know what the incentives of a company will be 5, 10, or 15 years from now, or whether that company or division will even exist or will have been sold to some other company.

Incentives based on current conditions only matter for outcomes that don't have ramifications far into the future. That's definitely not data collection and privacy, where you could find that 10 years' worth of collected information about you has been sold at some future date.

And lest anyone think they can predict the stance a company will have on a topic a decade or two later: for any example of a company that has stayed the course, we can easily point to a moment in its history where a series of events could have gone the other way and left it close to a buyout, if not defunct. Even Apple had a period where it was bailed out by an investment from Microsoft, and many other large names of that period were gobbled up.

Always keep in mind that Sun was an amazing company, with amazing products and engineers, that embraced open source and competed with Microsoft in the enterprise market; after declining, it was eventually bought by Oracle.


OpenAI integration in current Apple products needs to be enabled to begin with, and then it still prompts you before sending anything to OpenAI servers, so I'd say it's in line with their practices so far.

The reason why I trust Apple a little bit more than, say, Google on something like this is that Apple is pitching their products as luxury goods - a way to be different from the hoi polloi - so they need features to plausibly differentiate along these lines. And privacy is one thing that they know Google will never be able to offer, given its business model, so it makes perfect sense for Apple to double down on that.

(Ironically, this means that Apple users benefit from Android being the dominant platform.)


That is kind of proving OPs point, though: the differentiating factor isn’t actual privacy, it’s an impression of privacy that you’re sold; the warm, fuzzy feeling that you’re using a superior product because you’re special and this phone is for special people that have important data that needs to be protected, and as a manufacturer of special people devices, Apple obviously takes care of this—because you’re important, duh!

If they can get away with appearing to care about privacy instead of actually doing so, they will. That’s all it takes to look better than Google.


You don't even have to make the argument that Apple is untrustworthy. The stronger argument is that you can't know what Apple will be like in the future: whether it will still be independent (admittedly far-fetched, since they're so big), or whether a division that deals with user data will get sold off along with the data, even if it's to a respectable company, because that company may eventually sell it to a slightly less respectable company, and so on.

The risk of PII being used nefariously never goes away as long as it exists, so the only sane stance is to not allow it to exist in others' hands if at all possible. It's the same reason you don't share your banking credentials with your friends. Sure, you might trust them, but you can't know the future, so why expose yourself to risk you don't have to?


That is true, but the catch is that reputation is very easy to lose and very hard to gain.


That, and the data that is part of X can be used by the xAI team. I expect it will be in the TOS if it isn't there already.

"xAI and X’s futures are intertwined. Today, we officially take the step to combine the data, models, compute, distribution and talent."


Weren't they already doing that? I recall before I bailed from X/Twitter they already added an AI training consent toggle, which silently defaulted to "I consent" for all existing users of course.


> I recall before I bailed from X/Twitter they already added an AI training consent toggle, which silently defaulted to "I consent" for all existing users of course.

There's a solution for that: build a Twitter bot that posts strange things.


Are the porn bots still around? Maybe they were the heroes we needed after all.


There has been a setting (opt-out outside the EU, opt-in inside the EU, I think) that allows or forbids data sharing with X ever since Grok was released.


Oh yes, you can definitely be reassured that Elon and his OpenAI dropouts working at xAI will follow that one to a tee.


I don’t really see how X data, being riddled with spam and bots, is all that uniquely useful to an AI company. Even if it was, it’s not all that clear that access to data is a defensible moat.


I think any LLM that’s not trained on X data comes out ahead.


I suppose xAI could already use the data from X, but anyway: X is a cesspool of vile hate and bots. Why would you want to train an AI on that?


Why does that warrant an acquisition?


It doesn't. It actually looks like more rearranging of deck chairs for financial reasons, given that the xAI valuation is extremely high.


Even OpenAI wasn't going to release ChatGPT, because the internal assessment was that it wasn't that good, but with some obvious internal pressure we are where we are now.


I thought at the time OpenAI were claiming that they couldn't release it because it was "too dangerous"?


That was GPT-2.


I guess one way to test whether stuff is good or not is to see how many people complain about it, so open betas seem to have a good audience.

