willsmith72's comments | Hacker News

depends, "out the nose" is relative based on what else you could be doing and what else is out there

and no job i've had considered 9-5 40 hours after a 1 hour lunch break


Well, I don't know anyone who takes a full 1 hour lunch break -- back when I was in the office, I reckon it was more like 30-45 minutes? But people at all 4 office jobs I've worked did a standard 9-5 schedule.

But yes, "out the nose" is qualified by your particular situation. For me, that might be 2-3x my normal salary, which would mean I could take breaks for a few years or retire sooner.


including downdetector... that's annoying https://downdetector.com/status/npm/


And https://downdetectorsdowndetector.com says that https://downdetector.com/ is up! >:( What a world we live in




I really expected that one to be a joke


Good god, what have we done ...


This is now my favorite post on HN


Turtles! (Expletive!)



you shouldn't try to innovate on everything; you have to draw the buy/build line somewhere


Well, OpenAI says users' names, emails and locations have been divulged, so one of them is going to have to accept there was a "breach"


OpenAI was sending that data to Mixpanel. If anything, OpenAI is the culprit for the sensitive data leak. There's absolutely no reason to send that data.


Companies use sub-processors all the time; OpenAI is no different. Unless you want everybody to get a major case of NIH tomorrow (I wouldn't mind; then we could get rid of third-party cookies and all advertising while we're at it).

Every time a Google tag is included on a page, a ton of sensitive data gets sent to a party other than the one whose website you are visiting.

Whether it was wise for OpenAI to share this information with Mixpanel is another thing; personally I think they should not have. But OpenAI in turn is also used by lots of companies and given their private data, and so on.

This layer cake of trust only needs one party to mess up for a breach to become reality. What I'm interested in is whether it was just OpenAI's data that was lifted, or also that of other Mixpanel customers.


I agree. On all the implementations of Mixpanel I've been involved in, I've made it a point not to send any PII to Mixpanel. It's not needed for Mixpanel analytics to work; Mixpanel is not a CRM, and it does not need customer emails and other details.


Mixpanel has "session replay" support: https://docs.mixpanel.com/docs/tracking-methods/sdks/javascr...

And it's easy to let things like names and emails slip through.


But why do they send email addresses instead of anonymous identifiers? To link data with data from other sources?


It’s how they do it in the Mixpanel setup guide: https://docs.mixpanel.com/docs/quickstart/identify-users#cod...

Also probably people on the product marketing team want to have identifying info in their dashboards of top users and churn risks and whatever, and someone has to be the one to tell them no.
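One way to be the one who tells them no, sketched as code: an explicit allowlist of profile fields that may reach analytics, so PII is dropped by default rather than by vigilance. (The key names here are hypothetical, not from the Mixpanel guide.)

```javascript
// Hypothetical allowlist: forward only non-identifying profile fields
// to analytics; everything else (email, name, IP, ...) is dropped.
const SAFE_PROFILE_KEYS = ['plan', 'signup_month', 'country'];

function safeProfile(user) {
  const out = {};
  for (const key of SAFE_PROFILE_KEYS) {
    if (key in user) out[key] = user[key];
  }
  return out;
}
```

The design point is deny-by-default: adding a new field to the user object can't silently start leaking it; someone has to add it to the allowlist deliberately.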


If Mixpanel is a subprocessor of GDPR'd data from OpenAI, OpenAI is obliged to give notice of the data breach affecting European customers within 72 hours.


Correct. And they're already out of that window.


True, but we don't know whether OpenAI emailed their customers as soon as Mixpanel told them. The regulation says they only have to notify affected parties.


I wonder whether OpenAI could be okay if they themselves weren't notified within 72hrs.


Typically: yes. The clock starts ticking the moment you or anybody within your organization becomes aware of the breach. Three days is plenty; it even gives you time to consult your lawyers if you are not sure whether a breach is reportable. And you can always file a provisional notification, which gives you a way to back out later.
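The timing rule described above is simple enough to state as a one-liner (illustrative only, not legal advice): the 72-hour window is anchored to the moment of awareness, not the moment of the breach itself.

```javascript
// The 72-hour notification window starts when the organization becomes
// aware of the breach, not when the breach actually occurred.
const WINDOW_HOURS = 72;

function notificationDeadline(awareAt) {
  return new Date(awareAt.getTime() + WINDOW_HOURS * 60 * 60 * 1000);
}
```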


I just hope that in 100 years time, people will be shocked at the prevalence of social media these past 2 decades


I predict that, much sooner than 100 years from now, social media will be normalized, and it will be common knowledge that moderating consumption is just as important as it is with video games, TV, alcohol, and every other form of entertainment that societies have had growing pains with. Some of the old moral-panic content about violent video games or TV watching reads a lot like the lamentations about social media today. Yet generations grew up handling them, and society didn't collapse. Each time, there are calls that this time is different from the last.

In some spaces the moral panic has moved beyond social media and now it’s about short form video. Ironically you can find this panic spreading on social media.


We moderate consumption of alcohol, sugar, gambling, and tobacco with taxes and laws. We have regulations on what you can show on TV or in films. It is a complete misuse of the term to claim a law prohibiting the sale of alcohol to minors is "moral panic". These are not individual decisions, and we need those regulations to have a functioning society.

Likewise, in a few generations we will hopefully find a way to transfer the medical cost of the mental-health harm caused by these companies back to the companies themselves, in taxes, as we did with tobacco. At that point, using these apps will hopefully be seen as being as lame as smoking is today.


Only over-the-air TV is regulated by the FCC. Films and non-broadcast TV are only regulated if they contain obscene content. If anything, there was more regulation of film production in the past: the Hays Code, etc.


> We moderate consumption of alcohol, sugar, gambling, and tobacco with taxes and laws.

For the US, would it be accurate to put "sex" on there as well?


Of course, and not just in the US. I don't think any sane person thinks we should not limit cousins getting married, non-consensual sexual acts, pedophilia, etc. In many places in Europe, sex work is perfectly legal and taxed like any other business.

In the past there have been stupid regulations on what consenting adults (of the same sex, for instance) could do together. This created a system where group A could get married and was lauded, while group B went to jail. I am not saying all laws and regulations are good, but we absolutely do need them, and we need to make them just and good. For instance, today we protect group B's right to marry and love with laws.


The US kind of stands out as far more, um, "prudish" than most other countries, though, at least in my understanding of things.

And the US seems to be trying to spread its level of, um, "prudish"-ness to any country it feels isn't as anti-sex?

The VISA and Mastercard + Steam kerfuffle recently seems like that kind of thing, but there are heaps of instances in that set historically.


"Society didn't collapse" is a very very low bar.


> Yet generations grew up handling them and society didn’t collapse.

Society did not collapse. That does not mean those things did not have negative effects on society.


I don't think any of those things have had the significance and divisiveness of social media, or have been controlled by billionaires who have corrupted election systems.

Social media seems far more dangerous and harder to control because of the power it grants its "friends". It'll be much harder to moderate than anything else you mentioned.


Not only social media but addiction to phones too. The impact on kids and teenagers is well documented by now.


Where are the parents when you need them?


In 100 years' time they will be so fried by AI they won't be capable of being shocked. Everyone will just be swiping on generated content in those hover chairs from WALL-E.


In Mad Men, we have these little mind-blown moments at the constant sexism, racism, smoking, alcoholism, even the attitudes towards littering. In 2040, someone's going to make a show about the 2010s-2020s, and they'll have the same attitude towards social media addiction.


> Starting to roll out in the Gemini API and Google AI Studio

> Rolling out globally in the Gemini app

wanna be any more vague? is it out or not? where? when?


Currently, it’s rolling out in the Gemini app. When you use the “Create image” option, you’ll see a tooltip saying “Generating image with Nano Banana Pro.”

And in AI Studio, you need to connect a paid API key to use it:

https://aistudio.google.com/prompts/new_chat?model=gemini-3-...

> Nano Banana Pro is only available for paid-tier users. Link a paid API key to access higher rate limits, advanced features, and more.


Phased rollouts are fairly common in the industry.


Already available in the Gemini web app for me. I have the normal Pro subscription.


I don't see it in AI Studio.


I see it, but when I use it, it says "Failed to count tokens, model not found: models/gemini-3-pro-image-preview. Please try again with a different model."


pretty sure the serious companies are just using Claude through Bedrock. let Anthropic handle the model, outsource the rest


not at all. Me, owner of a new feature: "Hey boss, wanted to get your thoughts on this." Boss: "Let's schedule a review for next week, and pull in X and Y, who are familiar with the space."

Vs. Boss: "Your call."


Simple feedback collector. Something I end up building in every project, so why not build it once and productise it, even if just for myself. Still building out the dashboard, some clustering, and automated PRs (via Claude Code).

https://feedbee.willsmithte.com/


Great idea!


how much are these services wasting sending out these emails? surely some rate limiting would be sensible
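A per-recipient cap is about the simplest sensible version. An in-memory sketch (illustrative only; the names are mine, and real state would live in Redis or a database, not process memory):

```javascript
// Cap password-reset emails per recipient per rolling 24 hours.
const MAX_PER_DAY = 3;
const DAY_MS = 24 * 60 * 60 * 1000;
const sentLog = new Map(); // recipient -> timestamps of recent sends

function allowResetEmail(recipient, now = Date.now()) {
  const recent = (sentLog.get(recipient) || []).filter(t => now - t < DAY_MS);
  if (recent.length >= MAX_PER_DAY) {
    sentLog.set(recipient, recent);
    return false; // over the daily cap: drop or queue the email
  }
  recent.push(now);
  sentLog.set(recipient, recent);
  return true;
}
```

Expired timestamps are pruned on each check, so a recipient regains quota as old sends age out of the 24-hour window.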


What are they wasting, other than recipient time?

Hundreds of reset emails a month suggests it probably is rate limited; otherwise it would be hundreds a day.


Compared to the torrent of actual spam this is just a drop in the bucket.

