
Well, there's also the fact that the GPT-3 API was released in June 2020, and its writing capabilities were essentially on par with ChatGPT's initial release. It was just a bit harder to use, because it wasn't yet trained to follow instructions; it only worked as a very good "autocomplete" model, so prompting was a bit "different", and you couldn't do things like "rewrite this existing article in your own words" at all. But if you just wanted to write some bullshit SEO spam from scratch, it was already as good as ChatGPT would be 2 years later.

Also, the full release of GPT-2 in late 2019. While GPT-2 wasn't really "good" at writing, it was more than good enough to churn out SEO spam.

I didn't remember that, but it would explain the exponential growth of spam back then.

While I agree with the general sentiment that lots of things about GH Actions don't make sense, when you actually look at what the vulnerability was, you'll find that for many of your questions it wasn't GitHub Actions' fault.

This is the vulnerable workflow in question: https://github.com/PostHog/posthog/blob/c60544bc1c07deecf336...

> Why are actions configured per branch?

This workflow uses the `pull_request_target` trigger, where the workflow definition comes from the branch you're merging the PR into. That part should be safe - the attacker can't modify the YML the actions are running.

> Why do workflows have such strong permissions?

What permissions the workflow runs with is irrelevant here, because the workflow runs the JS script with a custom access token instead of the permissions associated with the GH Actions runner by default.

> Why is security almost impossible to achieve instead of being the default?

The default for `pull_request_target` is to check out the branch you're merging into (which, again, should be safe as it doesn't contain the attacker's files), but this workflow explicitly checks out the attacker's branch on line 22.
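For illustration, a minimal sketch of that dangerous pattern - not PostHog's actual workflow, and the secret name is made up - showing how a `pull_request_target` workflow that explicitly checks out the PR head ends up running attacker-controlled code with an elevated token:

```yaml
# DANGEROUS sketch: pull_request_target runs with access to the base
# repo's secrets, but this job checks out and executes PR-controlled code.
on: pull_request_target

jobs:
  assign:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Overrides the safe default (base branch) with the attacker's head:
          ref: ${{ github.event.pull_request.head.sha }}
      # The attacker can replace this script file in their PR:
      - run: node .github/scripts/assign-reviewers.js
        env:
          GITHUB_TOKEN: ${{ secrets.SOME_BOT_TOKEN }}  # hypothetical secret name
```

Drop the `ref:` line (keeping the default base-branch checkout) and the same workflow becomes safe, because the script that runs is the one from the target branch.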


Sure, that particular workflow is quite busted. But GitHub encourages this garbage design.

For example:

> GITHUB_TOKEN: ${{ secrets.POSTHOG_BOT_GITHUB_TOKEN }}

In no particular order:

- The use of secrets like this should be either entirely invalid or strongly, strongly discouraged. If allowed at all, there should be some kind of explicit approval required for the workflow step that gets to use the secret. And a file in the branch’s tree should not count as approval.

- This particular disaster should at least have been spelled something like ${{ dynamic_secret.assign_reviewer }} where that operation creates a secret that can assign a reviewer and nothing else and that lasts for exactly as long as the workflow step runs.

- In an even better design, there would be no secret at all. One step runs the script and produces output and has no permissions:

            - name: Run reviewer assignment script
              env:
                  PR_NUMBER: ${{ github.event.pull_request.number }}
                  GITHUB_REPOSITORY: ${{ github.repository }}
                  BASE_SHA: ${{ github.event.pull_request.base.sha }}
                  HEAD_SHA: ${{ github.event.pull_request.head.sha }}
              input: worktree ro at cwd
              output: reviewer from /tmp/reviewer
              run: |
                  node .github/scripts/assign-reviewers.js
I made up the syntax - that’s a read-only view of “worktree” (a directory produced by a previous step) mounted at cwd. The output is a file at /tmp/reviewer that contains data henceforth known as “reviewer”. Then the next step isn’t a “run” — it’s something else entirely:

            - name: Assign the reviewer
              env:
                  PR_NUMBER: ${{ github.event.pull_request.number }}
                  REVIEWER: ${{ outputs.reviewer }}
              action: assign-reviewer
That last part is critical. This does not execute anything in the runner VM. It consumes the output from the previous step in the VM and then it … drumroll, please, because this is way too straightforward for GitHub … assigns the reviewer. No token, no secrets, no user-modifiable code, no weird side effects, no possibility of compromise due to a rootkit installed by a previous step into the VM, no containers spawned, no nothing. It just assigns a reviewer.

In this world, action workflows have no “write” permission ever. The repository config (actual config, not stuff in .github) gives a menu of allowed “actions”, and the owner checks the box so that this workflow may do “assign-reviewer”, and that’s it. If the box is checked, reviewers may be assigned. If not, they may not. Checking the box does not permit committing things or poisoning caches or submitting review comments or anything else - those are different checkboxes.

Oh, it costs GitHub less money, too, because it does not need to call out to the runner, and that isn’t free.


Some of these issues are excusable by saying they "added too many features too fast" (especially the inconsistencies the article begins with), but lots of the issues are just caused by Liquid Glass becoming a thing, with some "less important" apps not getting a proper UX pass after switching to the Liquid Glass design (the whole latter half of the article)...

And that's not excusable - every feature should have a maintainer who knows that a large framework update like Liquid Glass can break basically anything, re-tests the app under every scenario they can think of (and as "the maintainer" they should know all the scenarios), and pushes to fix any bugs found...

Also, a company as big as Apple should eat its own dog food and have its employees use the beta versions to find as many bugs as they can... If every Apple employee used the beta version on their own personal computer before release, I can't realistically imagine the "Electron app slowing down Tahoe" issue not being discovered before the global release...


While I agree with the premise that "vibe code hell" has replaced "tutorial hell", they are very much not the same. To expand on that, let's start with the fact that a good coder needs both "skill" and "knowledge".

Tutorials (at least the good ones) give you some knowledge - the tutorial often explains why and how they do what they do - but they don't give you any skill; you just follow what other people do, you don't learn how to build stuff on your own.

Vibe coding, on the other hand, gives you some skill - how to build stuff with AI - but doesn't give you the necessary coding knowledge: the AI makes all the decisions for you and doesn't explain why or how it did what it did, it just does it for you.

"I can't do anything without Cursor's help" is not really the problem. The problem is that vibe coders create some stuff and they don't understand how that stuff works. And I believe this is a much bigger problem than knowing how stuff works but not knowing how to use it.

Learning doesn't need to be "uncomfortable". Learning needs to be "challenging". There is a difference. The suggested approach here vaguely reminds me of the "you must first learn how to code in a notepad before using an IDE" approach.

While the real takeaway should be "you must first learn how to learn, before properly learning something". To learn something properly, you need two things: to know what to learn, and to know when you've learned it. To know what to learn you need a curriculum - for coders this obviously depends on your specialization, and it can be more crude or more detailed, but you still need something to follow so that you can track your progress. "When you've learned it", for coders, is when you can explain what some code does to a colleague and answer questions about said code. It doesn't matter if you wrote it, someone else wrote it, or an AI wrote it. Understanding code you didn't write is even more important than understanding your own code.


On the other hand, if as an aspiring software engineer I was forced to do military service and had the option to do it as part of a military cybersecurity unit, I'd pick that over running around with weapons without blinking an eye.


What emails suck at is communication between multiple people in a work setting. That's why Slack, Teams, and others emerged and got popular.

For example:

- When multiple people respond to the same email, the email "thread" branches out into a tree. If the tree branches out multiple times, keeping track of all the replies gets messy.

- While most clients can show you the thread/tree structure of an email chain, it only works if you've been on every email in the chain. If you get CC'd later, you'll just see a single email and navigating that is messy.

- Also if you get CC'd later, you can't access any attachments from the chain.

- You can link to a Slack/Teams conversation, and as long as it's in a public channel, anyone with the link can get in on it (for example, you have a conversation about a proposed feature which then turns into a task -> you describe the task simply and link "more info in this Slack convo"). You can't do that with emails (well, I guess you could export a .eml file, but it has the same issue as getting CC'd later).

- When a thread no longer interests you, you can mute it in Slack/Teams. You can't realistically do that with emails, as most people will just hit "reply all"

- But also, sometimes people will hit "reply" instead of "reply all" by mistake and a message doesn't get delivered to everyone in the thread.
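On the first point, the tree structure comes straight from the message headers: each reply carries an `In-Reply-To` header pointing at the parent's `Message-ID`, and clients rebuild the tree from that. A minimal Python sketch with made-up messages (simplified; real threading also uses the `References` header, per RFC 5322):

```python
# Rebuild a reply tree from (message_id, in_reply_to, subject) triples,
# the same data an email client uses for threaded display.
def build_thread_tree(messages):
    children = {}                       # parent id -> list of child messages
    roots = []                          # messages with no known parent
    ids = {m["id"] for m in messages}
    for m in messages:
        parent = m.get("in_reply_to")
        if parent in ids:
            children.setdefault(parent, []).append(m)
        else:
            roots.append(m)             # orphaned replies become roots (the "CC'd later" view)

    def render(m, depth=0):
        lines = ["  " * depth + m["subject"]]
        for child in children.get(m["id"], []):
            lines += render(child, depth + 1)
        return lines

    return [line for r in roots for line in render(r)]

msgs = [
    {"id": "<1>", "in_reply_to": None,  "subject": "Proposal"},
    {"id": "<2>", "in_reply_to": "<1>", "subject": "Re: Proposal (Alice)"},
    {"id": "<3>", "in_reply_to": "<1>", "subject": "Re: Proposal (Bob)"},
    {"id": "<4>", "in_reply_to": "<2>", "subject": "Re: Re: Proposal"},
]
print("\n".join(build_thread_tree(msgs)))
```

Note how a reply whose parent you never received (the "CC'd later" case) simply shows up as a detached root, which is exactly the messy single-email view described above.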


I disagree. Mail is superior for announcing to multiple people. If people want to participate, they can in many ways. It is well structured, well documented, and offers coherent discourse. Slack/Teams are for just-in-time, dynamic, collaborative conversations that quickly fade, missing out on all the strengths mail has in terms of permanence, archival, search, and general quality. That is something that totally gets lost in instant messaging like Discord, Teams, and such, where context is basically non-existent and may be gone completely in minutes.

Remember Google+? What lasted was Gmail and barebones, simple mail.


> Mails are superior in announcing to multiple people

People who are known at time of sending. A slack message can be searched by those joining the team much (much) later, those who move teams, in-house search bots, etc. Mailing lists bridge this gap to some extent, but then you're really not just using email, you're using some kind of external collaboration service. Which undermines the point of "just email".


> > Mails are superior in announcing to multiple people
>
> People who are known at time of sending. A slack message can be searched by those joining the team much (much) later, those who move teams, in-house search bots, etc.

People use Slack search successfully? Its search has to be one of the worst search implementations I have come across. Unless you know the exact wording in the Slack message, it is almost always easier to scroll back and find the relevant conversation just from memory. And that says something, because the Slack engineers in their infinite wisdom (incompetence) decided that messages don't get stored on the client but get reloaded from the server (wt*!!), so scrolling back to a conversation that happened some days ago becomes an exercise of repeated scroll-and-wait. Slack is good for instant-messaging-type conversations (and even for those it quickly becomes annoying because their threads are so crappy), not much else. I wish we would use something else.


How would you search from mail threads you weren't CC'd on?


MS Exchange had sort-of solved that problem with Public Folders. Basically shared email folders across an organization.

The older solution is NNTP/Usenet. I wish we had a modern system like that.


> Mailing lists bridge this gap to some extent, but then you're really not just using email, you're using some kind of external collaboration service. Which undermines the point of "just email".

Mailing lists are just email. They simply add a group archiving system.


That's why online private archives like https://mailarchive.ietf.org/arch/browse/ exist. For a free version, use groups.google.com


you just use a shared inbox for the team


This assumes said email is properly filtered and doesn't get lost in a sea of work spam. I also assert email is actually terrible at context; unless that is part of an existing thread, or again your filtering/sorting is great, you will often spend at least a paragraph just establishing context.

> It is well structured, well documented and offers coherent discourse.

You must have great coworkers who know how to communicate. I cannot say the same for everyone at my company. Email at many of the places I've worked can quickly devolve on more than 3-5 replies.


Worse than the work email spam at some of my previous jobs was the Slack spam - at least the email spam was work-related. Too many people substitute work for a social life and treat Slack like they’re on a group chat with friends.


> Worse than the work email spam at some of my previous jobs was the Slack spam

It’s annoying if not muted and you need to work. Why not do that?

A workplace with no chat and zero talk would be pretty grim.


If the company Slack doesn't have a #memes channel, I don't want to work there.


There's nothing wrong with social chat on Slack. It just needs to be either in a thread or, better yet, in a dedicated social channel.

Saying people shouldn't have social chat on Slack is like saying people shouldn't have social chat in the office kitchen because it's part of the same office complex.


And if they did that, I’d have nothing to complain about. That’s never been my experience though with Slack at work.


That’s unfortunate but it’s not a universal trend.

The problem here isn’t Slack, it’s poor Slack etiquette. However you can change etiquette at a company level.


@here I need an update on a ticket

@here we're doing some IT maintenance over the weekend in the middle of the night on a system no one uses


Google+ died not because it was a bad product but because Google changed strategy and killed it.

Ultimately it’s all subjective - some people prefer email some chat some calls some no comms at all.

If you can communicate well, articulate what you say and want well, and actually read and understand what I write then I will communicate over any medium with you. If not then I’ll have a bad time regardless of medium


I do agree that email quickly becomes messy, even with mailing lists. It's really much the same issue Slack has: a lack of training. It's just assumed that people will know how to use both email and Slack, but we don't. For email it's a decades-old debate, one that rational minds lost as Outlook dictated top-posting, forcing you to read threads backwards and discouraging the recipient from inline replies and cutting out irrelevant parts.

Slack is equally terrible, because the interface and threads are actually hard to navigate, and I honestly cannot make search work in a rational manner. The more discussions you have in Slack, the worse it becomes.


Slack is the equivalent of shouting across the room. I copy anything that seems important to my notes. Any message that's more than a handful of screens old can be considered lost.


IMO, that's a benefit of slack. At $LAST_JOB, we had a 30 day expiry on data in slack, which everyone was in uproar over initially. But, it forced us to actually put stuff elsewhere.


I thought so when I started at a company that had that policy, but in the end we still mostly ended up with split-brain issues (e.g. some information is shared in both places, some in only one, some updates get lost), with the added negative that stuff disappeared from Slack.

It's just a hard problem overall when you have email, chat, wiki, docs, and a ticketing system.

And, unfortunately, all these things exist because not one of them is actually good beyond its scope (if it's even good within its scope to begin with).


Sure, emails are not the right tool for multiple people discussing a project, even less so when we want to add new members to a thread, or when members want to leave (those who were added, but for whom it is no longer relevant).

At the same time, when I was a cofounder & CTO, I used Basecamp, which promoted email-like threads. (There is chat-like functionality as well, but I made a policy to use it only for impromptu things like setting Zoom meetings or so, not for anything that may be important in the future - brainstorming, ideas, architecture choices, analysis, etc.)

It created a culture of clarity of thought I never had before, or after. And yes, a year later it was easy to search for why we picked this way of optimizing quantum computing in Rust and not another (with pros and cons, possible paths not yet explored, etc.), go back to unused UI designs, retrieve research for publication, etc.


Work said "email is not official, use slack." We literally had a meeting where people were complaining about not knowing about recent changes. "We announced it in these 5 channels, we will start announcing it in more."

Like, email works for announcements yo. Naw, let's keep messaging N other places.


Isn't there one company-wide channel? With Slack or email, you still need to make a list of people who get the announcement. Slack has been a lot better in my experience for joining a team and looking back at the history.


I’ve always thought email needs a new “view mode” that somehow imports the email structure without actually using a separate program like Slack. Something like an expanded workflow view that shows emails as a series of separate nodes flowing in one direction.

The key point being that this is not a separate program, but a different way to view the data already inside emails.

I’m just brainstorming here so apologies if this doesn’t make much sense.


So maybe having some Slack/Teams-like hybrid over email? Where the UI of such an email client renders an email thread as a room/channel, and the subject (with RE:, FW:, ... stripped) is the name of the room and works as the room identifier. So you'd get an IRC-like experience. If you add PGP on top of it, it can also be secure and decentralized.


Delta Chat certainly could emerge in that direction. https://delta.chat/


The ‘reactions to emails’ thing that Outlook does is gross. However I avoid most chat apps and dislike email so I’m probably not a representative user.


I find a structured conversation far easier to work with personally.

You can respond only to the subthread you want to, and not have the single thread become a mess of quoted and irrelevant replies that you have to scroll past to find the answer you want.

Additionally, shared folders fit well within a team environment and work much like Usenet for messaging.


All those problems with email sound like a treat compared to screenshots of chats I'm not in.


The tree format seems an advantage, if anything. It naturally separates discussions into separate threads. Messaging software would dump all of these into a single channel, so you end up with different conversations happening at the same time, interspersed.


Agreed, a single thread is painful if it's actually spawning off multiple sub-topics. I suppose the better answer is to start a separate thread in Slack in that case, but it can flow weirdly when the topic originally arises in one place but is continued elsewhere; it relies on someone linking to the original thread to keep context. In a mailing tree, that context is still there.

All of this depends on having a sane email client though, doing it via outlook or gmail is a nightmare and I suspect this is the root of many people’s aversion to email.


The two big problems are shitty mail clients, and people not knowing how to quote (which gets enabled by the shitty mail clients).

If someone gets CC'd later, then typically it's because the discussion got to a point where their input is needed for the current question - and in a mail thread with proper quoting, surprisingly often the quoted email is sufficient context for the added person to jump in.

What makes a big mess out of things is the nested list of fully quoted emails with top answers at the bottom I now have to go through when getting added to figure out what the fuck they want from me.


> When multiple people respond to the same email, the email "thread" branches out into a tree. If the tree branches out multiple times, keeping track of all the replies gets messy

I think this is mostly due to bad UIs in email clients. Usenet had similar, if not more extensive, branching, and many Usenet clients made this quite manageable. I don't see why similar clients could not be written for email.


Thank you. This is a very insightful comment that pinpoints something I could never quite put my finger on.

So the issue is that you need a git pull or something like it to prevent branching. Chat etc... achieves this through real-time state management. In an async setting you need something else.


Found only “emerge” while searching these comments for the exact term “merge”.

The bifurcation of communications is unmanageable.

Why is my own timeline still manual, while presumably all the datacenters can combine, search, and sort (merge) dated datapoints?

I want a Personal Palantir or something, and no, not vibe coded in a weekend.


At least at my workplace, chats got popular because it was a way for humans to talk to humans without getting drowned out by dozens of automated messages, irrelevant announcements, and other clutter.


Mailing lists. They've been around since the 80s. They solve all these problems. They are amazing. Use them.


I disagree. I might have been born a generation too late, but I think mailing lists are a terrible, horrible way to communicate.

My favourite is text forums - I guess that shows when I was socialised online.


Text forums are just a web interface for mailing lists, or vice versa. Google Groups and others can (or could) support both, and Usenet news too. They are all just messages; the difference is only the tool you use to display them.


There was GMANE to convert mailing lists into an NNTP archive. I was a big fan of it. Too bad it's gone.


Hey lets you mute a thread.


Without an Apple ID you can compile an iOS app, but you can only run it in the iPhone Simulator on a Mac.

With a free Apple ID (no additional registration needed), you can also install your compiled iOS app on your iPhone and have it working for 7 days before you need to re-install it.


> This is such an odd statement. I mean, surely they have to be willing to review the contents of apps at some point (if only to suspend the accounts of developers who are actually producing malware), or else this whole affair does nothing but introduce friction.

Requiring company verification helps against some app pretending to be made by a legitimate institution, e.g. your bank.

Requiring public key registration for a package name protects against package modification with malware. Typical issue: I want to download an app that's not available "in my country" - because I'm on holiday and want to try some local app, but my "Play Store country" is tied to my credit card, and the developer only made the app available in his own country, thinking it would be useless for foreigners. I usually try to download it from APKMirror, which tries to do signature verification. But I may not find it on APKMirror, only on some sketchy site. The sketchy site may not do any signature verification, so I can't be sure I downloaded the original unmodified APK instead of the original APK injected with some malware.

Both of these can be done without actually scanning the package contents. They are essentially just equivalents of EV SSL certificates and DANE/TLSA from the TLS world.
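The verification part boils down to key pinning: hash the APK's signing certificate and compare it against the fingerprint registered for that package name. A minimal Python sketch with dummy bytes (a real check would extract the certificate from the APK's signature block, e.g. via `apksigner verify --print-certs`):

```python
import hashlib

def matches_pinned_key(cert_der: bytes, pinned_sha256_hex: str) -> bool:
    # An APK is trusted only if its signing certificate hashes to the
    # fingerprint registered for that package name.
    return hashlib.sha256(cert_der).hexdigest() == pinned_sha256_hex

# Dummy stand-ins for real DER-encoded certificate bytes:
official_cert = b"dummy-certificate-bytes"
pinned = hashlib.sha256(official_cert).hexdigest()

assert matches_pinned_key(official_cert, pinned)                 # unmodified APK
assert not matches_pinned_key(b"resigned-by-attacker", pinned)   # tampered, re-signed APK
```

Note this only proves the APK was signed by the registered key; it says nothing about what the code inside actually does, which is exactly why no content scanning is needed.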


> Typical issue - I want to download an app that's not on available "in my country" - because I'm on a holiday and want to try some local app,

The solution here is just to get rid of artificial country limitations which make some users download APKs. None of those make sense in the online world anyways.


F-Droid generates a unique key for each app and that key is then reused for all builds of that app. This will probably just require registering the F-Droid public key to the package name with Google.


Google to F-Droid: "no signature for you"


Apple did this to Epic when the EU threatened to intervene.


Can you please give an estimate of how much slower/faster it is on your MacBook compared to comparable models running in the cloud?


Sure.

This is a thinking model, so I ran it against o4-mini, here are the results:

* gpt-oss:20b

* Time-to-first-token: 2.49 seconds

* Time-to-completion: 51.47 seconds

* Tokens-per-second: 2.19

* o4-mini on ChatGPT

* Time-to-first-token: 2.50 seconds

* Time-to-completion: 5.84 seconds

* Tokens-per-second: 19.34

Time to first token was similar, but the thinking piece was _much_ faster on o4-mini. Thinking took the majority of the 51 seconds for gpt-oss:20b.
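As a sanity check on these figures (assuming tokens-per-second is computed over the full run, which the numbers suggest), tokens ≈ tps × time-to-completion gives roughly the same answer length for both models:

```python
# Back-of-the-envelope token count from the reported throughput numbers.
runs = {
    "gpt-oss:20b": {"tps": 2.19, "total_s": 51.47},
    "o4-mini":     {"tps": 19.34, "total_s": 5.84},
}
for name, r in runs.items():
    tokens = r["tps"] * r["total_s"]
    print(f"{name}: ~{tokens:.0f} tokens")
```

Both work out to ~113 tokens, so the 10x gap really is throughput, not output length.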


You can get a pretty good estimate based on your memory bandwidth. Too many parameters can change with local models (quantization, fast attention, etc.). But the new models are MoE, so they're gonna be pretty fast.
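A rough version of that estimate, assuming a memory-bound decoder: each generated token has to stream all active weights through memory once, so tps ≈ bandwidth / bytes of active weights. The numbers below (~120 GB/s bandwidth, ~3.6B active parameters at ~4-bit quantization) are illustrative assumptions, not measurements:

```python
def estimate_tps(bandwidth_gb_s: float, active_params_b: float, bytes_per_param: float) -> float:
    # Memory-bound decode: every token reads all active weights once,
    # so throughput is capped at bandwidth / bytes-per-token.
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Illustrative: an MoE model with 3.6B active params at ~4-bit (0.5 bytes/param)
# on a machine with ~120 GB/s memory bandwidth.
print(f"~{estimate_tps(120, 3.6, 0.5):.0f} tokens/sec upper bound")
```

This is an upper bound: KV-cache reads, activations, and scheduling overhead all eat into it, which is why measured numbers come in well below the estimate.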

