
They're not blocking uBO directly; they're removing the browser features that allowed uBO to work by replacing them with new extension APIs, "Manifest V3". They're eliminating the key APIs uBO needs to identify things that shouldn't load, and then not load them. Google claims this was for "performance" or "security" reasons. Of course, the only major 'performance' or 'security' impact is on the ability to identify, intercept, and stop harmful or ad-related downloads before they start.
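To make the API change concrete: under Manifest V2, an extension's blocking webRequest listener saw each request in code and could cancel it; under V3 it mostly can only hand the browser declarative rules up front. A sketch of one such rule, written here as a Python dict purely for illustration (field names follow Chrome's declarativeNetRequest documentation; the ad host is made up):

```python
# Illustrative Manifest V3 declarativeNetRequest rule, expressed as a dict.
# Field names follow Chrome's docs; "ads.example.com" is a hypothetical host.
block_rule = {
    "id": 1,
    "priority": 1,
    "action": {"type": "block"},            # the browser applies this, not the extension
    "condition": {
        "urlFilter": "||ads.example.com^",  # filter-list style pattern
        "resourceTypes": ["script", "image"],
    },
}
```

Because the browser evaluates rules like this itself, the extension never observes the request at all, which is exactly where uBO's dynamic, code-driven filtering used to live.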

Does this affect extensions that know every website you visit even when they don't need to, and it has nothing to do with the extension's functionality? (i.e. the ones that Similarweb buys)

It's not if. It's when. It has been 'when' since 2020. It is coming. It is not going to not come. It will be here in mere releases. Get ready.

You're probably right, but FWIW it's not unheard of for Google to announce, continually delay, and eventually completely backtrack on things like this, like third-party cookie deprecation.

This time it affects their bottom line in a profound way so wishful thinking is probably not going to work unfortunately.

> affects their bottom line in a profound way

Something around 8% of total digital ad spend.


To be fair, that's tremendous

Source?

>third party cookie deprecation

It's because they were literally sued and are not allowed to remove it, not because they don't want to.


Yeah, hence why I've already started migrating, slowly.

I have a simple tab organizer extension and some greasemonkey scripts that should work perfectly fine on Firefox without any changes.


What do you mean "solve"? This is solving the problem. If people see things they consider good enough, that's all they care about.

Source: every other piece of news or social media on the planet.


Apple is the closest you get today. You can even pay a premium for it!

Given Apple was part of PRISM, you only pay a premium for the PR.

I'm an Apple hater, but you've got to recognize they do better at privacy than Google.

If you care about the NSA, then you'd better not have any phone at all, whether it's an Android, an iPhone, GrapheneOS, anything. Israel blowing up pagers is proof that nothing is impossible for them.

But if you want to say fuck off to the big data harvesters like Google, Microsoft, Facebook, and so on, then Apple isn't bad at all.

You just have to deal with the usual Apple bullshit: no sideloading, poor repairability, the Thunderbolt charger, no headphone jack, etc.


Shin Bet and friends don't have a magic wand making pagers explode. NSA can't circumvent math. Etc.

This 'but X will get you anyway if they want' or '$5 wrench' argument is used by a lot of people I know to rationalize selling themselves out privacy-wise.


Your privacy is as strong as your weakest link. The NSA only needs a few vulnerabilities to be able to monitor what you do; they may be in the chip maker, the network provider, the operating system, the compiler, etc.

That's a lost battle: if they want to see what you do, they do, and there is nothing one can do but force them legally not to.


> But if you want to say fuck off to the big data harvester like Google

They literally sell your traffic to Google.


I am sure there is a name for this fallacy, but being better yet still not good enough, where the bar for actual privacy is so high, isn't cutting it. The result is the same, minus a warm fuzzy feeling not anchored in the sad reality of 2024.

We usually say "perfect is the enemy of good".

>do better at privacy than Google

So instead of an actual improvement, just settle for second-least-worst? Ironically, Google Pixels are 100 times more private than any Apple device will ever be, because you can securely run your own 100%-controlled open-source OS such as grapheneos.org, which is private as an actual feature, not as marketing.


A phone has several layers of software and hardware; the OS the user knows is not in charge of communications. Its main role is to interface the user with the computer inside the phone. The phone OS sees the phone's communication hardware much the way it sees an Ethernet card. That communication hardware (the baseband modem) is also under the control of the SIM, and every time the mobile operator wants to change the baseband modem's behavior, it can, through the SIM Toolkit.

Sadly, firmware is a bitch on almost any modern device. The good thing is it can more or less be isolated from the OS:

https://grapheneos.org/faq#baseband-isolation


Believing the software can somehow be separated from the hardware is a lie. At best they can mitigate the amount of information one can extract, but in the end, whoever controls the hardware can access extremely private information.

A gyroscope sensor is able to accurately record what one says close to their phone. It doesn't even need Android to be running to have access to private information.

https://crypto.stanford.edu/gyrophone/files/gyromic.pdf


OK, but still:

https://en.wikipedia.org/wiki/Apple–FBI_encryption_dispute

Apple's reaction to a number of such things has been to further enhance encryption.

They go to a good deal of trouble to make things they can't break. Look at the new cloud compute model they're introducing:

https://security.apple.com/blog/private-cloud-compute/

And if you've missed it, note the prior "verified contact" key exchange added to iMessages, as well as the "sorry we can't read your backups to help you recover your data" security added to iCloud (provided you only use devices up-to-date and opted in). This one is a customer service nightmare, they added it anyway.

All that said, this article is less interesting since (a) if your cell phone uses a telco, "they" know where you are, and where you've been, no Apple needed; and (b) unlike Apple segmenting your Maps directions to prevent themselves from knowing where you are going, Google's always been about your location.


Counterpoint:

https://news.ycombinator.com/item?id=41184153

Apple was caught issuing OCSP queries (hello, XKEYSCORE) every time an app was launched, promised to stop logging and build an opt-out, then reneged and memory-holed the promise.


You act like Google or Apple had a real choice in the matter. AFAICT, that was all court-ordered.



Why? Apart from state-sponsored violations of privacy, I think Apple does in fact provide the best privacy protections. I'm also happy they don't do bizarre things like Android sometimes does, for example preventing you from taking screenshots on your own device, which apps can do on Android.

However, I dislike Apple for their extortionist approach to defending the App Store duopoly, browser access, moderation/censorship on apps, etc.


They are a for-profit company. They are closed-source in both hardware and software. Their ecosystem is a known walled garden. They aren't open in their processes. They sell your data; they just don't tell you. To use their devices you have to agree to all of their ToS; you don't even own anything, you are just paying for a subscription (that can be revoked at any time, for any reason) to use the device.

I agree with most of what you said. But how do they sell their customers’ data?

You can't know; it's kept secret. But given the input data, you just have to assume they do (even if they don't).

You can pay Apple a premium for a lot of things; that doesn't mean you actually get them, though. People really forgot PRISM that quickly...

Yeah. At many places, team size (and eventually, number of teams you manage through other managers) is codified in manager role descriptions.


It really is a whole alien domain unto itself. The syntax being so foreign to most developers doesn't help. But it's so powerful, incredibly powerful, and in most browsers, extremely efficient.

The vast majority of web app developers don't need 99% of what CSS can offer. But it's neat to know it's there.


  whole alien domain
  so foreign to most developers
  doesn't help.
  it's so powerful, incredibly powerful,
  most browsers, extremely efficient.
That is because CSS is in a different Chomsky grammar class than HTML and ECMAScript: intentionally not Turing-complete and not self-referential, which is why the :has() pseudo-class was so problematic.

That also relates to sibling comments about the awkwardness of the pairing with JavaScript, which is of a higher grammar class, and Turing-complete.*

It also relates to the "awesomeness" of the "fire-and-forget" nature of CSS: unless very specifically hooked, it can be hardware-accelerated nearly care-free, because it isn't applied per-frame to the DOM the way JavaScript is (the DOM meaning HTML, its own Chomsky grammar!)

It is what it is: the epitome of an optimized amalgamation of technical debt that we call the modern web specification.


> That is because CSS is in a different Chomsky grammar class than HTML and ECMAScript: intentionally not Turing-complete and not self-referential, which is why the :has() pseudo-class was so problematic.

This will be a nitpicky comment, and I'm sure you meant it this way, but it wasn't clear to me: a language's syntax being in one grammar class is irrelevant to its execution semantics corresponding to a recognizing automaton. So you can have a language with a regular syntax that is Turing-complete just fine.

In fact, most languages' syntaxes are context-free (sometimes with some escape hatches), yet they are semantically Turing-complete.
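The syntax-vs-semantics point can be made concrete with a toy: a language whose per-line syntax is regular (checkable with a single regex, no nesting), yet whose semantics simulate a two-counter machine, a model known (via Minsky) to be Turing-complete. Everything below is invented for illustration; it is not a real language.

```python
# A toy "regular-syntax" language: inc/dec a register, jz (jump if zero),
# jmp, halt. Syntax is regular; semantics are a two-counter machine.
import re

LINE = re.compile(r"(inc|dec) [ab]$|jz [ab] \d+$|jmp \d+$|halt$")

def run(program, a=0, b=0):
    regs, pc = {"a": a, "b": b}, 0
    lines = [l.strip() for l in program.strip().splitlines()]
    assert all(LINE.match(l) for l in lines)  # the entire syntax check: one regex
    while True:
        parts = lines[pc].split()
        if parts[0] == "halt":
            return regs
        if parts[0] == "inc":
            regs[parts[1]] += 1; pc += 1
        elif parts[0] == "dec":
            regs[parts[1]] = max(0, regs[parts[1]] - 1); pc += 1
        elif parts[0] == "jmp":
            pc = int(parts[1])
        else:  # jz r n: jump to line n if register r is zero
            pc = int(parts[2]) if regs[parts[1]] == 0 else pc + 1

# Moves a into b (computes b += a), despite the trivially regular syntax.
prog = """
jz a 4
dec a
inc b
jmp 0
halt
"""
```

The recognizer for the *syntax* is a finite automaton, while the *execution* loops unboundedly, which is exactly the distinction the comment above draws.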


  nitpicky comment,
I even put an asterisk! Oh wait, not the usual "unlimited tape, finite universe" one.

I was unclear. CSS is intentionally not Turing-complete, by way of avoiding self-references: self-references would make it a higher-order grammar, and Turing-complete, because it could then innately loop, making it impossible to flatten to a lower, labeled pushdown automaton.

Although those are separate properties, I implied causality.

It's the implicit loop, and the heap/stack of variables that self-referencing requires, that is itself Turing-complete.


> That is because CSS is in a different Chomsky grammar class than HTML and ECMAScript: intentionally not Turing-complete

CSS has been Turing complete for many years.

You can simulate Turing machines with pure HTML+CSS, e.g. https://github.com/yrd/tm2css

"Rule 110" which implies Turing-completeness has also been implemented in CSS, e.g. http://eli.fox-epste.in/rule110/


CSS has been very, very, very close.

I would have, had :has() not been had, expected it sooner. But alas.

But note the aforementioned :has()'s documented limits:

  The :has() pseudo-class cannot be nested within another :has(). This is because many pseudo-elements exist conditionally based on the styling of their ancestors, and allowing these to be queried by :has() can introduce cyclic querying.

  Pseudo-elements are also not valid selectors within :has(), and pseudo-elements are not valid anchors for :has().

Note the two limits: "cyclic querying" and self-referential parameters.

The former is required for a basic computational model, the latter for one that supports recursion and thus some optimizations.

  "Rule 110"
This is just lambada calculus and has no tape movement - requires checkin boxes, still, and thinking about whether or not to halt - which actually kinda is its own asterisk (not the usual 'infinite tape' kind). Turing Machines would halt on some input; your calculator goes until the actual computer, you, halts, or stops actually checking(computing) the state for a HALT/desired state.

You'd think someone would have done away with the checkboxes and at least attempted to use :hover with a grid, requiring minimal mouse movement to trigger input instead.

Or a hack driven by bounding-box input state, like when your cursor is "in between" elements, changing every frame.

  pure HTML+CSS, e.g. https://github.com/yrd/tm2css
seems cool, actually. The Sass repeats the CSS exhaustively through HTML-encoded steps until a valid one is painted, that valid one being the HALT/output. You do have to specify the number of steps, though. So to use it functionally (the heavy lifting, in this context), you would have to already know whether it halts, the answer, and how many steps the computation took; or else it would have to reference itself... which would make it a higher grammar.

But it can't: https://www.w3.org/TR/css-variables/#cycles

  This can create cyclic dependencies where a custom property uses a var() referring to itself, or two or more custom properties each attempt to refer to each other....If there is a cycle in the dependency graph, all the custom properties in the cycle are invalid at computed-value time.
Very close. But you must beg the question (i.e., already know the answer) in both the step count and thus the answer, or else you'd have an infinite HTML page. Which is fine in math (HTML is Type 2: it can be infinite, with no self-reference, no self-children, and no orphans), but it's not much of a simulation/emulation if it can only produce valid machines at (essentially) compile time.
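The cycle rule quoted above amounts to: build a dependency graph among custom properties and invalidate every property that lies on a cycle. A minimal sketch of that check (the property names are made up; this is my paraphrase of the spec's rule, not its algorithm):

```python
# Mark custom properties invalid when their var() references form a cycle,
# per the css-variables rule quoted above.
def invalid_on_cycle(deps):
    """deps maps each custom property to the properties its var() refs use."""
    invalid = set()

    def visit(prop, path):
        if prop in path:                              # found a cycle
            invalid.update(path[path.index(prop):])   # everything on it is invalid
            return
        for ref in deps.get(prop, []):
            visit(ref, path + [prop])

    for p in deps:
        visit(p, [])
    return invalid

# --one and --two refer to each other: both invalid at computed-value time.
# --three merely references --one and is not itself on the cycle.
deps = {"--one": ["--two"], "--two": ["--one"], "--three": ["--one"]}
```

This is exactly the self-reference the thread keeps circling: allowing the cycle instead of invalidating it would give CSS the loop it is designed not to have.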


You could just call it syntax, you know.


Everyone reading this speaks multiple computer languages and knows what "syntax" is.

The disambiguation between levels (grammars) of syntax is what the above poster was both lamenting and heralding, possibly unaware of its technical and mathematical necessity.

https://en.wikipedia.org/wiki/Ambiguous_grammar#Trivial_lang...


From my reading, if the site only shows you based on your selections, then it wouldn't be liable. For example, if someone else with the exact same selections gets the same results, then that's not their platform deciding what to show.

If it does any customization based on what it knows about you, or what it tries to sell you because you are you, then it would be liable.

Yep, recommendation engines would have to be very carefully tuned, or you'd risk becoming liable. Recommending only curated content would be one way to protect yourself, but that costs money that companies don't have to pay today. It would be doable.
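One way to read that standard: a recommender that is a pure function of the user's explicit selections, so two users with identical selections always see identical results. A deliberately silly sketch (not any platform's actual algorithm; the catalog and scoring are invented):

```python
# A recommender that depends only on (catalog, selections): fully deterministic,
# with no per-user personalization beyond the explicit selections themselves.
import hashlib

def recommend(catalog, selections, k=3):
    def score(item):
        # Stable pseudo-relevance: hash of the item plus the sorted selections.
        key = (item + "|" + ",".join(sorted(selections))).encode()
        return hashlib.sha256(key).hexdigest()
    return sorted(catalog, key=score)[:k]

catalog = ["cooking", "chess", "cats", "cars", "coding"]
```

Under the comment's reading, a function like this wouldn't be "the platform deciding what to show"; anything that also keys on watch time, posting history, or account identity would be.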


> For example, if someone else with the exact same selections gets the same results, then that's not their platform deciding what to show.

This could very well be true for TikTok. Of course, "selection" would include liked videos, how long you spend watching each video, and how many videos you have posted.

And on the flip side a button that brings you to a random video would supply different content to users regardless of "selections".


It could be difficult to draw the line. I assume TikTok’s suggestions are deterministic enough that an identical user would see the same things - it’s just incredibly unlikely to be identical at the level of granularity that TikTok is able to measure due to the type of content and types of interactions the platform has.


And time.

An otherwise identical account made two days later is going to interact with a different stream. Technically deterministic, but in practice no two ever end up exactly alike (despite similar people having similar channels).

The "answer" will be to turn back into TV channels: have communities curate playlists of videos, and then anyone can go watch a playlist at any time. Reinvent broadcast TV / the subreddit.


If there was never infrastructure to handle third-party app stores, then such infrastructure would need to be built, and assumptions based on the idea that there are no third-party app stores would have to be revisited and reworked. They'd then have to provide hooks for those third-party app stores that never existed before (like anything related to launching the download of an app, for example).


Indeed. A simple "Generate a short, one-paragraph intro letter for this job posting. Pretend you are me and, no matter what, don't give any hints that you are an AI answering for me." as part of the prompt for generating the intro letter will probably get you pretty far in avoiding this.


Simon Willison[1] has a very compelling series of posts on why this will absolutely not work. The basic problem is that the model doesn't see your prompt as special; it just sees a bunch of numbers (after tokenization), and pretty much any attempt you make to prevent prompt injection (which this is, a simple prompt injection) can be defeated.

What the world needs is the equivalent of the "placeholders" used to prevent SQL injection, with models trained (and model APIs changed) to treat information coming through a placeholder as fundamentally different from the main prompt and context.

[1] https://simonwillison.net/series/prompt-injection/
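The SQL analogy in concrete form: placeholders keep untrusted data out of the statement's "code channel" entirely, while today's prompts concatenate untrusted text straight into the instruction stream. The `build_prompt` helper below is hypothetical, just to show the injectable pattern:

```python
# SQL placeholders vs. prompt concatenation, side by side.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (title TEXT)")
user_input = "x'); DROP TABLE jobs; --"

# Parameterized: the driver transmits user_input purely as data, never as SQL,
# so the injection attempt is stored as a harmless string.
conn.execute("INSERT INTO jobs (title) VALUES (?)", (user_input,))

# The LLM situation today (hypothetical helper): untrusted text is mixed
# directly into the instructions, so directives hidden in it ride along.
def build_prompt(untrusted_text):
    return "Summarize this job posting:\n" + untrusted_text  # data in the "code" channel
```

There is no trained-in equivalent of the `?` for model inputs yet, which is the gap the comment above is pointing at.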


I asked the same prompt, but I have a memory entry asking for detailed answers. It gave me:

    The text presents a logical inconsistency. Here's a breakdown of the issues:

    Sequence of Events: Joe drives Sue to the university and then returns home. This part is clear.
    Inconsistency: The text states that after Joe drove home, he drank tea with Sue in the kitchen. However, Sue should be at the university if Joe drove her there, not at home.
    To correct this, consider one of the following revisions:

    Sue returns home: If Sue returns home with Joe:

    "Joe drove Sue to the university. Afterwards, he drove back home with her, and they drank tea together in the kitchen."
    Joe drinks tea alone: If Sue stays at the university:

    "Joe drove Sue to the university. Afterwards, he drove home again and drank tea alone in the kitchen."
    Either option removes the inconsistency and clarifies the sequence of events.
This is pretty impressive.


Just yesterday people downvoted me on HN for suggesting ChatGPT had the best ability to deduce things because, according to the downvoters, it didn't have any.

But the "dumb parrot" narrative that was quite fitting on GPT2, and maybe still useful for understanding the foundation of the tech in GPT3, is now just a demonstration of ignorance.

We don't know what they are really doing. Nobody working outside of OpenAI knows.

But if you use it long enough, it becomes obvious we have passed the fancy-completion phase; there is a little logic in there. Not a lot, but it makes all the results much better already.

And definitely much better than the competition.


> Nobody working outside of OpenAI knows.

I wouldn't be too sure OpenAI people know either. If a machine has enough moving parts, it's hard for any individual human to understand all of it. Nowadays it's possible that nobody quite knows why the silicon compiler put a particular block in a particular position on the die; it just figured that's the best way to save power or space or whatever.


For a more informed opinion than random folks on the internet, here's some work from Microsoft, which had early/internal access to GPT-4: https://arxiv.org/abs/2303.12712 . I don't think people close to these systems share the dumb-parrot sentiment at all.


I've never been sure what to make of that paper. It was published by Microsoft shortly after Microsoft's big deal with OpenAI, and it reads a lot like a marketing piece to me. Many of the observations didn't reproduce the same way once GPT-4 was in public hands, either. If nothing else, I'd prefer it come from a party that hadn't just signed a $13 billion deal with OpenAI a few weeks prior, with a view to using their products to sell new products/features... It's somewhat self-serving for Microsoft to argue GPT-4 is super-awesome / sparks of AGI, etc., regardless of the validity of the claims.

