I don't recall what it was called in the menu, but it was definitely possible to apply a struct at a particular address. Muscle memory tells me the key is U, even though actual memory fails me.
The alternative is making it possible for developers to think only about code, not permissions, or at least to specify permissions in terms of what you want to do, not which grants you need. Think iOS: you write "I need fine-grained location access" into the manifest; you don't configure the permission system to allow you to call the API.
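On iOS that declaration is a usage-description key in Info.plist (the OS then owns the prompt and the actual permission mechanics); roughly:

```xml
<!-- Info.plist fragment: the app states what it needs and why;
     the OS decides whether and how to grant it. -->
<key>NSLocationWhenInUseUsageDescription</key>
<string>Used to show nearby results on the map.</string>
```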
Another poster touched on another important point: it's important for this to be changeable independently of the code. The reason is actually kind of subtle. Obviously, you don't want to have to rebuild in order to regenerate permissions. But the real reason, imo, is that it should be easy for a human to parse and locate, and also easy for a machine to parse and adjust, whether that machine is determining that a permission is no longer necessary, or building a dependency graph to figure out whom to wake up during an incident. That means it should go into configuration that is versioned and deployed alongside the code, but not in the code.
If you make this hard to understand and change, people will just copy it, and then you're back to square one. The right thing has to be the easiest thing to do, because at scale, people are gonna do the easiest thing.
I feel like I'm kinda going on at length about this, so instead I'm gonna leave you with a link to a blog post I wrote about the same concepts, if you wanna read more. It's about Kubernetes network policies, but the same concepts really apply to all kinds of access.
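For the Kubernetes case, the "easy to parse, versioned alongside the code" artifact can be as small as this (illustrative names, assuming standard NetworkPolicy semantics):

```yaml
# Illustrative: allow only "frontend" pods to reach "backend" on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Both a human on call and a tool building a dependency graph can read this without ever opening the application code.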
The way you're describing iOS is similar to how Nitric works. Developers indicate in code "I'm reading from this bucket"; it's a request, not an order, so they're not actually configuring the permissions system. That request is collected into a graph along with other requests (for resources, permissions, etc.) and passed via an API to a provider to fulfill.
If you want to change what "read" means, you're free to do that in the provider without changing a single line of application code. But you also get the benefit on the ops side of not needing to read the application code to figure out what permissions it needs to work; that part is generated, so you can't miss anything.
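A minimal sketch of that "declare intent, provider fulfills" pattern (hypothetical names throughout; this is not Nitric's actual SDK): application code records requests, and a separate provider decides what "read" concretely means.

```python
# Hypothetical sketch of intent-based permissions; none of these names
# come from Nitric's real SDK.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Intent:
    resource: str  # e.g. "bucket:images"
    action: str    # e.g. "read" -- the provider decides what "read" means


@dataclass
class IntentGraph:
    intents: set = field(default_factory=set)

    def request(self, resource: str, action: str) -> None:
        # Application code calls this: it records a request,
        # it doesn't grant anything by itself.
        self.intents.add(Intent(resource, action))


def fulfill_with_iam(graph: IntentGraph) -> list[dict]:
    # One possible provider: map abstract actions onto concrete
    # IAM-style policy statements. Swapping this function changes
    # what "read" means without touching application code.
    action_map = {
        "read": ["s3:GetObject", "s3:ListBucket"],
        "write": ["s3:PutObject"],
    }
    return [
        {"Effect": "Allow", "Action": action_map[i.action], "Resource": i.resource}
        for i in sorted(graph.intents, key=lambda i: (i.resource, i.action))
    ]


# Application code only declares what it wants to do:
graph = IntentGraph()
graph.request("bucket:images", "read")

policy = fulfill_with_iam(graph)
```

The graph is also exactly the artifact the ops side wants: it can be serialized, audited, and diffed independently of the code that produced it.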
If you want to output Terraform or config files or something else, like you do today, to enable audits and keep it alongside the code, you can do that easily.
In newer versions of Android, apps that the user hasn't opened in a while have their permissions automatically revoked: they lose the permissions, and when the app is reopened, the user has to grant them again interactively. Presumably to solve exactly this.
That's great, but there is a boatload of permissions that Android allows which never require user acceptance and are never revoked. Total disablement when not in use would be much better.
Damn I guess we shouldn't do any small step to improve society somewhat unless we can overhaul systems entirely all at once!
Personally, I'd prefer to see us fight for successively smaller and smaller blast radii than simply hope and pray the blasts disappear entirely.
Small steps get us things like pop-ups on every webpage, or the TSA. You'll just slowly create a bureaucratic dystopia. We need giant, sweeping reform of privacy laws in the US and a restoration of the 4th Amendment.
Not in the context of the government buying the data; they'll just buy it from Google instead of shadowgovt.databroker.com. It's a red herring, a feel-good feature that just limits Google's competition and doesn't really change the information collected on us.
Unless you're somehow claiming that your browsing history was used to train an AI for identifying tanks or terror connections. For the former, that makes no sense; for the latter, the data is so emulsified that it can't really be considered your data, any more than you could lay claim to a cat recognizer that was trained on a billion cat photos, some of which happen to be from your blog.
(And that's even assuming one accepts the premise that Google's cache of browsing data was used to train the AI that the Israeli government is using. In reality, that information is deeply firewalled and doesn't see the light of day for other applications).
You're arguing in bad faith by making this about browser history; this is about data collection by the sensor array that is your smartphone, two very different things. It's hilarious to claim that what Google is doing by disabling app access matters at all, when Google created the problem and profits from it in a really shady way, all the while pretending to do you a service by protecting you from those 'shady' apps (and 3rd-party app stores like, say, F-Droid). And then using that data to _literally_ kill people. I'm not saying those apps aren't shady; I'm saying Google pretending to protect you is shady.
Sorry; I just don't follow. Google isn't "protecting" me from F-droid; I have it on my Android right now. Nor is Google using cellphone telemetry to kill people. Nor is Google (AFAIK; if there's evidence to the contrary I'd be interested to see it) providing cellphone data to nations that are targeting them for death (Google doesn't even own a cell tower deployment). Nor is geotargeting people based on cellphone data a system limited to Google's architecture; that's a feature of cellphones, because they're little radios we carry in our pockets that continuously broadcast to a mesh network in an attempt to allow connection to it.
I don't think I'm arguing in bad faith, but I am trying to argue with someone who seems to be operating from a source of facts I don't have access to. You seem to be upset that Google makes cellphones? What am I missing here?
>Sorry; I just don't follow. Google isn't "protecting" me from F-droid
Yes, they give you a warning to scare off normal users, and you have to enable installing from 3rd-party sources. My point isn't that they're "protecting" you at all; my point is that it's security theater.
>Nor is Google (AFAIK; if there's evidence to the contrary I'd be interested to see it) providing cellphone data to nations that are targeting them for death
Various subsystems on Android are controlled by Google, and they enable Google to collect and consolidate all of the telemetry/usage data, etc. (effectively, Google is root on your phone).
This information is used to select targets and kill people:
"Since 2002, and routinely since 2009, the U.S. government has carried out deliberate and premeditated killings of suspected terrorists overseas. In some cases, including that of Anwar Al-Aulaqi, the targets were placed on “kill lists” maintained by the CIA and the Pentagon. According to news accounts, the targeted killing program has expanded to include “signature strikes” in which the government does not know the identity of individuals, but targets them based on “patterns” of behavior that have never been made public. The New York Times has reported that the government counts all military-age males in a strike zone as combatants unless there is explicit intelligence posthumously proving them innocent."
You're not bringing much evidence to the table to single out Google for your frustration. Targeted tracking of individuals with cellphones is enabled by every cellphone, by virtue of the fact that it's a radio whose signal strength and connections are logged and forwarded by the towers themselves; there's nothing special Google is doing to modify that process. So I don't know why we should focus on Google and not, say, T-Mobile or AT&T or TracFone or the entire cellular infrastructure.
You seem to be alleging that Google is brokering third-party access to data stored on the phone or generated by the phone (beyond the telemetry that's natural to every cellphone), but there's no evidence to support that hypothesis. Have I misunderstood what you're alleging?
Google's value in tracking is in providing services to users with the tracked data and (in the ads arm) linking advertisements to potential interested users (which is a system they broker internally).
They don't hand data to third parties; third parties hand data to Google, and Google might kick out answers to questions, but it does not kick out answers to questions like "Hey, is this person a terrorist?" There's no program for that. Hell, Google doesn't even kick out answers to questions like "Would Bob like to buy my shoes"; the entire ad network is architected to minimize the ways an advertiser could glean the identity of a specific user who saw their ad.
The Network Mapper can export to Grafana Tempo (contributed by the community!), but doesn't have to. You can get its output as text, JSON, PNG or SVG using a CLI or an API (directly from the deployment in your cluster), and use it to auto-generate network policies.
Built to avoid eBPF and reliance on a particular CNI, with the intention of running on older nodes, with low privileges, a low performance footprint, and, most importantly, zero config.
The problem with IAM systems is that they tend to try to encompass so many different functionalities, while staying unopinionated, that there are just too many ways to achieve similar end results. This opens the way for endless bikeshedding, which is unfortunately inevitable to some degree in large enough organizations.
This is a bit of a shameless plug, but I hope since it's an open source project it's okay. I'm working on a suite of tools called Otterize (otter and authorize, get it, haha :) that automates IAM for Kubernetes workloads.
You label your Pods to get an AWS/GCP/Azure role created, and in a Kubernetes resource you specify the access you need; everything else is done by the Otterize Kubernetes operators so that your pod just works.
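From memory, the declaration looks roughly like the ClientIntents resource below; field names and the API version may have drifted, so check docs.otterize.com for the current shape:

```yaml
# Sketch from memory -- verify against docs.otterize.com.
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: checkout
spec:
  service:
    name: checkout      # the workload making the calls
  calls:
    - name: orders      # a workload it wants to be allowed to call
```

The operators read this and generate the underlying policies, so the resource stays the single, versioned source of truth for who may call whom.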
It's a lot simpler than all the kung fu you normally have to do, but it's not magic; honestly, it's just the result of limiting scope and having an opinionated view of what the development workflow should look like. Basically, instead of maximizing capabilities, it trades some of them to maximize developer comfort.
Check it out if you're keen on contributing, or just think IAM has a tendency to devolve into a mess ridden with politics.
github.com/otterize/intents-operator and docs.otterize.com
ChatGPT uses the model text-davinci-003, which was released alongside it. The previous model of the same kind, text-davinci-002, seems to perform much less impressively in a comparison I read. So it's not just the chat interface; the model is genuinely more impressive, in my opinion.
Ah, that's weird. Using text-davinci-003 in the playground gives very different answers from ChatGPT, even when playing with the temperature.