SteamOS does provide support for common anti-cheats (I don't know the details), built in collaboration between the anti-cheat makers and Valve, but many games specifically opt out of this support
Also, ultrawide monitors. They exist and provide more immersion. The typical resolution is 3440x1440, which is high but at the same time has low PPI (basically a regular 27" 1440p monitor with extra width). Doubling that is way outside modern GPU capabilities
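For anyone curious, the PPI claim checks out. Quick math, assuming (my guess) a typical 34" ultrawide panel:

    import math

    def ppi(w_px, h_px, diag_inches):
        # pixels per inch = diagonal resolution in pixels / diagonal size in inches
        return math.hypot(w_px, h_px) / diag_inches

    print(ppi(3440, 1440, 34))  # ~109.7 PPI, 34" ultrawide
    print(ppi(2560, 1440, 27))  # ~108.8 PPI, regular 27" 1440p

So the pixel density really is nearly identical; the ultrawide is just wider.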
Well, modern games + modern cards can't even do 4k at high fps with no DLSS. The 8k story is a total fairy tale. Maybe a "render at 540p, display at 8k" kind of thing?
P.S. Also, VR. For VR you need 2x4k at a stable 90+ fps. There are (almost) no VR games, though
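Rough pixel-throughput math for scale (taking the "2x4k at 90 fps" figure above at face value):

    res = {"1080p": 1920 * 1080, "4k": 3840 * 2160, "8k": 7680 * 4320}

    print(res["4k"] * 60)      # ~0.50 Gpixels/s for 4k60
    print(2 * res["4k"] * 90)  # ~1.49 Gpixels/s for the VR case, ~3x 4k60
    print(res["8k"] * 60)      # ~1.99 Gpixels/s for 8k60, 4x 4k60

So even 8k60 asks for four times the shading work of 4k60, before any ray tracing.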
> modern games + modern cards can't even do 4k at high fps
What "modern games" and "modern cards" are you specifically talking about here? There are plenty of AAA games released last years that you can do 4K at 60fps with a RTX 3090 for example.
You can't get high frame rates with path tracing and 4K. It just doesn't happen.
You need to enable DLSS and frame gen to get 100fps with more complete ray and path tracing implementations.
People might be getting upset because the 4090 is WAY more power than games need, but there are games that try and make use of that power and are actually limited by the 4090.
Case in point: Cyberpunk and Indiana Jones with path tracing don't get anywhere near 100FPS at native resolution.
Now many might say that's just a ridiculous ask, but that's what GP was talking about here. There's no way you'd get more than 10-15fps (if that) with path tracing at 8K.
> Case in point: Cyberpunk and Indiana Jones with path tracing don't get anywhere near 100FPS at native resolution.
For anyone unfamiliar with how demanding this is: Cyberpunk at native 4k + path tracing gets sub-20fps on a 4090. Nvidia's own 5090 announcement video showcased this as getting a whopping... 28 fps: https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Ff...
> Also 60fps is pretty low, certainly isn't "high fps" anyway
I’m sure some will disagree with this but most PC gamers I talk to want to be at 90FPS minimum. I’d assume if you’re spending $1600+ on a GPU you’re pretty particular about your experience.
You can also save tons of money by combining used GPUs from two generations ago with a patientgamer lifestyle, without having to suffer through 30fps
I’m sure you could do N64 style graphics at 120Hz on an iGPU with modern hardware, hahaha. I wonder if that would be a good option for competitive shooters.
I don’t really mind low frame rates, but latency is often noticeable and annoying. I often wonder if high frame rates are papering over some latency problems in modern engines. Buffering frames or something like that.
Doom 2016 at 1080p with a 50% resolution scale (so, really, 540p) can hit 120 FPS on an AMD 8840U. That's what I've been doing on my GPD Win Mini, except that I usually cut the TDP down to 11-13W, where it's hitting more like 90-100 FPS. It looks and feels great!
Personally I've yet to see a ray tracing implementation that I would sacrifice 10% of my framerate for, let alone 30%+. Most of the time, to my tastes, it doesn't even look better, it just looks different.
> Also 60fps is pretty low, certainly isn't "high fps" anyway
Uhhhhhmmmmmm....what are you smoking?
Almost no one is playing competitive shooters and such at 4k. For those games you play at 1080p and turn off lots of eye candy so you can get super high frame rates because that does actually give you an edge.
People playing at 4k are doing immersive story-driven games, and a consistent 60fps is perfectly fine for that; you don't really get a huge benefit going higher.
People that want to split the difference are going 1440p.
Anyone playing games would benefit from a higher frame rate, no matter their use case. Of course it's most critical for competitive gamers, but someone playing a story-driven FPS at 4k would still benefit a lot from framerates higher than 60.
For me, I'd rather play a story based shooter at 1440p @ 144Hz than 4k @ 60Hz.
You seem to be assuming that the only two buckets are "story-driven single player" and "PvP multiplayer", but online co-op is also pretty big these days. FWIW I play online co-op shooters at 4K 60fps myself, but I can see why people might prefer higher frame rates.
Games other than esports shooters and slow paced story games exist, you know. In fact, most games are in this category you completely ignored for some reason.
Also nobody is buying a 4090/5090 for a "fine" experience. Yes 60fps is fine. But better than that is expected/desired at this price point.
The new CoD is really unoptimized. On a few-years-old 3080 I'm still getting 100 fps at 4k, which is pretty great. If he uses some frame gen such as Lossless Scaling he can get 120-150. Say what you will about Nvidia prices, but you do get years of great gaming out of them.
Which block did you go with? I went with the EK Vector special edition which has been great, but need to look for something else if I upgrade to 5080 with their recent woes.
I just have the Alphacool AIO with a second 360 radiator.
I’ve done tons of custom stuff but was at a point where I didn’t have the time for a custom loop. Just wanted plug and play.
Seen some people talking down the block, but honestly I run 50c under saturated load at 400 watts, +225 core, +600 memory, with a hot spot of 60c and VRAM at 62c. Not amazing, but it's not holding the card back. That's with the Phanteks T30s at about 1200RPM.
Stock cooler I could never get the card stable despite new pads and paste. I was running 280 watts, barely able to run -50 on the core and no offset on memory. That would STILL hit 85c core, 95c hotspot and memory.
Yep. Few AAA games can run at 4K60 at max graphics without upscaling or frame gen on a 4090 without at least occasionally dipping below 60. Also, most monitors sold with VRR (which I would argue is table stakes now) are >60FPS.
Every time I step out of Elixir I feel like I'm coding with cumbersome construction gloves on, and everything is kinda coerced into working together. But for some reason it's not picking up a lot of steam, and the job situation is rough
Typed languages are in big time, and any dynamic language is facing an uphill battle for adoption, no matter how good it is. Elixir seems to recognize this and is getting there, but it hasn't fully arrived yet.
It’s sorta inevitable. I started with Python and used to hate the constraints types put on my design. But as I started working on larger projects, I got tired of Node-Python-Ruby and wanted something faster with stricter types but without too much ceremony. Found Go to be in that Goldilocks zone. The benefit of typed languages becomes easier to appreciate once you have accumulated a certain amount of elbow grease.
Types and editor tooling, IMO. Both are places where they're making investments, but both are areas where I feel like I'm giving up a lot of power, even though it's admittedly to GET a lot of power.
What exactly is present in IntelliJ that isn't available in VS Code / Neovim with LSP / newer Emacs? I ask because I use(d) IntelliJ for Java, and maybe I'm not using most of its capabilities or they're hidden away in plain sight, but I haven't noticed anything special aside from the slow UI.
For me there are two axes that are lacking (based on Scala and Elixir usage):
1. The feature set is just far weaker. If an LSP implementation and the client both hit 100% coverage, then you might end up having the same refactoring, code generation, follow-definition/load-docs/infer-typing support. But of course the Elixir LSP matrix shows that basically none of the LSPs offer a complete feature set, and you instead get to pick which features are missing. What about debugger support?
2. This could be viewed in some ways as a recapitulation of point one, but I actually appreciate the Integrated in Integrated Development Environment. When adopting a new technology, I very much dislike having to cobble together a disparate set of tools to create a good editor experience, especially at a time when, as a novice, I'm not at all qualified to judge what a good editor experience would be! I call this the Clojure Problem: many Clojure devs decide they have to learn emacs or a complicated fireplace.vim setup in addition to Clojure, and wind up learning neither.
Regarding speed, don't get me wrong, I empathize with complaints about it, and whenever I'm doing C# work for one of our services I'm reminded of just how slow developing Scala can be in particular. But I don't think of LSP approaches as fast, either. Perhaps they're async and don't block the UI, but now I'm just sitting here typing and getting constantly out-of-date feedback from the editor as it and the LSP catch up to whatever text has been entered.
GUID: a 128-bit (16-byte) code is enough to encode a unique ID that can be stored in a central database. That's the smallest possible scenario, I think.
The other option is a unique ID (the same 16 bytes, plus maybe other claims like bottle size) plus an asymmetric signature - 64 bytes, I think, for Ed25519. Should fit into 174 bytes just fine and can be verified without internet
Sequential IDs can be guessed trivially: if one bottle's ID is 100, the next would be 101. With random 128-bit numbers, it's unlikely anyone will ever guess an ID without seeing it. Fewer bits means more collisions; maybe some other number of bits would work
Regarding the signature: with offline verification, a simple self-contained checker can confirm that a bottle is indeed a valid one (it can't count refills globally, though)
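A minimal sketch of the signed-ID option in Python with the `cryptography` package (the 174-byte budget and the claims idea are from above; the key handling and exact byte layout are my assumptions):

    import secrets
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    issuer_key = Ed25519PrivateKey.generate()   # held by the manufacturer

    bottle_id = secrets.token_bytes(16)         # random 128-bit ID, not sequential
    payload = bottle_id + b"\x01"               # plus claims, e.g. bottle size
    signature = issuer_key.sign(payload)        # Ed25519 signatures are 64 bytes

    code = payload + signature                  # 81 bytes total, well under 174
    print(len(code))

    # Offline verification: the checker only needs the issuer's public key
    public_key = issuer_key.public_key()
    try:
        public_key.verify(code[-64:], code[:-64])
        print("valid bottle")
    except InvalidSignature:
        print("fake")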
Yes I can guess an ID. So what? I don't understand the threat model here.
But if the goal is effectively unguessable IDs, 64-bit numbers would be fine. That's less than a one-in-a-million chance of guessing a valid ID, so trying to guess is harder than just looking at some real bottles and copying their numbers.
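Back of the envelope, assuming (generously) a billion real bottles in circulation:

    issued = 10**9
    space = 2**64
    print(issued / space)  # ~5.4e-11: odds that one random guess hits a real ID

That's far below one in a million, so random guessing is hopeless next to just copying numbers off real bottles.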
> a simple self-contained checker can confirm that a bottle is indeed a valid one
> Yes I can guess an ID. So what? I don't understand the threat model here.
You can print fake cola codes for infinite refills :) and some other genuine cola buyer will be very angry because their mint-condition bottle is already used up.
Also, "one in a million" assuming this tech is only being used by this specific product. if anyone else is going to use the same schema for anything else, numbers will dry up much faster. bringing oats as valid cola bottle is funny. if you add prefix, it'll eat up these sweet sweet bytes
Also, 1/million is not that much: you can spam the online check API (and you need one if all you have is IDs) to filter out existing ones. All in all, 64-bit-or-so IDs have too many downsides to be considered useful; one simple misstep and the whole model is broken
You misunderstood what kind of "refills" are being talked about here.
> Also, "one in a million" assuming this tech is only being used by this specific product. if anyone else is going to use the same schema for anything else, numbers will dry up much faster. bringing oats as valid cola bottle is funny. if you add prefix, it'll eat up these sweet sweet bytes
I completely disagree with how you're using numbers here. Whether a number is a valid coke ID has no relation to whether it's a valid oats ID.
The thing the ID is attached to already acts like a prefix, while costing 0 bits.
The only way this could possibly make a difference is if you're scanning through hundreds of thousands of other packages and checking if their numbers happen to be a valid coke ID so you can cut it out and stick it on a coke bottle, all to avoid printing it yourself. That sounds unlikely. In particular you still need to check every number, so you're basically using other packages as a very very slow RNG to save two cents of printing costs.
> Also, 1/million is not that much: you can spam the online check API (and you need one if all you have is IDs) to filter out existing ones
Mass-spamming an API that has even the slightest of anti-spam measures is a pain in the ass, especially if you want more than a couple IDs. The threat of "just find one of the many billions of real bottles and copy the ID off it" is always there, so if technical attacks are orders of magnitude harder then they don't matter.
It could be an ID in the central database, or a JWT-style signed token. More crypto-punk stuff ;) (not related to cryptocurrency). "Valid bottle" sounds funny
Unique codes will finally allow the implementation of "drink verification can to continue" with validation that the user hasn't reused a previously scanned can
This is about Directive (EU) 2022/2380[1]. It does mention USB-PD:
> In so far as they are capable of being recharged by means of wired charging at voltages higher than 5 Volts, currents higher than 3 Amperes or powers higher than 15 Watts, the categories or classes of radio equipment referred to in point 1 of this Part shall:
> 3.1. incorporate the USB Power Delivery, as described in the standard EN IEC 62680-1-2:2021 “Universal serial bus interfaces for data and power – Part 1-2: Common components – USB Power Delivery specification”;
This hopefully means the end to standards-violating nonsense like SuperVOOC? Originally SuperVOOC wasn't USB-PD compatible at all. Now, AIUI, SuperVOOC is partly USB-PD compatible, but only to lower wattages.
Why would it? If you can charge the phone with PD, but also with proprietary standards that offer something better it would meet the regulation. Seems like the best of both worlds.
Again, why would it be worse? There’s very real benefits to some of the proprietary standards that you can’t get from PD. Just because PD offers higher wattage doesn’t mean the overall charging is faster. If you have to generate a ton of heat to convert the incoming power to voltage that the battery can take, it will hinder how fast you can charge your phone. This is especially problematic at higher voltages.
The legislation covers radio equipment up to 100W, and Power Delivery is directly mentioned:
For ‘fast’ charging, the radio equipment listed in Part I of Annex Ia, if it can be recharged by means of wired charging at voltages higher than 5 volts, currents higher than 3 amperes or powers higher than 15 watts, must: (a) incorporate the USB Power Delivery (USB PD), as described in the standard EN IEC 62680-1-2 (as referenced in Annex Ia); and (b) allow for the full functionality of the said USB PD if it incorporates any additional charging protocol.
26. Is a radio equipment allowed to support a higher charging power (e.g. 40 W) when using a proprietary charging protocol than when using USB PD (e.g. 30 W)?
The RED (in its Annex Ia, Part I, point 3.2), ensures interoperability with different charging protocols. For that purpose, radio equipment which is subject to the ‘common charger’ rules must ‘ensure that any additional charging protocol allows for the full functionality of the USB Power Delivery referred to in point 3.1, irrespective of the charging device used.’*
I’m not sure it matters; I hope the market economy will do the rest. I believe it’s slightly cheaper to manufacture a device with a single USB-C port that supports PD than a device with two ports: one 5W USB-C and some other port for faster charging.
A reason not to demand USB-PD: such a law would prevent upgrades to a later, better version of the standard.
> A reason not to demand USB-PD: such a law would prevent upgrades to a later, better version of the standard.
Can we apply some common sense please? You're right that not allowing revised standards would be silly. So, they simply update the law to reference newer versions of the standard. https://eur-lex.europa.eu/eli/reg_del/2023/1717/oj
How is a better standard going to be developed, though? Manufacturers aren't going to innovate, since they aren't allowed to sell anything besides USB-C.... so who is going to do the research and development for better designs, and how are we going to compare different possible improvements if manufacturers aren't allowed to try out anything new?
They are allowed to try out new stuff as long as the baseline is also met. Hence no problem for Apple with MagSafe for example, because their laptops also have USB-C charging.
Standards development is very much done to push regulators to adopt new things. The incentive is either being able to develop new products that give a reason to upgrade, or royalties from the standard, or patent license money.
> how do you revoke a certificate which was used to issue millions of ID cards/passports once it leaks? Does everybody suddenly not have a "valid" ID proof?
You need a cutoff date and some kind of public trail log to prevent backdating new certificates. This can be done via short-lived secondary certs derived from a root one, logged publicly
> You need a cutoff date and some kind of public trail log to prevent backdating new certificates.
You might be able to do it without a public log by using an RFC 3161 (TSP) secure timestamp facility like the unfortunately named https://www.freetsa.org/. Basically, we want to trust identity attestations ("I am Bill Clinton and this is my face") made by a compromised CA between the time the CA certificate was created and an estimate (hopefully a conservative one) of the date of compromise. We want to distrust any certificates signed outside this time range.
This way, in the event of a CA compromise, we don't have to revoke everyone's certificates.
I think we can implement this security model by having the CA ask the TSP server to countersign each certificate that the CA issues. The TSP would sign a hash of the whole CSR, including both identity ("I am Bill Clinton") and biometric (bill-clinton.jpg) information. Anyone can use the TSP's attestation to prove that the TSP server witnessed this combination of inputs at a specific time.
Sure, if you've compromised the CA, you can issue a certificate saying "I am Bill Clinton", but to do so, you need to either use a genuine, up-to-date TSP attestation, giving away the game, or you need to use an old TSP attestation, forcing you to use exactly the original inputs to the TSP. Using the exact inputs wouldn't help you: you want to issue a certificate saying "I am Bill Clinton" with attacker.jpg as the face, not bill-clinton.jpg. The latter won't help you do anything: you don't look like Bill Clinton and you don't have his private key.
An attacker would have to compromise both the CA and the TSP server to pull off a passport forgery. And you can make this process even harder by requiring multiple independent TSP servers to countersign certificates.
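A rough sketch of the resulting trust rule (the verify_tsp_token callback is hypothetical, standing in for real RFC 3161 token verification, not an actual library API):

    import hashlib
    from datetime import datetime
    from typing import Callable, Optional

    def should_trust(csr_bytes: bytes, tsp_token: bytes,
                     verify_tsp_token: Callable[[bytes, bytes], Optional[datetime]],
                     ca_created: datetime, compromise_estimate: datetime) -> bool:
        # verify_tsp_token must check the TSP's signature over our digest and
        # return the attested timestamp, or None if the token is invalid or
        # was issued for a different CSR.
        ts = verify_tsp_token(tsp_token, hashlib.sha256(csr_bytes).digest())
        if ts is None:
            return False
        # Trust only attestations made inside the safe window
        return ca_created <= ts <= compromise_estimate

With multiple independent TSPs you'd just require this to hold for all of their tokens.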
It really isn't, aside from using public key cryptography. There isn't even a concept of a "block" (i.e. a linked list where each node is cryptographically linked to a prior node).
A blockchain can be the store of public data (dump the public keys of intermediate certs into it), but it's not necessary; a public trail log is enough to call out backdated cert issuance
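A toy sketch of such a log, hash-chained like certificate transparency but with no consensus machinery (the entry format here is made up):

    import hashlib, json, time

    log = []  # published append-only; auditors and mirrors keep copies

    def append_cert(cert_der: bytes) -> dict:
        prev = log[-1]["entry_hash"] if log else "0" * 64
        entry = {"cert": cert_der.hex(), "time": time.time(), "prev": prev}
        # the hash covers the cert, the timestamp, and the previous entry's hash
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        log.append(entry)
        return entry

Once auditors have seen later entries, a backdated intermediate cert can't be quietly inserted into the history.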