From those commits, would you say this is an RCE vulnerability taking advantage of memory/stack callbacks?
Does this mean an attacker may exploit this vulnerability to compromise an entire system?
Release 99.0.4844.84 has borked my JS canvas library. Currently working on a fix - it was my misunderstanding of the purpose of the Canvas API willReadFrequently flag that left the library open to a severe speed degradation.
In my defence, the documentation implies that the willReadFrequently flag is only a hint to the browser to take a different approach when performing getImageData() operations[1]. However, setting the flag to true also impacts drawImage() functionality[2].
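For anyone unfamiliar with the flag, here's a minimal sketch of where it gets set; the canvas size and drawing are just placeholders:

    // Minimal sketch: the option is passed when creating the 2D context.
    const canvas = document.createElement('canvas');
    canvas.width = 256;
    canvas.height = 256;

    // The hint tells the browser this context will be read back often,
    // so it should keep the backing store where getImageData() is cheap.
    const ctx = canvas.getContext('2d', { willReadFrequently: true });

    ctx.fillStyle = 'rebeccapurple';
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    // Frequent readbacks like this are what the hint is meant to speed up;
    // the surprise is that drawImage() on the same context can slow down too.
    const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
    console.log(data[0], data[1], data[2], data[3]);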
I tried reporting the issue as a bug last night - at the very least the issue needs to be documented - but the form for reporting issues kept collapsing on me so I gave up.
I just upgraded to this and noticed the Reading List has changed design again! They must have gone back and forth thousands of times on this so hopefully this is the final version.
> Not much is known, at least publicly, at this stage about CVE-2022-1096 other than it is a "Type Confusion in V8." This refers to the JavaScript engine employed by Chrome.
Is there a safer JavaScript engine folks can use without having to worry about this sorta thing? Even if it's slower, less compatible, more resource-intensive, etc.?
I feel like, in most cases, I could make do with JavaScript being 10x or even 100x slower, taking up 10x the RAM, lacking some uncommon features, and so forth -- if it meant being able to enable it without needing to worry about new zero-days.
What you're asking for will probably put you more at risk than V8 does:
1) JavaScript engines with any kind of usable performance are inherently complex
2) V8 is hardened, battle-tested and fuzzed/verified by the best engineers at Google and independently by third-party researchers, since inception - the engine you will be using probably won't be
All of this is really a side-effect of Chrome's popularity and Google's resources, even the CVE itself. You would be relying on security by obscurity (in which obscurity = no big userbase = not a high-value target). Have a look at payouts for RCE-capable V8 bugs.
> V8 is hardened, battle-tested and fuzzed/verified by the best engineers
It's built on unsound foundations that cause an endless stream of these kinds of bugs. They make compromises regarding security engineering and then do indeed put a fair amount of engineering resources into mitigating the resulting security problems.
This can be said to be good or bad engineering depending on your viewpoint; the alternatives might, for example, have performance tradeoffs, and it can be valid engineering to make tradeoffs that favour other things at the expense of security. But we also certainly do know practical and proven ways to eliminate this class of memory safety bugs in JS implementations, so it's definitely an engineering choice.
yeah, plus that "the best engineers at Google" has an element of hyperbole. this is one of those times where you can move a statement from being ridiculous to being correct just by throwing in a "some of." it's not like they assigned Jeff Dean to AI instead of Chrome because he just couldn't cut it.
Formulating a plan to make V8 safe with a high degree of assurance sounds like a tall order for a monday HN comment!
I'll just point out that this type confusion bug class is just one of many that plague V8 based on perusing the CVE list, and memory safety errors and other security bugs typical of unsafe C++ seem to play a large part in many. V8 is also huge, and complexity is the enemy of security; there are much smaller JS implementations around.
Just fixing these most high-profile bug classes might only reveal some other fundamental soundness issues. So it may be necessary to start from a clean slate with soundness and safety design constraints when adding features.
I'd prefer a [provably secure](https://en.wikipedia.org/wiki/Provable_security) JavaScript-engine as a default. Or, if provable-security would be a bit much for a near-term project, something more heavily based in a simple engine-design, without trying to optimize stuff and perhaps including seemingly-redundant run-time checks.
Ya know, stuff like type-checking arguments, using stronger restrictions on async-calls to avoid potential race-conditions, more parameter-validation, relying on automatic-memory-management to avoid bugs, always bounds-checking on array-accesses, always overflow-checking math, and so forth. In general, code that's designed to be simple and plainly correct, resisting the temptation to optimize.
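To illustrate the flavour of checks I mean, here's a toy sketch in plain JavaScript; the helper names are made up and this isn't from any real engine:

    // Illustrative only: "seemingly-redundant" runtime checks that trade
    // speed for plain, checkable correctness.
    function checkedGet(array, index) {
      if (!Array.isArray(array)) throw new TypeError('expected an array');
      if (!Number.isInteger(index) || index < 0 || index >= array.length) {
        throw new RangeError(`index ${index} out of bounds [0, ${array.length})`);
      }
      return array[index];
    }

    function checkedAdd(a, b) {
      if (!Number.isSafeInteger(a) || !Number.isSafeInteger(b)) {
        throw new TypeError('expected safe integers');
      }
      const sum = a + b;
      if (!Number.isSafeInteger(sum)) {
        throw new RangeError('integer overflow');
      }
      return sum;
    }

    console.log(checkedGet([10, 20, 30], 1)); // 20
    console.log(checkedAdd(2, 3));            // 5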
Don't get me wrong, I appreciate that a lot of security-folks do good work trying to help identify-and-patch vulnerabilities in V8/etc.. And I appreciate that that enables a balance between performance and security that might be right for some applications. However, there're a lot of cases where I'd prefer a heavier focus on security.
probably a JS engine with a guarantee, through formal methods, that sandbox escapes are impossible. for a JIT engine, this might mean asserting that control flow inside generated code never leaves it, and only accesses pages allocated to it. these obligations would also need to be carried through to standard library implementations.
i.e. provably secure analogously to how seL4 is provably secure. this would be infeasible for a browser, but you could actually accomplish it by running seL4 and executing the JS in an seL4 VM. you'd still have to prove everything the VM has access to is similarly secure, so still not feasible for a browser, but you could maybe make the Node.js equivalent of MSR's Ironclad.
I'm just very sad we don't have safe hardware w.r.t. memory corruption via rowhammer and I don't think any of the typical formal methods or seL4 account for it. Safely running untrusted code is nearly impossible on modern computers.
> JavaScript engines with any kind of usable performance are inherently complex
Depends on what you're using it for, surely? If you're just watching videos or scrolling through the news (where the JavaScript takes a back seat) then wouldn't any implementation be fast enough?
> just watching videos [...] (where the JavaScript takes a back seat)
If you're talking about media websites like YouTube, Netflix, Amazon Prime Video, Twitch and so on, or even just videos on sites like Facebook, video streaming actually needs a lot of JavaScript under the hood. Some of it is even performance-sensitive already today.
Video streaming/processing is done by JavaScript or even wasm binaries in popular sites nowadays - only decoding is on the browser itself. YouTube for example also does elaborate obfuscation of parameters to their video CDNs, which gives you a nonce that you can use to access them with usable speeds.
It depends heavily on the website we're talking about but there's generally a lot going on when streaming video on the web.
Usually what happens at the core is that JavaScript will download video, audio and subtitles progressively through small chunks of data called "segments" and push them to JS-exposed buffers called 'SourceBuffer'. Deciding which chunks to download, downloading them and pushing them already requires a lot of JavaScript: for example, you need to decide which video and audio quality to download through adaptive algorithms, which tend to be quite complex, and there are also a lot of media events that need to be reacted to, like seeking, rebuffering, changing tracks, etc. You also have a lot of JavaScript there to limit the risk of playback stalling and, if you have DRM, a lot of JavaScript to retrieve the right decryption keys (an operation you generally want to finish as soon as possible, as it is often the last step before playback).
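A stripped-down sketch of that core loop, assuming a single quality; the segment URLs and codec string are placeholders, and real players layer adaptive logic, event handling and DRM on top:

    // Sketch only: single quality, hypothetical URLs and codec string.
    const video = document.querySelector('video');
    const mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', async () => {
      const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.640028"');

      // Download segments progressively and push them to the SourceBuffer.
      const segments = ['init.mp4', 'seg-1.m4s', 'seg-2.m4s']; // placeholders
      for (const url of segments) {
        const data = await (await fetch(url)).arrayBuffer();
        await appendSegment(sourceBuffer, data);
      }
      mediaSource.endOfStream();
    });

    // appendBuffer() is asynchronous: wait for "updateend" before appending more.
    function appendSegment(sourceBuffer, data) {
      return new Promise((resolve) => {
        sourceBuffer.addEventListener('updateend', resolve, { once: true });
        sourceBuffer.appendBuffer(data);
      });
    }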
On some websites, you might want to play with as low latency as possible between the broadcaster and the user. In those cases, you might want to optimize your JS code, have very small checking intervals, and you might again prefer to run as much code as possible in a worker to avoid rebuffering due to the risk of the main thread being too occupied doing other things to push media segments.
Even for non-low-latency content, some websites which already have a lot of JavaScript running besides video playback, such as Facebook and YouTube, have been pushing browsers for quite some time now to be able to use the main JavaScript media streaming APIs in a worker (https://github.com/w3c/media-source/issues/175), i.e. in another thread.
You could also have complex content (lots of audio and subtitle languages, many audio and video qualities, multiple decryption keys, long durations, etc.) that may lead to big performance and memory issues when parsed on the JS side. That content is usually described through a file called a "manifest" or "playlist", which in these cases can take a lot of resources to process (the document can be a huge XML file of up to 15MB where I work), often leading either to the linked JavaScript running in a worker or to the use of WebAssembly (the solution we chose). Even more so if you consider live content, where this document might have to be refreshed regularly.
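A rough sketch of the worker option; the manifest URL is hypothetical and the "parsing" is only a placeholder for the real work:

    // The worker body is inlined through a Blob URL so the example is
    // self-contained; a real player would keep it in its own file.
    const workerSource = `
      self.onmessage = async (e) => {
        const res = await fetch(e.data.manifestUrl);
        const text = await res.text();
        // A real player builds its whole representation of periods,
        // adaptation sets, segments, etc. from the XML here.
        self.postMessage({ byteLength: text.length });
      };
    `;
    const worker = new Worker(
      URL.createObjectURL(new Blob([workerSource], { type: 'text/javascript' }))
    );

    worker.onmessage = (e) => {
      console.log('manifest processed off the main thread:', e.data);
    };

    worker.postMessage({ manifestUrl: 'https://example.com/manifest.mpd' });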
You might also want to apply some processing to the media played, for example transmuxing MPEG-TS segments into MP4 ones so they can be played by more browsers. Those are very frequent operations that can be performance-sensitive and are also often performed in another thread.
Again, it very much depends on the website, and I mainly know the use cases I personally encountered. Generally, adaptive media players are very complex JavaScript beasts.
Also, performance issues and poor memory management on the browser side can lead to a lot of problems. A recurring issue at my work is bad performance leading, as a side effect, to a very poor quality being played (due to the high overhead of loading segments, pushing them to the buffer, etc.).
All these would suffer without a powerful and featureful JS engine like we generally have today on most browsers.
>Is there a safer JavaScript engine folks can use without having to worry about this sorta thing? Even if it's slower, less compatible, more resource-intensive, etc.?
You can disable JIT in Firefox[1], which makes it fall back to an interpreter. That should theoretically make it safer, as there are fewer optimizations going on and less generated code being directly executed by the CPU.
If you take those charts at face value, they're pretty incredible. The JIT ends up worse than the interpreter for most (real-world non-synthetic) use cases for power usage, memory usage, and startup time. Page load time is a wash. And this is after Google has poured $billions into optimizing V8.
For clarification, the graphs aren't comparing JIT vs. interpreted; they are comparing the JIT tier being allowed vs. not allowed.
Even with JIT enabled, most functions are still just interpreted, which is why the vast majority of tests come out equal: the vast majority of the tested code is interpreted either way. It's only when the JS engine thinks it can start to realize performance gains on hot code that it will start to JIT it. You can see this behavior in the "Average improvement and regression" graph, where JIT starts trading other stats for performance gains.
Knowing this and looking at top "daily browsing" sites, you get results about exactly where you'd expect: the JIT engine is tuned to let the vast majority of the code on these sites be interpreted, since much of it is only called a handful of times or less, leading to little difference. You see a bit of the JIT tiering kick in where it picks up a few of the hot pieces of code at a trade-off in the other stats.
If you look beyond "daily browsing" sites into web apps and such, that's where JIT is actually focused and where you'll see the most gains. It's intentionally not trying to get involved on lightweight pages because it makes no sense to do so there; regardless of how much time and money is invested into the JIT, the best-performing strategy for some one-pass JS that sets the page layout is to interpret it.
Microsoft has Edge, which is a browser. If your argument is that it's Chromium-based and thus not their own browser, there are several other companies like Opera, Vivaldi, and Brave whose sole product is a Chromium-based browser. I don't think it would be appropriate to say that those companies "don't have a browser".
If you're worried about browser vulnerabilities in the javascript engine, have you considered disabling javascript by default and enabling it per-site on just the sites that you trust?
However, I'd prefer to have a secure JavaScript-engine that could be kept on by default, and then enable a fast JavaScript-engine on a per-site basis.
For example, I have an exception on here for HackerNews to use JavaScript. But the JavaScript HackerNews uses is trivial; a naive JavaScript engine that's 1000x slower and uses 100x the RAM probably wouldn't even make an observable difference, would it? Except if it's secure, then I could've just had JavaScript on by default (without needing to add an exception), and then I wouldn't have to worry about stuff like if HackerNews gets compromised one day.
A lot of sites seem to do really little things with JavaScript, but break if it's disabled -- some blog sites won't even load posts without JavaScript enabled. It'd be nice to just have a secure browser to view such things with.
Microsoft added some mitigations to Edge a few months ago as defense in depth - wondering now if this is actually exploitable on Edge or if their mitigations prevent it? Any Microsoft/Edge security people on here?
> I feel like, in most cases, I could make do with JavaScript being 10x or even 100x slower, taking up 10x the RAM, lacking some uncommon features, and so forth -- if it meant being able to enable it without needing to worry about new zero-days.
Not on the "modern web" you wouldn't; even with the current speedy versions of V8 and the ${whatever}Monkey now used by Firefox, the thing is often brought to a crawl by the deluge of Javascript. Imagine your current browser, only 100 times slower and 10 times more memory-hungry.
Nope, the solution lies in getting rid of most of the Javascript on most pages. uBlock and uMatrix can help a bit but the real solution lies with web developers. If and when that goal is achieved it would be possible to browse the web using a slow-but-'safe' browser. Some pages (e.g. SPAs) really depend on all that Javascript and as such won't be usable without 'modern' JS engines, but there is no reason for e.g. your bank or payment processor's pages to depend on near-native-speed Javascript engines.
> it would be possible to browse the web using a slow-but-'safe' browser. Some pages (e.g. SPAs) really depend on all that Javascript and as such won't be usable without 'modern' JS engines, but there is no reason for e.g. your bank or payment processor's pages to depend on near-native-speed Javascript engines.
I don't plan on my bank trying to 0day my browser. If anything, I trust them not to do anything malicious more than the sites that actually need to go fast.
All of those analytics would already have the ability to run in the bank's origin and steal my money. What threat model are you operating under where malicious code running on your bank's website is trying to do something other than steal all of your money (which they're already able to do if they're executing on the bank's origin)?
The threat model where the analytics company is hacked. I don't know why you would trust your bank to never let that happen.
There are some APIs on the bank site you need to conduct your business, and there are also some extraneous APIs. Yeah, sure, you tell the bank it's their fault clickwatcher.js got hacked, and maybe they give you your money back, but it seems like unnecessary exposure to unnecessary hassle to leave all that crap running fully trusted.
But having a more hardened browser literally does nothing to protect you. If your bank's website is hacked because of a third party API, the most well-hardened browser (as the commenter that I replied to describes) in the world will happily allow all of your money to be stolen. A slow-but-secure JS engine does not protect you from JS that's doing things it's allowed to do.
Oh, oh, oh. I thought this was the thread where we turned javascript off or used adblock or some such. Yeah, secure vs insecure jit engine isn't going to help here.
You don't seem to understand what I wrote so I'll explain it:
- imagine a slow-but-secure browser, 10 to 100 times as slow and using 10 times as much memory as stated by the parent poster
- imagine your bank and payment processor using a minimal amount of Javascript on their sites to make it possible to use that secure-but-slow browser without incurring too big a performance penalty
Do you now see what I mean? It is not that your financial institutions would zero-day you, it is that you'd use the secure-but-slow browser (or browser mode) to access those sites. Secure, because you're dealing with financial data. Slow because that is what the parent poster stated as the price he'd be willing to pay for a secure browser.
You can use your insecure-but-speedy browser to watch cat videos where the H4CkZ0Rz can try to zero-day you to their hearts' content, because that browser does not have access to sensitive data. You could try to watch those cat videos with the secure-but-slow browser but that'd transport you back to the late 90's with single-digit frame rates (cat slide shows?).
> Do you now see what I mean? It is not that your financial institutions would zero-day you, it is that you'd use the secure-but-slow browser (or browser mode) to access those sites. Secure, because you're dealing with financial data. Slow because that is what the parent poster stated as the price he'd be willing to pay for a secure browser.
I understood what you meant the first time. Again, the risk you're mitigating is one that simply doesn't exist. A slow but hardened engine protects against attacks that simply don't happen for the sites you're talking about.
The threat here is abuse of the JS engine. If you're on my bank website origin running slow JavaScript you can already steal all of my money. Hardening the JS engine for that site is pointless because any bad JS running on the page already has the literal most valuable thing I have: access to all of my money. Having a provably not-zero-day-able browser does nothing to mitigate an attack if there's bad code running on the page.
I've been waiting for the arguments against Flash and Java applets (constant vulnerabilities + sites clogged with bad code) to finally become the argument against Javascript. Still waiting. The only reason it hasn't seems to me to be the massive investment by Google in speeding up the engine to the point where something even slower and more full of crud can seem relatively snappy on a mobile device.
whenever someone says "the solution is" and then says something that depends on a bunch of individual actors acting of their own accord rather than something systemic, it's hard to take them seriously. in a hypothetical dream world that might be a solution.
Don't forget that there is a sandbox. Even if there is a vulnerability in V8, you need to pair it with a vulnerability in the sandbox to exploit the system.
There are certainly other javascript implementations. For example, here's one I stumbled upon recently that's written in plain Go: https://github.com/dop251/goja
Of course, it won't help you since it's not built into a web browser.
For Windows, IE11/Trident. This may sound ridiculous, but if you think about it, it's still maintained security-wise (and will be forever, as per MS), and since its codebase was frozen a few years ago, its attack surface can only shrink with time.
So if you're OK with the limited compatibility, it might be worth considering.
>it's still maintained security-wise (and will be forever, as per MS),
Source? According to microsoft:
>Please note that the Internet Explorer (IE) 11 desktop application will end support for certain operating systems starting June 15, 2022
>Customers are encouraged to move to Microsoft Edge with IE mode. IE mode enables backward compatibility and will be supported through at least 2029. Additionally, Microsoft will provide notice one year prior to retiring IE mode.
Your best bet right now for IE 11 is an installation of Windows Server 2022, which contains IE 11 and will be supported till Oct 14, 2031. Still, it's unknown whether IE 11 will still be supported by then.
A lot of confusion between various comments so far so this is my attempt to re-baseline:
- Chakra is the JavaScript engine in I.E. 11 (and later forked for the old MS Edge), Trident (MSHTML) was the browser engine (forked into EdgeHTML for the old MS Edge).
- The I.E. 11 desktop application is just that, the desktop application. It is not all of I.E. 11 or its engines, the rest of which are still in Windows 11 even.
- I.E. mode is the first party way to access the remaining portions of I.E. 11 via the current Chromium Edge, this is what allowed them to sunset the I.E. desktop application.
All that said I don't particularly buy this as being particularly more secure. Sure, it's only getting security fixes but that doesn't inherently mean it is more secure or getting more security fixes than modern solutions. It could just be becoming an outdated security architecture that is only patched often enough to keep the minute userbase happy enough.
It depends on what "Javascript engine" means, and what sort of javascript you want to execute.
If you want something that can run ES5 code, this might be your ticket. But if you want something that can run "modern javascript" (where the meaning of "modern" changes over time), then IE11/Trident won't help. It doesn't even support ES6, which came out in 2015. Modern websites often depend on javascript language features newer than that. Npm packages are the same.
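For example, a snippet as mundane as the following ES2015+ code won't run there without transpilation and polyfills (the names here are just illustrative):

    // ES2015+ features IE 11's engine never got: classes, arrow functions,
    // template literals, destructuring with rest, and native Promise.
    class Greeter {
      constructor(name) {
        this.name = name;
      }
      greet() {
        return `hello, ${this.name}`;
      }
    }

    const [first, ...rest] = [1, 2, 3];              // destructuring + rest
    const greeter = new Greeter('IE 11');

    Promise.resolve(greeter.greet())                 // native Promise
      .then((msg) => console.log(msg, first, rest)); // arrow function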
i think the problem with IE11 is going to be the rendering more so than the javascript engine. it doesn't support css variables and supports only a custom version of the grid syntax, so sites are only going to get more broken
I just love MS. A company so focused on security and caring about its customers. I always encourage people to use Edge. We need to stop spyware companies like Google.
My, what a difference twenty years makes. The relative good and evil of MS and Google have totally swapped. Shows the cost of Google's failure to find profitable businesses other than ads. It is sad for the software revolution: with so much talent among their employees, they had so much potential to improve the world.
Microsoft is not focused on security. The amount of trivially exploitable extremely serious security bugs in Azure scream "nobody even pretends to think about security here".
I'm assuming all cloud providers have vulnerabilities discovered, reported and patched all the time, as it happens in any complex set of software.
Do you have any link where the cloud providers are compared to show that Azure in particular has a higher rate or are you just making unfounded speculations?
Is a good starting point. No other big cloud provider has had vulnerabilities that allow crossing the tenant barrier, and Azure has had two of them. If you read the details, both of them are simply unacceptable - especially the second one is trivial and shouldn't have passed any sort of security review.
I like that in chrome one can turn off javascript and images by default, then re-enable it for select sites only, or leave a tab open to re-enable it temporarily only.
You are making the assumption that an engine with fewer optimizations that runs slower will be safer by default, but I fail to see the connection between the two.
So it does appear that there is a fairly heavy connection between the two things.
I am not an expert in JITs or JIT-related security issues, but from my understanding, since JITs get to bypass the normal W^X memory restrictions, they make a really nice target for exploits and RCE.
Likewise, the current zero-day affecting Google's Chrome presumably could've been prevented with more robust type-checking on everything (assuming the bug is as-reported in the article). Such type-checking might be a bit slower, and possibly require a bit more RAM if objects weren't already carrying type-identifiers, but then no such zero-days, either.
A specific optimization that might be faulted for this zero-day in Google's Chrome, etc., might be describable as [type erasure](https://en.wikipedia.org/wiki/Type_erasure). Presumably this was done because carrying type-identifiers (basically a tag that says what type an object is) requires more RAM (to store the type-identifiers) and more computation (to check that type-identifiers are correct/etc.). However, other optimizations may've been factors in this zero-day too.
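As a toy illustration of what carrying and checking type-identifiers could look like if written out in plain JavaScript (this is not how V8 is implemented, and the names are made up):

    // Each value carries an explicit type tag, and every consumer checks it.
    const FLOAT_ARRAY = Symbol('FloatArray');

    function makeFloatArray(values) {
      return { tag: FLOAT_ARRAY, values: Float64Array.from(values) };
    }

    function sumFloatArray(arr) {
      // The "redundant" check that costs RAM (the tag) and time (the test),
      // but refuses to treat anything untagged as a FloatArray.
      if (arr === null || typeof arr !== 'object' || arr.tag !== FLOAT_ARRAY) {
        throw new TypeError('expected a FloatArray');
      }
      let total = 0;
      for (const v of arr.values) total += v;
      return total;
    }

    console.log(sumFloatArray(makeFloatArray([1.5, 2.5]))); // 4
    // sumFloatArray({ values: [1, 2] });                   // throws TypeError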
2022-01-04: Earliest identified exploitation (according to the linked article).
2022-02-10: Google TAG discovered the vulnerability.
2022-02-14: Google Chrome was patched.
????-??-??: Hopefully most folks have updated by now, such that that particular attack isn't getting anyone anymore.
According to the article:
> Google’s Threat Analysis Group (TAG) attributed two campaigns exploiting the recently patched CVE-2022-0609 (described only as “use after free in Animation” at the moment) to two separate attacker groups backed by the North Korean government.
Generally, "use-after-free" vulnerabilities could be prevented by using more secure memory-management systems. To be clear: this is easy to do, programming-wise; presumably the vulnerability was able to occur because the software-design favored performance over security.
Is there a site/service/mailing list that provides notifications for critical/RCE/in-the-wild exploit patches? Keeping every piece of software you run up-to-date takes a lot of work, and something like that would help with knowing what to prioritize.
Yes! Computer Emergency Response Teams (CERTs)[1] exist in most countries and publish security advisories as newsletters or RSS.
e.g. CERT-EU security advisories [2]
But there is so much software and so many exploits that the signal-to-noise ratio is low if you are not in charge of a big IT infra.
I took a look and my first impressions are not good.
1. Like you mentioned, the signal-to-noise ratio is pretty bad, e.g. "OpenSSL/LibreSSL Vulnerability (CERT-EU Security Advisory 2022-017)", which is a DoS exploit that consumers would likely not care about. There's also no vendor/product filter, so I get notifications about "H2 Database Console" that I don't care about.
2. It's slow/out of date, e.g. "Multiple Vulnerabilities in VMware (CERT-EU Security Advisory 2022-013)" was published on February 17, 2022, but the patch was published January 15th, a month earlier.
I subscribe to debian and openbsd security advisory email lists, which works for me generally to know what is going on in the space(s) I care more about:
funny enough, was asking myself the same question yesterday after 5 minutes of googling didn't get me anywhere. I see a recommendation mentioned below, but as I also saw, it's hard to find something where you can control the signal-to-noise ratio
I use snap for some applications in spite of the trouble it has caused me. I was super-happy to find out that it had upgraded me to a not-vulnerable version of Chromium before I even knew to look.
For all of the (deserved) hate snap gets, there are some shining upsides.
securing a machine that is updated regularly and runs untrusted code is not realistic, monitoring network exfil is.
an exploit that cannot communicate is likely benign and easy to detect in the attempt.
monitor all outbound network connections with a gui prompt that defaults to deny. whitelist trusted domains/ip for a better experience and a bit less security.
macos has littlesnitch[1], linux has opensnitch[2], or roll your own on libnetfilterqueue[3].
bonus points if the filtering happens upstream at a router or wireguard host so a compromised machine cannot easily disable filtering.
bonus points if the filtering is at executable level granularity instead of system level.
> monitor all outbound network connections with a gui prompt that defaults to deny. whitelist trusted domains/ip for a better experience and a bit less security.
> bonus points if the filtering happens upstream at a router or wireguard host so a compromised machine cannot easily disable filtering.
Is it possible to combine these two with open/tinysnitch somehow? It'd be nice to easily build a whitelist but with the way Windows works I couldn't trust any firewall that was running on Windows itself.
filtering upstream is easy, just send all traffic to a linux wireguard server and run a snitch there. getting the gui prompt is a bit trickier. for maximum trust, that gui should probably be on another device than the original machine. ie a push notification to your phone.
I would like to analyze the issue of browser security without controversy. Can the mitigations that Edge puts into practice (I'm talking about "Super Duper Secure Mode" and "Enhanced Security") prevent exploits of V8 engine bugs like this 0-day?
Is this platform-dependent, or do the mitigations work well everywhere?
I mean, for example, whether a feature like ACG is available out of the box on macOS and Linux.
Such an analysis would be very interesting because I have only read analyses related to privacy and not about security and integrity. (I mean comparisons between Chrome, Edge, Brave, etc.)
Are in-app browsers in Electron even secure in the first place? Does it use Chrome-style sandboxing with multiple processes, etc.? Do bugs in the Electron engine get patched in a timely fashion?
Genuinely asking here. I've never written an Electron app personally so I don't know how this stuff is done exactly, but the idea of in-app browsers in Electron apps sounds terrifying to me, security-wise.
why is chrome having so many updates within the past few months? is it because of coverage? (more users?). i use chrome off and on between that and firefox depending on the site and i am surprised how often i've been reading about issues with chrome.
this just goes to show that updates are always 2 or so steps behind. It's a near certainty that governments and top criminal organizations have a trove of exploits for all major programs, with new ones created after old ones get patched.
I'm not saying it -isn't- in there, just that it's not a 100% chance it's there. I don't think the exact "failure" has been cited yet. Would be good to check the qutebrowser webpage or the QtWebEngine page rather than a random HN asshole like me :D
I don’t know if you’re joking or not, and I say this as someone who uses Edge as their primary browser, but Edge does not improve the situation you describe. Edge is just a flavor of chromium at this point and absolutely gives Chrome a run for its money in the tracking and telemetry department.
Heh, my corp locks down the Edge updates and bundles them with the OS updates. Edge is going to be vulnerable to this one for months, maybe a year, longer than Chrome.
Just what the doctor ordered in the middle of a war which is also waged in the information space. Hopefully the fact that it's in V8 will mean the exploit takes a bit longer than usual to proliferate.
It feels to me like the entire OS security model is broken, and leaving security up to applications, even well-resourced ones like Chrome, is a fool's errand.
Is there any way we could benefit from starting again and building a secure OS from first principles? Isn't this one of Fuchsia's goals?
You have to start further back than you realize. Almost all computers nowadays ship with a second dedicated CPU and OS that you can't access or shut off. They are network self-aware and it is a backdoor. The most well known one is called the Intel Management Engine.
There is no point having better software if you can't even secure the hardware. Yes, the risk is minimal because even if the key to the ME leaks, it will never be given away or sold because it's too valuable. It is still a sense of disquiet for me that it is there in the first place. It doesn't add to the performance or security of your existing setup. It is there to make things easier for others.
I'm fine with the ME, but if it ever did leak, it very well could be sold or dumped on a pastebin.
Hackers are unpredictable. They could throw the plans for a fusion reactor that saves the world in the ocean. They could launch a nuke for the lulz. They can be crazier than wallstreetbets people.
The server motherboard I just bought has this as well. Thankfully access to it is at least isolated to a separate network port. I'm debating supergluing it closed or maybe physically disconnecting the port somehow.
You may want to review that very carefully: typically, if that separate network port doesn't have a live network on it that issues DHCP addresses, the functionality will fall back to the port that is attached.
Oh fun. Thank you for the tip! Any suggestions on how to go about this? I'm a relative newb in these matters. Switching from macOS to Linux as a daily driver.
Keep an eye on ports 16992-16995, 5900, 623, 664, and realize that packets destined for those ports may never become visible to the OS so you'll have to catch them in transit to the board. Another place to look at is what DHCP leases are issued by your DHCP server, conceivably the management engine could request an address for itself.
Also be aware of the sideband interface[1] available to the IPMI by checking the block diagram in your motherboard's manual. For instance, here's AsrockRack's X470D4U diagram[2] showing the IPMI can be accessed directly through its dedicated NIC and also sideband through one of the main NICs.
But we do know that it is Javascript-related, so please correct me if I'm wrong, but disabling JS for all websites except the ones you really, really trust and need should offer long-term general protection against such 0-days in most cases.
There are lots of 0day exploits outside of the JavaScript engine. Going down this path, it would be safest to not use the web at all, or really just not own a computer.
There are a lot of ways to die outside of lung cancer though. Going down this path, it would be safest to not drink alcohol, not drive a car, or really just not live life at all.
My point here is that there are some things that have outsized impacts and can be avoided in isolation. Smoking is like that for health.
Javascript, ActiveX, Java web applets, Flash, or any other way of executing arbitrary Turing-complete remote code on my local machine directly - those are all vastly more likely to lead to CVEs than HTML parsers, image parsers, and other functionalities of browsers.
It's perfectly possible to identify and eliminate larger attack surfaces without slippery-sloping yourself into not being able to take smaller risks.
No, I think it's reductio ad absurdum; what I mean is a reasonable means of reducing risk for people who don't use that many web apps and consume mostly text, such as news, etc.
Don't think this has to do with web standards, it's probably JIT-related. Google should just turn that off; the majority of 0-days seem to be because of that.
[0] https://msrc.microsoft.com/update-guide/vulnerability/CVE-20...