Hacker News | joshumax's comments

Yeah, I've always been surprised[1] when people come out claiming to be Satoshi but ignore this very blatant writing style in their own texts. I haven't interacted with them much online, but both the double spaces and the British spellings stuck out to me and a few others years ago.

1: https://news.ycombinator.com/item?id=15917598#15919288


Yes, it works, but there are some issues.

The first is that the SoC configuration runs in DS-compatibility mode, which does not expose all of the available RAM in the DSi. Also, nds-bootstrap won't properly redirect I/O requests to the external SD card, which means you either need to embed the rootfs in the NDS ROM (where changes won't persist) or use a flash cart and run the DLDI build instead.


Actually, someone made a working DSi-mode WiFi driver some months ago. It's even used in a DSi-compatible fork of ftpd. Very handy for transferring files to the SD card of a hacked DSi, since it supports WPA2, unlike DS mode, which is stuck with WEP.

https://github.com/devkitPro/dswifi/issues/4#issuecomment-89...

https://github.com/shinyquagsire23/dsiwifi

https://github.com/mtheall/ftpd/compare/master...shinyquagsi...


I very fondly remember this project when I was growing up too, and I credit it with sparking my interest in kernel development.

When trying to port some industrial control software to DSLinux, I ran into some bugs around how the SLOB allocator behaved under memory pressure. One of my patches landed upstream, even though SLOB is deprecated now. Still, as a kid starting out in the embedded space, it opened my eyes to the joys of hacking around with homebrew.

Fun fact: a modded DS still powers a large part of my local observatory's equipment.


> Fun fact: a modded DS still powers a large part of my local observatory's equipment.

I'm interested! I'd like to read more about that.


Sorry, I think this might tangentially be my fault. I know someone who works on the GitHub team and let them know about the situation via IM. A few minutes later both the account and the issue disappeared.


If by "lens flares" we're referring to the six spiky lines jutting from the stars, those are known as diffraction spikes and are a result of the three spider vanes holding the secondary mirror in front of the primary mirror cluster at the center of the optical axis. As light enters the JWST's primary mirror cluster, bright points of light such as stars show visible aberrations due to the position and orientation of these vanes. Since the JWST uses three of them, their orientation creates this six-spike diffraction pattern. The Hubble telescope, which is based on the RC telescope design, uses four vanes instead, which creates a four-spike diffraction pattern.
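The geometry is easy to demonstrate numerically: in the Fraunhofer approximation, a star's image is the squared magnitude of the Fourier transform of the telescope aperture, and each straight vane throws a pair of spikes perpendicular to itself, so three non-parallel vanes give six spikes. A toy numpy sketch (sizes arbitrary, nothing JWST-specific):

```python
import numpy as np

# Circular pupil with one vertical "spider vane" blocking a thin strip.
N = 256
y, x = np.mgrid[-N // 2 : N // 2, -N // 2 : N // 2]
aperture = ((x**2 + y**2) < (N // 3) ** 2).astype(float)
aperture[np.abs(x) < 2] = 0.0  # the vane

# Far-field (Fraunhofer) diffraction pattern of a point source.
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2

# A vertical vane should throw a horizontal spike: compare intensity along
# the horizontal axis against a diagonal at comparable distances from center.
c = N // 2
horizontal_spike = psf[c, c + 10 : c + 60].sum()
diagonal = psf[range(c + 10, c + 60), range(c + 10, c + 60)].sum()
print(horizontal_spike > diagonal)  # the spike direction dominates
```

With three vanes at 60-degree offsets, the same code produces the six-spike pattern described above.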


Funny I should see this on the front page of HN the day before I finish my Stellarium port for the Nintendo Switch [1]. Unfortunately, a lot of the UI code in the 1.0 release made it harder to port, so I'm currently basing the Switch version on the 0.xy tags, but it's still a great accomplishment for the Stellarium team!

1: Unless I get approval from Nintendo AND Noctua you will need an RCMed Switch with Atmosphere to run it.


Why wouldn't Noctua give their approval? Why would they need to? And if they do, why would they be the only ones who need to? There does not seem to be a CLA for Stellarium.

Now Nintendo has been very conservative with regards to what they accept, but it would be very interesting to try…


Do you have a link to the nsp file?


Funny how two people can hack two separate home phones to run DOOM without realizing it until after the fact. For perspective, I posted a blog article about modifying a CaptionCall phone to run DOOM at the same time [1] as this was posted! What a unique coincidence!

1: https://joshumax.github.io/general/2021/08/11/running-doom-o...


Amusingly, I see this happen all the time in the information security domain: two researchers come out with the same results at basically the same time, having done the work independently and in isolation. It happens in science and other areas so often. It's an interesting phenomenon for sure. Maybe someone has named it and thought more about it. I imagine we are all primed with roughly the same information and resources, and these lovely little coincidences pop up from time to time, relatively often.


https://en.wikipedia.org/wiki/Multiple_discovery

"Multiple Discovery" is kind of on the nose, but it works.


I think “Zeitgeist” is the prevalent term here.

Or maybe it's more the cause of multiple discovery, reading the Wikipedia page: https://en.wikipedia.org/wiki/Zeitgeist


Or possibly "steam-engine time".


I found it amusing that this time, both the article submitter and the second person to do a similar hack are named Josh. Perhaps that's slightly less common.


Synchronicity?


Serendipity perhaps? As an aside, it's one of my favorite words. In my opinion it's a beautiful configuration of letters that sounds lovely when spoken, and its meaning is equally wonderful. As a word, it makes me happy.


What happened to "Cellar door"? :-)


It’s also quite capable, sporting an ARMv7 i.MX6 Quad SoC, 4GB of NAND, and a whopping 1GB of DDR3!

I guess as a follow-up challenge, you could try to run Android on it.


It’s almost as if Doom were a go-to example to demonstrate running arbitrary code on obscure devices.

https://i.reddit.com/r/itrunsdoom


I didn't know that reddit subdomain. Seems to be identical to https://old.reddit.com/r/itrunsdoom/.compact


It's a shortcut to the old mobile site, yeah. "i" for iPhone. Not to be confused with i.redd.it, which hosts images.


That's just a different link to the same subreddit.


Yes, there’s a long history of it ever since the source code was either open-sourced or just made public.


I mean, Newton and Leibniz figured out calculus at the same time in slightly different ways. It is common enough, but still a nice coincidence when it happens.


next step: multiplayer


Bolyai and Lobachevsky, again and again!


Some of the reviews also seem comically fake. Take, for example, this 5-star review for the "Airpod Pros":

'You will love these earbuds . I mainly use the wireless earbuds, I am very satisfied in all aspects. These earbuds also have a solid voice for music. They are easy to pair. More than enough scope. These earbuds work very beautifully. Awesome battery life and clarity. These earbuds are the most convenient and comfortable. I love these earbuds , I already want another pair. Buy them. You will love these earbuds .'


Manjaro on the PinePhone currently has 2 branches. The first alphas used Plasma Mobile, and another branch of alphas uses Phosh. Work on the Plasma Mobile builds seems to have slowed down for Manjaro ARM, likely partially due to some UI instability and also because the Phosh experience currently feels more polished.


I don't expect much from anti-malware companies, but this is one of those moments that made me absolutely dumbfounded that someone actually thought embedding an entire un-sandboxed JS engine with SYSTEM privileges was in any way a good idea. I actually had to get out of bed, open IDA, and start a Windows VM just to check that this wasn't some sort of elaborate hoax!

This isn't some MIDI parser logic, it's an entire JS interpreter that can parse DOM elements! How on Earth did this even get pushed out in a release? Did we learn nothing from the last time [1]?

1: https://bugs.chromium.org/p/project-zero/issues/detail?id=12...


Because low-effort JS developers are now everywhere. Just as JS should not be found on the server yet is now prevalent, JS is now finding its way into other places where it shouldn't be. You can't have an entire industry push this terrible ecosystem and then expect security companies to miss out on the fun. Locating and hiring C++ engineers at a scale is something that has become very, very difficult.


That is a really bad take.

Executing code written in any language -- dynamic, static, compiled, interpreted -- would be problematic here.

> That service loads the low level antivirus engine, and analyzes untrusted data received from sources like the filesystem minifilter or intercepted network traffic.

Forget JS. Do not load or execute code from untrusted sources in an unsandboxed environment with system permissions. This is about capabilities, not syntax. If your main takeaway is, "they should have used a C interpreter instead", then you have entirely missed the point.


I agree with you that no interpreter should be running there. It’s bad design.

But how many C/C++ engineers would think to design a system that runs a min interpreted code, vs JS ones? The take isn’t as bad as you think.


> But how many C/C++ engineers would think to design a system that runs a min interpreted code

Multiple people, in this very thread, including you[0]. And apparently at least one Avast engineer and their upper management.

I'll requote/paraphrase another commenter[1] down-thread: it wasn't JS devs who wrote a custom interpreter inside a privileged C/C++ program. It was a C/C++ developer who thought, "I can handle this."

It's very important when calling out security failings to point out the real failing. If people are reading this and trying to take away security advice, I don't want their takeaway to be, "so my custom Lua interpreter is fine."

[0]: https://news.ycombinator.com/item?id=22545385

[1]: https://news.ycombinator.com/item?id=22545945


Re-read the comment. GP is not saying to make an interpreter for C++, they are saying that there should be no interpreter. If the language is compiled there's no need for one. C++ can obviously be insecure, but the scale of a JS interpreter + the fact that it's meant for executing arbitrary code leads to a huge security flaw that isn't present in just a normal C++ app.


Then just say that there should be no interpreter. Don't confuse the issue by talking about memory safety or act like there are better/worse ways to do this.

> but the scale of a JS interpreter + the fact that it's meant for executing arbitrary code

"the fact that it's meant for executing arbitrary code" is the only part of that statement you need. Avoid executing arbitrary code in unsandboxed/unisolated environments, even in a normal C++ app, even if the code is compiled. The scale doesn't matter.


The scale is important. A JS interpreter is tens of thousands of lines of code at a minimum. Even if sandboxed, the likelihood that there's a way out of the sandbox increases massively as more code is added. The possibility of bugs in the interpreter is another issue: 10k lines of C or C++ code means 10k extra lines where there could be a buffer overflow, segfault, or memory leak. Then multiply that by the number of times those lines are executed with unknown input (i.e., every line of JS code).


The scale is only important if you trust yourself to build a secure interpreter in the first place. Caring about the complexity of the interpreter means you are relying on the interpreter to keep you safe. Do not do that thing! The interpreter is not your sandbox.

If you're following best practices and isolating the process, then the number of lines of code shouldn't matter for the actual security. You should assume that a custom-built interpreter designed to run malicious code always has bugs -- whether it's running JS, Lua, whatever -- so you should run that code in a separate, sandboxed process that doesn't have system access.

It's not that the scale doesn't make a difference in complexity, it's that (for the most part) if you find yourself at the point where you're asking questions about the scale, you have already seriously messed up, and you need to go back and rethink your design.

----

The business problem Avast was trying to solve was, "how do we tell whether or not a random Javascript file contains malware?" The answer they came up with was, "we'll run the file in a process with system-access and see what happens."

I'll ask the same question I asked the original commenter: what is a safe way to solve that business problem without process isolation? And if you are correctly isolating the untrusted code, then why does the complexity of the JS interpreter matter?
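For what it's worth, the isolation being argued for here is not exotic. A minimal sketch in Python, where the "analyzer" child is just a stand-in that echoes its argument and the resource limits are arbitrary (nothing here is Avast's actual design):

```python
import resource
import subprocess
import sys

def child_limits():
    # Applied in the child between fork and exec: cap CPU seconds and
    # address space so a runaway or malicious analysis is contained.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))

def analyze_untrusted(path):
    # The "analyzer" is a stand-in that just echoes; the point is that it
    # runs in a separate, resource-limited process, so a bug in it never
    # grants the parent's privileges.
    proc = subprocess.run(
        [sys.executable, "-c",
         "import sys; print('scanned', sys.argv[1])", path],
        preexec_fn=child_limits,
        capture_output=True,
        timeout=10,
    )
    return proc.returncode, proc.stdout.decode()
```

A production design would also drop uid, apply seccomp or a jail, and pass results back over a narrow pipe, but even this much keeps the interpreter's complexity out of the privileged process.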


Compiling untrusted code isn’t going to save you.


No, it won't. But there's no need to compile untrusted code in this scenario. All the code that is run should be packaged in.


I think you've possibly misunderstood the actual situation. This is a virus scanner. Analyzing untrusted code is the point.

The JS interpreter isn't there to lay out an interface, it's there to help them understand untrusted code that they find on the filesystem.


Where did I talk about having interpreted code?


> Contrast this with tailor-made, slim and well tested C++ code.

Avast is (was) running untrusted code in their system, by design. If your intention is to say that they shouldn't run untrusted code, then just say that. Don't waste time talking about whether memory safety matters for untrusted code -- just avoid untrusted code.

A lean C++ solution to the design problem of "how do we run untrusted code" is no better than an interpreted solution. You're focusing on the implementation, not the core design, and it is the core design that's flawed.

The only result of bringing JS up in a conversation like this is going to be to make people doing equally unsafe things in/with other languages feel better about themselves.


Again, what are you talking about?

No interpreter code should be running there, C, C++ or JS. What I am saying is that this design is made by or for people who are seeking to run JS as part of the business logic, which is ridiculous. I haven't brought up "memory safety" at all. Perhaps read the thread before commenting?


> who are seeking to run JS as part of the business logic

You can, of course, safely use JS for business logic without executing code from unsafe sources, in the same way that you can safely use C#, Lua, Lisp, or any other language. But that aside, your criticism is completely irrelevant to the actual security flaw.

I don't see any indication Avast was trying to use JS for business logic in the first place. I'm sure they over-embed crap for other parts of their interface, but that's not what's happening here. Avast was not loading a custom interpreter to execute their own business logic, they were loading a custom interpreter to analyze user files as part of their virus scan.

> That service loads the low level antivirus engine, and analyzes untrusted data received from sources like the filesystem minifilter or intercepted network traffic.

> Despite being highly privileged and processing untrusted input by design, it is unsandboxed and has poor mitigation coverage.

It wasn't Avast's reliance on JS as a development tool that caused them to say, "maybe we should parse and run arbitrary files on the filesystem in a process with elevated permissions."

It's their business logic that's the problem. Avast's business logic is, "we want to execute untrusted source code to see if it contains viruses." It wasn't a JS engineer saying, "I can't do my job in C++". It was a C/C++ engineer saying, "I know how I can tell if this file is dangerous -- I'll run it in the main process to see what it does."

If you can tell me a safe way to accomplish that business logic in C/C++ or in any other language without process isolation or sandboxing, then I'll concede the point. But I'm pretty sure you can't.

As an aside, an interesting follow-up to this disclosure would be for someone to test whether Avast is also interpreting other languages, like Lua, that they regard as potentially dangerous. I wouldn't necessarily take it as a given that JS was the only language they were doing this with.


> What I am saying is that this design is made by or for people who are seeking to run JS as part of the business logic

Nope, you've fundamentally misunderstood the issue at hand. The JS is not Avast's business logic. Rather, the program is attempting to analyze and detect malware in JS that the user encounters, similar to how an AV engine might inspect Microsoft Office files for malware. "Low-effort JS developers" had nothing to do with this. "C/C++ engineers" (and/or those above them) made this decision.


> If your intention is to say that they shouldn't run untrusted code, then just say that.

Or maybe stop trying to "gotcha" other commenters by making assumptions about what they were trying to say.


I was testing a pre-release version of one of our products at work, and it was causing (inadvertently) massive slowdowns periodically due to a naive approach to scanning a system for installed applications.

So, I did what any sane devops engineer would do: I throttled the CPU limit for its cgroup in the systemd service file. Now, no more scans.

Except now the UI wouldn't load. Couldn't figure out why. Just an empty white window. Turns out, it's running a node.js server and the whole UI is rendered in HTML/CSS/JS, but because of that it was so non-performant that the UI would effectively not render at all if it couldn't slam your CPU.

I can't think of any native-code, native-widget, control-panel-type UI that would completely fail to render the entire window at all if limited to 10% CPU time, but hey, here we are.
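For reference, the cgroup throttle described above can be done with a one-line systemd drop-in; the service name and path here are illustrative, not the actual product's:

```ini
# /etc/systemd/system/scanner.service.d/limit.conf  (names hypothetical)
[Service]
CPUQuota=10%
```

followed by `systemctl daemon-reload` and a restart of the service.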


> it was causing (inadvertently) massive slowdowns periodically due to a naive approach to scanning a system for installed applications.

> Turns out, it's running a node.js server and the whole UI is rendered in HTML/CSS/JS, but because of that it was so non-performant that the UI would effectively not render at all if it couldn't slam your CPU.

So was it the naive approach to system scanning, or the web-based UI, that was the problem? Because there are some performant desktop applications using web rendering, like VS Code. Though I'm not sure how VS Code would behave if limited to 10% of CPU, because that's kind of a weird scenario.


> But how many C/C++ engineers would think to design a system that runs a min interpreted code,

This is essentially how antivirus software works. Every one of them packages an emulator to execute malicious binaries.

I'd say the number one thing stopping C++ devs from running eval'd C++ code is the lack of a std eval, and that's probably it.


Antivirus software normally matches code patterns against a database of well-known patterns. It does not investigate the code on the client machine. AV software houses run their own labs, where emulation is used to inspect suspected malicious code.


To my knowledge every single major AV packages a local emulator. We have long, long moved beyond a world where AV does basic pattern matching.

Frankly, I am far less concerned with the js interpreter than I am the rest of the codebase.

http://computervirus.uw.hu/ch11lev1sec4.html

https://www.blackhat.com/presentations/bh-europe-08/Feng-Xue...

http://joxeankoret.com/download/breaking_av_software_44con.p...


Given the prevalence of undefined behaviour, a C/C++ system should be assumed to contain arbitrary code execution vulnerabilities until proven otherwise. So in practice most C/C++ programs that process data can interpret code, even if they weren't intended to.


I don't know why people freak out so much about undefined behavior - yes, it's not defined in the language standard, and that's quite unfortunate, but it becomes defined as soon as you choose a compiler. And careful work (and avoiding really hacky things) can let you write a C++ program that dodges undefined behavior even if you're uncertain how stable your build chain is.

To be honest though, in the modern world, picking a stable compiler like GCC is a good enough choice for life - this isn't the 90s, where you might have to dumpster dive to find copies of that specific Borland compiler your company decided to tailor their code to.

(edit: All the above holds until you start making assumptions about uninitialized memory; at that point you're really in trouble and, honestly, C++ really should be better about preventing you from using dirty memory)


The behaviour of GCC is by no means clearly defined. Even taking it as given that any memory handling errors will result in arbitrary code execution (accessing uninitialised memory as you say, but also e.g. double free), there are other cases. GCC has been known to compile the addition of two integers into an arbitrary code execution. It has been known to compile code like:

    void doDangerousStuffIfAuthorized(AUTHORIZATION *authorizationPtr) {
      /* Dereferencing before the null check is undefined behaviour, so the
         compiler may assume the pointer is non-null and delete the check. */
      AUTHORIZATION authorization = *authorizationPtr;
      if (authorizationPtr == NULL || !isValid(authorization)) return;
      doDangerousStuff();
    }
into something that executes doDangerousStuff() when passed null. When users complain about such things, the answer is that the code was causing undefined behaviour and so what GCC is doing is correct according to the standard.


As much as I am tired of the crappy code produced by some JS developers, this time they are innocent. If you had read the article you would have known that the JS code executed here is JS found on the Internet, not any JS written by Avast. The bugs are in Avast's C++ code (or possibly C).


The bugs aren't in the code, and this whole subthread that began with what LeoNatan25 wrote is a tangent. The bugs are in the design: downloading programs from random untrusted anybodies on the World Wide Web and running them, and indeed running them with elevated privileges. In order to test whether they are malicious, no less.


> If you had read the article you would have known that the JS code executed here is JS found on the Internet, not any JS written by Avast.

I think this makes it worse.


Yes, but it also means that this is not implemented because of "low-effort JS developers".


I have read it. It's not clear what code runs inside the interpreter.

What reason is there to even have such an interpreter in a highly privileged process?


>What reason

Benchmarks. It's faster if you don't push all the scanned data through a process boundary.


If the interpreter was running code written by Avast then it wouldn't be a security issue. Having an interpreter running code you have written vs writing the code in C++ is not necessarily better or worse from a security point of view.


Highly disagree here. JavaScript's DOM-parsing functionality has but one purpose: presentation manipulation, i.e. rendering. Having something like that running as SYSTEM is a security issue in itself, regardless of where the code comes from.

FFS, even display drivers don't run with full system privileges anymore.


JS has no DOM API; browsers provide JS an API to use. Plus, the DOM has nothing to do with rendering; it's just a tree manipulation API.


Generally the interpreter is probably better, once you have enough memory-managed code that it outweighs the number of vulnerabilities in your native code by virtue of its significantly lower bug rate.


I'd expect server-side JS code running on a popular VM (V8, SpiderMonkey) to be safer than custom C++ (sandboxed vs. running on the bare OS).

And BTW, this is why a WebAssembly runtime on the server is a big deal: being able to painlessly run untrusted code from "non-safe" languages in a sandboxed environment.

Also, if you read it correctly, Avast is running "wild" JavaScript in a custom privileged VM (potentially written in C++).


Your expectation makes no sense. Popular JS VMs have huge attack surfaces, and are prime candidates for gray and black market vulnerability hunts. They are often not maintained, thus once a vulnerability is discovered, the entire app is compromised. In the case of a highly-privileged process, this can be catastrophic.

Contrast this with tailor-made, slim and well tested C++ code. And yes, I do expect security companies to have well-written and well-tested code.


> And yes, I do expect security companies to have well-written and well-tested code.

Your expectation makes no sense, given the vulnerabilities we've seen in AV software in the past decade.

If they insist that executing suspect JS is a good idea, they should a) probably use an established interpreter unless there's good reason not to, and b) not run it privileged.

EDIT: Avast appears to have deactivated this now: https://twitter.com/avast_antivirus/status/12376853435807539...


> Popular JS VMs have huge attack surfaces

No, not really? Depending on the browser, they generally have a small-to-medium attack surface. Yes, they can JIT, but often they can't do much else.

> and are prime candidates for gray and black market vulnerability hunts

Because they are remotely exploitable, nothing more.

> They are often not maintained

The world's deepest pockets and countless hours from the world's smartest minds go into maintaining them…

> once a vulnerability is discovered, the entire app is compromised

Not in modern browsers.

> In the case of a highly-privileged process

Oh good, so not the JavaScript process, right?


How many vulnerabilities have existed in Electron apps, sandboxing and all?

I meant maintained by app developers who include the runtimes, not the runtimes themselves.


> How many vulnerabilities have existed in Electron apps, sandboxing and all?

Significantly fewer than you'd find in a comparable C++ application, probably, and with much less effort put into securing things like "if I index into this array, am I allowing an arbitrary write primitive" and "can I safely use this object without giving an attacker code execution". Electron bugs tend to be of the sort "oops, we can load a file from the filesystem because we forgot a string check", while C++ bugs are "that, but with the other things I just mentioned".


> probably

Based on what? In C++ you have complex systems with difficult code to get right. With Electron, you have terrible chat apps that take 1GB of memory to display a few chat bubbles and still allow remote execution on the machines running them.

The data to compare the two just isn't there to assume anything like you just did. Meanwhile, Electron apps have proven quite insecure, despite not being able to allow an arbitrary write primitive by indexing into an array.


> Popular JS VMs...are often not maintained

The V8 engine, in 2020, is one of the most actively-maintained software projects of any kind. And (for better or worse) nearly everyone who needs a JS engine uses that one. This includes - among others - Chrome, NodeJS (which means Electron too), and now Edge. The only major outliers I can think of are JavaScriptCore (iOS/Safari) and SpiderMonkey (Firefox).

The sin committed by Avast was rolling their own version of something, as a less-than-massive-company, when the state-of-the-art implementation is OSS. That has nothing to do with JavaScript the language. You're commenting on things you clearly know nothing about.


I'd say it makes a lot of sense. You're comparing a memory-unsafe language with a safe one.


The runtimes are also written in "memory unsafe languages" (C++). The runtimes bring a whole lot more code than if you wrote your own tailor-made code, meant to do something specific, in a "memory unsafe language".


Yes, but the runtimes usually have a large amount of time and security effort invested into them.


This is not correct in my experience. I think it's more apt to say that security effort has mostly been spent on sandboxing and related technologies, which is really an admission that there is no way to secure the JS VMs themselves. The best engineers in the world can't do it. Maybe that will change if they move to safer languages, but so far nobody has done that.

Therefore, when you see an exposed unsandboxed VM, you instantly know it's a critical issue.


Writing secure C++ is quite hard, even for the best engineers in the world. However, that absolutely does not mean that your handcrafted C++ code is more secure than JavaScript running a virtual machine that is written in C++. The sandbox exists as another layer of defense, not because the code is inherently more insecure. (Also, it's usually because JavaScript virtual machines evaluate untrusted input, which is something that has been shown to be notoriously difficult to secure against in general.)


Can I invoke my @pcwalton card here? :-)


Love it when the C++ dev enters the debate and claims with a straight face that security vulnerabilities are a problem in other languages.


It's not about C++; it's about selecting a more appropriate tool than JS. JS is often used only because the developer knows nothing else. What's even more ridiculous, often JS is not even the easiest route.

Serious question, what are reasons to use JS in non-web contexts, apart from developer familiarity?


It wouldn't matter if it were a Delphi interpreter. Having an unsandboxed interpreter running unsigned code was a stupid move. That some C++ developer thought it wise to do this is perhaps part of the issue.

V8 on the server has a very nice event loop that's easy to leverage for high performance while avoiding horrifying overflow issues. It fits the large majority of web request/response patterns well, while still offering significant developer speed.


To be fair, this is the case with just about everything. If they know one language or tool, most people will use it to do what they need instead of learning something else that might never be used again.

Some even find it fun to bend something that isn't meant to be bent.


It's the only kind of JIT'd code you're allowed to run on iOS without going through Apple's approval process. Pebble apps use JS for any code they need to run on the phone (as opposed to the watch) for this reason.


(Technically there's also WebAssembly as of recently, but it's part of JavaScriptCore so this is somewhat pedantic.)


> @paraboul is on point: JS is often used only because the developer knows nothing else.

I never said that.


Apologies, I misinterpreted. Redacted


JavaScript is one of the fastest scripting languages, for one. It often has pretty decent bindings to native code as well.


JavaScript is the only thing that you can really run on any semi-modern device. TVs, phones, laptops, desktops, servers, the only thing you can expect to execute on all of them is JavaScript. If you write your core libraries in JavaScript, you'll have that much less to worry about re-implementing and maintaining in something else. You'll have flexibility to potentially execute the same code on either client or server, phone or desktop. There are situations where that's pretty useful.

More than that, at least last time I checked, V8 is really fast. It is many times faster than the usable Python implementations, or practically any other memory-managed runtime. Only luajit seemed to sit in the same ballpark when I pulled up the shoot-out a couple years back.

I personally hate all of these facts, but sometimes, they really do mean that prioritizing JavaScript, or at least something that compiles down to JavaScript, is the best choice.


> JavaScript is the only thing that you can really run on any semi-modern device.

Wait, hold on: you can usually run C on most devices.


Generally true, but we both know that it's not so simple, and I'm surprised HN took this bait so readily.

With JS, you have to worry way less about hardware-specific builds, platform-specific linking implications, differing system behavior and intrinsics, or any of the other substantial hangups that become relevant when you need to distribute a native application across a wide range of devices.

We don't need to repeat the rest of the thread where everyone hops in and says "tut tut, hypothetically, it would be possible for it to not be that way". We're talking about the way things actually are. In an ideal world, JS would've been out of the picture about 3 years after it was born. :)


The difference is that C needs to be compiled to run on anything, javascript does not.


Right, but JavaScript needs a runtime to work at all. And I don't think there was any requirement that the language couldn't be compiled?


It's not like you seriously want to run C without a runtime.


Many people do in fact do this, often for embedded systems. And most other systems happen to ship with a C runtime.


There are C interpreters, commercial and open source since the mid-90's, don't mix languages with implementations.


This is more a case of lazily or ignorantly running a process as root, and it reflects on the recklessness of the system designer, not on whatever that process happens to be.


> Because low-effort JS developers are now everywhere.

I take it you're not a big fan of JS? That's a lot like saying you're not a fan of hammers. Maybe you aren't good at using them, maybe the noise scares you, maybe you think hammer wielders are all idiots and the only smart people are shovelers. It's a tool; it works better in some situations, worse in others.

Low-effort <insert language here> developers are everywhere. Lol. Please just stop. Any language is a bad language if used poorly. Literally, JS is just as bad as C++ in the hands of the incompetent.


I don't think JS devs are the ones embedding JS interpreters into binaries.


Ever heard of Electron?


> Locating and hiring C++ engineers at a scale is something that has become very, very difficult.

AvastSvc.exe is not the place where you need programming 'at scale'.


The problem here is actually that the scanning engine is running as SYSTEM in the first place; whether it embeds a JS engine/emulator is a separate matter. As usual, "endpoint security software" is very poorly engineered. Keep in mind that this is a common pattern among vendors, though some are even worse (e.g. Symantec used to do this directly in kernel space).


I actually agree with your conclusions; however, if they had dropped privileges before running JavaScript, it would be worlds and worlds better. Whoever bootstrapped the C parts should have known this.


In this case, the interpreter is included for analysis of JS code and was seemingly custom made for that purpose, not to leverage JS developers, so your point doesn't apply here specifically.


Part of it is that C++ is poorly taught in most universities. Even if you do get an education in C++, try using any of the modern features like smart pointers on your homework. Your teacher most likely will give you a poor grade.


> Just as JS should not be found in the server

Said who?

> You can't have an entire industry push this terrible ecosystem, then expect security companies to miss out on the fun.

I would expect most security experts to push you to use JavaScript instead of C++, since the former will protect you from a number of rather common security issues in the latter…

> Locating and hiring C++ engineers at a scale is something that has become very, very difficult.

Is it really that hard? Here, I can help: I know C++, and I'll be graduating soon. Hire me ;)


If you haven't spent substantial amounts of time (personal estimate: > 10 years) working with C++ writing production code, you certainly don't know C++.

Even if you did all that, odds are that you still don't know C++.


Ok, so I lied, I don't really know C++ because nobody really knows everything about C++. But I have written production C++ code (some of which is used by most of the people here, including you…) so ¯\_(ツ)_/¯. Anyways, this is veering off-topic, so if you or someone else would like to discuss your hiring woes and/or would like to test whether I really know C++ my email's in my profile; I'd be happy to talk to you there.


> If you haven't spent substantial amounts of time (personal estimate > 10 years) working with C++ writing production code , you certainly don't know C++.

Yeah, this is part of the reason why I won't even try to learn the language.


Don't let them scare you away from learning it; you can absolutely write C++ to a useful capacity in much less time than that.


Ah yes, the "No True C++ Developer" argument


That’s scaremongering. You may not know all of the syntax, but you can certainly write proper C++ code.


Avast was founded in 1988. Now imagine what its codebase looks like.


From experience, large/old software companies have their own rules about what C++ is acceptable. A developer will take some time to learn them, but will then be able to blend in. Again, saying someone cannot write C++ even after 10 years of experience is nonsense.


That's not at all what I wrote though.

You can very well write something without really knowing it. In fact it's more than obvious that the majority of code written these days falls under that category.

Programming is hard and it takes years if not decades to "know" how to do it to an extent that's not harmful. This also applies to learning the tools. Some tools are easier to learn than others. C++ is notoriously difficult to learn.


You can write in a language even if you don't know it fully. C++ is a difficult case, because you have a historically rich syntax layered with even richer modern syntax. But that doesn't mean you cannot write in the language; it just means you will focus on a subset that you do know. This is true of any language. Do you know about far pointers in C? Do you know all the intricacies of Swift or Kotlin syntax? That doesn't mean you can't write software in these languages, or that you don't know them.

JavaScript isn't free of this at all; it also has a very complex syntax, due to many years of lumping in more and more features without any coherent design. The tooling around JavaScript is notoriously bad and broken. Having to rely on packages for basic stdlib functionality, having to understand how nested dependencies can and will collide, etc. All of that creates a much higher cognitive load than having to use CLion or Visual Studio (not Code) for C++ development.


Apparently Microsoft, Google, and Apple see it otherwise, hence their initiatives to improve C++ static analysers, lock down the use of C and C++ on their platforms with a focus on safer languages, and even start making use of hardware memory tagging.


Be sure to keep on complaining about Apple not allowing interpreters on iOS, though.


Was it intentional that the bug you linked to is assigned to and owned by this same dude?


Tavis Ormandy has been dismantling AV software, one after another, for some time now.


Psst…there's a Linux harness to load the DLL.

