Our team is in a similar position right now — I’m curious how you decided on Highcharts as opposed to other options like Plotly.


Basic question: Why is this faster than running Intel Linux apps in an emulated Intel Linux VM? Because Rosetta is faster than QEMU, and you only need to emulate one application rather than the entire kernel?


Emulating an x86 kernel means you lose the hardware-assisted virtualization support you'd get with an ARM kernel, and emulating an MMU is slow (among other things).

Technically this would be replacing QEMU's user-mode emulation, which isn't fast in large part because QEMU's portability to all host architectures was prioritized over speed.


A lot of the performance gains in Rosetta 2 come from load-time translation of executables and libraries. When you run an x86 binary on the host Mac, Rosetta jumps in, does a one-time translation to something that can run natively on the M-series processor, and then (probably) caches the result on the filesystem. The next time you run the binary, it runs the cached translation.

If you're running a VM without support for this inside the guest, you're just running a big Intel blob on the M-series processor and doing real-time translation, which is really slow. Worst case, you take an interrupt for every instruction, since each one has to be translated (although I assume it must be better than that). Either way, you're constantly interrupting and context switching between the target code you're trying to run and the translation environment; context switches are expensive in and of themselves, and they also defeat a lot of caching.
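To make the contrast concrete, here's a toy Python sketch of my own (an illustration of the caching idea only, not Rosetta's actual mechanism; the string-based "guest ISA" is invented): translate once and cache, versus decode-and-dispatch on every run.

    import hashlib

    # Toy "guest ISA": each instruction is a string naming an operation.
    OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}

    _cache = {}  # Rosetta persists translations on disk; a dict stands in here

    def translate(program):
        # One-time cost, paid at load time: resolve every instruction up front.
        funcs = [OPS[insn] for insn in program]
        def native(x):
            for f in funcs:  # no per-instruction decode/dispatch at run time
                x = f(x)
            return x
        return native

    def run_aot(program, x):
        key = hashlib.sha256(" ".join(program).encode()).hexdigest()
        if key not in _cache:
            _cache[key] = translate(program)
        return _cache[key](x)  # later runs reuse the cached translation

    def run_interpreted(program, x):
        # Decode and dispatch every instruction, on every run.
        for insn in program:
            x = OPS[insn](x)
        return x

    print(run_aot(["inc", "dbl", "inc"], 1))          # 5
    print(run_interpreted(["inc", "dbl", "inc"], 1))  # 5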


It’s because Rosetta is able to do AOT compilation for the most part. So it actually converts the binary rather than emulating it at run time.


Correct, plus Rosetta is substantially faster than QEMU because of AOT (as others mentioned) as well as a greater focus on performance.


There's an amazing game, 4D Toys [1], that lets you interactively play with 4D objects in a 3D environment. It's great on iPad.

I believe the developer had to create a custom physics engine to support 4D space [2].

[1] https://4dtoys.com/

[2] https://marctenbosch.com/news/2017/06/4d-toys-a-box-of-four-...


This is basically what the field of neural architecture search tries to do. Here’s a good (somewhat technical) introduction: https://lilianweng.github.io/lil-log/2020/08/06/neural-archi...
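As a caricature, NAS frames model design itself as a search problem over architectures. A minimal sketch with scikit-learn (my own toy example, unrelated to the linked post; real NAS uses learned controllers, evolution, or gradient-based relaxations over far richer search spaces):

    from sklearn.datasets import make_regression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    # Search space: depth x width of a plain MLP.
    search_space = [(w,) * d for w in (16, 64, 256) for d in (1, 2, 3)]

    def score(arch):
        model = MLPRegressor(hidden_layer_sizes=arch, max_iter=500, random_state=0)
        return model.fit(X_tr, y_tr).score(X_val, y_val)  # validation R^2

    best = max(search_space, key=score)
    print("best architecture:", best)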


I think NAS is a bit higher level than what the OP had in mind - NAS isn't usually used to search for fundamental operations like self-attention or convolution. But I guess you could probably adapt it quite easily.


I believe some algorithms, like AutoML-Zero, search on the level of mathematical operations.
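Roughly, AutoML-Zero represents a model as a short program over primitive math ops and searches the space of such programs. Here's a heavily simplified scalar sketch of that idea (my own toy; the real system evolves setup/predict/learn programs over vectors and matrices):

    import math, random

    OPS = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
        "sin": lambda a, b: math.sin(a),
    }

    def random_program(length=4, n_regs=3):
        # Each instruction: (op, src1, src2, dst) over a tiny register file.
        return [(random.choice(list(OPS)),
                 random.randrange(n_regs),
                 random.randrange(n_regs),
                 random.randrange(1, n_regs)) for _ in range(length)]

    def run(program, x, n_regs=3):
        regs = [x, 1.0] + [0.0] * (n_regs - 2)  # reg 0 = input, reg 1 = 1.0
        for op, s1, s2, dst in program:
            regs[dst] = OPS[op](regs[s1], regs[s2])
        return regs[-1]  # last register is the output

    def fitness(program, target=lambda x: 2 * x + 1):
        xs = [i / 10 for i in range(-10, 11)]
        try:
            err = sum((run(program, x) - target(x)) ** 2 for x in xs)
        except (OverflowError, ValueError):
            return float("-inf")
        return float("-inf") if math.isnan(err) else -err

    best = max((random_program() for _ in range(20_000)), key=fitness)
    print(best, fitness(best))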


On the same theme, there’s a recent documentary called “The Weight of Gold”, narrated by Michael Phelps and featuring other Olympic champions. Really poignant stories.


> Thankfully these days Cloud Native Python apps are built around less monolithic options such as Flask, Pyramid, or Tornado.

FastAPI is also a very popular choice these days, arguably more so than Pyramid and Tornado.
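For anyone who hasn't tried it, the FastAPI style is roughly this (essentially the hello-world pattern from its docs; run with uvicorn):

    from typing import Optional

    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/items/{item_id}")
    def read_item(item_id: int, q: Optional[str] = None):
        # Type hints drive request validation and the auto-generated OpenAPI docs.
        return {"item_id": item_id, "q": q}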


Pretty cool! If you're into these kinds of historical illustrations, I highly recommend the wonderful EdX course "Visualizing Japan".

https://www.edx.org/course/visualizing-japan-1850s-1930s-wes...


Two questions:

1. I wonder how many times the test set can be used on "incremental changes in future versions of the model" before losing statistical validity.

2. This article describes their process, but not the FDA's process. Are there specific regulatory requirements for ML models beyond their four types of reports?


1) AFAIK there are no hard-and-fast rules for this. I think it would have to be the manufacturer's judgement call. Good point though: with enough time you may end up just overfitting to the test set (see the toy simulation after point 2).

2) The FDA's guidance on ML models is still in flux. Please see https://www.fda.gov/medical-devices/software-medical-device-...
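On (1), here's a toy simulation (illustrative only, nothing to do with any FDA process) of how repeatedly picking "improvements" by their score on a fixed test set inflates the reported number even when every model is guessing:

    import random

    random.seed(0)
    n_test = 200
    labels = [random.randint(0, 1) for _ in range(n_test)]

    def random_model_accuracy():
        # A model with no real skill: true accuracy is 50% on binary labels.
        preds = [random.randint(0, 1) for _ in range(n_test)]
        return sum(p == y for p, y in zip(preds, labels)) / n_test

    best = 0.0
    for version in range(1, 101):  # 100 "incremental versions"
        acc = random_model_accuracy()
        if acc > best:
            best = acc
            print(f"version {version}: new best test accuracy {best:.3f}")
    # The best reported accuracy creeps well above the true 0.5.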


They show up as roots of polynomials with real coefficients.
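For example, x^2 + 1 has real coefficients but a conjugate pair of complex roots:

    import numpy as np

    print(np.roots([1, 0, 1]))  # roots of x^2 + 1: [0.+1.j, 0.-1.j], i.e. ±i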


This article is about optimization (finding good parameters), not the approximation power of neural networks (which is well-known through the universal approximation theorem).


I know. What we're both saying is: if you have enough lines, you can find the params easily.
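To illustrate the "enough lines" point: fix a large bank of random ReLU "lines" and fit only how they're combined, and finding the parameters reduces to linear least squares (random-features regression; my own sketch, not the article's setup):

    import numpy as np

    rng = np.random.default_rng(0)
    n_hidden = 200

    x = np.linspace(-np.pi, np.pi, 500)[:, None]
    y = np.sin(3 * x).ravel()

    W = rng.normal(size=(1, n_hidden)) * 3.0   # random slopes
    b = rng.uniform(-np.pi, np.pi, n_hidden)   # random kink locations
    H = np.maximum(x @ W + b, 0.0)             # each column is one "line"

    # Fitting the combination of fixed lines is an easy convex problem.
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    print("max abs fit error:", np.abs(H @ coef - y).max())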

