1. It depends on whether you're in the physics main loop or not; in Godot, for example, the physics main loop rate is set by the project configuration, not correlated with the display refresh (I could have expanded on that in the article)
2. You can know the targeted frame rate; you can also infer it by averaging the delta times at runtime (not reliable, but maybe more accurate on a slow machine?)
3. In my graph, I picked random deltas corresponding to between 15 and 120 fps IIRC; the point was to simulate roughly what could happen with random lag spikes
A frame time spike is covered by the overshooting point: it basically lands further down the curve, which still means a point converging toward the B target.
As the delta time converges to 0, the lerp coefficient converges to 0 as well, meaning the value won't move. Said differently, if time stops, the interpolation stops as well. You may enter the realm of floating point accuracy issues at some point with very low values, but I'm curious what specific scenario you have in mind here?
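To make the behavior concrete, here is a minimal sketch of the frame-rate-independent damping discussed above (the function names and the `rate` parameter are illustrative, not from the article's code):

```python
import math

def lerp(a, b, t):
    return a + (b - a) * t

def smooth_toward(current, target, rate, dt):
    # Derive the lerp coefficient from dt: as dt -> 0 the coefficient -> 0,
    # so if time stops, the interpolation stops as well. A lag spike (large
    # dt) just lands further down the same exponential curve, still
    # converging toward the target.
    t = 1.0 - math.exp(-rate * dt)
    return lerp(current, target, t)

# Frame-rate independence: two half-steps land exactly where one full
# step does, because exp(-r*dt/2) * exp(-r*dt/2) == exp(-r*dt).
one_step = smooth_toward(0.0, 1.0, 5.0, 1 / 30)
two_steps = smooth_toward(smooth_toward(0.0, 1.0, 5.0, 1 / 60), 1.0, 5.0, 1 / 60)
```

The exponential form is what makes the result independent of how the elapsed time is sliced into frames, unlike a fixed-coefficient `lerp(a, b, 0.1)` per frame.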
One thing you can already do as a user today to reduce the dynamism of Python classes is to use __slots__: https://wiki.python.org/moin/UsingSlots. I enjoy this a lot, not only for the memory and attribute-access performance boost, but also because it makes the code more rigid (and thus more resilient to monkey patching).
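A small illustration of what __slots__ buys you (the `Point` class is just an example):

```python
class Point:
    # __slots__ replaces the per-instance __dict__ with fixed storage:
    # less memory, faster attribute access, and no arbitrary new
    # attributes can be attached to instances.
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
try:
    p.z = 3  # rejected: "z" is not declared in __slots__
except AttributeError:
    pass
```

Note that this only works fully if every class in the hierarchy declares __slots__; a single slot-less base class reintroduces the instance __dict__.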
I'm not sure why this is being posted here, but please note that it was written more than 10 years ago, and a lot has changed since then. It's important to read this article from a historical perspective, keeping in mind that it was written by someone with "interests" in the matter (as much as I wanted to be objective, let's not lie to ourselves: I was an FFmpeg developer first). The popularity of this article played an important role in shaping public opinion (it's my most popular article ever).
Nowadays, even though I've been distant from the project for years, I can say the situation has changed for the better (from my perspective). The projects are unified again because people on both sides made difficult compromises. I think it would make more sense to focus on how the issue was resolved rather than on why it was so bad. This fire is extinguished; let's work on keeping things as peaceful as we can.
For the record, I mostly left FFmpeg development years ago. And while it wasn't the only factor, the merge effort and overall tension drained me pretty badly at the time. The same is true for several people on both sides, and to no one's surprise, the project(s) lost many developers in the process.
The multimedia community is plagued with drama like this, but this episode was particularly destructive. We can certainly take lessons from it, but I'll leave that to the historians.
My biggest takeaway from this fiasco was when a repo maintainer (on the libav side) replaced ffmpeg with libav in the Ubuntu (or was it Debian?) repositories.
That was a lesson that distros and their repositories are walled gardens just like the various application stores over in the mobile world, curated strictly at the whims of certain individuals or organizations.
It gave me a newfound appreciation for the "just find, install, and run whatever, whenever, wherever" gung-ho attitude found on Windows (and to a lesser extent macOS and Android).
Many people (myself included!) were first amazed by how much software was "just there" for Linux: apt-get or rpm or emerge, and you have what you want.
It’s much later that you realize that someone is doing the work to keep those working, and decisions are being made all the time about which version, which project, etc.
The one I first really noticed was MySQL being silently replaced with MariaDB.
Too often, the software in the Ubuntu or Debian repositories is outdated.
For example, Python, Docker, Rust, ClickHouse, Go, and Node.js all have versions packaged in Debian, but the common wisdom is to ignore them and always install the official upstream version instead.
It depends on what you want to do. If you want to set up a system and let it maintain itself, stick with the repository version. Upstream versions tend to allow breakages and major version upgrades that make running standard system updates a chore.
Upgrading Node is especially annoying: once you upgrade the default Node binary, you can never get repository software that depends on Node to run reliably again. Everything needs to be installed through npm from that point on, or you'll inevitably hit compatibility errors. That also means changing your update procedures, because security and stability updates are no longer taken care of automatically, which in turn means reading the docs of every package you update to look for breaking changes.
If you're doing development, grab the latest upstream versions. They've probably got more features you want, at the cost of having to do the occasional config/setup update.
If you want the latest of the latest of everything, you'll want to switch to something like Arch (or its derivatives) or set up a system separating the upstream tools from the system tools.
It's similar to the Ubuntu release cycle: don't run the rolling release on a server, probably don't run the LTS version on your dev machine.
As a user, it was kind of insane trying to debug why commands suggested on the internet didn't work, when the cause was that your version of ffmpeg wasn't actually ffmpeg. And on top of that, other applications assumed that "ffmpeg" wasn't something else.
I agree with your sentiment, but this goes one step further; it's more akin to active sabotage. How Ubuntu just went along with it is beyond me.
Thanks, yeah, the visualization was fun to do. I'll eventually try to clean up the script I wrote in a hurry and submit it to the research repo along with the others.
WRT the viewport mess on mobile, I'm not sure what's going on. I did set the width=device-width thingy in the header, and the <video> has a width of 800 (the container is 900), but it doesn't seem to be honored for some reason. Webdev definitely isn't my thing, so I haven't looked into it much yet; if someone has a suggestion, I'm happy to give it a chance. The same thing happened in the Bézier article I wrote a while ago, and I agree it's annoying.