Thank you for that tip. I had a lot of screen tearing before I enabled that option.
Do you have "Force Full Comoposition Pipeline" enabled or just "Force Composition Pipeline"?
OFFTOPIC: Isn't blue light still best avoided as much as possible, since it damages cells in the eye and can cause macular degeneration? Please correct me if I'm wrong.
One great aspect is that Joplin supports iOS 8+. My ex needed to write a big dissertation that she just couldn't make progress on (unsurprisingly), and her work laptop is unfortunately too bulky to carry around every day.
So at some point I figured she could use my old iPad mini 1 with a size-fitted Logitech "Ultrathin Keyboard" (which magnetically attaches to the iPad, making them one piece for storage) that I had impulse-bought and never used.
So the hardware was there, but searching for a Markdown editor that supported sync across that old device and her Windows laptop was not fruitful. Except, well, long story short: Joplin is awesome for that.
She could use the iPad to write for a few minutes here and there during transit, and she was able to complete it.
The Markdown file is then converted to LaTeX/PDF using pandoc + pandoc-citeproc (for citations from a BibTeX file).
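In case it's useful to anyone, the conversion is a one-liner along these lines (file names are placeholders, and PDF output needs a LaTeX engine such as pdflatex installed):

    pandoc thesis.md --filter pandoc-citeproc --bibliography refs.bib -o thesis.pdf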
Support for antiquated iOS versions is rare (making the iPad mini 1 useless in many cases, even though it still seems to be a capable device), despite many apps having no apparent need for newer versions. I suspect most apps use frameworks that impose a version cutoff to reduce complexity, if that even makes sense.
This is the first time I've heard of compile times being an issue with Rust.
Is that a general problem with bigger projects, or can you identify language features that specifically contribute to the problem?
The first compilation is slow. After that, it's much better during development. This can mean that CI takes a ton of time if you don't do even the smallest bit of optimization.
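One of the smallest bits of optimization that tends to help is caching compiler output across CI runs, e.g. with sccache (assuming it's available on the CI image):

    # Wrap rustc in sccache so unchanged dependencies hit the cache.
    export RUSTC_WRAPPER=sccache
    cargo build --release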
Wow, I get frustrated and feel unproductive if my feedback loop is longer than 2-5 SECONDS. This must require a seriously different approach to trial-and-error exploration.
A lot of that is done in a simulator on the computer, which usually builds very quickly once you have the environment and simulation scripts written.
The full build to a bit file that can be loaded into hardware is an NP-hard optimization problem, so as your design approaches the limits of either the desired clock speed or the amount of space in the chip, it can become very slow.
I've heard that long warm compilation times can be mitigated by breaking code up into separate modules, so that e.g. the actor system isn't recompiled every time you adjust a layout. Basically, caching works best at the module boundary. Is this something you've explored?
Yes, I factored quite a few of the generic parts of the engine out into libraries and took some steps to cut down on the number of generics being instantiated (which was one of the main drivers of compile time). Now I'm left with pretty much just the game core, which is highly interdependent and thus nearly impossible to break up into crates (crates can't mutually depend on each other). And it's still kind of slow.
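For anyone fighting the same thing, the usual trick for cutting down instantiations looks roughly like this (a sketch with made-up names, not the actual engine code): keep the generic shell thin and forward to a non-generic inner function, so the body is compiled once instead of once per concrete type.

    use std::path::{Path, PathBuf};

    // Generic shell: instantiated once per caller type, but trivially small.
    fn load_asset(path: impl AsRef<Path>) -> Vec<u8> {
        // Non-generic body: type-checked and codegen'd only once.
        fn inner(path: &Path) -> Vec<u8> {
            std::fs::read(path).unwrap_or_default()
        }
        inner(path.as_ref())
    }

    fn main() {
        let a = load_asset("textures/hero.png");          // &str instantiation
        let b = load_asset(PathBuf::from("maps/l1.dat")); // PathBuf instantiation
        println!("{} {}", a.len(), b.len());
    }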
I split a 20k+ LOC Rust project into multiple crates and compilation time improved a lot (still slow compared to other languages), but it's now manageable.
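In case it helps anyone attempting the same split: the mechanics are just a Cargo workspace, roughly like this (crate names made up):

    # Cargo.toml at the repository root
    [workspace]
    members = ["engine_core", "rendering", "game"]

    # game/Cargo.toml then pulls the others in as path dependencies:
    [dependencies]
    engine_core = { path = "../engine_core" }
    rendering   = { path = "../rendering" }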
I started this hobby with an Ender 3 a few months ago. Look into motor dampers; they honestly take away ~90% of the noise.
I bought these:
www.aliexpress.com/item/Funssor-12pcs-lot-NEMA-17-Steel-Rubber-Stepper-Motor-Vibration-Dampers-Imported-genuine-42-stepper-motor/32825669027.html
Make sure you find the right dampers for your motors. The Ender 3 uses NEMA 17.
Do you have an opinion on the fast.ai and deeplearning.ai courses?
I finally have some time to work through these, and since the deeplearning.ai series starts on December 18th, I'm wondering which one to dive into; I can't tell from the outside how they compare.
While I agree with others that more is better, if you can take only one course, I strongly recommend Andrew Ng's. It's true that you don't need to be able to design and understand 'nets from scratch to be able to use them, but I agree with most of the brightest minds in DL that you won't get far if you don't at least have an intuition for the math behind them. And Ng's course really only gives you that: an intuition. It does an excellent job of ensuring participants understand the bare minimum needed to do any kind of serious work. Learning ${your favorite framework}'s API will be a breeze if you already understand the "why".
I would take both. deeplearning.ai focuses more on the math fundamentals, while fast.ai takes a more coding-oriented approach. fast.ai also has two classes: a beginner and an advanced one. I personally prefer the fast.ai approach.
Add Udacity's DLF ND to the mix and do all three of them; they are all a bit different. Udacity's has the inventor of GANs giving lectures, so it's pretty top-notch as well.
What might be confusing to you is that in Nim, all three of the following variations are the same:
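For example, with a toy one-argument proc (the proc name here is just an illustration):

    proc greet(name: string) = echo "Hello, ", name

    greet("Nim")   # regular call syntax
    greet "Nim"    # command call syntax
    "Nim".greet    # method call syntax (UFCS)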
If you're curious about Nim, I can highly recommend "Nim in Action".