
echo IS a function.

What might be confusing to you is that in Nim the following three variations are all equivalent:

  echo("foo")
  echo "foo"
  "foo".echo
If you're curious about nim, I can highly recommend "Nim in Action".


Thank you for that tip. I had a lot of screen tearing before I enabled that option. Do you have "Force Full Composition Pipeline" enabled or just "Force Composition Pipeline"?

EDIT:

https://wiki.archlinux.org/index.php/NVIDIA/Troubleshooting#...

That's helpful! :)


OFFTOPIC: Isn't blue light still best avoided as much as possible, since it damages cells in the eye and causes macular degeneration? Please correct me if I'm wrong.


https://www.health.harvard.edu/blog/will-blue-light-from-ele...

This is widely debunked. The above is but one of many such counterpoints.


One great aspect is that Joplin supports iOS 8+. My ex needed to write a big dissertation that she just couldn't make progress on (unsurprisingly). Unfortunately, her work laptop is too bulky to carry around every day.

So at some point I figured she could use my old iPad mini 1 with a size-fitted Logitech "Ultrathin Keyboard" (which magnetically holds onto the iPad, making it one piece for storage) that I impulse-bought and never used. So the hardware was there, but looking for a markdown editor that supported sync and worked on both that old device and her Windows laptop was not fruitful. Except, well, short story: Joplin is awesome for that.

She could use the iPad to write for a few minutes here and there during transit and was able to complete it. The markdown file is then converted to LaTeX/PDF using pandoc + pandoc-citeproc (for citations from a BibTeX file).
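
For reference, the invocation can look roughly like this (file names are just placeholders; on newer pandoc versions the external filter is replaced by the built-in --citeproc flag):

  # markdown -> PDF via LaTeX, resolving citations from a BibTeX file
  pandoc dissertation.md --filter pandoc-citeproc --bibliography references.bib -o dissertation.pdf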

Support for antiquated iOS versions is rare (making the iPad mini 1 useless in many cases, although it still seems to be a capable device), even though many apps don't appear to need the newer versions. I suspect most apps use frameworks with a version cutoff to reduce complexity, if that even makes sense.


You might like the tldr program:

  $ tldr ln

  ln
  Creates links to files and directories.

  - Create a symbolic link to a file or directory:
    ln -s path/to/file_or_directory path/to/symlink

  - Overwrite an existing symbolic link to point to a different file:
    ln -sf path/to/new_file path/to/symlink

  - Create a hard link to a file:
    ln path/to/file path/to/hardlink


This is the first time I've heard of compile times being an issue with Rust. Is that a general problem with bigger projects, or can you identify language features that specifically add to the problem?


First compilation is slow. After that it's much better during development. This can mean that CI takes a ton of time if you don't do even the smallest bit of optimization.

Otherwise, it's a solid language choice.


As a point of reference, "warmed up" compilation for me still takes ~1 minute. Bearable for general work, horrible for quick tweaks or UI-related work.


I'm so jealous of software development tools.

Elaborating the hardware (SystemVerilog) I'm working on (about a year of work from scratch, nothing fancy) takes 8 minutes.


Wow, I get frustrated and feel unproductive if my feedback loop is longer than 2-5 SECONDS. This must require a seriously different approach to trial-and-error exploration.


A lot of that is done in a simulator on the computer, which usually builds very quickly once you get the environment and simulation scripts written.

The full build to a bit file that can be loaded into hardware is an NP-hard optimization problem, so as your design approaches the limits of either the desired clock speed or the amount of space in the chip, it can become very slow.


I wish. This figure is for reloading the formal tool after a one-line change in the RTL.

Placing (not routing!) takes around 24 hours, since we need to run in a larger environment than just the block level to get realistic timing reports.


I've heard that long warm compilation times can be mitigated by breaking up the code into separate modules so it doesn't recompile e.g. the actor system every time you adjust a layout. Basically, caching works best at the module boundary. Is this something you've explored?


Yes, I factored quite a bit of the generic parts of the engine out into libraries and took some steps to cut down on the number of generics being instantiated (which was one of the main drivers of compile time). Now I'm left with pretty much just the game core, which is highly interdependent and thus nearly impossible to break up into crates (crates can't mutually depend on each other). And it's still kinda slow.
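
For anyone wanting to try the same, a minimal sketch of such a split (crate names here are made up) is a Cargo workspace where the reusable parts live in their own library crates:

  # Cargo.toml at the repository root
  [workspace]
  members = ["engine", "game"]

  # game/Cargo.toml
  [package]
  name = "game"
  version = "0.1.0"

  [dependencies]
  # path dependency: "engine" is only rebuilt when its own sources change
  engine = { path = "../engine" }

Cargo can then reuse the already-compiled "engine" crate between builds instead of recompiling everything after every small change to the game code.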


+1

I split a 20k+ LOC Rust project into multiple crates and compilation time improved a lot (still slow compared to other languages), but it's now manageable.


Really? It has been quite a talking point for a while with Rust. Final binary performance is great, but compilation times hurt badly.


FYI: I just put the PDF on my Kindle Paperwhite and it actually looks like a regular ebook in terms of font size and readability so far.


I started this hobby with an Ender 3 a few months ago. Look into motor dampers. They honestly take away ~90% of the noise.

I bought these: www.aliexpress.com/item/Funssor-12pcs-lot-NEMA-17-Steel-Rubber-Stepper-Motor-Vibration-Dampers-Imported-genuine-42-stepper-motor/32825669027.html

Make sure you find the right dampers for your motors. The Ender 3 uses NEMA 17.


I actually have these on order; shipping from China just takes forever.


Yesterday I came across this post, which might be interesting to you: https://hackaday.com/2017/02/15/chronio-diy-watch-slick-and-...

Actually, if you search for "watch" on that website, you'll find plenty of stuff. Maybe you can find some inspiration there.


Thanks for the info! The book looks interesting.

Do you have an opinion on the fast.ai and deeplearning.ai courses? I finally have some time to work through these, and since the deeplearning.ai series starts on December 18th, I'm wondering which one to dive into, as I can't tell from the outside how they compare.


While I agree with others that more is better, if you can take only one course, I strongly recommend taking Andrew Ng's. While it is true that you don't need to be able to design and understand 'nets from scratch to be able to use them, I agree with most of the brightest minds in DL that you won't get too far if you don't at least have an intuition for the math behind it. And Ng's course really only gives you that - an intuition. It does an excellent job at ensuring participants understand the bare minimum to do any kind of serious work. Learning ${your favorite framework}'s API will be a breeze if you understand the "why" already.


I would take both. deeplearning.ai focuses more on math fundamentals; fast.ai takes a more coding-oriented approach. It also has two classes: a beginner and an advanced one. I personally prefer the fast.ai approach.


Add Udacity's DLF ND to the mix and do all 3 of them, they are all a bit different. Udacity's one has the inventor of GANs doing lectures there, so it's pretty top notch as well.

