
Similar, but Japan: The Roads to Sata by Alan Booth

Here is a short list:

https://graphics.stanford.edu/~seander/bithacks.html

It is not on the list, but #define CMP(X, Y) (((X) > (Y)) - ((X) < (Y))) is an efficient way to do generic comparisons for things that want UNIX-style comparators. If you compare the output against 0 to check for some form of greater than, less than or equality, the compiler should automatically simplify it. For example, CMP(X, Y) > 0 is simplified to (X > Y) by a compiler.

The signum(x) function that is equivalent to CMP(X, 0) can be done in 3 or 4 instructions depending on your architecture without any comparison operations:

https://www.cs.cornell.edu/courses/cs6120/2022sp/blog/supero...

It is such a famous example that compilers probably optimize CMP(X, 0) into it, but I have not checked. Coincidentally, the expansion of CMP(X, 0) is on the bit hacks list.

There are a few more superoptimized mathematical operations listed here:

https://www2.cs.arizona.edu/~collberg/Teaching/553/2011/Reso...

Note that the assembly code appears to be for the Motorola 68000 processor, and it relies on flags that are set in edge cases in order to work.

Finally, there is a list of helpful macros for bit operations that originated in OpenSolaris (as far as I know) here:

https://github.com/freebsd/freebsd-src/blob/master/sys/cddl/...

There used to be an OpenSolaris blog post on them, but Oracle has taken it down.

Enjoy!



For example, it looks nice in Common Lisp: https://people.eecs.berkeley.edu/~fateman/papers/overload-AD...


It's because Clarc is finally out:

https://news.ycombinator.com/item?id=21758298 (Dec 2019)

https://news.ycombinator.com/item?id=32597291 (Aug 2022)

That's the performance work I often mentioned in those pinned comments. I need no longer lament that it's not done!

Not sure what I should do with those old comments (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...) ...maybe nothing.

Btw, we rolled this out over 3 weeks ago and I think you're the first person to ask about it on HN. There was one earlier question by email. I think that qualifies as a splash-free dive: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....



Sometimes, it's Adobe's legal shenanigans (like this one: https://old.reddit.com/r/AdobeIllustrator/comments/1cwryan/a..., since rescinded)

Sometimes, it's this: https://youtu.be/DL9FXjJPqbE


Definitely a microbenchmark, and probably not generally representative of performance. This page gives pretty good standards for OS benchmarking practice, although admittedly geared more toward academia: https://gernot-heiser.org/benchmarking-crimes.html

I worked through this a few years ago and it is wonderful, but I found chapter 9 on the replace function totally impenetrable, so I wrote a blog post in the same dialogue style intended as a gentler prelude to it. A few people have emailed me saying they found it and it helped them. https://ahelwer.ca/post/2022-10-13-little-typer-ch9/

I think POC||GTFO carries on some of the spirit of those times.


Aycock & Horspool came up with a 'practical' method for implementing Earley parsing (conversion to a state machine) that has a humorously large performance delta over "naive" Earley, and is still reasonable to implement. Joop Leo figured out how to get the worst case of Earley parsing down to either O(n) (left-recursive, non-ambiguous) or O(n^2) (right-recursive, non-ambiguous). That means the Earley algorithm is only O(n^3) on right-recursive, ambiguous grammars; and if you're doing that, you're holding your language wrong.

A somewhat breathless description of all of this is in the Marpa parser documentation:

    https://jeffreykegler.github.io/Marpa-web-site/
In practice, I've found that computers are so fast that, with just the Joop Leo optimization, 'naive' Earley parsing is Good Enough™:

    https://loup-vaillant.fr/tutorials/earley-parsing/

Examples (for Common Lisp, so not citing Emacs): reddit v1, Google's ITA Software, which powers airfare search engines (Kayak, Orbitz…), pgloader (http://pgloader.io/), the PostgreSQL data loader that was rewritten from Python to Common Lisp, Opusmodus for music composition, the Maxima CAS, PTC 3D designer CAD software (used by big brands worldwide), Grammarly, Mirai, the 3D editor used to design Gollum's face, the ScoreCloud app that lets you whistle or play an instrument and get the music score,

but also the ACL2 theorem prover, used in industry since the 90s, NASA's PVS prover and the SPIKE scheduler used for Hubble and JWST, many companies in quantum computing, companies like SISCOG, which has been planning underground transportation systems for European metropolises since the 80s, RavenPack, which does big-data analysis for financial services (they might be hiring), Keepit (https://www.keepit.com/), Pocket Change (Japan, https://www.pocket-change.jp/en/), the new Feetr in trading (https://feetr.io/, you can search HN), Airbus, Alstom, Planisware (https://planisware.com),

or also the open-source screenshotbot (https://screenshotbot.io), the Kandria game (https://kandria.com/),

and the companies in https://github.com/azzamsa/awesome-lisp-companies and on LispWorks and Allegro's Success Stories.

https://github.com/tamurashingo/reddit1.0/

http://opusmodus.com/

https://www.ptc.com/en/products/cad/3d-design

http://www.izware.com/mirai

https://apps.apple.com/us/app/scorecloud-express/id566535238


You want to enable thrashing prevention: https://docs.kernel.org/next/admin-guide/mm/multigen_lru.htm...

Make sure you use zram (or zswap) as well. You may also consider enabling a userspace OOM handler, as others have already suggested, but it's less important than the rest.
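Per the kernel's multi-gen LRU admin guide, thrashing prevention is a sysfs knob; a sketch of enabling it (the 1000 ms value is just an illustrative choice, not a recommendation from the docs):

    # Check that MGLRU is enabled (0x0007 = all features on)
    cat /sys/kernel/mm/lru_gen/enabled

    # Working-set protection: don't evict pages used within the last
    # N milliseconds; under pressure, prefer the OOM killer to
    # thrashing. 0 (the default) disables the protection.
    echo 1000 | sudo tee /sys/kernel/mm/lru_gen/min_ttl_ms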


I can also recommend the Digital Design and Computer Architecture lectures from ETH Zürich if you're trying to understand computers at a lower level:

https://www.youtube.com/playlist?list=PL5Q2soXY2Zi-EImKxYYY1...


> Common Lisp is the union of all Lisps

Except it wasn't. Common Lisp was different from most Lisps in that it was a standard and not an implementation. Implementations were different, from small to large scale. The initial CLtL1 language definition was a small part of Lisp Machine Lisp with some stuff added in (type declarations, lexical binding, ...).

CLtL1 lacked

  * a way to start or quit Lisp
  * command line arguments
  * memory management (like garbage collection, finalization, memory areas)
  * virtual machine
  * threads
  * stack groups
  * continuations
  * interrupts
  * fexprs
  * error handling
  * object system or any way to define extensible operations
  * user defined stream types
  * networking
  * internationalization features (like international character sets, multilanguage support, ...)
  * a higher level iteration construct
  * tail call elimination
  * stack introspection (backtraces, ...)
  * pretty printer interface
  * MLISP / RLISP syntax
  * library & source code management
  * 'weak' data structures
  * extensible hash tables
  * terminal interface
  * assembler
  * advising
  * source retrieval
  * pattern matcher
  * calling external programs
  * locatives
  * big floats  
  * single namespace for functions and variables
and more. None of that was in CLtL1. Much of that also is not in ANSI CL.

Read through Lisp manuals (MIT Scheme, MacLisp, MDL, Lisp Machine Lisp, Interlisp, ...) from that time (when CLtL1 was published) and much of that existed.

Implementations provided more. Just like, say, Interlisp-D (which also was an operating system with applications) provided much more than most Common Lisp implementations.


There are many great bloggers out there. Some of my favorites:

  - nullprogram.com
  - zserge.com
  - eli.thegreenplace.net
  - smalldatum.blogspot.com
  - rachelbythebay.com
  - muratbuffalo.blogspot.com
  - sirupsen.com
  - brooker.co.za/blog/
  - jack-vanlightly.com
  - utcc.utoronto.ca/~cks/space/blog/
  - jvns.ca

My particular interests would see me appreciating access to cache structure, cache behavior, prefetching, cache coherency, memory management unit control, instruction pipelining, and maybe register renaming control.

I am not particularly qualified to try to design a better ISA; I just know there are some things I would like the option to control at times (or at least I imagine I would). A lot of the list above is about exposing more of the architecture to user control as a general principle. However, I'd rather see some areas of hardware design change to facilitate something other than coding paradigms and practices that stretch back nearly 40 years. Chisnall's 'C Is Not a Low-Level Language' article discusses several architectural directions I would like to see happen, but also talks about how, as long as the ISA remains the same, moving to new programming styles/philosophies remains a difficult proposition.


I once recorded a demo video, running Concordia on an actual Symbolics Lisp Machine.

https://vimeo.com/83886950


"I fear not the man who has released 10000 games once, but I fear the man who has released 1 game 10000 times." - Todd Howard.

Oh god locales.

https://github.com/mpv-player/mpv/commit/1e70e82baa9193f6f02...

Basically, if you want to break any piece of software, there's a locale for that.


> Programming languages aren't just for machines to execute. They are for humans to communicate and collaborate.

"All it does is segfault!" I wake up in a cold sweat. It's still dark; a quick glance at the clock shows that it's about 3 in the morning. There's a stir beside me.

"Was it the dream again?" my wife says from the darkness beside me.

"It's fine, just go back to sleep."

She mumbles something incoherent and in a few moments she's snoring quietly. I lie down, but I already know there's no more sleep for me tonight.

+++

In the morning I arrive late at the office. I was in the parking lot at 7:30, but it took me an hour and a half to work up the nerve to walk into the building. They're at it again. The "collaborators". After we invented the perfect programming language for human communication and collaboration (we-lang), software developers took to calling themselves 'collaborators'. Today they're rehashing the same argument they've been having for the past month.

"Michael, why are you always late? The computer is doing it again, and you know you're the only one who gets it." A loose mob of collaborators are standing around a 50 inch monitor on the wall. A half blank terminal monopolizes the screen. It seems to have been in the middle of producing log output, but now the cursor blinks lazily at a final message: "generic fault encountered".

It started with looks of confusion, then contempt, and finally resignation. And now I stare back, knowing how today is going to go. The same way as all days. "Yes, there are two problems, which is what I was trying to say yesterday. The first problem is that we're launching multiple threads and then they're all trying to modify the same unlocked data. And the second," I take a deep breath, "and the second is that the goal is kind of an open problem. We're potentially ambiguous in a few places and even if we weren't then it could take decades for the fastest computers to brute force a solution that conforms to the spec."

"Michael, we've been over this before. We all agreed that this is the nature that the UI should follow. We all agreed that this is how the back end ought to work. Why can't you, those mathematicians you keep talking about, and the computer all just get with the program? Maybe try being a team player for once." In the mob, heads nod in assent. We-Lang is perfect. Everyone agrees and understands everyone else. Collaboration across humanity has been achieved as if we were a single person. One mind. Unification.

But when we run it on the computer, all it does is segfault.

