dosshell's comments

Note that there is no Nobel Prize in economic sciences.

There is only a similarly named prize awarded in memory of Alfred Nobel, which somehow is allowed to be part of the Nobel Prize ceremony.

I guess my opinion is in the minority, but I don't like that another prize hijacks the Nobel Prize.


> I can get away with a smaller sized float

When talking about not assuming optimizations...

A 32-bit float is slower than a 64-bit float on reasonably modern x86-64.

The reason is that 32-bit floats are emulated using 64-bit operations.

Of course, if you have several floats you need to optimize for the cache.


Um... no. This is 100% completely and totally wrong.

x86-64 requires the hardware to support SSE2, which has native single-precision and double-precision instructions for floating-point (e.g., scalar multiply is MULSS and MULSD, respectively). Both the single precision and the double precision instructions will take the same time, except for DIVSS/DIVSD, where the 32-bit float version is slightly faster (about 2 cycles latency faster, and reciprocal throughput of 3 versus 5 per Agner's tables).
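
As a quick illustration (a minimal Rust sketch; the function names are mine, and with optimizations on the compiler lowers these to exactly those scalar SSE2 instructions):

    // Scalar float multiply: an optimizing compiler emits
    // mulss for the f32 version and mulsd for the f64 version.
    pub fn mul_f32(a: f32, b: f32) -> f32 { a * b }
    pub fn mul_f64(a: f64, b: f64) -> f64 { a * b }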

You might be thinking of x87 floating-point units, where all arithmetic is done internally using 80-bit floating-point types. But all x86 chips in like the last 20 years have had SSE units--which are faster anyway. Even in the days when x87 was the major floating-point unit, 32-bit wasn't any slower, since all floating-point operations took the same time independent of format. It might be slower if you insisted that compiled code strictly follow IEEE 754 rules, but the solution everybody adopted was to not do that, and that's why things like Java's strictfp or C's FLT_EVAL_METHOD were born. Even in that case, however, 32-bit floats would likely be faster than 64-bit for the simple fact that 32-bit floats can safely be emulated in 80-bit without fear of double rounding, but 64-bit floats cannot.


I agree with you. It should take the same time, now that I think more about it. I remember learning this around 2016, and I ran a performance test on Skylake which confirmed it (Windows, VS2015). I think I only tested with addsd/addss. Definitely not x87. But as always, if the result cannot be reproduced... I stand corrected until then.


I tried to reproduce it on Ivy Bridge (Windows, VS2012) and failed (mulss and mulsd) [0]. Single and double precision take the same time. I also found a behavior where the first batch of iterations takes more time regardless of precision. It is possible that this tricked me last time.

[0] https://gist.github.com/dosshell/495680f0f768ae84a106eb054f2...
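
The loop had roughly this shape (a sketch in Rust for illustration; the gist itself is C++/MSVC):

    use std::hint::black_box;
    use std::time::Instant;

    // Time several batches of dependent multiplies. The first batch
    // tends to run slower (warm-up), regardless of f32 vs f64.
    fn main() {
        for batch in 0..5 {
            let mut x = 1.000_000_1_f64;
            let t = Instant::now();
            for _ in 0..10_000_000 {
                x = black_box(x * 1.000_000_1);
            }
            println!("batch {batch}: {:?} (x = {x})", t.elapsed());
        }
    }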

Sorry for the confusion and spreading false information.


Sure, I clarified this in a sibling comment, but I kind of meant that I will use the slower "money" or "decimal" types by default. Usually those are more accurate and less error-prone, and then if it actually matters I might go back to a floating point or integer-based solution.


I think this is only true if using x87 floating point, which anything computationally intensive generally avoids these days in favor of SSE/AVX floats. In the latter case, for a given vector width, the CPU can process twice as many 32-bit floats as 64-bit floats per clock cycle.
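
A concrete sketch of that (Rust std::arch intrinsics, assuming AVX; the function names are mine):

    use std::arch::x86_64::*;

    // One 256-bit register holds 8 f32 lanes...
    #[target_feature(enable = "avx")]
    unsafe fn add_f32x8(a: __m256, b: __m256) -> __m256 {
        _mm256_add_ps(a, b) // 8 additions per instruction
    }

    // ...but only 4 f64 lanes.
    #[target_feature(enable = "avx")]
    unsafe fn add_f64x4(a: __m256d, b: __m256d) -> __m256d {
        _mm256_add_pd(a, b) // 4 additions per instruction
    }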


Yes, as I wrote, it is only true for one float value.

SIMD/MIMD will benefit from working on smaller widths. This is not only because they do more work per clock but because memory is slow. Super slow compared to the CPU. Optimization is a lot about avoiding cache misses.

(But remember that a cache line is 64 bytes, so reading any single value smaller than that takes the same time. So in theory it does not matter when comparing one f32 against one f64.)
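
Back-of-envelope (a sketch):

    fn main() {
        // Streaming N values pulls in N * size_of::<T>() / 64 cache lines,
        // so an f32 array costs half the memory traffic of an f64 array.
        const N: usize = 1 << 20;
        let f32_lines = N * std::mem::size_of::<f32>() / 64;
        let f64_lines = N * std::mem::size_of::<f64>() / 64;
        println!("f32: {f32_lines} lines, f64: {f64_lines} lines"); // 65536 vs 131072
    }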


This is very interesting! Are there any movements toward this?

Wouldn't it open up a new attack vector where processes could read each other's data?


I agree with you, hidden is worse.

But we do know what it cannot statically link to: any GPL library, which many indirect dependencies are.


I think you mean the LGPL? It allows you to "convey a combined work under terms of your choice" as long as the LGPL-covered part can be modified, which can be achieved either via dynamic linking or by providing the proprietary code as bare object files to relink statically. The GPL doesn't have this exception.


If static and dynamic libraries use the same interface, shouldn't they be detectable in both cases? Or is it removed at compile time?


First IANACC (I'm not a compiler programmer), but this is my understanding:

What do you mean by interface?

A dynamic library is handled very differently from a static one. A dynamic library is loaded into the process's virtual memory address space, so there will be a trace there: a tree of loaded libraries. (I would guess this program walks that tree, but there may be better ways, which I do not know of, that it uses.)
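
(A sketch of that idea on Linux, in Rust with the libc crate; I don't know that this particular program does it this way:)

    use std::ffi::CStr;

    // Ask the dynamic loader to enumerate every shared object mapped
    // into this process; statically linked code leaves no such trace.
    unsafe extern "C" fn print_so(
        info: *mut libc::dl_phdr_info,
        _size: libc::size_t,
        _data: *mut libc::c_void,
    ) -> libc::c_int {
        let name = CStr::from_ptr((*info).dlpi_name);
        println!("loaded: {}", name.to_string_lossy());
        0 // returning zero continues the iteration
    }

    fn main() {
        unsafe { libc::dl_iterate_phdr(Some(print_so), std::ptr::null_mut()) };
    }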

In the GNU/Linux world a static library is more or less a collection of object files. The linker, to the best of my knowledge, will not treat the contents of a static library differently from your own code. LTO can take place. In the final ELF, the static library will be indistinguishable from your own code.

My experience with the symbol table in ELF files is limited, and I do not know if it could help unwrap static library dependencies. (A debug symbol table would of course help.)


I know this is maybe not the answer you want, but if you are just interested in getting the job done, there are companies that are experts at this, for example:

https://fortune.com/2024/03/11/adaptive-startup-funding-falc...


Also interested in this. Does this task really require such specialized knowledge?


The first thing that is required is to define what they are trying to do. In other words, list some question and answer examples. It's amazing how many people are unwilling or unable to do this and just jump to "we need to train a custom model". To do what exactly, or answer what kinds of questions? I have actually had multiple clients refuse to do that.


Very good point. I totally agree with you.


One problem I encounter with math wikis is that I almost need to know the subject already in order to understand the wiki page.

I think Wikibooks is a good initiative to solve this, and it could be powerful when combined with a normal wiki.


> almost need to know what it is before reading to understand the wiki page

There is a project page advocating more accessible technical articles, https://en.wikipedia.org/wiki/Wikipedia:Make_technical_artic...

In some cases technical subjects just require some pretty steep prerequisite knowledge, but where possible it's nice to try to make them as accessible as can be done practically within the space constraint of a few introductory paragraphs. Usually that means trying to aim at least part of any article at approximately "1 level below" the level where students are expected to first encounter the topic in their formal study. (This isn't always accomplished, and feel free to complain on specific pages that fall far short.)

Writing for an extremely diverse audience with diverse needs is a hard problem. And more generally, writing well as a pseudonymous volunteer collective is really hard, and a lot of the volunteers just aren't very good writers. Then some topics are politicized, ...

How much time have you personally spent trying to make technical articles whose subjects you do know about more accessible to newcomers? If anyone reading this discussion has the chance, please try to chip away at this problem, even if it's just contributing to articles about e.g. high school or early undergraduate level topics – many of these are not accessible at the appropriate level. But if you are an expert about some tricky technical topic in e.g. computing or biology or mechanical engineering, go get involved in fixing it up.


I’m sure it is totally impossible because of figuring out where to start (what’s “obvious” to the reader), but a wiki that also has some sort of graph and could work out the dependencies for a given theorem, what you need to know to understand it, and then a couple of applications (for examples), could be really useful. An automatic custom textbook on one specific topic.
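
A sketch of the core idea (the function name and data are mine, purely hypothetical): a postorder DFS over the prerequisite graph yields a reading order with prerequisites first.

    use std::collections::HashMap;

    // Assumes the prerequisite graph is acyclic.
    fn reading_order<'a>(
        prereqs: &HashMap<&'a str, Vec<&'a str>>,
        topic: &'a str,
        out: &mut Vec<&'a str>,
    ) {
        if out.contains(&topic) {
            return; // already scheduled
        }
        for &dep in prereqs.get(topic).into_iter().flatten() {
            reading_order(prereqs, dep, out);
        }
        out.push(topic); // all prerequisites now precede `topic`
    }

With prereqs like {"monad": ["functor"], "functor": ["category"]}, asking for "monad" yields ["category", "functor", "monad"]: a tiny custom textbook outline.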


Look up Abstract Wikipedia.

https://meta.wikimedia.org/wiki/Abstract_Wikipedia

It's more or less Wikipedia, but the articles are created using natural language generation on a functional programming base. The main goal is to generate content in any language from a common underlying structure, but one could also try recursive explanations of a given topic in that framework.


> One problem I encounter with math wiki is that I almost need to know what it is before reading to understand the wiki page.

Case in point, nLab: https://ncatlab.org/nlab/show/HomePage

For instance, https://ncatlab.org/nlab/show/homotopy+type+theory

Although this is partly inevitable because the content is really abstract, I know there are more approachable ways to define “monad” than https://ncatlab.org/nlab/show/monad


Yes, Wikipedia is really bad for maths articles. They're all written by people who have just learnt about the topic and are showing off their pedantically detailed knowledge of it.

I recommend MathWorld. Much, much better.


A benefit of knowing a non-English language is that the wiki entry in my native language usually is a good tl;dr of the English one.

Many of the English math entries seem to be written for math students (as in students in a math program, not students studying math).


I think you have a typo in your URL.


Thanks for the heads up, there was an SSL issue on Cloudflare, should be fixed now :)


I'm not able to tell pixellabs AI art apart from professionally painted art.

Of course, the best results come when the tool is combined with a good artist.

I'm 100% sure the price of game art will drop significantly, if it hasn't already.


> it reasons about object lifetimes statically.

How does that differ from RAII?

I think I misunderstand you or lack knowledge, because this sounds exactly like RAII.

I know that Rust has major compile-time checks, but saying that the difference from C++ is that it reasons about lifetimes is misleading. I think the major point of C++ compared to C is that C++ "reasons about object lifetimes statically" with destructors and RAII. Saying that Rust does this, and implying C++ doesn't, is misleading.


Rust and C++ work very similarly with respect to objects you own. In both languages they get cleaned up by a destructor when they go out of scope. However, Rust also has a lot of language features that deal with objects you borrow, i.e. have a pointer to. In C++ you might make a "use-after-free" mistake and hold a pointer to an object that's already been destructed, which can lead to all sorts of memory corruption. In Rust, the same mistake almost always results in a compile-time error. The part of the compiler that enforces all this is called the "borrow checker", and getting used to the borrow checker's rules is a big part of Rust's learning curve.

One thing C++ programmers might be interested to learn, is that this doesn't only apply to simple variables and references; it also applies to library data structures and their methods. The Rust compiler doesn't really "know" what the .clear() method on a Vec does, but it knows enough to prevent you from calling that method while you're holding references to the Vec's elements.
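
A minimal example of that .clear() point (this fails to compile):

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // immutable borrow of an element
        v.clear(); // error[E0502]: cannot borrow `v` as mutable
        println!("{first}"); // the immutable borrow is still alive here
    }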


> getting used to the borrow checker's rules is a big part of Rust's learning curve.

It’s also a big part of C and C++’s learning curves, it’s just that the compiler doesn’t tell you about it; you’re being taught by segfaults and silent memory corruption instead.


Very true. My hot take on this is, if you want to learn C or C++ "properly" (not just well enough to get your homework done, but well enough to write medium-size programs that pass ASan/UBSan/TSan), the fastest way is to learn Rust first. This is emphatically not because Rust is quick to learn, but rather because ownership and borrowing discipline takes years of mentorship to absorb in C and C++, and plenty of career programmers never really get it.


> Rust and C++ work very similarly with respect to objects you own.

There is one major difference. In Rust you can memcpy objects you own to a different base address and they will still work, unless they're pointed at by a Pin<> type (as such, code that requires objects to stay put in memory must take a Pin<> reference. This arrangement may potentially be replaced by a special "?Move" trait in future editions of Rust). C++ pins all objects by default and allows objects to have custom "move" constructors, to sort of enable them to be moved elsewhere.

In C++, objects that get "moved" must leave behind something that can be destructed cleanly; Rust has no such limitation, at least wrt. ordinary moves. Arguably the closest thing it has to a C++ move is the .take() pattern, which leaves a default-constructed object behind. But this is rarely used.
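
For reference, the .take() pattern is std::mem::take:

    let mut s = String::from("hello");
    let taken = std::mem::take(&mut s); // move the value out...
    assert_eq!(taken, "hello");
    assert!(s.is_empty()); // ...leaving a default-constructed String behind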

The general tradeoff is that the Rust pattern memcpy's compound objects more often, but has less gratuitous use of the heap and less pointer chasing compared to idiomatic C/C++. This is a valid choice once one has the compiler support that Rust provides.


If anyone's trying to follow along but wondering what the heck we're talking about, I have a video about this :) https://youtu.be/IPmRDS0OSxM?t=3019


> However, Rust also has a lot of language features that deal with objects you borrow

Another major safety difference is that Rust uses destructive moves: once a binding is moved from, it becomes invalid / inaccessible (and it won't be destroyed).

In C++, a moved-from object is in a "valid but unspecified state", so it must be destructible (because destructors always run) but any interaction with the object other than destruction may be UB, and the compiler won't tell you (in the same way it won't tell you about borrowing issues whether it's UAF, invalidation, ...).
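
A minimal illustration of the Rust side:

    let a = String::from("hi");
    let b = a; // `a` is moved, not copied; no destructor will ever run for `a`
    // println!("{a}"); // error[E0382]: borrow of moved value: `a`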


Yeah, if you never use a pointer/reference/iterator, the guarantees are quite similar. The Rust compiler, however, also checks for use-after-free and iterator-invalidation bugs. In addition, it makes it harder to shoot yourself in the foot in concurrent code because you can't have multiple mutable references to an object.
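
For example (a sketch; rejected at compile time):

    let mut x = 0;
    let r1 = &mut x;
    let r2 = &mut x; // error[E0499]: cannot borrow `x` as mutable more than once
    *r1 += 1; // the first mutable borrow is still in use here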


Talking about full stack:

How far away is network support?

Like hosting bitsnbites.eu on it?


Haha! Infinitely far away (not on my horizon). The thought has crossed my mind, but there are so many other things I want to do first (e.g. audio, a superscalar CPU, ...).

