Good read. One thought-provoking bit for me was:

> Waymarking was previously employed to avoid explicitly storing the user (or “parent”) corresponding to a use. Instead, the position of the user was encoded in the alignment bits of the use-list pointers (across multiple pointers). This was a space-time tradeoff and reportedly resulted in major memory usage reduction when it was originally introduced. Nowadays, the memory usage saving appears to be much smaller, resulting in the removal of this mechanism. (The cynic in me thinks that the impact is lower now, because everything else uses much more memory.)

Any seasoned programmer will remember a few such things - you undo a decision made years ago because the assumptions behind it have changed.

Programmers often make these kinds of trade-off choices based on the current state of things: the typical machines the program runs on, the typical inputs it deals with, and the current version of everything else in the program. But all of those environmental factors change over time, which can make the inputs to the trade-off quite different. Yet it's difficult to revisit all those decisions systematically, because each one requires too much human analysis. If we could encode those trade-offs in the code itself, in a form accessible to a programmatic API, one could imagine a machine learning system that re-makes those trade-off decisions automatically as everything else changes, by traversing the search space of those parameters. Today's programming languages unfortunately don't let you encode such high-level semantics, but maybe it's possible to start small - e.g. which associative data structure to use could be chosen relatively easily, and the initial size of a data structure could be picked automatically based on benchmarks or even metrics from the real world.
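To make that concrete, here is a minimal C++ sketch of what "encoding the trade-off" might look like: the choice of associative container is lifted into a single compile-time knob that a hypothetical tuning tool or benchmark harness could flip and re-measure. All names here (MapKind, SymbolTable, the knob itself) are made up for illustration.

    #include <map>
    #include <string>
    #include <unordered_map>

    enum class MapKind { Ordered, Hashed };

    // The knob itself; imagine a tuning tool rewriting this constant based on
    // measured workloads instead of a human editing it by hand.
    constexpr MapKind kSymbolTableKind = MapKind::Hashed;

    template <MapKind Kind, typename K, typename V>
    struct MapChoice;

    template <typename K, typename V>
    struct MapChoice<MapKind::Ordered, K, V> { using type = std::map<K, V>; };

    template <typename K, typename V>
    struct MapChoice<MapKind::Hashed, K, V> { using type = std::unordered_map<K, V>; };

    // Call sites only see the alias; none of them care which decision was made.
    using SymbolTable = MapChoice<kSymbolTableKind, std::string, int>::type;

    int main() {
        SymbolTable symbols;
        symbols["answer"] = 42;
        return symbols.at("answer") == 42 ? 0 : 1;
    }

The point is only that the decision lives in one machine-visible place instead of being baked into every call site.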




I don't think that exploding the state space of your program by making the history of your design decisions programmatically accessible (and changing them regularly to reflect new assumptions) would be good for the quality of the result.


I don't think it's as simple as saying "the state space explodes, and that's bad".

When you say state space, I think about what is dynamically changing. If you can select between two design choices at, say, compile time, then yes, your state space is bigger, but you don't have to reason about the whole space jointly. The decision isn't changing at run time.
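A small C++17 sketch of that distinction, with hypothetical names: the knob below is resolved while compiling, so only one of the two branches exists in the resulting binary, and there is no run-time state that can switch between them.

    #include <algorithm>
    #include <vector>

    // Hypothetical compile-time knob: which search strategy to use.
    template <bool UseBinarySearch>
    bool contains(const std::vector<int>& sorted_values, int needle) {
        if constexpr (UseBinarySearch) {
            return std::binary_search(sorted_values.begin(), sorted_values.end(), needle);
        } else {
            return std::find(sorted_values.begin(), sorted_values.end(), needle)
                   != sorted_values.end();
        }
    }

    int main() {
        std::vector<int> v{1, 3, 5, 7};
        // Pick one instantiation; the other branch is simply not part of this binary.
        return contains<true>(v, 5) ? 0 : 1;
    }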


You have to have tests for all combinations though. At least those combinations that you actually want to use. You get the same problem when your code is a big ifdef-hell.


Testing is important, for sure, but just because you have two parameters with n choices each does not mean you have to test all n^2 combinations. You can aim to express parameterization at a higher level than ifdefs.

For example, template parameters in C++. The STL defines map<K, V>, yet you don't have to test every possible type of key and value.
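As a rough illustration of that point (the helper and the types below are made up): generic code written against the shared map interface only needs tests for a few representative instantiations, not for every conceivable K/V pair.

    #include <cassert>
    #include <map>
    #include <string>
    #include <unordered_map>

    // A generic helper written against the shared map interface rather than a
    // concrete container; its logic is identical for every K/V that fits.
    template <typename Map>
    typename Map::mapped_type get_or(const Map& m,
                                     const typename Map::key_type& key,
                                     typename Map::mapped_type fallback) {
        auto it = m.find(key);
        return it != m.end() ? it->second : fallback;
    }

    int main() {
        std::map<std::string, int> a{{"x", 1}};
        std::unordered_map<int, double> b{{7, 2.5}};
        // Two representative instantiations stand in for the unbounded set of
        // possible key/value types.
        assert(get_or(a, "x", 0) == 1);
        assert(get_or(b, 9, -1.0) == -1.0);
        return 0;
    }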


I'm pretty sure you need n^2 tests if each parameter has n non-equivalent choices. For maps, many types are equivalent, so you don't need an infinite number of tests.


If the two hypothetical parameters only affect disparate program logic for some or all of their possible choices, they could require as few as 2n tests instead of the full n^2... if I'm understanding the hypothetical right. (It depends on their potential for interaction.)
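A toy C++ illustration of the independence argument, with invented knobs: because the two parameters below touch disjoint logic, each axis can be tested with the other held fixed, giving roughly 2n cases rather than n^2.

    #include <cassert>
    #include <cstddef>
    #include <string>
    #include <vector>

    // Hypothetical knobs: each one affects a separate piece of logic.
    std::string greeting(bool formal) { return formal ? "Dear user" : "hi"; }

    std::vector<int> make_buffer(int initial_capacity) {
        std::vector<int> v;
        v.reserve(initial_capacity);
        return v;
    }

    int main() {
        // The parameters never interact, so each axis is tested on its own.
        assert(greeting(true) == "Dear user");
        assert(greeting(false) == "hi");
        for (int cap : {0, 1, 1024}) {
            assert(make_buffer(cap).capacity() >= static_cast<std::size_t>(cap));
        }
        return 0;
    }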


Setting aside the AI angle, perhaps just recording the assumptions in a way that can be measured would be enough.

Tooling, runtime sampling, or just code review could reveal when the assumptions go awry.
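One hedged sketch of what "recording the assumptions" could look like in C++ (the table, the counters, and the threshold of 4 are all hypothetical): the assumption becomes data with a cheap check attached, so a sampler, a CI job, or a reviewer can see when it stops holding.

    #include <cstdio>

    // An assumption recorded as data instead of a comment.
    struct RecordedAssumption {
        const char* description;
        bool (*still_holds)();  // cheap check a sampler or test can run
    };

    // The original trade-off: "most values have only a handful of uses."
    static long g_total_values = 0;
    static long g_total_uses = 0;

    static bool few_uses_per_value() {
        return g_total_values == 0 || g_total_uses / g_total_values <= 4;
    }

    static const RecordedAssumption kAssumptions[] = {
        {"average uses per value stays small (<= 4)", &few_uses_per_value},
    };

    // A sampler, CI job, or code-review checklist could walk this table.
    int main() {
        g_total_values = 1000;
        g_total_uses = 2500;
        for (const auto& a : kAssumptions) {
            std::printf("%s: %s\n", a.description, a.still_holds() ? "ok" : "REVISIT");
        }
        return 0;
    }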


Does FFTW automatically generate implementations optimized for each machine?



