Hacker News | adamrt's comments

I love seeing OpenBSD news more than your average person, but is there a reason you're submitting a month-old release?

https://news.ycombinator.com/item?id=31109046


Sometimes significant releases (and I'd call one of OpenBSD's twice-yearly releases that includes full support for a brand-new set of silicon reasonably significant) are worth resurfacing once for those who missed them the first time around, which is nearly inevitable. If there's enough interest it'll get on the front page again, and that can be useful. I'm definitely happy about the Apple Silicon support myself, more so than the comments the first time around were. While I don't plan to use it there right now, it'll mean a very solid second life for a lot of systems as Apple moves on.

And honestly, even right now, the state of very compact, high-CPU-performance PCs is surprisingly mediocre. I've found myself researching it quite a bit from the perspective of gateway/firewall usage, where I've moved fully away from crappy proprietary appliances to standard systems. For racking there are lots of great options, and none of this would replace them. But if you want something more like a classic appliance for a small-scale setting (compact, very low energy, quiet, etc., but with more oomph than something like an RPi), it's very easy to rapidly get into the $600+ range even for fairly mediocre performance.

Whereas a base Mac Mini M1 goes new for $700 and already floods the used market at $350-500, and it has nearly an order of magnitude more performance than an RPi 4 while still using only 15W. It's almost double the single-threaded speed of, and about 50% faster multithreaded than, the (admittedly now old, but also still what AMD is selling as their current embedded chip until Genoa) EPYC 3251 I have in a much more expensive system. Granted, that system has IPMI and a bunch of other non-CPU features that are very valuable to me, but still. And the GPU and any GUI are perfectly irrelevant for this use. The Mini could honestly be an excellent option for a lot of appliance usage, which is a pretty surprising place to be, but there it is.


Lots of good info. Thanks!


> I could turn the question around, what evidence do you have they aren't?

I don't care about either side of this argument, but you can't logically turn the question around. The burden of proof is on you, who made the claim.


Perf Avg? I thought the M1 was absurdly fast, no?


I have a macbook and am happy with it, so not a hater, but...

A lot of outlets have pointed out that the M1 chip came out in this kind of interim period between Intel and AMD releases, so performance comparisons were a little misleading.

If you compare M1 chips to more recent AMD chips (I'm less clear about Intel), they're more similar. There are differences in power use and single-core versus multi-core attributes, but overall they're much more similar than you'd conclude based on when the M1 first came out. The M1 is still very nice and efficient, but not absurdly better, or maybe not even better at all.

There are a lot of issues with memory use (leaks, excessive use) on M1 laptops. There was just a post about this here on HN. The issue is sometimes raised without awareness of it as a general trend, but it keeps coming up over and over. So far I haven't seen any explanation for why it happens or how to avoid it, but it's real. Just Google "macbook memory leaks" and you'll see plenty of discussion. As far as I know, there have been some red-herring solutions but no actual resolution.


Opening my Emacs config on a top i9 mbp takes an average of 3–4 seconds. On an M1: <1s.


Not to stir the eternal debate, but if my Vim config takes > 100ms, I start debugging what's going on/figuring out the responsible plugin.

3-4 seconds would drive me insane :O


A few seconds isn’t bad to boot an OS, no? ;-)


It's not really an issue for many, perhaps even most, Emacs users. I have my Emacs set to launch when my window manager starts on login, after which it stays open until I log off or shut the computer down. Emacs and Vim have such different workflows that this type of comparison isn't all that meaningful.


Nah, that's fair. I've got about 143 packages that I load (per a quick `ls -l | wc -l`). My Emacs config is pretty heavy. I keep trying to trim it down a bit. Though, sometimes I have edit sessions that last several weeks, so it's not that big a deal. But still.


Two things: Emacs as a daemon, and Emacs 28 native compilation. The latter is an absolute game changer for speed in a million ways. Scrolling a 5000-line Python source file with all the packages, syntax highlighting, etc., is buttery smooth:

https://www.emacswiki.org/emacs/GccEmacs


Yeah… I should probably set up the daemon sometime. I just tweak my config so much (it's Emacs, c'mon) that for some reason it's just been simpler for me to keep launching it fresh.

native-comp doesn't do anything for start up. Good tip though—I've been running bleeding-edge Emacs for a while now. ;-)

I ran `esup` and figured out most of the slowdown is from the otherwise excellent straight.el [1] package manager. I put

    (setq straight-check-for-modifications '(check-on-save find-when-checking))
in my init, and now everything is really speedy!

[1]: https://github.com/raxod502/straight.el#my-init-time-got-slo...


Native comp doesn't do anything for startup implicitly, absolutely. There are no startup optimizations or anything like that. However, elisp seems to run between 1.5 and 10 times faster under native comp. I run Emacs on a lot of underpowered ARM devices (think RPi Zero) with my full config, and the decrease in startup time on those machines was astonishing.


Jcs is always doing something cool. Thanks for the write up!


Indeed. As a programmer desperate for technical edutainment to consume after work, he’s a gem.


As an OpenBSD and retro protocol user (gopher, irc, MUDs, and so on) JCS is amazing.


I use iOS, Firefox and 1Password to fill out passwords regularly. I just logged out of HN and back in on my iPhone to test. Are there certain versions that this affected?


Firefox for iOS apparently.

[UPDATE] Although, reading the replies, it looks like it does work. Can't confirm, as I don't use Firefox on iOS.


Git blame has a feature just for this: `git blame -w -M`. -w ignores whitespace changes, and -M isn't strictly necessary but will ignore moved lines.


Hey Adam, just wanted to say I’m a huge fan of both your and Andreas’ work.

Really glad you guys got together for this.


Thanks for listening!

It was a great episode to make. I'm always looking for devs with interesting backstories to share, so if you have any ideas, shoot me a message.


Hah, I was thinking the same thing and jumped to comments to see if someone else mentioned it. I actually use this quite a bit in Magit.

Magit is genuinely amazing.

Thanks @tarsius!


I usually try to leave the Magit praising to others in these threads but in this case I would have found it hard to resist. Glad others have already taken care of it. ;P


You've made a thing that has many loyal users, myself included. :)

Will this change make your work on Magit easier?

If there was something you could add/remove to/from Git to make your life easier, what would it be?


It probably doesn't make much of a difference, because Magit's stashing commands offer more flexibility that, even after this addition, is still missing from `git stash`. Stash creation is actually one of the very few areas where Magit doesn't use the respective Git porcelain commands at all (as opposed to using them and then doing additional things with other plumbing and/or porcelain commands, as is the case in many other areas).

Also I have to continue supporting older Git releases anyway; people like to use the latest Magit version without considering doing the same for Git, for some reason.

Change the license to GPLv2 OR later.


> We want exactly the functionality that is already present in other systems, pitfalls and all. That's just an excuse to do nothing

Are you offering to help with the maintenance or development associated with it? Or are you just demanding features while calling the people who do help with that stuff liars?

Maybe they do know better? Or maybe they know about other hassles that will come with it that you can't fathom. They've been developing and maintaining one of the best open source projects on the planet for nearly three decades. Maybe, with all that experience, it's not their opinion that is BS?


Given there's already a pg_hint_plan extension I really don't see why people are so angry about the developers not wanting to bless something they consider a footgun as a core feature.


"risk", because pg_hint is an external module written by somebody as a hobby; if it stops existing or working after some update or with newer versions of Postgres, it becomes a huge problem for the PROD systems that need it for any reason.


My point is that "core developers should add an unproven feature they think is a footgun and take on the support overhead of that without convincing evidence it would actually be a net win" is not a convincing argument.

Also, given that Amazon AWS supports it for Aurora, I feel like it's not -that- hobby-ish.

If people use it and can provide evidence that it helps more than it hurts, that might be convincing.

Insulting the core developers for not yet being convinced seems rather less likely to help.


> Also given Amazon AWS support it for Aurora, I feel like it's not -that- hobby-ish.

Oh, didn't know that :)

About the rest: sure, I agree about the indirect adverse effects of having them available (they're easy to misuse, as I saw in some apps using Oracle DBs), and it's certainly wrong to insult somebody over this.

Still, personally, I think the pros of having that embedded in the app would outweigh the cons.

On one hand, I remember some nights spent trying to make some SQL perform; hints were always at least a good temporary workaround.

On the other hand, there will always be some SQL which confuses the optimizer (or special cases where a lot of data changes the distribution of values, etc.), and hints would be the only way to cover those cases.

Maybe an interesting question is at which level hints should act. I mainly know Oracle & MariaDB, so I know hints of the type "use that index"/"query tables in this order"/"join these tables with this join type"/etc., which are fairly low-level hints. Maybe higher-level hints of the type "the most important selectivity criteria come from inline view X"/"I want just the first row of the result"/"only consider plans that select data using indexes"/etc. would be interesting as well; not sure, just dumping my thoughts here.


It is written by a telco company.


Sure, maybe by not doing what every other database does, the thing that really helps users out when they're suffering unpredictable performance, they're demonstrating their far superior knowledge. And maybe all the users who are suffering are somehow incompetent.

Or maybe it's just typical developer hubris of the kind that hits everyone at some point.


> Why "on function exit" style defers - already known to be a bad idea from Go - is beyond me

Is there something you can point me to about this? I write Go professionally, and from a readability and utility standpoint I really like it in common scenarios. I hadn't heard it's a known bad idea and am just curious. Thanks.


I think the parent means that there are languages with scope-based cleanup (e.g. in Rust/C++ a value will be cleaned up at the end of the scope that contains it, so one can even create a separate block inside a function), which is a better choice than forcing people to do cleanup at the end of the function.
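
To illustrate the difference, here's a minimal Rust sketch (the `Guard` type and its names are made up for illustration): the inner value is cleaned up when its block ends, not at function exit as a Go `defer` would be.

```rust
struct Guard(&'static str);

impl Drop for Guard {
    fn drop(&mut self) {
        println!("dropped {}", self.0);
    }
}

fn main() {
    let _outer = Guard("outer");
    {
        let _inner = Guard("inner");
        // _inner is cleaned up here, at the end of this block,
        // not at function exit as a Go defer would be.
    }
    println!("function body still running");
} // _outer is cleaned up when main returns
```

Running it prints "dropped inner" before "function body still running", and "dropped outer" last.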


Note that Rust isn't dropping things "at the end of the scope" but at the end of their lifetime, it's just that if you declare local variables their lifetime ends when they fall out of scope and so this often (but not always, so it's worth remembering to care about lifetimes not scopes) coincides for values in those variables.

Making things more confusing, Rust is inferring scopes you never explicitly wrote, for example Rust brings a new scope into existence whenever you declare a variable with a let statement:

  let good = something.iter().filter(is_good).count(); // good is a usize

  let good = format!("{} of them were good", good); // a String
This is fairly idiomatic Rust, whereas it would sound alarm bells in a lot of languages because their shadowing is dangerous (if you hate shadowing you can tell Rust's linter to forbid this, but may find some other people's Rust hard to read so I suggest trying to see if you can live with it instead).

Obviously that first variable named "good" is gone by the time there's a new variable named good, and so that usize was dropped (but, dropping a usize doesn't do anything interesting, beyond making life harder for a debugger on optimised code since this "variable" may never really exist in the machine code). On the other hand the String in that second variable named "good" has a potentially long lifetime, if it gets out of this local variable before the variable's scope ends.

Because Rust is tracking ownership, it will know whether the String is still in good when that scope ends (so the String gets dropped), or whether it was moved somewhere else (e.g. a big HashMap that lives long after this stack frame). Because it tracks borrowing, it will also notice if in the former case (where the String is to be dropped) there are outstanding references to that String alive anywhere. That's prohibited, the lifetime of the String ends here, so those references are erroneous, your program has an error which will be explained with perhaps a suggestion for how to fix it.
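
As a small sketch of that ownership tracking (variable names are made up): a value moved out of a scope isn't dropped when the scope ends, while one still owned there is.

```rust
fn main() {
    let mut long_lived: Vec<String> = Vec::new();
    {
        let moved = String::from("escapes the scope");
        let kept = String::from("dropped at scope end");
        long_lived.push(moved); // ownership moves into the Vec: no drop here
        println!("{}", kept);
        // `kept` is still owned by this scope, so its String is
        // dropped right here, when the block ends.
    }
    // The moved String is still alive inside `long_lived`.
    assert_eq!(long_lived[0], "escapes the scope");
}
```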


NieDzejkob is correct: In Rust, shadowing a variable has no effect on when the destructor of the previous value runs. Thus there's no problem with retaining a reference to the previous value:

    let a_string = String::from("foo");
    let retained_reference = &a_string;
    let a_string = String::from("bar");
    dbg!(retained_reference);
    dbg!(a_string);
Prints:

    [src/main.rs:5] retained_reference = "foo"
    [src/main.rs:6] a_string = "bar"
Similarly, "non-lexical lifetimes" have no effect on when a destructor runs. The compiler will infer short lifetimes for values that don't need to be destructed (don't implement Drop), but adding a Drop implementation to a type will force every instance's lifetime to extend to end of scope. (Though as in C++, temporaries are still destroyed at the end of the statement that created them, if they're not bound to a local variable.)

The only exception to this rule that I'm aware of is what you mentioned about move semantics: Moving a value means that its destructor will never run. That's the big difference from C++. Everything else to do with destructors is very similar, as far as I know.


To my mind, move semantics being "the only exception" is a pretty bad joke. Unlike C++ Rust's assignment semantics are moves. So, you're not opting in to anything here as with C++ move, this is just how everything works.

For example, if you were to make the second a_string mutable, and then on the next line re-assign it to yet a third string containing "quux", the "bar" string gets dropped immediately, as a consequence of move semantics again.

In C++ you'd have to go write a bunch of code to arrange that, although I believe the standard library did that work for you on the standard strings - but in Rust that's just how the language works, you assigned a new value to a_string so the previous value gets dropped.
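
A minimal sketch of that (the `Noisy` type is hypothetical): re-assigning a variable drops the old value at the assignment itself, with no user-written move-assignment code anywhere.

```rust
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let mut x = Noisy("bar");
    println!("before reassignment");
    x = Noisy("quux"); // the "bar" value is dropped right here
    println!("after reassignment");
} // the "quux" value is dropped when x falls out of scope
```

The drops interleave with the prints in exactly that order: "bar" goes away at the reassignment, "quux" only at the end of scope.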


> In C++ you'd have to go write a bunch of code to arrange that

I don't think it's quite that bad. If you define a new struct or class that follows the "Rule of Zero (or 3 or 5)", the copy-assignment and move-assignment operators will have reasonable defaults. For example, the following Rust and C++ programs make the same two allocations and two frees.

Rust:

    struct Foo {
        m: String,
    }

    fn main() {
        let mut x = Foo {
            m: "abcdefghijklmnopqrstuvwxyz".into(),
        };
        x = Foo {
            m: "ABCDEFGHIJKLMNOPQRSTUVWXYZ".into(),
        };
    }
C++:

    #include <string>

    using std::string;

    struct Foo {
      string m;
    };

    int main() {
      auto x = Foo{"abcdefghijklmnopqrstuvwxyz"};
      // Foo's default move-assignment operator is invoked on the temporary.
      x = Foo{"ABCDEFGHIJKLMNOPQRSTUVWXYZ"};
    }
The high-level "you assigned a new value so the previous value gets dropped" behavior is indeed what's happening, and it's automatic in most cases. But when we do violate the Rule of Zero and override the default constructors/operators, things get quite complicated, and it's easy to make mistakes. (Also, in general we often get more copies than we intended when we're not dealing with temporaries.)

The "moves are implicit and destructive, and everything is movable" behavior in Rust is substantially simpler and often more efficient, and personally I strongly prefer it. But I'll admit that trying to contend with destructive moves without the borrow checker would probably be painful.


That is a common misconception. The destructor runs at the end of the scope. Try putting a println in a drop impl.


If you do this with C++ destructors, they will sure enough fire at the end of the scope. Even if your String is long gone, the destructor fires anyway, destroying... a hollowed out String left behind to satisfy the destructor.

But go ahead and try it in Rust, your print doesn't happen because nothing was actually dropped. The String was moved, and so there isn't anything to drop.

https://gist.github.com/rust-play/4bbcc2a4efb641e578e84a1962...
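
For anyone who doesn't want to click through, here's a minimal sketch along the same lines (hypothetical names, not necessarily the gist's exact code): the destructor runs where the value's lifetime actually ends, inside the function it was moved into, and nothing fires at the end of the original scope.

```rust
struct Loud(&'static str);

impl Drop for Loud {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn consume(_: Loud) {
    // The argument is moved in here, so its lifetime ends (and its
    // destructor runs) when this function returns.
}

fn main() {
    let s = Loud("moved away");
    consume(s);
    println!("end of main");
} // no drop here: s's value was already dropped inside consume()
```

So "dropping moved away" prints before "end of main", and nothing further prints afterwards.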

