Hacker News | ironman1478's comments

I think when it comes to music, it's really disheartening to hear people say it hasn't gotten better. There is a lot of good music coming out in so many genres, but it really requires actively seeking it out.


I've dug pretty deep in the genres I like... as far as I can tell, it's all just slight variations on the work the trendsetting artists did establishing the genre in the first place.


You might have just gotten older.

The issue is that as music progresses and changes, so too do the distribution networks. Traditional (or even nontraditional, to those from the pre-Spotify internet days) pipelines of music discovery have been largely co-opted by industry. Outlets of organic discovery are different now, and people typically don't keep changing their habits enough to keep up with it.

Pair this with the fact that most people settle their musical tastes in line with the most emotionally significant time of their early lives (high school for some, college for others, etc.), and the result is an assumption that

A) What they encounter forms an overall opinion of "all" new music despite being the tip of the iceberg and

B) It's not as good as what they grew up on


Because algorithmic curation shifted from what's quality to what's "sellable". That's a part of why you get sentiments that "media got worse". Hardcore media fans will still scour and find proper curations themselves (or become the curator), but the default of just letting Spotify or Netflix tell you what's "quality" is long over.

Another little cut on why people are trusting big tech less and less.


This headline really is misleading. The source will still be released; it just means that the work leading up to a release will be in private. IMO, there's nothing wrong with that, since a lot of the intermediate commits are likely just noise.


If I understand correctly, it sounds a bit like Valve and SteamOS. They publish the source, but they optimize for their internal developers and tools, not for easy source access by anonymous members of the public. Valve has been publishing its source as tarballs, not git commits (hence, projects like Jovian and Evlav reconstructing it in GitHub).


"likely just noise"

It's always noise to people that don't care, but matters to people who rely on AOSP, including third-party ROM developers.


You are saying that non-Google parties are reliant on AOSP being developed in the open on a public Gerrit instance? In what way?


Where do you think LineageOS gets their feature updates and security patches?


We can be less blindsided when the latest wave of anti-consumer features are announced.


I think the determinant will be how transparently this process is maintained.


Right - Google has been less than stellar with Chromium, with ad revenue motivations behind Manifest V3. One can only assume that other changes are being pushed, and it'll be too late to fight them when the OS ships before, or the same day as, the source.

As others say, it breaks contributions and any chance that other forks will keep up.


Which is bad, since all the forks will be behind by at least a few weeks, maybe months.

And think about all the merge conflicts you have to resolve after 3 months of code changes.


This is technically how open source is supposed to work. There never was any obligation to develop in public, or technically to release the source code publicly either. There is no obligation around communication, bug reports, etc.

The obligation is specifically to provide the source code (without certain usage restrictions) for binary releases when requested, and no more.


First they came....(Explains the strategy)


Additionally, that's already the case for most of the Android projects. The remaining projects that were developed directly in AOSP will now be developed on the internal branch like the rest.


I don't understand the difference between what is described in the article and how Android has been developed from the start. They have always developed new versions internally and then dumped into AOSP right before release. https://groups.google.com/g/android-building/c/T4XZJCZnqF8/m...


Some projects developed directly in AOSP. Those projects will now develop on the internal branch like the rest. So it's not as big of a change as some are making it out to be.


Developing in the open would make contributions from the community way more viable, would give the public the ability to see what's coming and prepare for it, would increase the likelihood that security vulnerabilities or other bad things are discovered and prevented early on. It would make the project more likely to serve the interests of its users.


>The source will still be released

Lol nice pipe dream. The source isn't even fully released today.


The source for AOSP. Individual Android devices have never been open source (minus some very few exceptions), and they have no obligation to do so as AOSP is Apache/permissively licensed.

Note that the demand for secrecy is primarily driven by device manufacturers, not Google. Manufacturers want to keep their "secret sauce" from their competitors.


Google has gradually been moving things from AOSP (open) to GApps (closed). Some of the things that have been moved are fairly essential for a mobile operating system (like a location provider and a SMS app). Projects building on AOSP now have to provide replacements, or declare them out of scope and punt.


They don't publish the really interesting sources anyway, this doesn't really change anything.


Right, they’re making the Gerrit instance private.


"Other canceled projects investigated whether climate change could lead to armed conflicts over access to fish stocks and how it affects societies in the Sahel."

It's about understanding how a society can change due to climate change, not understanding the actual mechanics of climate change. Do you think that understanding whether or not conflicts stem from climate change is not useful?


A lot of the research seems really useful. I don't understand why we wouldn't prioritize understanding how to prevent drug cartel recruitment (one of the examples cited in the article).


Well, if “we” were deliberately weakening the US, either to destabilize the government in order to bring about a radically different system of government, or on behalf of a hostile foreign power (or, perhaps most relevantly, as part of a hybrid movement whose members have a mix of those motivations), then “we” might want to cut a large number of things that would seem nonsensical to cut for people who don’t share that goal.


Feb 27, https://apnews.com/article/mexico-drug-lord-rafael-caro-quin...

> Mexico has sent 29 drug cartel figures, including drug lord Rafael Caro Quintero, who was behind the killing of a U.S. DEA agent in 1985, to the United States as the Trump administration turns up the pressure on drug trafficking organizations.. “This is historical, this has really never happened in the history of Mexico,” said Mike Vigil, former DEA chief of international operations.. The unprecedented show of security cooperation comes as top Mexican officials are in Washington trying to head off the Trump administration’s threat of imposing 25% tariffs on all Mexican imports starting Tuesday.


The original article references cartels in Colombia, not Mexico. Also, the goal of the research is to prevent people from joining crime organizations, not how to catch existing heads of criminal organizations. It's not clear what you implied by the text that you quoted.


  cartels in Colombia, not Mexico
https://dialogo-americas.com/articles/mexican-cartels-expand...

> the Ombudsman’s Office, an agency of Colombia’s Public Ministry, highlights the presence and the areas of operations of three of the strongest Mexican drug cartels: the Jalisco New Generation Cartel, Los Zetas, and the Sinaloa Cartel.. there is a belief among foreign consumers that Colombian drugs are of high quality, so the narcos capitalize on that.

  the goal of the research is to prevent people from joining crime organizations, not how to catch existing heads of criminal organizations
If budgets are constrained, which is a higher priority?


Starving the body of the snake ?

History suggests cutting off the head of a drug cartel is just a Tier 2 career advancement opportunity.

Did the acquisition of Joaquín "El Chapo" Guzmán as a token trophy head put much of a dent in the ongoing activity of the Sinaloa Cartel Ship of Theseus?

There's much to be said for understanding an organism (an entire cartel dynamic in this case) in order to eliminate it.


Was this research the first social science project to seek an understanding of cartels?

If not, what actions were taken based on previous social science research?


You asked a question, I offered an answer.

> Was this research the first social science project to seek an understanding of cartels?

Very probably not. Not in the US and not globally.

> what actions were taken based on previous social science research?

That'd be a question best answered by digging through the Hansards, Parliamentary Archives, Congressional Records, etc. of respective governments.

Political history would suggest that many good suggestions of social change were likely ignored in favour of operations with a lot of smoke and noise.


> Political history would suggest that many good suggestions of social change were likely ignored in favour of operations with a lot of smoke and noise.

Likely to increase with terrorist designation, https://apnews.com/article/gangs-cartels-sinaloa-aragua-trum...

Organizations sometimes optimize for the problem to which they are the solution, https://news.ycombinator.com/item?id=39491863


It might have been a smarter move by the US back in the day to extend USAID and collective-farming business support to poor South American farmers, rather than have the CIA indirectly fund cartels to run drugs into poor US communities in order to distance themselves from supplying arms to overthrow other governments.

Today that horse is dead and well beaten .. but maybe guns will solve things this time 'round.


Tariffs are part of the current round.


The research is useful in the sense that actually understanding the issues being investigated would be valuable. The question is, would the funded research actually deliver a better understanding of those issues? Given the state of the social sciences, I don't think that it would.


> Given the state of the social sciences, I don't think that it would.

Why not? That's a really broad brush.


Google “replication crisis” just for starters.


Some fields are inherently harder to replicate though -- economics, psychology, etc.

The fact that we (somewhat) do it with biology is amazing.

That doesn't mean we should avoid spending money in hard to replicate fields, or that we should require impossible to satisfy bars for funding.

Some of the most interesting results in these fields come with further questions attached.


The question isn’t whether the social sciences should exist as an academic field, the question is if they deliver reliable enough results to be a cost effective use of the defense budget.


Indeed, but replication crisis isn't that argument, given that's a field-as-whole issue rather than the cost effectiveness of a specific study.


I'm still talking about the field as a whole; I'm just saying "should the field exist" is a different question than "does the field generate good enough results for the Defense Department to rely on". See also my other comment, "I also don't think the Pentagon has the expertise necessary to tell good social science from bad."


If anything, I'd expect the Pentagon might have better expertise, given how distinct its intellectual zeitgeist is from social science academia.


Yeah, this is an under-discussed aspect of the grant issues: it's important to decide whether or not to fund something, but just as important, if not more so, is making sure that the funding is used in a way that actually delivers the promised effect.


I also don't think the Pentagon has the expertise necessary to tell good social science from bad.


The DoD employs experts from many different fields. Wars aren’t just won on the battlefront. PSYOPS is an obvious example of applied social science, but it extends far beyond that.


That phrase is really dependent on where you are in life. If you're young, sure! But if you have a big purchase coming up in the next few years (house) or might be retiring, I can't imagine tying up a lot of one's resources in the market right now. The downside really can outweigh the upside.


Ever heard of the Bosnian war? Many in Eastern Europe hate gypsies too. Maybe they don't discriminate on the color of skin specifically, but they definitely hate based on culture.


I was in Sarajevo during those times for a couple of days, so I remember. Also in Pristina. Yes, many people in Eastern Europe used to dislike gypsies, when I was a kid it was widespread, but I never heard anything about this in the past 10 years. This fire is out, probably on a permanent basis.


I haven't had a hard time finding employment. I am going to start at a FAANG next week, coming from another FAANG. I work in embedded / hardware. I think what is being asked for is changing and people are being caught flat footed. We don't need more web or backend devs, we have so many of those. The world of embedded can't hire enough because the scope of embedded has increased. It's no longer just writing bare metal C on a tiny microcontroller, it has expanded to writing applications for embedded Linux for custom devices that are more tricked out than desktops 20 years ago. It's easier than ever to build an SoC and a device and so many companies need people to write software for them. I think it's hard to outsource because frequently it requires lab space (unless you're working on pure compute devices) and companies don't want to ship prototypes and devkits overseas. So there are industries that are hiring lots of Americans.

Also, this might be a hot take; I don't agree with Elon's or Vivek's plan to outsource, but I do think tech has done this to itself. Why are so many startups and tech companies so bloated with engineers? I worked at a company that sold VERY complex factory and plant monitoring software to companies around the world. Its engineering team was about 500 people supporting a wide range of products and different levels of the stack. Companies with way less complex software and less software volume have way more bloated engineering orgs and are way less efficient. Because, fundamentally, most people are bad at software engineering (which is different from just programming and pure CS), including grads from MIT. A lot of companies are outsourcing because if everybody isn't that great, they might as well pay less for something that isn't good. The companies that have simple products and have crap engineers are the ones outsourcing.


How would a junior engineer get into embedded software? Personal hardware projects with esp32s, etc?

Seems far more interesting than cobbling together CRUD apps but it seems tough to get into given the experience roles require.


Having a computer engineering degree helps. Outside of the embedded equivalent of CRUD, you'll have to know the details of how a computer works. How memory visibility and synchronization works, what mmio is, basic assembly, interrupt concepts, etc. I'd actually recommend reading a book on operating systems. A lot of embedded devices that are pure compute require knowledge of operating systems because you're basically building software to enable a user to interact with the accelerator or sensor on the device.

I'd also learn a tool like yocto or buildroot, both tools for creating Linux images. They're not great tools, but they are widely used in creating embedded Linux images.

If you want to solve domain specific issues (like getting into DSP) or doing high speed networking, you have to find an entry level job in that field and have the requisite background. It's hard to do DSP without an EE background for example.


C++ really is a blast, and I think the complaints people have about it really depend on context. Lots of C++ devs hate the language but imo misdirect their hate: they actually work on legacy products built by people who were never that great at writing software. This happened to me with Rust, actually, where I was thrust into an existing Rust project at the company I work for. It's an "old" codebase with many different and conflicting conventions within it (different error handling types that don't compose well, bad class abstractions, lots of unsafe blocks when not needed, etc.). The project was the most miserable project I've ever worked on, and it's easy for me to blame Rust, but it's really just bad software development. There are warts in C++, but a lot of the pain people run into is just that they are working on crap code, and it would be crap in any language.


In every other language (except C), basically every time you write v[x]=y you aren't inviting the possibility of arbitrary memory corruption. The C++ 'std::optional' type makes calling '*v' undefined behaviour if 'v' is empty. The whole point of optional is to store things that might be empty, so why put undefined behaviour on the most common operation on an optional when it's empty (which is going to be common in practice; that's the point!)?

The problem I have with C++ (and I've written a lot), is every programmer involved in your project has to be 100% perfect, 100% of the time.

I know you can run memory checkers, and STL debug modes, but unless you are running these in release as well (is anyone doing that?), then you are then counting on your testsuite hitting every weird thing a silly, or malicious, user might do.

Given how important software is nowadays, and how Rust seems close to the performance of C++, is it really worth using a language where so many of the basic fundamental features are incredibly hard to use safely? They keep making it worse: calling an empty std::function threw std::bad_function_call, but the new std::copyable_function has made this undefined behaviour instead!


> The C++ 'std::optional' type makes calling '*v' undefined behaviour if 'v' is empty

Are you talking about dereferencing a null pointer? That is going to cause a low-level fault in any language; it has nothing to do with std::optional.

> every time you write v[x]=y you aren't inviting the possibility of arbitrary memory corruption

That's how computers work. You either pay the penalty of turning every memory access into a conditional bounds check first, or you avoid it altogether and just iterate through data structures, which is not difficult to do.

Also debug mode lets you turn on the bounds checking to debug and be fast in release mode.

The reality is that with a tiny amount of experience this is almost never a problem. Things like this are mostly relegated to writing your own data structures.


No, I'm talking about std::optional<T>. It's a type designed to store an object of type T, or nothing. You use '*v' to get the value out (if one is present).

It was originally discussed as a way of avoiding null pointers, for the common case where they are representing the possibility of having a value. However, it has exactly the same UB problems as a null pointer, so isn't really any safer.

I'm going to be honest, saying that memory corruption in C++ is 'almost never a problem' just doesn't match with my experience, of working on many large codebases, games, and seeing remote security holes caused by buffer overflows and memory corruption occurring in every OS and large software program ever written.

Also, that 'penalty' really does seem to be tiny, as far as I can tell. Rust pays it (you can use unsafe to disable it, but most programs use unsafe very sparingly), and as far as I've seen benchmarks, the introduced overhead is tiny, a couple of percent at most.


> It was originally discussed as a way of avoiding null pointers,

I don't think this is true. I don't think it has anything to do with null pointers, it is a standard way to return a type that might not be constructed/valid.

I think you might be confusing the fact that you can convert std::optional to a bool and do if(opt){ opt.value(); }

You might also be confusing it for the technique of passing an address into a function that takes a pointer as an argument, where the function can check the pointer for being null before it puts a value into the pointer's address.

> I'm going to be honest, saying that memory corruption in C++ is 'almost never a problem' just doesn't match with my experience,

In modern C++ it is easy to avoid putting yourself in situations where you are calculating arbitrary indices outside of data structures. Whether people do this is another story. It would (or should) crash your program in any other language anyway.

> Also, that 'penalty' really does seem to be tiny

The penalty is not tiny unless it is elided altogether, which would happen in the same iteration scenarios that in C++ wouldn't be doing raw indexing anyway.


>I don't think this is true. I don't think it has anything to do with null pointers

The paper that proposed std::optional literally uses examples involving null pointers as one of the use cases that std::optional is intended to replace:

https://isocpp.org/files/papers/N3672.html

Here is an updated paper which is intended to fix some flaws with std::optional and literally mentions additional use cases of nullptr that the original proposal did not address but the extended proposal does:

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p29...

I am going to be somewhat blunt, but your unfamiliarity with std::optional's use cases as a replacement for raw pointers, along with some of your comments about how C++ treats undefined behavior (as if it just results in an exception), suggests you may not have a rigorous enough understanding of the language to speak so assertively about the topic.


> I am going to be somewhat blunt, but if you're not familiar with how some of the use cases for std::optional is as a replacement for using raw pointers

I never said there wasn't a use case, I said it wasn't specifically about protecting you from them. If you put a null pointer in and reference the value directly, it doesn't save you.

If you don't understand the context of the thread, don't start arguments over other people's simplified examples. I'm not going to write huge paragraphs to try to avoid someone's off-topic criticisms; I'm just telling someone their problems can be avoided.


Yes you did and I literally quoted it, here it is, your words:

"I don't think this is true. I don't think it has anything to do with null pointers, it is a standard way to return a type that might not be constructed/valid."

This was in response to:

"It was originally discussed as a way of avoid null pointers,"

I have provided you with the actual papers that proposed std::optional<T> and they clearly specify that a use case is to eliminate the use of null pointers as sentinel values.


It's a template, you can put whatever you want in it including a pointer. All your example shows is that you can bind a pointer to a reference and it will wrap it but won't access the pointer automatically.

I'm not sure what your point here is other than to try to mince words and argue. It's a standard way to put two values together and what people were probably already doing with structs. You can put pointers into a vector too, but that doesn't mean it's all about pointers.


I'm pretty sure the issue that the parent commenter is referring to isn't about wrapping a pointer type in an optional, but wrapping a _non-pointer_ type in an optional, and then trying to access the value inside. std::optional literally provides a dereference operator (operator*)[1] which comes with the following documentation:

> The behavior is undefined if this does not contain a value.

The equivalent to this in Rust isn't `Option::unwrap`, which will (safely) panic if the value isn't present; the equivalent is `Option::unwrap_unchecked`, which can't be invoked without manually marking the code as unsafe. I've been writing Rust professionally for a bit over five years and personally for almost ten, and I can definitively say that I've never used that method a single time. I can't say with any amount of certainty whether I've accidentally used the dereference operator on an empty optional in C++, despite writing at least a couple orders of magnitude less C++ code, because it's not something that's going to stick out; the dereference operator gets used quite often and wouldn't necessarily be noticeable on a variable that wasn't declared nearby, and the compiler isn't going to complain because it's considered entirely valid to do that.

[1]: From https://en.cppreference.com/w/cpp/utility/optional/operator


Is your argument that Rust will panic if you are a bad programmer, and C++ says it's UB if you are a bad programmer?

That's just a fundamental difference of opinion. Rust isn't designed for efficiency; it's designed for safety first. C++'s unofficial motto is: don't pay for what you don't use.

If I type *x, why would I pay for a check of whether it's empty? I literally should have checked that the value isn't empty.

If you work in a codebase with people who don't check, your codebase doesn't have static analysers, you do no code review, and dereferencing an unchecked optional gets to production, do you think a .unwrap in Rust wouldn't have made it to production?


Your basis seems to be no-one is ever going to write bad code, anywhere, ever, and invoke undefined behavior. That doesn’t seem reasonable.

Also, an unwrap isn’t perfect, but it’s much better than UB. It asserts. No memory corruption, no leaking all your user’s data, no massive fines.

The equivalent to C++ would be an unchecked unwrap in an unsafe code block, and that would throw up flags during review in any Rust codebase.


An unchecked dereference should also throw up flags during review in a C/C++ codebase. I didn't assume that nobody would make mistakes. My argument has always been that you use a language like C++ where needed. Most of your code should be in a GC language. Going in with that mentality, even if I wrote that code in Rust, I'm exporting a C API, which means I may as well have written the code in C++ and spend some more time in code review.

EDIT: an unwrap that crashes in a panic is a DoS condition. In severity this might be worse or better depending on where it happens.

Both are programmer error, both should be caught in review, both aren't checked by the compiler.


> "Rust isn't designed for efficiency"

Citation needed, because Graydon Hoare the original Rust creator (who has not been involved with Rust development for quite a long time) wrote about how the Rust that exists is not like the original one he was designing:

- "Tail calls [..] I got argued into not having them because the project in general got argued into the position of "compete to win with C++ on performance" and so I wound up writing a sad post rejecting them which is one of the saddest things ever written on the subject. It remains true with Rust's priorities today"

- "Performance: A lot of people in the Rust community think "zero cost abstraction" is a core promise of the language. I would never have pitched this and still, personally, don't think it's good. It's a C++ idea and one that I think unnecessarily constrains the design space. I think most abstractions come with costs and tradeoffs, and I would have traded lots and lots of small constant performance costs for simpler or more robust versions of many abstractions. The resulting language would have been slower. It would have stayed in the "compiled PLs with decent memory access patterns" niche of the PL shootout, but probably be at best somewhere in the band of the results holding Ada and Pascal."

https://graydon2.dreamwidth.org/307291.html


The fact that by default array access is bounds checked in Rust and by default it isn't in C++ disproves that.

I think you would have a hard time convincing the C++ standards committee to put a checked container in the standard; maybe now, with the negative publicity, but definitely not before.

I'm guessing it would be impossible to get an unchecked container into the rust stdlib.



My point isn't that people aren't going to write bugs in Rust; my point is that people _will_ write bugs in literally any language, and bugs that cause panics in Rust are not going to expose the same level of vulnerability as bugs in C++ that cause UB.

There clearly are people who think that UB isn't as dangerous as I do, so if that's where you stand, I guess it is just a "fundamental difference of opinion". If you actually believe that you (or anyone else) is capable of being careful enough that you aren't going to accidentally write code that causes undefined behavior, then I don't think you're wrong as much as delusional.


It's okay to admit you were wrong.


I think if you had something real to say here you would have done it already.


I just want to pick out one thing:

> It would (or should) crash your program in any other language anyway.

To me there is a massive difference between "there is a way a malicious user can trigger an assert, which will crash your server / webbrowser", and "there is a way a malicious user can use memory corruption to take over your server / webbrowser".

And bounds-checking penalties are small. I can tell they are small, because Rust doesn't have any kind of clever 'global way' of getting rid of them, and I've only noticed the cost showing up in profiles twice. In those cases I did very carefully verify my code, then turn off the bounds checking -- but that's fine, I'm not saying we should never bounds check, just we should do it by default. Do you have any evidence the costs are expensive?


If you profile looping through an array of pixels in linear memory order vs bounds-checking every access, the latter should be more expensive due to the branching.

As someone else mentioned, you can bounds-check accesses in C++ anyway if you want to, and it's built into vector.

My point here is that it isn't really a big problem with C++, you can get what you want.


Iterating over an array in Rust does not involve bounds checking every access since Rust guarantees immutability to the container over the course of the iteration. Hence all that's needed is to access the size of the array at the start of the iteration and then it is safe to iterate over the entire array.

Note that this is not something C++ is able to do, and C++ has notoriously subtle iterator invalidation rules as a consequence.


They asked for evidence of bounds checks being expensive, please try to keep the context in mind.

Note that this is not something C++ is able to do

Or you could just not mutate the container.


> "They asked for evidence of bounds checks being expensive"

They said "bounds-checking penalties are small" not that it was free. You've described a situation where bounds checking happens and declared that it "should be" expensive because it happens. You haven't given evidence that it's expensive.


I didn't give direct evidence, that's true.

You can profile it yourself, but it is the difference between direct access to cached memory (from prefetching) and two comparisons plus two branches then the memory access.

To someone who doesn't care about performance having something trivial become multiple times the cost might be acceptable (people use scripting languages for a reason), but if your time is spent looping through large amounts of data, that extra cost is a problem.


It's the optimization that C++ is unable to perform, not the immutability.


Are you talking about optimizing out bounds checking that isn't happening in the first place?


If you declare using an invalidated iterator as UB, the compiler can optimize as if the container was effectively immutable during the loop.


I've been wondering how much you'd pay if you just required that out-of-bounds accesses be a no-op, with an option of calling an error handler. Dereference a null or out-of-bounds pointer and you get zero. Write out of bounds, and nothing happens.

  int a = v[-100]; // yolo a is set to 0
I really suspect that, unlike in the RISC mental model compiler writers think in terms of, a superscalar processor would barely be slowed down. That is, if the compiler doesn't delete the checks because it can prove they aren't needed.


My understanding is that the cost that is mentioned comes from the branch introduced by checking if the dereference is in bounds (and the optimizations these checks allegedly prohibit), not from what happens if an OOB access is detected.


Where I feel that breaks down: while pure leetcode-style code would probably be faster, high-quality code tends to have checks to make sure operations are safe anyway. So you've got branches in real code regardless.


You have to do a very similar operation to has_value()/.value() with Rust's Option, though... How is opt.ok_or("Value not found.")?; so different from the C++ type?


It seems like people think dereferencing an empty optional results in a crash or exception, but ironically you are actually less likely to get a crash from an optional than you would from dereferencing an invalid pointer. This snippet of code, for example, is not going to cause a crash in most cases; it will simply read a garbage, unpredictable value:

    auto v = std::optional<int>();
    std::cout << *v << std::endl;
While both are undefined behavior, you are actually more likely to get a predictable crash from the below code than the above:

    int* v = nullptr;
    std::cout << *v << std::endl;
I leave it to the reader to reflect on the absurdity of this.


What is the desired behavior? I see at least 3 options: panic (abort at runtime, predictably), compiler error (force handling both the Some and None cases; needs language support, otherwise annoying), or exceptions (annoying to handle properly). There is too much undefined behavior already.

Perhaps, 3 types could exist for 3 options.


I understand what undefined behavior is, I just don't dereference pointers or optionals without first checking them against nullptr or nullopt (respectively). In fact, I generally use the .has_value() and .value() interface on the optional which, to my point in the above comment, is a very similar workflow to using an optional in Rust.

I think if you adopted a more defensive programming style where you check your values before dereferencing them, handle all your error cases, you might find C++ is not so scary. I would also recommend not using auto as it makes the types less clear.

    std::optional<int> v = std::nullopt;
    if (v == std::nullopt) {
        return std::unexpected("Optional is empty.");
    }

    std::println("{}", *v);
If you are dereferencing things without checking that they can be dereferenced I don't know what to tell you.


One is undefined behavior which may manifest as entirely unpredictable side-effects. The other has semantics that are well specified and predictable. Also while what you wrote is technically valid, it's not really idiomatic to write it the way you did in Rust, you usually write it as:

    if let Some(value) = some_optional {
    }
At which point value is the "dereferenced" value. Or you can take it by reference if you don't want to consume the value.


I prefer ok_or for optionals and map_err for results in Rust. I believe the way you proposed ends up with very deeply nested code, which I try to avoid.


`std::optional<T>`'s style is more akin to using

  if x.is_some() {
    let x_value = unsafe { x.unwrap_unchecked() };
  }
everywhere.


You can pay the penalty and get safe container access, too. AFAIK, C++ standard containers provide both bounds checked (.at()) and unchecked ([]) element retrieval.


Now we are getting down to a philosophical issue (but I think an important one).

In Rust, they made the safe way of writing things easy (just write v[x]) and the unsafe one hard (wrap your code in 'unsafe'). C++ is the opposite: bounds-checked access is always more code, and less standard (I don't think I've seen a single C++ tutorial or book use .at() as the default rather than []).

So, you can write safe code in both, and unsafe code in both, but they clearly have a default they push you towards. While I think C++'s was fine 20 years ago, I feel nowadays languages should push developers to write safer code wherever possible, and while sometimes you need an escape hatch (I've used unsafe in a very hot inner loop in Rust a couple of times), safer defaults are better.


I really can't resonate with this. The type system is obnoxious (and you don't even get proper memory safety out of it), the "most vexing parse" truly is vexatious, there's all the weirdness inherited from C like pointer decay (and then building new solutions like std::array on top while being unable to get rid of old ways of doing things), and of course the continued reliance on a preprocessor to simulate an actual module import system (and the corresponding implications for how code for a class is organized, having to hack around with the pimpl idiom etc....)

Essentially, the warts are too large and numerous for me to find any inner beauty in it any more.


> and of course the continued reliance on a preprocessor to simulate an actual module import system

C++ modules are supposedly meant to save the day. Although only recently have the big three compilers (GCC, Clang, MSVC) reached some form of parity when compiling modules.


I mean, this is all anecdotal, but I've only run into the most vexing parse a few times, I rarely had to use the pimpl idiom, and the header file stuff... okay, that isn't great. I've been a C++ dev for 10 years; I actually worked on an extremely old codebase and was involved in modernizing it to C++11. Maybe I'm too C++-brained, but all those things just aren't that bad? There is no wartless language; you just deal with the warts of the language you're in, and it's your responsibility to learn the details of the language.


The header stuff is pretty bad. The rest are strange choices of stuff to complain about. The vexing parse was never a big deal, and these days with brace initialization (which you can only rarely not use) even less so. The pimpl idiom: not sure why it's really a problem. Plus you have many choices for type erasure in C++.


>The vexing parse was never a big deal

I definitely got bit by it multiple times.

>The pimpl idiom: not sure why it's really a problem.

Because it's even more boilerplate and adds indirection in a place where you might have been painstakingly trying to avoid it (where the first indirection costs you all your cache locality). C++ lets you go out of your way to avoid objects having to carry any overhead for virtual dispatch if you aren't going to use it; but then you might have to choose between rerouting everything through a smart pointer anyway or suffering inordinately long compile times. Nothing to do with type erasure (if I'm thinking clearly, anyway).


I can deal with all the warts but I'm not really having a blast.


True - but it’s much easier to make an incomprehensible mess in more complex languages. Whereas blub language projects are pretty easy to decipher regardless of the state of the codebase.


I refer you to the (admittedly dated, but also quite funny) C++ FQA https://yosefk.com/c++fqa/


This 1000%, stop blaming the tools!


How can we make better tools if we don't blame tools?


Eh, I'm reminded of the old "programming languages as weapons" comic; the one I still remember is JavaScript as a sword without a hilt, with the blade being the "good part" and, in place of the hilt, another blade labelled the "bad part".

You can blame devs instead of tools all day long, but you can't deny that there are things about the tools that hold the developers back.


I am good at passing these interviews and am at a FAANG (will be moving to another one this month). These interviews are useless and provide a false signal on problem-solving skills and people's ability to learn things. The interviews specifically don't test whether somebody can roll up their sleeves and jump into code, because if they did, why do I as a new hire have to explain so much about software engineering and debugging practices to people who have been here so long?

If you've actually worked at a large company, you'd know that 90% of the real work is done by like 5% of people (maybe even less). If the interviews worked this ratio would be so much better.


what do you think would be a better test?


For new grads, just let them in. Just make sure they know something so a simple coding exercise is maybe good there. For more senior people, actually talk about their resumes and try to understand what they did, why they did it, and importantly, if they actually did it. I never got asked anything about my previous experience when I got into my first faang beyond some superficial things like "what tools did you use".

I've seen interviews where you actually have to present a thing you've done and then explain the decision-making process behind it, tradeoffs, outcomes, etc. Those are more common outside of CS, and I'd like to see more of that in CS. Or hell, have them write an essay describing the tradeoffs of some theoretical engineering decision. That'll give me actual info on how a person thinks.

For context, I used to work in self-driving cars and had to interview many people who claimed to work in that field and to have worked on huge projects themselves. Then you dig deeper and it turns out they were part of a huge group, they never heard of this problem, that problem, never heard of this industry standard, etc. It's like, forget coding: this person doesn't even know the domain as well as they claim, and not being upfront about that disqualifies you immediately in my book.


I have mentioned this before. The best interview I had as a candidate was this: they gave me access to their codebase, explained an actual, relatively small bug, asked me to fix it, and left me alone (I was seated among their devs; they were minding their business, I was minding my own). I think it took me half an hour or so (can't remember the exact time, it was a few years ago). I fixed it, they asked me to explain how I found and fixed the bug, and they made an offer.

No talking (other than me showing them how I fixed it). No bullshit questions like "where do you see yourself in 5 years" or "talk to us about your strengths" etc.

Second best - they asked me to design and write pseudo code for a simple system. Don't worry about syntax, but make sure to follow good design practices, within reason. Gave me a pen and notepad and left me alone for an hour. I wrote it, explained my thought process, they made an offer.

Then I had shitty interviews. One very large, very famous insurance company had 5 rounds of interviews back to back, for a normal developer role, lol. They asked me about some obscure options for grep, etc. It was more an exercise in them showing off their skills (more like their memory of Linux commands) than in learning about my skill set. I couldn't wait to get the hell out of that building.

Of course interviews can't be light when you are hiring for highly technical, highly critical positions (like security, for example). But for most software dev positions, the formats above are very efficient. Most software devs are writing "glue" code, not rewriting some mission-critical real-time OS.


I sometimes ask obscure questions but I don't hold it against people if they can't get it. It just tells me a bit about how the person works, who they are, where they might fit ... I've certainly recommended people for hire that score an actual 0% on those questions.


Vim is great, but I think VSCode really is the one tool to rule them all going forward. It's extremely well designed, snappy, and I think does the right thing of first treating everything as a text file and allowing plugins to provide semantic meaning. I never liked Visual Studio because it was too specific to writing software and using the GUI to do what I wanted. Editing MSBuild files directly was a pain, for example. If I wanted to do some shell scripting or system debugging, I had to leave that environment.

Whereas with VSCode, I really never have to leave the VSCode environment to do what I want. I can pop open a shell within VSCode and don't have to switch windows. I can easily open random files not associated with my project, and VSCode does the right thing (usually). It opens images easily, renders markdown well, etc. My favorite feature is that you can pipe CLI output directly to VSCode in the shell, and it then opens a tab displaying that output. You'd be surprised how often that feature comes in handy.


> Vim is great but I think VSCode really is the one tool to rule them all going forward.

I really hope you’re wrong about that. I don’t want to be ruled by another Microsoft product


I mean, nothing stops you from not using it. It's just a really well designed piece of software, and it does a good job of getting out of the way, something a lot of IDEs don't do well (Visual Studio, for example).


>pipe cli output directly to VSCode in the shell and then it opens a tab displaying that output

Example from VSCode Terminal: $ echo hello | code -


You can do all the things you mentioned in Vim. I also can't imagine that you would be comparing the performance of VSCode and Vim; my Vim with all of its plugins and Vim script starts up in 95ms. That's faster than the threshold of human perception.

I think you missed the point of the comment you are replying to. You can get things done in any editor. The point is that Vim/Emacs have been around for decades and last your whole career; VSCode has been around for a fraction of that time and killed off the previous editor, Atom, that everyone thought was "the editor".


Nice try, microsoft. There’s way too many floss options to bother with surveillance capitalism tools and lock-in/e³.


