As a big SpaceX fan, I appreciate the innovation and success that SpaceX has brought to space exploration. However, it's crucial that we have multiple reliable launch and crew providers to ensure the safety and sustainability of space missions. While SpaceX has been a game-changer, relying solely on one provider is risky. The ongoing issues with Boeing's Starliner highlight the importance of diversity in our space program. We need to support and develop multiple providers to maintain a robust and secure presence in space.
> However, it's crucial that we have multiple reliable launch and crew providers to ensure the safety and sustainability of space missions.
The keyword here is reliable, and I would add the word competitive. Having an expensive, late, unreliable provider may in fact be a net negative. I think Starliner in its current form isn't helping the industry. I hope they get their act together, or that we fund a reliable and competitive alternative to SpaceX.
Yes, we need multiple providers and the proper way to do that is not by bailing out poor designs by incumbents.
Instead, we should be setting lucrative incentives for new entrants.
But it's a lot simpler than NASA's previous vehicle, the Shuttle.
If you read the selection statement[1], it seems clear in retrospect that NASA put too much weight on Boeing's Shuttle experience (via Rockwell), and not enough emphasis on SpaceX's Dragon 1 experience. But I think, at the time, it was difficult to know which factor was more important.
Keep in mind that Starliner is NOT the entire launch system. It is only the crew capsule. It rides on top of an existing rocket. The same is true of SpaceX Dragon which rides on top of Falcon 9 that already existed.
To your point, Starliner could have started as cargo-only to prove out as much as possible. That's what Dragon did.
I mean, I wish that didn't need to be said, but after the whole submarine-controlled-by-an-Xbox-going-to-the-Titanic thing, who am I to say what bucket of bolts people might jump into entirely of their own accord?
A lot of people get that wrong. It’s certainly not how we treat MVPs in my particular industry.
> Yet what is often missed is that a minimum viable product isn't merely a stripped down version of a prototype. It is a method to test assumptions and that's something very different. A single product often has multiple MVPs, because any product development effort is based on multiple assumptions.
I disagree. You are building a product and you need to know if you are building the right thing, so you build it using a set of MVPs. Another name for them is objectives.
MVP focuses on developing a product with just enough features to satisfy early users and provide feedback for future iterations. This means that the full vision of the product is sacrificed in favor of speed and minimalism. Key word: sacrificed. You don't sacrifice in a system humans are reliant on to live.
You can find this described on Wikipedia in a similar manner [0].
Compare this to a Waterfall approach, an approach that has been used to build rigor into critical systems for decades. Waterfall emphasizes a complete and well-documented design upfront, ensuring that the final product aligns with the original vision and objectives. The end result is a fully-featured product, even if it takes longer to develop (and it will).
Again... Wikipedia [1].
So, no. You don't build a mission critical system by stacking MVPs like Lego blocks on top of each other and then calling them "objectives". It's clear you've never built or been involved with building systems that are classified as "Safety of Life Critical System". Go review some relevant standards (e.g., ISO 26262, IEC 61508, DO-178C) and then try to justify how MVPs could be used for space vehicles that transport humans.
> It's clear you've never built or been involved with building systems that are classified as "Safety of Life Critical System".
That couldn’t be farther from the truth. I have worked on safety critical aircraft systems for the past 10 years. We incrementally have built MVPs and have been very successful. I won’t say which but it’s one of the most successful aircraft projects in development today.
It is, and I'm curious what dang and HN's plan is wrt this issue going forward. On one hand, the "assume good faith" has been a core tenet of this community. At the same time, LLM-generated walls of text aren't good faith. And they're not going to get less common from here on out.
I'm also surprised by how many human replies these comments get from people seemingly unaware of what they're responding to. Given that it's HN, and given how long it's been since the release of GPT-3, I thought a larger percentage of readers would notice.
Very obvious ChatGPT style and structure. Here's another one of his comments copy/pasted from ChatGPT. Many others have called him out on this. He is a pathological liar.
It's truly a Rorschach test of sorts. I agree with you that there isn't enough information to say, but reading through the comment history of the commenter in question does not make it seem more likely that they are GPT. Reminds me of Fallout 4, with everyone suspicious of each other being synths.
On the contrary, the comment history makes it very clear.
Pages and pages of relatively short comments, not a single one written in a remotely LLM-reminiscent style. Then, within a very short period, multiple very long comments in exactly the default style that GPT writes in.
The chances of someone waking up some day and entirely changing their writing style might as well be zero; I've never seen it. If anything, it would be a gradual process.
I read HN every day and I think this is only the 2nd time I've come across clearly generated content. If suspicion is the issue, that should be much more frequent. On Reddit it's already more common, and I've already had multiple people admit to it when pointed out, asking "How did you know?".
It does help that I've spent the last 1.5 years building LLM-based products every day.
A few weeks ago I had a eureka moment to describe it: GPT writes just like a non-native speaker who has spent the last month at a cram school purely aimed at acing the writing part of the TOEFL/IELTS test to study abroad. There, they absolutely cram repeatable patterns, which are easy to remember, score well and can be used in a variety of situations. Those patterns are not even unnatural - at times, native speakers do indeed use them too.
The problem is dosage. GPT and cram school students use such patterns in the majority of their sentences. Fluent speakers only use them once in a while; the temperature is much higher. English is a huge language grammatically, super dynamic: there's a massive variety of sentence structures to choose from. But by default, LLMs just choose whichever one is most likely given the dataset they were trained on (plus RLHF etc.), and that's the whole idea. In real life, everyone's dataset and feedback are different. My most likely grammar pattern is not yours. Yet with LLMs, by default, it's always the same.
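To make the "temperature" point concrete, here's a minimal sketch of temperature-scaled sampling with made-up pattern scores (the logits and pattern names are purely illustrative, not real model output): lower temperature concentrates probability on the single most likely pattern, higher temperature flattens the distribution toward more variety.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize to a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three sentence-opener patterns (invented for illustration).
patterns = ["However, it's crucial that ...", "That said, ...", "Honestly, ..."]
logits = [2.0, 1.0, 0.5]

low_t = softmax_with_temperature(logits, 0.5)   # near-greedy: top pattern dominates
high_t = softmax_with_temperature(logits, 2.0)  # flatter: more variety, closer to humans
```

At low temperature the top pattern gets the lion's share of the probability mass, which is the "every sentence uses the same template" effect described above.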
It also makes perfect sense in a different way; at this point in time LLMs are still largely developed to beat very simplistic benchmarks using their "default" output. And English language exams are super similar to those benchmarks; I wouldn't be surprised if they were actually already included. So the optimal strategy to do well at those without actually understanding what's going on, but pretending to do so, ends up being the same. Just in this case it's LLM's pretending instead of students.
I should probably write a blog post about this at some point. Some might be curious: does this mean that it's not possible to make LLMs write in a natural way? No, it's already very possible, and it doesn't take too much effort to make them do so. I'm currently developing a pico-SaaS that does just that, inspired by seeing these comments on Reddit, and now HN. Don't worry, I absolutely won't be offering API access and will be limiting usage to ensure it's only usable by humans, so no contributing to robotic AI spam from me.
I'd give you concrete examples, but in the comment in question literally every single sentence is a good example. Literally after the second sentence, the deal is sealed.
There are other strong indicators besides the structure: phrasings, cadence, sentence lengths, and content in general, but you don't even really need those. If you don't see it, instead of looking at the comment as a paragraph, split it up and put each sentence on a new line. If you still don't see it, try searching for something like "English writing test essay template".
I remember that there were "leaks" out of OpenAI that they had an LLM detector which was 99.9% accurate but that they didn't want to release it. No idea about the veracity, but I very much believe it, though it's 100% going to be limited to people using very basic prompts like "write me a comment/essay/post about ___". I'm pretty sure I could build one of these myself quite easily, but it'll be pointless very soon anyway, as LLM UIs improve and LLM platforms start providing "natural" personas as the norm.
> I'd give you concrete examples, but in the comment in question literally every single sentence is a good example. Literally after the second sentence, the deal is sealed.
I dunno. I believe you see that in it, but to me it just reads like any other Internet comment. Nothing at all stands out about it, to me. Hence my surprise at the strong assertions by two separate commenters.
Genuinely fascinating! I'd show you the instances on Reddit of similar comments where people admitted it after I pointed it out, but unfortunately I don't really want to link my accounts.
You're also free to confirm in my HN history that in hundreds of comments (and tens of thousands read), this is the only time I've pointed it out. I did cross-check their profile to confirm it, just in case it was a false positive; I don't want to accuse anyone unless I'm 100% sure, because it's technically possible that someone simply has the exact same writing style as the default ChatGPT assistant.
Here's the entire comment, dissected to make the structure and patterns clearer.
> As a __, __.
> However, it's crucial that ___.
> While __, ___ is risky.
> ___ highlight the importance of __ in __.
> We need to ___ to __.
Any single one of these sentences on their own wouldn't be enough. It's the combination, the dosage that I mentioned.
If you're interested in hard data that explores this phenomenon (although outdated/an older version of GPT), here's an article [1]. In a year or so, if you do the same analysis on "However, it's crucial that", you'll discover the same trend as the article showed for "a complex and multifaceted concept". Maybe the author would be open to sharing the code, or rerunning the experiment.
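The kind of analysis described above can be sketched in a few lines: count, per year, how many comments contain a candidate tell-phrase. This is a toy illustration with an invented corpus (the comments and years are made up, not real data from the linked article):

```python
from collections import Counter

# Hypothetical (year, comment) pairs -- invented for illustration only.
comments = [
    (2021, "I fixed the bug and moved on."),
    (2022, "Nice writeup, thanks for sharing."),
    (2023, "However, it's crucial that we consider the broader implications."),
    (2023, "However, it's crucial that we maintain multiple providers."),
]

PHRASE = "however, it's crucial that"

def phrase_counts_by_year(corpus, phrase):
    """Count how many comments per year contain the phrase (case-insensitive)."""
    counts = Counter()
    for year, text in corpus:
        if phrase in text.lower():
            counts[year] += 1
    return dict(counts)
```

Run against a real comment dump, a sudden post-2022 jump in a phrase's frequency would be the trend the article describes.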
I've used ChatGPT extensively and this stuff is extremely obvious after you have read literally thousands of ChatGPT responses. Immediately recognized it and called him out. Boilerplate AI template response. ChatGPT has a very distinctive way of answering questions.
Starliner is a completely different vehicle designed for completely different requirements. The only thing that they have in common is that they can both operate in a vacuum.
That's like saying that a motorcycle could replace a semi, because both have wheels and a motor.
Yeah, Orion is a huge bottleneck for future Moon missions, because it is the only way to get off the Moon and back to Earth. Everything else has multiple solutions. The entire idea behind the Lunar Gateway was to make it possible for CLPS companies to reach NRHO with underpowered rockets, instead of relying only on a hypothetical launch vehicle such as the lunar Starship, which does not yet exist.
Lockheed Martin is building a cislunar transporter for getting fuel to NRHO. What is needed is a cislunar crew transporter in addition to the fuel transporter.
Unfortunately, the demand for space missions makes it tough to justify starting a new company with that goal in mind. It will require heavy government funding to be sustainable.
And to your comment about SpaceX, this is a Boeing problem and you’re just throwing SpaceX under the bus for the other company’s troubles. SpaceX is the alternative provider. How many more do you think is feasible?
>> this is a Boeing problem and you’re just throwing SpaceX under the bus for the other company’s troubles.
That is what happens. If a company wants to play in this sort of arena, it will not be treated "fairly" and will suffer for the mistakes of others. In a narrow two-company industry, the mistakes of either party will always impact the industry as a whole.
Think of the company that lost a submersible at the Titanic. Undersea tourism is also a very narrow industry. All companies involved are dealing with the repercussions of that accident, from diminished demand to potentially stricter regulations, not to mention increased insurance costs. That isn't fair, but that is how such industries work.
I didn't read the parent comment that way; they gave due deference to SpaceX. This is hard stuff, as I was reminded when the Dragon capsule exploded during very early testing. But SpaceX is such a beast that it overcame that ridiculously fast.
Give Dream Chaser another chance. It is already going to be necessary as an escape pod for large commercial space stations. If it does double duty as capsule backup, it will achieve greater amortization.
Was this written by an AI? Christ, this is the most empty diplomatic platitude-spewing I've seen all week.
I'll say what you hopefully mean: SpaceX needs solid competition but the old-school contractors are broken. Pending an analysis of their impotence, they may need thorough fumigation before we rely on them for anything.
Edit: Low IQ downvoters are too stupid to recognise obvious ChatGPT replies. I checked his comment history and found that he uses ChatGPT here regularly https://news.ycombinator.com/item?id=41274200