zippy5's comments

I feel the opposite. My understanding is that BE-4 is a hydrogen-based rocket engine, which really has never been executed successfully before, whereas Raptors are methane. Blue Origin is going from 0 to 1 and SpaceX is going from n to n+1. Obviously the short-term results are going to be worse than SpaceX's, but I'm not convinced their engines will be worse.

That being said, obviously ULA made the wrong deal, since it's not clear they will be alive long enough to benefit from interplanetary refueling.


BE-4 is methane too, and hydrogen has been done zillions of times before, including by Blue themselves (BE-3).


> BE-4 is a hydrogen-based rocket engine

BE-4 and Raptor are both methalox:

A type of bipropellant rocket fuel composed of a liquid oxygen (LOX) oxidizer and liquid methane fuel.

> which really has never been executed successfully before

The staged combustion cycle used by BE-4 has seen flight with kerolox (RP-1 and liquid oxygen) engines, mostly Russian ones such as the historic NK-33 and current designs like the RD-180. In this oxygen-rich version, the pre-burner that drives the turbine burns very little fuel with a lot of oxygen, and in the combustion chamber you mix gaseous oxygen with liquid fuel.

A staged combustion cycle has also been achieved in the US with the SSME (RS-25), the engine on the Space Shuttle, although it was staged in a very different way. It's important to know that this is a fuel-rich engine, meaning that to drive the turbine (or rather turbines, in this case) it burns a little bit of oxygen with a lot of fuel, leading to gaseous fuel and liquid oxygen being mixed in the main combustion chamber.

Neither of these engine cycles has ever flown using methalox.

SpaceX's Raptor is yet another step beyond either of these, referred to as 'full-flow staged combustion'. In this version you have one fuel-rich and one oxygen-rich pre-burner, and you end up mixing gaseous fuel with gaseous oxygen. This allows for much better mixing of the propellants, and thus more complete combustion and higher efficiency.

> Blue Origin is going from 0 to 1 and SpaceX is going from n to n+1. Obviously the short-term results are going to be worse than SpaceX's, but I'm not convinced their engines will be worse.

I think I explained above why this is an incorrect understanding.

The BE-4 is a considerably worse engine than the Raptor. The Raptor blows it away in all key comparative metrics by a huge margin: it is far more reusable, has a wider throttling range, and is far cheaper to build.

The Raptor is also evolving considerably faster than the BE-4; Raptor 2.0 is already as far along in development as the initial version of the BE-4 was. Raptor 2.0, as a sea-level engine, will almost match the thrust of the BE-4 while being a far smaller engine.

It has to be understood that the BE-4 delivered to ULA will be considerably less capable than what was originally promised for the BE-4. They removed a lot of features that ULA doesn't need in order to get it done. In order to fly their own New Glenn rocket, they will need to evolve the engine considerably.

The Raptor also has a vacuum version, referred to as RVac. In the initial presentation about New Glenn, they presented a vacuum version of the BE-4, but that was scrapped very early.

In summary, the BE-4 is a really nice engine, but it's at a technology level that is still below some of the Russian engines like the RD-180. The Raptor is a true next-generation engine.

> That being said, obviously ULA made the wrong deal, since it's not clear they will be alive long enough to benefit from interplanetary refueling.

I am not sure what you are talking about. ULA has no interest in interplanetary travel and ULA will be around for quite some time.

Also, the BE-4 is strictly an engine used to launch from Earth, and only on the first stage, so your comment about interplanetary refueling makes no sense.

Maybe you are mixing up the BE-4 with the BE-3U, an upper stage engine they are also developing.


This was wonderfully written, and if you're going to start a data team, this is how you do it. But I can see that I'm the only one who thought it was crazy to start a data team in the first place.

This company makes $10M and spends $3M on the team and infrastructure to make data a core competency?

The vast majority of the wins discussed were lowly differentiated web / mobile / supply chain analytics, which they could have gotten and set up with 3rd-party software for an order of magnitude less.

I can only imagine what this hypothetical startup could have learned if they had spent that money actually talking to customers and running more experiments.

I've heard people talk about data as the new oil, but for most companies it's a lot closer to uranium: hard to find people who can handle / process it correctly, nontrivial security liabilities if PII is involved, expensive to store, and a generally underwhelming return on effort relative to the anticipated utility.

My takeaway was that startups benefit tremendously from a data advisor role to get the data competency, as well as the educational and cultural benefits, but realistically the data infrastructure and analytics at that scale should have been bought, not built. Obviously there are a couple of exceptions, such as regulatory reasons like HIPAA compliance, for which building in-house can be the right choice if no vendor fits your use case.


As someone who reaches for code if they need to blow their nose, what is a 3rd-party vendor going to supply that an "English-to-SQL translator" won't?

(I have not finished the article, but the idea that devs / data scientists can be replaced by some vendors makes me wonder what I have missed)

Edit: Also love the Uranium quote :-)


So my assumption is that for a given business model, like an e-commerce or SaaS business, much of the highest-value analysis is fairly standardized and can be templated. For example, breaking down conversion rate by weekly cohort is something that can pretty easily be done in Google Analytics, as in the sketch below.
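
To illustrate how templatable this is, here's a minimal SQL sketch of a weekly-cohort conversion query; the signups and purchases tables and their columns are hypothetical:

    -- Hypothetical schema: signups(user_id, signup_at), purchases(user_id, purchased_at).
    -- Ignores conversion windows for brevity.
    SELECT
      date_trunc('week', s.signup_at) AS cohort_week,
      COUNT(DISTINCT s.user_id) AS signups,
      COUNT(DISTINCT p.user_id) AS converted,  -- NULLs from non-converters are not counted
      COUNT(DISTINCT p.user_id) * 1.0 / COUNT(DISTINCT s.user_id) AS conversion_rate
    FROM signups s
    LEFT JOIN purchases p ON p.user_id = s.user_id
    GROUP BY 1
    ORDER BY 1;

An off-the-shelf analytics tool is essentially selling a vetted, point-and-click version of queries like this.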

The problem with English-to-SQL translators, or most coders in general, is the assumptions we make, in particular about the underlying data. For example, say we want to join two tables, so we write a query to join on two columns, and we often call it correct, which it is from a logical or schema perspective. However, null values, defaults like 0, many-to-one vs. one-to-one relationships, issues with instrumentation such as network timeouts or bot detection, etc., can all impact the downstream metrics. My point is that when there are 500 lines of SQL in a query, such as those mentioned in the article, there are a lot of ways to be mostly correct but cumulatively wrong.
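
To make one of those pitfalls concrete, here's a sketch against the same hypothetical tables as above. If a user can have many purchases, the join fans out and silently inflates row-level counts:

    -- Logically/schema-wise "correct", but a user with 3 purchases
    -- contributes 3 join rows, so this overcounts converted users:
    SELECT COUNT(*) AS converted
    FROM signups s
    JOIN purchases p ON p.user_id = s.user_id;

    -- Deduplicating the many side gives the number actually intended:
    SELECT COUNT(DISTINCT s.user_id) AS converted
    FROM signups s
    JOIN purchases p ON p.user_id = s.user_id;

Both queries run fine and return plausible numbers, which is exactly why this class of bug survives inside 500-line queries.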

Like many sufficiently popular open-source tools, 3rd-party vendors get battle-tested: issues get found before they reach you, and vendors can justify devoting more resources to rigorously ensuring correctness than the average analyst has the time or energy to, because their business depends on you trusting the outputs.

I’m not saying you couldn’t do all this yourself. But given the sheer number of analytics tools that are reasonably priced, you might have chosen to spend your time on something more specialized like a recommendation system.


Can you point me at some of the vendors? I am missing a chunk of knowledge, I suspect.

Or is this, for example, people taking Google Analytics and producing analysis on top of that?


Highly recommend Heap [1] - they have a neat approach that doesn’t require you to ‘decide’ which analytics you want to track ahead of time.

Disclaimer: I was an early engineer at Heap.

[1] https://heap.io/


Heap might be good but they are crazy expensive. We were quoted something like a quarter million dollars. Good luck getting that signed off, plus you still need quite technical analysts to run the thing.

I've found https://contentsquare.com/ to be much better received by juniors and seniors alike, and it's a fraction of the cost of heap.


I don’t know the specifics of what you were quoted, but a quarter million dollars (guessing per year?) does strike me as high.

Were you a later-stage startup by chance? The price point for pre-Series-C startups should be much, much lower.


That's odd. Why would you charge more for a post-series C startup or enterprise versus a pre-series C?


That's generally how pricing works for SaaS products - most later-stage customers have stricter or more customized needs. Think support SLAs, SSO, ACLs for their employees, etc.


Ah, so these do do web analytics on users - ok. That makes much more sense.


Very happy Heap customer here. Been using it since 2016 or so and brought it from my last company to my current startup. Autocapture is magic.


+2 on that! Would love to know what you think is worth investigating, @zippy5.


+1. @Zippy - May I ask for some of the vendors you refer to, please?

Also love the Uranium analogy.


So, for example, the author saw that the supply chain team had difficulty managing the complexity and scale of their analysis, in large part due to the scalability of their spreadsheet solution. I would have pushed them to use Airtable, which is basically a more scalable spreadsheet. By choosing the data-pipeline route, the people who understand how to improve the supply chain model and the history of decisions that went into it, as well as previous missteps, now have limited ability to experiment with improving it. In my experience, every rewrite of a system loses something in translation, which makes me think that in the author's example the lives of the analysts got better but the quality of the supply chain model may have gotten worse.

In the long run, there is plenty of useful logistics software that should do everything they want, but the most important thing is to keep the people with domain expertise in the data as close to the solution as possible. Better decisions are more often a result of better information and experience than of better analysis. Unfortunately, I haven't studied these vendors well enough to make any suggestions, though I believe the solutions are well enough defined to write textbooks on them, which suggests to me that existing software and I would mostly implement similar methodologies.

On the marketing and product analytics tools, I think 80% of the problems boil down to measuring conversion rates and then comparing those rates across different contexts to select for the contexts that improve those rates.

Another user mentioned Heap, which is a great product if you know that you don't know what contextual data is meaningful, but suspect it lies partially in how users interact with other parts of your website. Personally, I'd use Heap judiciously, since I suspect there will be limits to how useful the historical data is in the future, and collecting everything is expensive. One limitation is that site interactions are only part of the potentially important context. Another is that startups change rapidly, so their historical data often depreciates in terms of providing insight into their current problems. For an extreme example, I'm sure Zoom's conversion data before and during the pandemic look completely different. But even a small tweak to Google's search algorithm could totally change what type of customer finds your site.

Personally, I'd advocate talking to customers, potential customers, and other stakeholders to understand what is important, and then measuring that. Most companies currently do the opposite: they take a lot of measurements and then try to figure out what's important. The first approach can probably be done in Google Analytics. For the second I might try Amplitude, which is what I imagine a tool like Heap will eventually try to evolve into.

The hardest person in the organization to help with data is the CEO, because they really use data as a form of sales tool and for reporting. The closest I have seen a tool come to doing this in a way the CEO could mostly self-serve is Sisu Data. Though it's the CEO, so it's probably reasonable to hire some help anyway.

Lastly, data warehouses were the gold standard in the early 2010s, but Presto is a better fit these days for companies whose data is distributed across many different places.
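
For a sense of what that buys you, here's a minimal sketch of a federated Presto query; the catalog, schema, and table names are all hypothetical:

    -- One query joining data that lives in two different systems:
    -- orders in an operational Postgres DB, click events in a Hive/S3 data lake.
    SELECT o.order_id, o.total, e.utm_source
    FROM postgres.public.orders AS o
    JOIN hive.web.events AS e ON e.user_id = o.user_id
    WHERE e.event_type = 'checkout';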


> spends $3M on the team and infrastructure

You're making a pretty big assumption about the cost of the team & infrastructure there. This company could have 100+ people with that kind of revenue (I've worked at a company this size before). The data team is only about 6 people. The cost of the data team & infrastructure is likely less than $1M.


Having unique data is quite valuable. If your organisation can make decisions based on signals that other people can't detect then it can gain a decisive edge.

I do wonder at the anecdotes in this article though. In businesses that I've seen, the data team is usually the biggest impediment to a data-driven culture because they have databases full of numbers and no real grasp of how that links to the decision making process that makes the business money.

Beefing up the team doesn't help. In data, as in business more generally, the important thing is not to guess at what job you're doing but to spend a lot of time talking to customers about what job they need done. If the data team is where that work happens in a business, then that can be helpful - but the grunt work of SQL/reporting/basic analysis is almost never where the value comes from.


> My takeaway was that startups benefit tremendously from a data advisor role to get the data competency, as well as the educational and cultural benefits, but realistically the data infrastructure and analytics at that scale should have been bought, not built.

I really like your takeaway about data teams at tech companies. They try to make "data" a core competency of their business, at huge cost for fixed value.

I also appreciated the very subtle implication that the OP is shrouding empire building under an otherwise informative growth story.


> it's a lot closer to uranium

Love this analogy!


I somewhat agree that a gritty person shouldn't keep a fast-food job, but some of the Uber drivers I meet are incredibly gritty. They work incredibly long hours and grind on multiple apps. They have well-thought-out strategies for how to make as much as they can per hour, and they have plans for how they will invest their money.

I admire the heck out of 'em, but I can see most people don't want that life. I have no doubt that kind of work ethic could be wasted in the wrong environment, but a successful person should have both grit and the ability to find an environment where they can use it to get ahead.

I'd even argue that corporate America as a whole rarely rewards grit, with a couple of exceptions.


"some of the Uber drivers I meet are incredibly gritty."

That's exactly the point. There's a huge number of people bringing a hell of a lot of hustle to just keeping their head above water.

There was a brilliant blog post a couple years ago pointing out that perfect competition - from the perspective of the gig worker on the ground - is fundamentally dystopian. When you're providing a commoditized service (like driving a car) in perfect competition, you've got nowhere to go but working harder or smarter. Which quickly means that everyone in the market is working just as hard and smart, bringing maximum hustle (or, if you prefer, grit) just to hang on.


> and the ability to find an environment where they can use it to get ahead

Reminds me of this quote:

"Free enterprise needs elbow room"

-Poul Anderson


SSNs were never really secret. For the most part, the first 5 digits are a derivative of when and where you were born (public record), and you've given out the last 4 to every financial institution and employer.

Your last part is spot on. Basically, people should set up a password at the DMV or something.


The United States Postal Service would be a great "trust provider" (managed PKI, signing personal certificates for individuals and businesses, etc.). They already do it inasmuch as many government agencies (the BMV in my state, for example) accept addressed official correspondence as proof of residency.


> the first 5 digits are a derivative

SSNs assigned after 2011 are randomized. The first digits no longer have any special meaning.


If the people looking to learn about this subject have the "dark ages" misconception, is it so bad to make it easy to find via search and then explain the nuances of the misnomer?


At quantum scales or near the speed of light, Newtonian mechanics breaks down, so we know it's a local approximation. However, within the right circumstances, it approximates reality well. For many decades, physicists have searched for a universally true theory to no avail.
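
(As a back-of-the-envelope illustration of "approximates well", with numbers of my own: the relativistic correction to Newtonian mechanics is the Lorentz factor

    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \approx 1 + \frac{v^2}{2c^2} \quad \text{for } v \ll c

and at airliner speeds, v ≈ 250 m/s, the correction term v^2/2c^2 is about 3×10^-13, i.e. Newton is off by less than a part in a trillion.)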

My point is this: IQ seems to have signal. It probably doesn't hold up in extreme circumstances, and psychology in particular seems to have a difficult time running large enough studies to find universal constants, so we should accept that IQ might be the best local approximation for now.


That doesn't support your original point. Color blindness is very persistent over time; IQ isn't. Sure, +/- 7 IQ points sounds very accurate, but that can be more than a 30-percentile change between two different tests 1 month apart, meaning the actual signal is very weak.
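
To check that percentile arithmetic with my own back-of-the-envelope numbers (IQ is conventionally scaled to mean 100, SD 15):

    z = (107 - 100)/15 \approx +0.47, \qquad \Phi(+0.47) \approx 0.68
    z = (93 - 100)/15 \approx -0.47, \qquad \Phi(-0.47) \approx 0.32

So a +/- 7 point swing around 100 moves a test-taker from roughly the 32nd to the 68th percentile - a change of about 36 percentile points from a 14-point score swing.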

You can read papers saying IQ tests on 2-year-olds are strongly correlated with adult IQ, but read the details and individual people's scores vary wildly.

So sure, in aggregate it seems consistent, but that's largely an artifact of test design. Someone with mental handicaps so profound as to be non-communicative will consistently score very low; it doesn't mean much for the general population.


You're completely missing the point.

Honda sells commodities. Maserati sells status.

Motorola sells commodities. Apple sells status.

Computers are commodities, smartphones are commodities, and at the end of the day none of that stops people from buying from Apple.

Cars make a ton of sense for Apple because they're a product for which a decent percentage of people will pay a premium to get the cool version.


Apple’s status is based on the capabilities of their products. They try to only make stuff that works really well; as a result they don’t make inexpensive stuff.

But they don’t make very expensive stuff either, the type of stuff whose status is independent of its function, like the equivalent of $800 jeans. They’ve tried a couple times (gold Apple Watch for example) and it did not catch on.

So, I think it’s fair to ask what unique functionality Apple will bring to a car to differentiate it from all the other cars.

And from an internal perspective, Apple leaders have repeatedly said that they are only interested in new markets when they think they can have a transformative impact. So again, I think it’s fair to wonder what that impact might be. There’s undoubtedly more to their thinking than just slapping an Apple logo on a Hyundai.


Who knows what they could bring? When the iPhone was rumored, the speculation was quite far off from what Apple actually put out.

I think the only real difference between entering the car business and the other categories Apple has transformed is regulation: cars need to look and behave a certain way because of laws, in many cases.


It’s one thing to pay a highly competent person 6 figures. It’s an entirely different thing if you can’t fire them. I’m not saying that the government doesn’t need more competent people but it’s not going to work if there’s a bigger carrot and no stick.


Part of a solution for that is term-limited civil service jobs - that keeps talent rotating, and low performers can't stay forever.

That said, you raise an important problem - how do you manage poor performers without allowing a bad-faith political official to fire people unfairly?


Provide clear mandates, and then structurally separate organizations that should exist indefinitely from most interference by elected officials. Some of the most effective institutions of the US government are the ones that were deliberately structured to be self-organizing: the military, the Federal Reserve System, and (until recently) the Postal Service.


The other side is that the peak of American prosperity occurred post-WWII, when a majority of the developed world (any country that had experienced an industrial revolution) was blown to shreds. This gave the US a rough, temporary monopoly on manufacturing and exports. It's hard to see a future where that dominance of American prosperity happens again unless the scenario repeats itself - i.e., WWIII with the US coming out unscathed and the rest of the world destroyed.

No doubt your point about mismanagement and bad actors is valid, but it's important to recognize that much of the glory days of the American middle class corresponded to the height of many others' misfortune.


Note that this effect hasn't just occurred in America but in other developed countries (e.g. London, Canada, Australia), all of which had more secure work, local manufacturing, local union power/membership, etc., and have experienced similar trends to the US.

In our local newspaper there have been many articles about wage stagnation and the decline of our manufacturing base as well - albeit we may be a few years behind America, which isn't much in the grand scheme of things.


It is possible. Look at Asia: they don't have landlines in many countries; they started around 3G and are now pushing that further with 5G. The last economy to update will be the most advanced. So if the US can learn from China and Europe, it is possible that it could become a greater manufacturing power.


I think it’s worth noting that popularity isn’t always deterministic or meritocratic.

I once read about an experiment where users would listen to music, and the platform counted the number of listens and showed the counts to users. When the experiment was rerun, a different set of songs from the first experiment became popular. I believe I read this in Invisible Influence by Jonah Berger.

The point is that most content platforms select for content that is sufficiently good (or clickable) but not necessarily the best.


>most content platforms select for content that is sufficiently good (or clickable) but not necessarily the best

Best is in the eye of the beholder anyway. But, truth be told, as long as an algorithm is reasonably dialed into my genre preferences... (There are some genres I just don't like even if you show me the very cream of the crop, and there are less popular ones where I'll probably enjoy even Tier 2 performers well enough.) ...I'll be pretty well satisfied with the overall most popular songs in those genres. Will it hit all my favorites? No. But it wouldn't be a bad cut.



