
And hydraulic power back seats, a hydraulic sunroof (!!), a hydraulic power aerial, and hydraulic controls for the air suspension. Some models even had a hydraulic glass partition between driver and passengers.

Absolutely nuts. But the high point of dead-silent, powerful, smooth conveniences, no matter the maintenance cost.


I used to have a Ferrari 355 with a hydraulic soft top… sort of. The hydraulics (when they worked) only got the top part of the way up, and you had to finish the job manually. If the hydraulics didn’t work, or the seat and window sensors didn’t work, which was frequently the case, you were out of luck. If you disconnected the hydraulic system entirely you could do the whole job manually, and it was actually faster that way.


In addition to those reasons, the pneumatic system was smaller than an equivalently powerful electric motor at the time, or so I've been told.


> Absolutely nuts. But the high point of dead-silent, powerful, smooth conveniences, no matter the maintenance cost.

Wife's car broke down (probably the water pump or just a loose hose) on the highway while we were coming back from vacation. As it was a very busy day where we were (France), there were no regular cabs left, so the insurance company sent us a driver with... a Mercedes S-Class 550 (not a taxi but a private driver: no "taxi" sign on the roof). It's still as you wrote: dead-silent, powerful, and silky smooth.

Diesel engine, but as a passenger I honestly couldn't tell, even though I'm a petrolhead and tend to notice these things.


I've always been into vintage Mercedes-Benz cars.

A talented and wonderfully jovial mechanic ran a specialist workshop east of Johannesburg, in Brakpan (South Africa).

For a long time he had a Mercedes 600 in the shop - similar to the car in the linked video, but with the naturally aspirated M100 engine as standard.

That particular car, finished in a cream colour, belonged to Ian Smith (1919-2007), the last colonial prime minister of Rhodesia before it became Zimbabwe.

I got to spend a lot of time with the car, including several drives in it, and especially got to see the mechanical bits exposed, including the ridiculously cool (silent, powerful) hydraulics that operated every convenience of this car, from the power windows to the power back seat.

While a Mercedes 600 indeed screams "head of state" or "pope", it gets my vote for the no-holds-barred, most opulent and classy car of all time, with absolutely nothing kitsch or gangsta about it.

A true high point of what Mercedes-Benz once was.

And don't get me started on its little brother, the W109 300SEL 6.3, of which there was a lovely example in the shop for a few months as well. It drove and operated perfectly; I had the most amazing solo test drive in that car, taking it to 200 km/h.

This was in 2003, and the 300SEL 6.3 was for sale for $3,000 and Ian Smith's 600 was about $12,000.

God, if only I had the money then! I wonder where both of those rarities, sitting in a small workshop on the East Rand in South Africa, ended up. Probably exported to Europe or the USA.

Anyway, just wanted to share that small anecdote.

P.S. The chrome, leather, and craftsmanship of a Mercedes 600 are far beyond any Rolls or Bentley of that or any other period. And Mercedes vehicles of that era were made to last decades and hundreds of thousands of miles, down to every detail, including all the rubber parts. The disposable garbage churned out in the 21st century simply fuels man's insane compulsion to constantly buy, consume, discard.

There was a time when cars were made to a different philosophy.


I agree about Mercedes quality. The difference between them and other cars like Rolls or Bentley, I think, is that Mercedes were made to be owned and serviced by regular people. They were built to be exceptionally robust, and that took pride of place (for a while) over technical and luxury features.

The biggest Merc I’ve had was a W126, a 300SDL, and that was magnificent.


But surely you can see that the economy does not serve us. Who is "us", anyway?

The economy, and the worldwide technological system that it fuels, behaves like its own organism with its own ultimate goal unbeknownst to us.

Looking around you, is it not perfectly clear to you too that it has nothing to do with the well-being of people?


People work to eat. You missed the point entirely. I refer you to my first post on this.


I guess the more succinct the code, the greater the reliance on understanding what a function actually does - either through experience, or by reading the docs. The words function is simply:

  words :: String -> [String]
So that

  words "foo bar baz"
  -- Produces: ["foo","bar","baz"]
In my experience, both the blessing and the curse of Haskell's incredibly succinct expressiveness is that, like other specialised languages - think of Latin for succinctly-expressed legal terms - you need a strong understanding of the language and its standard library to participate meaningfully.
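To illustrate: once you know the standard vocabulary, a word-frequency counter composes from it in one line. A minimal sketch, assuming GHC and the base library (wordFreq is just an illustrative name):

  import Data.List (group, sort)

  -- Count how often each word occurs, e.g.
  -- wordFreq "a b a" produces [("a",2),("b",1)]
  wordFreq :: String -> [(String, Int)]
  wordFreq = map (\ws -> (head ws, length ws)) . group . sort . words

Every piece (words, sort, group, map) is standard vocabulary; the succinctness is real, but so is the reading burden if you don't know the names.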

Haskell and languages like Go (which anybody with a bit of programming experience can easily follow) have very different goals.

Like many in this discussion, I too have developed a love/hate thing with Haskell. But boy, are the good parts good ...


I recently learned that things like goroutines aren’t naturally written with buffers and channels. Granted, anyone who reads the original documentation would likely do it correctly, but apparently that’s not how they are intuitively written. So while Go may be easy to read, it might be harder to write than I was assuming.

So maybe there's a difference where Haskell has an advantage? As I mentioned in my previous comment, I don't know Haskell at all, but if this is "the way" to do splits by word, then you'll know both how to read and how to write it. That would be a strength on its own, since I imagine it would be kind of hard to do wrong, given that you need that Haskell understanding in the first place.


It all comes down to knowing the FP vocabulary. Most FP languages share the names of the most widely used functions, and if you're well versed in Haskell you'll have an 80/20 ratio of understanding them all, where the 20% part is the language-specific libraries that expand and build upon the 80% of shared FP vocabulary.
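As a small illustration - a sketch in Haskell, though these names read almost identically in OCaml, Scala, Elm, and most other FP languages:

  -- The core shared vocabulary: map, filter, fold.
  doubled = map (* 2) [1, 2, 3]        -- [2,4,6]
  evens   = filter even [1, 2, 3, 4]   -- [2,4]
  total   = foldr (+) 0 [1, 2, 3, 4]   -- 10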


I think that anybody who finds it more efficient to clumsily describe the above examples to an LLM in some text box, in English, and wait for it to spit out code which you hope is suitable for your given programming context and codebase - rather than just expressing the logic directly in your programming language in an efficient editor - probably suffers from multiple weaknesses:

- Poor editor / editing setup

- Poor programming language and knowledge thereof

- Poor APIs and/or knowledge thereof

Mankind has worked for decades to develop elegant and succinct programming languages within which to express problems and solutions, and compilers with deterministic behaviour to "do the work for us".

I am surprised that so many people in the software engineering field are prepared to just throw all of this away (never mind develop it further) in exchange for using a poor "programming language" (say, English) to express problems clumsily in a roundabout way, and to then throw away the "source code" (the LLM prompt) entirely, simply pasting the "compiler output" (code the LLM spewed out, which may or may not be suitable or correct) into some heterogeneous mess of multiple different LLM outputs stitched together in a codebase, held together by nothing more than the law of averages, and hope.

Then there's the fun fact that every single LLM prompt interaction consumes a ridiculous amount of energy - I have heard figures comparable to the total amount required to recharge a smartphone battery - in an era where mankind is racing towards an energy cliff. Vast, remote data centres filled with GPUs spew out tonnes of CO₂ and massive amounts of heat to power your "programming experience".

In my opinion, LLMs are a momentous achievement with some very interesting use-cases, but they are just about the most ass-backwards and illogical way of advancing the field of programming possible.


There's a new mode of programming (with AI) that doesn't require English and also results in massive efficiency gains. I now only need to begin a change and the AI can normally pick up on the pattern and do the rest, via subsequent "tab" key hits as I audit each change in real time. It's like expressing the change I want, via a code example, to a capable intern who quickly picks up on it and can type at 100x my speed - but not faster than I can read.

I'm using Cursor, btw. It's almost a different form factor compared to something like GH Copilot.

I think it's also worth noting that I'm using TypeScript with a functional programming style. The state of the program is immutable and encoded via strongly typed inputs and outputs. I spend (mental) effort reifying use-cases via enums or string literals, enabling a comprehensive switch over all possible branches, as opposed to something like imperative if statements. All this to say that a lot of the code I write in this style can be thought of as a kind of boilerplate. The hard part is deciding what to do; effecting the change through the codebase is more easily ascertained from a small start.
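For what it's worth, the analogous idiom in Haskell (the language discussed elsewhere in this thread) is a closed sum type with exhaustive pattern matching; Shape and area below are invented purely for illustration:

  -- A closed sum type: with -Wincomplete-patterns the compiler
  -- flags any constructor you forget to handle, unlike a chain
  -- of imperative if statements.
  data Shape = Circle Double | Square Double | Rect Double Double

  area :: Shape -> Double
  area (Circle r) = pi * r * r
  area (Square s) = s * s
  area (Rect w h) = w * h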


Provided that we ignore the ridiculous waste of energy entailed by calling an online LLM every time you type a word in your editor, I agree that LLM-assisted programming as "autocomplete on steroids" can be very useful. It's awfully close to a good editor using the type system of a good programming language to provide suggestions.

I too love functional programming, and I'm talking about Haskell-levels of programming efficiency and expressiveness here, BTW.

This is quite a different use case from those presented by the post I was replying to, though.

The Go programming language has this mantra of "a little bit of copy and paste is better than a little bit of dependency on other code". I find that LLM-derived source code takes this mantra to an absurd extreme, and furthermore that it encourages a thought pattern that never leads you to discover, specify, and use adequate abstractions in your code. All higher-level meaning and context is lost in the end product (your committed source code) unless you already think like a programmer _not_ being guided by an LLM ;-)

We do digress, though - the original topic is LLM-assisted writing, not coding. But much of the same argument probably applies.


At the time I'm writing this, there are over 260 comments on this article, and yours is still the only one that mentions the enormous energy consumption.

I wonder whether this is because people don't know about it or because they simply don't care...

But I, for one, try to use AI as sparingly as possible for this reason.


You're not alone. With the inclusion of Gemini-generated answers in Google search, it's going down the road of most capitalistic things: you see that something is wrong, but you have no option to avoid it, even if you don't want it.


I like to idealistically think that in a capitalistic (free-market) society we absolutely have the option not to use things that we think are wrong or don't like.

Change your search engine to one that doesn't include AI-generated answers. If none exist any more, all of Google's customers could write to them telling them that they don't want this feature and are switching away from them because of it, etc.

I know that internet-scale search is perhaps a bad example because it's so extremely difficult and expensive to build and run, but ultimately the choice is in the consumers' hands.

If the market makes it clear that there is a need for a search engine without LLM-generated answers at the top, somebody will provide one! It's complacency and acceptance that leads apparently-delusional companies to just push features and technologies that nobody wants.

I feel much the same way about the ridiculous things happening with cars and the automotive sector in general.


When you take energy into account, it's like anti-engineering: what if we used a mountain of effort to achieve a worse result?


Based on my experience of building several payment gateways, it is my opinion that it's pretty much always "3 lines of code" (which isn't quite true of this library - more like "3 steps") to post a payment, even to the nastiest acquiring or banking API.

The remaining 675,000 lines of code are to:

- Perform Risk / Fraud scoring to decide whether you want to, indeed, process this payment.

- Deal with the myriad of failure scenarios - including mapping them to your own system's error semantics - in a way that your customers can understand to reduce support calls.

- Refund, void or reverse previous payments.

- Create the necessary accounting entries in order to do settlements / settlement reports for your customers.

- Etcetera

Payment systems are perplexing to me: nothing is a more obvious candidate for an absolutely standard, commoditised piece of software, in the same way that the global IP network routes packets - only in payments we are routing "promises", and our routes and routing decisions are in many ways much simpler.

Yet there are very few industries where this particular wheel gets reinvented as often as it does; each organisation is convinced that it has its own unique approach to doing this absolutely standard, regulated "thing" - which, reductio ad absurdum, is just an expensive buffer in a network of pipes.

Hopefully open-source software will pave the way: TigerBeetle is an amazing start (distributed ledgers), and it's hopefully only a matter of time until the other components of a payments switch are freely available as open-source components with high-quality APIs.


So true. And next to the 675,000 lines of business logic that you describe, you'll need an additional 133,700 lines of boilerplate code that:

- ensures all operations are atomic. You can't just post to an API and then insert something in your database: that's a guarantee that you'll eventually have payments in one place and not the other.

- has some way to retry on failure: your average job queue with exponential backoff is quite certainly still too naive (see the sketch after this list).

- has full, secured, and guaranteed audit logs that contain all the data needed for an audit, but also not too much. You chose not to go for an Event Sourced Architecture because of Reasons? Good luck bolting it on now.

The hard part isn't generating some pain.001.001.03 file (yes, that really is the name of the SWIFT payment-initiation message in ISO 20022). The hard part is everything else.
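To make the retry bullet above concrete: even the "too naive" baseline takes some care. A minimal sketch in Haskell (retryWithBackoff is an invented name; a production version also needs jitter, idempotency keys, and retry state that survives restarts):

  import Control.Concurrent (threadDelay)
  import Control.Exception (SomeException, try)

  -- Run an IO action up to n times, doubling the delay
  -- (in microseconds) after each failed attempt.
  retryWithBackoff :: Int -> Int -> IO a -> IO (Either SomeException a)
  retryWithBackoff n delayUs action = do
    result <- try action
    case result of
      Right x -> pure (Right x)
      Left e
        | n <= 1    -> pure (Left e)
        | otherwise -> do
            threadDelay delayUs
            retryWithBackoff (n - 1) (delayUs * 2) action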


Add some lines for managing each bank's security requirements: signatures, encryption, and authentication.

If you work with multiple banks, like many corporates or fintechs, multiply many of these lines by the number of banks you work with.

Even before starting to code anything, a big part of the job is obtaining the documentation from each bank and specifying the integration for each one.

For instance, for the same payment scheme, different banks impose different maximums on the number of payments per file or payload, or on the file or payload size.

More things to consider in this high-level article https://www.numeral.io/blog/bank-payment-integrations-challe...


Thanks for sharing, Matthieu - you know a lot about the banking system. I previously worked at Modern Treasury, so I'm also very familiar with bank integrations.

If you have the time, I would love to talk more about this - my email is svapnil@woodside.sh, and I'd love to get in contact with you.


I completely agree. When I first saw some of the issues (in the US), my first reaction was one of disbelief. I simply could not believe this is the way it is set up.

If the US has any defense, it is that it is not alone in this craziness, as almost every bigger power center carefully manages its domain to ensure it remains a relevant player. To put it simply, there is too much money in managing the different pipes.

At this point, though, short of a complete collapse necessitating a full rewrite of the existing payment systems, standardization will not happen. The ISO standard itself is a convoluted mess with one real benefit: the use of standard XML.


In the US, the government has typically abdicated payments to various proprietary networks and big banks. Try getting your hands on the NACHA spec for our glacial "direct" deposit standard.

We also have no standard way of letting users authenticate to their bank to download transactions. Once you get logged in there is usually a way to get OFX (QFX) files, but the process is manual. I happened on the European open banking documentation the other day... jealous.


Excellent example, clearly from a fellow soldier from the trenches!

As somebody who has built several instances of both payments- and travel booking systems, I have seen things in systems that "adhere to published schemas" (often because the schemas were beastly, design-by-committee hellscapes of extensibility) that defy belief.

While there is a strong argument to be made that strict type systems in programming languages like Haskell and Rust make it very difficult to play outside of the rules, this is unfortunately not the case in practice when it comes to APIs - both at present, where you have a JSON Schema or OpenAPI spec if you are lucky, and in the past (XML Schema, SOAP).

I wish that the ability to express a tight, fit-for-purpose schema or API actually resulted in this commonly being done. But look at the average API of any number of card payment gateways, hotels, or airlines, and you enter a world where each integration with a new provider is a significant engineering undertaking: mapping their semantics - and the errors, oh, the weird and wonderful errors... - to the semantics of your system.

I am glad to work in the space-adjacent industry now, where things are ever so slightly better.

(Note the lack of sarcastic emphasis - it really is only _slightly_ better!)


Another shout-out for Jellyfin. It's somewhat unusual in that it's built with .NET technologies and runs on Linux, but it has proven to be an excellent media server for many years now.


That's a nice thought, but unfortunately "storing" all of this material doesn't come for free: the environmental cost is the billions of tonnes of plastic pollution already out there - and in here, inside you, in the form of microplastics.

It's a problem right now, and we don't seem to have the technology, or even the political will, to solve it.


That's a great use-case! But what about when GPT hallucinates and gets it wrong? What happens when you have sent a (potentially legally binding) quote to your customers and you have to back out of it?

Humans make mistakes - whether directly, or indirectly through the software that we craft. But when the mistake happens, we can reason about it, understand why it happened, and fix the problem.

Wouldn't it be embarrassing not to be in a position to reason about, or explain to your customers, a mistake caused by this "thing" we use that's been trained on all the random information available on the internet? Would that not reduce the value and standing of your business (which is ultimately about the people and expertise it contains) in the eyes of customers?

As an engineer, technologist, and CTO I absolutely love what I've been able to craft using generative AI under direct supervision.

But I cannot imagine a world where we give up our carefully thought-through, easily changeable algorithms and processes for a black box that cannot be reasoned about, no matter how good the output _sometimes_ is.

I have to think many people feel this way.


Our sales people do a quick eye test before sending it out.

Usually, the list of recommendations covers ~8 services that we can replace. Our sales people know those services and prices by heart; a quick glance is enough for them. Our quotes are not legally binding. The contract that we sign is.

We have found it to be highly accurate. GPT-4 has been especially good at stuff like this since OpenAI changed it to write Python code for this kind of calculation.


This! GPT is not an "intelligence" by itself, but more an extension of the human brain, like an "exocortex" holding information that we cannot, or don't want to, keep in our neocortex. Google search was the first of these "exocortex" extensions to the brain to become popular among many people, and the same seems to be happening now with LLMs. Maybe we will not end up with an AGI, but rather with a human/machine hybrid that has "superhuman" capabilities.

Just my 2 cents.

