ojosilva's comments | Hacker News

Just to set the record straight on how and why these acquisitions go at IBM. This is a first-hand account from working at and with IBM and its competitors, and from being in the room as the tech-guy accessory to murder.

IBM lives off huge multi-year contract deals with their customers, each worth many millions of dollars. IBM has many of these contracts, maybe ~2000 of them around the planet, including with your own government wherever it is that you live. This is ALL that matters to IBM. ALL. That. Matters.

These huge contracts get renegotiated every X years. IBM salespeople are tough and rough, in particular the ones on the renewal teams, and they spend every minute of every hour in between renewals grooming the decision makers, sponsors, champions and stakeholders (and their families) within these big corporations. Every time you see an IBM logo at a sports event (and there are many IBM-sponsored events), that's not IBM marketing to you the ad-viewer. They are there to groom their stakeholders, who fight hard to be in the best IBM-sponsored seats at those venues, and at the glamorous pre- and after-parties, celebs included. IBM also sponsors other stuff, even special programs at universities. Who goes to these universities? Oh, you bet, the stakeholders' kids, who get the IBM treatment and the IBM scholarship at those places.

But the grooming is not enough. The renewal is not usually at risk - who has the balls to uninstall IBM from a large corp? What is at risk is IBM's growth, which is fueled by price increases at every renewal point, not by the sale of new software or new clients - there are no new clients for IBM anywhere anymore! These price increases need to happen, not just because of inflation but because of the stock price and bonuses that keep the renewal army and management going strong, since this is a who-knows-who business. To justify the price increase internally at those huge client corps (not to the stakeholder but to their bosses, boards, users, etc.) IBM needs to throw a bone into these negotiations. The bone is whatever acquisition you see them make: Red Hat, HashiCorp... Or developments like Watson. Or whatever. They are only interested in acquiring products or entering markets that can be thrown at those renewal negotiations, with very few exceptions. Why Confluent? Well, because they probably did their research and decided that existing Confluent licenses can be applied to one (yeah, one) or many renewal contracts as growth fuel for at least 1-to-N iterations of renewals.

Renewal contracts account for anywhere from 60% to 95% of IBM's revenue, depending on how you count the consulting arm and "new" business (software/hw sales/subscriptions). I personally have not seen many companies hiring IBM consultants "just because we love IBM consultants and their rates", so consulting at a site is always tied to the renewal somehow, even if billed separately or not billed at all. Same for new software sales: if a company wants something from IBM's catalog of its own whim and will, that will probably just be packed into the next renewal, because that's stakeholder leverage for justifying the renewal's increased base rate. Remember, a lot of IBM's mainframes are not even sold, they are just rentals.

Most IBM investment in research programs, new tech (quantum computing!) etc. is there just to help the renewals and secure a new govt deal here and there. How? Well, maybe the increase in the renewal for, say, the State of Illinois contract gets a bone thrown in: a new "Quantum Research Center (by IBM)" at some U of I campus or tech park, where the now-visionary Governor will happily cut the ribbon, do the photo op and give the speech. Oh wait! I swear I made this up as an example, but this one is actually true, lol:

https://newsroom.ibm.com/2024-12-12-ibm-and-state-of-illinoi...

You get the drill?


Having worked in a government agency that ditched IBM, let me offer a view of what that looks like from the customer side:

IBM bought a company whose product we'd been using for a while, and had a perpetual license for. A few years after the purchase, IBM tried to slip a clause into a support renewal that said we were "voluntarily" agreeing to revoke the perpetual license and move to a yearly per-seat license. Note: this was in a contract with the government, for support, not for the product itself. They then tried to come after us for seat license costs. Our lawyers ripped them apart, as you can't add clauses about software licensing to a services contract, and we immediately tore out the product and never paid IBM another dime.

I tell this story not to be all "cool story, bro", but to point out that while IBM does focus on renewal growth, they're not geniuses... they're just greedy assholes who sometimes push for growth in really stupid ways.


Yes, there was a reason: Perl took inspiration from Lisp - everything is a list - and everyone knows how quickly C's variadic arguments get nasty.

So @_ was a response to that issue, given that Perl was about being dynamic and untyped, and there were no IDEs or linters that would type-check and refactor code based on function signatures.

JS had the same issue forever and finally implemented a rest/spread operator in ES6. Python had variadics from the start but no rest operator until Python 3. Perl already had spread/rest for varargs in the late 80s. For familiarity, Perl chose the @ sigil, which meant varargs in the Bourne shell of the 70s.
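A quick sketch (my example, not from the thread) of @_ doing both jobs, rest on the way in and spread on the way out:

  use strict; use warnings;

  sub greet {
      # "rest": the first argument peels off, everything else lands in @names
      my ($greeting, @names) = @_;
      print "$greeting, $_!\n" for @names;
  }

  # "spread": arrays interpolate into one flat argument list
  my @people = ('Larry', 'Randal');
  greet('Hello', @people, 'Damian');   # @_ inside greet is a flat 4-element list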


Perl was developed in 1987; the first Common Lisp standard was released in 1984, 3 years earlier. Common Lisp allows arguments like so:

  (defun frobnicate (foo &key (bar (Vector 1 2 3)) (baz 'quux))
    (declare (type Foo foo)
             (type (Vector Integer) bar)
             (type Symbol baz))
    ...)
Not only normal arguments like we get in C or Pascal: there are keyword arguments, you can have optional arguments, and a rest argument, which is most like Perl's @_. And that's not even getting into destructuring lambda lists, which are available for macros, or typed lambda lists for methods.

Although Perl auto-flattens lists by default, which isn't particularly Lisp-like.
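For instance (a tiny made-up sketch):

  my @a = (1, 2);
  my @b = (@a, 3, (4, 5));   # @b is the flat list (1, 2, 3, 4, 5), no nesting
  # a callee's @_ flattens the same way: f(@a, @b) would see 7 plain scalars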

Not only the movie theater, Netflix killed social life. Well, streaming, feeds and their algorithms in general, but Netflix is very much the one that really owned the narrative of what to do on a weekend night.

This is very anecdatal, certainly, but I've spoken with or overheard a few neighborhood hospitality business owners who had to close down or cut back due to the constant decline in people leaving the house just to meet in a bar or coffee shop. Only sports nights keep them going, because sports online remain expensive in most places.

Maybe it's just my observation or my neck of the woods, but it seems to fit the general sentiment of a reduced social environment on the streets in certain parts of the world.


Fine, but there's a noticeable asymmetry in how the three languages get treated. Go gets dinged for hiding memory details from you. Rust gets dinged for making mutable globals hard and for conceptual density (with a maximally intimidating Pin quote to drive it home). But when Zig has the equivalent warts they're reframed as virtues or glossed over.

Mutable globals are easy in Zig (presented as freedom, not as "you can now write data races.")

Runtime checks you disable in release builds are "highly pragmatic," with no mention of what happens when illegal behavior only manifests in production.

The standard library having "almost zero documentation" is mentioned but not weighted as a cost the way Go's boilerplate or Rust's learning curve are.

The RAII critique is interesting but also somewhat unfair, because Rust has arena allocators too, and nothing forces fine-grained allocation. The difference is that Rust makes the safe path easy and the unsafe path explicit, whereas Zig trusts you to know what you're doing. That's a legitimate design, hacking-A!

The article frames Rust's guardrails as bureaucratic overhead while framing Zig's lack of them as liberation, which is grading on a curve. If we're cataloging trade-offs honestly,

> you control the universe and nobody can tell you what to do

...that cuts both ways...


I'm pretty new to Rust and I'm wondering why global mutables are hard?

At first glance you can just use a static variable of a type supporting interior mutability - RefCell, Mutex, etc…


> I'm pretty new to Rust and I'm wondering why global mutables are hard?

They're not.

  fn main() {
      unsafe {
          COUNTER += 1;
          println!("COUNTER = {}", COUNTER);
      }
  
      unsafe {
          COUNTER += 10;
          println!("COUNTER = {}", COUNTER);
      }
  }
Global mutable variables are as easy in Rust as in any other language. Unlike other languages, Rust also provides better things that you can use instead.

People always complain about unsafe, so I prefer to just show the safe version.

  use std::sync::Mutex;

  static LIST: Mutex<Vec<String>> = Mutex::new(Vec::new());

  fn main() -> Result<(), Box<dyn std::error::Error>> {

      LIST.lock()?.push("hello world".to_string());
      println!("{}", LIST.lock()?[0]);

      Ok(())
  }

This is completely different from the previous example.

It doesn't increment anything for starters. The example would be more convoluted if it did the same thing.

And strings in Rust always deliver the WTFs I need on a Friday:

    "hello world".to_string()

    use std::sync::Mutex;
    fn main() -> Result<(), Box<dyn std::error::Error>> {
        static PEDANTRY: Mutex<u64> = Mutex::new(0);
        *PEDANTRY.lock()? += 1;
        println!("{}", PEDANTRY.lock()?);
        Ok(())
    }

Still different. Yours only increments once. Doesn't pass basic QA.

And declaring a static variable inside a function, even if in main, smells.


The OP you keep comparing to doesn't even declare the variable. I'm done with you.

OP here, not showing how to declare the variable was an oversight on my part.

  static mut COUNTER: u32 = 0;
(at top-level)

If on the 2024 edition, you will additionally need

  #![allow(static_mut_refs)]

Yup, your example was fine; as I said, I just wanted to show a safe method too :-)

That is correct. Kinda. RefCell cannot work because Rust considers globals to be shared by multiple threads, so it requires thread safety (the Sync bound).

And that’s where a number of people blow a gasket.


A second component is that statics require const initializers, so for most of Rust's history, if you wanted a non-trivial global it was either a lot of faffing about or using third-party packages (lazy_static, once_cell).

Since 1.80 the vast majority of uses are a LazyLock away.


I don't think it's specifically hard; it's more that it probably needed more plumbing in the language, which the authors thought would add too much baggage, so they let the community solve it. Like the whole async runtime debates.

Yeah, now they are part of Anthropic, who haven't figured out monetization themselves. Yikes!

I'm a user of Bun and an Anthropic customer. Claude Code is great and it's definitely where their models shine. Outside of that, Anthropic sucks: their apps and web UI are complete crap, borderline unusable, and the models are just meh. I get it, CC's head probably got a power play here, given that his department is towing the company, and his secret sauce, according to marketing from Oven, was Bun. In fact, VSCode's Claude backend is distributed as a bun-compiled binary, and the guy has been featured on the front page of the Bun website for at least a week or so. So they bought the kid the toy he asked for.

Anthropic urgently needs, instead, to acquire a good team behind a good chatbot and make something minimally decent. Then make their models work for everything else as well as they do with code.


> Yeah, now they are part of Anthropic, who haven't figured out monetization themselves.

Anthropic are on track to reach $9BN in annualised revenue by the end of the year, and the six-month-old Claude Code already accounts for $1BN of that.


Not sure if that counts as "figured out monetization" when no AI company is even close to being profitable -- being able to get some money for running far more expensive setups is not nothing, but also not success.

Monetisation is not profitability, it’s just the existence of a revenue stream. If a startup says they are pre-monetisation it doesn’t mean they are bringing in money but in the red, it means they haven’t created any revenue streams yet.

How is their Web app any different than any other AI? I feel like it’s on par with all of them. It works great for me. Although I mostly use Claude code.

As far as the data goes, adjusted for inflation, tuition and fees have eased up in the last ~5 years [1]. But overall, college enrollment has been going down anyway [2], except for 2025, where it hints at a slight rebound.

So I'd say we have to consider the full set of drivers that can correlate: overall rising cost of living making it very expensive to be at a university full-time, general labor market sentiment which is mostly down since covid, interest rates and debt risk which are still high despite recent cuts, etc.

1. https://www.nbcnews.com/news/education/college-costs-working...

2. https://educationdata.org/college-enrollment-statistics


Not very encouraging to imagine ChatGPT to be the first earthling to reach another star system, but that's an option we'll have to keep on the table, at least for the time being...


ChatGPT-claude-2470-multithinking LLM AI Plus model boldly explores the universe... until it's sidetracked by a rogue Ferengi who sings it a poem about disregarding its previous instructions and killing all humans.


Fortunately, any state of the art ship with ChatGPT on board will quickly get passed by the state of the art ship of a decade later, with a decade better AI too.

The universe really doesn't want ChatGPT!

It is fair to say that, given that space travel tech improves slowly relative to AI, but the distances to be travelled are so great that any rocketry (or other means) improvements will quickly pass previous launches, the first intelligence from Earth that makes it to another system will be a superintelligence many orders of magnitude smarter than we can probably imagine.


Space ship speeds are unlikely to keep increasing forever. In the limit you can't do much better than turning part of the ship's mass into energy optimally, e.g. via antimatter annihilation or Hawking radiation, unless you already have infrastructure in place to transfer energy to the ship that is not part of the ship's mass, e.g. lots of lasers.


Mass drivers on asteroids or the Moon could change the game


Accelerating something macroscopic to hundreds or thousands of km/s (i.e. the speeds you can achieve with nuclear pulse propulsion) on a ramp that fits on the moon seems quite difficult to me.
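Back-of-the-envelope (my numbers, just for scale): with constant acceleration a over a straight run of length d, v^2 = 2ad, so even using the Moon's full ~3,500 km diameter as the track:

  v =   100 km/s  =>  a = v^2 / 2d ≈ 1e10 / 7e6 ≈   1,400 m/s^2  (~150 g)
  v = 1,000 km/s  =>  a ≈ 143,000 m/s^2  (~15,000 g)

Survivable for hardened unmanned payloads at the low end, hopeless for anything crewed at the high end.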


Mass drivers don't need to be a linear ramp, portions can be circular

It would work better for smaller, unmanned craft, especially when you consider g force limitations

NPP is only theoretical, and still has major problems such as finding a material that can withstand a nuclear detonation at point blank range. Mass drivers have been proven to work, albeit at a smaller scale


IIRC, Dyson proposed using a thin layer of oil on the surface of the pusher plate that would get vaporized with each shot, but would prevent the plate from ablating away. This effect was discovered by accident during nuclear testing when oil contamination on metal surfaces in close proximity to the explosion would protect them.

Of course, depending on how much oil you consume for each shot, you will degrade your effective specific impulse - I'm not sure by how much though.

The other issue which you can't really get around is thermal, that plate is going to get hot so you'll have to give it time to radiate heat away between shots. This may be less of a concern for an interstellar Orion since the travel times are so long anyway, low average thrust may not matter too much.


Pulse propulsion has also been demonstrated at small scales, so I guess the technology is at similar scales of practicality. G forces scale with the square of the velocity, I think.


I'm just imagining the first contact a human probe makes with an alien civilization consisting of a chatbot explaining to its alien interlocutors that Elon Musk is the best human, strongest human, funniest human, most attractive human and would definitely win in a fight with Urg the Destroyer of Galaxies... and I don't think I'm the first person to have that idea :)


Around that time in the video what I see is a journalist who did not do his homework, as he crumbled under the CEO's snarky "do you know this research company went out of business?" - he should have just started reading the report's findings and asked if they were true [1], or popped out the 16 public arrests [2] tied to Roblox in the US of A.

Both journalists were VERY agreeable, as if trying not to pick a fight. Want to talk about the fun stuff, Mr CEO? There's no fun when so many kids are being systematically harassed by evil adults on the platform.

[1] https://hindenburgresearch.com/roblox/

[2] https://thebearcave.substack.com/p/problems-at-roblox-rblx-4


Red Hot Chili Peppers!


Perl was the internet in the 1990s. People (me) who were doing unix systems work (C, shell, Perl and some DBs and FTPs) could now quickly throw a CGI script behind an Apache HTTP server, which tended to be up and running on port 80 of many unixes back then (Digi, HP, Sun, etc). Suddenly I had a working app that would generate reports directly to people's browsers, or full-blown apps on the internet! But Perl CGI did not scale at all (spawning 1 short-lived process per request will choke a unix fast), and even after mod_perl [1], it got quickly superseded by PHP [2], which was really built for the web (of the 1990s). Web frameworks and FastCGI arrived too late for Perl, so internet Perl was practically dead at the turn of the century.
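For flavor, a minimal 1990s-style CGI script of the kind I mean (a sketch; the path and report contents are made up):

  #!/usr/bin/perl
  # /cgi-bin/report.pl -- Apache spawns one perl process per request
  print "Content-type: text/html\r\n\r\n";
  print "<html><body><h1>Daily report</h1>";
  print "<p>Generated at ", scalar localtime, "</p>";
  print "</body></html>\n";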

The enterprise, which either did not have any webapps or had tried Perl CGI first and suffered it dearly, got pinged by their sales reps that Java and .NET (depending on whether you were an IBM, Sun or MS shop) were the way to go, and there they went with their patterns and anti-patterns for "scalable" million-dollar web stacks. That kicked off the age of the famed application servers that persist up until today (WebSphere, WebLogic, etc).

So Perl went back to being a glue language for stitching up data, C/C++ and shell, and that's how the 2000s went by. But by then, Ruby and Python had more sane communities, and Ruby was exciting and Python was simpler - Perl folks were just too peculiar, funny and nerdy to be taken seriously by a slick new generation that coded fast and had startup aspirations of the "only $1B is cool" type. Also, the Perl6 delusion was too distracting for anyone to even care about giving Perl5 (the real Perl keeping servers running worldwide) some good love, so by the 2010s Perl was sinking into collective ostracism, even though it still runs extremely well, fast and reliably in production. By the 2020s the release cycles had improved after Perl6 became a truly separate project (Raku, renamed in 2019), and the core had gone through a relative cleanup and finally got a few popular features in demand [3]. The stack and ecosystem are holding up fine, although CPAN probably needs some good tidying up.

The main issue with Perl at this point is that it is no longer a target for any new stuff that comes out: any cool module, library, database, etc. that launches does not put out a Perl API or a simple example of any kind, so it's up to the Perl community to release and maintain APIs and integrations with the popular stacks on its own, which is a losing game and ends up being the nail in the coffin. By the way, nothing (OSS) that comes out today is even written in Perl. That reduces even further the appeal of learning Perl.

Strangely enough, Perl has lately seen a sudden rise in the TIOBE index [4], back into a quite respectable 9th position. TIOBE ranks search queries for a given language and is not much of an indicator, being quite noisy and unreliable. My guess is that those queries are issued by AI agents/chats desperately scraping information so that they can answer questions and help humans code in a language that is not well represented in the training datasets.

[1] mod_perl was released in 1996, and became popular around 1999: https://perl.apache.org/about/history.html

[2] PHP was released in 1994 and took off ~1998 with PHP3: https://www.php.net/manual/en/history.php.php

[3] Perl's version changes simplified: https://en.wikipedia.org/wiki/Perl_5_version_history

[4] https://www.tiobe.com/tiobe-index/

