Hacker News | berkes's comments

The main thing that keeps me from using Jupyter notebooks for anything that's not entirely Python, is Python.

For me, pipenv/pyenv/conda/poetry/uv/requirements.txt and the inevitable "I need to upgrade Python to run this notebook, ugh, well, OK" -- two weeks later: "g####m, that upgrade broke that unrelated and old Ansible setup, and now I cannot fix these fifteen barely-held-together servers" -- is pure hell.

I try to stay away from Python for foundational stuff, as any Python project I work on¹ breaks at least yearly on some dependency or other runtime woe. That goes for Ansible, build pipelines, deploy.py or any such thing. I would certainly not use Jupyter notebooks for such crucial and foundational automation, as the giant tree of dependencies and requirements they come with makes this far worse.

¹ Granted, my job makes me work on an excessive number of codebases: at least six different Python projects in the last two months, some requiring Python 2.7, some requiring deprecated versions of lib-something.h, some cutting edge, some very strict in practice but not documented (it works on the machine of the one dev who works on it, as long as he never updates anything?). And Puppet and Chef, being Ruby, are just as bad, suffering from the exact same issues -- except that Ruby has had one (and only one!) package management system for decades now.
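At the very least, a script or notebook can fail fast instead of breaking halfway through on the wrong interpreter or library. A minimal sketch of such a guard -- the version numbers and package names here are hypothetical placeholders, to be pinned to whatever the code was actually developed against:

```python
import sys
from importlib.metadata import PackageNotFoundError, version

# Hypothetical requirements, for illustration only; pin these to whatever
# the script or notebook was actually developed against.
REQUIRED_PYTHON = (3, 10)
REQUIRED_PACKAGES = {"requests": "2"}  # package name -> required major version


def environment_problems():
    """Return a list of human-readable mismatches; empty if all is well."""
    problems = []
    if sys.version_info[:2] < REQUIRED_PYTHON:
        problems.append(
            "Python %d.%d+ required, running %d.%d"
            % (REQUIRED_PYTHON + sys.version_info[:2])
        )
    for name, major in REQUIRED_PACKAGES.items():
        try:
            found = version(name)
            if found.split(".")[0] != major:
                problems.append(f"{name} {major}.x required, found {found}")
        except PackageNotFoundError:
            problems.append(f"{name} is not installed")
    return problems


if __name__ == "__main__":
    for problem in environment_problems():
        print("environment problem:", problem)
```

It doesn't fix the underlying mess, but "this needs Python 3.10 and requests 2.x" printed up front beats a stack trace from line 400 of someone else's notebook.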


Chrome is installed on (almost?) every Android phone. So they'd be buying much more than this.

Not a new "AI phone", which would have to gain traction, find users, convince people to switch, and compete in a highly competitive (hardware) and duopolized (OS, software) landscape.

I won't be surprised if, amongst Android users, Chrome is one of the most installed apps -- if only because many phones have it locked in (i.e. it's really hard or impossible to remove).

Maybe "Google Assistant" is installed more than Chrome, IDK. But Chrome has the additional benefit that it is also installed on many iPhones. So Chrome would be a gateway into "making your iPhone an AI phone" too.


What would be "treason territory"? The leaking or the siphoning of case data?

How is this different from opening any website through a QR code, that will then run "arbitrary code"?

We certainly need more really good browsers which use gecko.

But for that to happen, Mozilla needs to up their effort to pull apart the components, decouple them from their own integration (firefox, thunderbird) and treat them as first-class projects, whose sole focus is to provide browser-builders and such with the components and tools to integrate the pieces.

Purely technically, it's still easier to build around "Chrome" components. Which is why everything from Electron, via "webviews", to the Oculus browser or that webview-thing in your fridge uses Chrome tech and not Mozilla's. Edit: in an ideal world, it would be a no-brainer for e.g. Meta to pick Mozilla components to build a browser for their VR headset. Or for VW when they develop an in-car screen. Or for an app builder to add some web rendering to their in-app help.

But IMO this stems from a fundamental problem at Mozilla. Their cash cow is Firefox. So if they spend time and money making tech that makes competing with Firefox easier, they lose twice. So they will never truly commit to this.

Even if that would, IMO, be one of the most impactful things for Mozilla's manifesto of a "free internet".


It's notable that there's no real nodejs equivalent running on Mozilla tech. I'd love for someone closer to the tech to explain why there's not a rich ecosystem behind spidermonkey, etc.

I too would love to learn more about that.

I am not sure about the current state. But "back then" all the components in Firefox were tightly coupled and almost impossible to extract on their own.

"Back then" being, IIRC, 2012 or so, when I briefly worked on the web and CMS side of a project that used HTML + CSS (and a tiny bit of JS) to render the UI of a media box. The OS was basically a thing that could boot a "browser" and handle network stuff. Firefox was not an option, as it was near impossible to even remove things like the address bar, tab handling and all that. And the hardware was so underpowered that a full browser was not an option either. Yet "yet another KHTML" wrapped in the most basic "executable" did just fine.

But this is a while ago, and only one project that chose not to use Firefox/gecko.


How would they lose? Right now, people looking for a "component" are just using Chrome (Chromium), so Mozilla does not have those "users" to begin with.

If Gecko were as usable for integration as Blink, more people would use it overall, which would be a net benefit for Gecko.


Their loss lies in the fact that this would enable people to build competitors to Firefox, as they would basically be making a box of components to do exactly that.

Yet Firefox, the product, is what brings in money. Not the underlying tech.


I remember the good old times when Mozilla had a project named Chrome (yes) to (if my memory serves me correctly) make building apps with Gecko easier.

edit: Probably misremembering, now that I've searched for it. Yes, "chrome" was (and still is?) used to describe the non-webview parts of Firefox, but apparently I totally made up the project part.


Are you thinking about XUL / XULRunner?

Was it Prism?

That was a project to make it easy to build site-specific browsers, IIRC.


Yes, I think it was. I think some people used it to create custom "chromes" for Gecko, hence the confusion.

edit: Funny enough, the continuation project for Prism was named "Chromeless": https://github.com/mozilla/chromeless


Because, IIRC, in old Mozilla, "Chrome" was the project name for the UI layer of Firefox. "Chromeless" basically just meant Firefox without a UI, for other applications to build on top of.

They should. But whether they can is not that certain.

IANAL, but when I asked a person somewhat involved in EU antitrust processes, OS X and macOS aren't even close to being classified as monopolies in most of the EU, so the idea that Apple is abusing its monopoly to enforce its own tech on users doesn't apply that clearly.


EU antitrust doesn't require a monopoly (or even majority market share), just abuse of a dominant position. I still wouldn't bet on them going after macOS Safari any time soon; it's a much weaker argument, and I doubt they'd be able to force much because of it (unlike iOS Safari).

Indeed.

We had Altavista, and for a very short time it was OK, but then it quickly descended into an ad-ridden "portal". This was 1997 or so.

The web was full of popups, and then popunders. It was not uncommon to close your browser in the computer room, then have to close 20 popups that kept coming back. Some of them showed straight-out porn; at the very least, scams like viagra, "buy gold online" or "download more memory" malware.

Before Google, it was nearly impossible to find anything useful among all the banners, GIFs and "only readable in Netscape" search engines.

Before Mozilla/Firefox, popups made it almost impossible to browse the web for longer than half an hour before the browser crashed or the computer locked up.

Chat was insecure: scammers, groomers, malware injection and MITM attacks were everywhere. There was no privacy.

Forums, BBSes and NNTP were full of "trolls" before this term was even known. Flamewars, flamebait, and again, scammers, groomers and malware everywhere.

I do have fond memories of this time. But also know these memories are distorted. It was a dark forest already.

The main difference, I believe, was that the majority of internet users back then were smart, mostly Western, educated or young people. I.e. the "tech literate" folks. Those who knew how to deal with malware, scams, groomers, privacy, hackers. Those who knew how to navigate around popup bombs, redirect loops, illegal content and criminals. But the bad stuff was there from the early days. Today, the "bad stuff" has shifted from criminals to monopolized big tech tapping our attention and data, but it has always been there, this dark side.


I've never liked it, but

> Move fast and break things

is really a bad concept in this space, where you get limited shots at releasing something that generates interest.

> Employees are encouraged to ship half-baked features

And this is why I never liked that motto and have always pushed back at startups where I was hired that embraced this line of thought. Quality matters. It's context-dependent, so sometimes it matters a lot and sometimes hardly at all. But "moving fast and breaking things" should be a deliberate choice, made anew for every feature, module, sprint and story, IMO. If at all.


> is really a bad concept in this space, where you get limited shots at releasing something that generates interest.

Sure, but long-term effects depend more on the actual performance of the model than on anything else.

Say they launch a model that is hyped as the best, but when people try it, it's worse than other models. People will quickly forget about it, unless it's particularly good at something.

Alternatively, say they launch a model that doesn't even get a press release, or any benchmark results published ahead of launch, but the model actually rocks at a bunch of use cases. People will start using it regardless of the initial release, and continue to do so as long as it's among the best models.


I'd argue it's a bad concept in any space that involves teams of people working together and deliverables that enter the real world.


I'm with you. Yet I've always understood "move fast and break things" to mean that there is value in shipping stuff to production -- value that's hard to obtain by just sitting in the safe and relatively simple corner of your local development environment, polishing things up for eternity without any external feedback whatsoever, for months or even years, and then doing a big drop. That's completely orthogonal to things like planning, testing, quality, taking time to think, etc.

Maybe the fact that this is how I understood that motto is in itself telling.


I understand it as that too. But even then, I dislike it.

Yes, "shipped but terrible software" is far more valuable than "perfect software that's not available". But that's a goal.

"Move fast and break things" is a means to that goal, one of many ways to achieve it. In this, "move fast" is self-evident: who would ever want to move slowly if moving fast had the exact same trade-offs?

"Break things" is the part that I truly dislike. No. We don't "break things" for our users, our colleagues or our future selves (i.e. tech debt).

In other situations, "move fast and break things" implies a preferred lean towards low quality in the famous speed/cost/quality trade-off: by decreasing quality, we gain speed (while keeping cost the same?).

This fallacy has long been debunked, with actual research and data to back that up¹: software engineering projects that focus on quality actually gain speed! Even without reading research or papers, this makes sense: we all know how much time is wasted fixing regressions, muddling through technical debt, rewriting unreadable, unmanageable, unextensible code, etc. A clean, well-maintained, neatly tested, up-to-date codebase allows one to add features much faster than that horrible ball of mud that has accumulated hacks, debt and bugs for decades.

¹ E.g. "Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations" is a nice starting point, with lots of references to research and data on this.


> is really a bad concept in this space, where you get limited shots at releasing something that generates interest.

It's a really bad concept in any space.

We would be living in a better world if Zuck had, at least once, thought "Maybe we shouldn't do that".


>It's a really bad concept in any space.

I struggle with this, because it feels like so many "rules" in the world where the important half remains unsaid. That unsaid portion is then mediated by Goodhart's law.

If the other half is "then slow down and learn something", it's really not that bad: nothing is sacred, we try, we fail, we learn, and (critically) we don't repeat the mistake. That's human learning -- we don't learn from mistakes, we learn from reflecting on mistakes.

But if learning isn't part of the loop -- if it's a self-justifying defense for fuckups, if the unsaid remains unsaid -- it's a disaster waiting to happen.

The difference is usually in what you reward. If you reward shipping, you get the defensive version -- and you will ship crap. If you reward institutional knowledge-building, you don't. Engineers are often taught "good, fast, or cheap: pick two". The reality is that it's usually closer to one, or one and a half. If you pick fast... you get fast.


> Also, xhtml is just markup.

And so it's not a programming-language runtime (i.e. JavaScript or WASM), nor a CSS renderer, nor a bunch of web APIs.

It's these things, not the (X)HTML parsing and rendering, that make a browser the complex thing it is.


Signal. Not because they cannot read this metadata (they can), but because they promise they don't.

