Hacker News | Scene_Cast2's comments

I found out that in the embedded world (think microcontrollers without an MMU), Tensorflow lite is still the only game in town (pragmatically speaking) for vendor-supported hardware acceleration.

OpenRouter has some gotchas with OpenAI models. In some cases it requires an OpenAI key.

Not anymore, especially since other routers like Vercel's AI Gateway, and proxies from LLM providers like Fal, DeepInfra, and AtlasCloud, didn't get the memo about enforcing BYOK for ID-verification-required models after GPT-5's release.

Theoretically yes, in practice no. There is (according to my sensors) a fairly large CO2 increase inside a room when a modern furnace (with external exhaust) is running. I've confirmed this with several units (all made in the last 10 years), and it's not that the windows are closed - when the furnace turns off, the CO2 drops. And it's not that the exhaust is placed in a bad spot either.

> There is (according to my sensors) a fairly large CO2 increase inside a room when a modern furnace

If this is happening, then you shouldn't be using that furnace/room!

Something beyond the furnace is not configured right.


Yes, fossil fuels are the best way to keep pollution away: they just need to be installed perfectly, configured and maintained regularly, monitored to make sure everything is running correctly, and you should have additional properties lying around vacant just in case there are leaks, misconfigurations, poor installation, etc. But we must use fossil fuels, there are no other options!

Could also just be the furnace. Incomplete combustion conditions can give rise to the symptoms mentioned here.

I doubt it's the furnace, unless there's a serial defect/recall.

The problem is likely "between the keyboard and the chair." ;-)


I had a gas furnace that wasn't properly maintained as far as cleaning. Result: insufficient air flow for full combustion. Secondary result: CO build up in basement space. Tertiary result: asthma-like symptoms for me.

I believe it.

Your control for this test should be (and maybe was, you don't say) running the furnace circulation fan without running the burner. CO2 levels are unlikely to be uniform throughout a building, and thus mixing will change (raise, lower) the CO2 levels depending on where you're measuring.

There are large, large gaps of parallel stuff that GPUs can't do fast. Anything sparse (or even just shuffled) is one example. There are lots of architectures that are theoretically superior but aren't popular due to not being GPU friendly.
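To make the memory-access-pattern point concrete, here's a toy model (my own illustration, not anyone's actual hardware) of GPU memory coalescing: a 32-thread warp loading 4-byte elements gets serviced in 128-byte transactions, so a contiguous read costs one transaction while a scattered gather can cost one per thread.

```python
# Toy model of GPU global-memory coalescing. Numbers (32-thread warp,
# 4-byte elements, 128-byte transactions) are typical but illustrative.
LINE_BYTES = 128
ELEM_BYTES = 4

def transactions(indices):
    """Number of 128-byte memory transactions one warp needs to service
    a single load, given each thread's element index."""
    lines = {(i * ELEM_BYTES) // LINE_BYTES for i in indices}
    return len(lines)

coalesced = transactions(range(32))                      # a[0..31], contiguous
shuffled = transactions([i * 97 % 4096 for i in range(32)])  # scattered gather
print(coalesced, shuffled)  # 1 transaction vs 32 transactions
```

Same amount of data moved per thread, but the shuffled gather touches 32x as many memory transactions, which is roughly why "parallel but scattered" workloads underuse GPU bandwidth.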

That’s not a flaw in parallelism. The mathematical reality remains that independent operations scale better than sequential ones. Even if we were stuck with current CPU designs, transformers would have won out over RNNs.

Unless you are pushing back on my comment "all kinds" - if so, I meant "all kinds" in the way someone might say "there are all kinds of animals in the forest", it just means "lots of types".


I was pushing back against "all kinds". The reason is that I've been seeing a number of inherently parallel architectures, but existing GPUs don't like some aspect of them (usually the memory access pattern).

yeah, bad writing on my part.

I noticed that they renamed the Element mobile app to Element Classic. Has Element X reached feature parity and stability yet? For how long will Classic be maintained?

> The outgoing Element mobile app (‘classic Element’) will remain available in the app stores until at least the end of 2025, to ensure a smooth transition

https://element.io/blog/mas-migration-unleashes-element-x-on...

I can't find any other communication from Element Creations other than that.

The renaming to Element Classic doesn't bode well considering that Element X still doesn't support a vast number of home servers and a number of Synapse authn/authz features.

If they remove it from the app store, my advice for my users is going to be to switch to fluffychat, and I'll eventually migrate away from Synapse to some flavor of Conduit.


Element X now has initial support for threads & spaces (as of last week), which were the main things missing from full parity with Element Classic.

Good to know! These are very important features, and not having them really gets in the way of switching off of Classic. I am worried about "initial" support - what is going to break with threads and spaces that I try to join with the new Element X?

Sorry to hijack this thread to ask - but what is the current state of sliding sync? Does it still require a separate proxy service to enable sliding sync if you're self-hosting a homeserver; or is it upstreamed into synapse? Also is there a list of clients that are sliding sync aware?

upstreamed into Synapse as of Sept 2024, and the proxy was sunset in Nov 2024: https://matrix.org/blog/2024/11/14/moving-to-native-sliding-...

Not that many clients have actually adopted it though, because the MSC is still not 100% finalised - it's currently entering the final review stages at last now over at https://github.com/matrix-org/matrix-spec-proposals/pull/418.... Right now Element X uses it (exclusively), and Element Web has experimental support for it; I'm not sure if any others actually use it yet.


As of lately, Spaces are now supported in Element X which possibly brings it to feature parity (at least I wouldn't know what's missing, and I've been using Element X now for some months because of these plans)

You can learn more about Element X migration plans from last weekend's Matrix conference here https://youtu.be/_cahXxr8d-4?si=0b9qyjiEYVpMczDy&t=442

Absolutely not. It doesn't have commands and probably a lot more.

It also lacks parity due to deliberate breakage, like calls.

It's a sluggish buggy mess, so I guess you could say it has parity in that aspect.


> It's [Element X is]...sluggish...

I regret to concur. On an iPhone Pro Max with iOS 18.7 (latest), my stopwatch says:

  - Element X loads to list All Chats in 3 seconds.
  - Element Classic loads to list All Chats in <1 second.
And Element X is supposed to be the "fast one", due to Rust SDK, etc. etc.

I'm giving Element X etc. the benefit of the doubt and will see them through.

But there NEEDS TO BE a user-advocate or project-manager just wailing on usability internally at Element. If you need such a person, find someone, and if you can't find anyone, hit me up, but I would think someone should be filling this role already.

In addition to bundling and network effects, one magic thing that helped grease the skids for some apps like AOL Instant Messenger or Facebook Messenger (in its glory days) or WhatsApp/Discord/Telegram or whatever gain very wide adoption was their relatively seamless user experience.

As much as the Parent sounds like complaining, I think it's complaining in good faith. We want Matrix to succeed.


Hm. Something sounds wrong here, then. On my iPhone 12 Pro Max on iOS 26, my account (~5000 rooms) opens in about 2s in Element X iOS. On the classic app it’s about 10s (ie unusable).

Roughly how many rooms are you in? and what server is this on (it could be a serverside problem)? And what precise build of the app?


  > how many rooms are you in?
8 on both (same account)

  > and what server is this on (it could be a serverside problem)?
It's a hosted SaaS personal homeserver. So yes, quite possibly a server-admin issue. I've just put in a ticket to find out.

EDIT: Synapse 1.139.0

  > And what precise build of the app?
Element X Version 25.10.0 (190)

EDIT: After updating to Element X Version 25.10.1 (192) [latest Update from App Store], about 2 seconds is observed -- still slower than Classic, but a little better than before. I will still finish following up regarding Server issues/info with server admins; hopefully that fixes it.

Thanks a ton for all you do! Good to know it's not the expectation.


This is really surprising. Can you do a clean launch (ie kill the app and relaunch it) and then submit a bug report from both apps and let me know what mxid to look for? (DM to @matthew:matrix.org if needed). The logs will say where the time difference is coming from. EX should always be way faster than classic Element.

Sure, both are uploaded, I'll DM you what to look for

thanks - both received; we'll dig into it. thanks!

> opens in about 2s in Element X iOS

I think we're getting closer.

Your "good experience" on Element X iOS matches my "bad experience" on Element X iOS.

See, with my Server and Chats, Classic is actually very snappy:

  - Element X: ~1.5 seconds avg (rounds to 2 sec if using a non-decimal stopwatch, but more like 1.5 when measured more precisely)
  - Element Classic: ~0.6 second avg (actually slightly faster visually, this includes my response time to stop the timer, probably more like just around/under 0.5 sec)
Anyway, Classic is very fast for me to open. I like it a lot. It feels almost instant.

But X loads in 2-3 times the time. I sit there waiting for content to load, even if it's just for a second.

Is this the best Issue to watch?: https://github.com/element-hq/element-x-ios/issues/4102

Because I really hope speed does not regress for people already with very fast load times in Classic, when X becomes the only flagship App in the App Store.


To be complete, for anyone following along: the above hypothesis was allegedly incorrect. 2 seconds is not supposed to be normal for so few chats. Element X is supposedly normally nearly instant to load & list chats for such a small number of chats.

So, I'll try to come back here and comment if I get it resolved.


Installing and configuring coturn, and then finding out that Element X requires Element Call instead, feels like a "fuck you" from the developers.

what guide were you following that told you to install coturn for Element X? It shouldn't be /that/ surprising that Element X's group-capable calling requires a group call server, and in general most people seem happy to not have to worry about coturn given the server acts as a relay.

The instructions are from Synapse docs:

https://element-hq.github.io/synapse/latest/setup/turn/cotur...

Element (not ElementX), the official/preferred app, works with coturn for 1-on-1 calls. But ElementX does not. IMO it is surprising to break 1-on-1 call functionality.
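For context, the Synapse docs linked above have you wire coturn into `homeserver.yaml` with the legacy TURN settings, roughly like this (hostnames and secret are placeholders; these options only serve legacy 1:1 VoIP, not Element X's MatrixRTC calls):

```yaml
# homeserver.yaml - legacy TURN settings for 1:1 calls in classic Element
turn_uris:
  - "turn:turn.example.org?transport=udp"
  - "turn:turn.example.org?transport=tcp"
turn_shared_secret: "YOUR_COTURN_STATIC_AUTH_SECRET"
turn_user_lifetime: 86400000   # credential lifetime in ms (1 day)
turn_allow_guests: true
```

So it's understandable that someone following those docs expects calls to "just work" in every official client.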


In Matrix 2.0, all calls are group capable (much like Matrix itself doesn’t specialcase DMs - they are just rooms with 2 people). So yes, no coturn is needed.

We haven’t got as far as interop between legacy Matrix 1:1 calling and Matrix 2.0 style MatrixRTC though, which I can see being annoying - but overall the admin burden should be simpler than running a coturn.

We’ll update the synapse docs to explain that coturn isn’t needed for MatrixRTC calls.


If you need an attention sink, try chess! Pick a time control if it's over 2 minutes of waiting, and do puzzles if it's under. I find that there's not much of a context switch when I get back to work.

Running X11 on WSL is pretty easy and pain-free these days. The author could hypothetically run Zathura through that setup in a pinch.

I'm all for a good Acrobat alternative though.


I wonder if there's research into fitting gaussian splats that are dependent on focus distance? Basically as a way of modeling bokeh - you'd feed the raw, unstacked shots and get a sharp-everywhere model back.



Thanks for the links, that is great to know. I'm not quite sold that it's the better approach. You'd need to do SfM (tracking) on the out-of-focus images, which with a macro subject can be really blurry - I don't know how well that works. And it needs a lot more images too. You'd have to group them somehow or preprocess them... and then you're back to focus stacking first :-)


The linked paper describes a pipeline that starts with “point cloud from SfM” so they’re assuming away this problem at the moment.

Is it possible to handle SfM out of band? For example, by precisely measuring the location and orientation of the camera?

The paper’s pipeline includes a stage that identifies the in-focus area of an image. Perhaps you could use that to partition the input images. Exclusively use the in-focus areas for SfM, perhaps supplemented by out of band POV information, then leverage the whole image for training the splat.

Overall this seems like a slow journey to building end-to-end model pipelines. We’ve seen that in a few other domains, such as translation. It’s interesting to see when specialized algorithms are appropriate and when a unified neural pipeline works better. I think the main determinant is how much benefit there is to sharing information between stages.


You can definitely feed camera intrinsics (lens, sensor size, ...) and extrinsics (position, rotation, ...) into the SfM. While the intrinsics are very useful, the extrinsics are not actually that helpful. There's no way you can measure the rotation well enough to get subpixel accuracy. The position can be useful as an initial guess, but I found it more hassle than it's worth. If the images track well and have enough overlap, you can get exact tracking out of them without dealing with extrinsics. If they don't track well, extrinsics won't save you. That was at least my experience.


@dang sorry for the meta-comment, but why is yfontana's comment dead? I found it pretty insightful.


FYI, adding @ before a user name does nothing besides looking terrible and AFAIK dang does not get a notification when he’s mentioned. If you want to contact him, the best way is to send an email to hn@ycombinator.com .


I think I was shadow-banned because my very first comment on the site was slightly snarky, and have now been unbanned.


The juiciest one is the Albin Counter-gambit. If you follow the "ideal line" where white blunders and takes the bishop bait, there's a neat knight underpromotion to win a queen.
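For reference, the line being described is the Lasker Trap (quoted from memory, so worth double-checking against a database):

```
1.d4 d5 2.c4 e5 3.dxe5 d4 4.e3? Bb4+ 5.Bd2 dxe3 6.Bxb4?? exf2+ 7.Ke2 fxg1=N+!
```

The underpromotion is the point: after 8.Rxg1, Black plays Bg4+ and wins the queen, while 8.Ke1 Qh4+ leaves Black with a winning attack anyway.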

From my own play, I typically see knight f3 from white on move 4, which still results in interesting games.

