
The coverage map isn't that impressive though: https://www.garmin.com/en-US/connectivity/fenix8pro/coverage...

They're using geostationary satellites, but their Inreach stuff is using Iridium. Anyone know which satellites they're using for this, and if the coverage can be expected to increase in the future?


It's Skylo, which makes me really sad about this. It 'breaks' the InReach name completely if they're selling both global pole-to-pole Iridium devices and limited-coverage Skylo devices under the same umbrella, almost angling it as "just carry one less thing!" when that one thing might doom you when you need it.

Wow. When I saw "inReach" I totally assumed it was Iridium just like every other device.

Guess I won't be selling my Mini anytime soon!



Wow. Full coverage of the contiguous US is nice, but other than that you can mostly only send emergency messages in places that are already close to civilization, places that probably have cell coverage anyway.

In a ten minute drive from Issaquah, WA (major suburb 20 minutes from Seattle), I can be in the woods with no cell coverage on a mountain frequented by many hikers (parts of Cougar Mt. are especially dead to cell phones). Let alone driving another half hour and having no cell coverage at all once you walk from the trailhead.

And cell service is surprisingly poor at my home in the heart of Redmond suburbs, even. If you rely on a cell phone to get out of a tight spot, stay out of the woods, at least in the U.S. West.


Yea I don’t even hike or do anything woodsy and I frequently have zero coverage in western WA. It’s a real issue, common even, depending on where you live (I’m near the capital)

The West has much less cell service than you think especially in the pretty places.

The US in general has much less cell service than you might think. I'm on a major carrier, I get 1 bar at home, there are places in my neighborhood that effectively have none, and I'm within a few miles of state forest where there is definitely no service.

If you never leave a city or major transportation routes, you might not realize how much "dark" space there is. Those red maps the mobile service providers like to promote seem to me to be extremely deceptive.


Even in the SF Bay Area, a short road bike ride into the hills can quickly get you into areas with zero cell coverage.

The i3 has suicide doors, the i8 is the one with three comma doors.

Sho you right

It's more than all the RAM I had in my Windows 98 computer that ran Windows and Winamp, which was fully capable of playing music and Command & Conquer: Tiberian Sun at the same time.

One killer feature in OSMAnd is the ability to add new map layers. It's possible to find Strava's heatmaps as overlays (unofficially), which can be really helpful, for instance.

I can't get it to always translate a page, it keeps asking me every time I visit.

> I’ve been programming computers since 1986 and even I have never said it would be cool to side load on my phone.

Because you know about the options, and probably have at least one computer where you can install what you want. Imagine if the 1986 version of you had only had access to an iPhone, like most young people today. Would you still be programming computers 40 years later? There are new computer science students in university who don't know how file paths work.


In 1986, it would have been like my only "computer" being my Atari 5200. Are you really arguing that kids today don't know that computers exist? I can't see myself enjoying programming if the only thing I had was an iPhone with a keyboard and mouse, even an "open" one.

I hope this is the start of more ML filters in ffmpeg. They added the sr (super resolution) filter years ago, but it's old and it's difficult to get the weights to run it, since they're not included. They have added support for multiple inference libraries like libtorch, but again, it's difficult to even get started. Hopefully they can get behind a consistent ML strategy, ideally with a "models" directory of ready-to-use models for upscaling, temporal upscaling, noise reduction, etc. A lot of audio and video filter research uses ML now, and new codecs will probably use it soon too.


Is window size visible to websites when JavaScript is turned off? It's off by default in Tor browser.


It's on by default in Tor browser.

You have to explicitly switch to "Safest" mode to turn it off completely.

>Why does Tor Browser ship with JavaScript enabled?

We configure NoScript to allow JavaScript by default in Tor Browser because many websites will not work with JavaScript disabled. Most users would give up on Tor entirely if we disabled JavaScript by default because it would cause so many problems for them. Ultimately, we want to make Tor Browser as secure as possible while also making it usable for the majority of people, so for now, that means leaving JavaScript enabled by default.

https://support.torproject.org/tbb/tbb-34/


Yes, even without JS a website can tell what size your browser window is. One way would be to use a large amount of media queries like so:

    @media (min-width: 1000px) {
      #tester-1000 {
        background-image: url("1000.png");
      }
    }
You could also imagine a website first using ~15 queries to learn the window width to within 100px, and then providing coarser media queries on the next page load.
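As a sketch, such probe rules could be generated mechanically; the selector and image names here are made up for illustration:

```python
# Sketch: generate CSS media queries that reveal viewport width without JS.
# Each rule triggers a request for a distinct image URL, and the server
# infers the width bucket from which images were actually fetched.
# Selector and file names are illustrative, not from any real tracker.

def width_probe_css(step=100, max_width=1500):
    rules = []
    for w in range(step, max_width + step, step):
        rules.append(
            '@media (min-width: %dpx) {\n'
            '  #tester-%d { background-image: url("%d.png"); }\n'
            '}' % (w, w, w)
        )
    return "\n".join(rules)

css = width_probe_css()
print(css.count("@media"))  # 15 probes cover widths up to 1500px in 100px steps
```

Since `min-width` rules stack, the widest image requested tells the server the width bucket directly.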


*finer, not coarser


Yes, CSS and <picture> etc. can load different resources based on viewport size. Then there are side channels like lazy loading, layout + what you interact with.


The last time I was reminded of the bitter lesson was when I read about Guidance & Control Networks, after seeing them used in an autonomous drone that beat the best human FPV pilots [0]. Basically it's using a small MLP (Multi Layer Perceptron) on the order of 200 parameters, and using the drone's state as input and controlling the motors directly with the output. We have all kinds of fancy control theory like MCP (Model Predictive Control), but it turns out that the best solution might be to train a relatively tiny NN using a mix of simulation and collected sensor data instead. It's not better because of huge computation resources, it's actually more computationally efficient than some classic alternatives, but it is more general.

[0] https://www.tudelft.nl/en/2025/lr/autonomous-drone-from-tu-d...

https://www.nature.com/articles/s41586-023-06419-4

https://arxiv.org/abs/2305.13078

https://arxiv.org/abs/2305.02705
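To make the scale concrete, here's a sketch of a G&CNet-style controller in plain Python. The state layout, layer sizes, activations, and random weights are illustrative, not the trained network from the papers above:

```python
import math
import random

random.seed(0)

# A tiny MLP mapping a 12-D drone state (position, velocity, attitude,
# body rates; layout assumed for illustration) directly to 4 motor commands.

def layer(x, w, b, act):
    return [act(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def make(n_in, n_out):
    w = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

w1, b1 = make(12, 8)   # state -> hidden
w2, b2 = make(8, 8)    # hidden -> hidden
w3, b3 = make(8, 4)    # hidden -> 4 motors

def controller(state):
    h = layer(state, w1, b1, math.tanh)
    h = layer(h, w2, b2, math.tanh)
    # Sigmoid squashes each motor command into (0, 1), i.e. throttle fraction.
    return layer(h, w3, b3, lambda z: 1.0 / (1.0 + math.exp(-z)))

n_params = (sum(len(row) for w in (w1, w2, w3) for row in w)
            + len(b1) + len(b2) + len(b3))
print(n_params)  # 212 parameters, in line with the ~200 quoted above
```

Two hidden layers of 8 units over a 12-D state are the entire controller; training on simulation plus collected sensor data is what makes a network this small fly.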


But I have also seen people trying to use deep networks to identify rotating machinery faults, like bearings, from raw accelerometer data collected at high frequencies like 40 kHz, whereas the spectrum obtained by running an FFT on the signal exposes the fault information much more clearly.

Throwing a deep network at a problem without some physical insight into the problem also has its disadvantages, it seems.
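A quick illustration of why the spectrum helps, on a synthetic signal. All numbers are made up; a real bearing analysis would use far higher sample rates and envelope techniques:

```python
import cmath
import math

# Sketch: a weak tone at a hypothetical bearing defect frequency is buried
# under a strong shaft/mains component in the raw waveform, but both appear
# as isolated peaks in the spectrum. Small N keeps a plain DFT cheap.

fs = 256          # sample rate, Hz (illustrative)
n = 256           # one bin per Hz at this choice of fs and n
fault_hz = 60     # hypothetical defect frequency

signal = [1.0 * math.sin(2 * math.pi * 50 * t / fs)        # dominant component
          + 0.2 * math.sin(2 * math.pi * fault_hz * t / fs)  # weak fault tone
          for t in range(n)]

def dft_mag(x):
    N = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / N)
                    for t in range(N)))
            for k in range(N // 2)]

spec = dft_mag(signal)
peaks = sorted(range(len(spec)), key=lambda k: spec[k], reverse=True)[:2]
print(sorted(peaks))  # [50, 60]: both tones recovered as clean peaks
```

The 0.2-amplitude fault tone is invisible to the eye in the time series, yet it sits orders of magnitude above the noise floor in the spectrum, which is exactly the structure a raw-waveform deep network has to rediscover on its own.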


Yeah, we're shouting into the wind here. I have had people tell me directly that my ideas from old school state estimation were irrelevant in the era of deep learning. They may produce (in this case) worse results, but the long game I'm assured is superior.

The specific scenario was estimating the orientation of a stationary semi trailer. An objectively measurable number and it was consistently off by 30 deg, yet I was the jerk for suggesting we move from end to end DL to trad Bar Shalom techniques.

That scene isn't for me anymore.


> That scene isn't for me anymore.

They will learn. At least when the competition beats their solution with a hybrid approach they can't begin to understand.


I’m working on this sort of thing right now in a SaaS product that previously didn’t have support for vibration data. One competitor is all ML-d up to the hilt, but customers don’t like the black box and keep finding it gives false positives with no explanation. I think one problem is that the buyers don’t understand the problem: they just want to plug in a sensor and have insights happen, but without information about the machine that’s never going to provide useful insights.


>It's not better because of huge computation resources, it's actually more computationally efficient than some classic alternatives

It's similar with options pricing. The most sophisticated models like multivariate stochastic volatility are computationally expensive to approximate with classical approaches (and have no closed form solution), so just training a small NN on the output of a vast number of simulations of the underlying processes ends up producing a more efficient model than traditional approaches. Same with stuff like trinomial trees.
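For a sense of what "training on simulations" means here, a minimal Monte Carlo pricer under plain Black-Scholes GBM; the simple case that does have a closed form, used only to keep the sketch checkable (all parameter values illustrative):

```python
import math
import random

random.seed(42)

# Sketch: the kind of simulation whose outputs a small NN could be trained
# on. The comment above is about richer models (multivariate stochastic
# volatility) with no closed form; plain GBM is used here for brevity.

def mc_call_price(s0, k, r, sigma, t, n_paths=100_000):
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total_payoff = 0.0
    for _ in range(n_paths):
        # Terminal price of one simulated GBM path
        st = s0 * math.exp(drift + vol * random.gauss(0.0, 1.0))
        total_payoff += max(st - k, 0.0)
    return math.exp(-r * t) * total_payoff / n_paths

price = mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0)
print(round(price, 2))  # close to the Black-Scholes value of ~10.45
```

The NN approach amounts to running sweeps like this across the parameter space once, offline, then letting a small network interpolate the (parameters → price) map at inference time, which is far cheaper than re-simulating per quote.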


This is really interesting. I think force fields in molecular dynamics have undergone a similar NN revolution. You train your NN on the output of expensive calculations to replace the expensive function with a cheap one. Could you train a small language model with a big one?


> Could you train a small language model with a big one?

Yes, it's called distillation.
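A minimal sketch of the distillation objective, assuming the usual Hinton-style setup: the student is trained to match the teacher's temperature-softened output distribution rather than just hard labels (all values illustrative):

```python
import math

# Sketch of a knowledge-distillation loss over one example's logits.

def softmax(logits, temp=1.0):
    exps = [math.exp(l / temp) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, temp=2.0):
    # KL(teacher || student) on temperature-softened distributions;
    # higher temp exposes the teacher's "dark knowledge" about wrong classes.
    p = softmax(teacher_logits, temp)
    q = softmax(student_logits, temp)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

t = [4.0, 1.0, 0.2]
print(kd_loss(t, t))                      # 0.0: identical outputs, no loss
print(kd_loss(t, [1.0, 4.0, 0.2]) > 0)    # True: mismatch is penalized
```

In practice this term is combined with the ordinary cross-entropy on ground-truth labels, and the gradient flows only into the student.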


Interesting. Are these models the SOTA in the options trading industry (e.g. MM) nowadays?


Can you elaborate on this? I am curious what processes are being simulated to feed the network.


There are also extremely misleading research articles out there promising good results with deep networks in the area of anomaly detection, without adequate comparison with more classical techniques.

This well-known critical paper shows examples of AI articles/techniques applied to popular datasets with good-looking results. But, it also demonstrates that, literally, a single line of MATLAB code can outperform some of these techniques: https://arxiv.org/pdf/2009.13807
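In that spirit, a toy "one-liner" baseline (not the paper's actual MATLAB line, just an illustration of how little it can take on a spike-style anomaly; data is synthetic):

```python
# Flag the point that deviates most from the series mean as the anomaly.
# One line of logic, no training, no hyperparameters.
data = [1.0] * 50 + [5.0] + [1.0] * 49   # synthetic series, spike at index 50

mean = sum(data) / len(data)
anomaly = max(range(len(data)), key=lambda i: abs(data[i] - mean))
print(anomaly)  # 50
```

Any deep method claiming to solve benchmarks of this flavor should at minimum be compared against baselines this trivial, which is the paper's point.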


> an autonomous drone that beat the best human FPV pilots

Doesn’t any such claim come with huge caveats: a pre-specified track/course, no random objects flying through, etc.? I.e. train and test distributions are kept the same by ensuring test-time conditions can never be more complicated than the training data.

Also presumably better sensing than raw visual input.


> "MCP (Model Predictive Control)"

^ that's MPC. (MCP = Model Context Protocol)


Nvidia is supported on WSL2, which is a VM, so that shouldn't be the issue.

https://docs.nvidia.com/cuda/wsl-user-guide/index.html


WSL is a special case: the only case where this is allowed, to my understanding, outside of datacenter-class GPUs. Early in WSL2's life there was a special Nvidia driver you had to download from Microsoft which allowed the GPU to be partitioned, but only by WSL. Now this functionality is built into the normal driver that Nvidia distributes and doesn't require a special download.

