Can you think of a more efficient protocol between the modem and main CPU than the AT-based RIL with constant status checks? Is there any way to make the modem more efficient, since its CPU is just consuming power and not adding much to the functionality of the phone? Can it be a smarter gateway that only triggers a main CPU wakeup when there's something important to notify the user of, like an incoming call or SMS?
> Can it be a smarter gateway that only triggers a main CPU wakeup when there's something important to notify the user of, like an incoming call or SMS?
That's how it works.
> Is there any way to make the modem more efficient, since its CPU is just consuming power and not adding much to the functionality of the phone?
The modem is sleeping most of the time (at least with my driver). It's much more efficient than the main A64 SoC. Like 10 mW sleep vs 100 mW sleep.
> Can you think of a more efficient protocol between the modem and main CPU than the AT-based RIL with constant status checks?
ModemManager/oFono are not using the AT interface, I think. They are using QMI.
Anyway, you don't need constant status checks even with the AT interface. You just enable the status indications you want to receive, and that's what you'll get, without polling.
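To make that concrete, here's a minimal sketch of the "enable indications, then just wait" model using generic 3GPP AT commands. It's not the author's driver; it assumes the AT port shows up as /dev/ttyUSB2 and has already been put into raw mode (e.g. with stty):

    // Minimal sketch: enable a few unsolicited result codes once, then block on reads.
    // Assumes the modem's AT port is /dev/ttyUSB2 and already configured raw.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        port, err := os.OpenFile("/dev/ttyUSB2", os.O_RDWR, 0)
        if err != nil {
            panic(err)
        }
        defer port.Close()

        // AT+CREG=2: push network registration changes
        // AT+CLIP=1: present caller ID along with RING
        // AT+CNMI:   notify on new SMS (+CMTI) instead of requiring polling
        for _, cmd := range []string{"AT+CREG=2", "AT+CLIP=1", "AT+CNMI=2,1,0,0,0"} {
            fmt.Fprintf(port, "%s\r\n", cmd)
        }

        // The modem only speaks when something happens; no polling loop needed.
        scanner := bufio.NewScanner(port)
        for scanner.Scan() {
            line := scanner.Text()
            switch {
            case line == "RING":
                fmt.Println("incoming call")
            case strings.HasPrefix(line, "+CMTI:"):
                fmt.Println("new SMS:", line)
            default:
                fmt.Println("modem:", line)
            }
        }
    }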
My first job was at an ISP I co-founded. Our first dial-up system was a classic 120MHz Pentium w/128MB RAM and a 4GB hard drive that served as the terminal server for 16 phone lines and let users start SLIP/PPP or a shell connection. It was not unusual to have 16 users logged into it at once, running applications on it.
It also was our main e-mail server and web server, hosting both our own website and several customer websites. We had a second one of the same specs for a few other things and backups, but that was our main workhorse.
[EDIT: They were running Linux 1.0 and 1.2, first with NCSA httpd, then Apache; when we first got them they ran Slackware, installed from floppies; I think we ended up installing Redhat at some point before we retired them; the modems were hanging off Cyclades serial cards; and they were literally hanging - for our first 16 lines we had US Robotics Sportster modems hanging on the wall. (I'm writing this in the hope someone will tell me to get off their lawn and tell me how they did more with less, btw.) ]
I really appreciate that CPU manufacturers started adding dedicated hardware for things like decoding MP3s. Not sure about the machine learning hardware they put in nowadays, but then, that's mainly Apple, who do a lot of that for e.g. your pictures (so they don't have to send them to their servers).
MMX is integer math only. MP3s require floating point unless you hand-code a fixed-point version of the decoder. In real life, just recompiling with MMX support gives a marginal difference: https://www.cs.cmu.edu/~barbic/cs-740/mmx_project.html
MMX was a pretty useless marketing gimmick from Intel (it reuses the FPU registers, so you can't run FPU code in parallel), designed to tick boxes and promoted with a fake "designed for MMX" campaign https://www.mobygames.com/images/covers/l/51358-pod-windows-... Spoilers: MMX enables one sound filter in the whole game, with no speed difference. Ubisoft just made some extra cash by printing this on the box.
MMX was one of Intel's many Native Signal Processing (NSP) initiatives. They had plenty of ideas for making PCs dependent on Intel hardware, something Nvidia is really good at these days (PhysX, CUDA, HairWorks, GameWorks). Thankfully Microsoft was quick to kill their other fancy plans https://www.theregister.co.uk/1998/11/11/microsoft_said_drop... Microsoft did the same thing to Creative with Vista, killing DirectAudio, out of fear that one company was getting a grip on a positional audio monopoly on their platform.
> MP3s require floating point unless you hand-code a fixed-point version of the decoder.
This is a weird statement. "MP3 encode/decode requires floating point unless you implement it in fixed point such that you don't need floating point." It's perfectly possible to write fixed-point MP3 decoders.
Sure, MMX wasn't that great, but it was Intel's first SIMD extension, was definitely intended to help with "things like MP3 decoding," and was followed by a ton of improved extensions with similar goals.
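On the fixed-point side, the core trick is just scaled integer arithmetic. A tiny Q15 sketch, purely illustrative and not taken from any real decoder:

    // Q15 fixed point: a real value x is stored as round(x * 2^15) in an integer.
    // Multiplication needs a wider intermediate and a shift back down; no FPU involved.
    package main

    import "fmt"

    const qBits = 15

    func toQ15(x float64) int32   { return int32(x * (1 << qBits)) }
    func fromQ15(q int32) float64 { return float64(q) / (1 << qBits) }

    // qmul multiplies two Q15 values with rounding, using only integer ops.
    func qmul(a, b int32) int32 {
        return int32((int64(a)*int64(b) + (1 << (qBits - 1))) >> qBits)
    }

    func main() {
        a, b := toQ15(0.75), toQ15(-0.5)
        fmt.Println(fromQ15(qmul(a, b))) // -0.375
    }

Roughly speaking, a real fixed-point decoder applies the same idea throughout the IMDCT and synthesis filterbank math.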
What's interesting is that I used to have this old 486 laptop. With Linux on it, I could run XMMS playing my MP3s and do other things, and you could barely tell it was there. In Windows, playing an MP3 file with Winamp took up so much CPU that you were stuck running only Winamp.
Windows 9x had preemptive multitasking bolted on as an afterthought. It's the same engineering flaw that made it so crash-prone: much of the kernel was exposed read/write to applications for backwards compatibility with Windows 3.1. And Windows 3.1 did not have preemptive multitasking at all; everything was cooperative, running in one address space. If an application used anything from that era, the kernel could end up wasting way too many cycles handling the request. This made multitasking under high load dicey at best.
Linux is a fully preemptive OS with a kernel designed to return from system calls quickly, without blocking all the processes in the system. No surprise it fared better.
When I was a kid my school got a Sinclair ZX81 with 1K of memory. One 1K computer for the whole school! Once in a blue moon we would get class time on the computer: our class got to program a 'turtle' to draw shapes on a big piece of paper.
I was so excited about it, and my mom had done some punch-card programming in university and so knew that this was going to be the future.
The Sinclair Spectrum 48k had just come out, and so we splurged and bought one.
We lived in Asia at the time, and games on cassette were difficult to get, so at 10 years old I spent hours typing in BASIC games from magazines, debugging and POKEing and PEEKing to see how things worked.
We must be nearly the same age. I remember entering pages of poke codes from a magazine for a game on my Commodore 64. Those really were great (if tedious) times :)
When I was a kid we had a BBC Micro. In university we had CD burners but if you bumped the computer or did anything else while it was burning the CD, the disc was ruined. :P
Oh yes, and then right around the time 8x became possible (but CD-R only; CD-RW was still 4x initially, I think) someone invented a way to continue after a buffer underrun instead of throwing away the disc. I think that really sold the faster burners, because otherwise they were kind of useless given the increased risk of having a buffer underrun somewhere in the process.
They were expensive coasters back then too! When I was in high school, I spent like $30 (was a ton of money to me back then) on blank CDs (like a 10 pack) only to have them all stolen out of my locker. No idea what the thief would have done with them, as CD burners were not common at all.
When I was 6 my parents decided to buy a PC for our home. I think it must have been a 286, but I really don’t remember. I do remember moving to a 386 and then a 486.
Lots of fond memories of Norton Commander, WordPerfect and the old versions of Battle Chess.
Good times! I wish I had understood back then how much computers were going to change the world. Luckily my parents did.
Same story here. I remember being 5 or 6 years old. We had a DOS PC with an orange CRT display. I think it was also a 286. I still remember the commands to run Word Perfect and this game called Fun House. My first command line experience. Mostly my mom used the computer for word processing and printed off letters on our dot matrix printer.
Peeling off the edges of the pages printed on a dot matrix printer was either super relaxing or super stressful depending on what you had just printed.
Lucky you! The Commodore 64 to Pentium time frame was only 11 years (1982 -> 1993), and computers were expensive then. Being able to make major upgrades every ~2 years would have been a luxury.
These anecdotes mesh with what I remember. A friend in high school around 1999 had a Pentium 133; it could play MP3s without stuttering but it couldn’t do anything else while it did that.
I bought my first PC in the late 80s. It was an 8086 by Amstrad, with a 4MHz processor, 512KB of RAM and no HDD.
It was running MS-DOS 3.30 Plus and I made my first programs in GW-BASIC.
These modems are surprisingly powerful. This reinforces the need for open-source firmware for them. They run their own OS and can stay on, independent of the rest of the hardware, while consuming very little power. Imagine putting your computer to sleep while downloads continue and survive geographical network migration.
A fully open-source firmware for GPS/LTE modems is badly needed.
"They run their own OS and can stay on independent of the rest of the hardware consuming very little power."
It's worse - there's a second, independent computer inside your phone that you have no control over: the SIM card.
The SIM is a standalone computer with its own processor and memory; your carrier can communicate with it and upload programs to run on it without your knowledge ("OTA updates", etc.)
Seeing the ShadySIM project mentioned on that page reminded me of this DEFCON talk, which talks about some of the capabilities of the SIM OS (some of them run Java!) and what power they grant you over phones if you have control over the underlying cell network (legitimately or otherwise).
TBF, eSIM as you describe it is not "no physical SIM card at all", so they might be confusing it with something else. (I assumed, based on their phrasing, that they were talking about a virtualized SIM card running in a (hopefully suitably locked-down) VM.)
And you see some of what you're talking about in closed systems too, where the system integrator can control both sides.
The Wii, and PlayStations since the PS3, have had system control processors that can perform downloads, updates, and general system maintenance on a low-power processor even while the majority of the system is off. In the Wii that processor was also responsible for the majority of the network stack, even when accessed from the main processor. On the PS4 that processor even ran a relatively complete FreeBSD.
So a modem on a pine phone can survive the HN hug of death (and the number one post no less)? Am I missing something here? Because there's no way I believe that!
Since we're here, I did a quick grep through today's access log. It appears that your link got me 573 visitors to that article and 56 people clicking through to the other article linked therein.
My mind is blown that anyone writes software for stacks that max out at a few hundred requests per second.
Everything I've seen comparing "high level" languages says programmers work at around the same efficiency whatever the language. So why isn't everything done in Go, C#, and Java, where 100,000 requests/second is trivial?
Even if you don't have much load, isn't the possibility of a cat DDoS'ing you by sleeping on someone's F5 key at least slightly concerning?
I swear, for half these companies that claim their site is "being DDoS'ed!", it's just somebody running HTTrack to archive an article they like.
A static site is really easy to host, and HN's hug of death isn't actually that intense; the site itself only gets like six million views a day, only a fraction of those readers click on any given link, and no syndicated page has much reach.
People badly overestimate what it takes to serve a website nowadays. I don't know whether it's because we're all used to 10+ second website loads as our browsers groan under the weight of all the trackers, or because we all have too much experience with web pages that run multiple 10ms+ unoptimized queries against databases and pull way too much data back, but we've come to think web pages are some sort of challenge. Web pages themselves really aren't that hard; it's all the stuff you're trying to put into them that is the challenge.
I hit Shift-Ctrl-R to watch the page reload under the network debugger so I could get a snapshot of the total size, which looks to be about 100KB just eyeballing the sums. So at 10MB/s it should be able to serve around 100 requests/second, which is enough for most websites, even ones getting hugged.
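For a sense of how little machinery a static page actually needs, here's a minimal sketch using only Go's standard library (the author uses darkhttpd; this is just to show the scale of the problem):

    // Minimal static file server: serves ./public on port 8080.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        http.Handle("/", http.FileServer(http.Dir("./public")))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

At that point the bottleneck is the pagecache and the network link, not the code.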
Reminds me of when my university changed from “CCnet”, with all courseware/grading running on custom code written by an engineering professor on a tower under his desk, to WebCT/Blackboard.
The former worked fast and without any issues, but the latter just groaned and groaned and crashed.
During the transition, a prof asked the class which one to use and all 250 people said in unison “CCnet”.
“ Its main virtues were the stark, no-nonsense interface and the incredibly efficient (although complex) database system that made CCNet ideal for running large numbers of courses with very little by way of server computing resources.”
I recently migrated a personal blog (a Hugo static site) to DigitalOcean with automated GitHub deployment, SSL certificates, a CDN, and a custom domain. It took all of 15 minutes, including updating the DNS at my registrar.
The same but with AWS S3 instead of a VPS is even simpler and avoids any maintenance / software updates. The bandwidth is more expensive but for a small personal site it shouldn't make much of a difference because you also save the price of the VPS.
I speak as a software engineer whose blog has hit the HN front page multiple times, and who has optimized my blog to be static & lightweight, just like the author. In my experience, a front-page HN post (in the top 3) leads to 30-40 page hits per second. The page at https://nns.ee/blog/2021/04/01/modem-blog.html weighs 44.4 kB transferred (HTML, CSS, images, fonts, etc., all compressed over HTTPS). So this is 11-15 Mbit/s of peak bandwidth, below the maximum throughput the author measured on the Pinephone modem (20.7 Mbit/s).
And the bottleneck is going to be purely network. Not CPU. Not disk. Just serving a small set of files cached in RAM by the pagecache.
It's above the max, no? The author measured 20 Mbps between the phone's main OS and the rest of the world, but 10 Mbps between the phone and the modem (over the ADB bridge).
My website also made the HN front page multiple times and there was never any lag, even on a very cheap VPS. The websites that get the HN hug of death are often the ones requiring database access (e.g. WordPress).
But also, serving static content is more or less a matter of network connection speed. You can cache everything in RAM and write data to a socket very quickly. HTTPS makes things a little wonky because you can’t just sendfile() files directly into a socket but the overhead is still pretty minimal.
Slightly misleading title. This blog post is not hosted over LTE. Instead, the web server just happens to run inside the Pinephone modem.
Internet --> USB --> Pinephone CPU --> (ADB forwarding) --> Pinephone modem with webserver.
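(The forwarding in question is just adb's standard TCP port forwarding; presumably something along the lines of `adb forward tcp:<host port> tcp:<modem port>` on the host side, though the exact ports are my guess rather than anything stated here.)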
That's cool, but I am slightly disappointed. I was hoping to find out that some ISPs allowed forwarding ports over LTE.
Here, I don't really see the advantage over doing it on the main CPU, since it will have to forward packets anyway. And you can use the same reasoning with whatever the pinephone is connected to.
Sorry if it's a bit misleading! Wasn't my intention.
However, there's nothing stopping me from getting a data SIM from my provider (who does allow forwarding ports over LTE). I just currently don't have any spare SIM cards at hand. But that is the plan.
Although the latency will likely skyrocket, I would be very interested in measuring the energy efficiency. The main CPU could even be completely shut down.
You can even imagine waking the main CPU as backup if there are too many incoming requests, though it would be better to use a (likely available) Ethernet-over-USB link instead of ADB forwarding, and that scenario is unlikely for a static blog.
Anyway, if you do so, please make sure to publish a new blog post, post it here, and plot the power consumption during the hug. That would be some interesting data!
You can get carrier-grade connections that would allow this kind of routing - it is not part of a standard consumer package. This type of capability is managed via the "Access Point Name" (APN) setting in phones.
I had been looking to do this exact thing for so long. It didn't occur to me that it was possible to unlock the AT commands for the pinephone. I just wanted to turn it into a modem. It's a pretty solid price for that piece of tech alone, since dev kit LTE modem boards are >$100 in all the places I could find.
The Quectel modems are fun parts. We are using one in a project as a kind of theft protection. I couldn't believe it at first: "this is a whole computer with Linux on it".
I really wonder why modem manufacturers feel the need to encapsulate everything so much. Is it for regulatory reasons, to protect their secret sauce, or for our convenience? On the one hand it makes simple things really easy with some AT commands; on the other hand it makes debugging quite frustrating because it is a black box.
Regulatory reasons. There are a lot of things that FCC regs say that devices that transmit either must or must not do, which made more sense back when everything was analog. Now that the device is a full computer and can be programmed to do whatever you want, the way the industry has decided to best fulfill all those requirements is to make the computer do exactly and only what is required and then lock it down into a black box.
If you just unlocked everything, it would probably be better at what it does, but it would also not be possible to sell to consumers.
Do those regulations explicitly say that the RF device is not allowed to be programmable by the user? And isn't the user legally forbidden from substantially modifying the RF behaviour in any case?
I can't find it now but there was an article posted a while ago that explained the existence of just such rules back when the very first mobile phones used analogue transmission, which of course was easy to intercept. So the FCC forced radio manufacturers to prevent users being able to tune to those frequencies and additionally specified that manufacturers be responsible for making this limitation very hard to defeat... rather than fix the original problem of course.
Those rules exist to this day but have no relevance. However, I suspect there are other similar but slightly saner rules in play, since modems are now just software-defined radios that can be made to do nearly anything you want given the right access - i.e. you can cause a lot of trouble, even if you don't intend to.
Off-topic, abstracting laws away from the harm they aim to prevent has always bothered me a bit. By punishing behaviors (e.g., making a device with a certain set of capabilities) other than the ones people care about (e.g., careless or malicious RF activity) you're nearly guaranteed to introduce unwanted loopholes and second-order effects.
E.g., CA requires tags for your vehicles (fair enough -- gotta charge people for the privilege of driving at all, not just incrementally through gas taxes and whatnot /s), but the law isn't that your vehicle be registered and have tags purchased; it's that valid tags must be displayed on the vehicle.
That makes enforcement super easy because an officer can just walk up and down a line of cars handing out tickets, but it also means that people can break the law for no fault of their own even after having taken the action the law was intending to promote (paying CA in this case). If you tried to renew your tags at the right time in 2020, even months before they were set to expire, you couldn't pick them up in person (covid) and wouldn't receive them in the mail till after the governor's order temporarily allowing you to drive with expired tags had expired. Moreover, at least one police department was excited to cash in on that discrepancy :)
The problem in that example is that the behavior the law is supposed to encourage (paying CA if you own a car) only partially aligns with what the law punishes (not displaying a valid tag on your car). That lack of alignment leaves room for exploitation.
That's just a small example (intentionally so -- hopefully it's simple enough so as to not be controversial), but it's a constant pattern in our legal system:
- Taser-immune meth-heads robbing people at knife point is bad, so let's make a hint of a whiff of any kind of drug anywhere near an individual for any reason a felony.
- Child pornography is bad, so let's (try to) make encryption illegal.
- Pump-and-dump scams are bad, so let's make [thousands of pages of financial legalese] illegal.
- (intentional hyperbole on this point) Low quality medical devices hurting people is bad, so let's give a private corporation the privilege to write up a set of standards, never change them no matter how far the world progresses, and charge people for the privilege of reading them to know what's legal or not.
> I really wonder why modem manufacturers feel the need to encapsulate everything so much.
The alternative would be using an RTOS of some sort and implementing a full network stack (with >100 Mbit/s throughput) and a USB device stack, as the bare minimum. Then you also have I2S audio, GNSS, etc.
It probably makes more sense to use an entire OS at some point.
BTW, this is typical of Qualcomm, which makes the chipsets for these modems.
I've been playing with ESP32s a lot more recently (which go the RTOS route), and in reading the code for the network stacks on these devices, it did occur to me how simple they were in terms of features. I wouldn't say that one needs the entirety of the Linux kernel to implement all of those things, but it does build off of a more widely used stack and reuses a lot of well-tested components.
So from this perspective, it's honestly probably more secure, too, as opposed to trusting a hardware company to hire embedded software engineers to reinvent an entire stack from scratch. The latter are in rare supply, and we all know that reinventing things is a process that often leads to vulnerabilities.
Locking down the communication to just AT commands then makes sense for similar reasons.
LTE et al. is wayyyyy more complicated than WiFi/Bluetooth, for what it's worth. I get the idea that process boundaries help the large heterogeneous teams needed to ship a full-featured LTE modem, similar to how microservices help large heterogeneous teams ship code at FAANGs. It's infrastructure for dealing with human inadequacies rather than a technical requirement.
Why not offload all that to the driver running on the main application processor? The risk of having a second, black-box Linux system is that it'll get outdated and/or compromised and used as a persistence mechanism for malware.
Because you can also use the modem coupled with a low-end MCU, communicating through a serial port. This configuration is more common than one might think (I was personally involved in many projects with that configuration).
Most of these can probably receive updates from the host OS if it's really needed, but I think what it boils down to is that the manufacturers just don't care, unfortunately.
I wonder if Justine's Redbean[0] web server, which was recently posted on HN[1], would support ARM in such a use case. Its executable size is 460KB, it even allows Lua scripting, and it benchmarks at 1 million pages per second on a desktop PC.
While you'll probably run out of memory first, there's the enormous overhead of running QEMU's translation, too. Using QEMU to emulate x86 on my POWER9 system, I get... roughly the performance of a 2001-2004 Pentium processor, and the host CPU is running at 3.8GHz and is comparable speed-wise to a 2018-ish Intel processor.
In spite of the author's insistence, I don't think Cosmopolitan is really usable on anything but x86; the QEMU thing is more a cute proof of concept. For very simple programs, I guess...
For a while I've really liked the thought of small little cheap hardened devices that could be surreptitiously dropped to act like wifi extenders as a "guerilla mesh". Of course there'd probably need to be some thought put into making them safe and look harmless enough not to trigger any kind of severe response if found. But imagine taking a bag around town and dropping "disposable" devices with low enough power use to mostly stay online w/solar and form a mesh network all over the place.
I think the biggest challenge to getting something small enough that it can be "dropped anywhere" is definitely the solar panels. If you add the qualifier "where you have permission" it gets a lot easier - I don't think you need a very large panel to be able to power something suitably basic like this.
Something I used to daydream about was a self-deploying mesh using drones to drop off solar powered nodes on roofs around a city. It could be a sort of emergency deployment system. The drones could go back and pick up nodes that stopped working for some reason and bring them back to base.
I would love to see someone run the electricity draw numbers on a theoretical build of this, accounting for solar panel output, battery longevity, and how long it could theoretically stay online based on how long the sun is up each day.
In my head I'm thinking of an airtight Tupperware container with a Pi Zero inside, set up for meshing. It would be cool if it was fairly flat and had panels on both sides, so you could throw it onto a rooftop and be reasonably sure it'll land with a panel facing up.
Dissipating the heat in continuous usage will also probably be a challenge, not to mention figuring out how to deploy updates, dealing with compromised nodes, impostor nodes, latency, etc.
I mean it's an idea I've thrown around a lot in my head since college, but in all my scenarios, I never opened it up to the public, it was just going to be used for me. If that's the case, then yeah it's doable.
Depends on climate, I guess. I'm in London and here the far greater problem would be moisture and not heat dissipation.
Deploying updates is not that hard - you'll want flash that's large enough for at least two partitions so you can keep a known-good version around. Other than that, signing updates isn't a big deal, and tamper-proofing key storage enough for casual use isn't that hard. I wouldn't trust sending sensitive data clear-text over a network like that, but I wouldn't do that over public wifi anyway.
Latency is a bit more of a challenge if you're somewhere that'd need lots of hops, but if you are then presumably the alternative is worse, especially if you have nodes with LTE connectivity as a possible fallback.
I keep thinking the Meshtastic network is heading in that direction, maybe Disaster.Radio might have some features, and generally there seem to be a ton of projects that contain some but not all of the pieces you'd want.
I'm down to build the hardware, I just haven't a clue on the software side of things.
I think it really depends on the use case. A lot of mesh projects get stuck on really large scale, but if your goal is just to build networks where the number of hops between any given node and one with an uplink is relatively limited - so more "range extender" than a replacement for normal ISPs - the software is an "easy" problem.
If you can come up with a hardware package that is resilient enough, then "all" you need on the software side is a small Linux install and a bootloader setup that will try either of at least two boot partitions. You'll want a watchdog to force a reboot to a known-good install if it doesn't respond after a while. Routing is the tricky bit, but as mentioned, if the number of nodes is small enough, a small routing daemon to periodically exchange routing information and discover performance fluctuations is not a big deal.
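A rough sketch of just the watchdog half of that, assuming a hypothetical mark_next_boot helper that flips the bootloader's boot-partition flag (with U-Boot it might wrap fw_setenv, for example) and a made-up gateway address:

    // Watchdog sketch: if the uplink is unreachable for too long, flag the
    // other (known-good) boot partition and reboot.
    package main

    import (
        "net"
        "os/exec"
        "time"
    )

    func uplinkOK() bool {
        // Cheap reachability probe against a hypothetical mesh gateway address.
        conn, err := net.DialTimeout("tcp", "10.0.0.1:80", 3*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        failures := 0
        for {
            if uplinkOK() {
                failures = 0
            } else if failures++; failures >= 10 {
                // Roughly ten minutes without connectivity: fall back to the known-good image.
                exec.Command("mark_next_boot", "fallback").Run() // hypothetical helper
                exec.Command("reboot").Run()
            }
            time.Sleep(time.Minute)
        }
    }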
Perhaps I'm not being very generous, but I'm often kinda surprised when sites do melt down from the HN effect -- although I recognise it depends on a lot of factors.
Seems like a lot of the time the meltdown is just conservative rate limits on Linux shared hosting. It's too bad, because a lot of the time it happens to sites that are getting their first substantial exposure.
This website returns 40.5 kB from the nns.ee host (with CSS and SVG). 1000 users per second would require 40.5 MB/s, or a 324 Mb/s connection. The author mentioned that his device provides 10 Mb/s throughput, so a realistic number is serving about 30 users per second.
For some reason his main.css is not compressed, so that would be the first thing to optimize.
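(Assuming whatever terminates HTTPS in front of darkhttpd can compress on the fly, e.g. nginx with `gzip on; gzip_types text/css;` in the relevant block, that would be a small config change; that's a guess about the setup, not something the post confirms.)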
The blog post does state they are using darkhttpd but the network request for the blog post says "Server: nginx". Does darkhttpd use nginx inside or something?
I have been using these modems at work, specifically the EC25/EC20. They come with Linux, with some limitations. These modems are typically meant for IoT devices and you can get them for ~16-18 USD (or even lower). 4G/3G/2G, with around 128/256MB flash and 128MB RAM and a 1.3GHz Arm Cortex-A7. They are pretty powerful for their size, cost, current consumption and usability.
Almost all such parts can be hacked across manufacturers see
You can get the Linux distribution and a compilation environment for these from the manufacturer (source code). You may need to have a business account though.
Linux binaries can be used on these; we have tried code written in Rust, Go, etc., cross-compiled for the ARM target and run on these devices, though the primary development is still in C/C++.
They come with a somewhat dated OpenSSL, though you can compile a newer version and use that.
SQLite comes built in.
There is support for peripheral access: I2C, SPI, SDIO, I2S, etc. You could possibly create a simple gaming console with LTE connectivity, since the SPI can drive a TFT and there are IOs for a keypad, plus I2C for touch, accelerometer, gyro, etc.
The actual modem FW is not available, since that would be Qualcomm proprietary!
I'm a bit confused: does the manufacturer provide a toolchain to compile and run binary code on something that is not advertised as a SoM/SoC, but only as an LTE modem?
Yes, that is true. In fact, even the most basic 2G modems in their lineup can be used as an application processor rather than as a plain modem.
In this case, however, it is more like a complete build environment that includes the Linux distribution, GCC and other relevant Linux libraries for the distribution.
The real FW for the cellular modem is likely never exposed, since that may run on a DSP core or an ASIC subsystem.
Any chance that opening a root shell on the modem could be used to grab the closed firmware, reverse engineer it, then swap in an open-source one? I mean, it's not just about security, but also about being able to implement nice things that keep running and connected while, for example, the phone's operating system sleeps between calls.
No need to actually do the RE work (at least not for the whole firmware); if you Google a little bit you can find leaked source for firmware related to the modem in the Pinephone. Look for mdm9607, mdm9x07, mdm9207.
Also, some devs actually managed to replace the Linux part of the firmware with a Linux build from source recently. I think they're trying to get mainline Linux running now.
If someone's reverse engineering that modem, I'm curious what all is running on there, if it makes any connections other than what you ask for, and who it talks to, and what is said.
I think AT&T is giving me an IPv6 address now but they appear to be doing some kind of NAT because connections appear to come from a different v6 address. Convincing one of these ISPs to give you a public address is probably an uphill battle since they'd get in trouble dumping people's phones on the internet.
Does the modem have access to GPS? If so, it would be interesting to implement theft protection or things alike.
Do binaries built with "GOOS=linux GOARCH=arm GOARM=7 go build" work?
I haven't worked with exactly this modem model, but yes, Go binaries work just fine on most of them, since it's usually a heavily cut-down Android inside, using BusyBox.
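(For reference, a typical workflow would be something like `GOOS=linux GOARCH=arm GOARM=7 go build -o app`, then `adb push app /tmp/` and `adb shell /tmp/app`; the binary name and path are arbitrary, and this assumes adb access to the modem as described in the article.)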
As somebody who is looking to start a commercial product that needs a cheap SoM with some IoT connectivity, I'm wondering whether there is a module similar to the one in the OP, but actually advertised and sold as a computer module rather than just an LTE modem. Does anyone know? Something like the Onion Omega2S without the SPI bugs.
Wait wait wait, does this mean data is going over a cellular network? On a data plan from some phone company? And their blog post just showed up here, on the front page of HN, which is notorious for crushing small servers with traffic?
LOLSOB, I really hope this poor person isn't in the US. Or the cellphone overage charges...
He's not actually serving data over the cellular network, but even if he did, how much traffic does being on the front page of HN generate? His blog is very light; this page is only 22.2kB. Assuming 100k pageviews from HN, that would be 2.2GB, well within most mobile plans.
> LOLSOB, I really hope this poor person isn't in the US. Or the cellphone overage charges...
I am in the US, and have unlimited 4G LTE data w/AT&T prepaid for just $65/mo. There wouldn't be any risk of overage charges in this plan, at worst it would be throttled if the network is congested.
You could totally host a small site over such a link, it would just require something like a dirt cheap VPS for the public IP address. Behind the 4G connection your server would maintain tunnels with the VPS for serving the actual content via a reverse proxy or some other circuit. Wouldn't hurt to do some caching at the VPS as well, for static content.
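(One common way to wire that up, not necessarily what anyone here uses: an SSH reverse tunnel like `ssh -N -R 8080:localhost:80 user@vps`, with the VPS's web server or reverse proxy pointed at that port; the host names and ports are placeholders.)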
It's a stretch to call it unneeded; being able to communicate over the cellular network is pretty much the definition of what a mobile phone is, and if you remove that it stops being a phone. In that sense, running the phone with any system at all is optional too, since you can always use it to weigh down some papers on your desk...
So? Apple made such a thing into a product years ago. It was called iPod Touch.
The Pinephone may have "phone" in the name, but it's basically a neat little SBC with an integrated touchscreen and battery, and a bunch of other peripherals, and can be used for anything you like.
It's surprising, but it's not secret. Every mobile phone runs a similar machine, and then another, even lower-level machine, which is your SIM card. It's a microcontroller running Java Card which can be programmed remotely by your telco.
Over here some carriers will even sell you a static IPv4 address for your cellular connection. Other than that, packets are packets, this shouldn't be very surprising.
His blog is hosted on the modem but it does not actually use the cellular network, he routes traffic back to the host using adb port forwarding and then to some other host via USB and then to the Internet.
The original firmware is full of security issues. Things like effectively running `system("something %s", some string input from AT command)` as root, etc.
Originally I wanted to get access to the FW via one of these issues, but then I found the adb unlock, and it's way easier.
All it took me to crack their ADB protection was to extract and disassemble their atfwd_daemon from the firmware update package, look for interestingly named commands, and reverse their algorithm for generating the unlock key.
Well none, but if it's the hardware responsible for managing your wireless connectivity on your mobile phone, you might not be happy about the possibility of that being compromised.
Yeah the blog is slightly misleading. The binary web server is running on the ARM core of the modem only. It's not as though mobile providers are starting to allow web hosting across their LTE network.
Beautiful seeing this on top of HN. :)
That message is from me. Nice to see someone using the access for something cool.
One of the reasons I decided to add it there was to give some visibility to this "feature" of the EG25-G FW to people interested enough to look at dmesg.