oriesdan's comments | Hacker News

By that logic, any site that allows third-party content to be displayed is a phishing/spam site and should be blocked, including Twitter, Facebook, HN, and of course Gmail.


And for the people who often say it's a solution in search of a problem: this is why we need cryptocurrencies. At least the ones that could be used as actual currencies.

It would be so much better if I could click a payment link that opens external wallet software the way "mailto:" links do, verify the amount it proposes to transfer, and click "send" to send the money to the prefilled address, instead of letting someone take it.
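
To make that concrete, here is a minimal sketch of a wallet-side handler, assuming a BIP21-style payment URI (the scheme, address, and prompt below are illustrative, not any real wallet's API). The point is that the amount and destination get displayed and nothing moves until you explicitly approve:

    # Hypothetical wallet-side handler for a BIP21-style payment URI.
    from urllib.parse import parse_qs, unquote

    def parse_payment_uri(uri: str) -> dict:
        scheme, _, rest = uri.partition(":")      # e.g. "bitcoin"
        address, _, query = rest.partition("?")   # address, then key=value pairs
        params = {k: v[0] for k, v in parse_qs(query).items()}
        return {"scheme": scheme, "address": unquote(address), **params}

    # Example URI a merchant page might hand off to the wallet (made up).
    req = parse_payment_uri("bitcoin:bc1qexampleaddress?amount=0.015&label=Shop")
    print(f"Pay {req.get('amount')} to {req['address']} ({req.get('label', '')})?")
    if input("Type 'send' to confirm: ").strip() == "send":
        print("Approved -- hand off to the signing/broadcast code here.")
    else:
        print("Cancelled; nothing was sent.")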

I really hope that one day we'll tell our grandkids about the time when we gave out secret codes on the internet and anyone who had them could take money from our accounts as they pleased, and those kids will think we're delirious.


How is any of that not possible with regular money and a regular bank?

It's already how MobilePay works in Denmark. You give the website your phone number, and a prompt pops up on your phone screen to approve the transfer, with an amount, in a native (trusted) app.

Crypto doesn't add anything here.


As another example, Taler supports this exact requirement, without bringing crypto into the mix.

So does UPI in India. Banking transactions ask for my credentials on the bank website (after a redirect).

None of this needs cryptocoins.


Sending money worldwide is still often problematic (YMMV).


Good for Denmark, but it doesn't exist here. Plus, there is no way I will install a proprietary banking app. For this to be acceptable, it needs to be a standard with multiple implementations. At which point, using cryptocurrencies is the easiest way.


> Plus, there is no way I will install a proprietary banking app.

I'm glad that you found a problem for your solution.


I'm glad you enjoy our dystopian world.


But your point about not trusting an app cuts the same way: at some point you'd need to trust some app.

If your problem is the bank's app, something like PSD2 will solve this by allowing you to connect to the bank directly and use your own app, right?


That's silly. This attack would be much easier to pull off with cryptocurrency.


Yep, the scripts that do so just replace the wallet you're going to send your bitcoin to with the attacker's wallet.


If you get your card skimmed you can call your bank and have your money back.

If they steal your crypto it's buh-bye.


People enjoy convenience. If we were to move to your solution, within a matter of months or years we'd have Chrome integrating its own wallet system as a new feature so you don't need to open a separate app. I think you can see how this is a very probable scenario.


There are literally digital skimmers that perform very similar attacks against checkout systems that use cryptocurrency.

The Lazarus group from North Korea actually did some of the earliest ones I saw, because they love themselves some bitcoin.


People love convenience too much. Look at what Gmail has done to mailto.


For the record, yesterday I received the new mainboard that will ship in those phones, to replace the one in my PinePhone UBports edition. The boost in RAM (3 GB instead of 2 GB) does the trick for me, allowing me to use Firefox. Not to the point that the experience is anywhere close to Android, but at least usable (before that, I used elinks on the phone).

The system I use (Mobian/Phosh) still feels sluggish, but I suspect it's now more a matter of software than hardware. Yesterday I tried KDE Neon/Plasma with the new board, but the screen would black out after a few minutes of use (I had to hard reboot to recover).

I see in this release announcement that their build of Plasma is on top of Manjaro; I'll have to try that one. It's especially interesting because someone from the Plasma Mobile team said on HN the other day that they support MMS.


I grew up in the C64 days - I still can't get my head around the fact that you need 3 GB of RAM because 2 GB is not enough to run a web browser these days.

I know that what's going on on a typical website these days requires an enormous amount of computational power, but I mean, come on. That shouldn't be the explanation, it's part of the problem.


The content you actually see on the screen hasn't changed much at all.

On a typical website all of the analytics are causing page bloat.

Sometimes there are frameworks like Bootstrap, but in theory those should be cached.

If you are looking at a web application, then its framework is an order of magnitude heavier, and you see a significant increase across the rest of it too.


Website bloat isn't the explanation. Firefox on the original 2 GB Pinephone board is painfully slow even with uBlock Origin and NoScript installed. It is slow even to open and browse a minimalist text-only website. I'm not sure how much of this is Firefox, and how much is the whole Mobian UI that depends on GNOME components that have not been optimized to save RAM.

It also has a lot to do with the fact that the Pinephone CPU is underpowered compared to anything from 2020 (or almost anything from 2015, really).


Firefox on the original 2 GiB Pinephone runs just fine (Xorg/i3wm), even on a 1440p monitor, with non-JS, easy-to-composite websites (no complicated CSS filters, blur, etc.), with smooth scrolling.

It's mostly just slowed down by storage access speeds.

https://www.youtube.com/watch?v=LdKNugT-mTQ

And this demo is purposefully using both displays at once to stress the device more. In single-display mode there is less demand on RAM, and more bandwidth is available to the CPU.

Or https://www.youtube.com/watch?v=ZqJOs0YwjwY


It's too bad that (in the case of parsing) most websites have dynamic HTML structures... I wonder how hard it would be to build a browser/wrapper that rendered most websites as wire-frame boxes and plain text, to make them simpler to render.
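
A rough sketch of that wire-frame idea, using only the standard library (the tag list and URL are placeholders, and this ignores dynamically generated DOM entirely):

    # Toy "wire-frame" renderer: collect visible text blocks, skip scripts/styles.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class TextBlocks(HTMLParser):
        SKIP = {"script", "style", "noscript"}
        BLOCK = {"p", "h1", "h2", "h3", "li", "td", "pre", "blockquote"}

        def __init__(self):
            super().__init__()
            self.blocks, self._buf, self._skip = [], [], 0

        def handle_starttag(self, tag, attrs):
            if tag in self.SKIP:
                self._skip += 1

        def handle_endtag(self, tag):
            if tag in self.SKIP and self._skip:
                self._skip -= 1
            elif tag in self.BLOCK and self._buf:
                self.blocks.append(" ".join(self._buf))
                self._buf = []

        def handle_data(self, data):
            if not self._skip and data.strip():
                self._buf.append(data.strip())

    html = urlopen("https://example.com").read().decode("utf-8", "replace")
    parser = TextBlocks()
    parser.feed(html)
    for block in parser.blocks:
        print("[", block[:72], "]")   # each "box" is just a bracketed line of text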


You do realize that if you use uBlock Origin and NoScript, that actually degrades your performance, because the browser has to check each request against the lists before deciding not to load it.


My apologies for not making myself entirely clear: even if you have no addons whatsoever, and you open Firefox and try to navigate to some bare-bones text-only website, even that is extremely slow on the 2GB Pinephone.

However, I would question whether uBlock Origin degrades performance – my own experience running Firefox on low-memory platforms like netbooks and the Raspberry Pi suggests that is not the case. Yes, addons use some amount of RAM, but if they prevent the device from loading a modern advertising-based webpage where the ads and analytics run into the many megabytes and the useless Javascript is CPU-intensive, it seems to work out as a net benefit.


It isn't the Bootstrap-like frameworks that are causing bloat. It's that websites are written as apps these days by people with relatively beefy machines compared to consumers who don't usually buy top of the line hardware every year.

For example, compare Twitter or Facebook to their recent SPA rewrites in React. I can't use either without sacrificing gigabytes of resident memory. Loading anything on those sites now requires many more CPU cycles than it did before.


Reddit is the worst offender for me. I don't know if they use React, but whatever they've done, it's horrific. It's bad even on my top-of-the-line 2020 Intel 13" MacBook Pro (the i7 one)!


Pretty sure they do, or at least they use another SPA framework. I would have mentioned Reddit, but forgot about its rewrite because old.reddit.com still works.


I did not know that in the C64 days you had a browser with x Web APIs for various networking stuff, P2P, a sound API, databases, payment processing, complex hardware-accelerated styling and layout compositing, a platform-independent assembler subset, an integrated IDE, etc. etc.

A web browser these days is simply much, much more than a static document viewer, even if that might be what you want it to be.


I don’t think you need all that for the important stuff.


But other people think differently about what the "important stuff" is; that's why it's there.

And for those who really only want simple HTML rendering, I believe there are lightweight alternatives(?).

And if not, well, then maybe there is simply not enough demand, because most people apparently want to be able to have an email client in the browser and do online banking, or edit Wikipedia articles in a rich HTML editor, or play games, or watch videos and share and comment on them, or even do video editing, ... all in the browser.


> And if not, well, then maybe there is simply not enough demand, because most people apparently want to be able to have an email client in the browser and do online banking, or edit Wikipedia articles in a rich HTML editor, or play games, or watch videos and share and comment on them, or even do video editing, ... all in the browser.

IMO they largely just want to do all these things in an open core VM, without risking installing malware on the ridiculously insecure proprietary OS most of them use if/when they use desktop computers.


Well, yes, and currently there is no alternative to a web browser, which needs a lot of RAM for these tasks ... which was the main point, right? Not that we couldn't do much better in theory, no doubt about that. But the reality is that the browser is usually the most pragmatic solution right now for most use cases. Which is good when I can do online banking on a very niche project like the Pine, or do you think banks would port and certify their apps for the various Linux distros?


I wonder if we could grade how much better a current system is vs something like the QNX 1.44MB demo disk.

http://toastytech.com/guis/qnxdemo.html


That's easy: does the QNX demo meet the requirements of a modern browser?

No? Then it is not.

Or do you mean in an academic sense of functionality per byte? How useful is that?


Not to mention video.


It's interesting because the original iPhone from 2007 managed pretty well with half a gigabyte of RAM (or was it a quarter?).


An eighth (128 MB) in fact, according to Wikipedia. The original Galaxy S had 512 MB though.


Hasn't Android always needed more, though, due to its use of an interpreter/JITer (the JVM initially) instead of native code? That doesn't explain why the PinePhone would need more than two gigabytes for a browser.


Android never used the JVM.


Reminds me of garbage bins. The larger the bin I have the more rubbish I find to fill it.


That boat has sailed. I recently told an intern to set up a simple notification system at work between Linux desktops and Android phones. He ended up downloading 2 GB of Android tooling and 600 MB of kdeconnect to build what is basically a TCP connection.


Especially since on a PC it doesn't even need 1 GB. With 7 add-ons (one being an ad blocker) and Firefox having spawned 9 processes on Win10, it still uses less than 1 GB of RAM for me (even though two tabs are chats, so background-active). Something is clearly wrong with either the Firefox he runs, his add-ons, or the Pinephone.


Or maybe the OS needs some RAM, too?

And no, the firmware and drivers are sadly not memory-optimized (or barely run at all), as far as I know.


Supposedly improvements in hardware acceleration have changed this, but IME a window manager with no compositing is way faster than Phosh. I use Fluxbox, and on that, Firefox scrolls in real time, for example.


Thanks for mentioning it.

How does Fluxbox work on mobile? Is it basically the same as the desktop at a small resolution, or can it receive phone calls and have a notification bar?


If it's the compositor that causes the problem, can't it be disabled? I don't know phosh but in Plasma you can disable the compositing from the system settings.


Nope, there's no option for that in the Phosh settings. Actually, it's the compositor (phoc) that starts gnome-session in /usr/bin/phosh. I just tried editing the startup script to launch gnome-session directly, but it refused to start.

It's also a pure Wayland system, so it's quite unlike what we're used to on the desktop.


> It's also a pure Wayland system, so it's quite unlike what we're used to on the desktop.

When you say "pure", do you mean "no XWayland"?


All Phosh distributions I’ve come across have Xwayland.


I see, thanks for your explanations :).


Yes, which means you can actually see the windows of things running in the background, which can be nice when you're e.g. reading something in Firefox while writing an email. All the phone stuff is handled by Chatty and Calls, notifications are handled by libnotify, alert sounds are handled by feedbackd, etc.


Have you tried Sxmo? It's a very lightweight UI based on suckless programs, but it seems like it would make the phone functions (calls, SMS) very intuitive and easy to access. It's based on postmarketOS, so anything supporting that should also work.

https://wiki.pine64.org/index.php/PinePhone_Software_Release...


I’ve been using Mobian on the UBport edition for the last month. Sluggish doesn’t even describe the usability.

I bought a few data only sims in hopes of leaving the house for short errands with different devices. There is no way I could leave the house with the pinephone. It isn’t usable yet. Could an extra gig of RAM solve this? It seems like it needs years of work.


As you noticed, it's not just a RAM issue. The problem is that Linux GUI and apps generally expect laptop/desktop-ish levels of hardware capabilities because that's what they were developed on/for. You're literally running the same code base that the x86 versions run, just recompiled for ARM. Most mobile SoCs, especially the ones that are Linux-friendly in terms of being open enough to be viable, are not even remotely that. On iOS and Android, widely used apps such as browsers are extensively optimized for mobile.

On the Pinephone, and all other ARM-based Linux devices, we're still running applications like web browsers that were written expecting to run on multi-core I/O monsters with lots of RAM, beefier GPUs etc. (the main exception being the main GUI shell... it would be so much worse if you had to boot the phone into a full Gnome/KDE environment) We're still in the late stages of getting the pile of software that is a typical Linux distro running at all on a mobile device. Then assuming these devices get enough traction, you'll start to see more effort put into mobile-optimized software.


> The problem is that Linux GUI and apps generally expect laptop/desktop-ish levels of hardware capabilities because that's what they were developed on/for.

I remember running X11 and Java with Swing on an 8 MB 486 laptop... So something is off if multiple cores at 30-100 times the frequency can't run a GUI?


Yep. But on mobile, Linux seems to run fine on the N900 and Ubuntu Phones.

The FreeRunner and Zaurus were sluggish, but they’re also on hardware that’s about 15 years old


My Nokia N9 ran a Debian derivative quite well. Gosh I miss that phone.


I have an old 32-bit machine from like 2005 with 2GB of memory running MATE Desktop that Firefox sails on, and that you can open several tabs on without it being a problem.


Yeah, it's obviously a software-related issue... Rather than having so many different distros running on it, it would be far more impressive to me if there were a demonstrable, clear focus on improving performance!


Relatedly, I've had a much better experience with Chromium than Firefox on my Pinebook -- I'm willing to bet this is because of Google's investment in Chromebooks.


Hmmm great points. Is this a "call for mobile versions" of classic Linux programs in a way?


Definitely... that's part of the reason I chime in when this topic comes up. Having done both Linux and mobile development, I don't believe that the entirety of the answer is going to be 'just throw a more powerful SoC in the device'. Sure, that will help to a degree in some areas but there is a lot of work that has been put into iOS and Android to achieve a balance between battery life and performance that most developers who have only worked in/on Linux haven't appreciated. Which is understandable since it wasn't 'their' problem... now with the Pinephone/Librem 5, it is.

Something I have found worthwhile has been listening to the UBports podcast over the last 6 months or so. They really do seem to be 'getting it' faster than most re: what's left to be done. That's partly because they're in the rather unique position of coming from a place of having the Android kernel (which isn't just a vanilla Linux kernel) do a lot of things for them (i.e. UBports on Android phones) and now that they're running on bare metal (i.e. UBports on Pinephone) with (mostly) the same code base they are able to see 'oh, yeah... the kernel and/or apps need to be able to do Y' in order to replicate the functionality they get on Android phones.


UBports relies on 2014-era Ubuntu-specific software that even Ubuntu moved away from. Consequently, I expect a lot of UBports to bit-rot before its maintainers can make it a reliable and competitive option. Mobian's stack isn't what I would have liked (it is a lot of unoptimized GNOME libs), but at least it seems to have enough corporate backing for development to keep going.


Yeah, both projects have their challenges. It's been good to see them, as well as the Librem developers, working with each other to advance their respective projects where it makes sense.

Just curious, what corporate backing are you referring to? I hadn't heard that before.


Historically, much of the GNOME tech stack that Mobian is based on has been developed by Red Hat employees.


Is it possible the Raspberry Pi ecosystem could move things forward for ARM based linux devices? I know they aren’t mobile, but it is a strong ecosystem with some traction.


In a sense it already has. I suspect the reason that Linux ARM support is as good as it is currently has a lot to do with devices like the Raspberry Pi. In 2010 (i.e. before the Pi) I was using a BeagleBoard-xM and things were both rough and spartan. Today the ARM packages in the repos are nearly at parity and things work much better.

However, the Pi has never gotten much beyond being a forky port of Linux software (back then they had to: they were an armv6hf device in an armv7hf world), and since they seem determined to stay forked (i.e. they no longer need to be a fork, but rather seem to want to be one), I don't expect much more from the project in terms of broader Linux enhancements. I'd love to be proven wrong on this.


Pine64 is better in that regard; they attempt to get things upstreamed.


RAM will not solve this. Extra RAM will only allow you to do more things at once.


I tried Mobian but eventually switched to https://github.com/dreemurrs-embedded/Pine64-Arch - the phone is much more usable for me including Firefox. I am waiting to receive my updated board and look forward to further improvements.


> It's especially interesting because someone from Plasma mobile team told on HN the other day that they support MMS.

I'm curious about this too, as I explicitly tried Plasma Mobile on several distros (postmarketOS, Manjaro, KDE Neon), and MMS did not work for me at all. It seems to work on the ofono stack, which seems closer to supporting MMS than ModemManager + Chatty, but I'd love to hear from that same person again about their setup.


For reference, this was the post, in case you come across the author: https://news.ycombinator.com/item?id=25101199


IIRC (not an MMS user, it never took off in Germany), MMS requires provider/carrier-specific settings (e.g. a special MMS APN) to be set, meaning that it may not work for you even though it's properly implemented.


The screen blacking out might be due to DRAM speeds or timings being too aggressive. I received a Manjaro CE phone the other day that does the same thing.

https://forum.pine64.org/showthread.php?tid=9832 has more info


Elinks has known vulnerabilities in its JavaScript engine (SpiderMonkey) and possibly other parts. It is abandonware and should not be used. You could instead use a terminal, mosh, and Browsh, allowing you to use Firefox remotely.


> If you're skilled I'm sure you can find a company that you can put in 10 hours per week at

Do you have real-world experience with that? I found, on the contrary, that the more skilled I get, the fewer companies are willing to let me work only part time. Which is too bad, because a part-time, highly paid job is really the best of both worlds.

Part-time jobs in restaurants are what allowed me to learn programming in early adulthood, and I crave that kind of time for learning new things nowadays. Sadly, while the people I work for are willing to negotiate insane amounts of money (in my opinion), they won't concede the slightest amount of time.


I'm on Gentoo myself, but for people coming from macOS, I would recommend giving KDE Neon a try.

It's based on Ubuntu, but made by interface people (the KDE team). I used it for about a year, and during that time I thought of it as "OS X on Linux" because of how well integrated everything was.

The Plasma desktop and the KDE project have historically been known for integrating their software very well with one another, but Neon goes a step further by controlling the OS itself and being far less buggy than other distributions of the Plasma desktop.

That would probably be a good transition OS coming from Apple products, before people get a taste for chaos and pure freedom and want to go wild.


So basically, on a planet whose surface is mainly covered by oceans, a meteorite falls in an urban area, is big enough not to disintegrate in the atmosphere and small enough not to blow up the house and kill the man (I mean, what happened to craters?), and it suddenly makes him rich.

I'm not a religious person myself but I totally get why this man decided to build a church. I just hope he won't make it any weirder by proclaiming himself a chosen one or something, because there's room for that.


I don’t get the impression that the proposed church is connected in any way to the meteor or himself as a personality, but since you raise the point:

Some time around 54 AD, silversmiths started a riot in Ephesus because their trade in idols was being threatened by people listening to Paul’s teachings of Jesus as Christ; then Acts 19:35 says:

> And when the town clerk had quieted the crowd, he said, “Men of Ephesus, who is there who does not know that the city of the Ephesians is temple keeper of the great Artemis, and of the sacred stone that fell from the sky?

(The town clerk’s speech was largely, “c’mon, guys, you know better than to throw a tantrum/riot over this—use the legal system if you’ve got a complaint”.)

There’s some uncertainty as to exactly what that meant and referred to, but it would seem to suggest that a meteorite had come to be an important part of their worship.


That's interesting, thanks.

Your post motivated me to google... Wikipedia has a list of uses of meteoric iron dating to 2.5k BC and earlier:

https://en.m.wikipedia.org/wiki/Meteoric_iron

The earliest dagger listed there seems to come from the Hattian people of Anatolia:

https://en.m.wikipedia.org/wiki/Hattians

It sounds like they were a pre-Indo-European group later absorbed/conquered by the Hittites...


Wouldn't be the first time.

"This stone [a black conical meteorite] is worshipped as though it were sent from heaven; on it there are some small projecting pieces and markings that are pointed out, which the people would like to believe are a rough picture of the sun, because this is how they see them."

"A six horse chariot carried the divinity [the meteorite], the horses huge and flawlessly white, with expensive gold fittings and rich ornaments. No one held the reins, and no one rode in the chariot; the vehicle was escorted as if the god himself were the charioteer. Elagabalus ran backward in front of the chariot, facing the god and holding the horses reins. He made the whole journey in this reverse fashion, looking up into the face of his god."

https://en.wikipedia.org/wiki/Elagabalus_(deity)#:~:text=The....


See also:

"Islamic tradition holds that it fell from heaven as a guide for Adam and Eve to build an altar. It has often been described as a meteorite."

https://en.wikipedia.org/wiki/Black_Stone


The law of large numbers says this should happen regularly.


They're basically just promising they won't be bad guys, which solves nothing.

The proper way to implement their feature without causing privacy issues would be to periodically update a list of authorized certificates and check against that list locally when launching apps. That would probably also improve performance.
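
As a sketch of what that could look like, assuming a periodically synced local cache (the URL, file path, and refresh interval are hypothetical, and this uses a revocation list rather than an allow-list, but the shape is the same): the launch-time check only reads the cache, so nothing about which app you opened ever leaves the machine.

    import json, os, time, urllib.request

    CACHE = os.path.expanduser("~/.cache/revoked-certs.json")  # hypothetical path
    MAX_AGE = 6 * 3600                                          # re-sync every 6 hours

    def revoked_serials() -> set:
        stale = (not os.path.exists(CACHE)
                 or time.time() - os.path.getmtime(CACHE) > MAX_AGE)
        if stale:
            # Background sync: fetch the whole list, not per-app queries.
            data = urllib.request.urlopen("https://example.com/revocations.json").read()
            os.makedirs(os.path.dirname(CACHE), exist_ok=True)
            with open(CACHE, "wb") as f:
                f.write(data)
        with open(CACHE) as f:
            return set(json.load(f))

    def may_launch(developer_cert_serial: str) -> bool:
        # No per-launch network round trip, so no record of what you ran.
        return developer_cert_serial not in revoked_serials()

    print(may_launch("ABCDEF1234567890"))  # made-up serial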


It should only be "opt-in" and the OS should work if the user chose not to be a subject of surveillance.


I will disable it when it comes, but I think it should be "opt-out", because otherwise the OS becomes insecure by default, and that will hurt the majority of people. Apple can ask about it at first system start, like it does for Siri and analytics.


If it were a pre-downloaded list then this is not surveillance. What it is doing is ensuring that software doesn't have malware in it and/or that the developer's certificate hasn't been removed (e.g. for distributing malware). That's a good thing, like running a checksum on a downloaded file.


I agree, that would be a good idea, as long as it didn't call home and send any reports afterwards.


How big would such a list be and how quick would a local lookup be?


Not any bigger than the package repositories of Linux distributions, which include the list of all known software and sometimes even the rules for building it.

It's just plain text. If I can have a local dump of Wikipedia, I'm pretty sure I can store a list of developer IDs. Especially when I'm the company controlling the hardware and I know the minimum amount of space the hard drives in my computers have.


Extremely small - probably in the megabytes range I would guess.

Think about antivirus definitions - those are many, many times larger, and still they have been kept up to date over the internet for decades.


There is a very large list of binaries that can potentially be downloaded, each of which can have hundreds or thousands of versions, while the number of known virus fingerprints is relatively small.


Apple doesn't check binary hashes but the developer certificates these binaries are signed with, of which there are a lot fewer (e.g. Firefox and Thunderbird share the same certificate).


From what I understood, Gatekeeper still sends an application specific hash/ticket when an application is opened, not just a dev certificate (e.g. https://lapcatsoftware.com/articles/catalina-executables.htm...). Did that change in Big Sur?


The notarization check is on first launch of an app, but it doesn't occur on subsequent launches, unlike the certificate revocation check.


But the first lookup would have to stay, with all the implications that the proposed alternative (download a list of all certs/tickets) was meant to overcome.


This is what Bloom filters would solve. I believe another poster said that Firefox uses them to quickly check valid certs.


Implementation-wise, there are probabilistic data structures like Bloom filters which solve this very easily, with further checks needed only for false positives.
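
For illustration, a toy version of that scheme (sizes and the sample serials are made up): the Bloom filter answers "definitely not revoked" instantly, and only a "maybe" falls through to the authoritative list.

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits: int = 8192, hashes: int = 4):
            self.size, self.hashes = size_bits, hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: str):
            for i in range(self.hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item: str):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item: str) -> bool:
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

    revoked = {"serial-0042", "serial-1337"}   # authoritative list (tiny here)
    bloom = BloomFilter()
    for serial in revoked:
        bloom.add(serial)

    def is_revoked(serial: str) -> bool:
        # Fast path: a negative from the filter needs no further lookup;
        # a positive is confirmed against the full list to rule out false positives.
        return bloom.might_contain(serial) and serial in revoked

    print(is_revoked("serial-0042"), is_revoked("serial-9999"))  # True False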


Kilobytes to megabytes, see CRLite.


Hi, thanks for taking questions.

What is the state of the phone call/sms/mms support in Plasma Mobile on Pinephone?

Also: will the CE edition be some sort of Neon Mobile, or will it be based on postmarketOS or something?


Yes, it works! SMS and MMS are supported via the Spacebar application, which has gotten a lot of improvements in recent months. https://www.plasma-mobile.org/2020/11/12/plasma-mobile-updat...

Calls are also working fine.

The only "exception" is that one carrier in the USA does not seem to work with ofono (the underlying framework) yet, but that issue also existed with the previous UBports edition.


Interesting. My understanding is that MMS is non-functional:

https://sr.ht/~anteater/mms-stack/

But I am admittedly more familiar with the Phosh stack than with the KDE stack. Are you then using ofono, and is that providing working MMS support for the Pinephone?

(Sorry I am not trying to be confrontational, it would be amazing if MMS works!)


MMS doesn't work with ModemManager; it sounds like the OS this ships with uses ofono instead, which has MMS support.


So I cannot speak to ofono, but ModemManager largely does support MMS, with two exceptions:

- The APN for mobile broadband needs to be the same as the one for MMS (ModemManager does not support multiple APNs yet), and

- "transfer-route MT messages" aren't implemented (which some carriers use for MMS, but it is not clear to me how).

The larger issue is that there is no stack on Phosh to support any of this. I proposed a couple of ideas for how such a stack could work on their git site, but I don't have the time to implement it.

From what I saw of Spacebar, it does not have native support for sending picture messages (MMS), does not receive picture messages, and did not have a way to send a group message (MMS). I am admittedly not familiar enough with ofono to know what works or how it functions in the stack. However, there is MMS support in the patched ofono stack below:

https://sr.ht/~anteater/mms-stack/


I hate to say it, but I just tested this and MMS doesn't work on Plasma Mobile (postmarketOS with Plasma Mobile 5.20.3).


This is great news.

Biggest question: Can I send SMS, on this phone, from another computer?

The biggest reason I ditched my iPhone for Android was the excellent implementation of in-browser messaging, to and from my phone, on any computer with a web browser.

I don't need to be able to text from "any computer with a web browser", but if I could send SMS from Win / Mac / Linux machines that I owned, that would be a huge incentive for me to give this platform a shot.


You can use KDE Connect[1] to send SMS from a Windows, Mac, or Linux machine. This also works with Android.

[1]: https://kdeconnect.kde.org/


This is fantastic. Thank you so much!!


There is KDE Connect. You can easily respond to texts from that.

Barring that, sending a text is a simple one-liner in the terminal. Worst case, you can just run the command via ssh or whatever other workflow you may want.
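
For example, here is a sketch assuming a ModemManager-based stack where the mmcli tool is available (an ofono-based image like this one would need different tooling, and the exact output format may vary between ModemManager versions), wrapping that one-liner so it can be run over ssh:

    import re, subprocess

    def send_sms(number: str, text: str, modem: str = "0") -> None:
        # Step 1: create the SMS on the modem; mmcli prints the new SMS object path.
        out = subprocess.run(
            ["mmcli", "-m", modem,
             f"--messaging-create-sms=text='{text}',number='{number}'"],
            capture_output=True, text=True, check=True,
        ).stdout
        sms_index = re.search(r"/SMS/(\d+)", out).group(1)
        # Step 2: actually send it.
        subprocess.run(["mmcli", "-s", sms_index, "--send"], check=True)

    send_sms("+15551234567", "on my way")  # placeholder number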


Beautiful. Thank you so much.


Is there automatic call recording? Or is it on the roadmap?


In Europe, at least in my home country, the law mandates some kind of beep or warning when recording calls.


Yes, and you, as a user, would be breaking the law. But as far as I understand, there is no law that would prevent phones from having this function.


I hadn't heard about the new support for MMS, that's fantastic news! It's such an important part of group messaging in the U.S. that not having access to it was a huge blocker for me in terms of progress towards daily-driver status.


That's awesome. Great job!


Which carrier?


I think the main reason they can afford to price their services that high is peer pressure - probably itself the result of clever marketing, or else that would be a really happy coincidence.

I've worked in many startups now, several of them as the first (and for a while, only) developer who had to decide on the infrastructure. Each time I went with OVH, and each time the CEO tried to push for moving to AWS instead, despite having no clue what the difference might be.

Their problem was that "startups are supposed to use AWS". They had impostor syndrome. One would come tell me every month or so how "all his friends use AWS, and they say it's very good". Another was afraid of what potential investors might say when he told them we're not on AWS.

If people will pay for overpriced services to be with the cool kids, why bother competing on price?


Being on AWS, Azure or GCP isn't about the costs being pressured from another department at all.

The cost reflects the premium for integration into the other services.

It is after the first employees bounce, or the company outgrows that hustle-and-hack-everything-together mentality, that consultants like myself get hired - and we want to sob when we find that some early employee decided to go with OVH as a small startup, especially if it is growing.

Every time I've dealt with this it has been either a disaster or things hanging together by a thread. The hanging-by-a-thread cases have sort of gotten better with Kubernetes mostly keeping everything running by just restarting whatever dies or crashes. The thing is, OVH only started offering managed k8s last year; those poor startups will suffer from these choices.

OVH has its place and should be considered for:

- Companies with 40+ engineers

- High outgoing bandwidth (CDN, video streaming, etc.)

- I/O latency requirements

- Anything at a large enough scale that the cost of AWS egress bandwidth becomes too high


Your lack of punctuation and odd syntax makes me wonder, but if I understood your post correctly, you claim that building with AWS is somehow safer / more robust / more future proof than building with OVH? A technical judgement?

If so, I vehemently disagree. I've been a consultant for 10+ years too and seen 50+ companies from the inside, from startups to behemoths – including AWS itself.

Companies running a tight ship around resources were generally technically superior to those using AWS. "Hanging together by a thread" indeed, playing the AWS bingo of "use a flaky soup of 3-letter-acronym-services to cover technical inaptitude".

AFAIR the AWS versions of Spark and Elasticsearch were abysmal to the point of being unworkable. At least two years ago, maybe it's better now.


I've worked as a contractor for a CEO at two companies; at both, he pushed for a full migration to AWS. I would not be surprised if he got a kickback from AWS.

Amazon is pushing AWS pretty hard at the C-level. I don't know if you've ever followed one of their certifications or landing pages, but they do their marketing really well.

Anyway, I do think a platform like App Engine / Beanstalk and other quick / easy / no setup deployment tools have a benefit, if you're not good at setting up servers.


AWS allows you to shift your costs from CapEx to OpEx. Companies with low CapEx are valued higher since "theoretically" you could remove that bill by moving to another provider. Financial Engineering is just another part of software engineering and the cloud enables it.


This is not a real saving in my experience. The DevOps time for an app is trivial. I actually just set up a .NET Core app on Linux/MySQL.

My Linux experience is old and very limited. I have used AWS for years for other things (S3, cloudfront, transcribe, etc.).

Initially I set up an Elastic Beanstalk app and a separate MySQL instance on my own AWS account just so I could deploy quickly (all new to me).

Then I set up the app on my client's VM; I had to configure Apache, the .NET Core app, the service, MySQL, and mailing.

I would say the Elastic Beanstalk setup took about 3 hours (some problems with IAM and Amazon's Visual Studio plugin; I basically ended up having to use my master key). Setting up the VM server, plus learning a new way of deploying .NET Core apps and learning/relearning much of Linux, took 4-5?

So no significant savings there.

Deploys are a few clicks from VS on EB, and take a little longer to the VM, but only because I haven't bothered writing a script, which I estimate would take me half an hour at most, in reality probably 5 minutes.

I have clients on (Windows) servers that have been running for 10 years with little intervention from me (I had to clear some space a couple of times, as that client's app saves large files just in case, but they are all backed up on S3 as well).

TL;DR: in my experience, the DevOps part of running a startup/small enterprise app is basically trivial, a rounding error compared to the time spent on development.


To be fully honest, for personal work, I use Caprover for DevOps.

Edit: The move from CapEx to OpEx is not about savings, it's often about shifting the costs in your books.


I guess I phrased that wrong. Explicitly, DevOps costs are tiny in a startup, even if you do it all yourself with a bare metal server, and moving 0.5% from pot A to pot B makes no difference.


It all depends on the services you provide.

Some businesses would require huge up-front investments without the likes of AWS. DevOps costs might overwhelm you pretty quickly once stuff like compliance becomes a factor, for example.

Sometimes it's not about the technical issues, but documentation, process and qualifications. In B2B there's plenty of that and just the bus factor [1] alone might force a start-up into considering a cloud provider.

In the end it's not just shifting cost, it's also shifting risk and standards and that may or may not be a critical factor.

[1] https://en.wikipedia.org/wiki/Bus_factor


Except that nearly all companies greater than a certain size have an engagement with AWS so they shifted CapEx to CapEx.


> shifted CapEx to CapEx

or in other words, shifted nothing?


Yes, I think the CapEx argument often advanced by AWS marketing and by the C-levels and engineers of large companies moving to the cloud is just something said to justify the decision and help everybody get on board with it; the CapEx -> OpEx argument is fallacious.

There are other reasons, for example flexibility, managed services, etc., but I don't think this one makes sense.


In my mind, one of the big (but seldom discussed) pros of using AWS, and especially their high-level services (especially w.r.t. containers), is that they allow rank-and-file developers to do a lot more of the work that was traditionally considered 'ops'. This is advantageous because developer teams don't need to coordinate with a single ops team when they need something, which allows the whole organization to be more agile. Another advantage is that you don't have to hire and develop a high-functioning ops competency in your business--you can outsource much of that to AWS and focus your time/resources on more valuable opportunities (in general, I wish the on-prem side of the debate would acknowledge opportunity costs in their talking points).


Startups don't win because they save 50% on hosting costs. They win because they move fast on product development. AWS makes the latter much easier due to all of the additional tooling it provides.


If the constituent employees are trying to win in the classical sense, and not the pump-my-resume-and-bail sense.


The CEO changes his opinion when the money runs out and a new CFO comes in to fix the costs - at least in my experience.


If you need a CFO to look at your IT bill and cut costs, your problem is likely BS title inflation crowding out real work.


I usually search for documentation or tech problems in a text browser with cookies disabled, no JS, no CSS, over Tor, and with an empty user agent.

I won't try to guess why, but the few pages I can access this way are always high quality, while the poor-quality pages (as discovered when opening them anyway in a full browser) always try to block that setup.

