
What exactly leads you to believe that posting your credit card number on hacker news is somehow equivalent to not being outraged by the U.S. sharing _encrypted_ messages of suspected terrorists with the UK? Let’s not pretend the world is so black and white.


I was arguing against the idea of "if you're not doing anything wrong you have nothing to hide".

It's a bad attitude to have because allowing the government to have that power at all is a problem, even for terrorists and murderers and pedophiles. We don't know whether people are those things until we investigate them, which means the government gets broad powers to investigate people to look for those kinds of crimes, and maybe to find other people it wants to get for other reasons.

I used to investigate computer crime. Yeah, my job would have been a lot easier if I had unlimited access to anyone's email. But instead I had to investigate and build a case without privileged access. It was harder, but I was glad that the system couldn't be abused.

Why should governments or law enforcement have privileged access?


Also because "good" democratic governments that respect human rights and personal freedoms deteriorate to oppressive tyrannies all the time. What's to say that your country won't?

Who'll protect you when they come knocking on your door to talk about that thing you posted about somewhere online, which used to be quite legal and no problem, but now is super illegal for some oppressive reason or another?

    First they came for the socialists, and I did not speak out—
         Because I was not a socialist.

    Then they came for the trade unionists, and I did not speak out—
         Because I was not a trade unionist.

    Then they came for the Jews, and I did not speak out—
         Because I was not a Jew.

    Then they came for me—and there was no one left to speak for me.


I have always been a bit surprised at the popularity of Alpine Linux for docker images. It’s awesome that the images are pretty small, but a wide variety of software has been shown to run noticeably slower on Alpine compared to other distributions, in part due to its usage of musl instead of glibc. I’d think that a few megabytes of disk isn’t as valuable as the extra cpu cycles.


For anyone who wants a small image but with glibc, https://github.com/GoogleContainerTools/distroless is a good choice, especially if you are writing in a statically linked language, e.g. Go or Rust.


"distroless" is just Debian packages. Their self-description is fairly annoyingly misleading, since they don't mention that they are just using packages from Debian.


Why is that a bad thing? Binaries are binaries: whether you copied them from a deb package or built them entirely from source (assuming a reproducible build, which Debian supports), they are the same.


It's absolutely not a bad thing, indeed I think it's a good thing to use binaries from Debian. I just think the name of the project is strongly misleading. "distroless" is built from the Debian distribution, not something new made from scratch.


Not sure how widespread that view is, but that's never the association I would have made.

"Xless" is just a different way of saying "without X". "fearless" = "without fear"; "serverless" = "without a server" (yes, I know, not really); so "distroless" = "(comes) without a distribution"

So in my opinion the name correctly suggests that it's "just the package" and comes "without a distro", and not "was built without the help of a distribution".


> Binaries are binaries, whether you copied from a deb package or completely built from source code

No, they are not.

Debian packages are deployable packages, which are built around packages and services and conventions adopted and provided by the target distribution.


it should probably be called "package-manager-less" because there's no package manager in the final build, but there's also no ls, etc, so distroless kinda makes sense. Maybe systemless?


For statically linked binaries, why wouldn't you use the SCRATCH (0 kb) 'image'?


You would. Except most statically linked binaries may still need ca-certificates, tzdata, and some other files that libraries expect to be present on the system.

Not to mention, you still need runtimes if you are programming in Java, Python or many other languages. https://github.com/GoogleContainerTools/distroless project gives a way to have these runtimes + their lib dependencies while still maintaining a minimal attack surface.
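
As a rough sketch for Go (assuming a single hypothetical main package and these image tags; adjust to taste):

    # build stage: produce a fully static binary
    FROM golang:1.12 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    # distroless "static" already carries ca-certificates and tzdata
    FROM gcr.io/distroless/static
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]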


For almost any serious job running in production, you might need CA certificates and openssl.


Unless you're behind a load balancer which terminates TLS and the traffic you deal with is purely http.


Please don't do this anymore. End-to-end encryption is extremely easy to set up and maintain. P2PE will absolutely lull you into a false sense of security.


Everyone seems to think E2E encryption is needed everywhere (I know because the security guys at work think it is needed everywhere, even for everything inside a VPC), but even AWS here is advertising the fact that you don't need to do this:

https://aws.amazon.com/blogs/aws/new-tls-termination-for-net...

>Today we are simplifying the process of building secure web applications by giving you the ability to make use of TLS (Transport Layer Security) connections that terminate at a Network Load Balancer (you can think of TLS as providing the “S” in HTTPS). This will free your backend servers from the compute-intensive work of encrypting and decrypting all of your traffic, while also giving you a host of other features and benefits:


If you trust Amazon to know your risk profile better than your security people, you have a management problem of some sort.


I trust myself, who set up our infrastructure, vs. the security guys whose automatic response to everything is deny all everywhere, encrypt everything everywhere (at-rest encryption isn't enough; what can you do to get the DB to work on the data encrypted internally 100% of the time?), and enable 2FA on everything (the GitHub GUI has 2FA enabled, why aren't push/pull requests using 2FA?).

I think my main point is that kneejerk reactions to satisfy a security checklist checkbox are just as useless as assuming a default "encrypt everything everywhere" stance must be better.


Sounds like you need better security people.


...which, sadly, is true of many larger organizations: "security" ends up as just another middle management approval committee whose only job is to apply byzantine security checklists dreamed up by some Certified Security Architect (tm) way too late in the development process, right when it's hardest for product teams to reshuffle their entire architecture to comply, and with no consideration to the actual circumstances / risk profile of specific projects.

IMHO this should be viewed as a big, glaring anti-pattern, as it fundamentally puts security team goals at odds with product team goals.


Agreed.


...Which sounds like a management problem.


Agreed.


But without TLS, Amazon can "decrypt" your traffic and see what's inside. It's one thing to have a backdoor inside a server that they rent to you, which would have to be actively exploited, and another to passively clone the traffic and analyze it in the name of making the service better.


If you believe that Amazon is potentially an adversary, but you still want to host on their servers, there is essentially nothing you can do to stop them getting at your data. At some point, to process the data, it has to be unencrypted. That is an unpluggable Achilles heel.


Why do they have to actively exploit hardware/vms that they own? Isn't it pretty trivial for a hypervisor to "passively clone" data right out of the memory of the VM? Or to use management interfaces/custom peripherals to exfiltrate data if it were bare metal? AWS is kind of a black box to me but it seems hopeless to try to protect data from people that physically control the systems.


I was especially talking about passively mirroring/analysing network traffic. AFAIK there is no easy and trivial way to "passively clone", aka dump, memory from the hypervisor all the time without it being detectable in slowdowns and so on. My concern was not that I need to protect myself from Amazon for fear that they will hack my server, but about the way that they can get insight into my customers, maybe get a snippet of the data I receive, and so on.

We once saw this with some other company, where they noticed we were talking to the competitors and wanted to talk.


Last time I checked, mTLS incurred significant performance penalties and required significant soak testing to ensure that performance would be acceptable for a given application. If you're a small company, you have much lower hanging fruit to chase.


In my understanding there's additional overhead at handshake, but after that the performance is basically identical. The client certificate mostly acts to identify the client to the server, but otherwise the business of picking session keys etc is the same. At this point TLS overhead is close to free.

I think the start of this thread was a plea not to terminate HTTPS at the edge, but instead to plumb it all the way to the serving container. That's unlikely to be mTLS in any case.


What's an extremely easy solution to set up and maintain automated certificate signing and provisioning?



It does not matter how easy it is; what is the security threat you are mitigating with E2E encryption? In a large-scale system it is often not trivial to build proper E2E, though far from impossible.


Eh, don't teach me about the systems I run. I'd love to run TLS end to end but in this one? Nah, not worth it.


"Don't teach me about ..." -- aren't we all here to learn? Let's keep the tone civil and assume the best.


I work at $CORP. I don't trust my enterprise IT department with unencrypted traffic for fear of falling victim to stupid traffic shaping or deep packet inspection intrusion prevention going haywire.


Good for you. In my current gig the trade-off is different. I don't work for Google.


> and the traffic you deal with is purely http.

Which is a truly rare case, as many backend APIs these days are mandatorily secured by HTTPS (or LDAPS, SMTPS, IMAPS, to name a couple of other OpenSSL-based secure protocols).


Sure, as long as you don't connect to ANY outside URLs for anything.


Which you probably shouldn't be doing for most of your services anyway.


I don't remember the last project I did which didn't have some sort of integration with an external service. I guess if you use microservices, most components won't need it; I mostly use monoliths.


I guess if everything is in a single service you're bound to have some sort of outbound connection at some point. Although, in my experience, you can go very far within your vpc.



I wonder if this would be important with service mesh and mutual tls...


Service meshes make in-cluster mTLS a more-or-less automatic feature, which is worth having. Some will terminate TLS at ingress and convert to mTLS internally. The argument above is that you shouldn't do this and should instead plumb that ingressing TLS traffic all the way to the container.

The downside of plumbing directly to the container is that you lose many of the routing features of a service mesh. If it can't inspect the traffic, it can't do layer 7 routing. It can only route and shape at layer 4.


This has nothing to do with being behind an LB; if you need to make outgoing calls over HTTPS you most likely need ca-certificates.


With static linking your binary would contain openssl.


Not the certificates though, unless you do some special tricks. I think GP is probably thinking of kerberos/NSS, which has a plugin system that requires dynamic linking.


There are two basic flavors of distroless image, one is base, the other one is static.

The `static` image is very bare minimal and only contains things like nsswitch.conf and ca certificates. This is the recommended base image for statically linked languages.

There's also a `base` image that I usually use as the base image of C programs, e.g. stunnel/unbound DNS. In those cases, I usually use Debian Stretch as the build environment (distroless uses binaries from Debian stable, so the ABI is compatible), build a dynamically linked binary, then copy the resulting binary along with all its shared object dependencies to a distroless base image.

So when I talk about openssl in the image, I was referring to the `base` flavor. If you are happy with the provided openssl version, the openssl in the base image is indeed useful.
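
As a rough sketch of that workflow (hypothetical hello.c; the `base` image provides glibc, so a plain C program needs nothing beyond the binary, while anything linking extra libraries also needs those .so files copied across):

    FROM debian:stretch AS build
    RUN apt-get update && apt-get install -y --no-install-recommends gcc libc6-dev
    COPY hello.c /src/hello.c
    RUN gcc -o /src/hello /src/hello.c

    FROM gcr.io/distroless/base
    COPY --from=build /src/hello /hello
    ENTRYPOINT ["/hello"]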


You can easily use multi-stage Dockerfiles to copy the certs.
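
A hedged sketch, assuming a hypothetical statically linked binary myapp built beforehand:

    FROM debian:stretch AS certs
    RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates

    FROM scratch
    COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
    COPY myapp /myapp
    ENTRYPOINT ["/myapp"]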


You might want at least a shell in the container for debugging?


Distroless has debug images for this purpose: https://github.com/GoogleContainerTools/distroless/blob/mast...


Adding a shell seems antithetical to deploying production code as a static-linked binary, not to mention an expansion of the attack surface of the container.


Without a shell, how does one debug if anything goes wrong?


You can start a container with a shell that shares the PID and network namespaces of the container you want to debug.
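
For example, with a hypothetical container named myapp (busybox just supplies the shell and basic tools):

    docker run -it --rm \
      --pid=container:myapp \
      --network=container:myapp \
      busybox sh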


Reading logs/traces on your log aggregation service and reproducing in a dev system?


How do you debug in the dev env without a shell?


from the host system, containers don't exist in a vacuum


With remote debugging?


remote debugging is a shell


not necessarily. e.g. java runtimes can expose debugging ports when needed that operate on a custom protocol.

or you can just build gdb into the container and run the process under gdb, then attach to the tty.

or you can debug from the host system where the container's pid namespace is a descendant of the root namespace and the other namespaces can be accessed via /proc or unshare.


What I meant is having a remote debugger is as good as having a remote shell in terms of remote code execution.


Debugging is about when the difference between theory and practice breaks down.


You can use nsenter
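
Roughly, with a hypothetical container named myapp (entering only its network namespace, so the shell and tools come from the host, which works even for shell-less images):

    PID=$(docker inspect -f '{{.State.Pid}}' myapp)
    sudo nsenter --target "$PID" --net sh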


Depends. Normally I would with Golang apps. But if I need to debug an issue I'll redeploy them with Ubuntu underneath so I can use the debugging tools.


If your binary is static, why do you need a container at all?


So you can justify using k8s and shine up your resume!


It’s the new 3x SQLServer + 2x IIS + 2x SharePoint cluster to serve an intranet for 100 staff when a single <insert oss wiki on linux> would be more than sufficient.


For Python I'd also highly recommend Clear Linux for raw performance. It's not quite as easy to get started as with something like Ubuntu though.


Checked it out; it's Intel-supported, with compiler tweaks to get the most out of their CPUs for libs like math, pandas, etc.


Clear Linux is definitely an interesting project; however, using a base image with the 'latest' tag (the only tag existing for https://hub.docker.com/r/clearlinux/python/tags) in production is not the best strategy, as breaking changes can arrive anytime.



If you want small images, why not use a tool like https://github.com/docker-slim/docker-slim Then it doesn’t matter which distro you favor?


Wow, I've never heard of this before - I'm looking forward to seeing how much this shrinks my images!


Search for "awesome-docker" and make sure you have some free time. :)


I don't quite get it. If you've somehow statically compiled all your dependencies, shouldn't that just run without a container?

Perhaps the point is not to enable program execution, but to make use of the benefits that may come with container orchestration.


Well at the very least running in a container gives you filesystem and network and PID space isolation, optionally also user namespace isolation.
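
For example, with the stock busybox image you can see the isolation directly:

    # fresh PID namespace: ps inside the container only sees itself (as PID 1)
    docker run --rm busybox ps
    # the root filesystem is the image's, not the host's
    docker run --rm busybox ls /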


> I’d think that a few megabytes of disk isn’t as valuable as the extra cpu cycles.

Depends on your workload, of course. Some people want to run a huge number of containers and each isn’t compute intensive.

Or maybe you don’t use libc at all in your fast path?

Lots of cases it makes sense.


If you reuse the base image, the libc files will be shared anyway.


This is the part most people don't realise.

They see Alpine at 5Mb and Ubuntu at 80Mb. They mentally multiply, without realising that each of these will be pulled once for each image built on top of them.

For a large cluster it's a wash. You might as well use Ubuntu, Centos -- anything where there are people working fulltime to fix CVEs quickly.


As far as I'm aware you'd have to load that 80mb into memory for each docker container you run so that can add up if you want to run a bunch of containers on a cheap host with 1GB of RAM.

I do agree that people prematurely optimise and mainly incorrectly consider disk space but I think there's a decent use case for tiny images.


Not quite. The 80Mb represents the uncompressed on-disk size. Those bits can appear in memory in two main ways. It can be executable, in which case large parts will be shared (remember, containers are not like VMs). Or it can in the FS cache, in which case bits will be evicted as necessary to make room for executables.

There's a case for tiny images, but it's in severely-constrained environments. Otherwise folks are fetishising the wrong thing based on a misunderstanding of how container images and runtimes work.


I've no idea why you wouldn't use Ubuntu which is only around 40mb, has a sane package manager and a standard glibc.


> Ubuntu which is only around 40mb […]

I just downloaded Ubuntu 18.04 and 19.04 and they are not 40MB:

    $ docker image ls | grep ubuntu
    ubuntu  19.04  f723e3b6f1bd   76.4MB
    ubuntu  18.04  d131e0fa2585  102.0MB
    ubuntu  16.04  a51debf7e1eb  116.0MB
How do you get a 40MB Ubuntu Docker image?

---

I followed @sofaofthedamned — https://blog.ubuntu.com/2018/07/09/minimal-ubuntu-released

But I’m still confused, where are they getting 29MB from? Even the compressed files are big:

    Ubuntu Bionic [1]
    - ubuntu-18.04-minimal-cloudimg-amd64-root.tar.xz |  77M
    - ubuntu-18.04-minimal-cloudimg-amd64.img         | 163M
    - ubuntu-18.04-minimal-cloudimg-amd64.squashfs    |  96M
    
    Ubuntu Cosmic [2]
    - ubuntu-18.10-minimal-cloudimg-amd64-root.tar.xz | 210M
    - ubuntu-18.10-minimal-cloudimg-amd64.img         | 295M
    - ubuntu-18.10-minimal-cloudimg-amd64.squashfs    | 229M
    
    Ubuntu Disco [3]
    - ubuntu-19.04-minimal-cloudimg-amd64-root.tar.xz |  69M
    - ubuntu-19.04-minimal-cloudimg-amd64.img         | 155M
    - ubuntu-19.04-minimal-cloudimg-amd64.squashfs    |  89M

[1] http://cloud-images.ubuntu.com/minimal/releases/bionic/relea...

[2] http://cloud-images.ubuntu.com/minimal/releases/cosmic/relea...

[3] http://cloud-images.ubuntu.com/minimal/releases/disco/releas...



Don't feel bad. You've discovered the charming little fact that the registry API will report compressed size and the Docker daemon will report uncompressed size.


I switched to Ubuntu Minimal for my cloud instances. It works great. Highly recommended.


Ubuntu used to be much larger so I wouldn't be surprised if people switched to Alpine and never looked back.


40mb vs 5mb is like 5x difference.

There are also slimmed-down images based on Debian or Ubuntu. A number of packages are somewhat older versions, though.


You're only paying that 40MB once though. Multiple containers sharing the same parent layers will not require additional storage for the core OS layer.


Are you guys running everything on a single box?

Do you get all developers to agree on which base image to build all their services from?

I heard about this "oh, it's shared, don't worry" thing before. It started with 40MB. Now that supposedly shared image is half a gig. "Don't worry, it's shared anyway". Expect when it isn't. And when it is, it still slow us down in bringing up new nodes. And guess what, turns out that not everyone is starting from the same point, so there is a multitude of 'shared' images now.

Storage is cheap, but bandwidth may not be. And it still takes time to download. Try to keep your containers as small as possible for as long as possible. Your tech debt may grow slower that way.


As it happens, you're describing one of the motivations for Cloud Native Buildpacks[0]: consistent image layering leading to (very) efficient image updates.

Images built from dockerfiles can do this too, but it requires some degree of centralisation and control. Recently folks have done this with One Multibuild To Rule Them All.

By the time you're going to the trouble of reinventing buildpacks ... why not just use buildpacks? Let someone else worry about watching all the upstream dependencies, let someone else find and fix all the weird things that build systems can barf up, let someone else do all the heavy testing so you don't have to.

Disclosure: I worked on Cloud Native Buildpacks for a little while.

[0] https://buildpacks.io/


> single box

In production, the smallest box has half a gig of RAM.

In development, it's indeed a single box, usually a laptop.

> all developers to agree on which base image to build all their services from

Yes. In a small org it's easy. In a large org devops people will quickly explain the benefits of standardizing on one, at most two, base images. Special services that are run from a third-party image is a different beast.

> Storage is cheap, but bandwidth may not be.

Verily! Bandwidth, build time, etc.


Exactly. With copy on write and samepage merging it's more important to use the same base image for all your deployments.


Technically it's closer to 8x, which is quite a bit. That said, even if it's relatively very different, in absolute terms it's 40MB, which is very little even if you have to transfer it up.


It's sad to me that it wasn't obvious to you 5*5 is not 40, or 40/5 is not 5.


In particular since the base image is shared among all containers using it. It's 35 mb extra for n containers (n>>1).


Exactly. Amazed more people don't know this.


Ubuntu is 40MB but if you add a few packages with tons of dependencies it can quickly reach 800MB. Alpine has much more reasonable dependency trees.


--no-install-recommends is your friend
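
For example (curl and ca-certificates are just placeholder packages here; clearing the apt lists in the same RUN keeps the layer small):

    FROM ubuntu:18.04
    RUN apt-get update \
     && apt-get install -y --no-install-recommends curl ca-certificates \
     && rm -rf /var/lib/apt/lists/*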


Many Debian-based images set this option by default.


And a shitty library at the heart that makes everything suck just a bit more.


The base image may be small, but all the packages and metadata are large, and the dependencies are many. Alpine always leans to conservative options and intentionally removes mostly-unnecessary things. So average image sizes are higher with an Ubuntu base compared to Alpine.


My inability to personally audit systemd would be at the top of the list.


Why would anyone run an init system inside a container?

Just run the process. Let whatever is managing your container restart it if the process quits, be it docker or K8s.


Because sometimes it is easier to ship one thing to customers than 10 different images and several different run manifests.

There's plenty of other reasons.


Here is one reason: https://news.ycombinator.com/item?id=10852055 (TL;DR - default signal behaviour is different for PID 1).

[Edit - fix link]
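
For what it's worth, one common mitigation is to let Docker inject its bundled tini as PID 1 so your app doesn't have to handle the PID 1 signal quirks itself (hypothetical image name):

    docker run --init --rm myimage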


Given that containers usually do not include any init system at all, that's not a good reason to pick a side.


Ubuntu Docker images do not run systemd by default or in most configurations. In many images it is removed entirely.


I wouldn't be too concerned about that in a container since you're probably not running systemd in that context.


As far as I can tell, the recommended way to run several processes in an Ubuntu container is under supervisord. The default Ubuntu containers don't even include an init system.


Supervisord is an awful thing to run in a container, tbf. Any OOM event and it will kill a random process. Not good in a container environment.


I've only ever done it when I needed to run exactly two processes: porting an older piece of software that was not originally designed to run inside containers, where orchestrating the separate processes to communicate and run in separate containers didn't seem worth the effort.


And supervisord will restart it.


I’ll try not to be opinionated, but starting an app inside an Ubuntu container typically leaves you with 50+ processes.

In most cases with Alpine-based containers, the only process is the one that you actually want to run.

Add to that that modern Ubuntu uses systemd, which greatly exhausts the system’s inotify limits, so running 3-4 Ubuntu containers can easily kill a system’s ability to use inotify at all, across containers and the host system. Causing all kinds of fun issues, I assure you.

So the cost is not just about disk-space.

Disclaimer: more experience with LXC than Docker.


I don't think your LXC experience applies to Docker. Most Ubuntu-based Docker containers are not running full init.


Their minimal images are well, minimal. Maybe not as minimal as Alpine, but not heavy either.


I guess the question is how much slower?


Benchmarks here: https://www.phoronix.com/scan.php?page=article&item=docker-s...

For Python in particular, significantly slower.


The Python test is indeed a large outlier! I wonder how reproducible it is, and what the reason might be.


That is very likely due to processor/architecture optimized precompiled Python. See e.g. https://clearlinux.org/blogs/transparent-use-library-package... and https://clearlinux.org/blogs/boosting-python-profile-guided-...


I did an analysis with the official Docker Python image [0] with the --enable-optimizations flag. Although not the point of the results [1], they show a decent speed up when using Debian vs Alpine as well.

[0] https://github.com/docker-library/python/issues/160

[1] https://gist.github.com/blopker/9fff37e67f14143d2757f9f2172c...


Because it's tiny, I tend to default to Alpine and then move away from it where necessary. Rather than worrying about potential CPU performance requirements upfront - premature optimisation and all that.


Isn't there an argument that using Alpine instead of something like Ubuntu is premature optimization for space?


It's faster in terms of development time. Faster to install packages and to push to docker repos.


Why? You should only need to push the base image once. Then only upper layers will have to be pushed.


Making it smaller than necessary is a bigger premature optimisation, though. Rather than worrying about potential disk space issues upfront just use glibc and everything works fast and fine.


If your software is heavily CPU-bound, and the difference matters for you, you can likely find or build a highly optimized image for your particular number-crunching task.

If your software is I/O-bound, and is written in a language like Python or Ruby, and sits idle waiting for requests for significant time anyway, CPU performance is likely not key for you. This also represents the majority case, AFAICT.


Why is musl slower?


I don't want to disparage any project or guess at the motivation, but there have been undercurrents of anti-GPL and anti-complexity sentiment at times. Folks have sort of backed it up by posting links to things that aren't as you might think. glibc in particular looks nothing like how you might imagine it. You can look at strlen in the K&R book and it's beautiful, like a textbook:

    int strlen(char s[])
    {
        int i;

        i = 0;
        while (s[i] != '\0')
            ++i;
        return i;
    }

then you look at: https://sourceware.org/git/?p=glibc.git;a=blob;f=string/strl...

and your mind will sort of explode for a bit. The difference is that the glibc version is dramatically faster; take the comments away and most of us wouldn't even know that's strlen. It's more complex, no question, but it's much faster. glibc is full of stuff like that. qsort and memcpy are non-obvious to many folks. It's not complexity for no reason: you'd be challenged to build a better qsort than the one in glibc. It's not easy.


You'd have to use a pretty esoteric platform to get that version. For most platforms you'll have a hand-rolled assembly version that's dramatically faster than that one (the x86_64 one is here [1]).

[1]: https://github.com/bminor/glibc/blob/master/sysdeps/x86_64/s...


This reminds me of a classic blog post:

    http://ridiculousfish.com/blog/posts/old-age-and-treachery.html
All of those old Unix programs aren't fast by being simple and clean on the inside. Old age and treachery...



It's much simpler and more oriented towards POSIX compatibility than performance.


* musl prioritizes thread safety and static linking over performance

* most software is written and optimized against glibc, not musl


The small size of an image can be an actual issue. Certainly for embedded devices, probably also for clusters.

For me it's mostly a non-issue. But things like redis or nginx work nicely with alpine as a base. And more than likely, whatever I want to containerize has already been containerized for Alpine. If not, getting something to work on Alpine may just not be worth it...


>"It’s awesome that the images are pretty small, but a wide variety of software has been shown to run noticeably slower on Alpine compared to other distributions, in part due to its usage of musl instead of glibc"

Might you have any citation(s) for this? I have not heard this before. Could you elaborate on what some the "wide variety of software" is? Thanks.


I've heard that having shared libraries like glibc allows for commonly shared hotpaths to remain in the CPU cache, making it faster. Whereas in musl, since the binaries all have their own copies of procedures, they are more often kicked out of cache. I don't imagine it has a big impact on a container where you are only running one or two binaries. Perhaps in a desktop environment with hundreds or thousands of different programs running, it might make a difference.


It's not just a few megabytes, it's more like hundreds. If you're moving an image across a bad network (like the internet) that could be hundreds of milliseconds.


I would also add that data transfers cost money, and having to transfer a few hundred MBs each time a container image is passed around can reflect in the expenses.


It's not a few megabytes, it's a few hundred to a thousand megabytes saved, on average. Multiplied by a thousand containers, and much larger layers on build servers, plus bandwidth, it makes a difference.

Worst case for slower processes, things take longer. Worst case for more disk use, things start crashing. For general cases, the former is preferable.


No, it's not. With samepage merging it's nothing, let alone docker only loading the image once.


This assumes everyone is on exactly the same version. Larger organizations can afford to appoint SREs who can institute a broad range of security and optimization policies (including "all apps should use the same Alpine base image version") and enforce them programmatically as well as ensure that apps are continually updated to match them. That kind of thing is expensive, resource-wise, for smaller organizations.


I'm not talking about image waste, I'm talking a single box with a half dozen different base image versions and a slew of extra packages thrown in. Every app built against glibc is going to get bigger, and not all Ubuntu packages are built with small size in mind. Compare these systems with Ubuntu vs Alpine base images and the average size for an Ubuntu ecosystem is substantially larger.


What is the better and safer alternative for cases where some extra megabytes are ok?


I don’t think it is. First, lawmakers in several countries around the world were the first to propose these types of regulations, not Facebook. Second, Mark already acknowledged in the senate hearing that such regulations can hurt small businesses, so lawmakers need to be careful.

Hard problems require complex solutions. It’s not easy for small companies to get “a foot in the door” in the space or auto industries either. Content moderation is a hard and unsolved problem — so yes, companies with more resources will have a better chance of succeeding.


Do we solve it for books? No. Very little of the content lawmakers are worried about is content we would censor books for.


Not all hard problems require complex solutions, unless the definition of a hard problem is one which requires a complex solution.


Lobbyists are a thing. Before lawmakers come out publicly, there are already backroom talks and legal bribes being made.


I like how the only comment that isn’t highly speculative gets immediately downvoted.

