Accidental complexity, accidental complexity, accidental complexity. When we use a tool designed for problem A on problem B, it does not deal with the complexity sufficiently. This is why we need a new tool.
While a bit of a hot take, you're not wrong. We need something that's less scalability focused than Kubernetes/Mesos/Docker Swarm but that doesn't put too much burden on application developers. Something that focuses on being secure, reliable, and understandable, in that order. I'm not aware of anything going for that niche. That means a new tool is in order.
I think we need a composable system, but I am not sure the current frame in which this problem and these tools exist is good enough. We might need to rethink how we handle access and usage patterns. I only have wrong answers. Docker Compose is amazing for a local dev environment; k8s is terrible for production. Those are my experiences with this domain.
And how does anybody know, just looking at this line, what the actual implementation is? Creating new names like sst.aws.Cluster that hide the actual details is problematic for me. ECS has three flavors as of 2024. How should I know which one is in use when writing that line of code?
Amazon ECS capacity is the infrastructure where your containers run. The following is an overview of the capacity options:
- Amazon EC2 instances in the AWS cloud
- Serverless (AWS Fargate) in the AWS cloud
- On-premises virtual machines (VM) or servers
One more interesting detail I found is that other services do not try to hide the implementation details.
The original k8s paper mentioned that the only use case was a combination of low-latency and high-latency workloads, and that resource allocation is based on that. The general idea is that you can easily move low-latency work between nodes, and there are no serious repercussions when a high-latency job fails.
Based on this information, it is hard to justify even considering k8s for the problem that Gitpod has.
Not a C coder, but isn't there a way to embed platform-specific optimizations into your C project and make them conditional, so that at build time you get the best implementation?
Yes, but then you have to write (and debug and maintain) each part 3 times.
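For the curious, here is a minimal sketch of what those copies look like with compile-time selection. The add_arrays kernel is made up for illustration; the __AVX2__ and __ARM_NEON macros are defined by the compiler based on your build flags (e.g. -mavx2 or -march=native):

```c
#include <stddef.h>
#include <stdio.h>

#if defined(__AVX2__)
#include <immintrin.h>
/* x86 build with AVX2 enabled. */
static void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; i++) dst[i] = a[i] + b[i];        /* scalar tail */
}
#elif defined(__ARM_NEON)
#include <arm_neon.h>
/* ARM/AArch64 build with NEON available. */
static void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        vst1q_f32(dst + i, vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    for (; i < n; i++) dst[i] = a[i] + b[i];        /* scalar tail */
}
#else
/* Portable scalar fallback for every other target. */
static void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    for (size_t i = 0; i < n; i++) dst[i] = a[i] + b[i];
}
#endif

int main(void) {
    float a[16], b[16], dst[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * (float)i; }
    add_arrays(dst, a, b, 16);
    printf("dst[15] = %.1f\n", dst[15]);            /* expect 45.0 */
    return 0;
}
```

Each of those three branches is a separate implementation you have to write, debug, and keep in sync, which is exactly the maintenance cost being described.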
There are also various libraries that create cross-platform abstractions over the underlying SIMD instruction sets. Highway, from Google, and xsimd are two popular such libraries for C++. SIMDe is a nice library that also works with C.
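As a sketch of how that looks with SIMDe (assuming its headers are on your include path; the add_arrays kernel is again just illustrative), you write against the familiar SSE intrinsic names once and SIMDe maps them to whatever the target actually has:

```c
/* With SIMDe, the simde_mm_* calls compile to native SSE intrinsics on x86
 * and are translated to NEON or plain scalar code on other targets. */
#include <stddef.h>
#include <simde/x86/sse.h>

static void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        simde__m128 va = simde_mm_loadu_ps(a + i);
        simde__m128 vb = simde_mm_loadu_ps(b + i);
        simde_mm_storeu_ps(dst + i, simde_mm_add_ps(va, vb));
    }
    for (; i < n; i++) dst[i] = a[i] + b[i];        /* scalar tail */
}

int main(void) {
    float a[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    float b[8] = {7, 6, 5, 4, 3, 2, 1, 0};
    float dst[8];
    add_arrays(dst, a, b, 8);
    return dst[0] == 7.0f ? 0 : 1;                  /* trivial sanity check */
}
```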
> Yes, but then you have to write (and debug and maintain) each part 3 times.
could you not use a test suite structure (not saying it would be simple) that would run the suite across 3 different virtualized chip implementations? (The virtualization itself might introduce issues, of course)
FFmpeg is the most successful of such projects and it uses handwritten assembly. Ignore the seductive whispers of people trying to sell you unreliable abstractions. It's like this for good reasons.
Or you just say that your code is only fast on hardware that supports native AVX-512 (or whatever). In many cases where speed really matters, that is a reasonable tradeoff to make.
You can always do that using build flags, but it doesn't make the code portable; you as a programmer still have to manually port the optimized code to all the other platforms.
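Concretely, the flag route looks something like compiling with `cc -O3 -mavx512f foo.c` and letting the preprocessor refuse to build anywhere else; a tiny sketch:

```c
/* Build-flag gating: -mavx512f lets the compiler use AVX-512 everywhere
 * and defines __AVX512F__, which we can use to fail fast on other builds. */
#if !defined(__AVX512F__)
#error "This code path requires AVX-512; build with -mavx512f"
#endif
```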
Yeah, you can do that, but that still means you write platform-specific code. What you typically do is that you write a cross-platform scalar implementation in standard C, and then for each target you care about, you write a platform-specific vectorized implementation. Then, through some combination of compile-time feature detection, build flags, and runtime feature detection, you select which implementation to use.
(The runtime part comes in because you may want a single amd64 build of your program that uses AVX-512 if it's available but falls back to AVX2 and/or SSE if it's not.)
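Here is a hedged sketch of that runtime-dispatch part, using GCC/Clang's __builtin_cpu_supports and a per-function target attribute so everything fits in one x86 translation unit. It is shown with AVX2 vs. scalar for brevity; the same pattern extends to an AVX-512 variant. The kernel names are illustrative, not from any particular project:

```c
#include <stddef.h>
#include <stdio.h>
#include <immintrin.h>

/* Portable scalar baseline. */
static void add_scalar(float *dst, const float *a, const float *b, size_t n) {
    for (size_t i = 0; i < n; i++) dst[i] = a[i] + b[i];
}

/* AVX2 version. The per-function target attribute (a GCC/Clang extension)
 * lets us compile just this function for AVX2 without global build flags. */
__attribute__((target("avx2")))
static void add_avx2(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; i++) dst[i] = a[i] + b[i];        /* scalar tail */
}

typedef void (*add_fn)(float *, const float *, const float *, size_t);

/* Resolve once, based on what the CPU running this binary supports. */
static add_fn pick_add(void) {
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx2"))
        return add_avx2;
    return add_scalar;
}

int main(void) {
    float a[16], b[16], dst[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * (float)i; }
    pick_add()(dst, a, b, 16);
    printf("dst[15] = %.1f\n", dst[15]);            /* expect 45.0 */
    return 0;
}
```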
For any code that's meant to last a bit more than a year, I would say that should also include runtime benchmarking. CPUs change, compilers change. The hand-written assembly might be faster today, but might be sub-optimal in the future.
The assumption that vectorized code is faster than scalar code is a pretty universally safe assumption (assuming the algorithm lends itself to vectorization of course). I'm not talking about runtime selection of hand-written asm compared to compiler-generated code, but rather runtime selection of vector vs scalar.
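For completeness, a sketch of what runtime "pick the fastest" selection could look like. The two candidate kernels here (plain scalar and a 4x-unrolled stand-in for the vector version) are made up, and a real version would also verify that all candidates produce the same output and average several timing runs:

```c
#define _POSIX_C_SOURCE 199309L
#include <stddef.h>
#include <stdio.h>
#include <time.h>

typedef void (*add_fn)(float *, const float *, const float *, size_t);

static void add_scalar(float *dst, const float *a, const float *b, size_t n) {
    for (size_t i = 0; i < n; i++) dst[i] = a[i] + b[i];
}

/* Stand-in for a vectorized implementation. */
static void add_unrolled(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        dst[i]     = a[i]     + b[i];
        dst[i + 1] = a[i + 1] + b[i + 1];
        dst[i + 2] = a[i + 2] + b[i + 2];
        dst[i + 3] = a[i + 3] + b[i + 3];
    }
    for (; i < n; i++) dst[i] = a[i] + b[i];
}

/* Time `reps` calls of one candidate with a monotonic clock. */
static double time_one(add_fn fn, float *dst, const float *a,
                       const float *b, size_t n, int reps) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < reps; r++)
        fn(dst, a, b, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (double)(t1.tv_sec - t0.tv_sec)
         + (double)(t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

/* Benchmark every candidate on warm buffers and keep the fastest. */
static add_fn pick_fastest(add_fn *cands, size_t count, size_t n) {
    static float a[4096], b[4096], dst[4096];
    for (size_t i = 0; i < n; i++) { a[i] = (float)i; b[i] = 1.0f; }
    add_fn best = cands[0];
    double best_t = time_one(best, dst, a, b, n, 1000);
    for (size_t i = 1; i < count; i++) {
        double t = time_one(cands[i], dst, a, b, n, 1000);
        if (t < best_t) { best_t = t; best = cands[i]; }
    }
    return best;
}

int main(void) {
    add_fn cands[] = { add_scalar, add_unrolled };
    add_fn add = pick_fastest(cands, 2, 4096);
    printf("picked %s\n", add == add_scalar ? "add_scalar" : "add_unrolled");
    return 0;
}
```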
I am not sure I understand. You have a country on Earth where the vast majority of people are of one kind and happily co-exist with 56 minorities.
Can you point me to another country (the size does not even matter) where this happens?
> Soon after the establishment of the People's Republic of China, 39 ethnic groups were recognized by the first national census in 1954. This further increased to 54 by the second national census in 1964, with the Lhoba group added in 1965. The last change was the addition of the Jino people in 1979, bringing the number of recognized ethnic groups to the current 56. The following are the 56 ethnic groups (listed by population) officially recognized by the People's Republic of China.
How about Indonesia, South Africa, or Brazil? All are incredibly diverse and have pretty good inter-ethno/religious relations.
And I would not say that Tibetans or natives of Xinjiang are happy with their current situation. The central government has actively promoted the domination of Han people in these regions for more than 30 years. It has had a real cultural impact.
Another thing from GP: 8% of 1.5 billion is a huge number of people, about 120 million!
As I understand it, Japan and Korea are the most ethnically homogeneous countries in the world.
Sure, and if they made nuclear devices they should sell those without restriction too?
Never mind the race to AGI; these cards can power things like targeting systems, drone swarms, and other weapon systems. In any military conflict that is increasingly fought by non-human devices, it is the speed and quality of AI that will matter most. That is part software and part hardware.
Perhaps more importantly, though, our development of AI could be significantly enhanced by cheaper access to NVIDIA's devices. Which would be the case if we didn't have to compete as much with China, who have positioned themselves as adversaries, to buy them.