Yes, it's still technically possible to write an iOS app almost entirely in plain C in 2025, but with caveats. You'll need to wrap your C code in a minimal Objective-C or Swift layer to satisfy UIKit's requirements and Xcode's project structure. Apple's SDKs are built around Obj-C/Swift, so UI, app lifecycle, and event handling all need some glue code.
The CBasediOSApp repo you linked is still a good starting point, but expect to adapt it for modern toolchains and signing requirements.
Realistically, you'd write most logic in C (e.g. a game engine, parser, or core library) and interface with minimal Obj-C or Swift for the UI.
Anyone trying it in 2025 will likely be doing it for fun, education, or embedded-style constraints — not App Store production unless there’s a really good reason.
Great question; I've thought about this too. Technically, large-scale P2P streaming is possible, especially with today's upload speeds and a tree-shaped overlay, where each peer re-uploads to a few children so the path from source to viewer grows only as O(log N). But there are big hurdles:
Churn & reliability: Peers come and go, making stable streaming tricky.
Latency: BitTorrent-style protocols optimize for throughput (rarest-first piece selection), not the in-order, low-latency delivery live streaming needs.
Incentives: Without rewards, too many users just leech.
WebRTC: mesh topologies hit their limits after a handful of peers, and real deployments often fall back on centralized TURN relays.
Legal risks: Media companies don’t play nice with decentralized distribution.
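To put rough numbers on the tree idea: with a re-upload fanout of f, every viewer sits at most ⌈log_f N⌉ hops from the source. A quick sketch (fanout and peer counts are just illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// treeDepth returns the number of relay hops needed to reach n peers
// when every peer re-uploads the stream to `fanout` children.
func treeDepth(n, fanout int) int {
	if n <= 1 {
		return 0
	}
	return int(math.Ceil(math.Log(float64(n)) / math.Log(float64(fanout))))
}

func main() {
	// Even a million viewers sit only a handful of hops from the source.
	for _, n := range []int{1000, 1000000} {
		fmt.Printf("peers=%d fanout=4 depth=%d\n", n, treeDepth(n, 4))
	}
}
```

The catch is that every hop adds buffering delay and one more peer whose departure breaks the whole subtree beneath it, which is exactly the churn problem above.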
Bram Cohen tried with BitTorrent Live, but it fizzled out. Would love to see someone revive this with modern tech — still feels like untapped potential.
I've worked on a similar setup in Go — managing a pool of "always-on" containers for isolated task execution via docker exec. The official Docker SDK is solid but pretty low-level, so I get the desire for something more ergonomic.
In my experience, there aren't many off-the-shelf Go libraries that give you full orchestration primitives (load balancing, health checks, scheduling) out of the box like you'd find in Nomad or K8s. But here are a few options worth exploring:
gofiber/fiber – an HTTP framework rather than anything container-specific, but useful as the API/control layer if you're rolling your own orchestration logic.
dockertest – primarily for testing, but you can adapt its logic for simplified lifecycle management.
hashicorp/go-plugin – good for decoupling workloads, especially if you're considering container-based isolation per plugin/command.
That said, most teams I’ve seen build their own lightweight layer on top of the Docker SDK with Redis or internal queues for tracking load/health. Curious if you're doing multi-host management or keeping this local?
Also, make sure to aggressively timeout and clean up zombie exec sessions — they sneak up fast when you're doing docker exec a lot.
Would love to hear more if you open source anything from this!
Our use case is executing test scripts in a sandbox. It's a multi-host, multi-region setup, and we might run millions of test scripts per day.
One of our engineers found https://testcontainers.com. It looks interesting, but it doesn't keep containers alive; instead, it starts and removes a container for each test. We might need to implement a lock mechanism to cap the number of containers running at a time, and I don't know whether it fits highly scalable test workloads.
That’s a super exciting use case — running millions of test scripts across a multi-host, multi-region setup is no small feat. You're spot on about Testcontainers — it's elegant for one-off, isolated runs (like in CI), but when you're pushing at scale, the overhead of spinning up and tearing down containers for every single test can start to hurt.
In high-throughput environments, most scalable setups I’ve seen shift towards a pre-warmed pool of sandbox containers — essentially keeping a fleet of "hot" containers alive and routing tasks into them via docker exec. You lose a bit of isolation granularity but gain massively in performance.
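A minimal sketch of that hot-pool pattern, where a buffered channel doubles as the free list and the "max N concurrent" lock (container IDs and pool size are hypothetical, and the actual docker exec call is elided):

```go
package main

import (
	"fmt"
)

// Pool hands out pre-warmed container IDs; Checkout blocks when every
// container is busy, which doubles as the max-concurrency cap.
type Pool struct {
	idle chan string // buffered channel = free list + semaphore
}

func NewPool(ids []string) *Pool {
	p := &Pool{idle: make(chan string, len(ids))}
	for _, id := range ids {
		p.idle <- id
	}
	return p
}

// Checkout reserves a warm container, blocking until one frees up.
func (p *Pool) Checkout() string { return <-p.idle }

// Checkin returns a container to the free list after its task finishes.
func (p *Pool) Checkin(id string) { p.idle <- id }

func main() {
	pool := NewPool([]string{"c1", "c2"}) // pre-warmed container IDs
	id := pool.Checkout()
	fmt.Println("running task in", id) // here you'd issue docker exec
	pool.Checkin(id)
}
```

The nice property is that backpressure comes for free: when all containers are busy, new tasks simply block on Checkout instead of overloading a host.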
You could even layer in a custom scheduler (Redis- or NATS-backed maybe?) that tracks container load and availability across hosts. Pair that with a smart TTL+health checker, and you can recycle containers efficiently without zombie buildup.
Also — curious if you've explored running lighter-weight VMs (like Firecracker or Kata Containers) instead of full Docker containers? They can offer tighter isolation with decent spin-up times, and could be a better fit for multi-tenant test runs at this scale.
Would love to nerd out more on this — are you planning to open source anything from your infra? Or even just blog about the journey? I feel like this would resonate with a lot of folks in the testing/devops space.
1. At sitebot, we focus on AI-driven solutions for businesses, but these bookmarklets remind me how simple automation can make life easier. Has anyone experimented with AI-powered bookmarklets, like quick content summarization or auto-filling forms based on context?
2. As someone who works across frontend and backend development, I still find bookmarklets incredibly useful for quick debugging and automation. While browser restrictions have tightened, tools like Tampermonkey or custom browser extensions can still help power users retain control. What are some creative ways you've found to keep bookmarklets alive?
If you’re aiming to be a "rogue" OSS contributor—essentially an independent developer thriving off open-source work—you’ll need a mix of strategy, skill, and visibility. Here’s how you can make it work:
Bounties & Sponsored Issues – Platforms like GitHub Sponsors, Bountysource, and Gitcoin offer paid opportunities to fix issues, build features, or improve security in open-source projects. Prioritize projects with active communities and funding.
Donations & Sponsorships – Set up GitHub Sponsors, Open Collective, or Patreon. This requires consistently contributing to high-impact projects and building a personal brand around your work.
Hackathons & Grants – Many OSS-focused hackathons and grants (like those from Mozilla, Linux Foundation, or NLnet) provide funding for impactful contributions. Target projects that align with grant programs.
Freelance Consulting & Custom Features – Companies often use open-source software but need custom solutions. If you’re an active contributor, businesses may pay for enhancements, bug fixes, or integration support.
Crowdfunding Specific OSS Projects – If you build something valuable (like a plugin, CLI tool, or framework extension), you can crowdfund its development via Kickstarter or similar platforms.
Merge-First Mindset – The key to sustaining this lifestyle is ensuring your PRs actually get merged. Engage with maintainers, follow their contribution guidelines, and build a reputation for delivering high-quality, non-disruptive code.
Content & Community Engagement – Write blogs, create tutorials, or host livestreams showcasing your contributions. Visibility brings opportunities.
It’s not the easiest path, but if you have the skills and discipline, you can make a living while staying independent.
At Sitebot, we deploy using AWS Lightsail for hosting, with GitHub Actions handling our CI/CD pipeline. For the database, we use MongoDB, and for file storage, we rely on AWS S3. This setup provides us with a balance of simplicity, scalability, and cost-effectiveness.
Would love to hear how others are handling their deployments!
In your case, you could start with an AI tool to streamline your recruitment process. For example:
Build a simple AI-powered resume screener to filter candidates based on key skills or an internal chatbot to answer common applicant questions.
Once you’ve tested and refined it within your own business, you can expand the tool’s capabilities or offer it to other companies facing similar recruitment challenges.
Start small, solve a real problem, and grow from there.
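As a toy illustration of that screener idea, here's a deliberately naive keyword-match baseline in Go; the skills and resume text are made up, and in practice you'd layer an LLM or embedding model on top rather than rely on bare substring checks:

```go
package main

import (
	"fmt"
	"strings"
)

// scoreResume counts how many required skills appear in a resume — a
// naive baseline (substring matching can false-positive, e.g. "go" in
// "Django") that you'd later replace with an LLM or embedding matcher.
func scoreResume(resume string, skills []string) int {
	text := strings.ToLower(resume)
	score := 0
	for _, s := range skills {
		if strings.Contains(text, strings.ToLower(s)) {
			score++
		}
	}
	return score
}

func main() {
	skills := []string{"Go", "Docker", "PostgreSQL"}
	resume := "5 years of Go and Docker experience, some MySQL."
	fmt.Printf("matched %d of %d skills\n", scoreResume(resume, skills), len(skills))
}
```

Even a baseline like this gives you a ranking signal to test the workflow end to end before spending on model calls.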
We're a startup that publishes high-quality blog posts almost every day on our website, and some of them have received significant views and driven real traffic to our site. This strategy has proven to be an effective way to attract attention and engage potential users.