
> The way it stores and loads configs, jobs, logs, build workspaces, etc is all 1990s tech.

I agree with a good chunk of what you said, but.

It's files, my friend, files. Tech from the 1970s.

Nothing inherently wrong with that: they're easy to inspect, easy to repair if needed, you can use standard tools on them, etc.

The design of the file structure is maybe the issue, because it makes high availability complicated, but just using files is not necessarily a bad idea.




Correct: in some circumstances, files are great. They suck for Jenkins.

What are they? Lots of different things: build logs, job configurations, server configuration, secrets, cached unpacked plugins, build workspaces, etc. Some of those you want in S3, some you want in a database, some you want on fast ephemeral storage, some you want in a credential store. Good luck with that; only the secrets are doable with plugins.

Where are they? Sitting on some EC2 instance's ephemeral or EBS storage. But you don't want them there, so now you have to bolt on a bunch of crappy wrappers to periodically move them somewhere else. (Even if you do JCasC/JobDSL/Jenkinsfiles for version-controlled configuration and secrets, you may still want to back up your build artifacts and logs.)
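
For anyone who hasn't used it, this is roughly what the JCasC side looks like; a minimal jenkins.yaml sketch from memory (the ${...} values are injected from the environment, not stored on disk), which covers server config and secrets but, as above, not artifacts or logs:

  jenkins:
    systemMessage: "Managed by JCasC; don't edit through the UI"
    numExecutors: 0
    securityRealm:
      local:
        allowsSignup: false
        users:
          - id: admin
            password: ${ADMIN_PASSWORD}
  credentials:
    system:
      domainCredentials:
        - credentials:
            - string:
                scope: GLOBAL
                id: deploy-token
                secret: ${DEPLOY_TOKEN}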

And because they're files, it doesn't scale. Using EBS? Only one host can mount the volume (barring Multi-Attach on Nitro instances), so good luck scaling one box's workspace filesystem past one gigantic EBS volume, or doing master-master. And you have to clean up the filesystem every time a plugin or core version changes, or the cached version on disk will override your container/host's newer version. Using EFS? Network filesystems suck (scalability + reliability + security + performance woes).



