
Good response. This is not directly related, but wouldn't it be nice to have some sort of convention where install scripts declare the access that they need and people could allow it (or not)? Something like the Unix permission system, but more fine-grained. E.g. perhaps a chroot jail with symlinks to the places that you want to give the script access to. Indeed, the first run of the script could just generate the commands you'd need to execute before it actually does anything. If you don't like the permissions it wants, you just exit the jail, kill the parent process, and move on with your life.
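Something like this, as a rough sketch (the manifest of paths is made up, and it uses bind mounts rather than plain symlinks, since a symlink created inside the jail would just resolve relative to the jail):

    #!/usr/bin/env python3
    """Hypothetical convention: the installer declares the paths it needs and,
    on a dry run, only prints the commands that would grant that access.
    Nothing runs until the user has reviewed them (or walked away)."""
    import sys

    JAIL = "/tmp/install-jail"            # throwaway chroot for this installer

    # Access the installer declares up front (illustrative paths only).
    REQUIRED_PATHS = [
        "/usr/local/bin",                 # to link the installed binary
        "/home/user/.config/mytool",      # to write its configuration
    ]

    def print_grant_commands() -> None:
        """Dry run: show what you'd have to set up before letting it loose."""
        print(f"mkdir -p {JAIL}")
        for path in REQUIRED_PATHS:
            # Bind mounts rather than symlinks: a symlink made inside the jail
            # resolves relative to the jail, so it can't reach the real path.
            print(f"mkdir -p {JAIL}{path}")
            print(f"mount --bind {path} {JAIL}{path}")
        print(f"chroot {JAIL} /install.py --run")

    def run_install() -> None:
        """Second phase, run inside the jail with only the granted paths visible."""
        ...  # the actual installation steps would go here

    if __name__ == "__main__":
        run_install() if "--run" in sys.argv else print_grant_commands()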



The UNIX permission system is ludicrously primitive for this day and age. Check out OLPC's Bitfrost for a fresher take on the matter:

http://wiki.laptop.org/go/OLPC_Bitfrost#Foreword

Unfortunately, it doesn't look like it went anywhere even in the OLPC world... I'm not very familiar with OLPC, but the fact it carries a 2007 timestamp isn't very encouraging. I wonder if it fell victim to the "sugar" watering down of the project :-(


Which is why Linux, the BSDs and most Unix versions have a wide range of more restrictive access-control methods, such as various forms of containers, jails and VMs...

Personally I run pretty much everything in containers. Not always segregated from each other, but certainly segregated from most of the data I care about. All larger projects get their own containers or VMs, and I have several "scratch" VMs and containers of various types that I don't care if I lose.

Spawning a VirtualBox VM or LXC container (or equivalent) is so quick and painless today that there are few excuses for running all kinds of stuff unrestricted.
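For example, a throwaway container for building something I don't trust looks roughly like this (a sketch using LXD's lxc client; the image alias and container name are just examples):

    #!/usr/bin/env python3
    """Throwaway container: launch, run the untrusted step inside it, destroy.
    Assumes the LXD 'lxc' client is installed and initialised."""
    import subprocess

    NAME = "scratch"
    IMAGE = "images:debian/12"    # any image your LXD remote provides

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["lxc", "launch", IMAGE, NAME])              # create and start
    try:
        # The untrusted build/install happens in here, not on the host.
        run(["lxc", "exec", NAME, "--", "uname", "-a"])
    finally:
        run(["lxc", "delete", NAME, "--force"])      # and throw it away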


I don't really understand how that could possibly work. If you install tools on private machines then they never talk to each other.


Of course they can: via network interfaces (which can be firewalled), via shared directories on shared filesystems, etc. With e.g. LXC, the extent of isolation can be controlled at a very detailed level. In practice, though, very little needs more than a network connection to interact with other software, and very few applications actually have any business interacting with the other applications I run other than in very specific circumstances.
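As a concrete example, with LXD you can give a container a (firewall-able) NIC plus exactly one shared host directory and nothing else (a sketch; the container name, bridge and paths are placeholders):

    #!/usr/bin/env python3
    """Dialling isolation per container with LXD: a network interface plus one
    read-only shared directory. Names and paths are placeholders."""
    import subprocess

    def lxc(*args: str) -> None:
        subprocess.run(["lxc", *args], check=True)

    # A NIC on the host bridge -- its traffic can be firewalled like any other.
    lxc("config", "device", "add", "project", "eth0",
        "nic", "nictype=bridged", "parent=lxdbr0")

    # Exactly one host directory, read-only; everything else stays invisible.
    lxc("config", "device", "add", "project", "shared",
        "disk", "source=/srv/shared", "path=/mnt/shared", "readonly=true")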

This is not to say that I run everything isolated from everything else. I have an "unsafe" VM, for example, where I compile and mess around with a lot of public code whose security I don't want to evaluate. To get further into my network from that one still takes a little bit of work. I also group various things together based on task.

But random code I don't have a reason to trust won't go straight into my normal user account on my laptop.

Note that a "reason to trust" can be as simple as "has been signed by the Debian packagers" for some systems. It's a trade off.



