Tor is a great sysadmin tool (2020) (jamieweb.net)
420 points by azalemeth on Aug 31, 2021 | hide | past | favorite | 118 comments



In many ways I think this blog post really makes quite compelling arguments and honestly opened my eyes a bit.

One (perhaps mad) idea for more secure access to a machine deep behind many levels of NAT where you, the sysadmin, have lawful access but are fed up with having to have a 12 KB ~/.ssh/config file in order to access it because of your university's overbearing IT department^W^W^W^W network topology, would be to "just" run an onionsite with onion services authentication [1], preventing it being publicly accessed without the pre-shared key. If your onion service just redirects to ssh (presumably with certificate-only auth) I can't help but think that this is almost an example of security by obscurity done right.

[1] https://support.torproject.org/en-US/onionservices/client-au...
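For the curious, a minimal sketch of what that setup might look like (all paths and key material here are placeholders; the exact .auth file formats are worth double-checking against the Tor manual):

```
# --- Server torrc: v3 onion service forwarding to the local sshd ---
HiddenServiceDir /var/lib/tor/ssh_onion/
HiddenServiceVersion 3
HiddenServicePort 22 127.0.0.1:22
# Client authorization: drop a <name>.auth file into
# /var/lib/tor/ssh_onion/authorized_clients/ containing
#   descriptor:x25519:<client's base32 public key>

# --- Client torrc ---
ClientOnionAuthDir /var/lib/tor/onion_auth
# That directory holds <onion-address>.auth_private files of the form
#   <56-char onion address>:descriptor:x25519:<base32 private key>

# --- Client ~/.ssh/config: hop through the local Tor SOCKS port ---
Host lab
    HostName <56-char onion address>.onion
    ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p
```

With key-only auth on sshd itself, the onion-service layer adds a pre-shared key in front of the SSH handshake rather than replacing it.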


Think from the beginning what will be the end: "I thought your security policy was too overbearing, so I used tor."

IT departments make their choices for reasons. The key is to help them understand your use-case, and they'll probably help you through the problem in a way that might limit collateral damage.

Source: have seen firewall bypasses (with a pre-shared key) get leveraged as a way to hack an entire university lab/department.


IT departments make choices that benefit their own needs and their own convenience, often forgetting that the entire point of their department is to make the rest of the organization more effective. Sadly, it often goes the other way.

Shadow IT is a signal that the IT organization is doing things wrong. People use shadow IT because the IT department is not doing its job properly: serving its customer base based on the needs they show via their actions.

For example, if you see someone like azalemeth do the things he does, it shows that the IT department needs to become responsive and cooperative enough not to push him to do such things in the first place. You notice he tried to do things the IT department's standard way first, and spent considerable effort before he started his shadow-IT method.


“Policy made my job slightly harder so because I know better than the netsec team who clearly has or should have unlimited time and resources to help me I will do what I want anyways, and put the organization at risk.”

Also known as “how to make the netsec team hate you 101”

I agree with you about why shadow IT exists, but most IT departments are spread so thin that expecting them to be super responsive to anything but the most critical business projects is often totally unreasonable.

Then they have to waste even more time hunting down idiots setting up Tor nodes on their internal networks.


If the IT department can't do its job because of resource constraints, likely the whole organization is a failure.

If you find something like that, run…

If you can't run, do whatever makes your life better. The org is doomed anyway.


A recent example from me -- the VPN of one client of mine suddenly refused to connect one day for no discernible reason: they had made a configuration change to their Cisco VPN "concentrator" without documenting or announcing it. Cisco AnyConnect GUI clients were fine; some magic happened behind the scenes to push the configuration change and, in typical Cisco style, avoid saying what exactly it was.

I had some esoteric monitoring machine that couldn't run AnyConnect (for reasons I forget, but almost certainly relating to it not having a Linux arm64 client at that time) and which, one day, suddenly couldn't connect with openconnect (which previously had worked perfectly). I asked what the configuration change was, to prevent me having to reverse-engineer it. The response was "if you want to use unsupported clients we cannot offer any assistance [...] we are currently operating two heads down and we simply do not have the resources [...]." It took me about four or five hours to work out what change they had made, change the (122-line-long) configuration file for openconnect, and then, boom, everything good again. A friendly "Hey, sorry about that -- we just $FLICKED_THIS_SWITCH because $REASON" would have been massively helpful and arguably takes fewer words than their original response. (Edit: For context, approximately 10-20k people use that specific VPN. And their team is such that losing two members of staff temporarily is a major inconvenience.)

I totally understand it from the other side. IT departments have everything from state-sponsored ransomware attacks to important people loudly going "why doesn't the printer work any more". It's a different set of skills from being a C-junkie, a programming wizard, or, in my case, a young academic with one big grant and three PhD students trying to do work, publish work, and get money to do more work, where "work" is poorly defined and highly flexible. Over time I've noticed universities get far more corporate, and many academics -- myself included -- absolutely hate this. The "we control the network, bug off" may be technically true, but at times it does feel a bit like an imposition on academic freedom of some sort, to be honest. At the very least, it's a nice little "dog egg" to find added to the pile of administrative crap to do for that day.


What you’ve just described is most post secondary institutions, public utilities, government, etc.


I'm working in an organization where we have one laptop from work, and another laptop to do work on. Because the one-size-fits-all IT policy doesn't work for our org, but it's forced on us because of the IP-security needs of another, parallel org.

We went from an organization moving towards BYOD to, now, the exact opposite.


> because I know better than the netsec team

For anyone who's been around the block a few times, there's a good chance this is true.

Most organizations' netsec teams are too busy throwing money at vendors to keep up.


“Because I think I might know better I will act in a disrespectful way, and make someone else’s job harder instead of working with them to solve the problem”

You’re not the one whose phone is going to ring at 3am on Saturday when that Tor node gets compromised. You’re not the one who has to manage the security incident. You’re not the one who has to explain why your security controls and policy did not prevent this from happening. Nor are you the one who has to clean up the damage if something goes badly.

I also think you’re vastly overestimating the average developer’s awareness of security issues. Perhaps you are very well versed in this topic, but many developers are utterly clueless, even when it comes to basic application security practices.


I'm curious how you think an SSH service exposed over TOR is going to create a security issue? SSH is exposed all over the public internet.


It bypasses all proxies and interception, and hides all of the traffic contained in the tunnel. This means no traffic logging of the tunneled traffic, no IPS/IDS in front of the SSH service, and no visibility into the SSH traffic itself. If the box with the SSH service isn’t in a DMZ it also compromises network segmentation.

The problem isn’t SSH over TOR being insecure. It is sidestepping all of the security controls in place at your org and not talking to the netsec folks first.

Honestly I would be amazed if any competent netsec folks would even allow TOR outbound by default. I certainly wouldn’t allow it by default in an enterprise environment.


The idea of allowing any kind of inbound connection into a secured network (other than to/via its DMZs) is anathema.

I don't even disagree with the logic, but the BigCorp Infosec Team heavy-handed approach to working with developers invites the developers to produce creative circumventions.


I tried doing that, and largely succeeded, but the specific area of the university in question will not have a bastion SSH host anywhere on their network. They will not allow SSH access in at all. They will however allow SSH access to other parts of the university, with different people in charge, which explicitly do allow an SSH bastion host to exist (and provide several for that purpose). So, the net result is that they've effectively out-sourced the control and responsibility of their environment to someone else.

Normally this is fine, but my job involves programming and controlling large, expensive, and strangely fragile lab equipment. There's a resilience problem, and it's got to the point where others have suggested putting a GSM modem on a pci-e card inside some of the boxes in question, as the relevant IT department decides on a whim to block ports with no warning or justification. Some manufacturers of the devices in question do this as standard if you have a support contract. Trying to complain results in responses like "you have been used to doing things one way and this change now prevents you from working as before."

I completely accept that this is a political problem and best solved as one, but ultimately SSH is an industry standard for a reason -- it's secure, and it's flexible. The machines in question are valuable, prone to breaking in the middle of the night, and we are an international bunch who cannot always connect from a well-defined ipv4 address, or from the university's VPN. (The latter is blocked by the IT department automatically, as it has too large a pool of potential users). The thing I find most frustrating is that this sort of political decision creates days worth of work instantaneously, for little benefit. All of the actually confidential or sensitive information is held in a completely separate network at any rate...


To quote Dr. Manhattan, "Without condemning, or condoning, I understand".

I am in network security. I have stopped shadow IT, and been a part of it.

Your situation seems so ungodly stupid and anathema to the point of IT, that the remaining courses of action should be the following.

Thoroughly document via email your attempts at explaining your requirements to netsec, document their objections in writing, and do your best with what they provide you... and WHEN things catastrophically break, point the finger at them and thoroughly document how, if you had had the proper, industry-standard tooling, you could have prevented the loss of research/time/money.


THIS. Don't paper over the issues with shadow IT. Make them painfully obvious to the point where IT has to do something or answer to it. Otherwise it will not change.

I've given teams the option to turn off their pagers when this sort of thing happens with the justification that they can't fix it anyway. And then documented the crap out of why they can't fix it so when someone asks I can point to existing policy. It's very effective if done right.


"What did you accomplish during your time with X research group?"

"Nothing since all our equipment broke, but we documented how it was all IT's fault. You shoulda seen the looks on their faces when we called them out on it to the dean!"


It seems weird to dump on IT when they’re a department responding to the incentive structure they’re placed under, like everyone else. You going to the Dean/someone with actual authority to get top-down approval for IT to give you what you want is basically how IT operates in large orgs. I have nigh-infinite technical power, but in return I am bound politically by policies that I’m explicitly not allowed to have any authority over (i.e. I can’t approve my own policy exceptions). I want to give you literally anything you ask for. As long as my ass is covered, it literally doesn’t matter at all to me. When I worked Uni IT, if someone wanted something we couldn’t give them because of $dumb_reason, we weren’t in a position to have that fight with the higher-ups on their behalf. It doesn’t mean much coming from us, and since it’s not impacting our work, it falls on deaf ears.

From your tone you make it seem like you were proud to waste everyone’s time and money when one single meeting with the Dean and the CIO/Director of IT when the problem happened would have opened every door for you.


The parent comment was literally suggesting that GP should have allowed the equipment to fail so that IT could be blamed. I didn't get the impression GP's situation was the result of not bothering to sit down and talk to a higher-up.

Everyone else is responding to incentive structures too; it's no less legitimate for lab workers to circumvent IT due to their incentives than it is for IT workers to be unhelpful due to theirs.


This is a diverging motivations issue.

Many people are not in a stable career such that they can hang around and do upper management's job for them by "expensively failing so as to demonstrate IT's failures".

Academics and PhD students in particular live from grant to grant. They can't afford to waste grant money "to make a point that IT doesn't work." Reputations - and by extension careers - can be made and unmade with such stuff.

Aside, I think the academic life being so fragile is ALSO silly but that is another story.


I was assuming that the OP was someone who worked in a department IT role, that had to abide by more centralized IT security requirements. You're right that someone who is more transient or less full-time has less motivation to make a long-term point to the administration.


Quite. IT people in universities have jobs for life. The overwhelming majority of academics -- often including many professors -- do not.


You might try Zerotier or Tailscale running either natively or using an RPi as a bridge. Assuming your IT rules don't forbid it, both should be fairly resilient to simple/random port blocking. They're actually used by a lot of enterprises to provide secure p2p networks with automatic port punching and nat traversal.


The problem in my experience is not that the security policy is too overbearing, but rather that it is too rigid and designed around assumptions that are false. A common policy, for example, is that port 22 must always be closed. One can use hardware-backed two-factor authentication over SSH, and still the policy says the port must be closed, and that is that. Meanwhile, remote desktop with just a password is allowed, simply because the policy doesn't forbid it.

I have tried so many times to help people understand security and the purpose of a security policy when it is designed correctly, but it doesn't work. The policy exists so people don't need to think, not so that people understand why it exists and which use-cases should be given exceptions.


Oftentimes these policies are driven by industry compliance. Exceptions have to be documented and, depending on the compliance regime, may carry liability. Lastly, when exceptions are made, the user often doesn’t know what they signed up for, and ends up holding the bag for a breach.

It’s usually better not to make an exception.


> IT departments make their choices for reasons.

In a perfect world, yes. But I've worked with/at places where ineptitude is rampant, and any attempts of understanding their reasoning is seen as insubordination.


A simulated conversation with IT:

"Hey, IT department...I was wondering..."

"No."


Lucky me.

Our IT department goes out of their way to help us stay sane and productive:

- they're making sure most of us can continue to use our favourite Linux distro (I think Debian/Ubuntu, Fedora and Arch are supported)

- make sure VPN etc works on Linux even if it is not officially supported

- taking time to sit down and debug hard problems (weird issues with WSL2 on one particular Windows laptop) instead of just blaming us engineers


Especially when, since it's Tor, potential attackers cannot be traced


This makes no sense. Onion services don't hide the source, they hide the destination -- a destination that, in this threat model, you run. If the client connecting wants to hide their source, they can use Tor, a VPN, an existing botnet, etc.; whether you're running SSH over an onion service or with vanilla exposed IPs/DNS is immaterial.


This.


Yup, and it's easy to make server- and client-side tooling use Tor to make this mostly transparent. Latency/bandwidth isn't _that_ bad when communicating with an onion service. And it can be even faster if server anonymity isn't a goal (the server sets HiddenServiceSingleHopMode and HiddenServiceNonAnonymousMode, and creates an ephemeral onion service with NonAnonymous).
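For reference, the single-hop variant is a couple of torrc lines (a sketch only -- note that Tor refuses to start in this mode unless client functionality is disabled, and the service's location is no longer hidden):

```
# Non-anonymous, single-hop onion service: faster, but the server's
# IP is effectively exposed to the relays it connects to.
HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1
SOCKSPort 0   # required: this tor instance can no longer act as a client
HiddenServiceDir /var/lib/tor/fast_onion/
HiddenServicePort 22 127.0.0.1:22
```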

I use Tor plenty to self-host services from my house that are reachable anywhere (and often have a web interface I can access via Orbot). No hole-punching necessary.


Could you share more about your setup?


Sure. I wrote https://github.com/cretz/bine (though I admittedly don't work on it much these days). I just have a few-line daemon that starts an HTTP (or gRPC or whatever) server on an ephemeral onion service. Then I use that onion ID to access it (via TorBrowser or Orbot or a client built with the same library).


Thank you!


At our lab, the Tor traffic would be noticed by the cybersecurity group's IDS, and all traffic from your host would start dropping at the border so fast your head would spin. You'd get an unpleasant phone call or a visit to your office, and be warned never to try sidestepping the bastion SSH hosts that log all the things ever again.


Obviously, you should plan around this by gathering all the MAC addresses of every machine in the office, and then have your machine spoof through them in rotation. /s


It makes me sad every time I think about it, but Aaron Swartz did this during his saga. Well, sort of: he incremented the MAC address by 1.

Point being, it's not foolproof. If some clever undergrad is thinking about dodging the suits, win by fooling them, not by fighting them.

If you do insist on fighting, though, start at https://www.whonix.org/wiki/Mental_Model and then read the entire Whonix wiki https://www.whonix.org/wiki/Documentation. It's what I used when I was serious about dodging the cartels, and that knowledge will protect you as much as anything will.

(You'll hopefully conclude that the protection is too brittle to risk your life, as I did.)


Building a new computer. I want to be able to trust it 100%, for at least a moment. I can't figure out how to "buy" a trusted copy of any Linux, and I don't have any machines I trust 100% (who does?), so I can't burn it. The current plan is to buy a Chromebook solely for the purpose of downloading and burning Ubuntu. Alternatively: buy MS Windows, install it on the new machine, burn, and then replace.

But this mental exercise has convinced me that security is almost impossible in this day and age.


One thing that helps a lot in this situation is to plan based on threat model. There’s no such thing as 100% trust, but you can have a computer which is safe for e.g. <thing>. It’s pretty crucial to pick one or two specific <thing>s and focus only on those.

If you just want to browse the darknet and see what the markets are like, for example, Tor on your current computers is fine.

If you’re wanting to make a purchase and you’re worried that your existing computers will narc on you, your plan of buy laptop + use ubuntu is A+.

If you want a computer to store information on, Edward Snowden style, you’ll need to take increasingly serious steps. Use tails as a baseline. (Note: I’ve been out of the game since 2016, so take this with salt.)

If you’re literally dodging the NSA, you need to put on a full face mask in winter, plan a route to a store you’ve recon’d, buy clothes with cash from goodwill, carry them in a trash bag as you walk out of your neighborhood, sneak in between two houses in the dead of night and put the outfit on + mask, walk to a taxi, have it take you near (but not to) the electronics store, buy yourself a burner phone + a few USB wifi dongles + anything else you want completely unlinkable to you (you’re on cameras), pay for all of it while getting some strange and worried looks that you’re going to rob something, then do the entire process in reverse until you’re back at your house with your untraceable electronics.

I did all that, and even then I was likely making some small mistake that would’ve blown everything.

Yet the city wide surveillance drones (god eye) will still have a nice little record of you that they can ID you with. And you sneaking around in the middle of the night putting on masks will probably get you in serious trouble. It never really occurs to you when you’re doing this sort of thing to stop and consider whether you’re just doing crazy things. (It’s tempting to believe the answer is “no,” especially the more you want to believe it.)

Suffice to say, threat modeling is key, and it’s worth thinking carefully about what exactly you want to accomplish.


> If you’re literally dodging the NSA, you need to...

Or just make friends with a developing-world advance-fee scammer, and then pay them to have one of their cash mules buy and send you (that is, an empty house somewhere in your city) a laptop.


That's an interesting idea I hadn't considered. But it involves a lot of the same problems: you need to get from where you are to where the laptop is, and back, without popping up on any sensors.

There are a lot of sensors. Gait detection + god eye is what convinced me this is probably impossible.

In my case, I was using NSA as a threat model for added security against the actual threat (cartels), so I wasn't as paranoid as I needed to be for NSA dodging. But in your case, you have quite a chicken-and-egg problem of getting that laptop to your doorstep in an untraceable way.

One optional step that I took, which is probably useless, is to live close to a wifi source that you can tap into from long range. I used a directional wifi antenna to a local restaurant. That way, if you do screw up and blow your opsec, it's traced to somewhere close but not equal to you.

(It's probably useless because once your physical location is traced, you're basically doomed – all they'd have to do is realize that someone's using the restaurant as a proxy. It's also quite unethical, since you're illegally using someone's equipment in a way that could very well land them in prison, depending on what you're doing. "Reasons not to fight the cartels" could fill up several notebooks, which is what ultimately persuaded me to stop trying.)


> you need to get from where you are to where the laptop is, and back, without popping up on any sensors.

Why? As far as They can tell, you're going to a house you've never been to before with no precedent for why, picking up an unlabelled brown box, and returning home.

The NSA would know you did that — but they wouldn't be able to connect it to a laptop in order to intercept/MITM it into being an insecure device (or to note down its MAC address for when you go online with it), since the "logistics chain" would be one entirely disconnected from you right until the moment you showed up at the house. To bug the laptop, they'd have to literally rip it out of your hands. Until the moment you pull into that house's driveway to pick up the parcel, they don't know it's your laptop (or what it is at all, really) so they don't know they should be trying to intercept it.

(And yes, They would likely have footage showing some other person dropping the unlabelled brown box off in the house's parking lot — but that would be a person who is not flagged as a Person of Interest in any NSA system, but rather some bright-eyed innocent college kid who had started a "new job" to "earn money fast" by "delivering parcels" just the day before. Parcels they pick up and re-box at AirBnB single-day rentals, rented just for the purpose of receiving that one parcel by the money-launderer.)

Replace "laptop" with "box full of dirty money" and this exact thing is done hundreds of times every day, with the NSA being able to do roughly zilch about it. "Cash mule" wouldn't exist as a profession if the transactions they facilitate could just be deanonymized+disintermediated in real time.


Or just buy a used computer at a pawn shop that doesn't keep track of the MAC addresses or serial numbers of its items, and pay in cash.


Most distributions provide signatures/checksums to verify the download, e.g. https://ubuntu.com/tutorials/how-to-verify-ubuntu#1-overview
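The checksum half of that tutorial boils down to hashing the ISO and comparing against the published digest; a small Python sketch (the GPG signature check on the SHA256SUMS file itself is the part that actually defends against tampering, since an attacker who can swap the ISO can usually swap an unsigned checksum file too):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so a multi-GB ISO needn't fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

def matches_published_digest(path: str, expected_hex: str) -> bool:
    # expected_hex is the entry for this ISO in the distro's SHA256SUMS file,
    # ideally verified first with: gpg --verify SHA256SUMS.gpg SHA256SUMS
    return sha256_of(path) == expected_hex.strip().lower()
```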


You might enjoy reading the "Cypherpunk Desert Bus" story by Peter Todd.


> you'd get an unpleasant phone call or visit to your office and be warned

Sometimes I wonder why IT departments, and security in general, get a bad rap; then I see things like this.


When someone just does whatever they feel like and violates policy, what do you think should happen?

Should someone send them a sternly worded email for them to ignore?

Or maybe they should be allowed to do whatever they want regardless of what risk it poses to the organization?


Why do people break rules? In that situation, I'd argue that education and understanding is the appropriate response -- for people on both sides of the table.


I can confirm, as someone who works in netsec, that this is exactly how it would have gone at my previous employer.

There is a tone of “I know what’s best and will do what I want” in this thread.

If you think that the way to get the IT department to implement something for you is to sidestep around policy instead of working with them, you will just piss them off.


The meek pluggable transport together with Azure's domain fronting service explicitly makes it look like it's connecting to an Azure instance over https. [1]

[1] https://gitlab.torproject.org/legacy/trac/-/wikis/doc/meek


Is tor traffic that easy to detect?


Relay and exit node IPs aren't private, so admins will often collect them and just block them en masse. This causes problems, because a lot of that same IP space will often be shared with things like pool.ntp.org nodes.


By default Tor doesn't make any attempt to disguise the fact it's Tor traffic. Bridges are unlisted relays, which should get around IP-based blocking, while Pluggable Transports are made to evade censorship-focused traffic analysis (i.e., they will prevent the adversary from detecting and blocking Tor connections as a whole, but not prevent a dedicated adversary from figuring out "has this client been using Tor?").
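For reference, pointing a client at a bridge is a few torrc lines (the bridge line below is a placeholder; real ones are issued by https://bridges.torproject.org):

```
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
# Placeholder bridge line; substitute one issued by BridgeDB:
Bridge obfs4 192.0.2.10:443 <FINGERPRINT> cert=<CERT> iat-mode=0
```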


Yes. It goes to a known tor node.


Not necessarily true. Tor bridges exist precisely for this reason: https://tb-manual.torproject.org/bridges/


You will like this one as well “SSL/SSH Multiplexer” http://www.rutschle.net/tech/sslh/


Fixed link: https://www.rutschle.net/tech/sslh/README.html

Note that while this is a handy tool, its use is apparent to anyone observing the connection.
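For reference, a typical sslh invocation looks roughly like this (flag spellings vary slightly across sslh versions; older releases use --ssl where newer ones use --tls):

```
# Listen on 443 and demultiplex by protocol probe:
sslh --user sslh --listen 0.0.0.0:443 \
     --ssh 127.0.0.1:22 \
     --tls 127.0.0.1:8443
```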


If your hard-to-reach server can connect to the internet (via a bunch of NATs and whatnot), you can just make it access your box of choice by e.g. Wireguard, or plain SSH with port-forwarding, or attach it as a node to your ZeroTier private network.

You only need a bunch of jump hosts if your target server has no Internet connectivity, and should not, in which case all these levels of bastions do make sense.
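A sketch of the plain-SSH variant, assuming a hypothetical reachable box jump.example.org (autossh or a systemd unit helps keep the tunnel alive):

```
# On the NATed server: keep a reverse tunnel open to the jump host.
ssh -f -N -R 2222:localhost:22 admin@jump.example.org

# Later, from anywhere that can reach the jump host:
ssh -J admin@jump.example.org -p 2222 user@localhost
```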


That requires having another publicly accessible box, or trusting ZeroTier though, doesn't it? The onion approach does not.


ZeroTier and such are OSS and have been independently security- and crypto-audited. I don't know whether Tailscale specifically has been audited, but since it's a more popular tool, I'd bet it has been too. They're really good tools and would probably be more reliable than Tor, tbh; I'd recommend looking into them.


> ZeroTier, Tailscale and such are OSS and have been independently security & crypto audited.

Both rely on their centralized coordinator servers which can mess with your routes (and thus your traffic) however they please.

ZeroTier has a published (but not OSS) coordinator, but their documentation pushes you towards their SaaS. Tailscale's coordinator is SaaS-only, unless something has changed very recently.


This is fair.

Their client node software is audited though, and the contents of your packets are not accessible to the router. This is why the amount of the possible meddling is limited to a DoS, AFAICT.

Who audits the Tor nodes that do onion routing is anyone's guess; I suppose ZeroTier is no worse than them.


> Their client node software is audited though, and the contents of your packets are not accessible to the router. This is why the amount of the possible meddling is limited to a DoS, AFAICT.

Normally the coordinator just forwards the keys from your peers, and so doesn't see the contents (the traffic doesn't pass through it, and even if it did, it wouldn't have the key).

However, that assumes that the coordinator is being truthful with the network topology that it sends you. It could send you any topology that it wants to! This means that it could start MITMing whenever it wants to by telling you that $SERVER_IP's peer is now actually $COORDINATOR_KEY at $COORDINATOR_IP.

Theoretically you could defend against this by, say, running a cronjob that validates that the Wireguard keys are unchanged. But at that point you're not really gaining much compared to just using wg-quick.

Tor is different, because the .onion domain name inherently encodes the public key of the site you're connecting to. There is no way to change the key without also changing the URLs that people connect to!


ZeroTier ad-hoc networks are controllerless, though IPv6-only.

The client can be set to not allow routes/addresses from a controller.

The client and controller are licensed under the BSL.


Does this require addresses of nodes to be globally routable? (With such addresses you can as well connect directly.)


Ad-hoc networks don't seem particularly useful here. From their documentation:

> Keep in mind that these networks are public and anyone in the entire world can join them. Care must be taken to avoid exposing vulnerable services or sharing unwanted files or other resources.


you _could_ use your other device (the one you're connecting from) as the controller. whomst amongst us doesn't have a 3rd machine or VPS?


Your other device doesn't have a public IP address either.


Doesn't need one!


For that use case why not just use Wireguard?


Agree. Thats pretty interesting.

I use an SSH session and SOCKS5 proxy on a VPS provider for almost all of those other circumstances. Including checking external access etc.

But the last one is a solid use case.


I recently had to do some basic sysadmin stuff over tor and I disagree with OP.

Two things failed miserably: fetching a file that was just shy of 5 MB, and a reverse SSH tunnel.

The SSH tunnel was unusable, it would only last for minutes at the most. I wish I could use mosh but that requires UDP.

The file transfer was actually done with curl and the file was often incomplete.

This was all done within Europe where we have the highest concentration of tor nodes.

So no, I don't think tor is appropriate for sysadmin tasks.


> This was all done within Europe where we have the highest concentration of tor nodes.

So Tor nodes take locality into account? Although that would improve speeds, it seems like an information leak.


Not sure; just an educated guess, but peering is best in that region, so there's a large selection of well-peered nodes. No need to use a node outside of Europe.


Out of curiosity: have you set up your onion service in single-hop/non-anonymous mode as suggested in the article?

I've been using Tor for shell access only and it has worked reasonably well for me, but I haven't tried this mode and wonder if your issues persist when it is used.


No, I didn't know you could do that. But anonymity was a requirement in my use case anyway.


Several years ago I used a Tor Hidden Service in a professional capacity to expose an application from a Wireless network with properties that we wouldn't know ahead of time.

Worked like a charm, and no regrets. My favorite part was telling my employer "We're using TOR for this" and watching the eyebrows go up.


The article didn't mention another nice trick: Tor is also a great tool for accessing IPv4 sites in an IPv6-only network and vice versa.
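
If tor is already running locally, a rough sketch (made-up hostname) is just:

```shell
# torsocks sends the TCP connection through the local tor client, so the
# exit node's connectivity is used instead of your own v4/v6 stack
torsocks curl https://ipv4-only.example.com/
```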


Being a small cog, but using clever tricks to get your job done is not solving the problem.

An organisation that prevents itself from acting rationally is an organisation that should die Schumpeter-style. Please don't prevent it.


Also circumventing this sort of thing in many orgs is a first class ticket to finding a new job. Friend of mine did that, they walked him to the curb with his cardboard box that day. His sin? Turned off virus scanning because it was taking 4 hours to do a 20 min build.


To be honest, if I were in that situation I'd be thinking something along the lines of "well, that was a dodged bullet".


The organization did him a favor. Many other, far better-paying companies would respond to that by working with the developer to figure out a system that makes both sides happy, or by silently ignoring it until they find a better solution. Or just by talking to the person and asking them to stop, rather than firing them.


Took him nearly a year to find a job and had to go on food stamps. I do not think he saw it that way. If you are in some of these smaller markets it can take time to find a job. Especially if you do not live in the area.


I use similar "clever tricks", albeit with SSH and socks to do the same type of testing.

DNS can be funky; it's useful to test resolution externally and internally.

Traffic can be funky when routed; it's useful to troubleshoot sites through a proxy here and there, as there have been times when something works internally and is broken externally (often security appliances are inline that may need debugging).

Working in IT infra/ops means it's our job to use some of these tools to troubleshoot in these ways.


I'm not seeing where this relates to organizational dysfunction. Using an external point to test a system is a standard practice.

I'm also a little confused because preventing someone from using their abilities to problem solve would be a cause of dysfunction -- a seemingly avoidable one.


ngrok.com allows some of these at full speed (or at least much better speed; I haven't benchmarked it), and is mostly free (a paid plan is required for custom subdomains). Sharing this for those still unaware of it; it's a great service.


Or better yet, use Cloudflare Tunnel and set up an actual permanent tunnel with custom subdomain support. If you want it to be a temporary one, it supports that too. For FREE.


Is that part of Cloudflare Teams? No offense to Cloudflare, but their pricing is really unclear. I have an account and I use them for a lot, but they have 3 different "plans" and then they have various ad-hoc products. Tunnel just says "view in dashboard." [0] If I click on that link while logged in, I'm taken to my dashboard with no indication of how to use Tunnel or anything. The plans page [1] indicates that it's part of argo smart routing. If I click on "activate argo" it actually does the exact same thing as the teams "view in dashboard" button -- it redirects me to the dashboard and has no indication of being activated or anything. Really frustrating.

[0]: https://www.cloudflare.com/products/tunnel/

[1]: https://www.cloudflare.com/plans/


It's confusing for me too

product page says it requires paid Argo (smart routing) subscription https://www.cloudflare.com/en-gb/products/tunnel/

the blog page says its free https://blog.cloudflare.com/tunnel-for-everyone/

and actually you can install and run it quite easily

   brew install cloudflare/cloudflare/cloudflared
   cloudflared login
   cloudflared tunnel
this will launch a tunnel with a random subdomain listening to http://localhost:8080


It became free recently, so they've probably just forgotten to update their documentation which seems to be a pattern with CF.


Not sure why you'd use this instead of something like ZeroTier or a bounce box, but I can think of one reason: you want to hide the location of something in your infrastructure to make side channel attacks on the cloud provider or physical location a lot harder.


Back when I was managing systems in a small company, I had a couple of systems on hidden services with auth cookies. When port forwarding failed or I otherwise had problems accessing them, it provided a decent plan B for getting things back online.
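
For anyone wanting to replicate this with v3 onions, the server side is roughly this (paths are illustrative):

```
# torrc on the server
HiddenServiceDir /var/lib/tor/rescue-ssh/
HiddenServicePort 22 127.0.0.1:22
```

Then drop a `<client>.auth` file (containing `descriptor:x25519:<client public key>`) into `/var/lib/tor/rescue-ssh/authorized_clients/`, and on the client point `ClientOnionAuthDir` in torrc at a directory holding the matching `<onion>.auth_private` file.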


I used to have Nessus installed on a NUC that I would just drop into a customer's network closet for a weekend, and monitor remotely.

I hosted the Nessus UI as a Tor Hidden Service, and it worked great. We just cycled the key every quarter for added security, and so that ex-employees wouldn't know where to find it.


Tor is also useful for verifying that country-specific customizations on your website are working. I regularly used Tor on reports of issues with default language or currency. It's just a quick toggle of a setting in "torrc" to limit your exit node to a specific country code.
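
Something like this in torrc, if memory serves:

```
# Only build circuits that exit from German relays
ExitNodes {de}
StrictNodes 1
```

(`StrictNodes 1` makes tor fail rather than silently fall back to exits in other countries.)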


I actually use tailscale for exactly this reason.

NAT is the devil.

The latency of tor might be a bit too much though.


Using Tor for anything in a corporate network will rightfully get you into serious shit with IT security.

I see a lot of people also advocating ngrok, wireguard, etc. You all may not realize that actual threat actors use all of these same techniques and making yourself look like them could very well lead to your termination as this kind of circumvention of security controls is absolutely a threat to the org and a violation of security policy.

TLDR; If you need remote access, use the proper channels....pretty please. For everyone's sake.


This is the correct answer, and also the hardest answer because it's going to require you to have to swallow your pride.

Security will already be monitoring your traffic as a basic first step, which they will pipe straight into a SIEM or SOAR system. Doing this stuff will likely get you flagged for an audit.


I can confirm that Tor is very useful for exposing services when you cannot port forward!

Specifically, I've used Tor for connecting to GitHub Actions virtual machines over SSH. This is great for debugging Actions without running them over and over again. I also used this for a project that sets up an ephemeral, collaborative environment in one of the GitHub Actions VMs.

https://github.com/jstrieb/ctf-collab
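
A handy companion trick (assuming a local tor client listening on the default SOCKS port, and OpenBSD netcat) is to route all .onion hosts through tor in ~/.ssh/config:

```
Host *.onion
    ProxyCommand nc -X 5 -x localhost:9050 %h %p
```

`-X 5` selects the SOCKS5 protocol; after that, `ssh user@whatever.onion` just works.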


Just wanted to send a drive-by comment that I very much like the design of this website. Very information dense. The top nav could use some work on mobile but other than that it is quite well done. Author, if you're reading this, I will probably "borrow" much of your design! (I'll give a shout-out in my footer however if I do end up "borrowing")


Heh, most of these use cases I solved by having a personal jumphost in a cabinet in a datacenter. But this is very clever! I like the idea of using Tor because you'll get much better tests.


In sysadmin use cases where you're only interested in accessing a website from a different IP, or setting up a reverse shell/service to hole-punch NATs, but don't need anonymity and untraceability, is Tor's multi-layered onion routing a latency and bandwidth impediment, and would you be better off turning it off (not sure if possible with the current codebase)?


I like using tor when testing DNS resolution related stuff, to circumvent some part of my system having a cached entry already.
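
tor even ships a small helper for exactly that, which resolves via the exit rather than your local resolver/cache:

```shell
# Resolve a hostname through the tor network
tor-resolve example.com
```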


So the big message is proxies are useful? I mean, sure. I'm not sure why Tor makes a better choice than anything else?


One very important thing not mentioned is that the Tor exit node could be capturing your traffic or performing a MITM attack. It's a great idea for testing, but only after you have encryption working, and of course pay special attention to your SSH fingerprints.


If the endpoint is in your control and you'd like to experiment with Tor, you can configure your server as an Onion Service, so you are protected by Tor's own end-to-end encryption (whose traffic cannot be captured by MITM, since the hostnames themselves are the public keys). For non-anonymous uses, you should activate "Single Onion Service" mode, so the 6-hop circuit (an extra 3 hops for server anonymity) is skipped, giving standard 3-hop latency and performance. It also saves bandwidth for exit nodes, since all non-exit relays can forward onion traffic.
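
In torrc that mode looks like this, if I remember the options right (tor refuses to also act as a client in this mode, hence the SOCKSPort line):

```
HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1
SOCKSPort 0
```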


Hidden services are not accessed through exit nodes. Relay nodes cannot capture your traffic or perform MITM attacks.


Cloudflare is mitm, btw.


So is your network firewall, what's your point?


Cloudflare can see decrypted content when HTTPS is used.


“However, to take a literal view, X is just a Y tool, and it can be used in any way that you want.”

Society would be better if people took this view with all tools. They’re just tools. Unlike people they don’t have intent.


For some reason our IT dept hates it; I get a notification when I try to use it. I think it's because it jumps across so many IP addresses.


Smells a bit like Wireguard use case!


Wireguard is a great technology, and if latency and file transfers are important you should use it, but a Tor hidden service is way easier to set up, and way more reliable.


Interesting use of a security tool


This is an excellent set of use cases! I didn’t know about torsocks either.


I loved TOR when I was a broke student without enough money to have one or two always on machines with public IPs I could reverse proxy to.


why did I read sadism instead of sysadmin?



