A preinstalled local server, presumably running in kernel space (given that it causes a BSOD), crashing when a localhost client connects and attempts a TLS handshake? The preinstalled crapware never changes.
I feel like the only good option at this point when purchasing a prebuilt desktop or a laptop is to nuke the drive and do a clean install of Windows. Seems like the only way to ensure that you've killed the crapware and any partitions meant to preserve/reinstall it.
Sometimes that isn't even enough. Windows for example ships with a feature called the "Windows Platform Binary Table" that will load and run DLLs embedded in a machine's ACPI tables.
Wow. Didn't know that existed. Hard to see what legitimate, user-serving purpose that feature has, or at least how the good of its inclusion would outweigh the possible harms.
A family member with a Dell has a similar BSOD problem when visiting some of these websites. Do we know what preinstalled Dell software is causing it?
The BSOD minidump should tell you which driver crashed. GUIs exist for analyzing minidumps; it might even be feasible to talk someone through the process over the phone.
I asked this earlier and nobody had a response, so I thought I'd ask it again: is there an extension to block this?
Edit: @Windows users: pip install pydivert and then try to write a script to block connections from Chrome to non-Chrome processes. You might need GetTcpTable2() or something. (Looking into this now. Check out http://stackoverflow.com/a/25431340)
As far as I know these port scans are done using WebRTC. Using a browser extension[0] it is easy to deactivate it on the go. Personally, I always have WebRTC disabled by default (as it has several nasty security implications), and only activate it if I explicitly need it for something.
This is the kind of thing the WebKit team at Apple raised as a privacy problem with WebRTC. They got called IE.
But seriously, many new specs are very obviously abusable, yet on HN people seem unwilling to accept "this feature is trivially abusable" as a reason to not give developers a new feature, even when it is user hostile.
Web specs, and the webdevs who frequently want them, need to treat abusive developers as the default users of any new API.
When working on WebGL it took an absurd amount of work to get non-web folk to understand that the spec had to be very tight and verifiable. I literally had to deal with people arguing that "developers won't ship shaders that crash the machine". It was painful.
I don't think that's what the poster meant by "They got called IE". IE is notable for - along with the things you mentioned - being about a decade behind in web specification support and taking non-standard approaches to whatever features it did have.
The implication being in this case that people were calling Safari too "conservative", so to speak.
The "Safari is the new IE" line comes out basically any time Apple doesn't implement a web spec: it must be because they're trying to kill the web, and the privacy complaints are nonsense that doesn't matter. All that matters is developers having shiny new toys.
IIRC, eBay uses WebSocket connections [0] to scan the ports. Firefox doesn't offer an option to disable WebSockets in about:config. However, I have read about a workaround: setting
network.websocket.max-connections=0
This is a global setting and applies to all websites. I also haven't been able to test it myself yet. Are there any good extensions for blocking WebSockets for specific domains?
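Edit: one per-site option I've since read about: uBlock Origin's static filter syntax supports a `websocket` type option, so a filter along these lines (the domains are placeholders) should block WebSocket connections only on the listed sites rather than globally:

```
*$websocket,domain=example.com|shop.example.net
```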
To people reading this: many websites rely on WebSockets for real-time information. They would likely fall back to polling HTTP requests, but it may also break a bunch of sites.
As a developer on a product that uses WebSockets extensively, I'm afraid this will add to the already huge distrust of the technology.
If IT admins get wind of this they'll just block it (or never unblock it, since some have blocked it from day one), and our product gets a degraded experience.
Like dropping ICMP replies at the firewall: idiotic, because it gives a very false sense of security. It's been a useless "security" practice since the 90s.
Surely the websockets angle is a bit of a red herring?
eBay will have your IP from your request, so they could run nmap against your machine from their server without your browser ever knowing about it.
I also know of a bank that does something similar in an old-school sort of way: their online banking login page tries to load images from URLs made up of your IP and various ports.
Presumably these are targeting known ports for online banking malware C&C HTTP traffic, rather than remote desktop services, though.
And this is a bank that still uses frames 'for security', so it must be an old technique!
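A minimal sketch of what such an image probe might look like (the IP, port, and `report` function are invented for illustration). The page infers port state from which handler fires and how fast: a quick onerror suggests something answered with a non-image or refused the connection, while a filtered port just hangs until timeout:

```html
<!-- Hypothetical probe against the visitor's own IP on a suspect port. -->
<img src="http://198.51.100.7:1080/probe.gif"
     onload="report(1080, 'served an image')"
     onerror="report(1080, 'responded or refused')">
```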
Websockets bring them past the router and any other hardware firewall or NAT. Also various software only listens to localhost, on the assumption that local traffic is trustworthy.
They could still portscan from afar and it would still be sketchy, but using Websockets makes it worse
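To make it concrete: what the in-page trick achieves is an ordinary TCP connect scan of loopback, just driven from JavaScript. A rough Python equivalent of the check (the port list is just common remote-desktop defaults):

```python
import socket

def scan_localhost(ports, timeout=0.25):
    """TCP connect scan of loopback: connect_ex returns 0 when
    something is listening on the port, an errno otherwise."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex(("127.0.0.1", port)) == 0:
                open_ports.append(port)
    return open_ports

# 3389 = RDP, 5900 = VNC, 5938 = TeamViewer (common defaults)
print(scan_localhost([3389, 5900, 5938]))
```

A remote nmap from eBay's servers would be stopped by your NAT or firewall; the point of doing it from the browser is that this same loop effectively runs behind all of that.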
uMatrix will usually stop things like this because you haven't explicitly turned them on, but I prefer to just use uBlock Origin... we have a discussion and a blocklist for internal IPs here:
If this is WebRTC, use uMatrix. Don't allow 127.0.0.1 anything, and specifically block XHR requests in the UI. This is a fingerprinting technique, I believe. My guess is they'll claim it's to prevent fraud.
uBlock Origin blocks this specific threat (and it's added a few extra filters since this article was posted); however, as I noted near the end, there are a bunch of other companies running these exact same scans, and uBlock can only block them one at a time as they pop up.
I've heard that with uMatrix you can block all sites from trying to access localhost/127.0.0.1, which should stop the fingerprinting in its tracks. You may need to enable it on a few sites that use localhost for legit things, but those aren't super common.
Many companies and individuals share their desktops for remote use, then sell this service to eBay users, who use it for various fraudulent activities: from making fake sales (just for ratings) to bidding on their own items (to raise the price).
It seems like eBay wants it both ways. They want to have a huge user base with low friction to get started, but they also don't want fraudulent players. Instead of doing KYC (know your customer) like many financial services, they're stuck doing dirty tricks like this to try and combat fraud.
That is largely down to the liability being on the endpoints of the transaction, not on the transport layer...
Either the consumer or the merchant bears the cost of fraud; rarely do PayPal or the banks. If they did, the problem of identity fraud would be solved, and it would not be called "identity theft".
Fraud is at the expense of the seller and raises prices for the customer. I don’t see how this is at the expense of the customer, I just visit eBay and buy something if I want to.
That might be the case, but it's next to impossible to prove. You can always use remote software or a VPN that allows changing ports (VNC over SSH). I do see your point about auctions, though.
So let's say we have fraudseller_1, selling something like an iPhone.
Along comes honestbuyer_1, who bids $50 on the item.
The seller sees the offer but wants to drive it up.
Since the seller can't use his own IP, he uses a remote desktop to log in somewhere else as fraudbuyer_1 (a different account!) and bids $100. Then he waits for honestbuyer_1 to bid again.
If that happens, the two, fraudbuyer_1 and honestbuyer_1, can "race" (note the quotes!) for the item.
Of course this is a simplified scenario; a whole pack of fraudulent buyers can be involved to fool real buyers.
I know a person IRL who was involved in such activities 10 years ago. His friend sold vacation homes, and buyers from the UK would bid on them. If the seller didn't like the bids, he'd call this person to place a bid from his own account, and of course raise the price by at least $1000.
But this can't be tracked by eBay, since it happened via phone call with no remote sessions. Today, sellers use virtual desktops (RDP, VNC, TeamViewer, etc.) to do the same thing.
eBay has a policy that allows buyers not to pay, with limited consequences. If eBay simply forced the bidding account to have enough cash to cover the transaction, this would no longer be viable, as some percentage of the time the shill bidder would have to pay out the fees.
Seller A is selling an item he bought for $5. He's trying to sell it for $10, for a 100% profit. The current bid is $7. He logs in to a remote desktop and bids on his own item to bump the price with a different MAC and IP. It sells for $12 and he's happy.
Seller B is having a hard time making sales. She thinks she needs better ratings, so she logs into the site using a remote client, 'buys' 10 of her items on different accounts, and leaves 10 glowing reviews.
Ebay is scanning ports to detect the tools used to do this.
> Seller A is selling an item he bought for $5. He's trying to sell it for $10, for a 100% profit. The current bid is $7. He logs in to a remote desktop and bids on his own item to bump the price with a different MAC and IP. It sells for $12 and he's happy.
So why don't you set the starting price at $10? Does ebay not let you set the base price?
An auction for an item with a history of an apparent bidding war up to $10 appears much more desirable than an auction for an item that just sits there for a week at its opening bid of $10.
At a lower starting price, you get people bidding. And then you fraudulently raise it. The legit bidders think they now have a "vested interest" in it and will bid up "just a little more".
It's the same thing with penny auction sites like DealDash, and it's part of their terribleness.
They're bypassing your local network firewall and your local machine firewall, then attempting to connect to blocked ports. People have been jailed for less.
>Many companies or persons share their desktops for remote usage. Later they sell this service to eBay users.
Is this really a thing? I thought everybody just used residential proxy/VPN services like luminati. It makes sense too, because a proxy service is way easier to adapt to your application than a remote desktop service.
Part of me thought it was done to combat bots. I still find it distasteful. Part of me understands it, and I can take an educated guess at how it was justified.
In case anyone thinks this is a dupe, it's not. This post is inspired by the first article. It gives a much more detailed analysis of the code and what data is sent where.
eBay has a big fraud headache. They have a bunch of algorithms (from the pre-ML-hype days) that take a variety of inputs to determine whether a given transaction is fraudulent or not. Presence of remote login service on the user’s computer may tip the scale heavily in this calculation. Fraud detection is a necessary evil for all financial transaction companies in order to keep costs low for everyone else.
If you’re worried about privacy, use CCPA’s right to information and ask them for a dump of everything they have on you. They are supposed to give you info that other SPs like Threatmetrix have on you as well if they really are transmitting it to 3rd parties.
> Fraud detection is a necessary evil for all financial transaction companies in order to keep costs low for everyone else.
That doesn't mean they should be allowed to behave like cybercriminals. The risk of fraud doesn't give them a free pass to abuse our trust and invade our privacy. They aren't entitled to know what software people run on their own computers. Especially if they learn this information through underhanded means like port scanning people's local networks without their permission or knowledge. It doesn't matter how much money they're losing because of fraud, they don't get to violate these boundaries in order to reduce the risk associated with their own business.
Is “behaving like cyber criminals” bad though? Just because it’s doing something out of the ordinary in terms of tech doesn’t mean it’s bad. Fighting cybercrime is always a cat and mouse game akin to counter terrorism, counter espionage or even plain cops and robbers. You need to think like your enemy, have informers, etc etc, while not harming the good citizens. That’s what is going on here.
Are there cops who misuse their power? Absolutely. Are there spies who use information for personal gains? Sure. There need to be checks and balances that make it bad for such people to go rogue.
Privacy acts aim to do some of that. They bring accountability but also an ability to opt out (the latter is hard though - akin to ostracizing oneself from a community).
It's true that the consequences of being port scanned by some website are probably negligible. However, that is not the real problem.
The real problem is the audacity of these people. They think they can do whatever they want. Not only that, they think they are justified in doing it. They need this information for their own purposes, so they just take it from people without asking, without even informing them. In their minds, what they did was not objectionable. They needed to do it, so they didn't do anything wrong. Those fraudsters left them no choice: they just had to invade the privacy of every single person who visited their website.
It's the same logic every abuser uses. It betrays a fundamental lack of respect for the people who use their service. It's impossible to have trust without this respect.
This notion of invasion of privacy is all relative. If they ask you for your mother's maiden name, your first pet, or the city where you had your first kiss, you are okay typing it into a form for them.
If they try to infer the active port numbers on your computer to see if there’s a Remote Desktop installed by a bot, you’re not okay.
What's the alternative? Do you want them to disclose everything they do in a marketing article, even though 99.99% of people will have no clue what it means, and 10 of the 100 people who bother to read and understand it will use it against them? To what end? To gain your trust? You, who have already given them your credit card number, your mother's maiden name, and the city where you first got intimate with your first partner?
>They are supposed to give you info that other SPs like Threatmetrix have on you as well if they really are transmitting it to 3rd parties.
This isn't a privacy protection, though, but a measure for accountability. That is, only the absence of data aggregation would actually allay someone's worries about privacy.
I never understood why WebSockets aren't subject to the same-origin policy and CORS (or similar policies). Could any web expert here explain this design (non-)decision?
Surprisingly often websocket connections are made to a different domain from what is serving the site itself. I'm not sure about the root cause of the pattern, but I sure as hell know that our company has been doing it since at least 2013.
Browsers set the "Origin" header for ws:// calls, and the websocket servers are expected to check that. Without the check, it'd be possible to issue blind writes (CSRF) from random webpages to the ws:// endpoints. If you're using main-site authentication with a separate WebSocket domain and authenticated requests, all garden-variety security scanners will flag the separate WebSocket domain as a problem, and only the robust ones actually try to validate the server-side configuration.
Disclosure: I have triaged and responded to a few such reports.
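For illustration, the server-side check boils down to an exact-match allowlist test during the handshake; a sketch (the allowlist and helper name here are invented):

```python
from urllib.parse import urlsplit

# Hypothetical allowlist: the only origins this server trusts.
ALLOWED_ORIGINS = {"https://app.example.com", "https://example.com"}

def origin_allowed(origin_header):
    """Accept only an exact scheme://host[:port] match against the
    allowlist; a missing or malformed Origin header is rejected."""
    if not origin_header:
        return False
    parts = urlsplit(origin_header)
    if parts.scheme not in ("https", "http") or not parts.netloc:
        return False
    return f"{parts.scheme}://{parts.netloc}" in ALLOWED_ORIGINS
```

On a mismatch the server should refuse the HTTP upgrade. Prefix matching (e.g. `startswith`) is the classic bug here, since `https://app.example.com.evil.net` would pass it.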
> Surprisingly often websocket connections are made to a different domain from what is serving the site itself.
That's probably because WebSockets call for asynchronous servers optimized for the number of active connections, while normal sites are best served by servers optimized for response time.
Of course, you can have both handled by the same origin, but it's not the blatantly obvious way.
Connecting to a different domain is fine, that's what CORS is for. Letting any origin connect by default -- that's a problem.
> Browsers set the "Origin" header for ws:// calls, and the websocket servers are expected to check that.
I know that, but as always, insecure-by-default protocols are terrible and are guaranteed to be exploited beyond recognition... The question is: why not be secure by default and force servers to whitelist the origins they expect to talk to?
Websockets are subject to the same origin policy. There's nothing you can do to violate the SOP via websockets that you wouldn't be able to do with regular HTTP or XHR.
This is what happens when there is a browser monopoly. Fixing security does not have priority; maximizing revenue is the priority. The browser should stop outgoing connections that are not same-origin. Then users would have to opt in, like with popup windows.
Has anyone confirmed whether they're still continuing this practice? I'm curious, but at the same time feel highly uncomfortable visiting a website that has no problems exploiting a browser loophole.
Just wait until the database of IPs and open ports is leaked, and hackers start exploiting vulnerabilities in the software listening on those ports to break into random people's devices.
I don't understand the issue you are describing. You can already scan the entire IPv4 space for cheap with tools like zmap[0]. How does a premade list help?
In addition, I would expect such a DB to go stale very quickly.
I saw him talk about how ThreatMetrix is usually blocked... but ThreatMetrix has its clients use unique endpoint URLs to disguise it. I don't really know how ad blocking works, but aside from the extra time it would take, why don't ad blockers look up the DNS records of any URLs on the page and see if they are a CNAME for an A record that is on the block list?
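The logic I have in mind, sketched with a stubbed resolver (all hostnames are invented); a real blocker would do an actual CNAME lookup in place of the dictionary:

```python
# Stub of a DNS view: hostname -> CNAME target (absent = terminal A record).
CNAME_CHAIN = {
    "metrics.shop.example": "client123.tracker.example",   # hypothetical disguised endpoint
    "client123.tracker.example": "collector.tracker.example",
}

BLOCKLIST = {"collector.tracker.example"}

def resolve_final(hostname, chain=CNAME_CHAIN, max_hops=10):
    """Follow CNAME records until a terminal name (or hop limit) is reached."""
    hops = 0
    while hostname in chain and hops < max_hops:
        hostname = chain[hostname]
        hops += 1
    return hostname

def is_cloaked_tracker(hostname):
    """True if the name ultimately resolves to a blocklisted tracker."""
    return resolve_final(hostname) in BLOCKLIST
```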
The real issue is not that eBay or some other specific website uses JavaScript to port scan your network. The real issue is that browsers allow such behavior by default.
1. The Local Overrides feature lets you persist edits to source files across page loads (unfortunately only source files currently, so you're out of luck if the JS comes from an XHR or something)
2. F3 on the network panel will let you search for a string across all resources the page loaded. Can be useful for tracking down where stuff like user-agent checks are called (if not obfuscated).
Also calling the code obfuscated is pretty generous. It's amazing how common things like shift ciphers, XOR tricks, etc. are when the browser's REPL cuts through them like butter.
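For example, a single-byte XOR blob of the sort you find in these scripts falls in two lines (the blob and key here are invented for illustration):

```python
# Hypothetical obfuscated bytes as they might appear in bundled JS,
# XORed byte-by-byte with the single-byte key 0x2a.
blob = bytes([0x78, 0x6e, 0x7a, 0x10, 0x19, 0x19, 0x12, 0x13])

plain = bytes(b ^ 0x2a for b in blob).decode("ascii")
print(plain)  # prints "RDP:3389"
```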
Thanks for the tips! Local Overrides is new to me and would have helped a lot. I made heavy use of F3 (also Ctrl+Shift+F) to find my way around those scripts, and to find my place again after each page refresh.
Huh. I've been using eBay a lot this week, and it might have triggered a kernel bug. I had terrible internet performance that got resolved by rebooting my Ubuntu 18.04 laptop.
macOS users, I believe based on my testing that you can block your installed web browsers from localhost port scanning using LittleSnitch. This way you can continue to allow WebRTC and WebSockets to the rest of the Internet (where it's useful), while denying web browsers access to localhost except for specific ports you allow.
However, I encourage you to be careful and only block web browsers to localhost using this method, because lots of macOS applications depend on localhost connections to talk to themselves, so if you block everything from talking to localhost you may break e.g. LittleSnitch, macOS itself, etc. NO WARRANTY, HAVE BACKUPS, standard stuff.
To set this up, for each /Applications/Browser.app, create a LittleSnitch 'Deny Connections' To 'IP Addresses' rule and enter '127.0.0.1, ::1' without quotes into the text field and click OK. Then right-click on the newly-created application rule and select 'Increase Priority', which will bold the rule text 'Deny outgoing connections to 2 IP addresses'. Repeat this for each Browser.app you use.
If you'd like to specifically enable certain localhost ports to be accessible by your browser (such as 80/443), you can create another rule using the above steps, but before saving the rule, change 'Deny' to 'Allow' and click the '\/' dropdown caret button and enter the appropriate port and select TCP. I encountered some UI quirks doing this but once it's created it works as it should.
Here's a screenshot of the results of my testing for comparing against. I'm not really familiar with how LS works so I can't offer much support, but I fresh-installed it and left all the defaults alone and it worked, so more advanced users shouldn't have much trouble. https://i.imgur.com/T0yqrdM.png
Good luck!
(For those wondering if other software can do this, I tested various macOS application firewalls today and most of them either global-allow localhost connections or don't offer outbound filtering at all. So far, the only one that can block web browsers only from connecting to localhost is LittleSnitch, with some quirks that I wrote a note to their support about. At least one let me create the rule and cheerfully said it was active and then it didn't block anything.)
Timing attacks can be used to determine if the TCP connection was successful, even if no data could be exchanged because the WebSocket failed to negotiate.
I remember, while working at some company, I started running a local Flask server.
For some reason, the company router kept making HTTP requests to port 5000 or 8000 (can't remember which), because they were literally showing up in the terminal, with the HTTP path, at random times.
I'm sure being a hacker must be pretty fun these days.
Great article! Between this one and the original post, 'Why is this website port scanning me', I finally got it. Can anyone shed some light on why the scan is not performed on Linux machines? Maybe not RDP, but the VNC servers that the scan probes for on Windows machines exist on Linux too.
I thought it was no secret to anyone that all services track digital fingerprints... and there are many ways to do this. To fight them I use an anti-detection browser such as https://gologinapp.com/ or others. Are there any alternatives to this?
Google's internal SSO (which I accidentally stumbled upon) collects other endpoint-specific parameters to compose the digital signature (like browser window size and monitor size).
This feels more effective and less intrusive. Not sure why eBay went this rather weird and creepy way instead.
Looking for fraud signs is my guess. People don't usually use eBay through TeamViewer, so if it's running and the port is open, an otherwise normal transaction gets really suspicious, for example. They're probably feeding the open-port info into their model to determine fraud risk for user logins and transactions.
They may not even be using it actively yet because they'd need to gather a lot of example data to detect outliers.
I don't know if there's a detectable difference just looking at the ports; just having TeamViewer installed slightly increases the risk of any transaction being fraudulent though. I'm just trying to provide an example of how port data could be used for a meaningful purpose. Even just as an additional fingerprinting component it could be useful.
I respectfully disagree in this case—the last story was too recent, and this one does not contain any significant new information, so we're likely to end up just rehashing the same points (and have so far, in my estimation).
Edit: But, thefreeman above just informed me there's actual new information in this article (which I originally hadn't read, because the comments here made me assume it was the same as the last story on HN). So, thanks!
I immediately thought about this after reading the Wikipedia page on Peter Thiel and Palantir, where he stated he wanted to use the technology that was used to protect PayPal (which was purchased by eBay).
That is one of the main purposes, yes. I have no problem with it, personally. Many HN users are very privacy-conscious and consider it not acceptable for any purpose, though; including that one. I think both positions are fine.
> Many HN users are very privacy-conscious and consider it not acceptable for any purpose, though; including that one
It's an opinion, I suppose, but it doesn't seem to be based on reasonable expectations. Anyone can portscan anyone else, lots of security researchers have published their findings from portscanning large IP address ranges etc. ... If I don't want other people to see or access open ports on my system, I can firewall them.
Slightly OT but I’d suggest that the thing that has “ruined” the internet (for me) more than any other single thing are trolls, and they predate the corporate web by a long way.