eBay is port scanning visitors to their website (nem.ec)
478 points by joering2 on June 6, 2020 | 143 comments



This might explain why some preinstalled HP laptop software (with open ports?) causes a BSOD when users visit ebay https://h30434.www3.hp.com/t5/Notebook-Operating-System-and-...


A preinstalled local server, presumably running in kernel space (for it to cause a BSOD), crashes when something connects from localhost and attempts a TLS handshake? The preinstalled crapware never changes.


I feel like the only good option at this point when purchasing a prebuilt desktop or a laptop is to nuke the drive and do a clean install of Windows. Seems like the only way to ensure that you've killed the crapware and any partitions meant to preserve/reinstall it.


Sometimes that isn't even enough. Windows for example ships with a feature called the "Windows Platform Binary Table" that will load and run DLLs embedded in a machine's ACPI tables.


Wow. Didn't know that existed. Hard to see what legitimate, user-serving purpose that would serve. Or at least how the good of its inclusion would outweigh the possible harms.


Is there software that will enumerate ACPI for DLLs?


You can look for ACPI table(s) called "WPBT" using RWEverything (if you are a Windows user): http://rweverything.com/downloads/RwPortableX64V1.7.zip

or look into the following filesystem path in Linux: /sys/firmware/acpi/tables


If you want to stay clean, nuke the drive and install Linux instead.


The only good option is to nuke Windows and use a better OS, like GNU/Linux, FreeBSD, OpenBSD, etc.


Preinstalled vendor software is just an endless stream of exploits


A family member with a Dell has a similar BSOD problem when visiting some of these websites. Do we know what preinstalled Dell software is causing the problem?


The BSOD minidump should tell you which driver crashed. GUIs exist for analyzing minidumps; it might even be feasible to talk someone through the process over the phone.


One example of a GUI analysis tool for these kinds of memory dumps is WhoCrashed; another is BlueScreenView.

https://www.resplendence.com/whocrashed

http://www.nirsoft.net/utils/blue_screen_view.html


Good to know. Thanks.


I asked this earlier and nobody had a response, so thought I'd ask it again: is there an extension to block this?

Edit: @Windows users: pip install pydivert and then try to write a script to block connections from Chrome to non-Chrome processes. You might need GetTcpTable2() or something. (Looking into this now. Check out http://stackoverflow.com/a/25431340)


As far as I know these port scans are done using WebRTC. Using a browser extension[0] it is easy to deactivate it on the go. Personally, I always have WebRTC disabled by default (as it has several nasty security implications), and only activate it if I explicitly need it for something.

[0] https://addons.mozilla.org/en-US/firefox/addon/happy-bonobo-...
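
If I remember correctly, Firefox also lets you switch WebRTC off without an extension, via an about:config preference (preference name from memory, so double-check it):

  media.peerconnection.enabled = false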


This is the kind of thing the WebKit team at Apple raised as privacy problems with WebRTC. They got called IE.

But seriously, many new specs are very obviously abusable, yet on HN people seem unwilling to accept "this feature is trivially abusable" as a reason to not give developers a new feature, even when it is user hostile.

Web specs, and the webdevs who frequently want them, need to consider abusive developers as the default users of the API.

When working on WebGL it took an absurd amount of work to get non-web folk to understand that the spec had to be very tight and verifiable. I literally had to deal with people arguing that "developers won't ship shaders that crash the machine". It was painful.


"IE". Difficult to search for what that means.


Internet Explorer. A much-maligned legacy browser for legacy Windows desktop platforms.

Aside from being intensely maligned and full of security holes, it doesn't (or didn't?) support WebRTC.

So being called IE is sarcastic like ha ha if you're so concerned about security downgrade to IE which doesn't support WebRTC.


I don't think that's what the poster meant by "They got called IE". IE is notable for - along with the things you mentioned - being about a decade behind in web specification support and taking non-standard approaches to whatever features it did have.

The implication being in this case that people were calling Safari too "conservative", so to speak.


The "Safari is the new IE" line comes up basically any time Apple doesn't implement a web spec: it's because they're trying to kill the web, and the privacy complaints are nonsense that doesn't matter. All that matters is developers having shiny new toys.


IIRC, eBay uses WebSocket connections [0] to scan the ports. Firefox doesn't offer an option to disable WebSockets in the about:config page. However, I have read about workarounds by setting

  network.websocket.max-connections=0
This is a global setting and is applied to all websites. I also wasn't able to test this myself yet. Are there any good extensions for blocking WebSockets for specific domains?

[0] https://nullsweep.com/why-is-this-website-port-scanning-me/
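
For anyone wondering what such a probe looks like, here's a rough sketch (not eBay's actual code; the port list and timeout are made up) of how a page can poke localhost ports with WebSockets:

  // Rough sketch, not eBay's actual code: try to open a WebSocket to a
  // localhost port and classify the outcome.
  function probePort(port, timeoutMs = 1000) {
    return new Promise(resolve => {
      const ws = new WebSocket(`ws://127.0.0.1:${port}`);
      const done = state => { ws.close(); resolve({ port, state }); };
      ws.onopen = () => done("open");    // something answered the WebSocket handshake
      ws.onerror = () => done("error");  // connection failed (often quickly if the port is closed)
      setTimeout(() => done("timeout"), timeoutMs);
    });
  }

  // Probe a few ports commonly associated with remote-access software.
  Promise.all([3389, 5900, 5938].map(p => probePort(p))).then(console.log);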


To people reading this: many websites rely on WebSockets for real-time information. They would likely fall back to per-refresh HTTP requests, but it also may break a bunch of sites.


As a developer on a product that uses WebSockets extensively, I'm afraid that this will add to the already huge distrust in the technology.

If IT admins get wind of this they'll just block it (or never unblock it since it's been blocked by some from day 1) and our product gets degraded experience.


Like dropping ICMP replies on firewalls: idiotic, because it gives a very false sense of security. It's been a useless "security" practice since the 90s.


ZoneAlarm's "!1ALERT!! x.x.x.x is h4x0r1ng you with ICMP packets!" sold millions of dollars' worth of placebo in the early 2000s.

http://download.zonelabs.com/bin/media/pdf/ZA_Manual.pdf


But don't firewalls drop everything inbound by default? So it's more that people don't know they should create rules to allow ICMP, no?


The connection comes from your browser, i.e. localhost to yourself; the firewall is not set up to block such connections in most cases.


Yes, false sense of security hits the nail on the head.

Fits right there with unnecessary password rules too.


You could say that about JS and I am not running that shit either.


Surely the websockets angle is a bit of a red herring?

Ebay will have your IP from your request so they can run nmap against your machine from their server without your browser ever knowing about it.

I also know of a bank that does something similar in an old-school sort of way: their online banking login page tries to load images from URLs made up of your IP and various ports.

Presumably these are targeting known ports for online banking malware C&C http traffic rather than remote desktop services though.

And this is a bank that still uses frames 'for security', so it must be an old technique!


Websockets bring them past the router and any other hardware firewall or NAT. Also various software only listens to localhost, on the assumption that local traffic is trustworthy.

They could still portscan from afar and it would still be sketchy, but using Websockets makes it worse


So does using websockets allow you to scan local IP ranges and find other devices on the LAN?


By the rule of all web technology sucks and is untrustworthy, they block 10.0.0.0 and 192.168.0.0, but inexplicably allow 172.16.0.0-172.31.255.255.

(Or at least, that was what someone else claimed last time this came up on HN)


Who's "they" in this context? It's unclear to me if this refers to eBay or browser vendors.


My understanding is it allows them to check for things listening on loopback, given they appear to be checking for SSH tunnels.


This won't actually disable websockets if I recall.

There's a blocklist that may make it into uBO in the future that blocks 3rd party access to localhost and other IANA reserved IP addresses, though.

https://github.com/uBlockOrigin/uBlock-issues/issues?q=is%3A...



That thread was a fun rabbit hole. !/s


No. It uses WebSockets.


uBlock Origin on Firefox, at least according to a previous article[0] I read on HN. Doesn't work on Chrome/Edge because they never provided the API.

[0]: https://news.ycombinator.com/item?id=23361823



Previous discussion mentioned a filter rule for uBlock Origin like

  $websocket

To override this for sites which break, use e.g.

  @@gateway.discord.gg$websocket



uMatrix will usually stop things like this because you haven't explicitly turned them on, but I prefer to just use uBlock Origin... we have a discussion and a blocklist for internal IPs here:

https://github.com/uBlockOrigin/uBlock-issues/issues?q=is%3A...



If this is WebRTC, uMatrix. Don't allow 127.0.0.1 anything, but specifically XHR requests in the UI. This is a fingerprinting technique, I believe. My guess is they'll claim it's to prevent fraud.


uBlock Origin blocks this specific threat (and it's added a few extra filters since this article was posted); however, as I noted near the end, there are a bunch of other companies running these exact same scans, and uBlock only has the ability to block them one at a time as they pop up.

I've heard that with uMatrix you can block all sites from trying to access localhost/127.0.0.1, which should stop the fingerprinting in its tracks. You may need to enable it on a few sites that use localhost for legit things, but those aren't super common.


Is there a way to make Chrome ask permission before allowing a site to utilize WebRTC?

Similar to how you can make it prompt to store Cookies, provide Location, use the Camera/Mic, etc.


I'm not aware of any extension for this, but I think if we ran the browser in some kind of Docker container we could avoid this.

One thing that could break due to this is "login via XYZ network", which opens a new tab in the default browser for authentication.


It's crystal clear why they do this.

Many companies or persons share their desktops for remote usage. Later they sell this service to eBay users, who use it for different fraudulent activities - from making real sales (just for stars) to bidding on their own items (to raise the price).

eBay has fought this for years.


It seems like eBay wants it both ways. They want to have a huge user base with low friction to get started, but they also don't want fraudulent players. Instead of doing KYC (know your customer) like many financial services, they're stuck doing dirty tricks like this to try and combat fraud.


Cheating is rampant in all financial services that mediate transactions between end customers. No shortage of stories about card fraud or PayPal woes.


That is largely down to the liability being on the endpoints of the transaction, not on the transport layer...

Either the consumer or the merchant bears the cost of fraud; rarely does PayPal or the banks. If they did, the problem of identity fraud would be solved, and it would not be called "identity theft".


Banks also port scan. There was a list in the previous discussion.


Why is it bad to have it both ways?


Because it's at the expense of the customer.


Fraud is at the expense of the seller and raises prices for the customer. I don’t see how this is at the expense of the customer, I just visit eBay and buy something if I want to.


They actually do KYC (but criminals can also fake it)


That might be the case, but it's next to impossible to prove it. You can always use remote software/VPN that allows changing ports (VNC over SSH). I do see your point about auctions though.


I don't understand this, could you explain?


Ratings are an important part of the eBay ecosystem.

He's saying people let others use their eBay accounts through RDC to game the system.

Also it's an auction site. People want to raise the price by bidding on their own items


Exactly.

So let's say we have fraudseller_1 selling something like an iPhone. Here comes honestbuyer_1, who bids $50 for this item. The seller sees the offer, but wants to raise it.

Since the seller can't use his own IP, he uses some remote desktop to log in somewhere else as fraudbuyer_1 (a different account!) and bids $100. Then he waits for honestbuyer_1 to bid again. If this happens, the two - fraudbuyer_1 and honestbuyer_1 - can "race" (note the quotes!) for this item.

Of course this is a simplified scenario, because a whole pack of fraud buyers can be involved to fool real buyers.

I know a person IRL who was involved in such activities 10 years ago. His friend sold vacation homes and buyers from the UK bid on them. But if the seller didn't like the bids, he would call that person to place a bid from his own account - and of course raise the price by at least $1000.

But... that can't be tracked by eBay, since it's done via phone call and no remote sessions. Today sellers use virtual desktops (RDP, VNC, TeamViewer or other) to do this.


eBay has a policy that allows buyers to not pay, with limited consequences. If eBay simply forced the bidding account to have enough cash to cover the transaction, this would no longer be viable, as some percentage of the time the shill bidder would have to pay out the fees.


Seller A is selling an item he bought for $5. He's trying to sell it for $10, for a 100% profit. The current bid is $7. He logs in to a remote desktop and bids on his own item to bump the price with a different MAC and IP. It sells for $12 and he's happy.

Seller B is having a hard time making sales. She thinks she needs better ratings. So she logs into the site using a remote client and 'buys' 10 of her items on different accounts and leaves 10 glowing reviews.

eBay is scanning ports to detect the tools used to do this.


> Seller A is selling an item he bought for $5. He's trying to sell it for $10, for a 100% profit. The current bid is $7. He logs in to a remote desktop and bids on his own item to bump the price with a different MAC and IP. It sells for $12 and he's happy.

So why don't you set the starting price at $10? Does eBay not let you set the base price?


An auction for an item with a history of an apparent bidding war up to $10 appears much more desirable than an auction for an item that just sits there for a week at its opening bid of $10.


This is why I never bid until 10 seconds out, using an automated app. It's stupid to actually bid on items and allow the other bidders to respond.


At a lower starting price, you get people bidding. And then you fraudulently raise it. The legit bidders think they now have a "vested interest" in it and will bid up "just a little more".

It's the same thing with the penny auction sites like DealDash which is part of their terribleness.


They should find a better way instead of compromising privacy of all the users.

And even if they decided to do it, don't they have any responsibility to tell the users?


I don't think they should have to; most people don't think this is an issue. I certainly don't.


They’re bypassing your local network firewall, and local machine firewall, then attempting to connect to blocked ports. People have been jailed for less.


I definitely don't agree with jail. Unless there is real damage, then I wouldn't consider it an issue.


Sucks for that corporation. They have teams and teams of lawyers - they should have consulted one of them.

It is not just for a corporation to suddenly decide the law is unfair when it is subjected to it, when real people have been harmed by the same law.


>Many companies or persons share their desktops for remote usage. Later they sell this service to eBay users.

Is this really a thing? I thought everybody just used residential proxy/VPN services like luminati. It makes sense too, because a proxy service is way easier to adapt to your application than a remote desktop service.


Luminati can be detected too, especially when you're a company with DEEP pockets.


Part of me thought it was done to combat bots. I still find it distasteful. Part of me understands it, and I can take an educated guess at how it was justified.


A similar article was on HN a few days ago:

https://www.bleepingcomputer.com/news/security/list-of-well-...

Here's the discussion about it:

https://news.ycombinator.com/item?id=23361823



In case anyone thinks this is a dupe, it's not. This post is inspired by the first article. It gives a much more detailed analysis of the code and what data is sent where.


The company that is actually providing this service is LexisNexis: https://risk.lexisnexis.com/corporations-and-non-profits/fra...


eBay has a big fraud headache. They have a bunch of algorithms (from the pre-ML-hype days) that take a variety of inputs to determine whether a given transaction is fraudulent or not. Presence of remote login service on the user’s computer may tip the scale heavily in this calculation. Fraud detection is a necessary evil for all financial transaction companies in order to keep costs low for everyone else.

If you’re worried about privacy, use CCPA’s right to information and ask them for a dump of everything they have on you. They are supposed to give you info that other SPs like Threatmetrix have on you as well if they really are transmitting it to 3rd parties.


> Fraud detection is a necessary evil for all financial transaction companies in order to keep costs low for everyone else.

That doesn't mean they should be allowed to behave like cybercriminals. The risk of fraud doesn't give them a free pass to abuse our trust and invade our privacy. They aren't entitled to know what software people run on their own computers. Especially if they learn this information through underhanded means like port scanning people's local networks without their permission or knowledge. It doesn't matter how much money they're losing because of fraud, they don't get to violate these boundaries in order to reduce the risk associated with their own business.


Is “behaving like cyber criminals” bad though? Just because it’s doing something out of the ordinary in terms of tech doesn’t mean it’s bad. Fighting cybercrime is always a cat and mouse game akin to counter terrorism, counter espionage or even plain cops and robbers. You need to think like your enemy, have informers, etc etc, while not harming the good citizens. That’s what is going on here.

Are there cops who misuse their power? Absolutely. Are there spies who use information for personal gains? Sure. There need to be checks and balances that make it bad for such people to go rogue.

Privacy acts aim to do some of that. They bring accountability but also an ability to opt out (the latter is hard though - akin to ostracizing oneself from a community).


It's true that the consequences of being port scanned by some website are probably negligible. However, that is not the real problem.

The real problem is the audacity of these people. They think they can do whatever they want. Not only that, they think they are justified in doing it. They need this information for their own purposes, so they just take it from people without asking, without even informing them. In their minds, what they did was not objectionable. They needed to do it, so they didn't do anything wrong. Those fraudsters left them no choice: they just had to invade the privacy of every single person who visited their website.

It's the same logic every abuser uses. It betrays a fundamental lack of respect for the people who use their service. It's impossible to have trust without this respect.


This notion of invasion of privacy is all relative. If they ask you for your mother’s maiden name or first pet or the city where you had your first kiss, you are okay typing it in a form for them.

If they try to infer the active port numbers on your computer to see if there’s a Remote Desktop installed by a bot, you’re not okay.

What’s the alternative? Do you want them to disclose everything they do in a marketing article even though 99.99% of people will have no clue what that means and 10 of the 100 people that bother to read and understand will use it against them. To what end? To gain your trust? You - who has already given them your credit card number, mothers maiden name and city where you first got intimate with your first partner?


>They are supposed to give you info that other SPs like Threatmetrix have on you as well if they really are transmitting it to 3rd parties.

This isn't a privacy protection, though, but a measure for accountability. That is, only the absence of data aggregation would still someone's worries about privacy.


Great analysis in the original article, BTW. Hope to see more of these on HN.


Agreed. This is for fingerprinting the user.


I never understood why websockets aren’t subject to same origin policy and CORS (or similar policies). Any web expert here could explain this design (non-)decision?


Surprisingly often, websocket connections are made to a different domain from what is serving the site itself. I'm not sure about the root cause of the pattern, but I sure as hell know that our company has been doing it at least since 2013.

Browsers set the "Origin" header for ws:// calls, and the websocket servers are expected to check that. Without the check, it'd be possible to issue blind writes (CSRF) from random webpages to the ws:// endpoints. If you're using main site authentication with a separate websocket domain and auth'd requests, all garden variety security scanners will flag the separate websocket domain as a problem, and only the robust ones actually try to validate the server side configuration.

Disclosure: I have triaged and responded to a few of such reports.
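
For reference, with the Node "ws" package the Origin check is roughly this (a sketch from memory; the verifyClient option and the allowed origin are illustrative, not our production code):

  const WebSocket = require("ws");

  // Only accept WebSocket upgrades whose Origin header we recognise.
  // "https://app.example.com" is a placeholder for the real site origin.
  const ALLOWED_ORIGINS = new Set(["https://app.example.com"]);

  const wss = new WebSocket.Server({
    port: 5000,
    verifyClient: info => ALLOWED_ORIGINS.has(info.origin), // reject everything else
  });

  wss.on("connection", ws => ws.send("hello trusted client"));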


> Surprisingly often websocket connections are made to a different domain from what is serving the site itself.

That's probably because websockets require asynchronous servers optimizing for number of active connections, while normal sites are best served by servers optimized for response time.

Of course, you can have both handled by the same origin, but it's not the blatantly obvious way.


Connecting to a different domain is fine, that's what CORS is for. Letting any origin connect by default -- that's a problem.

> Browsers set the "Origin" header for ws:// calls, and the websocket servers are expected to check that.

I know that, but as always insecure by default protocols are terrible, and are guaranteed to be exploited beyond recognition... The question is why not be secure by default and force servers to whitelist origins they expect to talk to?


Websockets are subject to the same origin policy. There's nothing you can do to violate the SOP via websockets that you wouldn't be able to do with regular HTTP or XHR.


That's not true. You can talk to any other origin with ws. Simple example:

index.html:

  <html>
    <head></head>
    <body>
      <div id="message"></div>
      <script>
        let ws = new WebSocket("ws://localhost:5000/");
        ws.onopen = () => ws.send("hello server!");
        ws.onmessage = ev => {
          const $message = document.getElementById("message");
          $message.textContent = `ws://localhost:5000/ response: ${ev.data}`;
        };
      </script>
    </body>
  </html>
server.js:

  const express = require("express");
  const http = require("http");
  const WebSocket = require("ws");
  
  const app = express();
  const server = http.createServer(app);
  const wss = new WebSocket.Server({ server });
  
  wss.on("connection", ws => {
    ws.on("message", message => {
      console.log(`received: ${message}`);
      ws.send("hello client");
    });
  });
  
  server.listen(5000, () => console.log("ws server listening on localhost:5000"));
Server runs on localhost:5000. Now serve index.html from any other origin and see it talk to the server without any problem.


This is what happens when there is a browser monopoly. Fixing security does not have priority; maximizing revenue is the priority. The browser should stop outgoing connections that are not to the same origin. Then users would have to opt in, like with popup windows.


Has anyone confirmed whether they're still continuing this practice? I'm curious, but at the same time feel highly uncomfortable visiting a website that has no problems exploiting a browser loophole.


Just tried ebay.co.uk and I can see several clear.png calls with 204 and a payload as described in the article.


Extremely well written sum-up, learned a lot about how to approach an investigation like this. Thank you.


Just wait until the database of IPs and open ports is leaked and hackers start exploiting vulnerabilities in the software listening on those ports to break into random people's devices.


That database has existed for ages: https://www.shodan.io/


Does Shodan have a map of public ip -> natted local network ip:port pairs?

I thought it was only public ip ports.


I don't understand the issue you are describing. You can already scan the entire IPv4 space for cheap with tools like zmap[0]. How does a premade list help?

In addition, I would expect such a DB to go stale very quickly.

[0] https://zmap.io/


What is the legality of scanning large subsets that you don’t control?


I think it is generally acceptable for public networks, but eBay is using webrtc to bypass firewalls and then scanning private networks.

That’s illegal in most circumstances (at least in the US).


Any further pointer on the "illegal" aspect? Thanks.


Perhaps they've asked ThreatMetrix to handle a portion of their security and this was their solution?

At the end of the day, we all need to realize that we send out far more information than we receive when we surf the web.


I saw him talk about how ThreatMetrix is usually blocked... but ThreatMetrix has their clients use unique endpoint URLs to disguise it. I don't really know how ad blocking works, but aside from the extra time it would take, why don't ad blockers look up the DNS records of any URLs on the page and see if they are a CNAME for a domain that is on the block list?


uBlock Origin does exactly that on browsers which support it, which I believe is only Firefox at this time. https://github.com/uBlockOrigin/uBlock-issues/issues/780
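
I believe the Firefox-only piece it relies on is the browser.dns WebExtension API. A rough sketch of the idea (extension context with the "dns" permission; the domains below are made up):

  // Resolve a hostname and check whether its canonical (CNAME-uncloaked)
  // name lands on a blocklist. Firefox-only; needs the "dns" permission.
  const BLOCKED = ["tracker.example.net"];

  async function isCloakedTracker(hostname) {
    const { canonicalName } = await browser.dns.resolve(hostname, ["canonical_name"]);
    return BLOCKED.some(suffix => (canonicalName || "").endsWith(suffix));
  }

  isCloakedTracker("metrics.shop.example").then(console.log);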


Well, I was already using Firefox + uBlock Origin, so this is just another reason to keep using it. I would really prefer not to get port scanned.


The real issue is not that eBay or some other specific website uses Javascript to port scan network. The real issue is that browsers allow such behavior by default.


A couple Chrome devtools debugging tips:

1. Local Overrides feature allows you to persist and edit source files across page loads (unfortunately only source files currently, so you're out of luck if the JS comes from an XHR or something)

2. F3 on the network panel will let you search for a string across all resources the page loaded. Can be useful for tracking down where stuff like user-agent checks are called (if not obfuscated).

Also calling the code obfuscated is pretty generous. It's amazing how common things like shift ciphers, XOR tricks, etc. are when the browser's REPL cuts through them like butter.
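
For example, a layer you often run into is a string XOR'd with a single-byte key (made-up data below); the console undoes it in one line:

  // Made-up example: "obfuscated" bytes XOR'd with a one-byte key.
  const bytes = [0x2e, 0x2d, 0x21, 0x23, 0x2e, 0x2a, 0x2d, 0x31, 0x36];
  const key = 0x42;
  console.log(String.fromCharCode(...bytes.map(b => b ^ key))); // "localhost"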


Thanks for the tips! Local Overrides is new to me and would have helped a lot. I made heavy use of F3 (also Ctrl+Shift+F) to find my way around those scripts, and to find my place again after each page refresh.



Huh. I've been using eBay a lot this week, and it might have triggered a kernel bug. I had terrible internet performance that got resolved by rebooting my Ubuntu 18.04 laptop.


macOS users, I believe based on my testing that you can block your installed web browsers from localhost port scanning using LittleSnitch. This way you can continue to allow WebRTC and WebSockets to the rest of the Internet (where it's useful), while denying web browsers access to localhost except for specific ports you allow.

However, I encourage you to be careful and only block web browsers to localhost using this method, because lots of macOS applications depend on localhost connections to talk to themselves, so if you block everything from talking to localhost you may break e.g. LittleSnitch, macOS itself, etc. NO WARRANTY, HAVE BACKUPS, standard stuff.

To set this up, for each /Applications/Browser.app, create a LittleSnitch 'Deny Connections' To 'IP Addresses' rule and enter '127.0.0.1, ::1' without quotes into the text field and click OK. Then right-click on the newly-created application rule and select 'Increase Priority', which will bold the rule text 'Deny outgoing connections to 2 IP addresses'. Repeat this for each Browser.app you use.

If you'd like to specifically enable certain localhost ports to be accessible by your browser (such as 80/443), you can create another rule using the above steps, but before saving the rule, change 'Deny' to 'Allow' and click the '\/' dropdown caret button and enter the appropriate port and select TCP. I encountered some UI quirks doing this but once it's created it works as it should.

Here's a screenshot of the results of my testing for comparing against. I'm not really familiar with how LS works so I can't offer much support, but I fresh-installed it and left all the defaults alone and it worked, so more advanced users shouldn't have much trouble. https://i.imgur.com/T0yqrdM.png

Good luck!

(For those wondering if other software can do this, I tested various macOS application firewalls today and most of them either global-allow localhost connections or don't offer outbound filtering at all. So far, the only one that can block web browsers only from connecting to localhost is LittleSnitch, with some quirks that I wrote a note to their support about. At least one let me create the rule and cheerfully said it was active and then it didn't block anything.)


Ah thanks for this. I was looking for a way to accomplish this. I own LS so I'm going to give this a shot.


What can you possibly know from such a scan?

The standard clearly states that you learn pretty much nothing: https://www.w3.org/TR/websockets/#concept-websocket-close-fa...

Sure they're shady and that needs to be blocked, but security implications? Pretty much nil.


> Sure they're shady and that needs to be blocked, but security implications? Pretty much nil.

Fingerprinting. Vulnerability discovery. Messing up programs that don't know how to deal with unexpected HTTP requests.


Timing attacks can be used to determine if the TCP connection was successful, even if no data could be exchanged because the WebSocket failed to negotiate.
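
Roughly like this (a sketch of the idea, not the actual script): a closed port is usually refused almost immediately, while a port with a non-WebSocket listener tends to take noticeably longer to fail.

  // Sketch: measure how long a WebSocket attempt to a localhost port takes
  // before it settles, and infer open vs. closed from the timing.
  function timeProbe(port) {
    return new Promise(resolve => {
      const t0 = performance.now();
      const ws = new WebSocket(`ws://127.0.0.1:${port}`);
      ws.onerror = ws.onopen = () => resolve(performance.now() - t0);
    });
  }

  timeProbe(3389).then(ms => console.log(`port 3389 probe took ${ms.toFixed(1)} ms`));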


The original post about this seemed to indicate it might be being used to track, identify, and/or verify a client is a real system vs. a bot


I remember while working at some company, I started using a local flask server.

For some reason, I remember one company router kept making HTTP requests to port 5000 or 8000 (can't remember which), because they were literally showing up in the terminal, with the HTTP path, at random times.

I'm sure being a hacker must be pretty fun these days.


Great article! Finally I got it, between this one and the original post, 'Why is this website port scanning me'. Can anyone shed some thoughts on why the scan is not performed on Linux machines? Maybe not RDP, but the VNC servers that the scan checks for on Windows machines...


Could, for example, a website like eBay also access the internal intranet in a company? Or my cloud storage, like SharePoint, that is open in the same browser?


Here's how to block websites from port-scanning localhost through the browser: https://www.ctrl.blog/entry/block-localhost-port-scans.html


I thought it was no secret to anyone that all services track digital fingerprints... and there are many ways to do this. To fight them I use an anti-detection browser such as https://gologinapp.com/ or others. Are there any alternatives to this?


Google's internal SSO (which I accidentally stumbled upon) collects other endpoint-specific parameters to compose the digital signature (like browser window size and monitor size).

This feels more effective and less intrusive. Not sure why eBay went this rather weird and creepy way instead.


Looking for fraud signs is my guess. People don't usually use eBay through TeamViewer, so if it's running and the port is open, an otherwise normal transaction gets really suspicious, for example. They're probably feeding the open port info into their model to determine fraud risk for user logins and transactions.

They may not even be using it actively yet because they'd need to gather a lot of example data to detect outliers.


How can they differentiate between an idling TeamViewer server running on (all) my machines vs. an active TeamViewer session?


I don't know if there's a detectable difference just looking at the ports; just having TeamViewer installed slightly increases the risk of any transaction being fraudulent though. I'm just trying to provide an example of how port data could be used for a meaningful purpose. Even just as an additional fingerprinting component it could be useful.


Wasn't this on the front page just a few days ago?


This article actually reverse engineers the JavaScript and has a lot more information about what they are tracking and how they are doing it.


More than one discussion can be had.


I respectfully disagree in this case—the last story was too recent, and this one does not contain any significant new information, so we're likely to end up just rehashing the same points (and have so far, in my estimation).

Edit: But, thefreeman above just informed me there's actual new information in this article (which I originally hadn't read, because the comments here made me assume it was the same as the last story on HN). So, thanks!


I immediately thought about this after reading the Wikipedia page on Peter Thiel and Palantir, where he stated he wanted to use the technology that was used to protect PayPal (which was purchased by eBay).


I thought it would be obvious that eBay is doing this to identify bots (usually on servers) vs. real users (on client-only devices).


That is one of the main purposes, yes. I have no problem with it, personally. Many HN users are very privacy-conscious and consider it not acceptable for any purpose, though; including that one. I think both positions are fine.


> Many HN users are very privacy-conscious and consider it not acceptable for any purpose, though; including that one

It's an opinion, I suppose, but it doesn't seem to be based on reasonable expectations. Anyone can portscan anyone else, lots of security researchers have published their findings from portscanning large IP address ranges etc. ... If I don't want other people to see or access open ports on my system, I can firewall them.


External portscans are quite different from using your own machine to portscan itself/the local network.


I'm just a layman, but I don't see how ThreatMetrix could possibly be seen as within the spirit of GDPR. I hope regulators throw the book here.


Humanity constructed something wonderful and new. (The internet.) And the corporate web has all but destroyed it.


Slightly OT, but I'd suggest that the thing that has "ruined" the internet (for me) more than any other single thing is trolls, and they predate the corporate web by a long way.


I would take 2x trolls over the abusive advertising and tracking network we have now.



