
Is it really difficult though? You need a phone number, but that's about it, and in the US that doesn't even require presenting an ID, right?

For people thinking of getting into moon gazing try binoculars first!

Lying down on your back, plopping a nice pair on your eyes and just looking at the moon is a fantastic experience. Aside from the much better UX, binoculars also give you depth perception, which makes the visuals all the more engaging.

If you have a really nice clear sky in your area you can easily do that with stars and some planets as well.


I was a kid living in Botswana when Halley's Comet passed Earth.

We watched it every night through binoculars.

Marvelous clean air - humidity around 0%, just some dust. No light pollution (there wasn't an electricity grid within some 100 km, just a handful of small diesel generators).

The binoculars were more than enough to see the comet, its tail. And even get a feeling of the tail arcing in three dimensions.


If you've got good bins, try looking at Jupiter as well. Typically on a clear night you can see several of the moons.

I was able to take pictures of them with a DSLR with a decent zoom lens! I used Stellarium to check the positions, waited for a very clear night, and was able to get a very nice photograph of that and of the Pleiades.

This blew my mind a few years ago when I got some decent binoculars. Depending on their positions you can see all four of the Galilean moons - even from a vantage point in a major city.

I saw them entirely accidentally. I was looking at the moon through bins and checked out The Other Bright Thing nearby, and was shocked to see what looked like little dots next to the big one. As Galileo himself would have done, I immediately went on the internet and consulted a sky chart, which confirmed that I was seeing the moons of Jupiter.

I still occasionally drag my friends out to look at the moons on a clear night. It's my favorite bit of practical astronomy to share.


Depth perception? I would think objects as far away as the moon shouldn’t produce a meaningful difference between the left and right eye. But that does tell your brain they are far away, so perhaps that’s what you mean.

It's not really depth perception, but there is a significant difference in how objects are perceived when looking with both eyes. It's also applicable to binocular splitters used with a single mirror/lens telescope.

Your brain can also do 1+1 and end up with 2.5 or even 3 :) I have really bad eyesight in both eyes; either one alone can't see well enough, but both together see much better than I'd expect at the same distance.

This is great advice! It's really amazing how much more you can see with a regular decent pair of binoculars. I treasure the memory of seeing for the first time some of the star clusters that I could only vaguely make out with the naked eye. Now I basically bring them on every night walk :).

I think the problem is that I would like an image-stabilized version of that. Even small twitches of your fingers quickly amplify into shaky images.

I think instead of an eyepiece (or in addition to one) most consumer telescopes should include a USB image sensor that can screw in where the eyepiece goes.


A lot of binoculars have a mount for a tripod, which I can definitely recommend trying out if you happen to have both, or at least consider if you are planning to pick up a new pair.

There are also image stabilized binoculars. I have an older Canon 10x30 that I absolutely love (enough to tolerate the plasticizer now breaking down on the rubberized exterior)

I have the same, about 10 yrs old and agree the image stabilization is awesome.

You can buy generic replacement electric eyepieces that fit

Any suggestions for a make/model of binoculars good for that? My budget is $1000/€1000 (I could stretch it a bit more if it's worth the extra money).

I have the Canon 10x30 IS from a long time ago and they are the best binoculars I've ever tried. I'm pretty shaky, so the image stabilization is game changing. I'm sure the more powerful pairs are incredible, and in that case image stabilization is a must. https://www.usa.canon.com/shop/lenses/binoculars

Are binoculars superior to a monocular for that use case? I thought binoculars were good for depth perception, which I'd assume doesn't matter here?

Any specific binoculars you can recommend?

The most important thing is that they capture enough light, for which the lenses must have a large diameter. 50mm is typical. Magnification around 10x is good. This is referred to as 10x50. I have a Celestron Skymaster 15x70 myself, which is specifically for night sky observation. The 70mm is very good, but the weight and the magnification make it difficult to hold still without a tripod, though you can still use it without one, e.g. lying on your back.

Try a good monopod. They're significantly more portable and in most cases give just enough stability for good views while allowing less restricted movement than most tripods.

I have a set of Vortex Diamondback HD 10x50s that are pretty affordable and do a good job with the moon (and hunting near dawn and dusk). The optics are definitely better than I expected for the price.

n of 8 is essentially meaningless tho

It's not n of 8. The article doesn't mention cohort size (year 8 is what 7th grade is called in the UK)

Found another source that said n was 26


> But Windows Phone was actually good

I think people just have rose-tinted glasses on. Sure, the hardware from Nokia was great, but the software was very poor even by the standards of that time.


Web scraping is becoming increasingly mainstream, and it has never been as complex as it is today. That said, modern Python tooling is quite incredible and very approachable given the right direction.

This article is my attempt to digest everything into a single starting point that should help anyone bootstrap themselves into the web scraping scene.


That's kinda what every major captcha distributor does already!

Even before a captcha is served, your TLS is fingerprinted first, then your IP, then your HTTP2, then your request, then your JavaScript environment (including font and image rendering capabilities) and the browser itself. These are used to calculate a trust score, which determines whether a captcha will be served at all. Only then does it make sense to analyze the captcha's input, but by that point you've caught 90% of bots either way.

The amount your browser can reveal about you to any server without your awareness is insane, to the point where every single one of us probably has a more unique digital fingerprint than our very own physical fingerprint!
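For a rough sense of what that layered scoring might look like server-side, here's a toy sketch - the signal names, weights and threshold are all made up for illustration; real vendors keep theirs secret:

    # Toy illustration: combine independent fingerprint checks into a trust score.
    # Signal names, weights and the threshold are invented for this example.
    def trust_score(signals: dict) -> float:
        weights = {
            "tls_fingerprint_matches_real_browser": 0.30,  # JA3-style TLS hash looks like a browser
            "ip_reputation_clean": 0.25,                   # not a datacenter range or known proxy
            "http2_settings_plausible": 0.15,              # frame/settings order fits the claimed client
            "js_environment_consistent": 0.20,             # canvas/fonts agree with the user agent
            "header_order_consistent": 0.10,               # header order matches the claimed browser
        }
        return sum(w for name, w in weights.items() if signals.get(name))

    def should_serve_captcha(signals: dict, threshold: float = 0.7) -> bool:
        # Low score -> challenge (or block outright); high score -> let the request through.
        return trust_score(signals) < threshold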


This is how ClownFlare and its ilk make life hell on the internet when you use a "weird" browser on a "weird" OS.


My experience is that IP reputation does a lot more for Cloudflare than browsers ever did. I tried to see if they'd block me for using Ladybird and Servo, two unfinished browsers (Ladybird used to even have its own TLS stack), but I passed just fine. Public WiFi in restaurants and shared train WiFi often gets me jumping through hoops even in normal Firefox, though.

I can't imagine what the internet must be like if you're still on CG-NAT, sharing an IP address with bots and spammers and people using those "free VPN" extensions donating their bandwidth to botnets.


Re: your last paragraph, https://coveryourtracks.eff.org/

EFF have been running this for years. Gives an estimate about how many unique traits your browser has. Even things like screen resolution are measured.


Would it be possible to serve a fake fingerprint that appears legitimate? Or even better, mimic the fingerprint of real users who've visited a site you own, for example?


yep, but it can get tricky.

some projects worth checking out: https://github.com/refraction-networking/utls https://github.com/berstend/puppeteer-extra
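those are Go and Node projects; for a rough Python flavor of the same idea, here's a sketch using Playwright's add_init_script to patch one obvious giveaway before any page script runs - real stealth setups patch dozens of properties, this is illustration only:

    # Minimal sketch: hide the navigator.webdriver flag that automation normally exposes.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.add_init_script(
            "Object.defineProperty(navigator, 'webdriver', {get: () => undefined})"
        )
        page.goto("https://example.com")
        print(page.evaluate("navigator.webdriver"))  # undefined instead of true
        browser.close()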


Unrelated, but who runs this account?


Yes, that's what web scraping services do (full disclosure: I work at scrapfly.io). Collecting fingerprints and patching the web browser against this fingerprinting is quite a bit of work, so most people outsource this to web scraping APIs.



In that case why do I ever receive a captcha?


It adds another layer of analysis. For example:

If the user solves the CAPTCHA in 0.0001 seconds, they're definitely a bot.

If the user keeps solving every CAPTCHA in exactly 2.0000 seconds, each time makes it increasingly likely that they're a bot.

If the user sets the CAPTCHA entry's input.value property directly instead of firing individual key press events with keycodes, they're probably either a bot, copy-pasting the solution, or using some kind of non-standard keyboard (maybe accessibility software?).

Basically, even if the CAPTCHA service already has a decent idea of whether the user is a bot, forcing them to solve a CAPTCHA gives the service more data to work with and increases the barrier of entry for bot makers.
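A toy version of the timing heuristics above - the thresholds are invented, and real systems combine far more signals than solve times:

    import statistics

    # solve_times: a user's recent CAPTCHA solve durations, in seconds.
    def looks_automated(solve_times):
        if any(t < 0.5 for t in solve_times):  # inhumanly fast solve
            return True
        if len(solve_times) >= 3 and statistics.pstdev(solve_times) < 0.05:
            return True  # suspiciously consistent, e.g. exactly 2.0s every time
        return False

    print(looks_automated([0.0001]))         # True
    print(looks_automated([2.0, 2.0, 2.0]))  # True
    print(looks_automated([4.2, 7.9, 3.1]))  # False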


I've found several websites switched to 'press here until the timer runs out'. Probably they are doing the checks while the user is holding the mouse button down; the long press by itself would be trivial to bypass with an automated mouse clicker.


You can spell it as 'onomatopia' and people will understand you just fine. You can't do that in Chinese.


Most of these are really just 10 lines of Python code. The value of generating an entire HTML GUI is great, but the overhead comes in when you need to modify it, fix something, or god forbid add a dependency library, and you end up spending more time than actually building the tool from scratch. It's getting close though.
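For a sense of scale, here's roughly what one of those single-purpose tools looks like in plain stdlib Python - an HTML-entity quoter as an illustrative stand-in, not the article's actual code:

    # Quote or unquote HTML entities from stdin using only the standard library.
    import html
    import sys

    def main():
        mode = sys.argv[1] if len(sys.argv) > 1 else "escape"
        text = sys.stdin.read()
        sys.stdout.write(html.escape(text) if mode == "escape" else html.unescape(text))

    if __name__ == "__main__":
        main()

e.g. echo '<b>&</b>' | python quote.py escape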


How many google results amount to this level of complexity though? How many phone apps?

Anyone taking bets on how many years till the operating system doesn't install any software anymore, and just dynamically generates whatever software you need on the fly? "Give me a calculator app" is doable today; "give me an internet browser" isn't, but it should be a matter of time.


> "Give me a calculator app" is doable today "give me an internet browser" isn't but it should be a matter of time.

I just don't see that as being a sound solution. If the user is asking to solve an already-solved task, then using an existing tool will always be the more efficient path.

What I do see as a desired option is AI taking an existing tool and personalizing it to your specific use case. In this example it takes the web browser as a GUI lib and wraps it around generic solutions like a library that quotes HTML entities, which is kind of what this is, just very poorly done so far.

I'd imagine that future programs will be much more component-driven, with AI connecting components to produce personalized solutions, as this is really the only viable option until AI can reason in at least some capacity to fix its own mistakes.


> "Give me a calculator app" is doable today; "give me an internet browser" isn't, but it should be a matter of time.

Based on what? Magical thinking?


All of the things you mentioned can be done in SVG and I've done all of these things before.

For multiple lines just duplicate your shapes and group or even join them. For varying thickness you can duplicate your shape and layer the copies. Rounded corners are supported on every vertex, so I'm not sure where you are getting that.
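Here's a quick illustration of the duplicate-and-layer trick, written as a Python snippet that just writes out the SVG (the path and stroke values are arbitrary): the same path drawn twice, a wide dark stroke underneath and a narrower light one on top, reads as a double outline.

    # Arbitrary example path; the point is the two stacked strokes on the same geometry.
    path = "M10 80 Q 100 10 190 80"
    svg = f"""<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
      <path d="{path}" fill="none" stroke="black" stroke-width="9"/>
      <path d="{path}" fill="none" stroke="white" stroke-width="3"/>
    </svg>"""
    with open("layered.svg", "w") as f:
        f.write(svg)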


I'm not the OP you're replying to but I think you missed some of their points.

1) multiple outlines: with your strategy, you can no longer edit the shape as a single shape afterwards.

2) varying thickness: it sounds like you're thinking of thickness varying abruptly from one line segment to another, but remaining constant throughout one segment. I would like a feature where the stroke width varies continuously from one vertex to another, that is, gets smoothly thicker or thinner.

3) you can only have rounded corners on every vertex or no vertex of a shape. You can't choose individually for each vertex.

I think these are all features that SVG would benefit from adding.


Cloudflare has been the bane of my web existence on a Thai IP and a Linux Firefox fingerprint. I wonder how much traffic is lost because of Cloudflare, and of course none of that is reported to the web admins, so everyone continues in their jolly ignorance.

I wrote my own RSS bridge that scrapes websites using the Scrapfly web scraping API, which bypasses all of that, because it's so annoying that I can't even scrape some company's /blog that they are literally buying ads for but that somehow has an anti-bot enabled which blocks all RSS readers.

The modern web is so anti-social that the web 2.0 guys should be rolling in their "everything will be connected with APIs" graves by now.
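For anyone curious, the bones of such a bridge are small. The sketch below uses plain requests plus BeautifulSoup and feedgen rather than the Scrapfly API, and the blog URL and CSS selector are placeholders you'd swap for the real site:

    # Sketch of a scraping RSS bridge; URL and selector are hypothetical.
    # A real version would route the fetch through an anti-bot-bypassing service.
    import requests
    from bs4 import BeautifulSoup
    from feedgen.feed import FeedGenerator

    BLOG_URL = "https://example.com/blog"  # placeholder

    html = requests.get(BLOG_URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    fg = FeedGenerator()
    fg.title("Example blog (scraped)")
    fg.link(href=BLOG_URL, rel="alternate")
    fg.description("Feed generated by scraping the blog index")

    for a in soup.select("article h2 a"):  # selector depends on the site's markup
        entry = fg.add_entry()
        entry.title(a.get_text(strip=True))
        entry.link(href=a.get("href"))

    with open("feed.xml", "wb") as f:
        f.write(fg.rss_str(pretty=True))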


The late '90s-'00s solution was to blackhole address blocks associated with entire countries or continents. It was easily worth it for many US sites that weren't super-huge to lose the 0.1% of legitimate requests they'd get from, say, China or Thailand or Russia, to cut the speed their logs scrolled at by 99%.

The state of the art isn't much better today, it seems. Similar outcome with more steps.

